<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Exploring the Fermi distribution Step3: In this equation Step4: Write a function plot_fermidist(mu, kT) that plots the Fermi distribution $F(\epsilon)$ as a function of $\epsilon$ as a line plot for the parameters mu and kT. Step5: Use interact with plot_fermidist to explore the distribution
<ASSISTANT_TASK:> Python Code: %matplotlib inline import matplotlib.pyplot as plt import numpy as np from IPython.display import Image from ipywidgets import interact, interactive, fixed Image('fermidist.png') def fermidist(energy, mu, kT): """Compute the Fermi distribution at energy, mu and kT.""" F = 1 / (np.exp((energy - mu) / kT) + 1) return F assert np.allclose(fermidist(0.5, 1.0, 10.0), 0.51249739648421033) assert np.allclose(fermidist(np.linspace(0.0,1.0,10), 1.0, 10.0), np.array([ 0.52497919, 0.5222076 , 0.51943465, 0.5166605 , 0.51388532, 0.51110928, 0.50833256, 0.50555533, 0.50277775, 0.5 ])) def plot_fermidist(mu, kT): energy = np.linspace(0, 10, 100) plt.figure(figsize=(15,5)) plt.plot(energy, fermidist(energy, mu, kT)) plt.grid(True) plt.xlabel('Particle Energy') plt.ylabel('Fermi-Dirac Distribution') plt.xticks([0, 2, 4, 6, 8, 10], ['0', '2$\epsilon$', '4$\epsilon$', '6$\epsilon$', '8$\epsilon$', '10$\epsilon$']) plt.ylim(0,1) plot_fermidist(4.0, 1.0) assert True # leave this for grading the plot_fermidist function interact(plot_fermidist, mu = (0.0, 5.0, 0.5), kT = (0.1, 10.0, 0.1)) <END_TASK>
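The Fermi–Dirac function above has a useful sanity check beyond the task's asserts: it equals 1/2 exactly at the chemical potential and is symmetric about it, F(μ+δ) + F(μ−δ) = 1. A minimal standalone sketch of that check (self-contained; it redefines `fermidist` with the positive sign, which is the form consistent with the assert values in the cell above):

```python
import numpy as np

def fermidist(energy, mu, kT):
    """Fermi-Dirac distribution F(E) = 1 / (exp((E - mu)/kT) + 1)."""
    return 1.0 / (np.exp((energy - mu) / kT) + 1.0)

mu, kT = 1.0, 10.0
d = np.linspace(0.0, 5.0, 11)

# Particle-hole symmetry: occupations at mu+d and mu-d always sum to 1.
total = fermidist(mu + d, mu, kT) + fermidist(mu - d, mu, kT)
print(np.allclose(total, 1.0))   # True
print(fermidist(mu, mu, kT))     # 0.5 exactly at the chemical potential
```

If either check fails after editing the function, the sign of the exponent or the `+ 1` in the denominator is the usual culprit.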
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: This will give us just the containers that we can store data in. Step2: Since we did not define a method to show the value stored, we cannot see the value. However, the values we passed to each variable are stored. Step3: After modifying the ListNode class and creating nodes for testing, we need to use the next field to link each node to the next one, establishing the connections.
<ASSISTANT_TASK:> Python Code: class ListNode: def __init__(self, data): self.data = data a = ListNode(11) b = ListNode(52) c = ListNode(18) print(a) # Prints the object repr, not the stored value class ListNode: def __init__(self, data): self.data = data self.next = None # Initial creation of nodes a = ListNode(11) b = ListNode(52) c = ListNode(18) a.next = b b.next = c print(a.data) # Prints the data of the first node: 11 print(a.next.data) # Prints b's data: 52 print(a.next.next.data) # Prints c's data: 18 <END_TASK>
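The `a.next.next.data` chain above only works when you know the list length in advance. A short follow-on sketch (not part of the original task) showing the standard way to walk a linked list of any length with a `while` loop:

```python
class ListNode:
    def __init__(self, data):
        self.data = data
        self.next = None

# Build the same three-node chain: a -> b -> c
a, b, c = ListNode(11), ListNode(52), ListNode(18)
a.next = b
b.next = c

# Traverse by following next pointers until we fall off the end (None).
values = []
node = a
while node is not None:
    values.append(node.data)
    node = node.next
print(values)  # [11, 52, 18]
```

The same loop shape underlies search, length, and printing operations on any singly linked list.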
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: 1. A grids Step2: Figure 1 shows how land grid points and boundaries can be interpreted, depending on whether you assume a grid node is at the center or corner of a cell. When they are at the corner of a cell, boundaries are interpreted as the edges between two nodes. This is especially visible in the upper center of the figures, where the white nodes with surrounding ocean can be assumed to only have a line of land between them (figure 1A), or as entire land cells (figure 1B). Step3: 1.2 How do particles get stuck? Step4: Particles moving toward the boundary will keep slowing down as long as the cross-boundary component of one of the 4 nearest velocity vectors is directed toward the boundary. The distance traveled each dt will decrease to unrealistic values in the absence of local forces directing the flow along the boundary. If we define a particle to be stuck when it moves less than approximately 1 meter every hour, we can see how particles get stuck on the model boundary. Step5: In figure 3A you can see how particles released in a 3x3 grid keep moving toward the boundary between two land nodes. The ratio of the $u$ and $v$ components stays at a similar value due to both being linearly interpolated to zero at the boundary. Note that the interpretation of nodes at the center of grid cells (figure 3B) is clearly incompatible with Parcels interpolation. Step6: 2. C grids Step7: In figure 5 you can see how a boundary can be traced along the cell edges, where the velocity is zero, because the normal velocities are defined at the edges in a C-grid. 
This ensures that the most important boundary condition in many models is satisfied Step8: There are different options for the along-boundary velocity in NEMO Step9: 2.2 Particle trajectories near the boundary in a C grid Step10: As we can see in figure 6, the particles that start in the ocean move along the boundary, since the cross-boundary component goes to zero, but the along-boundary component is equal to the value away from the coast. Step11: 3. B grids Step12: 4. Diffusion
<ASSISTANT_TASK:> Python Code: from parcels import FieldSet, ParticleSet, JITParticle, AdvectionRK4, Variable, Field from glob import glob import numpy as np import xarray as xr from datetime import timedelta as delta import matplotlib.gridspec as gridspec import matplotlib.pyplot as plt from matplotlib.colors import ListedColormap from matplotlib.lines import Line2D import cmocean from copy import copy from scipy import interpolate import math from IPython.display import Image # --------- Open dataset --------- SMOCfile = 'SMOC_20190704_R20190705.nc' SMOCfiles = ['SMOC_20190704_R20190705.nc','SMOC_20190705_R20190706.nc','SMOC_20190706_R20190707.nc','SMOC_20190707_R20190708.nc', 'SMOC_20190708_R20190709.nc'] flowdata_SMOC = xr.open_dataset(SMOCfile) # --------- Define meshgrid coordinates to plot velocity field with matplotlib pcolormesh --------- dlon = flowdata_SMOC['longitude'][1]-flowdata_SMOC['longitude'][0] # 1/12 degree dlat = flowdata_SMOC['latitude'][1]-flowdata_SMOC['latitude'][0] # 1/12 degree # Outside corner coordinates - coordinates + 0.5 dx x_outcorners,y_outcorners = np.meshgrid(np.append((flowdata_SMOC['longitude']-0.5*dlon),(flowdata_SMOC['longitude'][-1]+0.5*dlon)), np.append((flowdata_SMOC['latitude']-0.5*dlat),(flowdata_SMOC['latitude'][-1]+0.5*dlat))) # Inside corner coordinates - coordinates + 0.5 dx - needed to plot cells inbetween velocity field nodes x_incorners,y_incorners = np.meshgrid((flowdata_SMOC['longitude']+0.5*dlon)[:-1], (flowdata_SMOC['latitude']+0.5*dlat)[:-1]) # Center coordinates x_centers, y_centers = np.meshgrid(flowdata_SMOC['longitude'], flowdata_SMOC['latitude']) # --------- Velocity fields --------- cells = np.zeros((len(flowdata_SMOC['latitude']),len(flowdata_SMOC['longitude']))) # Empty cells between coordinate nodes - essentially on inside corners umask = np.ma.masked_invalid(flowdata_SMOC['uo'][0,0]) # Masking the flowfield where U = NaN u_zeros = np.nan_to_num(flowdata_SMOC['uo'][0,0]) # Velocity field with NaN -> 
zero to be able to use in interp2d fu = interpolate.interp2d(flowdata_SMOC['longitude'],flowdata_SMOC['latitude'],u_zeros) # Interpolator U u_corners = fu(x_incorners[0,:],y_incorners[:,0]) # Velocity field interpolated on the inside corners udmask = np.ma.masked_values(u_corners,0) # Masking the interpolated flowfield where U = 0 # To interpolate single particle velocity later v_zeros = np.nan_to_num(flowdata_SMOC['vo'][0,0]) # Velocity field with NaN -> zero to be able to use in interp2d fv = interpolate.interp2d(flowdata_SMOC['longitude'],flowdata_SMOC['latitude'],v_zeros) # Interpolator V # --------- Plotting domain --------- lonminx = 2100 lonmaxx = 2500 latminx = 1300 latmaxx = 1750 SMOC_U = flowdata_SMOC['uo'][0,0,latminx:latmaxx,lonminx:lonmaxx].fillna(0) # Selected velocity domain to plot SMOC_V = flowdata_SMOC['vo'][0,0,latminx:latmaxx,lonminx:lonmaxx].fillna(0) # Selected velocity domain to plot ### Create seethrough colormap to show different grid interpretations cmap = plt.get_cmap('Blues') my_cmap = cmap(np.arange(cmap.N)) my_cmap[:,-1] = 0 # set alpha to zero my_cmap = ListedColormap(my_cmap) fig,(ax1,ax2,ax3) = plt.subplots(3,1,figsize=(10,15),constrained_layout=True) fig.suptitle('Figure 1', fontsize=16) ax1.set_xlim(4.6,5.4) ax1.set_ylim(52.7,53.3) ax1.set_xlabel('Longitude') ax1.set_ylabel('Latitude') ax1.set_title('A) Node at corner cell - agreement with Parcels interpolation', fontsize=14, fontweight = 'bold') ax1.pcolormesh(x_centers[latminx:latmaxx+1,lonminx:lonmaxx+1],y_centers[latminx:latmaxx+1,lonminx:lonmaxx+1],udmask.mask[latminx:latmaxx,lonminx:lonmaxx],cmap='Reds_r',edgecolors='orange') ax1.scatter(x_centers[latminx:latmaxx,lonminx:lonmaxx],y_centers[latminx:latmaxx,lonminx:lonmaxx],s=100,c=SMOC_U,cmap='cmo.balance_r',vmin=-0.05,vmax=0.05,edgecolors='k') ax1.quiver(x_centers[latminx:latmaxx,lonminx:lonmaxx],y_centers[latminx:latmaxx,lonminx:lonmaxx],SMOC_U,SMOC_V,angles='xy', scale_units='xy', scale=8, color='w',edgecolor='k') 
ax2.set_xlim(4.6,5.4) ax2.set_ylim(52.7,53.3) ax2.set_xlabel('Longitude') ax2.set_ylabel('Latitude') ax2.set_title('B) Node at center cell - not according to Parcels interpolation', fontsize=14, fontweight = 'bold') ax2.pcolormesh(x_outcorners[latminx:latmaxx+1,lonminx:lonmaxx+1],y_outcorners[latminx:latmaxx+1,lonminx:lonmaxx+1],umask.mask[latminx:latmaxx,lonminx:lonmaxx],cmap='Reds_r',edgecolors='k',linewidth=1) ax2.scatter(x_centers[latminx:latmaxx,lonminx:lonmaxx],y_centers[latminx:latmaxx,lonminx:lonmaxx],s=100,c=SMOC_U,cmap='cmo.balance_r',vmin=-0.05,vmax=0.05,edgecolors='k') ax2.quiver(x_centers[latminx:latmaxx,lonminx:lonmaxx],y_centers[latminx:latmaxx,lonminx:lonmaxx],SMOC_U,SMOC_V,angles='xy', scale_units='xy', scale=8, color='w',edgecolor='k') ax3.set_xlim(4.6,5.4) ax3.set_ylim(52.7,53.3) ax3.set_xlabel('Longitude') ax3.set_ylabel('Latitude') ax3.set_title('C) Different boundaries - Parcels interpolates according to orange grid', fontsize=14, fontweight = 'bold') ax3.pcolormesh(x_centers[latminx:latmaxx+1,lonminx:lonmaxx+1],y_centers[latminx:latmaxx+1,lonminx:lonmaxx+1],udmask.mask[latminx:latmaxx,lonminx:lonmaxx],cmap='Reds_r',edgecolors='orange') ax3.pcolormesh(x_outcorners[latminx:latmaxx+1,lonminx:lonmaxx+1],y_outcorners[latminx:latmaxx+1,lonminx:lonmaxx+1],cells[latminx:latmaxx,lonminx:lonmaxx],cmap=my_cmap,edgecolors='black') ax3.scatter(x_centers[latminx:latmaxx,lonminx:lonmaxx],y_centers[latminx:latmaxx,lonminx:lonmaxx],s=100,c=SMOC_U,cmap='cmo.balance_r',vmin=-0.05,vmax=0.05,edgecolors='k') ax3.quiver(x_centers[latminx:latmaxx,lonminx:lonmaxx],y_centers[latminx:latmaxx,lonminx:lonmaxx],SMOC_U,SMOC_V,angles='xy', scale_units='xy', scale=8, color='w',edgecolor='k') fig = plt.figure(figsize=(8,6)) fig.suptitle('Figure 2', fontsize=16) ax=plt.axes() ax.set_xlim(5.0,5.18) ax.set_ylim(52.9,53.02) ax.set_xlabel('Longitude') ax.set_ylabel('Latitude') ax.set_title('Bilinear interpolation', fontsize=14, fontweight = 'bold') 
ax.pcolormesh(x_centers[latminx:latmaxx+1,lonminx:lonmaxx+1],y_centers[latminx:latmaxx+1,lonminx:lonmaxx+1],udmask.mask[latminx:latmaxx,lonminx:lonmaxx],cmap='Reds_r',edgecolors='orange') ax.scatter(x_centers[latminx:latmaxx,lonminx:lonmaxx],y_centers[latminx:latmaxx,lonminx:lonmaxx],s=100,c=SMOC_U,cmap='cmo.balance_r',vmin=-0.05,vmax=0.05,edgecolors='k') ax.quiver(x_centers[latminx:latmaxx,lonminx:lonmaxx],y_centers[latminx:latmaxx,lonminx:lonmaxx],SMOC_U,SMOC_V,angles='xy', scale_units='xy', scale=3, color='w',width=0.01) plon = 5.12 plat = 52.93 pU = fu(plon,plat) pV = fv(plon,plat) ax.scatter(plon,plat,s=50, color='cyan',marker='h') ax.quiver(plon,plat,pU,pV,angles='xy', scale_units='xy', scale=3, color='w') color_land = copy(plt.get_cmap('Reds'))(0) color_ocean = copy(plt.get_cmap('Reds'))(256) custom_lines = [Line2D([0], [0], c='cyan', marker='h', markersize=10, lw=0), Line2D([0], [0], c = color_ocean, marker='o', markersize=10, markeredgecolor='k', lw=0), Line2D([0], [0], c = color_land, marker='o', markersize=10, markeredgecolor='k', lw=0)] ax.legend(custom_lines, ['Particle', 'Ocean nodes', 'Land nodes'],bbox_to_anchor=(1.05, 1), loc='upper left', borderaxespad=0.,framealpha=1) SMOCfile = 'SMOC_201907*.nc' filenames = {'U': SMOCfile, 'V': SMOCfile} variables = {'U': 'uo', 'V': 'vo'} dimensions = {'U': {'lon': 'longitude', 'lat': 'latitude', 'depth': 'depth', 'time': 'time'}, 'V': {'lon': 'longitude', 'lat': 'latitude', 'depth': 'depth', 'time': 'time'}} fieldset = FieldSet.from_netcdf(filenames, variables, dimensions) npart = 3 # number of particles to be released lon = np.linspace(7, 7.2, npart, dtype=np.float32) lat = np.linspace(53.45, 53.6, npart, dtype=np.float32) lons, lats = np.meshgrid(lon,lat) time = np.zeros(lons.size) pset = ParticleSet(fieldset=fieldset, pclass=JITParticle, lon=lons, lat=lats, time=time) kernels = pset.Kernel(AdvectionRK4) output_file = pset.ParticleFile(name="SMOC.nc", outputdt=delta(hours=1)) pset.execute(kernels, 
runtime=delta(hours=119), dt=delta(minutes=12), output_file=output_file) output_file.close() # export the trajectory data to a netcdf file ds_SMOC = xr.open_dataset('SMOC.nc') # Calculating when a particle is stuck pdx = np.diff(ds_SMOC['lon'],axis=1,prepend=0) pdy = np.diff(ds_SMOC['lat'],axis=1,prepend=0) distance = np.sqrt(np.square(pdx)+np.square(pdy)) # approximation of the distance travelled stuck = distance<1e-5 # [degrees] 1e-5 degrees ~~ 1 m in dt = 1 hour. fig = plt.figure(figsize=(18,7)) fig.suptitle('Figure 3. SMOC A-grid', fontsize=16) gs = gridspec.GridSpec(ncols=2, nrows=1, figure=fig) ax1 = fig.add_subplot(gs[0, 0]) ax1.set_ylabel('Latitude [degrees]') ax1.set_xlabel('Longitude [degrees]') ax1.set_title('A) Parcels bilinear interpolation', fontsize=14, fontweight = 'bold') ax1.set_xlim(6.9, 7.5) ax1.set_ylim(53.4, 53.8) ax1.pcolormesh(x_centers[latminx:latmaxx+1,lonminx:lonmaxx+1],y_centers[latminx:latmaxx+1,lonminx:lonmaxx+1],udmask.mask[latminx:latmaxx,lonminx:lonmaxx],cmap='Reds_r',edgecolors='orange') ax1.scatter(x_centers[latminx:latmaxx,lonminx:lonmaxx],y_centers[latminx:latmaxx,lonminx:lonmaxx],s=100,c=SMOC_U,cmap='cmo.balance_r',vmin=-0.05,vmax=0.05,edgecolors='k') ax1.plot(ds_SMOC['lon'].T, ds_SMOC['lat'].T,linewidth=3, zorder=1) ax1.scatter(ds_SMOC['lon'], ds_SMOC['lat'], c= stuck, cmap='viridis_r', zorder=2) ax2 = fig.add_subplot(gs[0, 1]) ax2.set_ylabel('Latitude [degrees]') ax2.set_xlabel('Longitude [degrees]') ax2.set_title('B) Incompatible A grid interpretation', fontsize=14, fontweight = 'bold') ax2.set_xlim(6.9, 7.5) ax2.set_ylim(53.4, 53.8) ax2.pcolormesh(x_outcorners[latminx:latmaxx+1,lonminx:lonmaxx+1],y_outcorners[latminx:latmaxx+1,lonminx:lonmaxx+1],umask.mask[latminx:latmaxx,lonminx:lonmaxx],cmap='Reds_r',edgecolors='k',linewidth=1) ax2.scatter(x_centers[latminx:latmaxx,lonminx:lonmaxx],y_centers[latminx:latmaxx,lonminx:lonmaxx],s=100,c=SMOC_U,cmap='cmo.balance_r',vmin=-0.05,vmax=0.05,edgecolors='k') ax2.plot(ds_SMOC['lon'].T, 
ds_SMOC['lat'].T,linewidth=3, zorder=1) ax2.scatter(ds_SMOC['lon'], ds_SMOC['lat'], c=stuck, cmap='viridis_r', zorder=2) color_stuck = copy(plt.get_cmap('viridis'))(0) color_moving = copy(plt.get_cmap('viridis'))(256) custom_lines = [Line2D([0], [0], c = color_stuck, marker='o', markersize=10), Line2D([0], [0], c = color_moving, marker='o', markersize=10)] ax2.legend(custom_lines, ['Stuck', 'Moving'],bbox_to_anchor=(1.05, 1), loc='upper left', borderaxespad=0.,framealpha=1) plt.show() fig = plt.figure(figsize=(8,6)) fig.suptitle('Figure 4. Tolerance for getting stuck = red line', fontsize=16) ax=plt.axes() ax.plot(distance.T, color='b') ax.hlines(1e-5,0,130, color='r') ax.set_ylabel('Approximate distance travelled [degrees]') ax.set_xlabel('Timestep') ax.set_ylim(-0.00002,0.0008) ax.set_xlim(-10, 130) custom_lines = [Line2D([0], [0], c = 'b'), Line2D([0], [0], c = 'r')] plt.show() cufields = xr.open_dataset('NemoNorthSeaORCA025-N006_data/ORCA025-N06_20000104d05U.nc') cvfields = xr.open_dataset('NemoNorthSeaORCA025-N006_data/ORCA025-N06_20000104d05V.nc') xu_corners, yu_corners = np.meshgrid(np.arange(cufields['x'].values[0], cufields['x'].values[-1]+1, 1), np.arange(cufields['y'].values[0]-0.5, cufields['y'].values[-1]+0.5, 1)) xv_corners, yv_corners = np.meshgrid(np.arange(cvfields['x'].values[0]-0.5, cvfields['x'].values[-1]+0.5, 1), np.arange(cvfields['y'].values[0], cvfields['y'].values[-1]+1, 1)) cx_centers, cy_centers = np.meshgrid(np.arange(cvfields['x'].values[0]-0.5, cvfields['x'].values[-1]+1.5, 1), np.arange(cvfields['y'].values[0]-0.5, cvfields['y'].values[-1]+1.5, 1)) fx_corners, fy_corners = np.meshgrid(np.arange(cufields['x'].values[0]-1, cufields['x'].values[-1]+1, 1), np.arange(cufields['y'].values[0]-1, cufields['y'].values[-1]+1, 1)) c_cells = np.zeros((len(cufields['y']),len(cufields['x']))) cu_zeros = np.nan_to_num(cufields['uos'][0]) # Velocity field with NaN -> zero to be able to use in interp2d f = 
interpolate.interp2d(xu_corners[0,:],yu_corners[:,0],cu_zeros) # Interpolator cu_centers = f(cx_centers[0,:-1],cy_centers[:-1,0]) # Velocity field interpolated on the T-points - center cudmask = np.ma.masked_values(cu_centers,0) # Masking the interpolated flowfield where U = 0 fig = plt.figure(figsize=(8,6)) fig.suptitle('Figure 5 - C grid structure', fontsize=16) ax1=plt.axes() ax1.set_xlim(105, 112) ax1.set_ylim(50, 54) ax1.pcolormesh(fx_corners,fy_corners,cudmask,cmap='Blues',edgecolors='k',linewidth=1) ax1.scatter(xu_corners,yu_corners, s=80,c=cufields['uos'][0],cmap='seismic',vmin=-0.1,vmax=0.1,edgecolor='k', label = 'U') ax1.scatter(xv_corners,yv_corners, s=80,c=cvfields['vos'][0],cmap='PRGn',vmin=-0.1,vmax=0.1,edgecolor='k', label = 'V') ax1.scatter(cx_centers,cy_centers, s=80,c='orange',edgecolor='k', label = 'T') ax1.quiver(xu_corners,yu_corners,cufields['uos'][0],np.zeros(xu_corners.shape),angles='xy', scale_units='xy', scale=0.1, width=0.007) ax1.quiver(xv_corners,yv_corners,np.zeros(xv_corners.shape),cvfields['vos'][0],angles='xy', scale_units='xy', scale=0.3, width=0.007) custom_lines = [Line2D([0], [0], marker='o', color='r',lw=0), Line2D([0], [0], marker='o', color='g', lw=0), Line2D([0], [0], marker='o', color='orange', lw=0)] ax1.legend(custom_lines, ['U', 'V', 'T'],bbox_to_anchor=(1.05, 1), loc='upper left', borderaxespad=0.) 
Image(filename='NEMO_latBC.png') Image(filename='NEMO_ghost_vel.png') data_path = 'NemoNorthSeaORCA025-N006_data/' ufiles = sorted(glob(data_path+'ORCA*U.nc')) vfiles = sorted(glob(data_path+'ORCA*V.nc')) wfiles = sorted(glob(data_path+'ORCA*W.nc')) mesh_mask = data_path + 'coordinates.nc' coords = xr.open_dataset(mesh_mask, decode_times=False) filenames = {'U': {'lon': mesh_mask, 'lat': mesh_mask, 'depth': wfiles[0], 'data': ufiles}, 'V': {'lon': mesh_mask, 'lat': mesh_mask, 'depth': wfiles[0], 'data': vfiles}, 'W': {'lon': mesh_mask, 'lat': mesh_mask, 'depth': wfiles[0], 'data': wfiles}} variables = {'U': 'uo', 'V': 'vo', 'W': 'wo'} dimensions = {'U': {'lon': 'glamf', 'lat': 'gphif', 'depth': 'depthw', 'time': 'time_counter'}, 'V': {'lon': 'glamf', 'lat': 'gphif', 'depth': 'depthw', 'time': 'time_counter'}, 'W': {'lon': 'glamf', 'lat': 'gphif', 'depth': 'depthw', 'time': 'time_counter'}} fieldset = FieldSet.from_nemo(filenames, variables, dimensions) npart = 10 # number of particles to be released lon = np.linspace(3 , 4, npart, dtype=np.float32) lat = 51.5 * np.ones(npart) time = np.zeros(npart) pset = ParticleSet(fieldset=fieldset, pclass=JITParticle, lon=lon, lat=lat, time=time) output_file = pset.ParticleFile(name="Cgrid-stuck.nc", outputdt=delta(hours=1)) pset.execute(AdvectionRK4, runtime=delta(days=10), dt=delta(minutes=5), output_file=output_file) output_file.close() # export the trajectory data to a netcdf file data_NEMO = xr.open_dataset('Cgrid-stuck.nc') pdx = np.diff(data_NEMO['lon'],axis=1,prepend=0) pdy = np.diff(data_NEMO['lat'],axis=1,prepend=0) distance = np.sqrt(np.square(pdx)+np.square(pdy)) stuck = distance<(1e-5) fig,(ax1,ax2) = plt.subplots(1,2,figsize=(14,6),constrained_layout=True) fig.suptitle('Figure 6. 
- NEMO model movement along solid cells in a C grid', fontsize=16) ax1.set_ylabel('Latitude [degrees]') ax1.set_xlabel('Longitude [degrees]') ax1.set_title('North Sea Domain') ax1.set_xlim(-5, 5) ax1.set_ylim(48, 58) ax1.pcolormesh(coords['glamf'][0],coords['gphif'][0],cvfields['vo'][0,0,1:,1:], cmap='RdBu', vmin=-0.1, vmax=0.1) ax1.plot(data_NEMO['lon'].T, data_NEMO['lat'].T,linewidth=3,zorder=1) ax1.scatter(data_NEMO['lon'], data_NEMO['lat'], c=stuck, cmap='viridis_r', zorder=2) ax2.set_ylabel('Latitude [degrees]') ax2.set_xlabel('Longitude [degrees]') ax2.set_title('Particle getting stuck') ax2.set_xlim(2.9, 4.1) ax2.set_ylim(51.4, 52.2) ax2.pcolormesh(coords['glamf'][0],coords['gphif'][0],cvfields['vo'][0,0,1:,1:], cmap='RdBu', vmin=-0.1, vmax=0.1,edgecolors='k',linewidth=1) ax2.plot(data_NEMO['lon'].T, data_NEMO['lat'].T,linewidth=3,zorder=1) ax2.scatter(data_NEMO['lon'], data_NEMO['lat'], c= stuck, s=30, cmap='viridis_r', alpha=0.3, zorder=2) color_stuck = copy(plt.get_cmap('viridis'))(0) color_moving = copy(plt.get_cmap('viridis'))(256) custom_lines = [Line2D([0], [0], c = color_stuck, marker='o', markersize=10), Line2D([0], [0], c = color_moving, marker='o', markersize=10)] ax2.legend(custom_lines, ['Stuck', 'Moving'],bbox_to_anchor=(1.05, 1), loc='upper left', borderaxespad=0.,framealpha=1) plt.show() fig = plt.figure(figsize=(8,6)) fig.suptitle('Figure 8. Tolerance for getting stuck = red line', fontsize=16) ax=plt.axes() ax.plot(distance.T, color='b',zorder=1) ax.hlines(1e-5,0,250, color='r',zorder=2) ax.set_ylabel('Approximate distance travelled [degrees]') ax.set_xlabel('Timestep') ax.set_ylim(-0.00002,0.003) ax.set_xlim(-10, 250) plt.show() fig = plt.figure(figsize=(8,6)) fig.suptitle('Figure 9. 
- B grid structure', fontsize=16) ax=plt.axes() ax.set_xlim(5.0,5.18) ax.set_ylim(52.9,53.02) ax.set_xlabel('Longitude') ax.set_ylabel('Latitude') ax.pcolormesh(x_centers[latminx:latmaxx+1,lonminx:lonmaxx+1],y_centers[latminx:latmaxx+1,lonminx:lonmaxx+1],udmask.mask[latminx:latmaxx,lonminx:lonmaxx],cmap='Reds_r',edgecolors='orange') ax.scatter(x_centers[latminx:latmaxx,lonminx:lonmaxx],y_centers[latminx:latmaxx,lonminx:lonmaxx],s=100,color='green',cmap='cmo.balance_r',vmin=-0.05,vmax=0.05,edgecolors='k') ax.scatter(x_outcorners[latminx:latmaxx,lonminx:lonmaxx],y_outcorners[latminx:latmaxx,lonminx:lonmaxx],s=100,color='orange',cmap='cmo.balance_r',vmin=-0.05,vmax=0.05,edgecolors='k') ax.quiver(x_centers[latminx:latmaxx,lonminx:lonmaxx],y_centers[latminx:latmaxx,lonminx:lonmaxx],SMOC_U,SMOC_V,angles='xy', scale_units='xy', scale=10, color='w',width=0.01) plon = 5.1 plat = 52.95 pU = fu(plon,plat) pV = fv(plon,plat) ax.scatter(plon,plat,s=50, color='cyan',marker='h') ax.quiver(plon,plat,pU,pV,angles='xy', scale_units='xy', scale=10, color='w') custom_lines = [Line2D([0], [0], c='green', marker='o', markersize=10, lw=0), Line2D([0], [0], c='orange', marker='o', markersize=10, lw=0), Line2D([0], [0], c='cyan', marker='h', markersize=10, lw=0)] ax.legend(custom_lines, ['Velocity points', 'Scalar points', 'Particle'],bbox_to_anchor=(1.05, 1), loc='upper left', borderaxespad=0.,framealpha=1) from parcels import DiffusionUniformKh, Kernel class LandParticle(JITParticle): on_land = Variable('on_land') def Sample_land(particle, fieldset, time): particle.on_land = fieldset.landmask[time, particle.depth, particle.lat, particle.lon] landmask = np.logical_or(np.ma.masked_equal(cufields['uo'][0, 0],0.).mask, np.ma.masked_equal(cvfields['vo'][0, 0],0.).mask) filenames = {'U': {'lon': mesh_mask, 'lat': mesh_mask, 'depth': wfiles[0], 'data': ufiles}, 'V': {'lon': mesh_mask, 'lat': mesh_mask, 'depth': wfiles[0], 'data': vfiles}, 'W': {'lon': mesh_mask, 'lat': mesh_mask, 'depth': 
wfiles[0], 'data': wfiles}} variables = {'U': 'uo', 'V': 'vo', 'W': 'wo'} dimensions = {'U': {'lon': 'glamf', 'lat': 'gphif', 'depth': 'depthw', 'time': 'time_counter'}, 'V': {'lon': 'glamf', 'lat': 'gphif', 'depth': 'depthw', 'time': 'time_counter'}, 'W': {'lon': 'glamf', 'lat': 'gphif', 'depth': 'depthw', 'time': 'time_counter'}} fieldset = FieldSet.from_nemo(filenames, variables, dimensions) fieldset.add_constant_field('Kh_zonal', 5, mesh='spherical') fieldset.add_constant_field('Kh_meridional', 5, mesh='spherical') # fieldset.add_constant('dres', 0.00005) fieldset.add_field(Field('landmask',data=landmask,lon=coords['glamf'],lat=coords['gphif'],mesh='spherical')) npart = 10 # number of particles to be released lon = np.linspace(3 , 4, npart, dtype=np.float32) lat = 51.5 * np.ones(npart) time = np.zeros(npart) pset = ParticleSet(fieldset=fieldset, pclass=LandParticle, lon=lon, lat=lat, time=time) output_file = pset.ParticleFile(name="Cgrid-diffusion.nc", outputdt=delta(hours=1)) pset.execute(pset.Kernel(AdvectionRK4)+pset.Kernel(DiffusionUniformKh)+pset.Kernel(Sample_land), runtime=delta(days=10), dt=delta(minutes=5), output_file=output_file) output_file.close() # export the trajectory data to a netcdf file data_diff = xr.open_dataset('Cgrid-diffusion.nc') stuck = data_diff['on_land']==1 fig = plt.figure(figsize=(14,8)) fig.suptitle('Figure 10. 
Diffusion added to C grid advection', fontsize=16) ax=plt.axes() ax.set_ylabel('Latitude [degrees]') ax.set_xlabel('Longitude [degrees]') ax.set_title('Particle getting stuck') ax.set_xlim(2.9, 4.1) ax.set_ylim(51.4, 52.2) ax.pcolormesh(coords['glamf'][0],coords['gphif'][0],cvfields['vo'][0,0,1:,1:], cmap='RdBu', vmin=-0.1, vmax=0.1,edgecolors='k',linewidth=1) ax.plot(data_diff['lon'].T, data_diff['lat'].T,linewidth=3,zorder=1) ax.scatter(data_diff['lon'], data_diff['lat'], c= stuck, s=30, cmap='viridis_r', alpha=0.3, zorder=2) ax.scatter(coords['glamf'][0],coords['gphif'][0],c=landmask,cmap='Reds_r',s=100,edgecolors='k',vmin=-1,linewidth=0.5) color_stuck = copy(plt.get_cmap('viridis'))(0) color_moving = copy(plt.get_cmap('viridis'))(256) color_land = copy(plt.get_cmap('Reds'))(0) color_ocean = copy(plt.get_cmap('Reds'))(128) custom_lines = [Line2D([0], [0], c = color_stuck, marker='o', markersize=10, lw=0), Line2D([0], [0], c = color_moving, marker='o', markersize=10, lw=0), Line2D([0], [0], c = color_ocean, marker='o', markersize=10, markeredgecolor='k', lw=0), Line2D([0], [0], c = color_land, marker='o', markersize=10, markeredgecolor='k', lw=0)] ax.legend(custom_lines, ['On land', 'In ocean', 'Landmask ocean', 'Landmask land'],bbox_to_anchor=(1.05, 1), loc='upper left', borderaxespad=0.,framealpha=1) plt.show() data_diff.close() <END_TASK>
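The A-grid discussion above hinges on bilinear interpolation: Parcels blends the four surrounding nodes, so a zero velocity at a land node drags the interpolated velocity toward zero as a particle approaches the boundary, which is exactly how particles get stuck. A minimal standalone sketch of that interpolation on a unit cell (the function name and sample values are illustrative, not Parcels' API):

```python
import numpy as np

def bilinear(u, x, y):
    """Bilinear interpolation of u, defined on the unit-cell corners
    u[0,0], u[0,1], u[1,0], u[1,1], at fractional position (x, y) in [0,1]."""
    return ((1 - x) * (1 - y) * u[0, 0] + x * (1 - y) * u[0, 1]
            + (1 - x) * y * u[1, 0] + x * y * u[1, 1])

# One land corner (u = 0): the interpolated speed decays smoothly to zero
# as the particle approaches that corner.
u = np.array([[0.0, 0.1],
              [0.1, 0.1]])    # u[0, 0] is a land node
print(bilinear(u, 0.0, 0.0))  # 0.0 (exactly on the land node)
print(bilinear(u, 0.5, 0.5))  # 0.075 (cell centre, pulled below 0.1)
```

Because both velocity components shrink by the same corner weights, the u/v ratio stays roughly constant near the boundary, matching the behaviour seen in figure 3A.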
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Point Interpolation Step2: Scipy.interpolate linear Step3: Natural neighbor interpolation (MetPy implementation) Step4: Cressman interpolation Step5: Barnes interpolation Step6: Radial basis function interpolation
<ASSISTANT_TASK:> Python Code: import cartopy.crs as ccrs import cartopy.feature as cfeature from matplotlib.colors import BoundaryNorm import matplotlib.pyplot as plt import numpy as np from metpy.cbook import get_test_data from metpy.interpolate import (interpolate_to_grid, remove_nan_observations, remove_repeat_coordinates) from metpy.plots import add_metpy_logo def basic_map(proj): Make our basic default map for plotting fig = plt.figure(figsize=(15, 10)) add_metpy_logo(fig, 0, 80, size='large') view = fig.add_axes([0, 0, 1, 1], projection=proj) view._hold = True # Work-around for CartoPy 0.16/Matplotlib 3.0.0 incompatibility view.set_extent([-120, -70, 20, 50]) view.add_feature(cfeature.STATES.with_scale('50m')) view.add_feature(cfeature.OCEAN) view.add_feature(cfeature.COASTLINE) view.add_feature(cfeature.BORDERS, linestyle=':') return fig, view def station_test_data(variable_names, proj_from=None, proj_to=None): with get_test_data('station_data.txt') as f: all_data = np.loadtxt(f, skiprows=1, delimiter=',', usecols=(1, 2, 3, 4, 5, 6, 7, 17, 18, 19), dtype=np.dtype([('stid', '3S'), ('lat', 'f'), ('lon', 'f'), ('slp', 'f'), ('air_temperature', 'f'), ('cloud_fraction', 'f'), ('dewpoint', 'f'), ('weather', '16S'), ('wind_dir', 'f'), ('wind_speed', 'f')])) all_stids = [s.decode('ascii') for s in all_data['stid']] data = np.concatenate([all_data[all_stids.index(site)].reshape(1, ) for site in all_stids]) value = data[variable_names] lon = data['lon'] lat = data['lat'] if proj_from is not None and proj_to is not None: try: proj_points = proj_to.transform_points(proj_from, lon, lat) return proj_points[:, 0], proj_points[:, 1], value except Exception as e: print(e) return None return lon, lat, value from_proj = ccrs.Geodetic() to_proj = ccrs.AlbersEqualArea(central_longitude=-97.0000, central_latitude=38.0000) levels = list(range(-20, 20, 1)) cmap = plt.get_cmap('magma') norm = BoundaryNorm(levels, ncolors=cmap.N, clip=True) x, y, temp = 
station_test_data('air_temperature', from_proj, to_proj) x, y, temp = remove_nan_observations(x, y, temp) x, y, temp = remove_repeat_coordinates(x, y, temp) gx, gy, img = interpolate_to_grid(x, y, temp, interp_type='linear', hres=75000) img = np.ma.masked_where(np.isnan(img), img) fig, view = basic_map(to_proj) mmb = view.pcolormesh(gx, gy, img, cmap=cmap, norm=norm) fig.colorbar(mmb, shrink=.4, pad=0, boundaries=levels) gx, gy, img = interpolate_to_grid(x, y, temp, interp_type='natural_neighbor', hres=75000) img = np.ma.masked_where(np.isnan(img), img) fig, view = basic_map(to_proj) mmb = view.pcolormesh(gx, gy, img, cmap=cmap, norm=norm) fig.colorbar(mmb, shrink=.4, pad=0, boundaries=levels) gx, gy, img = interpolate_to_grid(x, y, temp, interp_type='cressman', minimum_neighbors=1, hres=75000, search_radius=100000) img = np.ma.masked_where(np.isnan(img), img) fig, view = basic_map(to_proj) mmb = view.pcolormesh(gx, gy, img, cmap=cmap, norm=norm) fig.colorbar(mmb, shrink=.4, pad=0, boundaries=levels) gx, gy, img1 = interpolate_to_grid(x, y, temp, interp_type='barnes', hres=75000, search_radius=100000) img1 = np.ma.masked_where(np.isnan(img1), img1) fig, view = basic_map(to_proj) mmb = view.pcolormesh(gx, gy, img1, cmap=cmap, norm=norm) fig.colorbar(mmb, shrink=.4, pad=0, boundaries=levels) gx, gy, img = interpolate_to_grid(x, y, temp, interp_type='rbf', hres=75000, rbf_func='linear', rbf_smooth=0) img = np.ma.masked_where(np.isnan(img), img) fig, view = basic_map(to_proj) mmb = view.pcolormesh(gx, gy, img, cmap=cmap, norm=norm) fig.colorbar(mmb, shrink=.4, pad=0, boundaries=levels) plt.show() <END_TASK>
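To make the Cressman step above less of a black box: Cressman analysis is a distance-weighted mean of nearby station values, with weights w = (R² − r²)/(R² + r²) inside a search radius R and zero outside. A rough standalone sketch of those weights (an illustration of the scheme, not MetPy's implementation; the station distances and values are made up):

```python
import numpy as np

def cressman_weights(dists, search_radius):
    """Cressman-style weights: w = (R^2 - r^2) / (R^2 + r^2) inside the
    search radius R, zero outside."""
    r2 = np.asarray(dists, dtype=float) ** 2
    R2 = float(search_radius) ** 2
    w = (R2 - r2) / (R2 + r2)
    return np.where(r2 < R2, w, 0.0)

# Estimate the value at a grid point from three nearby stations.
dists = np.array([10.0, 50.0, 120.0])   # distances to stations (km)
vals = np.array([20.0, 22.0, 30.0])     # station temperatures
w = cressman_weights(dists, search_radius=100.0)
print(w[2])                             # 0.0 (outside the search radius)
estimate = np.sum(w * vals) / np.sum(w)
```

Barnes interpolation has the same structure but uses Gaussian weights, which is why the two maps above look similar away from data-sparse regions.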
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Note Step2: Include the input file that contains all input parameters needed for all components. This file can either be a python dictionary or a text file that can be converted into a python dictionary. If a text file is provided, it will be converted to a Python dictionary. Here we use an existing text file prepared for this exercise. Step3: Instantiate landlab components to simulate corresponding attributes. In this example, we shall demonstrate the use of seasonal rainfall and PFT-specific potential evapotranspiration. The instantiated objects are Step4: Let's look at the initial organization of PFTs Step5: Specify an approximate number of years for the model to run. For this example, we will run the simulation for 600 years. It should take about two minutes to run. Step6: Create empty arrays to store spatio-temporal data over multiple iterations. The captured data can be used for plotting model outputs. Step7: To reduce computational overhead, we shall create a lookup array for plant-specific PET values for each day of the year. Step8: Specify current_time (in years). current_time is the current time in the simulation. Step9: The for loop below couples the components introduced above until all "n" storms are generated. Time is advanced by the soil moisture object based on storm and interstorm durations that are estimated by the storm generator object. The ecohydrologic model is run every storm, whereas the cellular automaton vegetation component is run once every year. Step10: Time_Consumed is an optional variable that gives information about the computation time Step11: Save the outputs using numpy.save(). These files have a '.nc' extension and can be loaded using numpy.load(). Step12: Let's look at the outputs.
<ASSISTANT_TASK:> Python Code:
from __future__ import print_function
%matplotlib inline
import time
import numpy as np
from landlab import RasterModelGrid as rmg
from landlab import load_params
from Ecohyd_functions_flat import (
    Initialize_,
    Empty_arrays,
    Create_PET_lookup,
    Save_,
    Plot_,
)
grid1 = rmg((100, 100), spacing=(5.0, 5.0))
grid = rmg((5, 4), spacing=(5.0, 5.0))
InputFile = "Inputs_Vegetation_CA_flat.txt"
data = load_params(InputFile)  # Create dictionary that holds the inputs
PD_D, PD_W, Rad, PET_Tree, PET_Shrub, PET_Grass, SM, VEG, vegca = Initialize_(
    data, grid, grid1
)
import matplotlib.pyplot as plt
import matplotlib as mpl
cmap = mpl.colors.ListedColormap(["green", "red", "black", "white", "red", "black"])
bounds = [-0.5, 0.5, 1.5, 2.5, 3.5, 4.5, 5.5]
norm = mpl.colors.BoundaryNorm(bounds, cmap.N)
description = "green: grass; red: shrub; black: tree; white: bare"
plt.figure(101)
grid1.imshow(
    "vegetation__plant_functional_type",
    at="cell",
    cmap=cmap,
    grid_units=("m", "m"),
    norm=norm,
    limits=[0, 5],
    allow_colorbar=False,
)
plt.figtext(0.2, 0.0, description, weight="bold", fontsize=10)
n_years = 600  # Approx number of years for model to run
# Calculate approximate number of storms per year
fraction_wet = (data["doy__end_of_monsoon"] - data["doy__start_of_monsoon"]) / 365.0
fraction_dry = 1 - fraction_wet
no_of_storms_wet = (
    8760 * (fraction_wet) / (data["mean_interstorm_wet"] + data["mean_storm_wet"])
)
no_of_storms_dry = (
    8760 * (fraction_dry) / (data["mean_interstorm_dry"] + data["mean_storm_dry"])
)
n = int(n_years * (no_of_storms_wet + no_of_storms_dry))
P, Tb, Tr, Time, VegType, PET_, Rad_Factor, EP30, PET_threshold = Empty_arrays(
    n, grid, grid1
)
Create_PET_lookup(Rad, PET_Tree, PET_Shrub, PET_Grass, PET_, Rad_Factor, EP30, grid)
# Represent current time in years
current_time = 0  # Start from first day of Jan
# Keep track of run time for simulation - optional
Start_time = time.time()  # Recording time taken for simulation (time.clock() was removed in Python 3.8)
# declaring few variables that will be used in the storm loop
time_check = 0.0  # Buffer to store current_time at previous storm
yrs = 0  # Keep track of number of years passed
WS = 0.0  # Buffer for Water Stress
Tg = 270  # Growing season in days
# Run storm loop
for i in range(0, n):
    # Update objects
    # Calculate Day of Year (DOY)
    Julian = int(np.floor((current_time - np.floor(current_time)) * 365.0))
    # Generate seasonal storms
    # for Dry season
    if Julian < data["doy__start_of_monsoon"] or Julian > data["doy__end_of_monsoon"]:
        PD_D.update()
        P[i] = PD_D.storm_depth
        Tr[i] = PD_D.storm_duration
        Tb[i] = PD_D.interstorm_duration
    # Wet Season - Jul to Sep - NA Monsoon
    else:
        PD_W.update()
        P[i] = PD_W.storm_depth
        Tr[i] = PD_W.storm_duration
        Tb[i] = PD_W.interstorm_duration
    # Spatially distribute PET and its 30-day-mean (analogous to degree day)
    grid["cell"]["surface__potential_evapotranspiration_rate"] = PET_[Julian]
    grid["cell"]["surface__potential_evapotranspiration_30day_mean"] = EP30[Julian]
    # Assign spatial rainfall data
    grid["cell"]["rainfall__daily_depth"] = P[i] * np.ones(grid.number_of_cells)
    # Update soil moisture component
    current_time = SM.update(current_time, Tr=Tr[i], Tb=Tb[i])
    # Decide whether its growing season or not
    if Julian != 364:
        if EP30[Julian + 1, 0] > EP30[Julian, 0]:
            PET_threshold = 1  # 1 corresponds to ETThresholdup (begin growing season)
        else:
            PET_threshold = 0  # 0 corresponds to ETThresholddown (end growing season)
    # Update vegetation component
    VEG.update(PETThreshold_switch=PET_threshold, Tb=Tb[i], Tr=Tr[i])
    # Update yearly cumulative water stress data
    WS += (grid["cell"]["vegetation__water_stress"]) * Tb[i] / 24.0
    # Record time (optional)
    Time[i] = current_time
    # Update spatial PFTs with Cellular Automata rules
    if (current_time - time_check) >= 1.0:
        if yrs % 100 == 0:
            print("Elapsed time = {time} years".format(time=yrs))
        VegType[yrs] = grid1["cell"]["vegetation__plant_functional_type"]
        WS_ = np.choose(VegType[yrs], WS)
        grid1["cell"]["vegetation__cumulative_water_stress"] = WS_ / Tg
        vegca.update()
        time_check = current_time
        WS = 0
        yrs += 1
VegType[yrs] = grid1["cell"]["vegetation__plant_functional_type"]
Final_time = time.time()
Time_Consumed = (Final_time - Start_time) / 60.0  # in minutes
print("Time_consumed = {time} minutes".format(time=Time_Consumed))
# Saving
sim = "Sim_26Jul16_"
# Save_(sim, Tb, Tr, P, VegType, yrs, Time_Consumed, Time)
Plot_(grid1, VegType, yrs, yr_step=100)
<END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Read in Data Step2: Range Time Plots Step3: Range vs Parameter Plots
<ASSISTANT_TASK:> Python Code:
%matplotlib inline
import matplotlib
import os
import scipy as sp
import matplotlib.pyplot as plt
from GeoData.GeoData import GeoData
from GeoData.utilityfuncs import readMad_hdf5
from GeoData.plotting import rangevsparam, rangevstime
madfile = os.path.join('pfa140105.004.hdf5')
data1 = GeoData(readMad_hdf5, [madfile, ['nel', 'te', 'ti', 'vo', 'dnel', 'dte', 'dti', 'dvo']])
data1.data['ne'] = sp.power(10., data1.data['nel'])
data1.data['dne'] = sp.power(10., data1.data['dnel'])
fig1, axmat = plt.subplots(2, 2, figsize=(12, 9), sharey=True, sharex=True)
axvec = axmat.flatten()
desired_beam = data1.dataloc[0, 1:]
rangevstime(data1, desired_beam, vbounds=[1e9, 8e11], gkey='ne', fig=fig1, ax=axvec[0])
rangevstime(data1, desired_beam, vbounds=[500., 2500.], gkey='te', fig=fig1, ax=axvec[1])
rangevstime(data1, desired_beam, vbounds=[500., 2500.], gkey='ti', fig=fig1, ax=axvec[2])
rangevstime(data1, desired_beam, vbounds=[-50., 50.], gkey='vo', fig=fig1, ax=axvec[3])
plt.tight_layout()
plt.subplots_adjust(top=0.9)
fig2, axmat2 = plt.subplots(2, 2, figsize=(12, 9), sharey=True, sharex=False)
axvec2 = axmat2.flatten()
paramlist = ['ne', 'te', 'ti', 'vo']
time1 = 335
handvec = []
pbounds = [[1e10, 8e11], [200., 3500.], [200., 3500.], [-100., 100.]]
for inum, iax in enumerate(axvec2):
    ploth = rangevsparam(data1, desired_beam, time1, gkey=paramlist[inum], gkeyerr='d'+paramlist[inum], fig=fig1, ax=iax, it=False)
    iax.grid(True)
    iax.set_xlim(pbounds[inum])
    handvec.append(ploth)
plt.tight_layout()
<END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Great! Now we can run calls to query ReferenceSets, References, Datasets, VariantSets, CallSets, Variants, ReadGroups, ReadGroupSets, & Reads. Step2: Search/Get References Step3: In addition to fetching metadata about the reference, you can access the base sequence Step4: Search/Get Dataset Step5: Search/Get VariantSets Step6: Search/Get Callset Step7: Search/Get Variants Step8: Search/Get Readgroupsets Step9: Get ReadGroups Step10: Get Reads
<ASSISTANT_TASK:> Python Code:
import ga4gh.client  # this import was missing in the original; HttpClient is used below
baseURL = "http://localhost:8000"
client = ga4gh.client.HttpClient(baseURL)
referenceSets = list(client.searchReferenceSets())
print("ReferenceSets")
for referenceSet in referenceSets:
    print("NCBI Taxon Id: {}".format(referenceSet.ncbiTaxonId))
referenceSet = client.getReferenceSet(referenceSets[0].id)
referenceSet.toJsonDict()
references = list(client.searchReferences(referenceSet.id))
print("References")
for reference in references:
    print("Name: {}, Length: {}".format(reference.name, reference.length))
reference = client.getReference(references[0].id)
reference.toJsonDict()
client.listReferenceBases(references[0].id, start=10000, end=10100)
datasets = list(client.searchDatasets())
print("Datasets")
for dataset in datasets:
    print("Name: {}".format(dataset.name))
dataset = client.getDataset(datasets[0].id)
dataset.toJsonDict()
variantSets = list(client.searchVariantSets(dataset.id))
print("VariantSets")
for variantSet in variantSets:
    print("Name: {}".format(variantSet.name))
variantSetId = variantSets[0].id
variantSet = client.getVariantSet(variantSetId)
variantSet.toJsonDict()
callSets = list(client.searchCallSets(variantSetId))
print("CallSets")
for callSet in callSets:
    print("Name: {}".format(callSet.name))
callSet = client.getCallSet(callSets[0].id)
callSet.toJsonDict()
variants = list(client.searchVariants(variantSetId, start=100000, end=101000, referenceName="1"))
print("Variants")
for variant in variants:
    print("Reference Name: {}, Start: {}, Name: {}".format(variant.referenceName, variant.start, variant.names[0]))
variant = client.getVariant(variants[0].id)
variant.toJsonDict()
readGroupSets = list(client.searchReadGroupSets(dataset.id))
print("ReadGroupSets")
for readGroup in readGroupSets:
    print("Name: {}".format(readGroup.name))
readGroupSet = client.getReadGroupSet(readGroupSets[0].id)
readGroupSet.toJsonDict()
readGroup = client.getReadGroup(readGroupSet.readGroups[0].id)
readGroup.toJsonDict()
readGroupIds = [readGroup.id for readGroup in readGroupSet.readGroups]
readGroupDescriptions = [readGroup.description for readGroup in readGroupSet.readGroups]
reads = client.searchReads(readGroupIds, reference.id)
print("Read Alignments")
# The server paginates reads by default; here we iterate over the reads
# generator to make the appropriate requests to the server
count = 0
for read in reads:
    count += 1
print("{} reads in Readgroups: {} on Reference: {}".format(
    count, ', '.join(readGroupDescriptions), reference.name))
read.toJsonDict()
<END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Let's set up our plot so as to make it look more professional. Step2: Not too shabby. I find it annoying that the label for the y-axis is tilted sideways. Maybe someone more familiar with Matplotlib than myself could propose a solution for this. Step3: As you can see, we have created two layers Step4: Here is another issue that I can't figure out. If all the plotting code is executed simultaneously in the notebook, the plot generates fine. If, however, the code is run in individual cells, the plots are never generated. I'm not sure if this is a Matplotlib-Jupyter issue or a Retina issue. I think it's probably the former, but maybe some of the Jupyter devs can address this. This seems to happen regardless of whether %matplotlib inline or %matplotlib notebook magics are used, with the caveat that %matplotlib notebook allows generated plots to interact effectively with calls from subsequent cells provided that the plot is generated correctly the first time. It seems that splitting the plotting code across cells causes major hiccups. Step5: Let's try adding some lines to our plot. Step6: And now let's try toggling the display of the sin layer. Step7: We can also make plots in layers "boldfaced". Step8: Bolding effects can be applied in succession. Step9: And they can be undone. Step10: I'm not sure if the Jupyter team has considered this, but it would be nice if the %matplotlib notebook mode generated new plots each time a cell was run. The way things currently stand, I have to scroll up to the original plot in order to watch my changes propagate. This scrolling can quickly become infeasible in lengthier notebooks. Step11: We can add planes to 3D layers by specifying a point on the plane and a normal vector to the plane. 
Step12: We can bound the data contained in layers, either in a box (for 3D Layers) or in a rectangle/circle (for 2D layers). Step13: And we can remove the bounds. Step14: We can also set arbitrary properties for the plots contained in layers. For example, let's edit the alpha value of our 3D plots. Step15: Diagnostic Tracking
<ASSISTANT_TASK:> Python Code:
import retina.core.axes
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
fig = plt.figure()
ax1 = plt.subplot('111', projection='Fovea2D')
plt.xlabel('x')
plt.ylabel('y')
plt.title('An Introduction to Retina')
x = np.linspace(-2 * np.pi, 2 * np.pi)
sin_y = np.sin(x)
cos_y = np.cos(x)
sin = ax1.add_layer("sin")
cos = ax1.add_layer("cos")
sin.add_data(x, sin_y)
cos.add_data(x, cos_y)
ax1.build_layer("sin", color="green", linestyle="dashed", label="sin(x)")
ax1.build_layer("cos", color="blue", linestyle="dashed", label="cos(x)")
plt.show()
import retina.core.axes
import matplotlib.pyplot as plt
import numpy as np
%matplotlib notebook
fig2 = plt.figure()
ax2 = plt.subplot('111', projection='Fovea2D')
plt.xlabel('x')
plt.ylabel('y')
plt.title('An Introduction to Retina')
ax2.set_xlim(-2 * np.pi, 2 * np.pi)
x = np.linspace(-2 * np.pi, 2 * np.pi)
sin_y = np.sin(x)
cos_y = np.cos(x)
sin = ax2.add_layer("sin")
cos = ax2.add_layer("cos")
sin.add_data(x, sin_y)
cos.add_data(x, cos_y)
ax2.build_layer("sin", color="green", linestyle="dashed", label="sin(x)")
ax2.build_layer("cos", color="blue", linestyle="dashed", label="cos(x)")
sin.add_vline(0)
sin.add_hline(0)
sin.hide()
sin.show()
sin.toggle_display()
sin.toggle_display()
cos.bold()
cos.bold()
cos.unbold()
cos.unbold()
import retina.core.axes
import matplotlib.pyplot as plt
import numpy as np
from scipy.special import jn
%matplotlib notebook
fig = plt.figure()
ax = plt.subplot('111', projection='Fovea3D')
points = np.linspace(-10, 10, 51)
X, Y = np.meshgrid(points, points)
R = np.sqrt(X**2 + Y**2)
Z = jn(0, R)
surfing_usa = ax.add_layer('surfing_usa')
surfing_usa.add_data(X, Y, Z)
ax.build_layer('surfing_usa', plot=ax.plot_surface, rstride=1, cstride=1, cmap='jet')
surfing_usa.hide()
surfing_usa.show()
surfing_usa.add_plane([0, 3, 5], [1, 2, 1])
surfing_usa.bound()
surfing_usa.unbound()
surfing_usa.set_prop(alpha=0.5)
%matplotlib notebook
from IPython.display import display
from ipywidgets import widgets
textbox = widgets.Text()
display(textbox)
def handle_submit(sender):
    fn = textbox.value  # was 'text.value' in the original, but the widget is named 'textbox'
from __future__ import division
import PyDSTool as dst
#import PyDSTool.Toolbox.phaseplane as pp
import matplotlib.pyplot as plt
import retina.core.axes
import retina.core.layer
import retina.core.calc_context as cc
import math
import numpy as np
fig = plt.figure()
ax = plt.subplot('111', projection='Fovea2D')
layer = ax.add_layer('test_layer')
tracker = layer.tracker
def make_vel_ics(speed, ang):
    rad = math.pi*(ang)/180.
    return {'vx': speed*math.cos(rad),
            'vy': speed*math.sin(rad)}
def make_shooter():
    # no friction
    # cos(atan(x)) = 1/(sqrt(1+x^2))
    Fx_str = '0'  # '-speed_fn()*cos(atan2(vy,vx))'
    Fy_str = '-10'
    DSargs = dst.args()
    DSargs.varspecs = {'vx': Fx_str, 'x': 'vx',
                       'vy': Fy_str, 'y': 'vy',
                       'Fx_out': 'Fx(x,y)', 'Fy_out': 'Fy(x,y)',
                       'speed': 'speed_fn(vx, vy)',
                       'bearing': '90-180*atan2(vy,vx)/pi'}
    auxfndict = {'Fx': (['x', 'y'], Fx_str),
                 'Fy': (['x', 'y'], Fy_str),
                 'speed_fn': (['vx', 'vy'], 'sqrt(vx*vx+vy*vy)'),
                 }
    DSargs.auxvars = ['Fx_out', 'Fy_out', 'speed', 'bearing']
    DSargs.fnspecs = auxfndict
    DSargs.algparams = {'init_step': 0.001, 'max_step': 0.1,
                        'max_pts': 20000, 'maxevtpts': 2, 'refine': 5}
    ground_event = dst.Events.makeZeroCrossEvent('y', -1,
                                                 {'name': 'ground',
                                                  'eventtol': 1e-3,
                                                  'precise': True,
                                                  'term': True},
                                                 varnames=['y'],
                                                 targetlang='python')
    peak_event = dst.Events.makeZeroCrossEvent('vy', -1,
                                               {'name': 'peak',
                                                'eventtol': 1e-3,
                                                'precise': True,
                                                'term': False},
                                               varnames=['vy'],
                                               targetlang='python')
    DSargs.events = [ground_event, peak_event]
    DSargs.checklevel = 2
    DSargs.ics = {'x': 0, 'y': 0, 'vx': 0, 'vy': 0}
    DSargs.ics.update(make_vel_ics(5, 20))
    DSargs.name = 'cannon'
    DSargs.tdomain = [0, 100000]
    DSargs.tdata = [0, 10]
    return dst.embed(dst.Generator.Vode_ODEsystem(DSargs))
shooter = make_shooter()
# sim.model is a PyDSTool Model
sim = dst.args(tracked_objects=[],
               model=shooter,
               name='sim_cannon_traj',
               pts=None)
calc = cc.calc_context(sim, 'cannon_traj')
w = calc.workspace
shot_num = 0
def go(speed, angle, do_tracker=True):
    global shot_num, w
    shot_num += 1
    w.angle = angle
    w.speed = speed
    sim.model.set(ics=make_vel_ics(speed, angle))
    sim.model.compute('shot%i' % shot_num)
    sim.pts = sim.model.sample('shot%i' % shot_num)
    if do_tracker:
        ax.cla()
        ax.plot(sim.pts['x'], sim.pts['y'], 'b-', lw=3)
        ax.hlines(0, 0, max(sim.pts['x']))
        plt.show()
        calc()
        tracker.show()
        plt.show()
# initialize
go(30, 10, do_tracker=False)
# call tracker every loop to show all sim_stub tracked objects
# (= tracker_plotter objects)
#fig = plt.figure(1)
#ax = plt.gca()
# fig, ax = plt.subplots()
max_dist = cc.make_measure('maxdist', 'max(sim.pts["x"])')
max_height = cc.make_measure('maxheight', 'max(sim.pts["y"])')
calc.attach((max_dist, max_height))
tracker(calc, 2, ('angle', 'maxdist', 'ko'), clear_on_refresh=False)
tracker(calc, 2, ('angle', 'maxheight', 'ro'), clear_on_refresh=False)
def make_iter_angle():
    for angle in np.linspace(5, 85, 10):
        yield angle
iter_angle = make_iter_angle()
# rerun this ten times!
go(10, iter_angle.__next__())
# or call directly
go(10, 25)
<END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: 2. Set Configuration Step2: 3. Enter DV360 SDF To BigQuery Recipe Parameters Step3: 4. Execute DV360 SDF To BigQuery
<ASSISTANT_TASK:> Python Code:
!pip install git+https://github.com/google/starthinker
from starthinker.util.configuration import Configuration
CONFIG = Configuration(
    project="",
    client={},
    service={},
    user="/content/user.json",
    verbose=True
)
FIELDS = {
    'auth_write': 'service',  # Credentials used for writing data.
    'partner_id': '',  # The sdf file types.
    'file_types': [],  # The sdf file types.
    'filter_type': '',  # The filter type for the filter ids.
    'filter_ids': [],  # Comma separated list of filter ids for the request.
    'dataset': '',  # Dataset to be written to in BigQuery.
    'version': '5',  # The sdf version to be returned.
    'table_suffix': '',  # Optional: Suffix string to put at the end of the table name (Must contain alphanumeric or underscores)
    'time_partitioned_table': False,  # Is the end table a time partitioned
    'create_single_day_table': False,  # Would you like a separate table for each day? This will result in an extra table each day and the end table with the most up to date SDF.
}
print("Parameters Set To: %s" % FIELDS)
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
TASKS = [
    {
        'dataset': {
            'auth': {'field': {'name': 'auth_write', 'kind': 'authentication', 'order': 1, 'default': 'service', 'description': 'Credentials used for writing data.'}},
            'dataset': {'field': {'name': 'dataset', 'kind': 'string', 'order': 6, 'default': '', 'description': 'Dataset to be written to in BigQuery.'}}
        }
    },
    {
        'sdf': {
            'auth': 'user',
            'version': {'field': {'name': 'version', 'kind': 'choice', 'order': 6, 'default': '5', 'description': 'The sdf version to be returned.', 'choices': ['SDF_VERSION_5', 'SDF_VERSION_5_1']}},
            'partner_id': {'field': {'name': 'partner_id', 'kind': 'integer', 'order': 1, 'description': 'The sdf file types.'}},
            'file_types': {'field': {'name': 'file_types', 'kind': 'string_list', 'order': 2, 'default': [], 'description': 'The sdf file types.'}},
            'filter_type': {'field': {'name': 'filter_type', 'kind': 'choice', 'order': 3, 'default': '', 'description': 'The filter type for the filter ids.', 'choices': ['FILTER_TYPE_ADVERTISER_ID', 'FILTER_TYPE_CAMPAIGN_ID', 'FILTER_TYPE_INSERTION_ORDER_ID', 'FILTER_TYPE_MEDIA_PRODUCT_ID', 'FILTER_TYPE_LINE_ITEM_ID']}},
            'read': {
                'filter_ids': {
                    'single_cell': True,
                    'values': {'field': {'name': 'filter_ids', 'kind': 'integer_list', 'order': 4, 'default': [], 'description': 'Comma separated list of filter ids for the request.'}}
                }
            },
            'time_partitioned_table': {'field': {'name': 'time_partitioned_table', 'kind': 'boolean', 'order': 7, 'default': False, 'description': 'Is the end table a time partitioned'}},
            'create_single_day_table': {'field': {'name': 'create_single_day_table', 'kind': 'boolean', 'order': 8, 'default': False, 'description': 'Would you like a separate table for each day? This will result in an extra table each day and the end table with the most up to date SDF.'}},
            'dataset': {'field': {'name': 'dataset', 'kind': 'string', 'order': 6, 'default': '', 'description': 'Dataset to be written to in BigQuery.'}},
            'table_suffix': {'field': {'name': 'table_suffix', 'kind': 'string', 'order': 6, 'default': '', 'description': 'Optional: Suffix string to put at the end of the table name (Must contain alphanumeric or underscores)'}}
        }
    }
]
json_set_fields(TASKS, FIELDS)
execute(CONFIG, TASKS, force=True)
<END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: 1. Number of PA (Public Administration) Datasets Step2: 2. Number of PA Datasets without Metadata Step3: 3. Group by Parent Catalog Step4: 4. Group by Group Step5: Group by Last Modified -> Time series chart Step7: 6. Word Cloud on Titles Step8: Word Cloud on Notes
<ASSISTANT_TASK:> Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from datetime import datetime
import os
plt.style.use('ggplot')
get_ipython().magic('pylab inline')
import nltk
from nltk.tokenize import word_tokenize
import re
from collections import Counter
from nltk.corpus import stopwords
import string
dir_in = os.path.join(os.path.abspath(''), 'input')
dir_out = os.path.join(os.path.abspath(''), 'output')
df_file = os.path.join(dir_in, '2017-10-19_DSMetadatiPA.csv')
df = pd.read_csv(df_file, delimiter=';')
df['ds_name'].count()
df[pd.isnull(df['ds_title'])]['ds_name'].count()
# Check datasets without metadata
# http://www.dati.gov.it/api/3/action/package_show?id=popolazione-straniera
df_test = df[pd.isnull(df['ds_title'])][-5:]
df_test
df.head(2)
df_cat = df.groupby(['_catalog_parent_name']).count().reset_index().sort_values(by='ds_name', ascending=False).set_index('_catalog_parent_name')
df_cat.head(2)
# Top 10 elements by number of datasets
df_cat_10 = df_cat['ds_name'][:10]
df_cat_10.sum()
# The remaining elements
df_cat['ds_name'][11:].count()
# Number of datasets in the remaining elements
df_cat['ds_name'][11:].sum()
# Catalog list -> note the manually set labels
df_cat_10
# Chart
style.use('fivethirtyeight')
colors = [['red']]
tp = df_cat_10.plot(kind='bar', legend=False, figsize=(10, 4), color=colors)
for p in tp.patches:
    tp.annotate(str(int(round(p.get_height()))),
                (p.get_x() * 1.005, p.get_height() * 1.005),
                ha='center', va='center', xytext=(15, 5),
                textcoords='offset points', fontsize=9)
tp.plot()
tp.tick_params(axis='both', which='major', labelsize=10)
tp.set_xlabel('Catalogo', fontsize=8)
tp.set_ylabel('n. dataset', fontsize=8)
tp.axhline(y=0, color='black', linewidth=4, alpha=0.7)
labels = [item.get_text() for item in tp.get_xticklabels()]
labels[0] = 'Dati \nTrentino'
labels[1] = 'Regione \nToscana'
labels[2] = 'Dati \nLombardia'
labels[3] = 'INPS'
labels[4] = 'Comune di \nAlbano Laziale'
labels[5] = 'Regione \nSardegna'
labels[6] = 'Dati \nVeneto'
labels[7] = 'Open Data \nAlto Adige'
labels[8] = 'Dati Emilia \nRomagna'
labels[9] = 'Comune di \nLecce'
tp.set_xticklabels(labels)
plt.xticks(rotation=0)
# Adding a title and a subtitle
tp.text(x=-1.0, y=7640, s="I dati aperti della Pubblica Amministrazione", fontsize=14, weight='bold', alpha=.75)
tp.text(x=-1.0, y=6700, s=u"I primi 10 cataloghi sorgente producono oltre l' 80% dei dataset. Su un totale di 18.000 dataset, 15.600 sono prodotti \ndai primi 10 Enti Pubblici.", fontsize=10, alpha=.85)
text = tp.text(x=-1.2, y=-1500, s='www.ildatomancante.it Fonte: dati.gov.it', fontsize=10, color='#f0f0f0', backgroundcolor='grey')
text.set_url('http://www.ildatomancante.it')
fig_posts = tp.get_figure()
df_file_cat = os.path.join(dir_out, 'Cataloghi Dataset.png')
fig_posts.savefig(df_file_cat, format='png', dpi=300, bbox_inches='tight')
df_grp = df.groupby(['gruppo']).count().reset_index().sort_values(by='ds_name', ascending=False).set_index('gruppo')
df_grp.head(2)
# Top 10 elements by number of datasets
df_grp_10 = df_grp['ds_name'][:10]
df_grp_10.sum()
# The remaining elements
df_grp['ds_name'][11:].count()
# Number of datasets in the remaining elements
df_grp['ds_name'][11:].sum()
df_grp_10
# Chart
style.use('fivethirtyeight')
colors = [['red']]
tp = df_grp_10.plot(kind='bar', legend=False, figsize=(10, 4), color=colors)
for p in tp.patches:
    tp.annotate(str(int(round(p.get_height()))),
                (p.get_x() * 1.005, p.get_height() * 1.005),
                ha='center', va='center', xytext=(15, 5),
                textcoords='offset points', fontsize=9)
tp.plot()
tp.tick_params(axis='both', which='major', labelsize=10)
tp.set_xlabel('Gruppo', fontsize=8)
tp.set_ylabel('n. dataset', fontsize=8)
tp.axhline(y=0, color='black', linewidth=4, alpha=0.7)
labels = [item.get_text() for item in tp.get_xticklabels()]
labels[0] = u'Popolazione \ne società'
labels[1] = 'Ambiente'
labels[2] = 'Istruzione, \ncultura \ne sport'
labels[3] = 'Governo e \nsettore \npubblico'
labels[4] = 'Economia \ne finanze'
labels[5] = 'Agricoltura'
labels[6] = 'Trasporti'
labels[7] = 'Salute'
labels[8] = 'Giustizia'
labels[9] = u'Regioni \ne città'
tp.set_xticklabels(labels)
plt.xticks(rotation=0)
# Adding a title and a subtitle
tp.text(x=-1.0, y=2150, s="I dati aperti della Pubblica Amministrazione", fontsize=14, weight='bold', alpha=.75)
tp.text(x=-1.0, y=2000, s=u"Su ca.18.000 dataset solo in 6.400 hanno il campo gruppo popolato.", fontsize=10, alpha=.85)
text = tp.text(x=-1.2, y=-500, s='www.ildatomancante.it Fonte: dati.gov.it', fontsize=10, color='#f0f0f0', backgroundcolor='grey')
text.set_url('http://www.ildatomancante.it')
fig_posts = tp.get_figure()
df_file_grp = os.path.join(dir_out, 'Gruppi Dataset.png')
fig_posts.savefig(df_file_grp, format='png', dpi=300, bbox_inches='tight')
np.isnan(df['ultima_modifica'][0])
from datetime import datetime
def calcolo_anno(x):
    # Extract the year from the 'ultima_modifica' (last-modified) field; '9999' for missing values
    try:
        np.isnan(x['ultima_modifica'])
        anno = '9999'
    except:
        anno = x['ultima_modifica'][0:4]
    return anno
df['dt_ultima_modifica'] = df.apply(lambda x: calcolo_anno(x), axis=1)
df.head(2)
df_tms = df.groupby(['dt_ultima_modifica']).count().reset_index().sort_values(by='dt_ultima_modifica', ascending=False).set_index('dt_ultima_modifica')
df_tms
df.head(2)
emoticons_str = r"""
(?:
    [:=;] # Eyes
    [oO\-]? # Nose (optional)
    [D\)\]\(\]/\\OpP] # Mouth
)"""
regex_str = [
    emoticons_str,
    r'<[^>]+>',  # HTML tags
    r'(?:@[\w_]+)',  # @-mentions
    r"(?:\#+[\w_]+[\w\'_\-]*[\w_]+)",  # hash-tags
    r'http[s]?://(?:[a-z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-f][0-9a-f]))+',  # URLs
    r'(?:(?:\d+,?)+(?:\.?\d+)?)',  # numbers
    r"(?:[a-z][a-z'\-_]+[a-z])",  # words with - and '
    r'(?:[\w_]+)',  # other words
    r'(?:\S)'  # anything else
]
tokens_re = re.compile(r'(' + '|'.join(regex_str) + ')', re.VERBOSE | re.IGNORECASE)
emoticon_re = re.compile(r'^' + emoticons_str + '$', re.VERBOSE | re.IGNORECASE)
def tokenize(s):
    return tokens_re.findall(s)
def preprocess(s, lowercase=False):
    tokens = tokenize(s)
    if lowercase:
        tokens = [token if emoticon_re.search(token) else token.lower() for token in tokens]
    return tokens
count_only = Counter()
punctuation = list(string.punctuation)
# stop = stopwords.words('english') + punctuation + ['rt', 'via']
stop = punctuation
for i, row in df['ds_title'].iteritems():
    try:
        np.isnan(row)
    except:
        terms_only = [term for term in preprocess(row)
                      if term not in stop
                      and not term.startswith(('per', '\xc3', 'di', 'del', 'e', '\xa0', 'della',
                                               'Anno', 'a', 'in', '#039', 'al', 'dei',
                                               'con', 'nel', '12', '10', '1', 'da', '31'))]
        count_only.update(terms_only)
word_freq = count_only.most_common(30)
words_json = [{'text': item[0], 'weight': item[1]} for item in word_freq]
words_json
count_only = Counter()
punctuation = list(string.punctuation)
# stop = stopwords.words('english') + punctuation + ['rt', 'via']
stop = punctuation
for i, row in df['note'].iteritems():
    try:
        np.isnan(row)
    except:
        terms_only = [term for term in preprocess(row)
                      if term not in stop
                      and not term.startswith(('per', '\xc3', 'di', 'del', 'e', '\xa0', 'della',
                                               'Anno', 'a', 'in', '#039', 'al', 'dei',
                                               'con', 'nel', '12', '10', '1', 'da', '31',
                                               '<p>', '</p>', '<br />', 'i', 'sul',
                                               '</strong>', '<strong>', 'la', 'le', '\x80', 'xe2',
                                               'Tutti', 'il', 'sono', '\xa8', '\x99', 'che', 'una',
                                               'che', 'gli', 'Il', 'n', 'o', '\xe2', 'si', 'un',
                                               '\xc2', 'su', 'questo', 'come', 'stesso', 'I',
                                               'Tutte', '\x93', 'Tipo', 'La', '\xb9', 'Per',
                                               'degli', 'formati', 'pi'))]
        count_only.update(terms_only)
word_freq_note = count_only.most_common(30)
wordsnote__json = [{'text': item[0], 'weight': item[1]} for item in word_freq_note]
wordsnote__json
<END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Examples and tests Step2: Sympy can be a little tricky because it caches things, which means that the first implementation of this code silently changed tensors in place, without meaning to. Let's just check that our variables haven't changed
<ASSISTANT_TASK:> Python Code:
from __future__ import division
import sympy
from sympy import *
from sympy import Rational as frac
import simpletensors
from simpletensors import Vector, TensorProduct, SymmetricTensorProduct, Tensor
init_printing()
var('vartheta, varphi')
var('nu, m, delta, c, t')
# These are related scalar functions of time
var('r, v, Omega', cls=Function)
r = r(t)
v = v(t)
Omega = Omega(t)
# These get redefined momentarily, but have to exist first
var('nHat, lambdaHat, ellHat', cls=Function)
# And now we define them as vector functions of time
nHat = Vector('nHat', r'\hat{n}', [cos(Omega*t), sin(Omega*t), 0, ])(t)
lambdaHat = Vector('lambdaHat', r'\hat{\lambda}', [-sin(Omega*t), cos(Omega*t), 0, ])(t)
ellHat = Vector('ellHat', r'\hat{\ell}', [0, 0, 1, ])(t)
# These are the spin functions -- first, the individual components as regular
# sympy.Function objects; then the vectors themselves
var('S_n, S_lambda, S_ell', cls=Function)
var('Sigma_n, Sigma_lambda, Sigma_ell', cls=Function)
SigmaVec = Vector('SigmaVec', r'\vec{\Sigma}', [Sigma_n(t), Sigma_lambda(t), Sigma_ell(t)])(t)
SVec = Vector('S', r'\vec{S}', [S_n(t), S_lambda(t), S_ell(t)])(t)
nHat
diff(nHat, t)
diff(lambdaHat, t)
diff(lambdaHat, t).components
diff(lambdaHat, t).subs(t, 0).components
diff(lambdaHat, t, 2).components
diff(lambdaHat, t, 2).subs(t, 0).components
diff(ellHat, t)
diff(nHat, t, 2)
diff(nHat, t, 3)
diff(nHat, t, 4)
diff(SigmaVec, t, 0)
SigmaVec.fdiff()
diff(SigmaVec, t, 1)
diff(SigmaVec, t, 2)
diff(SigmaVec, t, 2) | nHat
T1 = TensorProduct(SigmaVec, SigmaVec, ellHat, coefficient=1)
T2 = TensorProduct(SigmaVec, nHat, lambdaHat, coefficient=1)
tmp = Tensor(T1, T2)
display(T1, T2, tmp)
diff(tmp, t, 1)
T1+T2
T2*ellHat
ellHat*T2
T1.trace(0, 1)
T2*ellHat
for k in range(1, 4):
    display((T2*ellHat).trace(0, k))
for k in range(1, 4):
    display((T2*ellHat).trace(0, k).subs(t, 0))
T1.trace(0, 1) * T2
display(T1, T2)
T3 = SymmetricTensorProduct(SigmaVec, SigmaVec, ellHat, coefficient=1)
display(T3)
T3.trace(0, 1)
diff(T3, t, 1)
T3.symmetric
T3*ellHat
ellHat*T3
T1+T3
T1 = SymmetricTensorProduct(SigmaVec, SigmaVec, ellHat, nHat, coefficient=1)
display(T1)
display(T1.trace())
T1*T2
type(_)
import simpletensors
isinstance(__, simpletensors.TensorProductFunction)
SymmetricTensorProduct(nHat, nHat, nHat).trace()
diff(T1.trace(), t, 1)
diff(T1.trace(), t, 2)
diff(T1.trace(), t, 2).subs(t, 0)
<END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: We can plot the absolute noise levels
<ASSISTANT_TASK:> Python Code:
# Author: Eric Larson <larson.eric.d@gmail.com>
#
# License: BSD (3-clause)
import os.path as op
import mne
data_path = mne.datasets.sample.data_path()
raw_erm = mne.io.read_raw_fif(op.join(data_path, 'MEG', 'sample',
                                      'ernoise_raw.fif'), preload=True)
raw_erm.plot_psd(tmax=10., average=True, dB=False, xscale='log')
<END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Description Step2: The open-circuit test performed on the low-voltage side of the transformer yielded the following data Step3: (a) Step4: $\theta = \arccos\frac{P_{OC}}{V_{OC}I_{OC}}$ Step5: $$Y_{EX} = G_C - jB_M$$ Step6: The base impedance of this transformer referred to the secondary side is Step7: Thus Step8: The resulting equivalent circuit is shown below Step9: (b) Step10: The voltage on the primary side of the transformer (referred to the secondary side) is Step11: There is a voltage drop under these load conditions. Step12: The transformer copper losses and core losses are Step13: Therefore the efficiency of this transformer at these conditions is
<ASSISTANT_TASK:> Python Code: %pylab notebook Sbase = 5000e3 # [VA] Vp0 = 230e3 # [V] Vs0 = 13.8e3 # [V] Req_pu = 0.01 Xeq_pu = 0.05 Voc = 13.8e3 # [V] Ioc = 21.1 # [A] Poc = 90.8e3 # [W] yex = Ioc / Voc print('yex = {:.6f}'.format(yex)) theta = arccos(Poc/(Voc*Ioc)) print('theta = {:.2f}°'.format(theta/pi*180)) # print the angle in degrees Yex = yex * (cos(theta) - sin(theta)*1j) print('Yex = {:.7f} S'.format(Yex)) Gc = Yex.real Bm = -Yex.imag Rc = 1/Gc Xm = 1/Bm print('Rc = {:4.0f} Ω'.format(Rc)) print('Xm = {:4.0f} Ω'.format(Xm)) Vbase = Vs0 Zbase = Vbase**2 / Sbase print('Zbase = {:.2f} Ω'.format(Zbase)) Req = Req_pu * Zbase Xeq = Xeq_pu * Zbase print('Req = {:4.2f} Ω'.format(Req)) print('Xeq = {:4.2f} Ω'.format(Xeq)) print(''' Req,s = {:.2f} Ω Xeq,s = {:.1f} Ω Rc,s = {:.0f} Ω Xm,s = {:.0f} Ω ============================== '''.format(Req, Xeq, Rc, Xm)) Pload_b = 4000e3 # [W] VS_b = 13.8e3 # [V] PF = 0.8 Is_b = Pload_b / (abs(VS_b) * PF) print('Is_b = {:.1f} A'.format(Is_b)) Is_b_angle = -arccos(PF) # [rad] IS_b = Is_b * (cos(Is_b_angle) + sin(Is_b_angle)*1j) print('IS_b = {:.1f} A ∠{:.2f}°'.format( abs(IS_b), Is_b_angle/pi*180)) Zeq = Req + Xeq*1j V_P = VS_b + IS_b*Zeq V_P_angle = arctan(V_P.imag/V_P.real) print('V_P = {:.0f} V ∠{:.1f}°'.format( abs(V_P), V_P_angle/pi*180)) VR = (abs(V_P)-abs(VS_b)) / abs(VS_b) *100 print('VR = {:.2f} %'.format(VR)) print('===========') Pcu = abs(IS_b)**2 * Req Pcore = abs(V_P)**2 / Rc print('Pcu = {:.1f} kW'.format(Pcu/1000)) print('Pcore = {:.1f} kW'.format(Pcore/1000)) Pout = Pload_b eta = Pout / (Pout + Pcu +Pcore) * 100 print('η = {:.1f} %'.format(eta)) print('==========') <END_TASK>
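The efficiency arithmetic in the last cell generalizes directly to other load points. Wrapped as a small helper function (my sketch, not part of the original notebook):

```python
def efficiency(p_out, p_cu, p_core):
    """Transformer efficiency in percent: P_out / (P_out + P_cu + P_core)."""
    return p_out / (p_out + p_cu + p_core) * 100.0

# Illustrative numbers of the same order as the worked example above:
eta = efficiency(4000e3, 50e3, 100e3)
print('eta = {:.1f} %'.format(eta))
```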
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: The goal of the extra space is to make the code more (visually) readable. Step2: Both a and b are now VARIABLES. Each variable has a type. In this case, they are both INTEGERS (whole numbers). To write the value of a variable to the screen, use the print statement. Step3: You can add some text to the print statement by putting the text between quotes (either single or double quotes work as long as you use the same at the beginning and end), and separate the text string and the variable by a comma Step4: A variable can be raised to a power by using ** Step5: <a name="ex1"></a> Exercise 1 Step6: Once you have created a variable in a Jupyter session, it will remain in memory, so you can use it in other cells as well. For example, the variable a, which was defined earlier in this Notebook, still exists. It will be 6 unless you changed it in Exercise 1. Step7: The user decides the order in which code blocks are executed. For example, In [6] means that it is the sixth execution of a code block. If you change the same code block and run it again, it will get number 7. If you define the variable a in code block 7, it will overwrite the value of a defined in a previous code block. Step8: Packages only have to be imported once in an IPython session. After the import, any plotting function may be called from any code cell as plt.function. For example Step9: Let's try to plot $y$ vs $x$ for $x$ going from $-4$ to $4$ for the polynomial in the exercise above. To do that, we need to evaluate $y$ at a bunch of points. Step10: To create an array x consisting of, for example, 5 equally spaced points between -4 and 4, use the linspace command Step11: In the above cell, x is an array of 5 floats (-4. is a float, -4 is an integer). 
Step12: Note that one hundred y values are computed in the simple line y = a*x**2 + b*x + c. The text after the # is a comment in the code. Any text on the line after the # is ignored by Python. Python treats arrays in the same fashion as it treats regular variables when you perform mathematical operations. The math is simply applied to every value in the array (and it runs much faster than when you would do every calculation separately). Step13: Keyword arguments should come after regular arguments. plot(linewidth = 6, [1, 2, 3], [2, 4, 3]) gives an error. Step14: <a href="#ex1">Back to Exercise 1</a> Step15: <a href="#ex2">Back to Exercise 2</a> Step16: <a href="#ex3">Back to Exercise 3</a> Step17: <a href="#ex4">Back to Exercise 4</a> Step18: <a href="#ex5">Back to Exercise 5</a>
<ASSISTANT_TASK:> Python Code: 6 * 2 a = 6 b = 2 a * b print a print b print a * b print a / b print 'the value of a is', a a**b print '1/3 gives', 1 / 3 print '1.0 / 3 gives', 1.0 / 3 print '1 / 3.0 gives', 1 / 3.0 print '1.0 / 3.0 gives', 1.0 / 3.0 print 'a:', a import matplotlib.pyplot as plt %matplotlib inline plt.plot([1,2,3,2]) import numpy as np x = np.linspace(-4, 4, 5) print x a = 1 b = 1 c = -6 x = np.linspace(-4, 4, 100) y = a*x**2 + b*x + c # Compute y for all x values plt.plot(x, y) plt.plot([1, 2, 3], [2, 4, 3], linewidth=6); a = 1 b = 1 c = -6 x = -2 y = a * x**2 + b * x + c print 'y evaluated at x=-2 is', y x = 0 y = a * x**2 + b * x + c print 'y evaluated at x=0 is', y x = 2 y = a * x**2 + b * x + c print 'y evaluated at x=2 is', y x = np.linspace(-3, 3, 100) y = (x + 2) * (x - 1) * (x - 2) plt.plot(x, y, 'r--') plt.plot([-2, 1, 2], [0, 0, 0], 'bo', markersize=10) plt.xlabel('x-axis') plt.ylabel('y-axis') plt.title('First Python Figure of Mark Bakker') holland = np.loadtxt('holland_temperature.dat') newyork = np.loadtxt('newyork_temperature.dat') beijing = np.loadtxt('beijing_temperature.dat') plt.plot(np.linspace(1, 12, 12), holland) plt.plot(np.linspace(1, 12, 12), newyork) plt.plot(np.linspace(1, 12, 12), beijing) plt.xlabel('Number of the month') plt.ylabel('Mean monthly temperature (Celsius)') plt.xlim(1, 12) plt.legend(['Holland','New York','Beijing'], loc='best'); air = np.loadtxt('holland_temperature.dat') sea = np.loadtxt('holland_seawater.dat') plt.subplot(211) plt.plot(air, 'b', label='air temp') plt.plot(sea, 'r', label='sea temp') plt.legend(loc='best') plt.ylabel('temp (Celsius)') plt.xlim(0, 11) plt.xticks([]) plt.subplot(212) plt.plot(air-sea, 'ko') plt.xticks(np.linspace(0, 11, 12), ['jan','feb','mar','apr','may','jun','jul','aug','sep','oct','nov','dec']) plt.xlim(0, 11) plt.ylabel('air - sea temp (Celsius)'); gold = [46, 38, 29, 24, 13, 11, 11, 8, 8, 7, 107] countries = ['USA', 'CHN', 'GBR', 'RUS', 'KOR', 'GER', 'FRA', 'ITA', 
'HUN', 'AUS', 'OTHER'] plt.pie(gold, labels = countries, colors = ['Gold', 'MediumBlue', 'SpringGreen', 'BlueViolet']) plt.axis('equal'); air = np.loadtxt('holland_temperature.dat') sea = np.loadtxt('holland_seawater.dat') plt.fill_between(range(1,13), air, color='b', alpha=0.3) plt.fill_between(range(1,13), sea, color='r', alpha=0.3) plt.xticks(np.linspace(0, 11, 12), ['jan', 'feb', 'mar', 'apr',\ 'may', 'jun', 'jul', 'aug', 'sep', 'oct', 'nov', 'dec']) plt.xlim(1, 12) plt.ylim(0, 20) plt.xlabel('Month') plt.ylabel('Temperature (Celsius)'); <END_TASK>
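The line `y = a*x**2 + b*x + c` above evaluates the polynomial at every element of the array at once. For comparison, here is what numpy is doing for you, spelled out in plain Python (a `linspace` stand-in is included so the sketch is self-contained):

```python
def linspace(start, stop, num):
    """Pure-Python stand-in for np.linspace."""
    step = (stop - start) / (num - 1)
    return [start + i * step for i in range(num)]

a, b, c = 1, 1, -6
x = linspace(-4, 4, 5)                     # [-4.0, -2.0, 0.0, 2.0, 4.0]
y = [a * xi**2 + b * xi + c for xi in x]   # one y value per x value
print(y)
```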
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Recall that a basic neural network in PyTorch can be set up like this Step2: We load the CIFAR-10 dataset and reshape the images as individual vectors. This is easy to do using TorchVision, as all the classes are essentially reshaped already. Here we use the DataLoader to create an instance of the training dataset and the test set. The DataLoader also has an option to shuffle the dataset, which helps avoid overfitting. Step3: Let's see some of our samples. Step4: We can compile the model using categorical-cross-entropy loss, and train it for 30 epochs. Step5: At this point, there is practically no need to evaluate the model as our neural net is simply not cutting it. With a loss function in the millions, we cannot use a simple single-layer neural network on images. Step6: Let's compile the model and test it again. Step7: Let's evaluate the model again. Step8: 58% accuracy is a big improvement on practically nothing! All of that is accomplished in just 2 epochs using convolutional layers and ReLUs. Step9: 61% accuracy! Our best yet. Looks heavily overfit though... Step10: Let's also review here how to save and load trained PyTorch models. It's easy!
<ASSISTANT_TASK:> Python Code: import random import numpy as np import matplotlib.pyplot as plt import torch import torch.nn as nn import torch.nn.functional as F import torchvision import torchvision.transforms as transforms device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.fc1 = nn.Linear(32, 64) self.fc2 = nn.Linear(64, 1) def forward(self, x): x = F.sigmoid(self.fc1(x)) x = self.fc2(x) return x net = Net().to(device) transform = transforms.Compose( [transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]) trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform) trainloader = torch.utils.data.DataLoader(trainset, batch_size=128, shuffle=True, num_workers=8) testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform) testloader = torch.utils.data.DataLoader(testset, batch_size=128, shuffle=False, num_workers=8) classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck') def imshow(img): img = img / 2 + 0.5 # unnormalize npimg = img.numpy() plt.imshow(np.transpose(npimg, (1, 2, 0))) plt.show() # get some random training images dataiter = iter(trainloader) images, labels = dataiter.next() # show images imshow(torchvision.utils.make_grid(images)) # print labels print(' '.join('%5s' % classes[labels[j]] for j in range(4))) from torch import optim optimizer = optim.SGD(net.parameters(), lr=0.01) criterion = nn.MSELoss() def fullPass(loader): running_loss = 0.0 for i, data in enumerate(loader, 0): inputs, labels = data inputs, labels = inputs.to(device), labels.to(device) outputs = net(inputs) loss = criterion(outputs, labels.float()) loss.backward() optimizer.step() # print statistics running_loss += loss if i % inputs.size()[0] == inputs.size()[0]-1: print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / 
inputs.size()[0])) running_loss = 0.0 net.train() for epoch in range(30): fullPass(trainloader); print('Finished Training') class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.conv1 = nn.Conv2d(3, 6, 5) self.pool = nn.MaxPool2d(2, 2) self.conv2 = nn.Conv2d(6, 16, 5) self.fc1 = nn.Linear(16 * 5 * 5, 120) self.fc2 = nn.Linear(120, 84) self.fc3 = nn.Linear(84, 10) def forward(self, x): x = self.pool(F.relu(self.conv1(x))) x = self.pool(F.relu(self.conv2(x))) x = x.view(-1, 16 * 5 * 5) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = self.fc3(x) return x net = Net().to(device) print(net) criterion = nn.CrossEntropyLoss() optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9) for epoch in range(30): # loop over the dataset multiple times running_loss = 0.0 for i, data in enumerate(trainloader, 0): # get the inputs; data is a list of [inputs, labels] inputs, labels = data inputs, labels = inputs.to(device), labels.to(device) # zero the parameter gradients optimizer.zero_grad() # forward + backward + optimize outputs = net(inputs) loss = criterion(outputs, labels) loss.backward() optimizer.step() # print statistics running_loss += loss.item() if i % 128 == 127: # print every 2000 mini-batches print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / 128)) running_loss = 0.0 print('Finished Training') correct = 0 total = 0 with torch.no_grad(): for data in testloader: images, labels = data images, labels = images.to(device), labels.to(device) outputs = net(images) _, predicted = torch.max(outputs.data, 1) total += labels.size(0) correct += (predicted == labels).sum().item() print('Accuracy of the network on the 10000 test images: %d %%' % ( 100 * correct / total)) criterion = nn.CrossEntropyLoss() optimizer = optim.Adam(net.parameters(), lr=0.001) for epoch in range(5): # loop over the dataset multiple times running_loss = 0.0 for i, data in enumerate(trainloader, 0): # get the inputs; data is a list of [inputs, labels] inputs, 
labels = data inputs, labels = inputs.to(device), labels.to(device) # zero the parameter gradients optimizer.zero_grad() # forward + backward + optimize outputs = net(inputs) loss = criterion(outputs, labels) loss.backward() optimizer.step() # print statistics running_loss += loss.item() if i % 128 == 127: # print every 2000 mini-batches print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / 128)) running_loss = 0.0 print('Finished Training') correct = 0 total = 0 with torch.no_grad(): for data in testloader: images, labels = data images, labels = images.to(device), labels.to(device) outputs = net(images) _, predicted = torch.max(outputs.data, 1) total += labels.size(0) correct += (predicted == labels).sum().item() print('Accuracy of the network on the 10000 test images: %d %%' % ( 100 * correct / total)) dataiter = iter(testloader) images, labels = dataiter.next() # print images imshow(torchvision.utils.make_grid(images)) print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4))) _, predicted = torch.max(outputs, 1) print('Predicted: ', ' '.join('%5s' % classes[predicted[j]] for j in range(4))) PATH = './cifar_net.pth' torch.save(net.state_dict(), PATH) torch.load(PATH) <END_TASK>
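The accuracy loop above reduces to an argmax over the class scores of each row followed by a comparison with the labels. Stripped of the framework, the logic is just this (pure-Python illustration, not PyTorch code):

```python
def argmax(scores):
    """Index of the largest score (first one on ties)."""
    return max(range(len(scores)), key=lambda i: scores[i])

def accuracy(batch_scores, labels):
    """Fraction of rows whose highest-scoring class matches the label."""
    correct = sum(1 for scores, y in zip(batch_scores, labels)
                  if argmax(scores) == y)
    return correct / len(labels)

scores = [[0.1, 2.3, -1.0],   # predicted class 1
          [4.0, 0.2, 0.1],    # predicted class 0
          [0.0, 0.1, 0.2]]    # predicted class 2
print(accuracy(scores, [1, 0, 1]))  # 2 of 3 rows are correct
```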
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: More Complicated Data Types Step2: Basic Things to do with Variables. Especially Floats. Step3: Conditionals in Python Step4: Functions in Python Step5: Loops in Python Step6: Numpy - "The Fundamental Package for Scientific Computing with Python"
<ASSISTANT_TASK:> Python Code: #A variable stores a piece of data and gives it a name #syntax of the form: #variable_name = variable_value #What are some types of variables you will need to use? answer = 42 print(answer) is_it_tuesday = True is_it_wednesday = False print(is_it_tuesday) pi_approx = 3.1415 print(pi_approx) my_name = "Dillon" print(my_name) #Lists #Defining a list and printing is very simple: prices = [10, 20, 30, 40, 50] print(prices) colors = [] colors.append("Green") colors.append("Blue") colors.append("Red") print(colors) colors[1] = "Yellow" print(colors[1]) print(colors) float1 = 5.75 float2 = 2.25 #Addition, subtraction, multiplication, division are as you expect print(float1 + float2) print(float1 - float2) print(float1 * float2) print(float1 / float2) #Here's an interesting one that showed up in your first homework. What does this do? result = 5 % 2 #What are some other math functions you would like to do with floats? import math as m print(m.log(10)) print(m.exp(m.log(float2))) print(m.pow(2,5)) print(2**5) print(2 * prices) #Sometimes you want to execute code only in certain circumstances. We saw this on HW1. #The basic syntax is: #if boolean_statement_1: # _code to do if 1 is true_ #elif boolean_statement_2: # _code to do if 2 is true_ #else: # _code to do in all other cases_ #A boolean statement is something that can be True or False. Named after mathematician George Boole. (5 < 3) is an #example, and in this case evaluates to False. #The tabs are mandatory. 
Let's do some more examples answer = 43 if answer == 42: print("This is the answer") print("No it really is") elif answer < 42: print("This is too low") else: print("This is too high") print("This is always run") rainy = True day = "Tuesday" if(rainy == False) and (day != "Tuesday"): print("Price is $10") elif(rainy == True) and (day == "Tuesday"): print("Price is $5") elif(rainy == True) or (day == "Tuesday"): print("Price is $7.50") #We can separate off code into functions that can take input and can give output. They serve as black boxes from the #perspective of the rest of our code #use the def keyword, and indent because this creates a new block def print_me( string ): print(string) #End with the "return" keyword return print_me("My name is Jacob") def my_favorite_song(): ans = "Amsterdam - Imagine Dragons" return ans print(my_favorite_song()) #What if we want to compute something and return the result? How would you write that return statement? #Repeat code until a conditional statement (or boolean expression, same thing!) ends the loop #Let's try printing a list fib = [1, 1, 2, 3, 5, 8] #Most of your loops can be written in the following way. Before we get to this, we'll go over two less neat ways to write #the same thing for e in fib: print(e) print("spacer") i = 0 while(i<len(fib)): print(fib[i]) i = i + 1 j=0 print("spacer") for j in range(0,len(fib)): print(fib[j]) j=j+1 import numpy as np #Here, we grab all of the functions and tools from the numpy package and store them in a local variable called np. #You can call that variable whatever you like, but 'np' is standard. #numpy has arrays, which function similarly to python lists. a = np.array( [1,2,3] ) b = np.array( [9,8,7] ) #Be careful with syntax. The parentheses and brackets are both required! print(a) #Access elements from them just like you would a regular list print(a[1:3]) #Element-wise operations are a breeze! 
c = a + b d = a - b e = a * b f = a / b g = np.matmul(a,b) print(g) #What are other things you would like to do with these arrays? #If we don't get to it, be sure to look at the notes once uploaded for Numeric ODE Integration! def deriv(y,t): return t IC = 0 start_time = 0 end_time = 10 num_times = 101 times = np.linspace(start_time,end_time, num_times) print(times) from scipy.integrate import odeint integrated_vector = odeint(deriv,IC,times) %matplotlib inline import matplotlib.pyplot as plt from ipywidgets import interact plt.plot(times,integrated_vector) plt.title("y = (1/2)t^2") <END_TASK>
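`odeint` in the final cells integrates dy/dt = t, whose exact solution is y = t**2 / 2. The basic idea behind such numeric ODE integration can be sketched with a hand-rolled forward-Euler step — far less accurate than `odeint`, but it shows what the solver is doing:

```python
def euler(deriv, y0, t0, t1, n_steps):
    """Integrate dy/dt = deriv(y, t) with fixed-step forward Euler."""
    y, t = y0, t0
    dt = (t1 - t0) / n_steps
    for _ in range(n_steps):
        y += deriv(y, t) * dt
        t += dt
    return y

# dy/dt = t with y(0) = 0 has exact solution y = t**2 / 2
approx = euler(lambda y, t: t, 0.0, 0.0, 10.0, 10000)
print(approx)  # close to 10**2 / 2 = 50
```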
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Numerical evaluations Step3: Substitution of equations and values into equations Step4: We can substitute a range of equations into each other by using the custom function subs_eq Step5: We can also use subs_eq to substitute equations into each other and a dictionary with values. We will first add an entry for T_a into the dictionary and then substitute Step6: Evaluation of equations for long lists of variable sets Step7: We will now create two long lists of values representing T_g and n_g respectively and show how long it takes to compute ideal gas law values. Step8: Both approaches give identical results, but theano_function makes it a lot faster. Step9: Now applying to a long list of Nu-values Step10: We will now again use a theano function to make it faster. First we import optimize from scipy and prepare the theano_function Step11: Using theano and scipy makes it 2 orders of magnitude faster and the results are different only by 10$^{-10}$%! Step12: We will now test that we can process Nuvals bit by bit and re-create it consistently Step13: Now we will run fsolve for portions of Nuvals bit by bit Step14: It is strange that resvals1 and resvals2 are different at all, but anyway, it is clear that slicing the data in relatively small portions is important to keep scipy.optimize.fsolve time-efficient. Step15: Use of autowrap made the calculation 3 orders of magnitude faster than substitution of values into the original expression!
<ASSISTANT_TASK:> Python Code: from IPython.display import display from sympy import init_printing, latex init_printing() from sympy.printing import StrPrinter StrPrinter._print_Quantity = lambda self, expr: str(expr.abbrev) # displays short units (m instead of meter) %run -i 'test_equation_definitions.py' for eq in Equation.__registry__.keys(): print(eq.definition.name + ': ' + str(eq)) def print_dict(vdict, list_vars=None): Print values and units of variables in vdict. if not list_vars: list_vars = vdict.keys() for var1 in list_vars: unit1 = var1.definition.unit if unit1 == 1: unit1 = '' if vdict[var1] is not None: print('{0}: {1} {2}'.format(var1.name, str(vdict[var1]), str(unit1))) vdict = Variable.__defaults__.copy() print_dict(vdict) from essm.variables.utils import subs_eq subs_eq(eq_Le, [eq_alphaa, eq_Dva]) vdict[T_a] = 300. subs_eq(eq_Le, [eq_alphaa, eq_Dva], vdict) #import theano from sympy.printing.theanocode import theano_function import numpy as np npoints = 10000 xmin = 290. xmax = 310. Tvals = np.arange(xmin, xmax, (xmax - xmin)/npoints) xmin = 0.1 xmax = 0.5 nvals = np.arange(xmin, xmax, (xmax-xmin)/npoints) %%time # looping expr = eq_ideal_gas_law.rhs.subs(Variable.__defaults__) resvals0 = [] for i in range(len(Tvals)): resvals0.append(expr.subs({T_g: Tvals[i], n_g: nvals[i]})) %%time # Using theano f1 = theano_function([T_g, n_g], [eq_ideal_gas_law.rhs.subs(Variable.__defaults__)], dims={T_g:1, n_g:1}) resvals1 = f1(Tvals,nvals) list(resvals0) == list(resvals1) from sympy import nsolve vdict = Variable.__defaults__.copy() vdict[Pr] = 0.71 vdict[Re_c] = 3000. vdict[Nu] = 1000. expr = eq_Nu_forced_all.subs(vdict) nsolve(expr, 1000.) npoints = 100 xmin = 1000. xmax = 1200. Nuvals = np.arange(xmin, xmax, (xmax - xmin)/npoints) %%time # Solving for a range of Nu values vdict = Variable.__defaults__.copy() vdict[Pr] = 0.71 vdict[Re_c] = 3000. 
resvals = [] for Nu1 in Nuvals: vdict[Nu] = Nu1 resvals.append(nsolve(eq_Nu_forced_all.subs(vdict), 1000.)) import scipy.optimize as sciopt vdict = Variable.__defaults__.copy() vdict[Pr] = 0.71 vdict[Re_c] = 3000. expr = eq_Nu_forced_all.subs(vdict) expr1 = expr.rhs - expr.lhs fun_tf = theano_function([Re, Nu], [expr1], dims={Nu:1, Re:1}) x0vals = np.full(Nuvals.shape, fill_value=2000.) # array of same shape as Nuvals, with initial guess %%time # Solving for a range of Nu values resvals1 = sciopt.fsolve(fun_tf, args=Nuvals, x0=x0vals) np.mean(abs((resvals - resvals1)/resvals)) npoints = 1000 xmin = 1000. xmax = 1200. Nuvals = np.arange(xmin, xmax, (xmax - xmin)/npoints) x0vals = np.full(Nuvals.shape, fill_value=2000.) %%time # Solving for a range of Nu values resvals1 = sciopt.fsolve(fun_tf, args=Nuvals, x0=x0vals) # Solving for a range of Nu values imax = len(Nuvals) i0 = 0 idiff = 100 i1 = i0 resvals2 = [] while i1 < imax - 1: i0 = i1 # note that resvals[0:2] + resvals[2:4] = resvals[0:4] i1 = min(i0+idiff, imax) resvals0 = Nuvals[i0:i1] resvals2 = np.append(resvals2,resvals0) print(list(resvals2) == list(Nuvals)) %%time # Solving for a range of Nu values imax = len(Nuvals) i0 = 0 idiff = 100 i1 = i0 resvals2 = [] while i1 < imax - 1: i0 = i1 # note that resvals[0:2] + resvals[2:4] = resvals[0:4] i1 = min(i0+idiff, imax) resvals0 = sciopt.fsolve(fun_tf, args=Nuvals[i0:i1], x0=x0vals[i0:i1]) resvals2 = np.append(resvals2,resvals0) np.mean(abs((resvals1 - resvals2)/resvals1)) from sympy.utilities.autowrap import autowrap from sympy import symbols x, y, z = symbols('x y z') expr = ((x - y + z)**(13)).expand() autowrap_func = autowrap(expr) %%time autowrap_func(1, 4, 2) %%time expr.subs({x:1, y:4, z:2}) from sympy.utilities.autowrap import binary_function f = binary_function('f', expr) %%time f(x,y,z).evalf(2, subs={x:1, y:4, z:2}) <END_TASK>
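The bit-by-bit processing of `Nuvals` near the end is just chunked iteration over an array, relying on the fact that concatenating consecutive slices reproduces the original. The chunking pattern on its own (plain Python, no scipy required; `chunked` is a name invented here):

```python
def chunked(values, size):
    """Yield consecutive slices of `values`, each of at most `size` elements."""
    for start in range(0, len(values), size):
        yield values[start:start + size]

data = list(range(10))
pieces = list(chunked(data, 3))
print(pieces)  # [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]]

# Concatenating the chunks reproduces the original list, which is the
# property the notebook checks before running fsolve chunk by chunk:
flat = [x for piece in pieces for x in piece]
assert flat == data
```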
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Set parameters Step2: Read epochs for the channel of interest Step3: Compute statistic Step4: Plot
<ASSISTANT_TASK:> Python Code: # Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr> # # License: BSD-3-Clause import matplotlib.pyplot as plt import mne from mne import io from mne.stats import permutation_cluster_test from mne.datasets import sample print(__doc__) data_path = sample.data_path() meg_path = data_path / 'MEG' / 'sample' raw_fname = meg_path / 'sample_audvis_filt-0-40_raw.fif' event_fname = meg_path / 'sample_audvis_filt-0-40_raw-eve.fif' tmin = -0.2 tmax = 0.5 # Setup for reading the raw data raw = io.read_raw_fif(raw_fname) events = mne.read_events(event_fname) channel = 'MEG 1332' # include only this channel in analysis include = [channel] picks = mne.pick_types(raw.info, meg=False, eog=True, include=include, exclude='bads') event_id = 1 reject = dict(grad=4000e-13, eog=150e-6) epochs1 = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks, baseline=(None, 0), reject=reject) condition1 = epochs1.get_data() # as 3D matrix event_id = 2 epochs2 = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks, baseline=(None, 0), reject=reject) condition2 = epochs2.get_data() # as 3D matrix condition1 = condition1[:, 0, :] # take only one channel to get a 2D array condition2 = condition2[:, 0, :] # take only one channel to get a 2D array threshold = 6.0 T_obs, clusters, cluster_p_values, H0 = \ permutation_cluster_test([condition1, condition2], n_permutations=1000, threshold=threshold, tail=1, n_jobs=1, out_type='mask') times = epochs1.times fig, (ax, ax2) = plt.subplots(2, 1, figsize=(8, 4)) ax.set_title('Channel : ' + channel) ax.plot(times, condition1.mean(axis=0) - condition2.mean(axis=0), label="ERF Contrast (Event 1 - Event 2)") ax.set_ylabel("MEG (T / m)") ax.legend() for i_c, c in enumerate(clusters): c = c[0] if cluster_p_values[i_c] <= 0.05: h = ax2.axvspan(times[c.start], times[c.stop - 1], color='r', alpha=0.3) else: ax2.axvspan(times[c.start], times[c.stop - 1], color=(0.3, 0.3, 0.3), alpha=0.3) hf = plt.plot(times, T_obs, 'g') 
ax2.legend((h, ), ('cluster p-value < 0.05', )) ax2.set_xlabel("time (ms)") ax2.set_ylabel("f-values") <END_TASK>
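`permutation_cluster_test` above builds a null distribution by shuffling condition labels. A toy version of the same idea for a simple difference in means (plain Python, far simpler than MNE's clustered statistic; `permutation_pvalue` is a name invented here):

```python
import random

def permutation_pvalue(a, b, n_permutations=1000, seed=0):
    """One-sided p-value for mean(a) > mean(b) by label shuffling."""
    rng = random.Random(seed)
    observed = sum(a) / len(a) - sum(b) / len(b)
    pooled = a + b
    count = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        perm = sum(pooled[:len(a)]) / len(a) - sum(pooled[len(a):]) / len(b)
        if perm >= observed:
            count += 1
    return count / n_permutations

a = [5.1, 4.9, 5.3, 5.2, 5.0]
b = [1.0, 1.2, 0.9, 1.1, 1.0]
print(permutation_pvalue(a, b))  # small: the two groups clearly differ
```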
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: HTML tag adder. Step2: Arbitrary tag adder. Step3: Check if all parameters (keyword too) are of a given type. Step4: Write a decorator that makes sure that a function only raises one type of error. If it would raise another exception, raise this type instead.
<ASSISTANT_TASK:> Python Code: import functools from datetime import datetime def timer(func): @functools.wraps(func) def timed_func(*args, **kwargs): start = datetime.now() ret = func(*args, **kwargs) tm = datetime.now() - start print("Running time: {}".format(tm.total_seconds())) return ret return timed_func @timer def fast_func(): for _ in range(1000): pass @timer def slow_func(): for _ in range(10000000): pass fast_func() slow_func() def html_wrapper(func): @functools.wraps(func) def wrapped(*args, **kwargs): return "<html>{}</html>".format(func(*args, **kwargs)) return wrapped @html_wrapper def greeter(name): return "Hello {}".format(name) greeter("Judit") def tag_wrapper(tag_name): def concrete_decorator(func): @functools.wraps(func) def wrapped(*args, **kwargs): return "<{0}>{1}</{0}>".format(tag_name, func(*args, **kwargs)) return wrapped return concrete_decorator @tag_wrapper("h1") def greeter(name): return "Hello {}".format(name) greeter("John") def check_type(param_type): def concrete_decorator(func): @functools.wraps(func) def wrapped(*args, **kwargs): for idx, arg in enumerate(args): if not isinstance(arg, param_type): raise TypeError("Positional arg {0} is of wrong type ({1})".format(idx, type(arg))) for kw, arg in kwargs.items(): if not isinstance(arg, param_type): raise TypeError("{0} is of wrong type ({1})".format(kw, type(arg))) return func(*args, **kwargs) return wrapped return concrete_decorator @check_type(int) def age_printer(age): print("I am {} old".format(age)) age_printer(1) # def only_one_exception(exc_type): <END_TASK>
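The last exercise ("only raise one type of error") is left as a commented stub above. One possible completion — my sketch, not the notebook author's solution — catches whatever the wrapped function raises and re-raises it as the configured type:

```python
import functools

def only_one_exception(exc_type):
    """Decorator factory: any exception from the wrapped function is
    re-raised as `exc_type` instead."""
    def concrete_decorator(func):
        @functools.wraps(func)
        def wrapped(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except exc_type:
                raise                   # already the right type
            except Exception as e:
                raise exc_type(str(e))  # convert everything else
        return wrapped
    return concrete_decorator

@only_one_exception(ValueError)
def divider(a, b):
    return a / b

try:
    divider(1, 0)                     # raises ZeroDivisionError internally...
except ValueError as err:
    print('caught ValueError:', err)  # ...but it surfaces as ValueError
```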
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Load 120 seconds of an audio file Step2: Plot the time-domain waveform of the audio signal Step3: Play the audio file Step4: Step 2 Step5: We transpose the result to accommodate scikit-learn which assumes that each row is one observation, and each column is one feature dimension Step6: Scale the features to have zero mean and unit variance Step7: Verify that the scaling worked Step8: Step 2b Step9: Load 120 seconds of an audio file Step10: Listen to the second audio file. Step11: Plot the time-domain waveform and spectrogram of the second audio file. In what ways does the time-domain waveform look different than the first audio file? What differences in musical attributes might this reflect? What additional insights are gained from plotting the spectrogram? Explain. Step12: Extract MFCCs from the second audio file. Be sure to transpose the resulting matrix such that each row is one observation, i.e. one set of MFCCs. Also be sure that the shape and size of the resulting MFCC matrix is equivalent to that for the first audio file. Step13: Scale the resulting MFCC features to have approximately zero mean and unit variance. Re-use the scaler from above. Step14: Verify that the mean of the MFCCs for the second audio file is approximately equal to zero and the variance is approximately equal to one. Step15: Step 3 Step16: Construct a vector of ground-truth labels, where 0 refers to the first audio file, and 1 refers to the second audio file. 
Step17: Create a classifier model object Step18: Train the classifier Step19: Step 4 Step20: Listen to both of the test audio excerpts Step21: Compute MFCCs from both of the test audio excerpts Step22: Scale the MFCCs using the previous scaler Step23: Concatenate all test features together Step24: Concatenate all test labels together Step25: Compute the predicted labels Step26: Finally, compute the accuracy score of the classifier on the test data Step27: Currently, the classifier returns one prediction for every MFCC vector in the test audio signal. Can you modify the procedure above such that the classifier returns a single prediction for a 10-second excerpt? Step28: Step 5 Step29: Compute the pairwise correlation of every pair of the 12 MFCCs against one another for both test audio excerpts. For each audio excerpt, which pair of MFCCs are the most correlated? least correlated? Step30: Display a scatter plot of any two of the MFCC dimensions (i.e. columns of the data frame) against one another. Try for multiple pairs of MFCC dimensions. Step31: Display a scatter plot of any two of the MFCC dimensions (i.e. columns of the data frame) against one another. Try for multiple pairs of MFCC dimensions. Step32: Plot a histogram of all values across a single MFCC, i.e. MFCC coefficient number. Repeat for a few different MFCC numbers.
<ASSISTANT_TASK:> Python Code: filename_brahms = 'brahms_hungarian_dance_5.mp3' url = "http://audio.musicinformationretrieval.com/" + filename_brahms if not os.path.exists(filename_brahms): urllib.urlretrieve(url, filename=filename_brahms) librosa.load? x_brahms, fs_brahms = librosa.load(filename_brahms, duration=120) librosa.display.waveplot? # Your code here: IPython.display.Audio? # Your code here: librosa.feature.mfcc? n_mfcc = 12 mfcc_brahms = librosa.feature.mfcc(x_brahms, sr=fs_brahms, n_mfcc=n_mfcc).T mfcc_brahms.shape scaler = sklearn.preprocessing.StandardScaler() mfcc_brahms_scaled = scaler.fit_transform(mfcc_brahms) mfcc_brahms_scaled.mean(axis=0) mfcc_brahms_scaled.std(axis=0) filename_busta = 'busta_rhymes_hits_for_days.mp3' url = "http://audio.musicinformationretrieval.com/" + filename_busta urllib.urlretrieve? # Your code here. Download the second audio file in the same manner as the first audio file above. librosa.load? # Your code here. Load the second audio file in the same manner as the first audio file. # x_busta, fs_busta = IPython.display.Audio? plt.plot? # See http://musicinformationretrieval.com/stft.html for more details on displaying spectrograms. librosa.feature.melspectrogram? librosa.amplitude_to_db? librosa.display.specshow? librosa.feature.mfcc? # Your code here: # mfcc_busta = mfcc_busta.shape scaler.transform? # Your code here: # mfcc_busta_scaled = mfcc_busta_scaled.mean? mfcc_busta_scaled.std? features = numpy.vstack((mfcc_brahms_scaled, mfcc_busta_scaled)) features.shape labels = numpy.concatenate((numpy.zeros(len(mfcc_brahms_scaled)), numpy.ones(len(mfcc_busta_scaled)))) # Support Vector Machine model = sklearn.svm.SVC() model.fit? # Your code here x_brahms_test, fs_brahms = librosa.load(filename_brahms, duration=10, offset=120) x_busta_test, fs_busta = librosa.load(filename_busta, duration=10, offset=120) IPython.display.Audio? IPython.display.Audio? librosa.feature.mfcc? librosa.feature.mfcc? scaler.transform? 
scaler.transform? numpy.vstack? numpy.concatenate? model.predict? score = model.score(test_features, test_labels) score # Your code here. df_brahms = pandas.DataFrame(mfcc_brahms_test_scaled) df_brahms.shape df_brahms.head() df_busta = pandas.DataFrame(mfcc_busta_test_scaled) df_brahms.corr() df_busta.corr() df_brahms.plot.scatter? df_busta.plot.scatter? df_brahms[0].plot.hist() df_busta[11].plot.hist() <END_TASK>
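For the single-prediction question posed in the steps above, one common approach is a majority vote over the per-frame labels returned by the classifier. A minimal sketch with a hypothetical list of frame-level predictions (not the notebook's actual output):

```python
from collections import Counter

def excerpt_prediction(frame_predictions):
    # Collapse per-frame class labels into one label for the whole excerpt
    counts = Counter(frame_predictions)
    return counts.most_common(1)[0][0]

# Hypothetical frame-level labels for a 10-second excerpt:
frames = [1, 1, 0, 1, 1, 1, 0, 1]
print(excerpt_prediction(frames))  # -> 1
```

The same idea works on the output of `model.predict()` by passing the resulting array in place of the toy list.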
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Firstly, let's set up the notebook and import the Waterfall() class to read the data. Step2: Now, let's read the observation data using Waterfall() Step3: We can get basic info about the file by reading the filterbank header. The info() method will do this Step4: The specifics are discussed in detail in the SIGPROC user guide. Briefly, astronomers use a Celestial coordinate system to specify the location of objects in outer space. The src_raj and src_dej specify the J2000 coordinates, in terms of Right Ascension and Declination (RA & DEC), toward which the telescope is pointing. tstart specifies the Julian Date of the observation, and fch1 and foff specify the frequency of the first and frequency increment of each data channel respectively, in MHz. Step5: The data has the shape (16, 1, 1048576), meaning that it has 16 time integrations, and 1048576 channels, over a bandwidth of 2.9 MHz. For the record, that's a lot of channels! The corresponding channel bandwidth is far narrower (Hz) than that required for most astrophysical observations (kHz). Step6: What we see here is the power spectral density (or simply "spectrum") of the data contained in the filterbank file. The logged=True tells the plotting script to display the signal on a decibel scale, which makes it easier to see high dynamic range. Step7: This is it Step8: Because Voyager is not fixed on the surface of Earth, it has a relative motion with respect to the telescope, which causes its apparent frequency to drift. We can see this by plotting what's known as a 'Waterfall', or dynamic spectrum, which shows the spectrum as a function of time.
<ASSISTANT_TASK:> Python Code: !wget http://blpd0.ssl.berkeley.edu/Voyager_data/Voyager1.single_coarse.fine_res.h5 %matplotlib inline import pylab as plt from blimpy import Waterfall file_path = 'Voyager1.single_coarse.fine_res.h5' obs = Waterfall(file_path) obs.info() print(obs.header) print(obs.data.shape) obs.plot_spectrum(logged=True) obs.plot_spectrum(f_start=8419.26, f_stop=8419.34) plt.figure(figsize=(8, 6)) plt.subplot(3,1,1) obs.plot_spectrum(f_start=8419.274, f_stop=8419.275) # left sideband plt.subplot(3,1,2) obs.plot_spectrum(f_start=8419.296, f_stop=8419.298) # carrier plt.subplot(3,1,3) obs.plot_spectrum(f_start=8419.319, f_stop=8419.320) # right sideband plt.tight_layout() obs.plot_waterfall(f_start=8419.296, f_stop=8419.298) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Hamiltonian Step2: Software version
<ASSISTANT_TASK:> Python Code: %matplotlib inline import matplotlib.pyplot as plt import numpy as np from qutip import * def integrate(N, h, Jx, Jy, Jz, psi0, tlist, gamma, solver): si = qeye(2) sx = sigmax() sy = sigmay() sz = sigmaz() sx_list = [] sy_list = [] sz_list = [] for n in range(N): op_list = [] for m in range(N): op_list.append(si) op_list[n] = sx sx_list.append(tensor(op_list)) op_list[n] = sy sy_list.append(tensor(op_list)) op_list[n] = sz sz_list.append(tensor(op_list)) # construct the hamiltonian H = 0 # energy splitting terms for n in range(N): H += - 0.5 * h[n] * sz_list[n] # interaction terms for n in range(N-1): H += - 0.5 * Jx[n] * sx_list[n] * sx_list[n+1] H += - 0.5 * Jy[n] * sy_list[n] * sy_list[n+1] H += - 0.5 * Jz[n] * sz_list[n] * sz_list[n+1] # collapse operators c_op_list = [] # spin dephasing for n in range(N): if gamma[n] > 0.0: c_op_list.append(np.sqrt(gamma[n]) * sz_list[n]) # evolve and calculate expectation values if solver == "me": result = mesolve(H, psi0, tlist, c_op_list, sz_list) elif solver == "mc": ntraj = 250 result = mcsolve(H, psi0, tlist, c_op_list, sz_list, ntraj) return result.expect # # set up the calculation # solver = "me" # use the ode solver #solver = "mc" # use the monte-carlo solver N = 10 # number of spins # array of spin energy splittings and coupling strengths. 
# here we use uniform parameters, but in general we don't have to h = 1.0 * 2 * np.pi * np.ones(N) Jz = 0.1 * 2 * np.pi * np.ones(N) Jx = 0.1 * 2 * np.pi * np.ones(N) Jy = 0.1 * 2 * np.pi * np.ones(N) # dephasing rate gamma = 0.01 * np.ones(N) # initial state, first spin in state |1>, the rest in state |0> psi_list = [] psi_list.append(basis(2,1)) for n in range(N-1): psi_list.append(basis(2,0)) psi0 = tensor(psi_list) tlist = np.linspace(0, 50, 200) sz_expt = integrate(N, h, Jx, Jy, Jz, psi0, tlist, gamma, solver) fig, ax = plt.subplots(figsize=(10,6)) for n in range(N): ax.plot(tlist, np.real(sz_expt[n]), label=r'$\langle\sigma_z^{(%d)}\rangle$'%n) ax.legend(loc=0) ax.set_xlabel(r'Time [ns]') ax.set_ylabel(r'$\langle\sigma_z\rangle$') ax.set_title(r'Dynamics of a Heisenberg spin chain'); from qutip.ipynbtools import version_table version_table() <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Background Step2: Distribution of PRs' open time (age) Step3: PRs which need attention Step4: PRs which need authors' attention Step5: PRs which need editors' attention Step6: PRs which have been quiet for over a week Step7: PRs which have been open for over a month
<ASSISTANT_TASK:> Python Code: import argparse import datetime from dateutil import parser as date_parser import json import logging import numpy as np import os import pandas as pd import pprint import requests from pandas.io.json import json_normalize query_template={{ search(query: "org:kubeflow is:pr is:open created:>2019-01-01", type: ISSUE, first: 100, {cursor}) {{ edges {{ node {{ ... on PullRequest {{ title url number createdAt lastEditedAt updatedAt editor {{ login }} author {{ login }} }} }} }} pageInfo {{ endCursor hasNextPage }} }} }} headers = {"Authorization": "Bearer {0}".format(os.getenv("GITHUB_TOKEN"))} search_cursor = None hasNextPage = True df = pd.DataFrame() cursor_text = "" pageno=0 while hasNextPage: print('getting data fo page ',pageno) if search_cursor: cursor_text = "after:\"{0}\"".format(search_cursor) query = query_template.format(cursor=cursor_text) request = requests.post('https://api.github.com/graphql', json={'query': query}, headers=headers) print('request code ',request.status_code) rtext=request.text rdict=json.loads(rtext) prnodes = rdict["data"]["search"]["edges"] search_cursor = rdict["data"]["search"]["pageInfo"]["endCursor"] hasNextPage = rdict["data"]["search"]["pageInfo"]["hasNextPage"] print('nomer of nodes found in this page', len(prnodes)) rawdf = pd.DataFrame.from_dict(prnodes)[['node']] newdf = pd.DataFrame.from_records(rawdf['node']) #decompose column node to multiple columns df = pd.concat([df,newdf]) pageno = pageno +1 df['createdAt'] = df['createdAt'].apply(lambda x: pd.Timestamp(x)) df['lastEditedAt'] = df['lastEditedAt'].apply(lambda x: pd.Timestamp(x)) df['updatedAt'] = df['updatedAt'].apply(lambda x: pd.Timestamp(x)) def getTimeDeltaFromNowH(x): td = pd.Timestamp.utcnow() - pd.Timestamp(x) days, hours, minutes = td.days, td.seconds // 3600, td.seconds % 3600 / 60.0 return days*24 + hours df['creationAge'] = df['createdAt'].apply(lambda x: getTimeDeltaFromNowH(x)) df['lastEditAge'] = 
df['lastEditedAt'].apply(lambda x: getTimeDeltaFromNowH(x)) df['updateAge'] = df['updatedAt'].apply(lambda x: getTimeDeltaFromNowH(x)) hist = df.hist(column='creationAge', bins=200,figsize=(12,8), color='#86bf91', zorder=2, rwidth=0.9) ax = hist[0] for x in ax: x.set_xlabel("Age in hours") x.set_ylabel("total number of PRs") MAX_AGE = 24*30 MAX_QUIET_TIME = 7*24 needs_attention = df[(df.creationAge > MAX_AGE) | ((df.updateAge > MAX_QUIET_TIME) & (df.lastEditAge > MAX_QUIET_TIME*1.0)) | ((df.updateAge > MAX_QUIET_TIME) & (df.lastEditAge>df.updateAge*1.0)) | ((df.lastEditAge > MAX_QUIET_TIME*1.0) & (df.lastEditAge < df.updateAge*1.0)) ] needs_attention needs_attention[(needs_attention.updateAge > MAX_QUIET_TIME) & (needs_attention.lastEditAge>needs_attention.updateAge*1.0)] needs_attention[(needs_attention.lastEditAge > MAX_QUIET_TIME*1.0) & (needs_attention.lastEditAge<needs_attention.updateAge*1.0)] needs_attention[(needs_attention.lastEditAge > MAX_QUIET_TIME*1.0) & (needs_attention.updateAge > MAX_QUIET_TIME)] needs_attention[needs_attention.creationAge > MAX_AGE] <END_TASK>
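The cursor loop above can be isolated from the network layer. A minimal sketch of the same endCursor/hasNextPage pagination pattern, with a fake two-page response standing in for the GitHub API:

```python
def paginate(fetch_page):
    # Collect nodes across pages using endCursor/hasNextPage,
    # the same loop structure used against the GraphQL API above.
    nodes, cursor, has_next = [], None, True
    while has_next:
        page = fetch_page(cursor)
        nodes.extend(page["edges"])
        cursor = page["pageInfo"]["endCursor"]
        has_next = page["pageInfo"]["hasNextPage"]
    return nodes

# Fake two-page response standing in for the GitHub API:
pages = {
    None: {"edges": [1, 2], "pageInfo": {"endCursor": "c1", "hasNextPage": True}},
    "c1": {"edges": [3], "pageInfo": {"endCursor": "c2", "hasNextPage": False}},
}
print(paginate(lambda c: pages[c]))  # -> [1, 2, 3]
```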
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Defining the cost function Step2: Setting the optimizer Step3: Checking the solution Step4: Another way of solving non-linear equations is by using non-linear solvers implemented in libraries such as scipy. There are different solvers one can choose from, which correspond to different numerical methods. We are going to use fsolve, which is a general non-linear solver that finds the root of a given function.
<ASSISTANT_TASK:> Python Code: # Import modules import sys import numpy as np import matplotlib.pyplot as plt # Import PySwarms import pyswarms as ps print('Running on Python version: {}'.format(sys.version)) def cost_function(I): #Fixed parameters U = 10 R = 100 I_s = 9.4e-12 v_t = 25.85e-3 c = abs(U - v_t * np.log(abs(I[:, 0] / I_s)) - R * I[:, 0]) return c %%time # Set-up hyperparameters options = {'c1': 0.5, 'c2': 0.3, 'w':0.3} # Call instance of PSO optimizer = ps.single.GlobalBestPSO(n_particles=10, dimensions=1, options=options) # Perform optimization cost, pos = optimizer.optimize(cost_function, iters=30) print(pos[0]) print(cost) x = np.linspace(0.001, 0.1, 100).reshape(100, 1) y = cost_function(x) plt.plot(x, y) plt.xlabel('Current I [A]') plt.ylabel('Cost'); # Import non-linear solver from scipy.optimize import fsolve c = lambda I: abs(10 - 25.85e-3 * np.log(abs(I / 9.4e-12)) - 100 * I) initial_guess = 0.09 current_I = fsolve(func=c, x0=initial_guess) print(current_I[0]) <END_TASK>
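The same diode equation can also be bracketed and solved with plain bisection, without scipy. A sketch using the circuit constants from the cost function above (the signed residual replaces the absolute value so it crosses zero at the root):

```python
import math

def residual(I, U=10.0, R=100.0, I_s=9.4e-12, v_t=25.85e-3):
    # Signed version of the cost above (no abs), zero at the operating point
    return U - v_t * math.log(I / I_s) - R * I

def bisect(func, lo, hi, tol=1e-12):
    # Plain bisection; assumes func(lo) and func(hi) have opposite signs
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if func(lo) * func(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2.0

I_root = bisect(residual, 1e-3, 0.2)
print(round(I_root, 3))  # ~0.094 A, consistent with fsolve above
```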
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Gaussian Processes Step2: Support Vector Machines Step3: Feature engineering and two classification algorithms Step4: Working in Feature Space Step5: Classification using inner products in Feature Space Step6: Classification using density estimation Step7: Classification using density estimation Step8: Applications Step9: Conditional mean embedding (3)
<ASSISTANT_TASK:> Python Code: ### FIRST SOME CODE #### from __future__ import division, print_function, absolute_import from IPython.display import SVG, display, Image, HTML import numpy as np, scipy as sp, pylab as pl, matplotlib.pyplot as plt, scipy.stats as stats, sklearn, sklearn.datasets from scipy.spatial.distance import squareform, pdist, cdist import rkhs_operators as ro import distributions as dist #commit 480cf98 of https://github.com/ingmarschuster/distributions pl.style.use(u'seaborn-talk') display(Image(filename="GP_uq.png", width=630)) #source: http://scikit-learn.org/0.17/modules/gaussian_process.html display(Image(filename="SVM.png", width=700)) #source: https://en.wikipedia.org/wiki/Support_vector_machine display(Image(filename="monomials_small.jpg", width=800)) #source: Berhard Schölkopf figkw = {"figsize":(4,4), "dpi":150} np.random.seed(5) samps_per_distr = 20 data = np.vstack([stats.multivariate_normal(np.array([-2,0]), np.eye(2)*1.5).rvs(samps_per_distr), stats.multivariate_normal(np.array([2,0]), np.eye(2)*1.5).rvs(samps_per_distr)]) distr_idx = np.r_[[0]*samps_per_distr, [1]*samps_per_distr] f = pl.figure(**figkw); for (idx, c, marker) in [(0,'r', (0,3,0)), (1, "b", "x")]: plt.scatter(*data[distr_idx==idx,:].T, c=c, marker=marker, alpha = 0.4) pl.show() pl.figure(**figkw) for (idx, c, marker) in [(0,'r', (0,3,0)), (1, "b", "x")]: pl.scatter(*data[distr_idx==idx,:].T, c=c, marker=marker, alpha=0.2) pl.arrow(0, 0, *data[distr_idx==idx,:].mean(0), head_width=0.3, width=0.05, head_length=0.3, fc=c, ec=c) pl.title(r"Mean embeddings for $\Phi(x)=x$"); pl.figure(**figkw) for (idx, c, marker) in [(0,'r', (0,3,0)), (1, "b", "x")]: pl.scatter(*data[distr_idx==idx,:].T, c=c, marker=marker, alpha=0.2) pl.arrow(0, 0, *data[distr_idx==idx,:].mean(0), head_width=0.3, width=0.05, head_length=0.3, fc=c, ec=c) pl.title(r"Mean embeddings for $\Phi(x)=x$"); pl.scatter(np.ones(1), np.ones(1), c='k', marker='D', alpha=0.8); # Some plotting code def 
apply_to_mg(func, *mg): #apply a function to points on a meshgrid x = np.vstack([e.flat for e in mg]).T return np.array([func(i.reshape((1,2))) for i in x]).reshape(mg[0].shape) def plot_with_contour(samps, data_idx, cont_func, method_name = None, delta = 0.025, pl = pl, colormesh_cmap = pl.cm.Pastel2, contour_classif = True): x = np.arange(samps.T[0].min()-delta, samps.T[1].max()+delta, delta) y = np.arange(samps.T[1].min()-delta, samps.T[1].max()+delta, delta) X, Y = np.meshgrid(x, y) Z = apply_to_mg(cont_func, X,Y) Z = Z.reshape(X.shape) fig = pl.figure(**figkw) if colormesh_cmap is not None: bound = np.abs(Z).max() pl.pcolor(X, Y, Z , cmap=colormesh_cmap, alpha=0.5, edgecolors=None, vmin=-bound, vmax=bound) if contour_classif is True: c = pl.contour(X, Y, Z, colors=['k', ], alpha = 0.5, linestyles=[ '--'], levels=[0], linewidths=0.7) else: pl.contour(X, Y, Z, linewidths=0.7) if method_name is not None: pl.title(method_name) for (idx, c, marker) in [(0,'r', (0,3,0)), (1, "b", "x")]: pl.scatter(*data[distr_idx==idx,:].T, c=c, marker=marker, alpha = 0.4) pl.show() pl.close() est_dens_1 = dist.mixt(2, [dist.mvnorm(x, np.eye(2)*0.1) for x in data[:4]], [1./4]*4) plot_with_contour(data, distr_idx, lambda x: exp(est_dens_1.logpdf(x)), colormesh_cmap=None, contour_classif=False) est_dens_1 = dist.mixt(2, [dist.mvnorm(x, np.eye(2)*0.1,10) for x in data[:samps_per_distr]], [1./samps_per_distr]*samps_per_distr) plot_with_contour(data, distr_idx, lambda x: exp(est_dens_1.logpdf(x)), colormesh_cmap=None, contour_classif=False) class KMEclassification(object): def __init__(self, samps1, samps2, kernel): self.de1 = ro.RKHSDensityEstimator(samps1, kernel, 0.1) self.de2 = ro.RKHSDensityEstimator(samps2, kernel, 0.1) def classification_score(self, test): return (self.de1.eval_kme(test) - self.de2.eval_kme(test)) data, distr_idx = sklearn.datasets.make_circles(n_samples=400, factor=.3, noise=.05) kc = KMEclassification(data[distr_idx==0,:], data[distr_idx==1,:], 
ro.LinearKernel()) plot_with_contour(data, distr_idx, kc.classification_score, 'Inner Product classif. '+"Linear", pl = plt, contour_classif = True, colormesh_cmap = pl.cm.bwr) kc = KMEclassification(data[distr_idx==0,:], data[distr_idx==1,:], ro.GaussianKernel(0.3)) plot_with_contour(data, distr_idx, kc.classification_score, 'Inner Product classif. '+"Gaussian", pl = plt, contour_classif = True, colormesh_cmap = pl.cm.bwr) out_samps = data[distr_idx==0,:1] + 1 inp_samps = data[distr_idx==0,1:] + 1 def plot_mean_embedding(cme, inp_samps, out_samps, p1 = 0., p2 = 1., offset = 0.5): x = np.linspace(inp_samps.min()-offset,inp_samps.max()+offset,200) fig = pl.figure(figsize=(10, 5)) ax = [pl.subplot2grid((2, 2), (0, 1)), pl.subplot2grid((2, 2), (0, 0), rowspan=2), pl.subplot2grid((2, 2), (1, 1))] ax[1].scatter(out_samps, inp_samps, alpha=0.3, color = 'r') ax[1].set_xlabel('Output') ax[1].set_ylabel('Input') ax[1].axhline(p1, 0, 8, color='g', linestyle='--') ax[1].axhline(p2, 0, 8, color='b', linestyle='--') ax[1].set_title("%d input-output pairs"%len(out_samps)) ax[1].set_yticks((p1, p2)) e = cme.lhood(np.array([[p1], [p2]]), x[:, None]).T #ax[0].plot(x, d[0], '-', label='cond. density') ax[2].plot(x, e[0], 'g--', label='cond. mean emb.') ax[2].set_title(r"p(outp | inp=%.1f)"%p1) ax[0].plot(x, e[1], 'b--', label='cond. mean emb.') ax[0].set_title(r"p(outp | inp=%.1f)"%p2) #ax[2].legend(loc='best') fig.tight_layout() cme = ro.ConditionMeanEmbedding(inp_samps, out_samps, ro.GaussianKernel(0.3), ro.GaussianKernel(0.3), 5) plot_mean_embedding(cme, inp_samps, out_samps, 0.3, 2.,) HTML('<iframe width="560" height="315" src="https://www.youtube.com/embed/MpzaCCbX-z4?rel=0&amp;showinfo=0&amp;start=148" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe>') display(Image(filename="Pendulum_eigenfunctions.png", width=700)) display(Image(filename="KeywordClustering.png", width=700)) <END_TASK>
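The mean-embedding score used by KMEclassification above can be written out by hand for the Gaussian kernel. A small plain-Python sketch on toy points (it omits the regularisation that the spystats estimators apply):

```python
import math

def gauss_k(x, y, sigma=0.3):
    # Gaussian (RBF) kernel between two points
    return math.exp(-sum((a - b) ** 2 for a, b in zip(x, y)) / (2 * sigma ** 2))

def kme_score(test, samps_a, samps_b, sigma=0.3):
    # Empirical kernel mean embedding of class A minus that of class B,
    # evaluated at `test` -- the quantity classification_score thresholds
    mean_a = sum(gauss_k(test, s, sigma) for s in samps_a) / len(samps_a)
    mean_b = sum(gauss_k(test, s, sigma) for s in samps_b) / len(samps_b)
    return mean_a - mean_b

A = [(0.0, 0.0), (0.2, 0.1)]
B = [(2.0, 2.0), (2.1, 1.9)]
print(kme_score((0.1, 0.0), A, B) > 0)  # -> True: nearer class A
```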
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description:
<ASSISTANT_TASK:> Python Code: class Edge:
    def __init__(self):
        self.src = 0
        self.dest = 0
        self.weight = 0

class Graph:
    def __init__(self):
        self.V = 0
        self.E = 0
        self.edge = []

def createGraph(V, E):
    graph = Graph()
    graph.V = V
    graph.E = E
    graph.edge = [Edge() for i in range(graph.E)]
    return graph

def NegCycleBellmanFord(graph, src):
    V = graph.V
    E = graph.E
    dist = [1000000 for i in range(V)]
    parent = [-1 for i in range(V)]
    dist[src] = 0
    # Relax all edges V - 1 times
    for i in range(1, V):
        for j in range(E):
            u = graph.edge[j].src
            v = graph.edge[j].dest
            weight = graph.edge[j].weight
            if dist[u] != 1000000 and dist[u] + weight < dist[v]:
                dist[v] = dist[u] + weight
                parent[v] = u
    # If any edge can still be relaxed, a negative cycle exists
    C = -1
    for i in range(E):
        u = graph.edge[i].src
        v = graph.edge[i].dest
        weight = graph.edge[i].weight
        if dist[u] != 1000000 and dist[u] + weight < dist[v]:
            C = v
            break
    if C != -1:
        # Walk back V steps so C is guaranteed to lie on the cycle
        for i in range(V):
            C = parent[C]
        cycle = []
        v = C
        while True:
            cycle.append(v)
            if v == C and len(cycle) > 1:
                break
            v = parent[v]
        cycle.reverse()
        for v in cycle:
            print(v, end=" ")
        print()
    else:
        print(-1)

if __name__ == '__main__':
    V = 5
    E = 5
    graph = createGraph(V, E)
    graph.edge[0].src = 0
    graph.edge[0].dest = 1
    graph.edge[0].weight = 1
    graph.edge[1].src = 1
    graph.edge[1].dest = 2
    graph.edge[1].weight = 2
    graph.edge[2].src = 2
    graph.edge[2].dest = 3
    graph.edge[2].weight = 3
    graph.edge[3].src = 3
    graph.edge[3].dest = 4
    graph.edge[3].weight = -3
    graph.edge[4].src = 4
    graph.edge[4].dest = 1
    graph.edge[4].weight = -3
    NegCycleBellmanFord(graph, 0)
<END_TASK>
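A related check that is often simpler than recovering the cycle itself: run one extra relaxation pass after the usual V-1 rounds; any further improvement proves a negative cycle exists. A compact sketch on the same edge list as the example above:

```python
def has_negative_cycle(n, edges):
    # Start every vertex at 0 (equivalent to adding a virtual source),
    # so cycles anywhere in the graph are detected.
    dist = [0] * n
    for _ in range(n - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # One extra pass: any further improvement implies a negative cycle.
    return any(dist[u] + w < dist[v] for u, v, w in edges)

edges = [(0, 1, 1), (1, 2, 2), (2, 3, 3), (3, 4, -3), (4, 1, -3)]
print(has_negative_cycle(5, edges))  # -> True (cycle 1->2->3->4->1 weighs -1)
```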
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description:
<ASSISTANT_TASK:> Python Code: import numpy as np x = np.array([-2, -1.4, -1.1, 0, 1.2, 2.2, 3.1, 4.4, 8.3, 9.9, 10, 14, 16.2]) result = x[x >=0] <END_TASK>
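Boolean masks compose with & and |, and np.where recovers the indices instead of the values. A short sketch on the same array:

```python
import numpy as np

x = np.array([-2, -1.4, -1.1, 0, 1.2, 2.2, 3.1, 4.4, 8.3, 9.9, 10, 14, 16.2])

# Combine conditions with & / | (the parentheses are required):
in_range = x[(x >= 0) & (x < 10)]
# np.where gives the *indices* that satisfy the mask instead of the values:
idx = np.where(x >= 0)[0]
print(in_range.tolist())  # values in [0, 10)
print(idx.tolist())       # positions of the non-negative entries
```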
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: 2. Potential differences and Apparent Resistivities Step2: 3. Building Pseudosections Step3: DC pseudo-section app Step4: 4. Parametric Inversion
<ASSISTANT_TASK:> Python Code: cylinder_app() plot_layer_potentials_app() MidpointPseudoSectionWidget() DC2DPseudoWidget() DC2DfwdWidget() <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Example 2 Step2: Example 3 Step3: Example 4 Step4: Example 5 Step5: Example 6 Step6: Example 7
<ASSISTANT_TASK:> Python Code: import numpy as np import pymc as mc H = mc.Normal('H', 2.00, (0.03)**-2) h = mc.Normal('h', 0.88, (0.04)**-2) @mc.deterministic() def Q(H=H, h=h): return H-h model = mc.MCMC((H,h,Q)) model.sample(1e4, burn=100, burn_till_tuned=True) # mc.Matplot.plot(model) # mc.Matplot.plot(Q) print(Q.summary()) print("MCMC gives {0:.2f} +/- {1:.2f}, analytic gives {2} +/- {3}".format(np.mean(Q.trace()), np.std(Q.trace()), 1.12, 0.05)) d = mc.Normal('d', 123, (3)**-2) t = mc.Normal('t', 20.0, (1.2)**-2) @mc.deterministic() def v(d=d, t=t): return d/t model = mc.MCMC((d, t, v)) model.sample(1e5, burn=1000, burn_till_tuned=True) print(v.summary()) print("MCMC gives {0:.2f}, analytic gives {1}".format(np.std(v.trace()), 0.39)) T = mc.Normal('T', 0.20, (0.01)**-2) @mc.deterministic() def f(T=T): return 1/T model = mc.MCMC((T, f)) model.sample(1e4, burn=100, burn_till_tuned=True) print(f.summary()) print("MCMC gives {0:.1f} +/- {1:.1f}, analytic gives {2} +/- {3}".format(np.mean(f.trace()), np.std(f.trace()), 5.0, 0.3)) g = 9.80 t = mc.Normal('t', 0.60, (0.06)**-2) v0 = mc.Normal('v0', 4.0, (0.2)**-2) @mc.deterministic() def h(t=t, v0=v0): return v0*t - 0.5*g*t**2 model = mc.MCMC((t, v0, h)) model.sample(1e5, burn=100, burn_till_tuned=True) print(h.summary()) print("MCMC gives {0:.1f} +/- {1:.1f}, analytic gives {2} +/- {3}".format(np.mean(h.trace()), np.std(h.trace()), 0.6, 0.4)) A = mc.Normal('A', 3.6, (0.2)**-2) B = mc.Normal('B', 3.3, (0.3)**-2) @mc.deterministic() def D(A=A, B=B): return A-B model = mc.MCMC((A, B, D)) model.sample(1e5, burn=100, burn_till_tuned=True) print(D.summary()) print("MCMC gives {0:.1f} +/- {1:.1f}, analytic gives {2} +/- {3}".format(np.mean(D.trace()), np.std(D.trace()), 0.3, 0.4)) # mc.Matplot.plot(model) # https://www.lhup.edu/~dsimanek/scenario/errorman/propagat.htm a = mc.Normal('a', 30.00, (0.5)**-2) @mc.deterministic() def sina(a=a): return np.sin(np.deg2rad(a)) model = mc.MCMC((a, sina)) model.sample(1e5, burn=5000, 
burn_till_tuned=False) # mc.Matplot.plot(model) # mc.Matplot.plot(Q) print(sina.summary()) print("MCMC gives {0:.3f} +/- {1:.3f}, analytic gives {2} +/- {3}".format(np.mean(sina.trace()), np.std(sina.trace()), 0.5, 0.008)) for i in range(4): model.sample(1e5, burn=5000, burn_till_tuned=False) mc.Matplot.summary_plot(sina) X = mc.Normal('X', 38.2, (0.3)**-2) Y = mc.Normal('Y', 12.1, (0.2)**-2) @mc.deterministic() def R(X=X, Y=Y): return X*Y model = mc.MCMC((X, Y, R)) model.sample(1e5, burn=5000, burn_till_tuned=False) # mc.Matplot.plot(model) # mc.Matplot.plot(Q) print(R.summary()) print("MCMC gives {0:.3f} +/- {1:.3f}, analytic gives {2} +/- {3}".format(np.mean(R.trace()), np.std(R.trace()), 462, 12)) <END_TASK>
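The MCMC results above can be cross-checked against the standard quadrature propagation formulas: absolute errors add in quadrature for sums/differences, relative errors for products/quotients. A small sketch using the numbers from Examples 1 and 2:

```python
import math

def add_sub_sigma(*sigmas):
    # Uncertainty of a sum/difference: quadrature of absolute errors
    return math.sqrt(sum(s ** 2 for s in sigmas))

def mul_div_sigma(value, *rel):
    # Uncertainty of a product/quotient: quadrature of relative errors,
    # where each element of `rel` is an (x, sigma_x) pair
    return abs(value) * math.sqrt(sum((s / x) ** 2 for x, s in rel))

# Example 1: Q = H - h with H = 2.00 +/- 0.03 and h = 0.88 +/- 0.04
print(round(add_sub_sigma(0.03, 0.04), 3))  # -> 0.05
# Example 2: v = d / t with d = 123 +/- 3 and t = 20.0 +/- 1.2
print(round(mul_div_sigma(123 / 20.0, (123, 3), (20.0, 1.2)), 2))  # -> 0.4
```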
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: MAP FUNCTION Step2: We get lazy evaluation here, so map is not computed yet and we just get the map object back Step3: Lambda Step4: Example of list comprehension
<ASSISTANT_TASK:> Python Code: class Person: ##SCOPE OF CLASS DOWN BELOW institute="IIT" def __init__(self,name,department): self.name=name self.department=department def getName(self): return self.name p=Person("mangesh","physics") p.getName() help(map) store1=[10,12,8,3,5] store2=[5,8,12,3,6] cheapest=map(min,store1,store2) cheapest for c in cheapest: print(c) people = ['Chu. Mangesh Joshi', 'Chu. Harsh Ranjan', 'Dr. Manoj Pawar', 'Dr. Ashish Shettywar'] def split_title_and_name(person): return person.split(" ")[0]+person.split(" ")[-1] list(map(split_title_and_name,people)) my_func=lambda a,b:a*b a=my_func(2,3) a for p in people: print( (lambda person:split_title_and_name(person))(p) ) list(map(split_title_and_name, people)) == list(map( (lambda person:person.split()[0] + person.split()[-1]),people)) ##Method 1 #Taking range as 10 to avoid a lot of printing l=[] for i in range(1,10): if i%2==0: l.append(i) print(l) #Method 2 ## format is [ i for expressions [conditionals]] print([ i for i in range(1,10) if i%2==0]) ## convert this function list comprehensions: def times_tables(): lst = [] for i in range(10): for j in range (10): lst.append(i*j) return lst l=[i*j for i in range(10) for j in range(10)] <END_TASK>
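The loop-to-comprehension conversion at the end generalises: a generator expression gives the same lazy behaviour noted for map. A quick sketch showing all three forms agree:

```python
# Same times-table built three ways; all agree:
lst_loop = []
for i in range(10):
    for j in range(10):
        lst_loop.append(i * j)

lst_comp = [i * j for i in range(10) for j in range(10)]
# A generator expression computes lazily, like map -- nothing happens
# until you iterate over it:
gen = (i * j for i in range(10) for j in range(10))
print(lst_loop == lst_comp == list(gen))  # -> True
```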
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Use this to automate the process. Be careful, it can overwrite current results Step2: Now we will obtain the data from the calculated empirical variogram. Step3: Instantiating the variogram object Step4: Instantiating theoretical variogram model
<ASSISTANT_TASK:> Python Code: ls # Load Biospytial modules and etc. %matplotlib inline import sys sys.path.append('/apps/external_plugins/spystats/spystats/') sys.path.append('..') import django django.setup() import pandas as pd import matplotlib.pyplot as plt import numpy as np ## Use the ggplot style plt.style.use('ggplot') import tools from HEC_runs.fit_fia_logbiomass_logspp_GLS import prepareDataFrame,loadVariogramFromData,buildSpatialStructure, calculateGLS, initAnalysis, fitGLSRobust section = initAnalysis("/RawDataCSV/idiv_share/FIA_Plots_Biomass_11092017.csv", "/apps/external_plugins/spystats/HEC_runs/results/variogram/data_envelope.csv",-130,-60,30,40) #section = initAnalysis("/RawDataCSV/idiv_share/plotsClimateData_11092017.csv", # "/apps/external_plugins/spystats/HEC_runs/results/variogram/data_envelope.csv", # -85,-80,30,35) # IN HEC #section = initAnalysis("/home/hpc/28/escamill/csv_data/idiv/FIA_Plots_Biomass_11092017.csv","/home/hpc/28/escamill/spystats/HEC_runs/results/variogram/data_envelope.csv",-85,-80,30,35) gvg,tt = loadVariogramFromData("/apps/external_plugins/spystats/HEC_runs/results/variogram/data_envelope.csv",section) gvg.plot(refresh=False,with_envelope=True) resum,gvgn,resultspd,results = fitGLSRobust(section,gvg,num_iterations=50,distance_threshold=1000000) variogram_data_path = "/apps/external_plugins/spystats/HEC_runs/results/variogram/data_envelope.csv" thrs_dist = 100000 emp_var_log_log = pd.read_csv(variogram_data_path) gvg = tools.Variogram(section,'logBiomass',using_distance_threshold=thrs_dist) gvg.envelope = emp_var_log_log gvg.empirical = emp_var_log_log.variogram gvg.lags = emp_var_log_log.lags #emp_var_log_log = emp_var_log_log.dropna() #vdata = gvg.envelope.dropna() matern_model = tools.MaternVariogram(sill=0.34,range_a=100000,nugget=0.33,kappa=4) whittle_model = tools.WhittleVariogram(sill=0.34,range_a=100000,nugget=0.0,alpha=3) exp_model = tools.ExponentialVariogram(sill=0.34,range_a=100000,nugget=0.33) gaussian_model 
= tools.GaussianVariogram(sill=0.34,range_a=100000,nugget=0.33) spherical_model = tools.SphericalVariogram(sill=0.34,range_a=100000,nugget=0.33) gvg.model = whittle_model #gvg.model = matern_model #models = map(lambda model : gvg.fitVariogramModel(model),[matern_model,whittle_model,exp_model,gaussian_model,spherical_model]) gvg.fitVariogramModel(whittle_model) import numpy as np xx = np.linspace(0,1000000,1000) gvg.plot(refresh=False,with_envelope=True) plt.plot(xx,whittle_model.f(xx),lw=2.0,c='k') plt.title("Empirical Variogram with fitted Whittle Model") def randomSelection(n,p): idxs = np.random.choice(n,p,replace=False) random_sample = new_data.iloc[idxs] return random_sample ################# n = len(new_data) p = 3000 # The amount of samples taken (let's do it without replacement) random_sample = randomSelection(n,100) <END_TASK>
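For reference, the textbook Gaussian variogram model being fitted above can be written in a few lines. The parameter names mirror the call in the notebook; spystats' exact parameterisation may differ:

```python
import math

def gaussian_variogram(h, sill=0.34, range_a=100000.0, nugget=0.33):
    # Semivariance rises from the nugget at h=0 toward nugget + sill
    return nugget + sill * (1.0 - math.exp(-(h / range_a) ** 2))

print(round(gaussian_variogram(0.0), 2))  # -> 0.33 (the nugget)
print(round(gaussian_variogram(1e9), 2))  # -> 0.67 (nugget + sill)
```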
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Load HTML Code Step2: Using xpath to find nodes in a document Step3: To read the text inside that tag you can use the text variable. Step4: Another way is read the text is to use the text() function in xpath. Step5: Working with multiple items Step6: We can use text() function with multiple items. Step7: Tag selector without full path Step8: Selecting one result Step9: Attributes selector Step10: contains() function Step11: Returning attributes Step12: Real world example
<ASSISTANT_TASK:> Python Code: import requests from lxml import html %%HTML <html> <body> <h1>Favorite Python Librarires</h1> <ul> <li>Numpy</li> <li>Pandas</li> <li>requests</li> </ul> </body> </html> html_code = In[2] html_code = html_code[42:-2].replace("\\n","\n") print(html_code) doc = html.fromstring(html_code) title = doc.xpath("/html/body/h1")[0] title title.text title = doc.xpath("/html/body/h1/text()")[0] title item_list = doc.xpath("/html/body/ul/li") item_list doc = html.fromstring(html_code) item_list = doc.xpath("/html/body/ul/li/text()") item_list doc = html.fromstring(html_code) item_list = doc.xpath("//li/text()") item_list doc = html.fromstring(html_code) item_list = doc.xpath("/html/body/ul/li[1]/text()") item_list %%HTML <html> <body> <h1 class="text-muted">Favorite Python Librarires</h1> <ul class="nav nav-pills nav-stacked"> <li role="presentation"><a href="http://www.numpy.org/">Numpy</a></li> <li role="presentation"><a href="http://pandas.pydata.org/">Pandas</a></li> <li role="presentation"><a href="http://python-requests.org/">requests</a></li> </ul> <h1 class="text-success">Favorite JS Librarires</h1> <ul class="nav nav-tabs"> <li role="presentation"><a href="http://getbootstrap.com/">Bootstrap</a></li> <li role="presentation"><a href="https://jquery.com/">jQuery</a></li> <li role="presentation"><a href="http://d3js.org/">d3.js</a></li> </ul> </html> html_code = In[11] html_code = html_code[42:-2].replace("\\n","\n") print(html_code) doc = html.fromstring(html_code) title = doc.xpath("/html/body/h1[@class='text-muted']/text()")[0] title item_list = doc.xpath("/html/body/ul[contains(@class,'nav-stacked')]/li/a/text()") item_list item_list = doc.xpath("/html/body/ul[contains(@class,'nav-stacked')]/li/a/@href") item_list response = requests.get("http://www.wikipedia.org") doc = html.fromstring(response.content, parser=html.HTMLParser(encoding="utf-8")) lang_list = doc.xpath("//div[@class='langlist langlist-large hlist'][1]/ul/li/a/text()") 
lang_list <END_TASK>
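If lxml is unavailable, the standard library's ElementTree supports a useful subset of XPath (no text() or contains(), but tag paths and attributes work). A sketch on a trimmed version of the example HTML:

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<html><body><ul>"
    "<li><a href='http://www.numpy.org/'>Numpy</a></li>"
    "<li><a href='http://pandas.pydata.org/'>Pandas</a></li>"
    "</ul></body></html>")

names = [a.text for a in doc.findall(".//li/a")]
hrefs = [a.get("href") for a in doc.findall(".//li/a")]
print(names)  # -> ['Numpy', 'Pandas']
```

Note ElementTree requires well-formed XML, so it suits controlled snippets rather than arbitrary web pages, which is where lxml's HTML parser earns its keep.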
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: load data and convert it to graphs Step2: 2 Vectorization Step3: extract features and build data matrix Step4: 3 Modelling
<ASSISTANT_TASK:> Python Code: from eden.util import load_target y = load_target( 'http://www.bioinf.uni-freiburg.de/~costa/bursi.target' ) from eden.converter.graph.gspan import gspan_to_eden graphs = gspan_to_eden( 'http://www.bioinf.uni-freiburg.de/~costa/bursi.gspan' ) from eden.graph import Vectorizer vectorizer = Vectorizer( r=2,d=0 ) %%time X = vectorizer.transform( graphs ) print 'Instances: %d Features: %d with an avg of %d features per instance' % (X.shape[0], X.shape[1], X.getnnz()/X.shape[0]) %%time #induce a predictive model from sklearn.linear_model import SGDClassifier predictor = SGDClassifier(average=True, class_weight='auto', shuffle=True, n_jobs=-1) from sklearn import cross_validation scores = cross_validation.cross_val_score(predictor, X, y, cv=10, scoring='roc_auc') import numpy as np print('AUC ROC: %.4f +- %.4f' % (np.mean(scores),np.std(scores))) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: You can navigate the network structure as you would in C++. You can use similar methods to create more complex network architectures than is possible with make_net. Step2: This cell generally illustrates how to invoke the CLSTM library from Python Step3: Here is a training loop that generates a delayed-by-one target from a random input sequence and trains the network to learn this task.
<ASSISTANT_TASK:> Python Code: net = clstm.make_net_init("lstm1","ninput=1:nhidden=4:noutput=2") print net net.setLearningRate(1e-4,0.9) print clstm.network_info(net) print net.sub.size() print net.sub[0] print net.sub[0].name N = 20 xs = array(randn(N,1,1)<0.2, 'f') net.inputs.aset(xs) net.forward() N = 20 test = array(rand(N)<0.3, 'f') plot(test, '--', c="black") ntrain = 30000 for i in range(ntrain): xs = array(rand(N)<0.3, 'f') ys = roll(xs, 1) ys[0] = 0 ys = array([1-ys, ys],'f').T.copy() net.inputs.aset(xs.reshape(N,1,1)) net.forward() net.d_outputs.aset(ys.reshape(N,2,1)-net.outputs.array()) net.backward() net.update() if i%1000==0: net.inputs.aset(test.reshape(N,1,1)) net.forward() plot(net.outputs.array()[:,1,0],c=cm.jet(i*1.0/ntrain)) <END_TASK>
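The target construction in the training loop -- shift the input right by one and zero the first step -- is just np.roll. A minimal sketch:

```python
import numpy as np

# The delayed-by-one target used in the loop above: shift right, zero step 0
xs = np.array([1.0, 0.0, 0.0, 1.0, 1.0])
ys = np.roll(xs, 1)
ys[0] = 0  # nothing precedes the first time step
print(ys.tolist())  # -> [0.0, 1.0, 0.0, 0.0, 1.0]
```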
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Document Authors Step2: Document Contributors Step3: Document Publication Step4: Document Table of Contents Step5: 1.2. Model Name Step6: 2. Key Properties --&gt; Variables Step7: 3. Key Properties --&gt; Seawater Properties Step8: 3.2. Ocean Freezing Point Value Step9: 4. Key Properties --&gt; Resolution Step10: 4.2. Canonical Horizontal Resolution Step11: 4.3. Number Of Horizontal Gridpoints Step12: 5. Key Properties --&gt; Tuning Applied Step13: 5.2. Target Step14: 5.3. Simulations Step15: 5.4. Metrics Used Step16: 5.5. Variables Step17: 6. Key Properties --&gt; Key Parameter Values Step18: 6.2. Additional Parameters Step19: 7. Key Properties --&gt; Assumptions Step20: 7.2. On Diagnostic Variables Step21: 7.3. Missing Processes Step22: 8. Key Properties --&gt; Conservation Step23: 8.2. Properties Step24: 8.3. Budget Step25: 8.4. Was Flux Correction Used Step26: 8.5. Corrected Conserved Prognostic Variables Step27: 9. Grid --&gt; Discretisation --&gt; Horizontal Step28: 9.2. Grid Type Step29: 9.3. Scheme Step30: 9.4. Thermodynamics Time Step Step31: 9.5. Dynamics Time Step Step32: 9.6. Additional Details Step33: 10. Grid --&gt; Discretisation --&gt; Vertical Step34: 10.2. Number Of Layers Step35: 10.3. Additional Details Step36: 11. Grid --&gt; Seaice Categories Step37: 11.2. Number Of Categories Step38: 11.3. Category Limits Step39: 11.4. Ice Thickness Distribution Scheme Step40: 11.5. Other Step41: 12. Grid --&gt; Snow On Seaice Step42: 12.2. Number Of Snow Levels Step43: 12.3. Snow Fraction Step44: 12.4. Additional Details Step45: 13. Dynamics Step46: 13.2. Transport In Thickness Space Step47: 13.3. Ice Strength Formulation Step48: 13.4. Redistribution Step49: 13.5. Rheology Step50: 14. Thermodynamics --&gt; Energy Step51: 14.2. Thermal Conductivity Step52: 14.3. 
Heat Diffusion Step53: 14.4. Basal Heat Flux Step54: 14.5. Fixed Salinity Value Step55: 14.6. Heat Content Of Precipitation Step56: 14.7. Precipitation Effects On Salinity Step57: 15. Thermodynamics --&gt; Mass Step58: 15.2. Ice Vertical Growth And Melt Step59: 15.3. Ice Lateral Melting Step60: 15.4. Ice Surface Sublimation Step61: 15.5. Frazil Ice Step62: 16. Thermodynamics --&gt; Salt Step63: 16.2. Sea Ice Salinity Thermal Impacts Step64: 17. Thermodynamics --&gt; Salt --&gt; Mass Transport Step65: 17.2. Constant Salinity Value Step66: 17.3. Additional Details Step67: 18. Thermodynamics --&gt; Salt --&gt; Thermodynamics Step68: 18.2. Constant Salinity Value Step69: 18.3. Additional Details Step70: 19. Thermodynamics --&gt; Ice Thickness Distribution Step71: 20. Thermodynamics --&gt; Ice Floe Size Distribution Step72: 20.2. Additional Details Step73: 21. Thermodynamics --&gt; Melt Ponds Step74: 21.2. Formulation Step75: 21.3. Impacts Step76: 22. Thermodynamics --&gt; Snow Processes Step77: 22.2. Snow Aging Scheme Step78: 22.3. Has Snow Ice Formation Step79: 22.4. Snow Ice Formation Scheme Step80: 22.5. Redistribution Step81: 22.6. Heat Diffusion Step82: 23. Radiative Processes Step83: 23.2. Ice Radiation Transmission
<ASSISTANT_TASK:> Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'csir-csiro', 'vresm-1-0', 'seaice') # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.model.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.model.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.variables.prognostic') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Sea ice temperature" # "Sea ice concentration" # "Sea ice thickness" # "Sea ice volume per grid cell area" # "Sea ice u-velocity" # "Sea ice v-velocity" # "Sea ice enthalpy" # "Internal ice stress" # "Salinity" # "Snow temperature" # "Snow depth" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "TEOS-10" # "Constant" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.resolution.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Ice strength (P*) in units of N m{-2}" # "Snow conductivity (ks) in units of W m{-1} K{-1} " # "Minimum thickness of ice created in leads (h0) in units of m" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.key_properties.assumptions.description') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.conservation.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.conservation.properties') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Energy" # "Mass" # "Salt" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.conservation.budget') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Ocean grid" # "Atmosphere Grid" # "Own Grid" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Structured grid" # "Unstructured grid" # "Adaptive grid" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Finite differences" # "Finite elements" # "Finite volumes" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Zero-layer" # "Two-layers" # "Multi-layers" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.seaice_categories.other') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Incremental Re-mapping" # "Prather" # "Eulerian" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Incremental Re-mapping" # "Prather" # "Eulerian" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Hibler 1979" # "Rothrock 1975" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.redistribution') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Rafting" # "Ridging" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.rheology') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Free-drift" # "Mohr-Coloumb" # "Visco-plastic" # "Elastic-visco-plastic" # "Elastic-anisotropic-plastic" # "Granular" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Pure ice latent heat (Semtner 0-layer)" # "Pure ice latent and sensible heat" # "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)" # "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Pure ice" # "Saline ice" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Conduction fluxes" # "Conduction and radiation heat fluxes" # "Conduction, radiation and latent heat transport" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Heat Reservoir" # "Thermal Fixed Salinity" # "Thermal Varying Salinity" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Floe-size dependent (Bitz et al 2001)" # "Virtual thin ice melting (for single-category)" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Prescribed salinity profile" # "Prognostic salinity profile" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Prescribed salinity profile" # "Prognostic salinity profile" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Explicit" # "Virtual (enhancement of thermal conductivity, thin ice melting)" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Explicit" # "Parameterised" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Flocco and Feltham (2010)" # "Level-ice melt ponds" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Albedo" # "Freshwater" # "Heat" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging') # PROPERTY VALUE(S): # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Single-layered heat diffusion" # "Multi-layered heat diffusion" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Delta-Eddington" # "Parameterized" # "Multi-band albedo" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Delta-Eddington" # "Exponential attenuation" # "Ice radiation transmission per category" # "Other: [Please specify]" # TODO - please enter value(s) <END_TASK>
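Each property above limits DOC.set_value to the options listed under "# Valid Choices". The real pyesdoc API is not shown in this notebook, but the validation it presumably performs can be sketched in plain Python (a hypothetical helper, not the actual library call; the choice strings are copied from the rheology property above):

```python
SEA_ICE_RHEOLOGY_CHOICES = [
    "Free-drift", "Mohr-Coloumb", "Visco-plastic", "Elastic-visco-plastic",
    "Elastic-anisotropic-plastic", "Granular", "Other: [Please specify]",
]

def set_choice(value, valid_choices):
    # Reject anything outside the controlled vocabulary, as the
    # "# Valid Choices" comments imply the real DOC.set_value would.
    if value not in valid_choices:
        raise ValueError(f"{value!r} is not one of {valid_choices}")
    return value

rheology = set_choice("Elastic-visco-plastic", SEA_ICE_RHEOLOGY_CHOICES)
```

Centralizing the check this way means a stray spelling such as "Viscoplastic" fails loudly instead of silently producing an invalid CMIP6 document.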
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Extrapolation of $f(0) = a$ to the ghost point yields (see ghost4thOrder for the calculation) Step2: Which can be rewritten to Step3: Furthermore, a 4th-order FD of $\partial_z f\big|_0 = b$ reads Step4: Which can be rewritten to Step5: Thus
<ASSISTANT_TASK:> Python Code: from IPython.display import display from sympy import init_printing from sympy import symbols, as_finite_diff, solve, latex from sympy import Function, Eq fg, f0, f1, f2 = symbols('f_g, f_0, f_1, f_2') z, h = symbols('z, h') a, b = symbols('a, b') f = Function('f') init_printing() extraPolate = Eq(fg, 16*a/5 - 3*f0 + f1 - f2/5) display(extraPolate) eq1 = Eq(0, extraPolate.rhs - extraPolate.lhs) display(eq1) deriv = as_finite_diff(f(z).diff(z), [z-h/2, z+h/2, z+3*h/2, z+5*h/2]) deriv = Eq(b ,deriv.subs([(f(z-h/2), fg),\ (f(z+h/2), f0),\ (f(z+3*h/2), f1),\ (f(z+5*h/2), f2),\ ]).together()) display(deriv) eq2 = Eq(0, deriv.rhs - deriv.lhs) display(eq2) full = Eq(eq1.rhs, eq2.rhs) display(full) fullSolvedForFg = Eq(fg, solve(full, fg)[0].collect(symbols('f_0, f_1, f_2, h'), exact=True).simplify()) display(fullSolvedForFg) print(fullSolvedForFg) print(latex(fullSolvedForFg)) <END_TASK>
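As a numeric cross-check of the stencil used above: as_finite_diff (deprecated in later SymPy releases in favor of Function.as_finite_difference) builds a one-sided first-derivative stencil on the staggered points z - h/2, z + h/2, z + 3h/2, z + 5h/2. Requiring exactness on polynomials up to degree 3 fixes the weights, and solving that small Vandermonde system in exact rational arithmetic (standard library only) gives (-23, 21, 3, -1)/24; the -23/24 coefficient multiplying the ghost value f_g should match the deriv expression displayed above:

```python
from fractions import Fraction

def fd_weights(offsets, deriv=1):
    # Solve sum_i w_i * s_i**k = deriv! * [k == deriv] for k = 0..n-1
    # by Gaussian elimination in exact rational arithmetic.
    n = len(offsets)
    A = [[Fraction(s) ** k for s in offsets] for k in range(n)]
    b = [Fraction(0)] * n
    fact = 1
    for k in range(1, deriv + 1):
        fact *= k
    b[deriv] = Fraction(fact)
    # Forward elimination with pivoting.
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back substitution.
    w = [Fraction(0)] * n
    for r in range(n - 1, -1, -1):
        s = sum(A[r][c] * w[c] for c in range(r + 1, n))
        w[r] = (b[r] - s) / A[r][r]
    return w

# Offsets (z - z0)/h for the staggered ghost-point stencil above.
offsets = [Fraction(-1, 2), Fraction(1, 2), Fraction(3, 2), Fraction(5, 2)]
weights = fd_weights(offsets)  # weights for h * f'(z0)
```

fd_weights is a generic Fornberg-style solver sketch; in SymPy itself, finite_diff_weights does the same job.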
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: This data is from the Council Grove gas reservoir in Southwest Kansas. The Panoma Council Grove Field is predominantly a carbonate gas reservoir encompassing 2700 square miles in Southwestern Kansas. This dataset comes from nine wells (4149 examples), each example consisting of seven predictor variables and a rock facies (class) label; the validation (test) data (830 examples from two wells) have the same seven predictor variables in the feature vector. Facies are based on examination of cores from nine wells taken vertically at half-foot intervals. Predictor variables include five wireline log measurements and two geologic constraining variables that are derived from geologic knowledge. These are essentially continuous variables sampled at a half-foot rate. Step2: This is a quick view of the statistical distribution of the input variables. Looking at the count values, there are 3232 feature vectors in the training set. Step3: Let's take a look at the data from individual wells in a more familiar log plot form. We will create plots for the five well log variables, as well as a log for the facies labels. The plots are based on those described in Alessandro Amato del Monte's excellent tutorial. Step4: Placing the log plotting code in a function will make it easy to plot the logs from multiple wells, and it can be reused later to view the results when we apply the facies classification model to other wells. The function was written to take a list of colors and facies labels as parameters. Step5: In addition to individual wells, we can look at how the various facies are represented by the entire training set. Let's plot a histogram of the number of training examples for each facies class. 
Step6: This shows the distribution of examples by facies for the examples in the training set. Dolomite (facies 7) has the fewest, with 81 examples. Depending on the performance of the classifier we are going to train, we may consider getting more examples of these facies. Step7: "Upsampling" the dataset Step8: Let's check if the facies distributions still look right Step9: Looks good! We'll now use this upsampled data as our training data Step10: Conditioning the data set Step11: Scikit includes a preprocessing module that can 'standardize' the data (giving each variable zero mean and unit variance, also called whitening). Many machine learning algorithms assume features will be standard normally distributed data (i.e. Gaussian with zero mean and unit variance). Step12: Scikit also includes a handy function to randomly split the training data into training and test sets. The test set contains a small subset of feature vectors that are not used to train the classifier. Because we know the true facies labels for these examples, we can compare the results of the classifier to the actual facies and determine the accuracy of the model. Let's use 10% of the data for the test set. Step13: Training the SVM classifier Step14: Now we can train the classifier using the training set we created above. Step15: The rows of the confusion matrix correspond to the actual facies labels. The columns correspond to the labels assigned by the classifier. For example, consider the first row. For the feature vectors in the test set that actually have label SS, 23 were correctly identified as SS, 21 were classified as CSiS and 2 were classified as FSiS. Step16: As noted above, the boundaries between the facies classes are not all sharp, and some of them blend into one another. The error within these 'adjacent facies' can also be calculated. We define an array to represent the facies adjacent to each other. For facies label i, adjacent_facies[i] is an array of the adjacent facies labels. 
Step17: SVM parameter selection Step18: Let's plot the GridSearchCV results in a heatmap Step19: C = 1000 and gamma = 1 seem to give the best F1 score. Let's try using these against the test dataset Step20: DecisionTree classifier Step21: XGBoost classifier Step22: Naive Bayes classifier Step23: AdaBoost classifier Step24: Random Forest classifier Step25: Nearest Neighbours classifier Step26: It looks like several different parameter combinations give the same score. We'll pick the one using the most neighbours. Step27: TensorFlow DNN Step28: Ensemble classifier Step29: Use a homemade majority vote so the ensemble can also include the TensorFlow model Step30: Average test F1 score with leave-one-well-out Step31: Issue: I'm not sure why the leave-one-well-out scores are so much worse than the randomised train/test split (most likely variation between wells due to a combination of geology and well-log quality). More work is needed in this area.
<ASSISTANT_TASK:> Python Code: %matplotlib inline import pandas as pd import numpy as np import matplotlib as mpl import matplotlib.pyplot as plt import matplotlib.colors as colors from mpl_toolkits.axes_grid1 import make_axes_locatable from pandas import set_option set_option("display.max_rows", 10) pd.options.mode.chained_assignment = None filename = 'facies_vectors.csv' training_data = pd.read_csv(filename) training_data training_data['Well Name'] = training_data['Well Name'].astype('category') training_data['Formation'] = training_data['Formation'].astype('category') training_data['Well Name'].unique() # Drop the rows with missing PEF values training_data.dropna(inplace=True) training_data.describe() # 1=sandstone 2=c_siltstone 3=f_siltstone # 4=marine_silt_shale 5=mudstone 6=wackestone 7=dolomite # 8=packstone 9=bafflestone facies_colors = ['#F4D03F', '#F5B041','#DC7633','#6E2C00', '#1B4F72','#2E86C1', '#AED6F1', '#A569BD', '#196F3D'] facies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS', 'WS', 'D','PS', 'BS'] #facies_color_map is a dictionary that maps facies labels #to their respective colors facies_color_map = {} for ind, label in enumerate(facies_labels): facies_color_map[label] = facies_colors[ind] def label_facies(row, labels): return labels[ row['Facies'] -1] training_data.loc[:,'FaciesLabels'] = training_data.apply(lambda row: label_facies(row, facies_labels), axis=1) def make_facies_log_plot(logs, facies_colors): #make sure logs are sorted by depth logs = logs.sort_values(by='Depth') cmap_facies = colors.ListedColormap( facies_colors[0:len(facies_colors)], 'indexed') ztop=logs.Depth.min(); zbot=logs.Depth.max() cluster=np.repeat(np.expand_dims(logs['Facies'].values,1), 100, 1) f, ax = plt.subplots(nrows=1, ncols=6, figsize=(8, 12)) ax[0].plot(logs.GR, logs.Depth, '-g') ax[1].plot(logs.ILD_log10, logs.Depth, '-') ax[2].plot(logs.DeltaPHI, logs.Depth, '-', color='0.5') ax[3].plot(logs.PHIND, logs.Depth, '-', color='r') ax[4].plot(logs.PE, logs.Depth, '-', 
color='black') im=ax[5].imshow(cluster, interpolation='none', aspect='auto', cmap=cmap_facies,vmin=1,vmax=9) divider = make_axes_locatable(ax[5]) cax = divider.append_axes("right", size="20%", pad=0.05) cbar=plt.colorbar(im, cax=cax) cbar.set_label((17*' ').join([' SS ', 'CSiS', 'FSiS', 'SiSh', ' MS ', ' WS ', ' D ', ' PS ', ' BS '])) cbar.set_ticks(range(0,1)); cbar.set_ticklabels('') for i in range(len(ax)-1): ax[i].set_ylim(ztop,zbot) ax[i].invert_yaxis() ax[i].grid() ax[i].locator_params(axis='x', nbins=3) ax[0].set_xlabel("GR") ax[0].set_xlim(logs.GR.min(),logs.GR.max()) ax[1].set_xlabel("ILD_log10") ax[1].set_xlim(logs.ILD_log10.min(),logs.ILD_log10.max()) ax[2].set_xlabel("DeltaPHI") ax[2].set_xlim(logs.DeltaPHI.min(),logs.DeltaPHI.max()) ax[3].set_xlabel("PHIND") ax[3].set_xlim(logs.PHIND.min(),logs.PHIND.max()) ax[4].set_xlabel("PE") ax[4].set_xlim(logs.PE.min(),logs.PE.max()) ax[5].set_xlabel('Facies') ax[1].set_yticklabels([]); ax[2].set_yticklabels([]); ax[3].set_yticklabels([]) ax[4].set_yticklabels([]); ax[5].set_yticklabels([]) ax[5].set_xticklabels([]) f.suptitle('Well: %s'%logs.iloc[0]['Well Name'], fontsize=14,y=0.94) make_facies_log_plot(training_data[training_data['Well Name'] == 'SHRIMPLIN'], facies_colors) #count the number of unique entries for each facies, sort them by #facies number (instead of by number of entries) facies_counts = training_data['Facies'].value_counts().sort_index() #use facies labels to index each count facies_counts.index = facies_labels facies_counts.plot(kind='bar',color=facies_colors, title='Distribution of Training Data by Facies') facies_counts #save plot display settings to change back to when done plotting with seaborn inline_rc = dict(mpl.rcParams) import seaborn as sns sns.set() sns.pairplot(training_data.drop(['Well Name','Facies','Formation','Depth','NM_M','RELPOS'],axis=1), hue='FaciesLabels', palette=facies_color_map, hue_order=list(reversed(facies_labels))) #switch back to default matplotlib plot style 
mpl.rcParams.update(inline_rc) upsampled_data = pd.DataFrame() for well in training_data['Well Name'].unique(): df = training_data[training_data['Well Name'] == well] df.index = np.arange(0, 5*len(df), 5) upsampled_df = pd.DataFrame(index=np.arange(0, 5*len(df))) upsampled_df = upsampled_df.join(df) upsampled_df.interpolate(method='cubic', limit=4, inplace=True) upsampled_df.fillna(method="pad", limit=4, inplace=True) upsampled_df.drop_duplicates(inplace=True) if len(upsampled_data) == 0: upsampled_data = upsampled_df else: upsampled_data = upsampled_data.append(upsampled_df, ignore_index=True) upsampled_data["Facies"] = upsampled_data["Facies"].round() upsampled_data["Facies"] = upsampled_data["Facies"].astype(int) upsampled_data["NM_M"] = upsampled_data["NM_M"].round() upsampled_data["NM_M"] = upsampled_data["NM_M"].astype(int) # Sometimes a small number of the facies are labelled as 0 or 10 - these need to be removed upsampled_data = upsampled_data[upsampled_data.Facies != 0] upsampled_data = upsampled_data[upsampled_data.Facies != 10] upsampled_data.loc[:,'FaciesLabels'] = upsampled_data.apply(lambda row: label_facies(row, facies_labels), axis=1) upsampled_data upsampled_data.describe() facies_counts = upsampled_data['Facies'].value_counts().sort_index() facies_counts.index = facies_labels facies_counts.plot(kind='bar',color=facies_colors, title='Distribution of Training Data by Facies') facies_counts training_data = upsampled_data correct_facies_labels = training_data['Facies'].values well_names = training_data['Well Name'] feature_vectors = training_data.drop(['Formation', 'Well Name', 'Depth','Facies','FaciesLabels'], axis=1) feature_vectors.describe() from sklearn import preprocessing scaler = preprocessing.StandardScaler().fit(feature_vectors) scaled_features = scaler.transform(feature_vectors) feature_vectors from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(scaled_features, correct_facies_labels, 
test_size=0.1, random_state=48) from sklearn.svm import SVC clf = SVC() clf.fit(X_train, y_train) predicted_labels = clf.predict(X_test) from sklearn.metrics import confusion_matrix, f1_score, accuracy_score from classification_utilities import display_cm, display_adj_cm conf = confusion_matrix(y_test, predicted_labels) display_cm(conf, facies_labels, display_metrics=True, hide_zeros=True) def accuracy(conf): total_correct = 0. nb_classes = conf.shape[0] for i in np.arange(0,nb_classes): total_correct += conf[i][i] acc = total_correct/sum(sum(conf)) return acc adjacent_facies = np.array([[1], [0,2], [1], [4], [3,5], [4,6,7], [5,7], [5,6,8], [6,7]]) def accuracy_adjacent(conf, adjacent_facies): nb_classes = conf.shape[0] total_correct = 0. for i in np.arange(0,nb_classes): total_correct += conf[i][i] for j in adjacent_facies[i]: total_correct += conf[i][j] return total_correct / sum(sum(conf)) print('Facies classification accuracy = %f' % accuracy(conf)) print('Adjacent facies classification accuracy = %f' % accuracy_adjacent(conf, adjacent_facies)) from sklearn.model_selection import GridSearchCV parameters = {'C': [.01, 1, 5, 10, 20, 50, 100, 1000, 5000, 10000], 'gamma': [0.0001, 0.001, 0.01, 0.1, 1, 10], 'kernel': ['rbf']} # This could be extended to the linear kernel but it takes a long time svr = SVC() clf = GridSearchCV(svr, parameters, n_jobs=-1, verbose=3, scoring="f1_micro") clf.fit(X_train, y_train) cv_results = {"C": clf.cv_results_["param_C"], "gamma": clf.cv_results_["param_gamma"], "Score": clf.cv_results_['mean_test_score']} cv_results = pd.DataFrame(cv_results) cv_results = cv_results[cv_results.columns].astype(float) cv_results = cv_results.pivot("C", "gamma", "Score") plt.figure(figsize=(10, 8)) sns.heatmap(cv_results, annot=True, square=True, cmap="YlGnBu", fmt='.3g') plt.title('F1 Score'); clf_svm = SVC(C=1000, gamma=1) clf_svm.fit(X_train, y_train) predicted_labels = clf_svm.predict(X_test) conf = confusion_matrix(y_test, predicted_labels) 
display_cm(conf, facies_labels, display_metrics=True, hide_zeros=True) from sklearn import tree parameters = {'max_depth': np.arange(2, 35)} clf_dt = tree.DecisionTreeClassifier() clf = GridSearchCV(clf_dt, parameters, n_jobs=-1, verbose=3, scoring="f1_micro") clf.fit(X_train, y_train) pd.DataFrame(clf.cv_results_).sort_values(by="rank_test_score").head() clf_dt = tree.DecisionTreeClassifier(max_depth=32) clf_dt.fit(X_train, y_train) predicted_labels = clf_dt.predict(X_test) conf = confusion_matrix(y_test, predicted_labels) display_cm(conf, facies_labels, display_metrics=True, hide_zeros=True) import xgboost as xgb clf_xgb = xgb.XGBClassifier() clf_xgb.fit(X_train, y_train) predicted_labels = clf_xgb.predict(X_test) conf = confusion_matrix(y_test, predicted_labels) display_cm(conf, facies_labels, display_metrics=True, hide_zeros=True) from sklearn.naive_bayes import GaussianNB clf_nb = GaussianNB() clf_nb.fit(X_train, y_train) predicted_labels = clf_nb.predict(X_test) conf = confusion_matrix(y_test, predicted_labels) display_cm(conf, facies_labels, display_metrics=True, hide_zeros=True) from sklearn.ensemble import AdaBoostClassifier clf_ab = AdaBoostClassifier(n_estimators=100) clf_ab.fit(X_train, y_train) predicted_labels = clf_ab.predict(X_test) conf = confusion_matrix(y_test, predicted_labels) display_cm(conf, facies_labels, display_metrics=True, hide_zeros=True) from sklearn.ensemble import RandomForestClassifier clf_rf = RandomForestClassifier(n_estimators=10, max_depth=None, min_samples_split=2, random_state=0) clf_rf.fit(X_train, y_train) predicted_labels = clf_rf.predict(X_test) conf = confusion_matrix(y_test, predicted_labels) display_cm(conf, facies_labels, display_metrics=True, hide_zeros=True) from sklearn import neighbors parameters = {'n_neighbors': np.arange(1, 25), 'weights': ['uniform', 'distance']} knn = neighbors.KNeighborsClassifier() clf = GridSearchCV(knn, parameters, n_jobs=-1, verbose=3, scoring="f1_micro") clf.fit(X_train, y_train) 
pd.DataFrame(clf.cv_results_).sort_values(by="rank_test_score").head(10) clf_knn = neighbors.KNeighborsClassifier(1, weights="distance") clf_knn.fit(X_train, y_train) predicted_labels = clf_knn.predict(X_test) conf = confusion_matrix(y_test, predicted_labels) display_cm(conf, facies_labels, display_metrics=True, hide_zeros=True) import tensorflow as tf from tensorflow.contrib import learn from tensorflow.contrib import layers feature_columns = learn.infer_real_valued_columns_from_input(X_train) clf_dnn = learn.DNNClassifier(feature_columns=feature_columns, hidden_units=[200, 400, 200], n_classes=10) clf_dnn.fit(X_train, y_train, steps=5000) predicted_labels = list(clf_dnn.predict(X_test, as_iterable=True)) conf = confusion_matrix(y_test, predicted_labels) display_cm(conf, facies_labels, display_metrics=True, hide_zeros=True) from sklearn.ensemble import VotingClassifier eclf = VotingClassifier(estimators=[('SVM', clf_svm), ('DecisionTree', clf_dt), ('KNN', clf_knn), ('RandomForest', clf_rf), ('XGBoost', clf_xgb) ], voting='hard') eclf.fit(X_train, y_train) predicted_labels = eclf.predict(X_test) conf = confusion_matrix(y_test, predicted_labels) display_cm(conf, facies_labels, display_metrics=True, hide_zeros=True) display_adj_cm(conf, facies_labels, adjacent_facies, display_metrics=True, hide_zeros=True) # classifiers = { # "SVM": SVC(C=1000, gamma=1), # "DecisionTree": tree.DecisionTreeClassifier(max_depth=32), # "KNN": neighbors.KNeighborsClassifier(1, weights="distance"), # # "DNN": learn.DNNClassifier(feature_columns=feature_columns, hidden_units=[200, 400, 200], n_classes=10), # "RandomForest": RandomForestClassifier(n_estimators=10, max_depth=None, min_samples_split=2, random_state=0), # "XGBoost": xgb.XGBClassifier() # } # def fit_and_predict(X_train, y_train, X_test, classifiers): # predicted_values = {} # for key, classifier in classifiers.items(): # if key == "DNN": # classifier.fit(X_train, y_train, steps=5000) # list(classifier.predict(X_test, 
as_iterable=True)) # else: # classifier.fit(X_train, y_train) # predicted_values[key] = classifier.predict(X_test) # return pd.DataFrame(predicted_values) # predicted_values = fit_and_predict(X_train, y_train, X_test, classifiers) # majority_vote = predicted_values.mode(axis=1)[0] # conf = confusion_matrix(y_test, majority_vote.fillna(2)) # display_cm(conf, facies_labels, display_metrics=True, hide_zeros=True) # display_adj_cm(conf, facies_labels, adjacent_facies, display_metrics=True, hide_zeros=True) from sklearn.model_selection import LeaveOneGroupOut from sklearn.metrics import f1_score, classification_report f1_eclf = [] logo = LeaveOneGroupOut() for train, test in logo.split(scaled_features, correct_facies_labels, groups=well_names): eclf.fit(scaled_features[train], correct_facies_labels[train]) pred = eclf.predict(scaled_features[test]) sc = f1_score(correct_facies_labels[test], pred, labels=np.arange(10), average='micro') well_name = well_names[test[0]] print("{} {:.3f}".format(well_name, sc)) f1_eclf.append(sc) # conf = confusion_matrix(correct_facies_labels[test], pred) # display_cm(conf, facies_labels, display_metrics=True, hide_zeros=True) # print("") print("Average leave-one-well-out F1 Score: %6f" % (sum(f1_eclf)/(1.0*(len(f1_eclf))))) #Load testing data and standardise test_data = pd.read_csv('../validation_data_nofacies.csv') test_features = test_data.drop(['Well Name', 'Depth', "Formation"], axis=1) scaled_test_features = scaler.transform(test_features) eclf.fit(scaled_features, correct_facies_labels) predicted_test_labels = eclf.predict(scaled_test_features) # Save predicted labels test_data['Facies'] = predicted_test_labels test_data.to_csv('Anjum48_Prediction_Submission.csv') # predicted_test_labels = fit_and_predict(scaled_features, correct_facies_labels, # scaled_test_features, classifiers) # # Save predicted labels # test_data['Facies'] = predicted_test_labels.mode(axis=1)[0] # test_data.to_csv('Anjum48_Prediction_Submission.csv') # Plot 
predicted labels make_facies_log_plot( test_data[test_data['Well Name'] == 'STUART'], facies_colors=facies_colors) make_facies_log_plot( test_data[test_data['Well Name'] == 'CRAWFORD'], facies_colors=facies_colors) mpl.rcParams.update(inline_rc) <END_TASK>
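The facies and adjacent-facies accuracies used in the notebook above reduce to simple sums over the confusion matrix; a minimal pure-Python sketch of that arithmetic (the 3×3 matrix and adjacency map here are made-up illustration data, not the notebook's results):

```python
# Accuracy: fraction of diagonal counts; adjacent accuracy also credits
# predictions that land in a class adjacent to the true one.
def accuracy(conf):
    total = sum(sum(row) for row in conf)
    correct = sum(conf[i][i] for i in range(len(conf)))
    return correct / total

def accuracy_adjacent(conf, adjacent):
    total = sum(sum(row) for row in conf)
    correct = 0
    for i in range(len(conf)):
        correct += conf[i][i]
        for j in adjacent[i]:
            correct += conf[i][j]
    return correct / total

# Hypothetical confusion matrix: rows = true class, columns = predicted class.
conf = [[8, 2, 0],
        [1, 7, 2],
        [0, 3, 7]]
adjacent = [[1], [0, 2], [1]]  # each class counts its neighbours as "adjacent"
```

Here the adjacent accuracy forgives every off-by-one misclassification, which is why it is always at least as large as the plain accuracy.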
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Plot dipole activations Step2: Show the evoked response and the residual for gradiometers Step3: Generate stc from dipoles Step4: View in 2D and 3D ("glass" brain like 3D plot)
<ASSISTANT_TASK:> Python Code: # Author: Martin Luessi <mluessi@nmr.mgh.harvard.edu> # Daniel Strohmeier <daniel.strohmeier@tu-ilmenau.de> # # License: BSD (3-clause) import numpy as np import mne from mne.datasets import sample from mne.inverse_sparse import gamma_map, make_stc_from_dipoles from mne.viz import (plot_sparse_source_estimates, plot_dipole_locations, plot_dipole_amplitudes) print(__doc__) data_path = sample.data_path() subjects_dir = data_path + '/subjects' fwd_fname = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif' evoked_fname = data_path + '/MEG/sample/sample_audvis-ave.fif' cov_fname = data_path + '/MEG/sample/sample_audvis-cov.fif' # Read the evoked response and crop it condition = 'Left visual' evoked = mne.read_evokeds(evoked_fname, condition=condition, baseline=(None, 0)) evoked.crop(tmin=-50e-3, tmax=300e-3) # Read the forward solution forward = mne.read_forward_solution(fwd_fname) # Read noise noise covariance matrix and regularize it cov = mne.read_cov(cov_fname) cov = mne.cov.regularize(cov, evoked.info, rank=None) # Run the Gamma-MAP method with dipole output alpha = 0.5 dipoles, residual = gamma_map( evoked, forward, cov, alpha, xyz_same_gamma=True, return_residual=True, return_as_dipoles=True) plot_dipole_amplitudes(dipoles) # Plot dipole location of the strongest dipole with MRI slices idx = np.argmax([np.max(np.abs(dip.amplitude)) for dip in dipoles]) plot_dipole_locations(dipoles[idx], forward['mri_head_t'], 'sample', subjects_dir=subjects_dir, mode='orthoview', idx='amplitude') # # Plot dipole locations of all dipoles with MRI slices # for dip in dipoles: # plot_dipole_locations(dip, forward['mri_head_t'], 'sample', # subjects_dir=subjects_dir, mode='orthoview', # idx='amplitude') ylim = dict(grad=[-120, 120]) evoked.pick_types(meg='grad', exclude='bads') evoked.plot(titles=dict(grad='Evoked Response Gradiometers'), ylim=ylim, proj=True, time_unit='s') residual.pick_types(meg='grad', exclude='bads') 
residual.plot(titles=dict(grad='Residuals Gradiometers'), ylim=ylim, proj=True, time_unit='s') stc = make_stc_from_dipoles(dipoles, forward['src']) scale_factors = np.max(np.abs(stc.data), axis=1) scale_factors = 0.5 * (1 + scale_factors / np.max(scale_factors)) plot_sparse_source_estimates( forward['src'], stc, bgcolor=(1, 1, 1), modes=['sphere'], opacity=0.1, scale_factors=(scale_factors, None), fig_name="Gamma-MAP") <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: ------------- (semi)-Automatic ------------- Step2: Configure GPU/CPU devices Step3: Data queueing Step4: Load data Step5: Load trained model Step6: Evaluate testing set Step7: Extract the most crucial layer Step8: Look for the densely/fully connected layer nearest to the classifier, which is the one that has the shape of (None, number-of-classes) Step9: Metadata for embeddings Step10: Predicted values in .TXT Step11: Note Step12: Note Step13: Confusion matrix
<ASSISTANT_TASK:> Python Code: # Location of digested data input_directory = '/digested/' # Location of saved trained model model_directory = '/model_directory/' # Desired location for outputs output_directory = '/output_directory/' %matplotlib inline import keras import pickle from keras.layers import * from keras.models import Sequential import numpy import os import os.path import matplotlib.pyplot import pandas import seaborn import sklearn.metrics import tensorflow from tensorflow.contrib.tensorboard.plugins import projector # -------- If using Tensorflow-GPU: -------- # configuration = tensorflow.ConfigProto() configuration.gpu_options.allow_growth = True configuration.gpu_options.visible_device_list = "0" session = tensorflow.Session(config=configuration) keras.backend.set_session(session) # -------- If using Tensorflow (CPU) : -------- # # configuration = tensorflow.ConfigProto() # session = tensorflow.Session(config=configuration) # keras.backend.set_session(session) if not os.path.exists(output_directory): os.makedirs(output_directory) def training_data_generator(input_x, input_y, batch_size): num_examples, num_labels = input_y.shape label_indices = [] for i in range(num_labels): indices = [j for j in range(num_examples) if input_y[j,i] > 0] label_indices.append(indices) print("Label",i,":",len(indices),"examples") samples_per_label = int(batch_size / num_labels) def generator(): while True: x_samples = [] y_samples = [] for i in range(num_labels): random.shuffle(label_indices[i]) indices = label_indices[i][0:samples_per_label] x_samples.append( input_x[indices, ...] ) y_samples.append( input_y[indices, ...] ) x_samples = numpy.concatenate( x_samples ) y_samples = numpy.concatenate( y_samples ) batch_indices = numpy.arange(x_samples.shape[0]) numpy.random.shuffle(batch_indices) x_samples = x_samples[batch_indices, ...] y_samples = y_samples[batch_indices, ...] 
yield (x_samples, y_samples) return generator() def prediction_data_generator(input_x, input_y, batch_size): num_examples, num_labels = input_y.shape steps = int(num_examples / batch_size) def generator(): i = 0 while True: start = i*batch_size end = (i+1)*batch_size x_sample = input_x[start:end, ...] y_sample = input_y[start:end, ...] yield (x_sample, y_sample) i = i + 1 if i < steps else 0 print("Prediction steps:",steps) return generator(), steps # This function to normalize illumination discrepancy across images def min_max_norm(x, minimum=None, maximum=None): channels = x.shape[-1] if minimum is None and maximum is None: minimum = [] maximum = [] for channel in range(channels): minimum.append( x[..., channel].min() ) maximum.append( x[..., channel].max() ) result = numpy.zeros_like(x) for ch in range(channels): result[..., ch] = 100.0*( (numpy.ndarray.astype(x[..., ch], numpy.float32) - minimum[ch])/(maximum[ch] - minimum[ch]) ) return (result, minimum, maximum) training_x = numpy.load(os.path.join(input_directory, "training_x.npy")) training_y = numpy.load(os.path.join(input_directory, "training_y.npy")) # input_directory = "/path/to/other/input_directory/if/needed" testing_x = numpy.load(os.path.join(input_directory, "testing_x.npy")) testing_y = numpy.load(os.path.join(input_directory, "testing_y.npy")) print("Loading training data") # Use this function to normalize signal intensities across images training_x, pix_min, pix_max = min_max_norm(training_x) training_generator = training_data_generator(training_x, training_y, 32) print(training_x.shape, training_y.shape) print("Loading test data") # Use this function to normalize signal intensities across images testing_x, _, _ = min_max_norm(testing_x, pix_min, pix_max) testing_generator, testing_steps = prediction_data_generator(testing_x, testing_y, 32) print(testing_x.shape) model = keras.models.load_model( os.path.join(model_directory, 'model.h5') ) model.load_weights(os.path.join(model_directory, 
'model.h5')) model.evaluate_generator( generator=testing_generator, steps=256 ) layers = model.layers model.summary() print(layers[-4]) abstract_model = None # Clear cached abstract_model abstract_model = Sequential([layers[-4]]) extracted_features = abstract_model.predict_generator( generator=testing_generator, steps=256) print('Converting numeric labels into class names...') class_names = pickle.load(open(os.path.join(input_directory, "class_names.sav"), 'rb')) test_labels = numpy.argmax(testing_y, -1) # convert one-hot labels back to class indices def save_metadata(file): with open(file, 'w') as f: for i in range(test_labels.shape[0]): f.write('{}\n'.format( class_names[test_labels[i]] )) save_metadata( os.path.join(output_directory, 'metadata.tsv') ) print('Done.') numpy.savetxt( os.path.join(output_directory, 'table_of_features.txt' ), extracted_features, delimiter='\t') numpy.save( os.path.join(output_directory, 'table_of_features.npy' ), extracted_features ) extracted_features = numpy.load( 'table_of_features.npy' ) embedding_var = tensorflow.Variable(extracted_features) embedSess = tensorflow.Session() # save variable in session embedSess.run(embedding_var.initializer) # save session (only used variable) to file saver = tensorflow.train.Saver([embedding_var]) saver.save(embedSess, 'tf.ckpt') summary_writer = tensorflow.summary.FileWriter('./') config = tensorflow.contrib.tensorboard.plugins.projector.ProjectorConfig() embedding = config.embeddings.add() embedding.tensor_name = embedding_var.name embedding.metadata_path = 'metadata.tsv' # this metadata_path needs to be modified later. See note. 
tensorflow.contrib.tensorboard.plugins.projector.visualize_embeddings(summary_writer, config) embedSess.close() metrics = pandas.read_csv(os.path.join(model_directory, 'training.csv') ) print(metrics) matplotlib.pyplot.plot(metrics["acc"]) matplotlib.pyplot.plot(metrics["val_acc"]) matplotlib.pyplot.plot(metrics["loss"]) matplotlib.pyplot.plot(metrics["val_loss"]) predicted = model.predict( batch_size=50, x=testing_x ) predicted = numpy.argmax(predicted, -1) expected = numpy.argmax(testing_y[:, :], -1) confusion = sklearn.metrics.confusion_matrix(expected, predicted) confusion = pandas.DataFrame(confusion) matplotlib.pyplot.figure(figsize=(12, 8)) seaborn.heatmap(confusion, annot=True) matplotlib.pyplot.savefig( os.path.join(output_directory, 'confusion_matrix.eps') , format='eps', dpi=600) <END_TASK>
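The min_max_norm helper above rescales each channel to a common 0–100 range using the training set's extrema, then reuses those extrema on the test set; a minimal sketch of the same idea on plain Python lists (illustrative values, not the notebook's images):

```python
def min_max_scale(values, minimum=None, maximum=None):
    # Training pass: derive the extrema from the data itself.
    # Test pass: reuse the training extrema so both sets share one scale.
    if minimum is None and maximum is None:
        minimum, maximum = min(values), max(values)
    scaled = [100.0 * (v - minimum) / (maximum - minimum) for v in values]
    return scaled, minimum, maximum

train = [2.0, 4.0, 6.0]
train_scaled, lo, hi = min_max_scale(train)
test_scaled, _, _ = min_max_scale([5.0], lo, hi)
```

Reusing the training extrema is the point: applying fresh per-set extrema to the test data would silently change the meaning of the features.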
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Unit Test
<ASSISTANT_TASK:> Python Code: %run ../graph/graph.py def bfs(root, visit_func): # TODO: Implement me pass %run ../utils/results.py # %load test_bfs.py from nose.tools import assert_equal class TestBfs(object): def __init__(self): self.results = Results() def test_bfs(self): nodes = [] graph = Graph() for id in range(0, 6): nodes.append(graph.add_node(id)) graph.add_edge(0, 1, 5) graph.add_edge(0, 4, 3) graph.add_edge(0, 5, 2) graph.add_edge(1, 3, 5) graph.add_edge(1, 4, 4) graph.add_edge(2, 1, 6) graph.add_edge(3, 2, 7) graph.add_edge(3, 4, 8) bfs(nodes[0], self.results.add_result) assert_equal(str(self.results), "[0, 1, 4, 5, 3, 2]") print('Success: test_bfs') def main(): test = TestBfs() test.test_bfs() if __name__ == '__main__': main() <END_TASK>
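The unit test above expects a level-order visit starting from node 0; a minimal queue-based BFS sketch on a plain adjacency-list dict (the Graph/Node API of graph.py is not shown in this notebook, so this version stands alone and uses the same edges minus the weights):

```python
from collections import deque

def bfs_order(adj, start):
    # Classic breadth-first search: neighbours are visited in insertion
    # order, and each node is enqueued at most once.
    visited = {start}
    queue = deque([start])
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for neighbour in adj.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(neighbour)
    return order

# Same edges as the unit test above, with the weights dropped.
adj = {0: [1, 4, 5], 1: [3, 4], 2: [1], 3: [2, 4]}
```

Running it from node 0 reproduces the order the assertion in the test expects.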
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Let us examine the following simple integral $$ \int_{-1}^1 (x^2+3x +2)\mathrm{d}x .$$ Step2: Now we can call quad. The first argument is the function to be integrated, and the second and third are the integration limits. The output is two numbers. The first is the estimated value of the integral, the second is the algorithm's error estimate. Step3: As the result shows, the analytic calculation and the numerical integral agree. Step4: Infinite integration limits can be passed to quad using the inf notation. Step5: quad handled the two examples examined above without any particular difficulty. Sometimes, however, the integration runs into problems. A good example is when the integrand becomes singular at a point of the integration domain. This does not necessarily mean that the integral does not exist! We can explicitly draw quad's attention to such points with the points keyword. Step6: quad indeed runs into trouble when it has to take the reciprocal of 0! Let us now specify $x=1$ as a problematic point Step7: This way the integral can be carried out nicely. Step8: The first parameter of dblquad is again the function to be integrated. The second and third input parameters give the limits of the integrand's first variable. The fourth and fifth input parameters are the limits of the second integration variable, expressed as functions of the first integration variable. In the simplest case these are constant functions. Step9: In the second integration region, the extreme values of the $y$ variable must be parameterized as functions of the $x$ variable. 
If the integration domain is the unit disk, the lower limit is given by $y(x)=-\sqrt{1-x^2}$ and the upper limit by $y(x)=\sqrt{1-x^2}$ Step10: Differential equations Step11: An equation is called a differential equation if the derivatives of the function to be determined appear in it. We arrive at a simple differential equation, for example, by examining the time evolution of the charge of a capacitor! Step12: Calling the odeint function requires, besides the parameters, the increment function (i.e. the right-hand side of the above equation); let us define this now Step13: We are now ready to call the differential equation solver! odeint expects essentially three input parameters. The first is the increment function defined above, the second is the initial value of the function to be determined, and the third is the set of time points where we want the solution. The return value of the function is the desired data series itself. Step14: Now all that remains is to plot it! Step15: Let us examine another example! Let it be a body attached to a spring, which can move along a line without friction. Step16: Let us solve the equations with an initial displacement of 1 and zero initial velocity!
<ASSISTANT_TASK:> Python Code: # load the usual routines %pylab inline from scipy.integrate import * # load the integration routines def f(x): return (x**2+3*x+2) quad(f,-1,1) # define the integrand def gauss(x): return exp(-x**2) quad(gauss,-inf,inf) sqrt(pi) def h(x): return ((x-1.0)**(-2))**(1.0/3.0) quad(h,0,2) quad(h,0,2,points=[1.0]) # define the integrand def func(x,y): return cos(x)*exp(-x**2-y**2) dblquad(func, -1/2, 1/2, lambda x:-1/2, lambda x:1/2) dblquad(func,-1,1,lambda x:-sqrt(1-x**2),lambda x:sqrt(1-x**2)) from scipy.integrate import * # this is needed for the differential equations too!! # define the parameters epsilon=1 R=1.0e6 C=1.0e-7 def RCkor(q,t): return epsilon/R-q/(R*C) t=linspace(0,1,1000) # the time instants of interest q0=0 # initial value of the charge q=odeint(RCkor,q0,t) # here we solve the differential equation plot(t,q/C,lw=3) xlabel(r'$t$[s]',fontsize=20) ylabel(r'$Q/C$[V]',fontsize=20) grid() def f(u, t): x=u[0] # the first component of u is the displacement v=u[1] # the second component of u is the velocity return [v,-x] # this is the evaluation of the increment itself t=linspace(0,20,1000); # the time interval u0 = [1,0] # initial values for x and v u=odeint(f,u0,t) # here the differential equation is solved plot(t,u[:,0],label=r'$x(t)$ position') plot(t,u[:,1],label=r'$v(t)$ velocity') legend(fontsize=20) xlabel(r'$t$[s]',fontsize=20) 
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Let's define the problem constants Step2: The propagation constant of the transmission line $\gamma=\alpha+j\beta$ is Step3: The matched line loss (or power attenuation), $a=e^{2\alpha L}$, is Step4: If the line is charged with a matched load, ie. $R_L=50\Omega$, then the total line loss is $a$. The power lost in the cable will thus be Step5: Which can also be checked by the scikit-rf transmission line function zl_2_total_loss Step6: Another way to evaluate the total power dissipated in the circuit is to evaluate the power expression Step7: Non Matched load Step8: while the reflection coefficient seen at the input of the transmission line $\Gamma_{in}$ is Step9: The total loss in dB due to SWR is often stated as Step10: However, this expression is only correct if either properties are verified Step11: A more advanced example Step12: First, we can derive the propagation constant $\gamma=\alpha+j\beta$ with $\beta=\frac{\omega}{c }\sqrt{\epsilon_r}$ from the problem parameters Step13: However, the transmission line characteristic reactance is not given in the problem parameters and must be determined. It can be approximated from a high-frequency, low-loss approximation [1] Step14: Now that we have both the characteristic impedance and the propagation constant of the line, the reflection coefficients, input impedance and total loss can be deduced
<ASSISTANT_TASK:> Python Code: %matplotlib inline import matplotlib.pyplot as plt import numpy as np import skrf as rf rf.stylely() Pin = 400 # W z0 = 50 # Ohm freq = rf.Frequency(13.56, npoints=1, unit='MHz') VF = 0.84 RL = 50 # Ohm L = 20 # m alpha = rf.db_2_np(1.483/100) # Np/m beta = freq.w/rf.c/VF gamma = alpha + 1j*beta a = np.exp(2*alpha*L) # also simply 2.84/100*20 print('Matched line loss: a=', rf.mag_2_db10(a), 'dB') print('(Forward) Power delivered to the load:', Pin/a, 'W') print('Power lost in the cable:', Pin *( 1 - 1/a), 'W') a_skrf = rf.zl_2_total_loss(z0, zl=RL, theta=gamma*L) print('Power lost in the cable:', Pin * (1 - 1/a_skrf), 'W') # reflection coefficient and input impedance Gamma_in = rf.zl_2_Gamma_in(z0, RL, theta=gamma*L) Z_in = rf.zl_2_zin(z0, RL, theta=gamma*L) # voltage and current at the line input as a function of source power V_in = np.sqrt(2*z0*Pin)*(1 + Gamma_in) I_in = V_in/Z_in # voltage and current at z=L V,I = rf.voltage_current_propagation(V_in, I_in, z0, gamma*L) P_delivered = 1/2 * np.real(V * np.conj(I)) print('Power delivered to the load: ', P_delivered, 'W') print('Power dissipated in the cable: ',Pin - P_delivered, 'W') z0 = 50 ZL = 200 - 30j Gamma_load = rf.zl_2_Gamma0(z0, ZL) print('|Gamma_load|=', np.abs(Gamma_load)) Gamma_in = rf.zl_2_Gamma_in(z0, ZL, theta=gamma*L) SWR = rf.Gamma0_2_swr(rf.zl_2_Gamma_in(z0, ZL, theta=gamma*L)) print('|Gamma_in|=', np.abs(Gamma_in), '(SWR=', SWR,')') 10*np.log10(a) + 10*np.log10((1 - np.abs(Gamma_in)**2)/(1 - np.abs(Gamma_load)**2)) a = rf.zl_2_total_loss(z0, zl=ZL, theta=gamma * L) print('Total power loss: ', rf.mag_2_db10(a), 'dB' ) print('Delivered power:', Pin/a, 'W') print('The total power loss is the cable:', Pin*(1 - 1/a), 'W') # reflection coefficient and input impedance Gamma_in = rf.zl_2_Gamma_in(z0, ZL, theta=gamma*L) Z_in = rf.zl_2_zin(z0, ZL, theta=gamma*L) # voltage and current at the line input as a function of source power V_in = np.sqrt(2*z0*Pin)*(1 + Gamma_in) I_in 
= V_in/Z_in # voltage and current at z=L V,I = rf.voltage_current_propagation(V_in, I_in, z0, gamma*L) P_delivered = 1/2 * np.real(V * np.conj(I)) print('Power delivered to the load: ', P_delivered, 'W') print('Power dissipated in the cable: ',Pin - P_delivered, 'W') rf.Gamma0_2_swr(Gamma_in) 10*np.log10(P_delivered/Pin) Z_L = 4.5 - 1673j R_0 = 402.75 freq = rf.Frequency(1.83, npoints=1, unit='MHz') VF = 0.915 L = rf.feet_2_meter(100) alpha = rf.db_2_np(rf.db_per_100feet_2_db_per_100meter(0.095)/100) beta = freq.w/rf.c/VF gamma = alpha + 1j*beta print(gamma) X_0 = -alpha/beta*R_0 Z_0 = R_0 + 1j*X_0 print('X_0=', X_0) print('Gamma at load:', np.abs(rf.zl_2_Gamma0(Z_0, Z_L))) print('Gamma at input:', np.abs(rf.zl_2_Gamma_in(Z_0, Z_L, theta=gamma*L))) print('SWR at load:', rf.Gamma0_2_swr(rf.zl_2_Gamma0(Z_0, Z_L))) print('SWR at input:', rf.Gamma0_2_swr(rf.zl_2_Gamma_in(Z_0, Z_L, theta=gamma*L))) print('Input impedance:', rf.input_impedance_at_theta(Z_0, Z_L, theta=gamma*L ), 'Ohm') total_loss_db = rf.mag_2_db10(np.abs(rf.zl_2_total_loss(z0=Z_0, zl=Z_L, theta=gamma*L))) print('Total loss:', total_loss_db, 'dB') <END_TASK>
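The matched-line-loss arithmetic above can be sanity-checked without scikit-rf: converting the cable's dB-per-metre attenuation to Np/m, forming the power attenuation $a=e^{2\alpha L}$ and converting back to dB must reproduce the per-length loss times the length exactly. A minimal sketch with the same 1.483 dB/100 m and 20 m figures as above:

```python
import math

db_per_m = 1.483 / 100                  # cable attenuation, dB per metre
L = 20.0                                # line length in metres
alpha = db_per_m * math.log(10) / 20    # field attenuation in Np/m
a = math.exp(2 * alpha * L)             # power attenuation (linear ratio)
loss_db = 10 * math.log10(a)            # back to dB
delivered_fraction = 1 / a              # fraction of Pin reaching a matched load
```

The identity loss_db = db_per_m * L holds by construction, since the 20/ln(10) factor in the Np conversion cancels against the 10/ln(10) in the dB conversion of the squared field.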
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Run simulation Step2: Plot sampled histogram against expected result Step3: Statistical test of output against expected result
<ASSISTANT_TASK:> Python Code: %matplotlib inline import matplotlib import matplotlib.cm as cm import numpy as np import matplotlib.pyplot as plt import jinja2, json, sys from math import log, fabs, pi, cos, sin from scipy.stats import ks_2samp try: try: import ruamel.yaml as yaml except ImportError: # anaconda packs it as a standalone module (underscore instead of dot) import ruamel_yaml as yaml except ImportError: import yaml number_of_replicas = 4 scale_array = np.geomspace(1, 0.1, number_of_replicas) temper = True # run with parallel tempering or not? # generate input files from the template file; one for each replica with open("temper.yml") as template_file: template = jinja2.Template(template_file.read()) for replica, scale in enumerate(scale_array): contents = template.render(scale=scale, micro=20000) yml = yaml.safe_load(contents) # parse contents as YAML if not temper: del yml["moves"][1] file = open("mpi{}.input.json".format(replica), 'w') json.dump(yml, file, indent=4) # save JSON file file.close() %%bash if [[ -z "${FAUNUS_EXECUTABLE}" ]]; then mpirun -np 4 faunus -i input.json else echo "Seems we're running CTest - use Faunus target from CMake" "${MPIEXEC}" -np 4 "${FAUNUS_EXECUTABLE}" -i input.json --nobar fi def energy(x): s = 1 + sin(2*pi*x) if x>=-2.00 and x<=-1.25: return 1*s if x>=-1.25 and x<=-0.25: return 2*s if x>=-0.25 and x<= 0.75: return 3*s if x>= 0.75 and x<= 1.75: return 4*s if x>= 1.75 and x<= 2.00: return 5*s return 10000000 u_vec = np.vectorize(energy) offset = 1e10 # used to offset pmf to match above energy function for replica in range(number_of_replicas): if (replica==0): x = np.loadtxt("mpi{}.x.dat.gz".format(replica), usecols=[1]) hist, bins = np.histogram(x, bins=150) x = (bins[:-1] + bins[1:]) / 2 pmf = -np.log(hist) if (pmf.min() < offset): offset = pmf.min(); plt.plot(x, pmf, label='{}'.format(replica)) plt.legend(loc=0, frameon=False, title='replica') plt.plot(x, u_vec(x) + offset, 'k--', alpha=0.6) plt.xlabel(r'x') 
plt.ylabel(r'PMF ($k_BT$)'); statistic, pvalue = ks_2samp(pmf, u_vec(x)+offset) if (pvalue < 0.95): sys.exit(1) <END_TASK>
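The analysis in this block reduces to: histogram the sampled coordinate, convert the histogram to a potential of mean force via PMF = -ln P, and compare against the expected potential. Below is a self-contained sketch of that pipeline on synthetic data; the unit-Gaussian sampler and the 0.6 deviation tolerance are illustrative assumptions, and no faunus/MPI run is needed:

```python
import numpy as np

# Synthetic stand-in for the Monte Carlo trajectory: draws from a unit Gaussian,
# whose exact potential of mean force is U(x) = x**2 / 2 (up to a constant).
rng = np.random.default_rng(0)
samples = rng.normal(0.0, 1.0, size=200_000)

# Histogram -> probability density -> PMF = -ln(P), shifted so the minimum is zero
hist, bins = np.histogram(samples, bins=60, range=(-3, 3), density=True)
centers = (bins[:-1] + bins[1:]) / 2
mask = hist > 0                      # skip empty bins to avoid log(0)
pmf = -np.log(hist[mask])
pmf -= pmf.min()

# Expected PMF on the same grid, shifted the same way
expected = centers[mask] ** 2 / 2
expected -= expected.min()

max_dev = np.abs(pmf - expected).max()
print("largest PMF deviation:", max_dev)
```

Shifting both curves so their minima are zero mirrors the offset bookkeeping used when overlaying the sampled PMF on the expected energy function above.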
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Building a new normal distribution Step2: The custom objects have a few requirements. Step3: Looks good so far. Looks like there might be a small difference at very small precisions. Step4: Great! Now let's try putting this into a more complex model. Step5: Great! It looks like, both for fitting a mixture model and for performing inference, our new Python distribution is identical to the built-in one. Now, how much slower is it to use the Python object versus the Cython one? Step6: It looks like it can be a fair bit slower. Another drawback of using a Python distribution is that it may be less efficient to do multi-threaded parallelism, because the Python object requires the GIL. However, many numpy operations will drop the GIL and so can actually be used with multi-threading, but that isn't guaranteed. Step7: This distribution acts exactly like any other distribution. If we want to use a different distribution to model different features, we can use a mix of custom and built-in distributions with ease. Step8: Building a new distribution, the Student T Distribution Step9: The primary difference between the Student T distribution and the normal distribution is the degrees-of-freedom parameter that has to be set in advance---it is not meant to be learned from the data. The higher this parameter, the more like a normal distribution it is. Step10: Let's stick it in a mixture model now and see what the normal and the Student T versions look like. Step11: The results look reasonable. The Student T distributions have a smaller valley separating the peaks because they have heavier tails. Additionally, the peaks are closer together for the Student T distribution because the heavier tails assign more credit from points in one cluster to the distribution modeling the other cluster.
This pulls the centers together slightly. Step12: In the code above, BlockGaussianDistribution creates multiple MultivariateGaussianDistribution objects, each modeling a different set of features. The probability of an example is then the product of the probability of each set of features under their respective distribution. Step13: The first four variables correspond to age, sex, BMI, and average blood pressure. The remaining six features are different blood serum measurements. In this example, let's use univariate distributions to model the first four variables, and a multivariate Gaussian with a full covariance matrix to model the last six. Step14: It looks like age and BMI may be better modeled by a normal distribution than a log normal distribution, but that average blood pressure is better modeled with a log normal distribution. Due to the binary nature of sex in this data set, a Bernoulli distribution seems like the right choice.
<ASSISTANT_TASK:> Python Code: %matplotlib inline import time import pandas import random import numpy import matplotlib.pyplot as plt import seaborn; seaborn.set_style('whitegrid') import itertools from pomegranate import * random.seed(0) numpy.random.seed(0) numpy.set_printoptions(suppress=True) %load_ext watermark %watermark -m -n -p numpy,scipy,pomegranate import scipy.stats class NormalDistribution2(): def __init__(self, mu, std): self.mu = mu self.std = std self.parameters = (self.mu, self.std) self.d = 1 self.summaries = numpy.zeros(3) def log_probability(self, X): return scipy.stats.norm.logpdf(X, self.mu, self.std) def summarize(self, X, w=None): if w is None: w = numpy.ones(X.shape[0]) X = X.reshape(X.shape[0]) self.summaries[0] += w.sum() self.summaries[1] += X.dot(w) self.summaries[2] += (X ** 2.).dot(w) def from_summaries(self, inertia=0.0): self.mu = self.summaries[1] / self.summaries[0] self.std = self.summaries[2] / self.summaries[0] - self.summaries[1] ** 2 / (self.summaries[0] ** 2) self.std = numpy.sqrt(self.std) self.parameters = (self.mu, self.std) self.clear_summaries() def clear_summaries(self, inertia=0.0): self.summaries = numpy.zeros(3) @classmethod def from_samples(cls, X, weights=None): d = NormalDistribution2(0, 0) d.summarize(X, weights) d.from_summaries() return d @classmethod def blank(cls): return NormalDistribution2(0, 0) NormalDistribution(2.532, 8.211).log_probability(102.563), NormalDistribution2(2.532, 8.211).log_probability(102.563) X = numpy.random.normal(0.872, 0.57721, size=100000) print(NormalDistribution.from_samples(X).parameters) print(NormalDistribution2.from_samples(X).parameters) X = numpy.random.normal(0, 1, size=(1000, 1)) X[::2] += 1 model1 = GeneralMixtureModel.from_samples(NormalDistribution, 2, X, max_iterations=5, init='first-k', verbose=True) model2 = GeneralMixtureModel.from_samples(NormalDistribution2, 2, X, max_iterations=5, init='first-k', verbose=True) model1.log_probability(X).sum(), 
model2.log_probability(X).sum() X = numpy.random.normal(0, 1, size=(300000, 1)) X[::2] += 1 %timeit GeneralMixtureModel.from_samples(NormalDistribution, 2, X, max_iterations=100, init='first-k') %timeit GeneralMixtureModel.from_samples(NormalDistribution2, 2, X, max_iterations=100, init='first-k') %timeit GeneralMixtureModel.from_samples(NormalDistribution, 2, X, max_iterations=100, init='first-k', n_jobs=2) %timeit GeneralMixtureModel.from_samples(NormalDistribution2, 2, X, max_iterations=100, init='first-k', n_jobs=2) X = numpy.random.normal(0, 1, size=(500, 3)) X[::2] += 1 X[:,1] = numpy.abs(X[:,1]) distributions = [NormalDistribution, ExponentialDistribution, NormalDistribution2] model = GeneralMixtureModel.from_samples(distributions, 2, X, verbose=True) class StudentTDistribution(): def __init__(self, mu, std, df=1.0): self.mu = mu self.std = std self.df = df self.parameters = (self.mu, self.std) self.d = 1 self.summaries = numpy.zeros(3) def probability(self, X): return numpy.exp(self.log_probability(X)) def log_probability(self, X): return scipy.stats.t.logpdf(X, self.df, self.mu, self.std) def summarize(self, X, w=None): if w is None: w = numpy.ones(X.shape[0]) X = X.reshape(X.shape[0]) self.summaries[0] += w.sum() self.summaries[1] += X.dot(w) self.summaries[2] += (X ** 2.).dot(w) def from_summaries(self, inertia=0.0): self.mu = self.summaries[1] / self.summaries[0] self.std = self.summaries[2] / self.summaries[0] - self.summaries[1] ** 2 / (self.summaries[0] ** 2) self.std = numpy.sqrt(self.std) self.parameters = (self.mu, self.std) self.clear_summaries() def clear_summaries(self, inertia=0.0): self.summaries = numpy.zeros(3) @classmethod def from_samples(cls, X, weights=None, df=1): d = StudentTDistribution(0, 0, df) d.summarize(X, weights) d.from_summaries() return d @classmethod def blank(cls): return StudentTDistribution(0, 0) dn = NormalDistribution(0, 1) dt1 = StudentTDistribution(0, 1, 1) dt3 = StudentTDistribution(0, 1, 3) dt8 = 
StudentTDistribution(0, 1, 8) x = numpy.arange(-6, 6, 0.1) plt.figure(figsize=(7, 4)) plt.plot(x, dn.probability(x), label="Normal") plt.plot(x, dt1.probability(x), label="Student T, df=1") plt.plot(x, dt3.probability(x), label="Student T, df=3") plt.plot(x, dt8.probability(x), label="Student T, df=8") plt.ylabel("Probability", fontsize=12) plt.legend(fontsize=12) plt.show() X = numpy.random.normal(-2, 1, size=(1000, 1)) X[::2] += 4 modeln = GeneralMixtureModel.from_samples(NormalDistribution, 2, X) modelt = GeneralMixtureModel.from_samples(StudentTDistribution, 2, X) x = numpy.arange(-15, 15, 0.1) plt.figure(figsize=(7, 4)) plt.plot(x, modeln.probability(x), label="Normal Mixture") plt.plot(x, modelt.probability(x), label="Student T Mixture") plt.ylabel("Probability", fontsize=12) plt.legend(fontsize=12) plt.show() class BlockGaussianDistribution(): def __init__(self, distributions): self.distributions = distributions self.d = sum([d.d for d in distributions]) def log_probability(self, X): i, log_probability = 0, numpy.zeros(X.shape[0]) for distribution in self.distributions: log_probability += distribution.log_probability(X[:, i:i+distribution.d].copy()) i += distribution.d return log_probability def summarize(self, X, w=None): i = 0 for distribution in self.distributions: distribution.summarize(X[:, i:i+distribution.d].copy(), w) i += distribution.d def from_summaries(self, inertia=0.0): for distribution in self.distributions: distribution.from_summaries(inertia) @classmethod def from_samples(cls, X, weights=None, ds=[]): distributions = [MultivariateGaussianDistribution.blank(d) for d in ds] d = BlockGaussianDistribution(distributions) d.summarize(X, weights) d.from_summaries() return d @classmethod def blank(cls, ds): distributions = [MultivariateGaussianDistribution.blank(d) for d in ds] return BlockGaussianDistribution(distributions) from sklearn.datasets import load_diabetes from sklearn.model_selection import train_test_split from pomegranate import 
LogNormalDistribution, BernoulliDistribution, NormalDistribution data = load_diabetes() X, y = data.data, data.target X[:,1] = (X[:,1] - X[:,1].min()) / (X[:,1].max() - X[:,1].min()) X[:,0] -= X[:,0].min() - 0.001 X[:,2] -= X[:,2].min() - 0.001 X[:,3] -= X[:,3].min() - 0.001 y = y > numpy.median(y) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1) import matplotlib.pyplot as plt names = 'Age', 'Sex', 'BMI', 'Average Blood Pressure' plt.figure(figsize=(14, 2)) for i in range(4): plt.subplot(1, 4, i+1) x = numpy.arange(0.01, 0.3 if i != 1 else 1, 0.01) plt.hist(X_train[:,i], bins=10, normed=True) plt.plot(x, LogNormalDistribution.from_samples(X_train[:,i]).probability(x), label="LogNormal") plt.plot(x, NormalDistribution.from_samples(X_train[:,i]).probability(x), label="Normal") plt.title(names[i], fontsize=12) plt.legend(loc=(1.05, 0.4)) plt.show() ds = [BlockGaussianDistribution([NormalDistribution.blank(), BernoulliDistribution.blank(), NormalDistribution.blank(), LogNormalDistribution.blank(), MultivariateGaussianDistribution.blank(6)]) for j in range(2)] model = BayesClassifier(ds) model.fit(X_train, y_train) model2 = BayesClassifier.from_samples(MultivariateGaussianDistribution, X_train, y_train) model.score(X_test, y_test), model2.score(X_test, y_test) <END_TASK>
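The summarize()/from_summaries() split that every custom distribution above implements is just streaming accumulation of sufficient statistics. Here is a pomegranate-free numpy sketch of the same weighted-Gaussian bookkeeping; the class name StreamingNormal is invented for illustration:

```python
import numpy as np

class StreamingNormal:
    """Accumulate weighted sufficient statistics (n, sum x, sum x^2) batch by
    batch, then recover the maximum-likelihood mean and standard deviation."""

    def __init__(self):
        self.n = 0.0
        self.sx = 0.0
        self.sxx = 0.0

    def summarize(self, X, w=None):
        X = np.asarray(X, dtype=float)
        w = np.ones(len(X)) if w is None else np.asarray(w, dtype=float)
        self.n += w.sum()
        self.sx += X.dot(w)
        self.sxx += (X ** 2).dot(w)

    def from_summaries(self):
        mu = self.sx / self.n
        var = self.sxx / self.n - mu ** 2
        return mu, np.sqrt(var)

rng = np.random.default_rng(1)
X = rng.normal(2.0, 0.5, size=50_000)

d = StreamingNormal()
for batch in np.array_split(X, 7):   # feed the data in chunks
    d.summarize(batch)
mu, std = d.from_summaries()
print(mu, std)
```

Because only (n, Σx, Σx²) are stored, feeding the data in any number of batches gives the same update as one full pass (up to floating-point round-off), which is what makes the out-of-core API above possible.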
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Use gethostbyname() to consult the operating system hostname resolution API and convert the name of a server to its numerical address. Step2: For access to more naming information about a server, use gethostbyname_ex(). It returns the canonical hostname of the server, any aliases, and all of the available IP addresses that can be used to reach it. Step3: Use getfqdn() to convert a partial name to a fully qualified domain name. Step4: When the address of a server is available, use gethostbyaddr() to do a “reverse” lookup for the name. Step5: Finding Service Information Step6: To reverse the service port lookup, use getservbyport(). Step8: The number assigned to a transport protocol can be retrieved with getprotobyname(). Step10: Looking Up Server Addresses Step11: IP Address Representations Step12: The four bytes in the packed format can be passed to C libraries, transmitted safely over the network, or saved to a database compactly.
<ASSISTANT_TASK:> Python Code: import socket print(socket.gethostname()) import socket HOSTS = [ 'apu', 'pymotw.com', 'www.python.org', 'nosuchname', ] for host in HOSTS: try: print('{} : {}'.format(host, socket.gethostbyname(host))) except socket.error as msg: print('{} : {}'.format(host, msg)) import socket HOSTS = [ 'apu', 'pymotw.com', 'www.python.org', 'nosuchname', ] for host in HOSTS: print(host) try: name, aliases, addresses = socket.gethostbyname_ex(host) print(' Hostname:', name) print(' Aliases :', aliases) print(' Addresses:', addresses) except socket.error as msg: print('ERROR:', msg) print() import socket for host in ['scott-t460', 'pymotw.com']: print('{:>10} : {}'.format(host, socket.getfqdn(host))) import socket hostname, aliases, addresses = socket.gethostbyaddr('10.104.190.53') print('Hostname :', hostname) print('Aliases :', aliases) print('Addresses:', addresses) import socket from urllib.parse import urlparse URLS = [ 'http://www.python.org', 'https://www.mybank.com', 'ftp://prep.ai.mit.edu', 'gopher://gopher.micro.umn.edu', 'smtp://mail.example.com', 'imap://mail.example.com', 'imaps://mail.example.com', 'pop3://pop.example.com', 'pop3s://pop.example.com', ] for url in URLS: parsed_url = urlparse(url) port = socket.getservbyname(parsed_url.scheme) print('{:>6} : {}'.format(parsed_url.scheme, port)) import socket from urllib.parse import urlunparse for port in [80, 443, 21, 70, 25, 143, 993, 110, 995]: url = '{}://example.com/'.format(socket.getservbyport(port)) print(url) import socket def get_constants(prefix): Create a dictionary mapping socket module constants to their names. 
return { getattr(socket, n): n for n in dir(socket) if n.startswith(prefix) } protocols = get_constants('IPPROTO_') for name in ['icmp', 'udp', 'tcp']: proto_num = socket.getprotobyname(name) const_name = protocols[proto_num] print('{:>4} -> {:2d} (socket.{:<12} = {:2d})'.format( name, proto_num, const_name, getattr(socket, const_name))) import socket def get_constants(prefix): Create a dictionary mapping socket module constants to their names. return { getattr(socket, n): n for n in dir(socket) if n.startswith(prefix) } families = get_constants('AF_') types = get_constants('SOCK_') protocols = get_constants('IPPROTO_') for response in socket.getaddrinfo('www.python.org', 'http'): # Unpack the response tuple family, socktype, proto, canonname, sockaddr = response print('Family :', families[family]) print('Type :', types[socktype]) print('Protocol :', protocols[proto]) print('Canonical name:', canonname) print('Socket address:', sockaddr) print() import binascii import socket import struct import sys for string_address in ['192.168.1.1', '127.0.0.1']: packed = socket.inet_aton(string_address) print('Original:', string_address) print('Packed :', binascii.hexlify(packed)) print('Unpacked:', socket.inet_ntoa(packed)) print() import binascii import socket import struct import sys string_address = '2002:ac10:10a:1234:21e:52ff:fe74:40e' packed = socket.inet_pton(socket.AF_INET6, string_address) print('Original:', string_address) print('Packed :', binascii.hexlify(packed)) print('Unpacked:', socket.inet_ntop(socket.AF_INET6, packed)) <END_TASK>
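The packed-representation claims in the last steps can be checked offline, without any DNS lookups; this short check round-trips a dotted-quad address through its 4-byte network-order form:

```python
import socket
import struct

# Round-trip a dotted-quad address through its packed 4-byte form.
packed = socket.inet_aton('192.168.1.1')
print(len(packed))                     # 4 bytes

# The packed bytes are in network (big-endian) order, so they can be viewed
# as one unsigned 32-bit integer.
(as_int,) = struct.unpack('!I', packed)
print(hex(as_int))                     # 0xc0a80101

restored = socket.inet_ntoa(packed)
print(restored)                        # 192.168.1.1
```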
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Enter "ipython notebook" to get started Step2: We import other modules to utilize other people's work. numpy enables us to do fancy things with numbers and matrices Step3: We use "." to ask Python to run the function loadtxt that belongs to the numpy library. It is used everywhere in Python to refer to the parts of things as thing.component Step4: Assigning 55 to a variable weight_kg Step5: Hands-on Step6: What does the following program print out? Step7: Hands-on Step8: What is the value of element[ Step9: data.shape Step10: How do I know? Try help(data) or look at the Python manual Step11: Things listed are either "functions" or "attributes". Don't worry about all ___XXX ___; these are system variables or methods, not really meant to be used for usual programming. Step12: Or try help(data.xxxx) for more details. Step13: Hands-on Step14: Step15: Challenge Step16: 2.What's inside the box? Step17: (a) mass, age Step18: (a) causes an error Step19: Lesson 2 Step20: We have a dozen data sets to process and analyse. Of course, we can repeat the process over and over - but it will be frustrating to do so. If you want to become a good programmer, you have to become lazy. (If you are already lazy, chances are high you are one of those exceptional, natural-born programming talents.) We can teach the computer how to repeat things and let it do the boring job.
Step21: Only in New Zealand Step22: Challenge Step23: (a) (empty) Step24: What is the value of element[ Step25: ======================================================================================================================== Step26: Somehow, a list is similar to a string Step27: Challenge Step28: ======================================================================================================================== Step29: Challenge Step30: Convert the following code (copied from above) to process the first 3 inflammation files and draw 3 graphs (ie. mean, max, min) for each file using a "for" loop Step31: Challenge Step32: (1) 3 Step33: We can add some intelligence to our program to make decisions Step34: Challenge Step35: What do you think this code will output? Step36: (a) x is 1 Step37: Write some code that sums the positive and negative numbers in a list separately, using in-place operators. Step38: swap Step39: Lesson 4 Step40: return Step41: Challenge Step42: Selecting characters from strings Step43: Fahrenheit to Kelvin and to Celsius Step44: Something is wrong! Why? Step45: Challenge Step46: Test the function by Step47: and you expect to see 3 graphs for inflammation-01.csv Step48: Lesson 5 Step49: An assertion is simply a statement that something must be true at a certain point in a program. Python evaluates the assertion condition; if it is true, Python does nothing, but if it is false, Python halts the program immediately and prints the error message. Step50: Challenge Step58: (a) Step66: If you can run your python program in the terminal, you can combine with shell commands! Step81: Now let's learn how to write a command-line python program. Using the text editor, open argv-list.py (Run the next command in Bash terminal) Step82: sys.argv[0] is the program file name (ALWAYS!), sys.argv[1...] are all the arguments you call with the program. Step90: and let's play with the data file small-01..-03.csv.
Step98: Let's run our program with one of these files as input Step106: No output - because the main() function was defined, but not called. Step114: Challenge Step115: Note Step123: Run the program with the following command. Examine the output
<ASSISTANT_TASK:> Python Code: git clone https://github.com/sungeunbae/python-files.git cd python-files !ls import numpy numpy.loadtxt(fname='inflammation-01.csv', delimiter=',') weight_kg = 55 print weight_kg print 'weight in pounds:', 2.2 * weight_kg weight_kg = 57 weight_lb = 2.2*weight_kg print 'weight in kilograms:',weight_kg, ' and in pounds:', weight_lb weight = 70.5 age = 35 # Take a trip to the planet Neptune weight = weight * 1.14 age = age + 20 first, second = 'Grace', 'Hopper' third, fourth = second, first print third, fourth data = numpy.loadtxt(fname='inflammation-01.csv', delimiter=',') print data print type(data) print data.shape data[0,0] #top-left corner data[1,1] data[1,2] print data[0:4, 0:10] print data[5:10, 0:10] data[:3,36:] #:x is used to indicate from 0 to x (excluding x). x: indicates from x to the end. small = data[:3,36:] print 'small is' print small element = 'oxygen' print 'first three characters:', element[0:3] print 'last three characters:', element[3:6] doubledata = data*2.0 print 'original:' print data[:3, 36:] print 'doubledata:' print doubledata[:3, 36:] tripledata = doubledata+data print tripledata[:3,36:] print doubledata+2 #Creates an array of the same shape filled with 2's print data.mean() print 'maximum inflammation:', data.max() print 'minimum inflammation:', data.min() print 'standard deviation:',data.std() help(data) #gives full documents. sometimes too much! dir(data) #gives a list of functions and attributes. Less information type(data.size) type(data.shape) type(data.std) help(data.std) patient_0 = data[0,:] #row 0 and every column, extract everything from row 0 print 'maximum inflammation for patient 0:', patient_0.max() print data.mean(axis=0) #average inflmmation per day for all patients print data.mean(axis=0).shape print data.mean(axis=1) print data.mean(axis=1).shape #next line is very important - otherwise, your notebook will hang forever!!!! 
%matplotlib inline from matplotlib import pyplot pyplot.imshow(data) pyplot.show() #create a heatmap of our data and show ave_inflammation = data.mean(axis=0) # average inflammation over time. per day for all patients pyplot.plot(ave_inflammation) # create a line graph of these values pyplot.show() import numpy as np #alias to reduce typing from matplotlib import pyplot as plt data = np.loadtxt(fname='inflammation-01.csv', delimiter=',') plt.figure(figsize=(10.0, 3.0)) plt.subplot(1, 3, 1) plt.ylabel('average') plt.plot(data.mean(axis=0)) plt.subplot(1, 3, 2) plt.ylabel('max') plt.plot(data.max(axis=0)) plt.subplot(1, 3, 3) plt.ylabel('min') plt.plot(data.min(axis=0)) plt.tight_layout() plt.show() import numpy as np from matplotlib import pyplot as plt data = np.loadtxt(fname='inflammation-01.csv', delimiter=',') plt.figure(figsize=(3.0, 10.0)) plt.subplot(3, 1, 1) plt.ylabel('average') plt.plot(data.mean(axis=0)) plt.subplot(3, 1, 2) plt.ylabel('max') plt.plot(data.max(axis=0)) plt.subplot(3, 1, 3) plt.ylabel('min') plt.plot(data.min(axis=0)) plt.tight_layout() plt.show() mass = 47.5 age=122 mass = mass *2.0 age=age-20 print mass, age first, second = 'Grace', 'Hopper' third, fourth = second, first print third, fourth import numpy as np from matplotlib import pyplot as plt data = np.loadtxt(fname='inflammation-01.csv', delimiter=',') plt.ylabel('std') plt.plot(data.std(axis=0)) plt.show() ls *.csv s='christchurch' print s[0] print s[1] print s[2] print s[3] print s[4] print s[5] print s[6] print s[7] print s[8] print s[9] print s[10] print s[11] s='taumatawhakatangihangakoauauotamateapokaiwhenuakitanatahu' for c in s: #remember indentation print c length=0 #repeatedly updating this variable for c in s: print c length=length+1 print "There are",length,'characters' len(s) s1='Newton' s2='' for c in s1: s2=c+s2 print s2 element = 'oxygen' print 'first three characters:', element[0:3] print 'last three characters:', element[3:6] print element[:4] print element[4:] 
print element[:] print element[-1] print element[-2] print element[1:-1] odds = [1,3,5,7] print 'odds are:', odds print 'first and last:', odds[0], odds[-1] #last element for number in odds: print number names = ['Newton', 'Darwing','Turing'] print 'names is originally:',names names[1]='Darwin' #we can update an element of list print 'finval value of names:',names name = 'Bell' name[0]='b' #we can't update a character in a string this way odds.append(11) print 'odds after adding a value:',odds del odds[0] print 'odds after removing the first element:',odds odds.reverse() print 'odds after reversing:',odds odds.sort() print 'odds aftger sorting:',odds for i in range(1,11): print i sum = 0 for i in range(1,11): sum += i print sum import glob print glob.glob('*.csv') #collects files that match the pattern glob.glob('*.csv') %matplotlib inline #don't forget the line above! import glob import numpy as np from matplotlib import pyplot as plt #do something here to take the first 3 files from the list. 
data = np.loadtxt(fname='inflammation-01.csv', delimiter=',') plt.figure(figsize=(10.0, 3.0)) plt.subplot(1, 3, 1) plt.ylabel('average') plt.plot(data.mean(axis=0)) plt.subplot(1, 3, 2) plt.ylabel('max') plt.plot(data.max(axis=0)) plt.subplot(1, 3, 3) plt.ylabel('min') plt.plot(data.min(axis=0)) plt.tight_layout() plt.show() v=3 p=1 for i in range(2): p = p*v print p num=37 if num > 100: print 'greater' else: print 'not greater' print 'done' s='taumatawhakatangihangakoauauotamateapokaiwhenuakitanatahu' for c in s: if c in ['a','e','i','o','u']: print c,'is a vowel' else: print c, 'is a consonant' s='msoffice2013' for c in s: if c in ['1','2','3','4','5','6','7','8','9','0']: print c,'is a digit' elif c in ['a','e','i','o','u']: print c,'is a vowel' else: print c, 'is a consonant' if (1>0) and (-1>0): print 'both parts are true' else: print 'one part is not true' if (1<0) or (-1 < 0): print 'at least one test is true' else: print 'failed all tests' if '': print 'empty string is true' if 'word': print 'non-empty string is true' if []: print 'empty list is true' if [1,2,3]: print 'non-empty list is true' if 0: print 'zero is true' if 1: print 'non-zero is true' a=[1,2] del a[0] a.append(0) del a[0] x= a[0] if x : print "x is",x x=1 x+=1 x*=3 print x l=[-3,-1,-2,1,2,3] psum=0 nsum=0 for v in l: #do something here print "total of positive numbers is",psum,"and total of negative numbers is",nsum left=1 right=2 temp=left left=right right=temp print left, right left=1 right=2 left,right = right, left print left, right s='msoffice2013' for c in s: if c in ['1','2','3','4','5','6','7','8','9','0']: print c,'is a digit' elif c in ['a','e','i','o','u']: print c,'is a vowel' else: print c, 'is a consonant' s='windows7' for c in s: if c in ['1','2','3','4','5','6','7','8','9','0']: print c,'is a digit' elif c in ['a','e','i','o','u']: print c,'is a vowel' else: print c, 'is a consonant' def classify_char(s): print s for c in s: if c in ['1','2','3','4','5','6','7','8','9','0']: 
print c,'is a digit' elif c in ['a','e','i','o','u']: print c,'is a vowel' else: print c, 'is a consonant' classify_char('msoffice2013') classify_char('windows7') def double(v): return 2*v print double(2) print double(3) #what happens if you do double([1,2,3])? def say_hello(name): return "Hello "+name+", how are you?" print say_hello("Sung") print say_hello("John") def fence(a,b): #do something here return result print fence('name','*') #expecting *name* def outer(a): #do something here print outer('helium') #expecting hm def fahr_to_kelvin(temp): return ((temp-32)*(5/9))+273.15 print 'freezing point of water:',fahr_to_kelvin(32) print 'freezing point of water:',fahr_to_kelvin(212) (212-32)*(5/9) 5/9 5/9. def fahr_to_kelvin(temp): return ((temp-32)*(5/9.))+273.15 print 'freezing point of water:',fahr_to_kelvin(32) print 'freezing point of water:',fahr_to_kelvin(212) def kelvin_to_celcius(temp): return temp- 273.15 print 'absolute zero in Celcius:', kelvin_to_celcius(0.0) def fahr_to_celcius(temp): temp_k = fahr_to_kelvin(temp) result = kelvin_to_celcius(temp_k) return result print 'freezing point of water in Celcius', fahr_to_celcius(32.0) def analyze(filename): #do something here analyze('inflammation-01.csv') import numpy as np from matplotlib import pyplot as plt import glob filenames = glob.glob('*.csv') filenames= filenames[0:3] for f in filenames: print f #do something here def avg_age(ages): sum = 0.0 for v in ages: sum= sum+v print sum/len(ages) avg_age([10,30,20]) avg_age([10,-30,20]) def avg_age(ages): sum = 0.0 for v in ages: assert v >= 0, "invalid age:"+str(v) sum= sum+v print sum/len(ages) avg_age([10,30,20]) avg_age([10,-30,20]) def seq_check(s): for c in s: assert c in ['a','t','c','g'], "invalid input:"+str(c) print "all good:"+s seq_check('atcg') seq_check('bcgt') seq_check('ctga') $ python readings-02.py inflammation-01.csv $ python readings-02.py inflammation-01.csv |head -4 5.45 5.425 6.1 5.9 #argv-list.py import sys print 'sys.argv is', 
sys.argv $ python argv-list.py sys.argv is ['argv-list.py'] $ python argv-list.py first second third sys.argv is ['argv-list.py', 'first', 'second', 'third'] #readings-01.py import sys import numpy as np def main(): script = sys.argv[0] filename = sys.argv[1] data = np.loadtxt(filename, delimiter=',') for m in data.mean(axis=1): #mean in the same row print m $ cat small-01.csv 0,0,1 0,1,2 $ cat small-02.csv 9,17,15 20,8,5 $ cat small-03.csv 0,2,0 1,1,0 $python readings-01.py small-01.csv #readings-02.py import sys import numpy as np def main(): script = sys.argv[0] filename = sys.argv[1] data = np.loadtxt(filename, delimiter=',') for m in data.mean(axis=1): print m main() # <==== Call the function to do some action!! $ python readings-02.py small-01.csv 0.333333333333 1.0 $ python readings-02.py small-02.csv 13.6666666667 11.0 $ python readings-02.py small-03.csv 0.666666666667 0.666666666667 $ python readings.py small-01.csv small-02.csv small-03.csv 0.333333333333 1.0 13.6666666667 11.0 0.666666666667 0.666666666667 import sys import numpy as np from matplotlib import pyplot as plt import glob def display(files): plt.figure(figsize=(10.0, 3.0)) for f in files: data = np.loadtxt(fname=f, delimiter=',') plt.subplot(1, 3, 1) plt.ylabel('average') plt.plot(data.mean(axis=0)) plt.subplot(1, 3, 2) plt.ylabel('max') plt.plot(data.max(axis=0)) plt.subplot(1, 3, 3) plt.ylabel('min') plt.plot(data.min(axis=0)) plt.tight_layout() plt.show() files = sys.argv[1:] display(files) $ python show_graphs.py inflammation-01.csv inflammation-02.csv inflammation-03.csv inflammation-04.csv <END_TASK>
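The lesson's readings.py builds up to a main() driven by sys.argv. Here is a Python 3 sketch of that pattern; the temporary CSV is a stand-in for small-01.csv so the snippet runs without the lesson files, and passing the argv list explicitly keeps it testable:

```python
import os
import tempfile
import numpy as np

def row_means(filename):
    """Per-patient (per-row) mean of an inflammation CSV, as in readings.py."""
    data = np.loadtxt(filename, delimiter=',')
    return data.mean(axis=1)

def main(argv):
    # argv[0] is always the program name; the rest are the CSV files to process
    for filename in argv[1:]:
        for m in row_means(filename):
            print(m)

# Write a stand-in for small-01.csv so the sketch runs without the lesson files.
with tempfile.NamedTemporaryFile('w', suffix='.csv', delete=False) as f:
    f.write('0,0,1\n0,1,2\n')
    path = f.name

means = row_means(path)
main(['readings.py', path])
os.remove(path)
```

In a real script, the call would be main(sys.argv), so that `python readings.py small-01.csv` prints the per-row means exactly as shown in the lesson transcript.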
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Pairs of Futures (or Spreads) Step2: However, from looking at the p-value for our test, we conclude that soybean meal and soybean oil are not cointegrated. Step3: We still have this compelling economic link, though. Both soybean oil and soybean meal have a root product in soybeans themselves. Let's see if we can suss out any signal by creating a spread between soybean prices and the refined products together, implementing the crush spread. Step4: In the above plot, we offset the refined products by one month to roughly match the time it takes to crush the soybeans, and we set roll='calendar' so that all three contracts are rolled at the same time. Step5: The test confirms that the spread is stationary. And it makes sense, economically, that the crush spread may exhibit some mean reversion due to simple supply and demand. Step6: If we apply a hypothesis test to the two price series, we find that they are indeed cointegrated, corroborating our economic hypothesis. Step7: The next step would be to test whether this signal is viable once we include market impact by trading EQR against the futures contract as a pair in a backtest. Step8: We have a high positive contemporaneous correlation, but a slightly negative lagged correlation. Step9: And when we form a linear regression of the excess returns of XOM on the lagged futures returns, the coefficient is significant and negative. This and the above correlations indicate that there might be a slight overreaction to the shift in oil prices. Step10: A coefficient of around $-0.02$ on the lagged futures return implies that if the oil price increased by 1% yesterday, the pure-play refiner is expected to go down by $2$ bp today.
This would require more testing to formulate a functioning model, but it indicates that there might be some signal in drawing out the underreaction or overreaction of equity prices to changes in futures prices.
<ASSISTANT_TASK:> Python Code: import numpy as np import pandas as pd import statsmodels.api as sm from statsmodels.tsa.stattools import coint, adfuller import matplotlib.pyplot as plt from quantopian.research.experimental import continuous_future, history soy_meal_mult = symbols('SMF17').multiplier soy_oil_mult = symbols('BOF17').multiplier soybean_mult = symbols('SYF17').multiplier sm_future = continuous_future('SM', offset=0, roll='calendar', adjustment='mul') sm_price = history(sm_future, fields='price', start_date='2014-01-01', end_date='2017-01-01') bo_future = continuous_future('BO', offset=0, roll='calendar', adjustment='mul') bo_price = history(bo_future, fields='price', start_date='2014-01-01', end_date='2017-01-01') sm_price.plot() bo_price.multiply(soy_oil_mult//soy_meal_mult).plot() plt.ylabel('Price') plt.legend(['Soybean Meal', 'Soybean Oil']); print 'p-value: ', coint(sm_price, bo_price)[1] sm_future = continuous_future('SM', offset=1, roll='calendar', adjustment='mul') sm_price = history(sm_future, fields='price', start_date='2014-01-01', end_date='2017-01-01') bo_future = continuous_future('BO', offset=1, roll='calendar', adjustment='mul') bo_price = history(bo_future, fields='price', start_date='2014-01-01', end_date='2017-01-01') sy_future = continuous_future('SY', offset=0, roll='calendar', adjustment='mul') sy_price = history(sy_future, fields='price', start_date='2014-01-01', end_date='2017-01-01') crush = sy_price - (sm_price + bo_price) crush.plot() plt.ylabel('Crush Spread'); print 'p-value for stationarity: ', adfuller(crush)[1] ty_future = continuous_future('TY', offset=0, roll='calendar', adjustment='mul') ty_prices = history(ty_future, fields='price', start_date='2009-01-01', end_date='2017-01-01') ty_prices.name = ty_future.root_symbol equities = symbols(['EQR', 'SPY']) equity_prices = get_pricing(equities, fields='price', start_date='2009-01-01', end_date='2017-01-01') equity_prices.columns = map(lambda x: x.symbol, 
equity_prices.columns) data = pd.concat([ty_prices, equity_prices], axis=1) data = data.dropna() data.plot() plt.legend(); print 'Cointegration test p-value: ', coint(data['TY'], data['EQR'])[1] cl_future = continuous_future('CL', offset=0, roll='calendar', adjustment='mul') cl_prices = history(cl_future, fields='price', start_date='2007-01-01', end_date='2017-04-06') cl_prices.name = cl_future.root_symbol equities = symbols(['XOM', 'SPY']) equity_prices = get_pricing(equities, fields='price', start_date='2007-01-01', end_date='2017-04-06') equity_prices.columns = map(lambda x: x.symbol, equity_prices.columns) data = pd.concat([cl_prices, equity_prices],axis=1) data = data.dropna() #Take log of prices data['stock_ret'] = np.log(data['XOM']).diff() data['spy_ret'] = np.log(data['SPY']).diff() data['futures_ret'] = np.log(data['CL']).diff() # Compute excess returns in excess of SPY data['stock_excess'] = data['stock_ret'] - data['spy_ret'] #Compute lagged futures returns data['futures_lag_diff'] = data['futures_ret'].shift(1) data = data[2:].dropna() data.tail(5) #Compute contemporaneous correlation contemp_corr = data['stock_excess'].shift(1).corr(data['futures_lag_diff']) #Compute correlation of excess stock returns with lagged futures returns lagged_corr = data['stock_excess'].corr(data['futures_lag_diff']) print 'Contemporaneous correlation: ', contemp_corr print 'Lagged correlation : ', lagged_corr result = sm.OLS(data['stock_excess'], sm.add_constant(data['futures_lag_diff'])).fit() result.summary2() data['futures_lag_diff'].plot(alpha=0.50, legend=True) data['stock_excess'].plot(alpha=0.50, legend=True); <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Language Translation Step3: Explore the Data Step7: Implement Preprocessing Function Step9: Preprocess all the data and save it Step11: Check Point Step13: Check the Version of TensorFlow and Access to GPU Step16: Build the Neural Network Step19: Process Decoding Input Step22: Encoding Step25: Decoding - Training Step28: Decoding - Inference Step31: Build the Decoding Layer Step34: Build the Neural Network Step35: Neural Network Training Step37: Build the Graph Step40: Train Step42: Save Parameters Step44: Checkpoint Step47: Sentence to Sequence Step49: Translate
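Before the solution code, here is a minimal sketch of the Step 7 preprocessing idea: converting sentences to word-id sequences and appending an `<EOS>` marker to each target sentence. The tiny vocabularies below are made up for illustration; the real project builds them from the corpus.

```python
# Toy vocabularies (illustrative only).
source_vocab_to_int = {'new': 0, 'jersey': 1, 'is': 2, 'cold': 3}
target_vocab_to_int = {'<EOS>': 0, 'new': 1, 'jersey': 2, 'est': 3, 'froid': 4}

def to_ids(text, vocab, eos=None):
    # One id list per line; optionally append the <EOS> id to each line.
    ids = [[vocab[w] for w in line.split()] for line in text.split('\n')]
    if eos is not None:
        ids = [line + [eos] for line in ids]
    return ids

src = to_ids('new jersey is cold', source_vocab_to_int)
tgt = to_ids('new jersey est froid', target_vocab_to_int,
             eos=target_vocab_to_int['<EOS>'])
print(src)  # [[0, 1, 2, 3]]
print(tgt)  # [[1, 2, 3, 4, 0]]
```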
<ASSISTANT_TASK:> Python Code: DON'T MODIFY ANYTHING IN THIS CELL import helper import problem_unittests as tests source_path = 'data/small_vocab_en' target_path = 'data/small_vocab_fr' source_text = helper.load_data(source_path) target_text = helper.load_data(target_path) view_sentence_range = (0, 10) DON'T MODIFY ANYTHING IN THIS CELL import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()}))) sentences = source_text.split('\n') word_counts = [len(sentence.split()) for sentence in sentences] print('Number of sentences: {}'.format(len(sentences))) print('Average number of words in a sentence: {}'.format(np.average(word_counts))) print() print('English sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) print() print('French sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int): Convert source and target text to proper word ids :param source_text: String that contains all the source text. :param target_text: String that contains all the target text. :param source_vocab_to_int: Dictionary to go from the source words to an id :param target_vocab_to_int: Dictionary to go from the target words to an id :return: A tuple of lists (source_id_text, target_id_text) # TODO: Implement Function x = [[source_vocab_to_int.get(word, 0) for word in sentence.split()] \ for sentence in source_text.split('\n')] y = [[target_vocab_to_int.get(word, 0) for word in sentence.split()] \ for sentence in target_text.split('\n')] source_id_text = [] target_id_text = [] found in a forum post. necessary? 
for i in range(len(x)): n1 = len(x[i]) n2 = len(y[i]) n = min(n1, n2) if abs(n1 - n2) <= 0.3 * n and n1 <= 17 and n2 <= 17: source_id_text.append(x[i]) target_id_text.append(y[i] + [target_vocab_to_int['<EOS>']]) return (source_id_text, target_id_text) DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_text_to_ids(text_to_ids) DON'T MODIFY ANYTHING IN THIS CELL helper.preprocess_and_save_data(source_path, target_path, text_to_ids) DON'T MODIFY ANYTHING IN THIS CELL import numpy as np import helper (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() DON'T MODIFY ANYTHING IN THIS CELL from distutils.version import LooseVersion import warnings import tensorflow as tf # Check TensorFlow Version assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer' print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found. Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) def model_inputs(): Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate, keep probability) # TODO: Implement Function input_text = tf.placeholder(tf.int32, [None, None], name="input") target_text = tf.placeholder(tf.int32, [None, None], name="targets") learning_rate = tf.placeholder(tf.float32, name="learning_rate") keep_prob = tf.placeholder(tf.float32, name="keep_prob") return input_text, target_text, learning_rate, keep_prob DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_model_inputs(model_inputs) def process_decoding_input(target_data, target_vocab_to_int, batch_size): Preprocess target data for decoding :param target_data: Target Placeholder :param target_vocab_to_int: Dictionary to go from the target words to an id :param batch_size: Batch Size :return: Preprocessed target data # TODO: Implement Function ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1]) dec_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1) return dec_input DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_process_decoding_input(process_decoding_input) def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob): Create encoding layer :param rnn_inputs: Inputs for the RNN :param rnn_size: RNN Size :param num_layers: Number of layers :param keep_prob: Dropout keep probability :return: RNN state # TODO: Implement Function enc_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size)] * num_layers) enc_cell_drop = tf.contrib.rnn.DropoutWrapper(enc_cell, output_keep_prob=keep_prob) _, enc_state = tf.nn.dynamic_rnn(enc_cell_drop, rnn_inputs, dtype=tf.float32) return enc_state DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_encoding_layer(encoding_layer) def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob): Create a decoding layer for training :param encoder_state: Encoder State :param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input :param sequence_length: Sequence Length :param decoding_scope: TensorFlow Variable Scope for decoding :param output_fn: Function to apply the output layer :param keep_prob: Dropout keep probability :return: Train Logits # TODO: Implement Function train_dec_fm = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state) # apply dropout to the decoder cell during training drop_cell = tf.contrib.rnn.DropoutWrapper(dec_cell, output_keep_prob=keep_prob) train_pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(drop_cell, train_dec_fm, dec_embed_input, sequence_length, scope=decoding_scope) train_logits = output_fn(train_pred) return train_logits DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_decoding_layer_train(decoding_layer_train) def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob): Create a decoding layer for inference :param encoder_state: Encoder state :param dec_cell: Decoder RNN Cell :param dec_embeddings: Decoder embeddings :param start_of_sequence_id: GO ID :param end_of_sequence_id: EOS Id :param maximum_length: Maximum length of :param vocab_size: Size of vocabulary :param decoding_scope: TensorFlow Variable Scope for decoding :param output_fn: Function to apply the output layer :param keep_prob: Dropout keep probability :return: Inference Logits # TODO: Implement Function infer_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference( output_fn, encoder_state, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size) inference_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, infer_decoder_fn, scope=decoding_scope) # dropout is typically disabled at inference time, so keep_prob is not applied here return inference_logits DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_decoding_layer_infer(decoding_layer_infer) def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size,
sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob): Create decoding layer :param dec_embed_input: Decoder embedded input :param dec_embeddings: Decoder embeddings :param encoder_state: The encoded state :param vocab_size: Size of vocabulary :param sequence_length: Sequence Length :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :param keep_prob: Dropout keep probability :return: Tuple of (Training Logits, Inference Logits) # TODO: Implement Function dec_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size)] * num_layers) with tf.variable_scope('decoding') as decoding_scope: output_fn = lambda logits: tf.contrib.layers.fully_connected(logits, vocab_size, None, scope=decoding_scope) train_logits = decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) with tf.variable_scope('decoding', reuse=True) as decoding_scope: inference_logits = decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, target_vocab_to_int['<GO>'], target_vocab_to_int['<EOS>'], sequence_length - 1, vocab_size, decoding_scope, output_fn, keep_prob) return train_logits, inference_logits DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_decoding_layer(decoding_layer) def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int): Build the Sequence-to-Sequence part of the neural network :param input_data: Input placeholder :param target_data: Target placeholder :param keep_prob: Dropout keep probability placeholder :param batch_size: Batch Size :param sequence_length: Sequence Length :param source_vocab_size: Source vocabulary size :param target_vocab_size: Target vocabulary size :param enc_embedding_size: Encoder embedding size :param dec_embedding_size: Decoder embedding size :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :return: Tuple of (Training Logits, Inference Logits) # TODO: Implement Function enc_embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, enc_embedding_size) encoder_state = encoding_layer(enc_embed_input, rnn_size, num_layers, keep_prob) dec_input = process_decoding_input(target_data, target_vocab_to_int, batch_size) dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size])) dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input) return decoding_layer(dec_embed_input, dec_embeddings, encoder_state, target_vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob) DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_seq2seq_model(seq2seq_model) # Hyperparameters below are reasonable starting values; tune as needed. # Number of Epochs epochs = 4 # Batch Size batch_size = 256 # RNN Size rnn_size = 256 # Number of Layers num_layers = 2 # Embedding Size encoding_embedding_size = 128 decoding_embedding_size = 128 # Learning Rate learning_rate = 0.001 # Dropout Keep Probability keep_probability = 0.75 DON'T MODIFY ANYTHING IN THIS
CELL save_path = 'checkpoints/dev' (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() max_target_sentence_length = max([len(sentence) for sentence in source_int_text]) train_graph = tf.Graph() with train_graph.as_default(): input_data, targets, lr, keep_prob = model_inputs() sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length') input_shape = tf.shape(input_data) train_logits, inference_logits = seq2seq_model( tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int), encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int) tf.identity(inference_logits, 'logits') with tf.name_scope("optimization"): # Loss function cost = tf.contrib.seq2seq.sequence_loss( train_logits, targets, tf.ones([input_shape[0], sequence_length])) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients) DON'T MODIFY ANYTHING IN THIS CELL import time def get_accuracy(target, logits): Calculate accuracy max_seq = max(target.shape[1], logits.shape[1]) if max_seq - target.shape[1]: target = np.pad( target_batch, [(0,0),(0,max_seq - target_batch.shape[1]), (0,0)], 'constant') if max_seq - batch_train_logits.shape[1]: logits = np.pad( logits, [(0,0),(0,max_seq - logits.shape[1]), (0,0)], 'constant') return np.mean(np.equal(target, np.argmax(logits, 2))) train_source = source_int_text[batch_size:] train_target = target_int_text[batch_size:] valid_source = helper.pad_sentence_batch(source_int_text[:batch_size]) valid_target = helper.pad_sentence_batch(target_int_text[:batch_size]) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for 
epoch_i in range(epochs): for batch_i, (source_batch, target_batch) in enumerate( helper.batch_data(train_source, train_target, batch_size)): start_time = time.time() _, loss = sess.run( [train_op, cost], {input_data: source_batch, targets: target_batch, lr: learning_rate, sequence_length: target_batch.shape[1], keep_prob: keep_probability}) batch_train_logits = sess.run( inference_logits, {input_data: source_batch, keep_prob: 1.0}) batch_valid_logits = sess.run( inference_logits, {input_data: valid_source, keep_prob: 1.0}) train_acc = get_accuracy(target_batch, batch_train_logits) valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits) end_time = time.time() print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}' .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss)) # Save Model saver = tf.train.Saver() saver.save(sess, save_path) print('Model Trained and Saved') DON'T MODIFY ANYTHING IN THIS CELL # Save parameters for checkpoint helper.save_params(save_path) DON'T MODIFY ANYTHING IN THIS CELL import tensorflow as tf import numpy as np import helper import problem_unittests as tests _, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess() load_path = helper.load_params() def sentence_to_seq(sentence, vocab_to_int): Convert a sentence to a sequence of ids :param sentence: String :param vocab_to_int: Dictionary to go from the words to an id :return: List of word ids # TODO: Implement Function, with unknown words falling back to the '<UNK>' token return [vocab_to_int.get(word, vocab_to_int['<UNK>']) for word in sentence.lower().split()] DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_sentence_to_seq(sentence_to_seq) translate_sentence = 'he saw a old yellow truck .'
DON'T MODIFY ANYTHING IN THIS CELL translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load saved model loader = tf.train.import_meta_graph(load_path + '.meta') loader.restore(sess, load_path) input_data = loaded_graph.get_tensor_by_name('input:0') logits = loaded_graph.get_tensor_by_name('logits:0') keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0] print('Input') print(' Word Ids: {}'.format([i for i in translate_sentence])) print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence])) print('\nPrediction') print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)])) print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)])) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: A Convenience Function Step2: The Assignment Step3: Copy out the status column into a slice, then drop it from the main dataframe. Always verify you properly executed the drop by double checking (printing out the resulting operation)! Many people forget to set the right axis here. Step4: With the labels safely extracted from the dataset, replace any nan values with the mean feature / column value Step5: Do train_test_split. Use the same variable names as on the EdX platform in the reading material, but set the random_state=7 for reproducibility, and keep the test_size at 0.5 (50%). Step6: Experiment with the basic SKLearn preprocessing scalers. We know that the features consist of different units mixed in together, so it might be reasonable to assume feature scaling is necessary. Print out a description of the dataset, post transformation. Recall Step7: Dimensionality Reduction Step8: Train your model against data_train, then transform both data_train and data_test using your model. You can save the results right back into the variables themselves. Step9: Implement and train KNeighborsClassifier on your projected 2D training data here. You can name your variable knmodel. You can use any K value from 1 - 15, so play around with it and see what results you can come up with. Your goal is to find a good balance where you aren't too specific (low-K), nor are you too general (high-K). You should also experiment with how changing the weights parameter affects the results. Step10: Be sure to always keep the domain of the problem in mind! It's WAY more important to errantly classify a benign tumor as malignant, and have it removed, than to incorrectly leave a malignant tumor, believing it to be benign, and then having the patient progress in cancer.
Since the UDF weights don't give you any class information, the only way to introduce this data into SKLearn's KNN Classifier is by "baking" it into your data. For example, randomly reducing the ratio of benign samples compared to malignant samples from the training set.
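That downsampling idea can be sketched as follows. The labels (2 = benign, 4 = malignant, as in the starter code's color map) and the 2:1 target ratio are illustrative assumptions, not a prescribed recipe.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy training set: 90 benign (2) and 10 malignant (4) samples.
y_train = np.array([2] * 90 + [4] * 10)
X_train = rng.normal(size=(100, 5))

benign = np.where(y_train == 2)[0]
malignant = np.where(y_train == 4)[0]

# Keep only enough benign samples for a 2:1 benign/malignant ratio,
# "baking" a bias toward malignant classifications into the data.
keep_benign = rng.choice(benign, size=2 * len(malignant), replace=False)
keep = np.concatenate([keep_benign, malignant])

X_bal, y_bal = X_train[keep], y_train[keep]
print(X_bal.shape, (y_bal == 2).sum(), (y_bal == 4).sum())  # (30, 5) 20 10
```

Training KNN on `X_bal`/`y_bal` instead of the full set makes malignant neighbors relatively more common, which is the effect described above.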
<ASSISTANT_TASK:> Python Code: import random, math import pandas as pd import numpy as np import scipy.io from mpl_toolkits.mplot3d import Axes3D import matplotlib.pyplot as plt plt.style.use('ggplot') # Look Pretty # Leave this alone until indicated: Test_PCA = False def plotDecisionBoundary(model, X, y): print("Plotting...") fig = plt.figure() ax = fig.add_subplot(111) padding = 0.1 resolution = 0.1 #(2 for benign, 4 for malignant) colors = {2:'royalblue', 4:'lightsalmon'} # Calculate the boundaries x_min, x_max = X[:, 0].min(), X[:, 0].max() y_min, y_max = X[:, 1].min(), X[:, 1].max() x_range = x_max - x_min y_range = y_max - y_min x_min -= x_range * padding y_min -= y_range * padding x_max += x_range * padding y_max += y_range * padding # Create a 2D Grid Matrix. The values stored in the matrix # are the predictions of the class at said location xx, yy = np.meshgrid(np.arange(x_min, x_max, resolution), np.arange(y_min, y_max, resolution)) # What class does the classifier say? Z = model.predict(np.c_[xx.ravel(), yy.ravel()]) Z = Z.reshape(xx.shape) # Plot the contour map plt.contourf(xx, yy, Z, cmap=plt.cm.seismic) plt.axis('tight') # Plot your testing points as well... for label in np.unique(y): indices = np.where(y == label) plt.scatter(X[indices, 0], X[indices, 1], c=colors[label], alpha=0.8) p = model.get_params() plt.title('K = ' + str(p['n_neighbors'])) plt.show() # .. your code here .. # .. your code here .. # .. your code here .. # .. your code here .. # .. your code here .. model = None if Test_PCA: print('Computing 2D Principal Components') # TODO: Implement PCA here. Save your model into the variable 'model'. # You should reduce down to two dimensions. # .. your code here .. else: print('Computing 2D Isomap Manifold') # TODO: Implement Isomap here. Save your model into the variable 'model' # Experiment with K values from 5-10. # You should reduce down to two dimensions. # .. your code here .. # .. your code here .. # .. your code here .. # ..
your code changes above .. plotDecisionBoundary(knmodel, X_test, y_test) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Softmax Regressions Step2: We describe these interacting operations by manipulating symbolic variables. Let's create one Step3: $\mathrm{x}$ isn't a specific value. It's a placeholder, a value that we'll input when we ask TensorFlow to run a computation. We want to be able to input any number of MNIST images, each flattened into a 784-dimensional vector. We represent this as a 2-D tensor of floating-point numbers, with a shape $\mathrm{[None, 784]}$. (<mark>Here None means that a dimension can be of any length.</mark>) Step4: We create these Variables by giving tf.Variable the initial value of the Variable Step5: First, we multiply $\mathrm{x}$ by $\mathrm{W}$ with the expression tf.matmul(x, W). This is flipped from when we multiplied them in our equation, where we had $\mathrm{Wx}$ , as a small trick to deal with $\mathrm{x}$ being a 2D tensor with multiple inputs. We then add $\mathrm{b}$, and finally apply tf.nn.softmax. Step6: Then we can implement the cross-entropy function, $\mathrm{-\sum\limits_{i} {y'_{i}\space log(y_i)}}$ Step7: First, tf.log computes the logarithm of each element of $\mathrm{y}$. Next, we multiply each element of y_ with the corresponding element of tf.log(y). Then tf.reduce_sum adds the elements in the second dimension of y, due to the reduction_indices=[1] parameter. Finally, tf.reduce_mean computes the mean over all the examples in the batch. Step8: In this case, we ask TensorFlow to minimize cross_entropy using the gradient descent algorithm with a learning rate of 0.5. Gradient descent is a simple procedure, where TensorFlow simply shifts each variable a little bit in the direction that reduces the cost. 
But TensorFlow also provides many other optimization algorithms Step9: We first have to create an operation to initialize the variables we created Step10: Let's train -- we'll run the training step 1000 times! Step11: Each step of the loop, we get a "batch" of one hundred random data points from our training set. We run train_step feeding in the batches data to replace the placeholders. Step12: That gives us a list of booleans. To determine what fraction are correct, we cast to floating point numbers and then take the mean. For example, [True, False, True, True] would become [1,0,1,1] which would become 0.75. Step13: Finally, we ask for our accuracy on our test data.
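The softmax, cross-entropy, and accuracy steps described above can be sketched in plain NumPy. This is a toy mirror of the TensorFlow ops, not the tutorial's code:

```python
import numpy as np

def softmax(z):
    # Subtract the row max for numerical stability; softmax is shift-invariant.
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# One flattened 784-pixel "image" with zero-initialised weights,
# mirroring y = softmax(xW + b).
rng = np.random.default_rng(0)
x = rng.random((1, 784))
W = np.zeros((784, 10))
b = np.zeros(10)
y = softmax(x @ W + b)  # uniform 0.1 per class with zero weights

# Cross-entropy -sum_i y'_i * log(y_i) for a one-hot label.
y_true = np.zeros((1, 10))
y_true[0, 3] = 1.0
cross_entropy = -np.sum(y_true * np.log(y))
print(cross_entropy)  # log(10) ≈ 2.3026 for a uniform prediction

# Accuracy: cast the boolean matches to floats and take the mean.
correct = np.equal(np.array([3, 1, 4, 4]), np.array([3, 0, 4, 4]))
accuracy = correct.astype(np.float32).mean()
print(accuracy)  # [True, False, True, True] -> 0.75
```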
<ASSISTANT_TASK:> Python Code: from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets("MNIST_data", one_hot=True) import tensorflow as tf x = tf.placeholder(tf.float32, [None, 784]) x W = tf.Variable(tf.zeros([784, 10])) b = tf.Variable(tf.zeros([10])) y = tf.nn.softmax(tf.matmul(x, W) + b) y_ = tf.placeholder(tf.float32, [None, 10]) cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1])) train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy) sess = tf.InteractiveSession() tf.global_variables_initializer().run() for _ in range(1000): batch_xs, batch_ys = mnist.train.next_batch(100) sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys}) correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels})) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Create the Training set Step2: One-hot encode Step3: Create the Model Step4: Train the Model Step5: Generate new sequence
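The Step 2 one-hot encoding works like this on a toy corpus. This is a sketch of the encoding only; the real notebook uses the full book text with maxlen = 40:

```python
import numpy as np

text = 'harry'
chars = sorted(set(text))                 # ['a', 'h', 'r', 'y']
char_indices = {c: i for i, c in enumerate(chars)}

maxlen = 3
sentence, next_char = text[:maxlen], text[maxlen]   # 'har' -> 'r'

# One boolean matrix per sequence, one row per time step,
# one column per character in the vocabulary.
x = np.zeros((1, maxlen, len(chars)), dtype=bool)
y = np.zeros((1, len(chars)), dtype=bool)
for t, c in enumerate(sentence):
    x[0, t, char_indices[c]] = 1
y[0, char_indices[next_char]] = 1

print(x.astype(int)[0])   # rows for 'h', 'a', 'r'
print(y.astype(int)[0])   # [0 0 1 0] -> 'r'
```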
<ASSISTANT_TASK:> Python Code: # http://www.glozman.com/textpages.html # Harry Potter 1 - Sorcerer's Stone.txt # Harry Potter 2 - Chamber of Secrets.txt # Harry Potter 3 - The Prisoner Of Azkaban.txt # Harry Potter 4 - The Goblet Of Fire.txt # Harry Potter 5 - Order of the Phoenix.txt # Harry Potter 6 - The Half Blood Prince.txt # Harry Potter 7 - Deathly Hollows.txt with open("texts/HarryPotter1-SorcerersStone.txt", "r") as f: text = f.read().lower() chars = sorted(list(set(text))) char_indices = dict((c, i) for i, c in enumerate(chars)) indices_char = dict((i, c) for i, c in enumerate(chars)) 'corpus length: {} total chars: {}'.format(len(text), len(chars)) print(text[:100]) maxlen = 40 step = 3 sentences = [] next_chars = [] for i in range(0, len(text) - maxlen, step): sentences.append(text[i: i+maxlen]) next_chars.append(text[i + maxlen]) print("sequences: ", len(sentences)) print(sentences[0]) print(sentences[1]) print(next_chars[0]) import numpy as np X = np.zeros((len(sentences), maxlen, len(chars)), dtype=np.bool) y = np.zeros((len(sentences), len(chars)), dtype=np.bool) for i, sentence in enumerate(sentences): for t, char in enumerate(sentence): X[i, t, char_indices[char]] = 1 y[i, char_indices[next_chars[i]]] = 1 from keras.models import Sequential from keras.layers import Dense, Activation from keras.layers import LSTM from keras.optimizers import RMSprop model = Sequential() model.add(LSTM(256, recurrent_dropout=0.0, input_shape=(maxlen, len(chars)), return_sequences=True)) model.add(LSTM(256, recurrent_dropout=0.0, input_shape=(maxlen, len(chars)), return_sequences=True)) model.add(LSTM(256, recurrent_dropout=0.0, input_shape=(maxlen, len(chars)))) model.add(Dense(2*len(chars))) model.add(Dense(len(chars))) model.add(Activation('softmax')) optimizer = RMSprop() model.compile(loss='categorical_crossentropy', optimizer=optimizer) model.summary() epochs = 100 batch_size = 512 model.fit(X, y, batch_size=batch_size, epochs=epochs) # 
model.save_weights("potter_lstm_weights_0568.h5") model.load_weights("potter_lstm_weights_0568.h5") import random def sample(preds, temperature=1.0): preds = np.asarray(preds).astype('float64') preds = np.log(preds) / temperature exp_preds = np.exp(preds) preds = exp_preds / np.sum(exp_preds) probas = np.random.multinomial(1, preds, 1) return np.argmax(probas) import sys start_index = random.randint(0, len(text) - maxlen - 1) for diversity in [0.2, 0.5, 1.0]: print() print('----- diversity:', diversity) generated = '' sentence = text[start_index: start_index + maxlen] generated += sentence print('----- Generating with seed: "' + sentence + '"') sys.stdout.write(generated) for i in range(400): x = np.zeros((1, maxlen, len(chars))) for t, char in enumerate(sentence): x[0, t, char_indices[char]] = 1. preds = model.predict(x, verbose=0)[0] next_index = sample(preds, diversity) next_char = indices_char[next_index] generated += next_char sentence = sentence[1:] + next_char sys.stdout.write(next_char) sys.stdout.flush() print() <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Data Step2: Add these directories to PYTHONPATH so that we can import them Step3: Import our functions to read Extreme-Repository data format and to transform it to FastText format Step4: Read exrepo format, transform it to FastText format (stored in ./data/LSHTC-FT/*.txt) Step5: Generate fasttext commands to be run. They will generate our MIPS dataset. Step6: Runs bash command prepared above Step7: Tests Step8: Our imports Step9: Some basic logging Step18: Simple utils functions Step19: Create the database we'll store the results in Step20: Load the generated dataset Step21: We do not run on all queries, but on random subset of lIMIT queries Step22: Load groundtruth Step23: Logging Step24: Proxies Step25: Round 1 Step26: Kmeans Step27: Round 2 Step28: Kmeans Step29: Test ALSH and quantization Step30: Baseline Step31: Full Results
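The precision-at-1 metric these experiments report can be sketched in a few lines. The ground-truth sets below are toy values, not the LSHTC data: the top-ranked prediction counts as a hit if it appears in that query's label set.

```python
# Ground-truth label sets per query, and each query's top-1 prediction.
G = [{3, 7}, {1}, {4, 5}]
top1 = [7, 2, 5]

p1 = sum(int(pred in gold) for pred, gold in zip(top1, G)) / len(G)
print(p1)  # 2 of 3 hits -> 0.666...
```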
<ASSISTANT_TASK:> Python Code: cd .. import sys import os sys.path.append(os.path.abspath('./faiss/')) sys.path.append(os.path.abspath('./python/')) from experiments.data import get_data from misc.utils import to_ft, load_sift X, Y, words_mask, labels_mask = get_data('./data/LSHTC', 'train', min_words=3, min_labels=3) to_ft(X, Y, './data/LSHTC-FT/train.txt') X, Y, *_ = get_data('./data/LSHTC', 'test', words_mask=words_mask, labels_mask=labels_mask) to_ft(X, Y, './data/LSHTC-FT/test.txt') def make_cmd(*args, **kwargs): args = ' '.join(args) opts = ' '.join(f'-{k} {v}' for k, v in kwargs.items()) cmd = f'./fastText/fasttext {args} {opts}' return cmd.split() train_cmd = make_cmd('supervised', input = './data/LSHTC-FT/train.txt', output = './data/LSHTC-FT/model.ft', minCount = 5, minCountLabel = 5, lr = 0.1, lrUpdateRate = 100, dim = 256, ws = 5, epoch = 25, neg = 25, loss = 'ns', thread = 8, saveOutput = 1) generate_cmd = make_cmd('to-fvecs', './data/LSHTC-FT/model.ft.bin', './data/LSHTC-FT/test.txt', './data/LSHTC-FT/fvecs') import subprocess subprocess.call(train_cmd) subprocess.call(generate_cmd) import datetime import json import logging import os import sys import time import uuid import numpy as np from contextlib import contextmanager from tinydb import TinyDB, where sys.path.append(os.path.abspath('./faiss/')) sys.path.append(os.path.abspath('./python/')) import faiss import mips from experiments.data import get_data from misc.utils import to_ft, load_sift logging.basicConfig(level=logging.DEBUG) logger = logging.getLogger() @contextmanager def timer(): Simple context manager to convieniently measure times between enter and exit class Clock: elapsed = 0 t0 = time.time() yield Clock Clock.elapsed = time.time() - t0 def search(idx, data, k): Search top-k items in `data` using `idx` This is needed because currently our wrappers return 1-D arrays, so we need to reshape them D, I = idx.search(data, k) D, I = D.reshape(-1, k), I.reshape(-1, k) return D, I def 
compute_p1(G, I): Compute precision-at-1 for groundtruth `G` and predicted indices `I` p1 = 0. for i, item in enumerate(I): p1 += float(int(item) in G[i]) p1 /= len(G) return p1 def test_idx(IdxClass, params, xb, xq, G, k=100): Train and test the given Index class with given params Use the provided base, query, and groundtruth vectors. Index will predict top-`k` entries. In case of failure, the exception is returned as string This function returns a Report dictionary try: idx = IdxClass(**params) with timer() as train_t: idx.train(xb) idx.add(xb) with timer() as search_t: _, I = search(idx, xq, k) p1 = compute_p1(G, I[:, 0]) report = make_report(IdxClass, params, p1, train_t.elapsed, search_t.elapsed) except Exception as e: print('FAILED: ' + str(e)) report = str(e) return report def now(): Helper function to format current timestamp return datetime.datetime.fromtimestamp(time.time()).strftime("%d-%m-%y %H:%M:%S") def make_report(IdxClass, params, p1, train_t, search_t): Create a Report dictionary for given set of parameters return { 'ID': uuid.uuid4().hex, 'algo': IdxClass.__name__, 'params': params, 'p1': p1, 'train_t': train_t, 'search_t': search_t } def add_result(r): Add result `r` to the database if isinstance(r, dict): algo, params, p1, t = r['algo'], r['params'], r['p1'], r['search_t'] rep = f'(params={params}, p1={p1:.2f}, t={t:.2f})' else: rep = r logger.info(f'Adding: {rep}') def result_adder(doc): doc['results'].append(r) DB.update(result_adder, where('ID') == ID) def test(IdxClass, **params): Even higher-level wrapper for testing return test_idx(IdxClass, params, xb, xq, G, k=100) # !rm ./data/results/ad-hoc-db.json DB = TinyDB('./data/results/ad-hoc-db.json') ID = uuid.uuid4().hex info = dict( ID = ID, name = 'ad-hoc-results', date = now(), results = [] ) DB.insert(info) xq = load_sift('./data/LSHTC-FT/fvecs.hid.fvecs', dtype=np.float32) xb = load_sift('./data/LSHTC-FT/fvecs.wo.fvecs', dtype=np.float32) _n, d, c = xq.shape[0], xq.shape[1], xb.shape[0] 
LIMIT = 250_000 inds = np.random.choice(np.arange(_n), LIMIT, replace=False) xq = xq[inds, :] xq = np.copy(np.ascontiguousarray(xq), order='C') xb = np.copy(np.ascontiguousarray(xb), order='C') n = xq.shape[0] G = [] for line in open('./data/LSHTC-FT/fvecs.labels.txt'): G.append({int(y) for y in line.split()}) G = [G[idx] for idx in inds] logger.info(f"Loaded dataset of {_n:_}, {d:_}-dimensional queries (examples), but limiting to {LIMIT:_} queries") logger.info(f"The dataset contains {c:_} classes, and more than one class can be positive") class IVFIndex: def __init__(self, d, size, nprobe): self.index = faiss.index_factory(d, f"IVF{size},Flat", faiss.METRIC_INNER_PRODUCT) self.index.nprobe = nprobe def __getattr__(self, name): return getattr(self.index, name) class KMeansIndex: def __init__(self, d, layers, nprobe, m, U): self.aug = mips.MipsAugmentationShrivastava(d, m, U) self.index = mips.IndexHierarchicKmeans(d, layers, nprobe, self.aug, False) def __getattr__(self, name): return getattr(self.index, name) class FlatIndex: def __init__(self, d): self.index = faiss.IndexFlatIP(d) def __getattr__(self, name): return getattr(self.index, name) class AlshIndex: def __init__(self, d: int, L: int, K: int, r: int, m: int, U: int): self.aug = mips.MipsAugmentationShrivastava(d, m, U) self.index = mips.AlshIndex(d, L, K, r, self.aug) def __getattr__(self, name): return getattr(self.index, name) class QuantIndex: def __init__(self, dim: int, subspace_count: int, centroid_count: int): self.index = mips.IndexSubspaceQuantization(dim, subspace_count, centroid_count) def __getattr__(self, name): return getattr(self.index, name) for size in [4096]: for nprobe in [1, 16, 32, 64, 128]: add_result( test( IVFIndex, d=d, size=size, nprobe=nprobe)) for layers in [2]: for nprobe in [1, 16, 32, 64, 128]: add_result( test( KMeansIndex, d=d, layers=layers, nprobe=nprobe, m=5, U=0.85)) for size in [4096]: for nprobe in [256, 512]: add_result( test( IVFIndex, d=d, size=size,
nprobe=nprobe)) for layers in [2]: for nprobe in [256, 512]: add_result( test( KMeansIndex, d=d, layers=layers, nprobe=nprobe, m=5, U=0.85)) for layers in [3]: for nprobe in [1, 16, 32, 64, 128]: add_result( test( KMeansIndex, d=d, layers=layers, nprobe=nprobe, m=5, U=0.85)) for L in [2,4,8,16,32]: for K in [2, 4, 8, 16, 32]: add_result( test( AlshIndex, d=d, L=L, K=K, m=5, r=10, U=0.85)) for subspace in [2,4,8]: for centroid in [2, 4, 8, 16, 32]: add_result( test( QuantIndex, dim=d, subspace_count=subspace, centroid_count=centroid)) add_result( test( FlatIndex, d=d)) DB.all() <END_TASK>
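The benchmark above times training and search with a `timer()` context manager that is imported elsewhere in the notebook and never shown in this record; a minimal stand-in could look like the sketch below. The helper class name `_TimerResult` is an assumption — only the `with timer() as t: ... t.elapsed` usage pattern is taken from the code above.

```python
import time
from contextlib import contextmanager

class _TimerResult:
    """Holds the elapsed wall-clock time of a timed block."""
    def __init__(self):
        self.elapsed = 0.0

@contextmanager
def timer():
    # Yield the result object immediately so the caller can keep a reference,
    # then fill in .elapsed when the block exits (even on exceptions).
    result = _TimerResult()
    start = time.perf_counter()
    try:
        yield result
    finally:
        result.elapsed = time.perf_counter() - start
```

With this sketch, `with timer() as train_t: idx.train(xb)` leaves the duration in `train_t.elapsed`, which is how `test_idx` fills the `train_t` and `search_t` fields of its report.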
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Image Classification Step2: Explore the Data Step5: Implement Preprocess Functions Step8: One-hot encode Step10: Randomize Data Step12: Check Point Step17: Build the network Step20: Convolution and Max Pooling Layer Step23: Flatten Layer Step26: Fully-Connected Layer Step29: Output Layer Step37: Create Convolutional Model Step40: Train the Neural Network Step42: Show Stats Step43: Hyperparameters Step45: Train on a Single CIFAR-10 Batch Step47: Fully Train the Model Step50: Checkpoint
<ASSISTANT_TASK:> Python Code: DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE from urllib.request import urlretrieve from os.path import isfile, isdir from tqdm import tqdm import problem_unittests as tests import tarfile cifar10_dataset_folder_path = 'cifar-10-batches-py' class DLProgress(tqdm): last_block = 0 def hook(self, block_num=1, block_size=1, total_size=None): self.total = total_size self.update((block_num - self.last_block) * block_size) self.last_block = block_num if not isfile('cifar-10-python.tar.gz'): with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar: urlretrieve( 'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz', 'cifar-10-python.tar.gz', pbar.hook) if not isdir(cifar10_dataset_folder_path): with tarfile.open('cifar-10-python.tar.gz') as tar: tar.extractall() tar.close() tests.test_folder_path(cifar10_dataset_folder_path) %matplotlib inline %config InlineBackend.figure_format = 'retina' import helper import numpy as np # Explore the dataset batch_id = 1 sample_id = 5 helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id) def normalize(x): Normalize a list of sample image data in the range of 0 to 1 : x: List of image data. The image shape is (32, 32, 3) : return: Numpy array of normalize data # TODO: Implement Function ret=x/(x.max()-x.min()) return ret DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_normalize(normalize) def one_hot_encode(x): One hot encode a list of sample labels. Return a one-hot encoded vector for each label. 
: x: List of sample Labels : return: Numpy array of one-hot encoded labels # TODO: Implement Function n_labels = len(x) n_unique_labels = 10 one_hot_encode = np.zeros((n_labels,n_unique_labels)) for itme in range(n_labels): one_hot_encode[itme, x[itme]] = 1 print(x[0]) print(one_hot_encode[0]) print('----------') return one_hot_encode DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_one_hot_encode(one_hot_encode) DON'T MODIFY ANYTHING IN THIS CELL # Preprocess Training, Validation, and Testing Data helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode) DON'T MODIFY ANYTHING IN THIS CELL import pickle import problem_unittests as tests import helper # Load the Preprocessed Validation data valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb')) import tensorflow as tf def neural_net_image_input(image_shape): Return a Tensor for a bach of image input : image_shape: Shape of the images : return: Tensor for image input. # TODO: Implement Function #print(image_shape) x = tf.placeholder("float", shape=[None, image_shape[0], image_shape[1], image_shape[2]], name='x') #print(x.name) return x def neural_net_label_input(n_classes): Return a Tensor for a batch of label input : n_classes: Number of classes : return: Tensor for label input. # TODO: Implement Function #print(n_classes) x = tf.placeholder("float", shape=[None, n_classes],name='y') return x def neural_net_keep_prob_input(): Return a Tensor for keep probability : return: Tensor for keep probability. 
# TODO: Implement Function x = tf.placeholder("float",name='keep_prob') return x DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tf.reset_default_graph() tests.test_nn_image_inputs(neural_net_image_input) tests.test_nn_label_inputs(neural_net_label_input) tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input) def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides): Apply convolution then max pooling to x_tensor :param x_tensor: TensorFlow Tensor :param conv_num_outputs: Number of outputs for the convolutional layer :param conv_strides: Stride 2-D Tuple for convolution :param pool_ksize: kernal size 2-D Tuple for pool :param pool_strides: Stride 2-D Tuple for pool : return: A tensor that represents convolution and max pooling of x_tensor # TODO: Implement Function #print(x_tensor.shape) x, x_h, x_w, x_depth = x_tensor.shape #print(conv_num_outputs) #print(conv_ksize) conv_h,conv_w = conv_ksize #print(conv_strides) s_h,s_w = conv_strides #print(pool_ksize) pool_h, pool_w = pool_ksize #print(pool_strides) pool_h_s,pool_w_s = pool_strides #weights fw = tf.Variable(tf.truncated_normal(shape=(int(conv_h), int(conv_w), int(x_depth), int(conv_num_outputs)),stddev=0.01, mean = 0)) #bias fb = tf.Variable(tf.zeros(conv_num_outputs)) padding = 'VALID' conv = tf.nn.conv2d(x_tensor, fw, strides=[1, int(s_h), int(s_w), 1], padding=padding) + fb conv = tf.nn.relu(conv) conv = tf.nn.max_pool(conv, ksize=[1, int(pool_h), int(pool_w), 1],strides=[1, int(pool_h_s), int(pool_w_s), 1],padding=padding) conv.get_shape().as_list() return conv DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_con_pool(conv2d_maxpool) def flatten(x_tensor): Flatten x_tensor to (Batch Size, Flattened Image Size) : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions. : return: A tensor of size (Batch Size, Flattened Image Size). 
# TODO: Implement Function #print(x_tensor.shape) x, x_h, x_w, x_depth = x_tensor.shape return tf.reshape(x_tensor, [-1, int(x_h)*int(x_w)*int(x_depth)]) DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_flatten(flatten) def fully_conn(x_tensor, num_outputs): Apply a fully connected layer to x_tensor using weight and bias : x_tensor: A 2-D tensor where the first dimension is batch size. : num_outputs: The number of output that the new tensor should be. : return: A 2-D tensor where the second dimension is num_outputs. # TODO: Implement Function #print(x_tensor.shape) #print(num_outputs) w = tf.Variable(tf.truncated_normal(shape=(int(x_tensor.shape[1]), int(num_outputs)),stddev=0.1, mean = 0)) b = tf.Variable(tf.zeros([num_outputs])) y = tf.nn.relu(tf.add(tf.matmul(x_tensor, w), b)) return y DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_fully_conn(fully_conn) def output(x_tensor, num_outputs): Apply a output layer to x_tensor using weight and bias : x_tensor: A 2-D tensor where the first dimension is batch size. : num_outputs: The number of output that the new tensor should be. : return: A 2-D tensor where the second dimension is num_outputs. # TODO: Implement Function #print(x_tensor.shape) #print(num_outputs) w = tf.Variable(tf.truncated_normal((int(x_tensor.shape[1]), int(num_outputs)),stddev=0.1, mean = 0)) b = tf.Variable(tf.zeros([num_outputs])) y = tf.add(tf.matmul(x_tensor, w), b) return y DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_output(output) def conv_net(x, keep_prob): Create a convolutional neural network model : x: Placeholder tensor that holds image data. : keep_prob: Placeholder tensor that hold dropout keep probability. 
: return: Tensor that represents logits # TODO: Apply 1, 2, or 3 Convolution and Max Pool layers # Play around with different number of outputs, kernel size and stride # Function Definition from Above: # conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides) conv=x conv = conv2d_maxpool(conv, conv_num_outputs=50, conv_ksize=(3,3), conv_strides=(1,1), pool_ksize=(3,3), pool_strides=(2,2)) # # conv = conv2d_maxpool(conv, conv_num_outputs=25, conv_ksize=(3,3), conv_strides=(1,1), pool_ksize=(3,3), pool_strides=(2,2)) # conv = conv2d_maxpool(conv, conv_num_outputs=10, conv_ksize=(3,3), conv_strides=(1,1), pool_ksize=(3,3), pool_strides=(2,2)) # # TODO: Apply a Flatten Layer # Function Definition from Above: # flatten(x_tensor) flat_layer = flatten(conv) # TODO: Apply 1, 2, or 3 Fully Connected Layers # Play around with different number of outputs # Function Definition from Above: # fully_conn(x_tensor, num_outputs) fc = flat_layer # fc = fully_conn(fc, 120) fc = tf.nn.dropout(fc, keep_prob) # # fc = fully_conn(fc, 60) fc = tf.nn.dropout(fc, keep_prob) # # TODO: Apply an Output Layer # Set this to the number of classes # Function Definition from Above: # output(x_tensor, num_outputs) output_layer = output(fc, 10) # TODO: return output return output_layer DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE ############################## ## Build the Neural Network ## ############################## # Remove previous weights, bias, inputs, etc.. 
tf.reset_default_graph() # Inputs x = neural_net_image_input((32, 32, 3)) y = neural_net_label_input(10) keep_prob = neural_net_keep_prob_input() # Model logits = conv_net(x, keep_prob) # Name logits Tensor, so that is can be loaded from disk after training logits = tf.identity(logits, name='logits') # Loss and Optimizer cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y)) optimizer = tf.train.AdamOptimizer().minimize(cost) # Accuracy correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1)) accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy') tests.test_conv_net(conv_net) def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch): Optimize the session on a batch of images and labels : session: Current TensorFlow session : optimizer: TensorFlow optimizer function : keep_probability: keep probability : feature_batch: Batch of Numpy image data : label_batch: Batch of Numpy label data # TODO: Implement Function session.run(optimizer, feed_dict={ x: feature_batch, y: label_batch, keep_prob: keep_probability}) DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_train_nn(train_neural_network) def print_stats(session, feature_batch, label_batch, cost, accuracy): Print information about loss and validation accuracy : session: Current TensorFlow session : feature_batch: Batch of Numpy image data : label_batch: Batch of Numpy label data : cost: TensorFlow cost function : accuracy: TensorFlow accuracy function # TODO: Implement Function loss = session.run(cost, feed_dict={ x: feature_batch, y: label_batch, keep_prob: 1.}) accuracy = session.run(accuracy, feed_dict={ x: valid_features, y: valid_labels, keep_prob: 1.}) print('Loss: {:.4f} Accuracy: {:.2f}%'.format(loss, accuracy*100)) # TODO: Tune Parameters epochs = 50 batch_size = 2048 keep_probability = 0.9 DON'T MODIFY ANYTHING IN THIS CELL import datetime print('Checking the Training on a Single Batch...') 
with tf.Session() as sess: # Initializing the variables sess.run(tf.global_variables_initializer()) # Training cycle for epoch in range(epochs): batch_i = 1 start = datetime.datetime.now() for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size): train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels) print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='') print_stats(sess, batch_features, batch_labels, cost, accuracy) end = datetime.datetime.now() print (end-start) DON'T MODIFY ANYTHING IN THIS CELL save_model_path = './image_classification' print('Training...') with tf.Session() as sess: # Initializing the variables sess.run(tf.global_variables_initializer()) # Training cycle for epoch in range(epochs): # Loop over all batches start = datetime.datetime.now() n_batches = 5 for batch_i in range(1, n_batches + 1): for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size): train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels) print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='') print_stats(sess, batch_features, batch_labels, cost, accuracy) end = datetime.datetime.now() print (end-start) # Save Model saver = tf.train.Saver() save_path = saver.save(sess, save_model_path) DON'T MODIFY ANYTHING IN THIS CELL %matplotlib inline %config InlineBackend.figure_format = 'retina' import tensorflow as tf import pickle import helper import random # Set batch size if not already set try: if batch_size: pass except NameError: batch_size = 64 save_model_path = './image_classification' n_samples = 4 top_n_predictions = 3 def test_model(): Test the saved model against the test dataset test_features, test_labels = pickle.load(open('preprocess_training.p', mode='rb')) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load model loader = tf.train.import_meta_graph(save_model_path + 
'.meta') loader.restore(sess, save_model_path) # Get Tensors from loaded model loaded_x = loaded_graph.get_tensor_by_name('x:0') loaded_y = loaded_graph.get_tensor_by_name('y:0') loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') loaded_logits = loaded_graph.get_tensor_by_name('logits:0') loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0') # Get accuracy in batches for memory limitations test_batch_acc_total = 0 test_batch_count = 0 for train_feature_batch, train_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size): test_batch_acc_total += sess.run( loaded_acc, feed_dict={loaded_x: train_feature_batch, loaded_y: train_label_batch, loaded_keep_prob: 1.0}) test_batch_count += 1 print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count)) # Print Random Samples random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples))) random_test_predictions = sess.run( tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions), feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0}) helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions) test_model() <END_TASK>
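The `one_hot_encode` solution above builds the matrix with an explicit Python loop; the same encoding can be written as one vectorized NumPy expression. A sketch, assuming integer class labels in 0–9 as in CIFAR-10:

```python
import numpy as np

def one_hot_encode_vectorized(x, n_classes=10):
    # Row k of the identity matrix is the one-hot vector for class k,
    # so indexing np.eye by the label array encodes the whole batch at once.
    return np.eye(n_classes)[np.asarray(x)]
```

This produces the same (batch_size, n_classes) array as the loop version, without pre-allocating and filling cell by cell.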
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Compare the fraction of emotional sentences per text Step2: Compare the number of lines per text Step3: Compare the average number of labels per sentence Step4: Compare the number of emotional sentences per text
<ASSISTANT_TASK:> Python Code: # Render our plots inline %matplotlib inline import pandas as pd import matplotlib.pyplot as plt import numpy as np import os pd.set_option('display.mpl_style', 'default') # Make the graphs a bit prettier plt.rcParams['figure.figsize'] = (16, 6) # adjust to your local directories embem_data_dir = '/home/jvdzwaan/data/embem/' output_dir = '/home/jvdzwaan/data/tmp/' # load data def load_data(corpus, column_names, corpus_metadata, label_counts, body_parts, emotion_bodypart_pairs): c = pd.read_csv(corpus, header=None, sep='\t', index_col=0, names=column_names) md = pd.read_csv(corpus_metadata, index_col=0) l = pd.read_csv(label_counts, index_col=0) bp = pd.read_csv(body_parts, index_col=0) ebp = pd.read_csv(emotion_bodypart_pairs, index_col=0) return pd.concat([c, md, l, bp, ebp], axis=1) corpus_big = load_data(os.path.join(embem_data_dir, 'corpus/corpus_big.csv'), ['id', 'year', 'genre', 'title', 'authors'], os.path.join(embem_data_dir, 'dict/corpus_big_additional_metadata.csv'), os.path.join(embem_data_dir, 'dict/corpus_big_label_counts.csv'), os.path.join(embem_data_dir, 'dict/corpus_big_heem_expanded_body_parts.csv'), os.path.join(embem_data_dir, 'dict/corpus_big_emotion_bodypart_pairs.csv')) annotation = load_data(os.path.join(embem_data_dir, 'corpus/annotation_corpus.csv'), ['id', 'year', 'genre', 'title', 'authors'], os.path.join(embem_data_dir, 'dict/annotation_additional_metadata.csv'), os.path.join(embem_data_dir, 'dict/annotation_label_counts.csv'), os.path.join(embem_data_dir, 'dict/annotation_heem_expanded_body_parts.csv'), os.path.join(embem_data_dir, 'dict/annotation_emotion_bodypart_pairs.csv')) ceneton = load_data(os.path.join(embem_data_dir, 'corpus/ceneton.csv'), ['id', 'year', 'genre', 'title', 'authors'], os.path.join(embem_data_dir, 'dict/ceneton_additional_metadata.csv'), os.path.join(embem_data_dir, 'dict/ceneton_label_counts.csv'), os.path.join(embem_data_dir, 'dict/ceneton_heem_expanded_body_parts.csv'), 
os.path.join(embem_data_dir, 'dict/ceneton_emotion_bodypart_pairs.csv')) edbo = load_data(os.path.join(embem_data_dir, 'corpus/edbo.csv'), ['id', 'year', 'genre', 'title+author'], os.path.join(embem_data_dir, 'dict/edbo_additional_metadata.csv'), os.path.join(embem_data_dir, 'dict/edbo_label_counts.csv'), os.path.join(embem_data_dir, 'dict/edbo_heem_expanded_body_parts.csv'), os.path.join(embem_data_dir, 'dict/edbo_emotion_bodypart_pairs.csv')) complete = pd.concat([annotation, corpus_big, ceneton, edbo]).fillna(0) combined = pd.concat([corpus_big, ceneton, edbo]).fillna(0) # Basic statistics print '# texts' print 'Corpus big:', len(corpus_big) print 'Annotation:', len(annotation) print 'Ceneton:', len(ceneton) print 'EDBO:', len(edbo) print 'Combined:', len(combined) print 'Complete:', len(complete) #combined # number of texts per genre and period print 'Number of texts per genre' genres = complete.groupby('genre') genres.size().plot(kind='bar') print genres.size() print 'Number of texts per period' periods = complete.groupby('period') periods.size().reindex(['renaissance', 'classicism', 'enlightenment']).plot(kind='bar') print periods.size().reindex(['renaissance', 'classicism', 'enlightenment']) print 'Number of texts per period' df = pd.DataFrame({'count' : complete.groupby(['period', 'genre']).size()}).reset_index() df = df.pivot(index='period', columns='genre', values='count') df = df.fillna(0) df = df.reindex(['renaissance', 'classicism', 'enlightenment']) print df df.plot(kind='bar') print 'Number of texts per year' years = complete.groupby('year') #print years.size() print 'Number of years for which 0 texts are available:', np.sum(years.size() == 0) years.size().plot(marker='o') print 'Number of texts per genre per year' year2genre = pd.DataFrame({'count' : complete.groupby(['year', 'genre']).size()}).reset_index() year2genre = year2genre.pivot(index='year', columns='genre', values='count') year2genre = year2genre.fillna(0) #print year2genre 
year2genre.plot() complete.loc[:, 'frac_emotional'] = complete.apply(lambda row: (row['#emotional']+0.0)/row['#lines'], axis=1) combined.loc[:, 'frac_emotional'] = combined.apply(lambda row: (row['#emotional']+0.0)/row['#lines'], axis=1) annotation.loc[:, 'frac_emotional'] = annotation.apply(lambda row: (row['#emotional']+0.0)/row['#lines'], axis=1) corpus_big.loc[:, 'frac_emotional'] = corpus_big.apply(lambda row: (row['#emotional']+0.0)/row['#lines'], axis=1) ceneton.loc[:, 'frac_emotional'] = ceneton.apply(lambda row: (row['#emotional']+0.0)/row['#lines'], axis=1) edbo.loc[:, 'frac_emotional'] = edbo.apply(lambda row: (row['#emotional']+0.0)/row['#lines'], axis=1) data = [complete['frac_emotional'], combined['frac_emotional'], annotation['frac_emotional'], corpus_big['frac_emotional'], ceneton['frac_emotional'], edbo['frac_emotional']] plt.boxplot(data) plt.xticks([1,2,3,4,5,6],['Complete', 'Combined', 'Annotation', 'Corpus big','Ceneton','EDBO']) plt.title('Fraction of emotional sentences in the different datasets'); from scipy import stats import statsmodels.api as sm f_val, p_val = stats.f_oneway(annotation['frac_emotional'], corpus_big['frac_emotional'], ceneton['frac_emotional'], edbo['frac_emotional']) print "P value ANOVA: {:10.10f}\n".format(p_val) annotation.loc[:, 'corpus'] = annotation.apply(lambda row: 'annotation', axis=1) corpus_big.loc[:, 'corpus'] = corpus_big.apply(lambda row: 'corpus_big', axis=1) ceneton.loc[:, 'corpus'] = ceneton.apply(lambda row: 'ceneton', axis=1) edbo.loc[:, 'corpus'] = edbo.apply(lambda row: 'edbo', axis=1) df = pd.concat([annotation, corpus_big, ceneton, edbo]) result = sm.stats.multicomp.pairwise_tukeyhsd(df.frac_emotional, df.corpus) print(result.summary()) data = [complete['#lines'], combined['#lines'], annotation['#lines'], corpus_big['#lines'], ceneton['#lines'], edbo['#lines']] plt.boxplot(data) plt.xticks([1,2,3,4,5,6],['Complete', 'Combined', 'Annotation', 'Corpus big','Ceneton','EDBO']) plt.title('The number of 
lines per text in different datasets'); f_val, p_val = stats.f_oneway(annotation['#lines'], corpus_big['#lines'], ceneton['#lines'], edbo['#lines']) print "P value ANOVA: {:10.10f}\n".format(p_val) result = sm.stats.multicomp.pairwise_tukeyhsd(df.get('#lines'), df.corpus) print(result.summary()) data = [complete['avg_labels'], combined['avg_labels'], annotation['avg_labels'], corpus_big['avg_labels'], ceneton['avg_labels'], edbo['avg_labels']] plt.boxplot(data) plt.xticks([1,2,3,4,5,6],['Complete', 'Combined', 'Annotation', 'Corpus big','Ceneton','EDBO']); f_val, p_val = stats.f_oneway(complete['avg_labels'], combined['avg_labels'], annotation['avg_labels'], corpus_big['avg_labels'], ceneton['avg_labels'], edbo['avg_labels']) print "P value ANOVA: {:10.10f}\n".format(p_val) result = sm.stats.multicomp.pairwise_tukeyhsd(df.get('avg_labels'), df.corpus) print(result.summary()) data = [complete['#emotional'], combined['#emotional'], annotation['#emotional'], corpus_big['#emotional'], ceneton['#emotional'], edbo['#emotional']] plt.boxplot(data) plt.xticks([1,2,3,4,5,6],['Complete', 'Combined', 'Annotation', 'Corpus big','Ceneton','EDBO']); f_val, p_val = stats.f_oneway(annotation['#emotional'], corpus_big['#emotional'], ceneton['#emotional'], edbo['#emotional']) print "P value ANOVA: {:10.10f}\n".format(p_val) result = sm.stats.multicomp.pairwise_tukeyhsd(df.get('avg_labels'), df.corpus) print(result.summary()) # load label names import itertools from embem.emotools.heem_utils import heem_emotion_labels, heem_body_part_labels ebp_labels = ['{}_{}'.format(e, bp) for e, bp in list(itertools.product(heem_emotion_labels, heem_body_part_labels))] def count_pairs(row): #print row['Achterdocht_Arms'] #print row.index return np.sum([row[p] for p in ebp_labels if p in row.index]) complete.loc[:, '#pairs'] = complete.apply(count_pairs, axis=1) combined.loc[:, '#pairs'] = combined.apply(count_pairs, axis=1) # Save datasets to file (for easy loading) 
annotation.to_csv(os.path.join(output_dir, 'annotation.csv')) corpus_big.to_csv(os.path.join(output_dir, 'corpus_big.csv')) ceneton.to_csv(os.path.join(output_dir, 'ceneton.csv')) edbo.to_csv(os.path.join(output_dir, 'edbo.csv')) combined.to_csv(os.path.join(output_dir, 'combined.csv')) complete.to_csv(os.path.join(output_dir, 'complete.csv')) <END_TASK>
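The notebook computes `frac_emotional` with a row-wise `apply` and a lambda; dividing the two columns directly is shorter and avoids the per-row Python call. A self-contained sketch — the column names `#emotional` and `#lines` match the notebook, but the counts here are invented for illustration:

```python
import pandas as pd

df = pd.DataFrame({'#emotional': [10, 0, 25], '#lines': [100, 50, 125]})
# Element-wise column division replaces
# df.apply(lambda row: (row['#emotional'] + 0.0) / row['#lines'], axis=1)
df['frac_emotional'] = df['#emotional'] / df['#lines']
```

The same one-liner would apply to each of the `complete`, `combined`, `annotation`, `corpus_big`, `ceneton`, and `edbo` frames above.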
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step3: Convolutional autoencoder for image denoising Step4: Prepare the data Step5: Build the autoencoder Step6: Now we can train our autoencoder using train_data as both our input data Step7: Let's predict on our test dataset and display the original image together with Step8: Now that we know that our autoencoder works, let's retrain it using the noisy Step9: Let's now predict on the noisy data and display the results of our autoencoder.
<ASSISTANT_TASK:> Python Code: import numpy as np import tensorflow as tf import matplotlib.pyplot as plt from tensorflow.keras import layers from tensorflow.keras.datasets import mnist from tensorflow.keras.models import Model def preprocess(array): Normalizes the supplied array and reshapes it into the appropriate format. array = array.astype("float32") / 255.0 array = np.reshape(array, (len(array), 28, 28, 1)) return array def noise(array): Adds random noise to each image in the supplied array. noise_factor = 0.4 noisy_array = array + noise_factor * np.random.normal( loc=0.0, scale=1.0, size=array.shape ) return np.clip(noisy_array, 0.0, 1.0) def display(array1, array2): Displays ten random images from each one of the supplied arrays. n = 10 indices = np.random.randint(len(array1), size=n) images1 = array1[indices, :] images2 = array2[indices, :] plt.figure(figsize=(20, 4)) for i, (image1, image2) in enumerate(zip(images1, images2)): ax = plt.subplot(2, n, i + 1) plt.imshow(image1.reshape(28, 28)) plt.gray() ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) ax = plt.subplot(2, n, i + 1 + n) plt.imshow(image2.reshape(28, 28)) plt.gray() ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) plt.show() # Since we only need images from the dataset to encode and decode, we # won't use the labels. 
(train_data, _), (test_data, _) = mnist.load_data() # Normalize and reshape the data train_data = preprocess(train_data) test_data = preprocess(test_data) # Create a copy of the data with added noise noisy_train_data = noise(train_data) noisy_test_data = noise(test_data) # Display the train data and a version of it with added noise display(train_data, noisy_train_data) input = layers.Input(shape=(28, 28, 1)) # Encoder x = layers.Conv2D(32, (3, 3), activation="relu", padding="same")(input) x = layers.MaxPooling2D((2, 2), padding="same")(x) x = layers.Conv2D(32, (3, 3), activation="relu", padding="same")(x) x = layers.MaxPooling2D((2, 2), padding="same")(x) # Decoder x = layers.Conv2DTranspose(32, (3, 3), strides=2, activation="relu", padding="same")(x) x = layers.Conv2DTranspose(32, (3, 3), strides=2, activation="relu", padding="same")(x) x = layers.Conv2D(1, (3, 3), activation="sigmoid", padding="same")(x) # Autoencoder autoencoder = Model(input, x) autoencoder.compile(optimizer="adam", loss="binary_crossentropy") autoencoder.summary() autoencoder.fit( x=train_data, y=train_data, epochs=50, batch_size=128, shuffle=True, validation_data=(test_data, test_data), ) predictions = autoencoder.predict(test_data) display(test_data, predictions) autoencoder.fit( x=noisy_train_data, y=train_data, epochs=100, batch_size=128, shuffle=True, validation_data=(noisy_test_data, test_data), ) predictions = autoencoder.predict(noisy_test_data) display(noisy_test_data, predictions) <END_TASK>
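The `noise` helper above relies on `np.clip` to keep the corrupted pixels inside the valid [0, 1] range; that behaviour can be checked standalone with NumPy alone, no TensorFlow required. The function below mirrors the example's noise factor of 0.4, but adds an explicit seed for reproducibility (an addition not in the original):

```python
import numpy as np

def add_noise(array, noise_factor=0.4, seed=0):
    # Gaussian corruption followed by clipping, mirroring the notebook's noise().
    rng = np.random.default_rng(seed)
    noisy = array + noise_factor * rng.normal(loc=0.0, scale=1.0, size=array.shape)
    return np.clip(noisy, 0.0, 1.0)
```

Clipping matters here: without it, the denoising autoencoder's sigmoid output (bounded to [0, 1]) would be trained against inputs outside the range it can represent.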
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description:
<ASSISTANT_TASK:> Python Code: MOD = 1000000007

def rangeSum(l, r):
    a = 1
    b = 9
    res = 0
    for i in range(1, 11):
        L = max(l, a)
        R = min(r, b)
        if L <= R:
            block_sum = (L + R) * (R - L + 1) // 2
            res += (i * i) * (block_sum % MOD)
            res %= MOD
        a *= 10
        b = b * 10 + 9
    return res

if __name__ == "__main__":
    l = 98
    r = 102
    print(rangeSum(l, r))
<END_TASK>
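The digit-length decomposition above can be cross-checked against a direct loop: both compute the sum of n · d(n)² over [l, r], where d(n) is the number of decimal digits of n, modulo 10⁹+7. The brute-force helper below is not part of the original solution — it is added only as a reference for small ranges:

```python
MOD = 1000000007

def range_sum(l, r):
    # Closed-form sum per digit-length block, as in the solution above.
    res, a, b = 0, 1, 9
    for digits in range(1, 11):
        lo, hi = max(l, a), min(r, b)
        if lo <= hi:
            block_sum = (lo + hi) * (hi - lo + 1) // 2
            res = (res + digits * digits * (block_sum % MOD)) % MOD
        a *= 10
        b = b * 10 + 9
    return res

def range_sum_brute(l, r):
    # Direct O(r - l) reference implementation.
    return sum(n * len(str(n)) ** 2 for n in range(l, r + 1)) % MOD
```

For the sample inputs l = 98, r = 102 both give 98·4 + 99·4 + 100·9 + 101·9 + 102·9 = 3515.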
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Most important step, import all tools in the L2_tools.py file Step2: Download data first Step3: Now read in the data, two dictionaries for OCO-2 and TROPOMI were included in L2_tools.py, it basically just enables you to decides which fields to read in and can be adapted (actually to read in any kind of data) Step4: The syntax for reading in data is as follows Step5: Does this make sense? Step6: You can see 2 things here Step7: Phase angle Step8: well, we have "somewhat" more data points using TROPOMI Step9: Let us compute timeseries from OCO-2 and TROPOMI data Step10: We see that the time-series agree reasonably well but that there is an offset (or scaling factor) as we define SIF at different wavelength. Let us take a look at typical SIF shapes from the SCOPE model (https Step11: We can see that SIF changes quite a bit with wavelength but that the overall shape $>$740nm us pretty consistent between our leaf level measurements and the SCOPE model. The shorter wavelength is more affected by chlorophyll re-absorption, which can dramatically change the shape (also, we used red light as illumination for the leaf level measurements, which causes higher re-absorption as it penetrates deeper into the leaf). Step12: Now we can use the factor to better match the 2 time-series Step13: Voila, this looks much better
<ASSISTANT_TASK:> Python Code: #%matplotlib inline ## The usual packages (numpy, matplotlib, etc) import matplotlib.pyplot as plt import numpy as np # nicer figures using ggg plot style. plt.style.use('ggplot') from IPython.core.display import HTML from IPython.display import clear_output from IPython.core.pylabtools import figsize # This file L2_tools.py is in the same folder here and will provide easy access to TROPOMI and OCO-2 L2 data from L2_tools import * # Please adapt to your local directories after having mirrored our ftp data (and extracted the ungridded TROPOMI files): path_tropomi = '/Volumes/data2/projects/TROPOMI/nc_ungridded/' path_oco2 = '/Volumes/data1/ftp/data/OCO2/sif_lite_B8100/' # This is still cumbersome, the units are part of the netCDF4 files but I hardcode them for now: t_unit_oco2 = "seconds since 1993-1-1 0:0:0" t_unit_tropomi = "seconds since 1970-01-01 00:00:00" t_cal = u"gregorian" # Let us read in some OCO-2 and TROPOMI data over a non-fluorescing region in the Sahara # Define region boundaries for a part of the Sahara: latMin = 20 latMax = 23 lonMin = 5 lonMax = 8 oco2_sahara = L2(path_oco2+'2018/0[45678]/*.nc4', dict_oco2,latMin, latMax, lonMin, lonMax ) # TROPOMI will take a bit more time, be patient! tropomi_sahara = L2(path_tropomi+'*2018-0[45678]*.nc', dict_tropomi,latMin, latMax, lonMin, lonMax ) dict_oco2 dict_tropomi # First convert oco2 time units to a python time (this is a clunky step right now, need to automate later!) 
oco2_sahara.time_python = convert_time(oco2_sahara.time, t_unit_oco2, t_cal) tropomi_sahara.time_python = convert_time(tropomi_sahara.time, t_unit_tropomi, t_cal) # now lets plot a timeseries figsize(12,6) plt.subplot(1,2,1) plt.plot(oco2_sahara.time_python, oco2_sahara.sif_757,'.') plt.ylabel('SIF (W/m$^2$/sr/$\mu$m)') plt.title('OCO2 (SIF at 758nm)') plt.subplot(1,2,2) plt.plot(tropomi_sahara.time_python, tropomi_sahara.sif,'.') plt.ylabel('SIF (W/m$^2$/sr/$\mu$m)') plt.title('TROPOMI (SIF at 740nm)') # let us look at a histogram plt.hist(oco2_sahara.sif_757, 50,range=(-3,3), density=True, alpha=0.5, label='OCO-2') plt.hist(tropomi_sahara.sif, 50,range=(-3,3), density=True, alpha=0.5, label='TROPOMI') plt.legend(loc=0) print('Standard deviation of measured OCO-2 SIF data in Sahara ' + str(np.std(oco2_sahara.sif_757)) ) print('Expected standard deviation based on posteriori error ' + str(np.mean(oco2_sahara.sif_757_sigma)) ) print('Mean if we exclude negative data ' + str(np.mean(oco2_sahara.sif_757[oco2_sahara.sif_757>=0]))) print('True mean ' + str(np.mean(oco2_sahara.sif_757))) # Define region boundaries (roughly Iowa) latMin_cb = 40 latMax_cb = 43 lonMin_cb = -96 lonMax_cb = -91 oco2_iowa = L2(path_oco2+'2018/0[45678]/*.nc4', dict_oco2,latMin_cb, latMax_cb, lonMin_cb, lonMax_cb ) tropomi_iowa = L2(path_tropomi+'*2018-0[45678]*.nc', dict_tropomi,latMin_cb, latMax_cb, lonMin_cb, lonMax_cb ) # Again, convert tedious times in the files to something you can work with in python oco2_iowa.time_python = convert_time(oco2_iowa.time, t_unit_oco2, t_cal) tropomi_iowa.time_python = convert_time(tropomi_iowa.time, t_unit_tropomi, t_cal) # now lets plot a timeseries figsize(14,6) plt.subplot(1,2,1) nadir = np.where(oco2_iowa.mode==0)[0] glint = np.where(oco2_iowa.mode==1)[0] plt.plot(oco2_iowa.time_python[nadir], oco2_iowa.sif_757[nadir],'.',markersize=0.5, label='Nadir') plt.plot(oco2_iowa.time_python[glint], oco2_iowa.sif_757[glint],'.',markersize=0.5, label='Glint') 
plt.ylabel('SIF (W/m$^2$/sr/$\mu$m)') plt.title('OCO2 (SIF at 758nm)') plt.legend(loc=0) plt.subplot(1,2,2) plt.plot(tropomi_iowa.time_python, tropomi_iowa.sif,'.',markersize=0.5) plt.ylabel('SIF (W/m$^2$/sr/$\mu$m)') plt.title('TROPOMI (SIF at 740nm)') plt.scatter(tropomi_iowa.time_python, tropomi_iowa.sif,s=0.5, c=np.abs(tropomi_iowa.phaseAngle)) plt.colorbar() plt.title('TROPOMI time-series with PhaseAngle color-coded') print('Number of OCO-2 data points in Iowa: ' + str(len(oco2_iowa.sif_757))) print('Number of TROPOMI data points in Iowa: ' + str(len(tropomi_iowa.sif))) # Let us define some averaging interval, get data every 3 days and create a running mean of +/- 3 days: from datetime import datetime # Define data range to smooth on (every 3 days) dates = np.arange('2018-04', '2018-08', dtype='datetime64[3D]').astype(datetime) # We used dates with a 3 day time-step and now use a +/- 3 day filter for the running mean: iowa_timeseries_oco2 = sif_rMean(oco2_iowa.time_python, oco2_iowa.sif_757,dates, 3 ) iowa_timeseries_tropomi = sif_rMean(tropomi_iowa.time_python, tropomi_iowa.sif,dates, 3 ) # Conversion factor from 758nm to 740nm is roughly 1.55 (we can provide shapes if needed) cf = 1.55 plt.errorbar(dates, iowa_timeseries_oco2.mean,yerr=iowa_timeseries_oco2.standard_error, label='OCO-2 Mean') plt.errorbar(dates, iowa_timeseries_tropomi.mean,yerr=iowa_timeseries_tropomi.standard_error, label='TROPOMI Mean') plt.ylabel('SIF (W/m$^2$/sr/$\mu$m)') plt.legend(loc=0) plt.title('Iowa Timeseries, +/-3 day running mean') plt.plot(shape_scope[:,0], shape_scope[:,1], label='SCOPE') plt.plot(shape_walz[:,0], shape_walz[:,1]*12, label='Leaf level measurements') plt.title('normalized SIF shape') plt.xlabel('Wavelength (nm)') plt.legend() print('Factor from SCOPE ', convertWL(758,740,shape_scope)) print('Factor from leaf level ',convertWL(758,740,shape_walz)) fac = convertWL(758,740,shape_walz) # Conversion factor from 758nm to 740nm is roughly 1.55 (we can provide shapes if needed)
plt.errorbar(dates, iowa_timeseries_oco2.mean*fac,yerr=iowa_timeseries_oco2.standard_error*fac, label='OCO-2 Mean') plt.errorbar(dates, iowa_timeseries_tropomi.mean,yerr=iowa_timeseries_tropomi.standard_error, label='TROPOMI Mean') plt.ylabel('SIF @740nm (W/m$^2$/sr/$\mu$m)') plt.legend(loc=0) plt.title('Iowa Timeseries, +/-3 day running mean') # Let's calculate the SIF timeseries including the length of day correction: # Do the same for the length of day correction iowa_timeseries_tropomiDC = sif_rMean(tropomi_iowa.time_python, tropomi_iowa.sif*tropomi_iowa.dcCorr,dates, 3 ) iowa_timeseries_oco2DC = sif_rMean(oco2_iowa.time_python, oco2_iowa.sif_757*oco2_iowa.dcCorr,dates, 3 ) plt.plot(dates, iowa_timeseries_oco2DC.mean*fac, label='OCO-2 LOD corrected') plt.plot(dates, iowa_timeseries_tropomiDC.mean, label='TROPOMI LOD corrected') plt.legend() plt.ylabel('$\overline{SIF}$ @740nm (W/m$^2$/sr/$\mu$m)') plt.title('Length of Day correction impact') <END_TASK>
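The `sif_rMean` helper used above comes from `L2_tools.py` and its implementation is not shown. Assuming it implements the ±3-day running mean described in the steps, a minimal stand-alone sketch (hypothetical function name, numpy only) could look like:

```python
import numpy as np

def running_mean(times, values, grid, half_window_days=3):
    # Mean and standard error of `values` within +/- half_window_days
    # of each grid date (NaN where the window contains no samples).
    half = np.timedelta64(half_window_days, 'D')
    means = np.full(len(grid), np.nan)
    stderr = np.full(len(grid), np.nan)
    for i, day in enumerate(grid):
        in_window = (times >= day - half) & (times <= day + half)
        n = in_window.sum()
        if n:
            sel = values[in_window]
            means[i] = sel.mean()
            stderr[i] = sel.std() / np.sqrt(n)
    return means, stderr

# Daily samples 0..28 during April 2018; the window centred on
# 2018-04-10 covers samples 6..12, whose mean is 9.
times = np.arange('2018-04-01', '2018-04-30', dtype='datetime64[D]')
values = np.arange(len(times), dtype=float)
grid = np.array(['2018-04-10'], dtype='datetime64[D]')
means, stderr = running_mean(times, values, grid)
```

The real helper presumably also returns the `.mean` / `.standard_error` attributes used in the plotting cells; the sketch returns them as a plain tuple instead.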
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step3: Function to take connected component id as input and update all states in that component by setting the solution depth and the optimal neighbor game number. Step5: The algorithm is to follow breadth-first search but from a starting set rather than a single node. Step6: Saving Updates to the DB Step7: Develop data structures to support drawing the connected component.
<ASSISTANT_TASK:> Python Code: %matplotlib inline import os import pandas as pd import copy from IPython.display import HTML import sqlite3 as db import networkx as nx import collections con = db.connect('C:/Users/cliff/workspace/rushHour/Data Model and SQL/Schema Definition and Bulk Load/rush_hour.db') c = con.cursor() connected_component_id = 30 # component with 104 states sql = """select s.comb_class_id, s.game_number, s.is_goal_state from connected_component c inner join game_state s on c.id = s.connected_component_id where connected_component_id = %d""" %(connected_component_id) c.execute(sql) component_states = c.fetchall() # #component_states #df = pd.read_sql(sql,con) #len(df) # each edge will match twice against the nodes - once for the pre transition state and once for the post transition state. # at the time of this writing, I am using distinct to remove the duplicates. sql = """select distinct t.* from connected_component c inner join game_state s on c.id = s.connected_component_id inner join state_transition t on (s.comb_class_id = t.comb_class_id and s.game_number = t.pre_transition_game_number ) or (t.comb_class_id = s.comb_class_id and s.game_number = t.post_transition_game_number ) where c.id = %d""" %(connected_component_id) c.execute(sql) component_edges = c.fetchall() #gray_states = collections.OrderedDict() # test code in preparation for work in next cell adj_list = {x[1]:[] for x in component_states} # component_state = () for [x,y,z] in component_edges: # [t1,t2,...tn] ti = (pre_tran_state_game_number, post_tran_state_game_number, comb_class_id): adj_list[x].append(y) adj_list[y].append(x) # component_states: (comb class id, game number, is Goal State?)
# states as an array of dictionaries: [d1,d2,d3,...dn] di = {game_number:{'depth':,'optimal_nbr_game_number}} white_states = { x[1]:{'depth':None, 'optimal_nbr_game_number':None} for x in component_states if x[2] !=1} gray_states = collections.OrderedDict( { x[1]:{'depth':0, 'optimal_nbr_game_number':None} for x in component_states if x[2] == 1 }) black_states = {} len(gray_states), len(white_states),set([x for k in adj_list.keys() for x in adj_list[k] if x not in white_states]) len(set([k for k in white_states.keys()])) #cur_state_key,cur_state_values = gray_states.popitem(last=False) #nbr = (adj_list[cur_state_key])[0] #cur_state_key,nbr, [x for x in component_states if x[1] == nbr],nbr in gray_states,nbr in white_states, gray_states adj_list = {x[1]:[] for x in component_states} # component_state = () for [x,y,z]in component_edges: # [t1,t2,...tn] ti = (pre_tran_state_game_number, post_tran_state_game_number, comb_class_id): adj_list[x].append(y) adj_list[y].append(x) # component_states: (comb class id, game number, is Goal State?) 
# states as an array of dictionaries: [d1,d2,d3,...dn] di = {game_number:{'depth':,'optimal_nbr_game_number}} white_states = { x[1]:{'depth':None, 'optimal_nbr_game_number':None} for x in component_states if x[2] !=1} gray_states = collections.OrderedDict( { x[1]:{'depth':0, 'optimal_nbr_game_number':None} for x in component_states if x[2] == 1 }) black_states = {} while gray_states: cur_state_key,cur_state_values = gray_states.popitem(last=False) for nbr in adj_list[cur_state_key]: if nbr in white_states: gray_states[nbr] = {'depth': cur_state_values['depth'] + 1, 'optimal_nbr_game_number':cur_state_key} del white_states[nbr] black_states[cur_state_key] = cur_state_values distance_partition_max = max( v['depth'] for k,v in black_states.items() ) distance_partition = { i:[] for i in range(distance_partition_max + 1) } for k,v in black_states.items() : (distance_partition[v['depth'] ]).append(k) len(black_states), len(white_states),len(gray_states),len(adj_list),distance_partition_max,distance_partition [len(v) for k,v in distance_partition.items() ] # need to save depth and optimal neighbor back into the db # need to make comb_class_id = 36 update_sql = "update game_state set optimal_transition_game_number = ?, solution_distance = ? where comb_class_id = ? and game_number = ?" 
update_data = [ (v['optimal_nbr_game_number'], v['depth'],comb_class_id, k) for k,v in black_states.items() ] c.executemany(update_sql,update_data) con.commit() %run compute_edges_connected_components set_solution_data_for_comb_class(36) #res = solution_data_for_connected_component(20) #res #conn = db.connect('C:/Users/cliff/workspace/rushHour/Data Model and SQL/Schema Definition and Bulk Load/rush_hour.db') #cursor = conn.cursor() sql = 'select distinct connected_component_id from game_state where comb_class_id = %d' %(comb_class_id) c.execute(sql) component_ids = c.fetchall() component_ids def partition_color(depth): return depth #if depth == 0: # return 'r' #else: # return depth #if depth >0 and depth < 9: # return 'g' #if depth == 9: # return 'b' def nodes_colors(dist_partition): #dist_partition is dict k:v, K is partition depth (0,...,n); v is list of game numbers partition_list = [[k,elt] for k in dist_partition.keys() for elt in dist_partition[k]] nodes = [x[1] for x in partition_list] colors = [partition_color(x[0]) for x in partition_list] return [nodes,colors] [graph_nodes,graph_colors] = nodes_colors(dist_partition) node_labels = {node:str(depth) for (depth,nodes) in dist_partition.items() for node in nodes} shell_list = [list(dist_partition[k]) for k in dist_partition.keys() ] #nodes,node_labels #nx.draw(G) %pylab qt G=nx.Graph() G.add_nodes_from(graph_nodes) G.add_edges_from([ [x[0],x[1]] for x in component_edges]) nx.draw_shell(G,nlist=shell_list,scale=23,nodelist=graph_nodes, node_color=graph_colors, with_labels=True,labels=node_labels,font_size=24,node_size=300) <END_TASK>
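The breadth-first search above starts from a *set* of goal states rather than a single node. Stripped of the Rush Hour specifics, the same multi-source BFS can be sketched with `collections.deque` instead of the `OrderedDict.popitem` trick:

```python
from collections import deque

def multi_source_bfs(adj, sources):
    # Depth and BFS-tree parent (the "optimal neighbour", one step
    # closer to a goal) for every node reachable from the source set.
    depth = {s: 0 for s in sources}
    optimal_nbr = {s: None for s in sources}
    frontier = deque(sources)
    while frontier:
        cur = frontier.popleft()
        for nbr in adj.get(cur, ()):
            if nbr not in depth:      # first visit = shortest distance
                depth[nbr] = depth[cur] + 1
                optimal_nbr[nbr] = cur
                frontier.append(nbr)
    return depth, optimal_nbr

# Path 0-1-2-3-4 with goal states at both ends.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
depth, optimal_nbr = multi_source_bfs(adj, [0, 4])
```

Following `optimal_nbr` from any state walks a shortest path to the nearest goal, which is exactly what the `optimal_transition_game_number` column stores.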
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: What they have? Step5: Step2 Step6: Look at the feature vector. Step8: Step3 Step9: Step4 Step10: Testing model(Prediction) Step11: Step5
<ASSISTANT_TASK:> Python Code: import gzip import cPickle as pickle with gzip.open("../data/train.pklz", "rb") as train_file: train_set = pickle.load(train_file) with gzip.open("../data/test.pklz", "rb") as test_file: test_set = pickle.load(test_file) with gzip.open("../data/questions.pklz", "rb") as questions_file: questions = pickle.load(questions_file) print "* train_set:", train_set[1] print "* test_set:", test_set[7] print "* question keys:", questions[1].keys() "* question contents:", questions[1] from collections import defaultdict """Calculate average position(response time) per user(uid) and question(qid).""" def get_avg_pos(data): pos_uid = defaultdict(list) pos_qid = defaultdict(list) for key in data: uid = data[key]['uid'] qid = data[key]['qid'] pos = data[key]['position'] pos_uid[uid].append(pos) pos_qid[qid].append(pos) avg_pos_uid = {} avg_pos_qid = {} for key in pos_uid: avg_pos_uid[key] = sum(pos_uid[key]) / len(pos_uid[key]) for key in pos_qid: avg_pos_qid[key] = sum(pos_qid[key]) / len(pos_qid[key]) return [avg_pos_uid, avg_pos_qid] """Make feature vectors for given data set""" def featurize(data, avg_pos): X = [] avg_pos_uid = avg_pos[0] avg_pos_qid = avg_pos[1] for key in data: uid = data[key]['uid'] qid = data[key]['qid'] q_length = max(questions[qid]['pos_token'].keys()) category = questions[qid]['category'].lower() answer = questions[qid]['answer'].lower() if uid in avg_pos_uid: pos_uid = avg_pos_uid[uid] else: pos_uid = sum(avg_pos_uid.values()) / float(len(avg_pos_uid.values())) if qid in avg_pos_qid: pos_qid = avg_pos_qid[qid] else: pos_qid = sum(avg_pos_qid.values()) / float(len(avg_pos_qid.values())) feat = {"uid": str(uid), "qid": str(qid), "q_length": q_length, "category": category, "answer": answer, "avg_pos_uid": pos_uid, "avg_pos_qid": pos_qid } X.append(feat) return X """Get positions""" def get_positions(data): Y = [] for key in data: position = data[key]['position'] Y.append([position]) return Y X_train = featurize(train_set,
get_avg_pos(train_set)) Y_train = get_positions(train_set) print len(X_train) print len(Y_train) print X_train[0], Y_train[0] from sklearn.feature_extraction import DictVectorizer vec = DictVectorizer() X_train = vec.fit_transform(X_train) print X_train[0] from sklearn import linear_model from sklearn.cross_validation import train_test_split, cross_val_score import math from numpy import abs, sqrt regressor_names = """LinearRegression Ridge Lasso ElasticNet""" print "=== Linear Cross validation RMSE scores:" for regressor in regressor_names.split(): scores = cross_val_score(getattr(linear_model, regressor)(), X_train, Y_train, cv=10,\ scoring='mean_squared_error') print regressor, sqrt(abs(scores)).mean() X_train = featurize(train_set, get_avg_pos(train_set)) X_test = featurize(test_set, get_avg_pos(train_set)) for x in X_test[:10]: print x X_train_length = len(X_train) X = vec.fit_transform(X_train + X_test) X_train = X[:X_train_length] X_test = X[X_train_length:] regressor = linear_model.Lasso() regressor.fit(X_train, Y_train) predictions = regressor.predict(X_test) predictions = sorted([[id, predictions[index]] for index, id in enumerate(test_set.keys())]) print len(predictions) predictions[:5]
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: We can store it in a variable and print it out. It doesn't sound confusing at all! Step2: Here, a variable x stores number 42 as an integer. However, we can store the same number as a different type or within another data structure - as float, string, part of a list or a tuple. Depending on the type of variable, Python will print it slightly differently. Step3: So far it looks pretty normal. float adds floating point to the integer. Scientific notation is typically a float (serious scientists don't work with integers!). 42 as string looks exactly like we expected and similar to the integer, but their behaviors are different. The difference in behavior isn't obvious until we make it a part of a collection, for example a list. Step4: Here is the thing - Python won't show you quotes when you print a string, but if you print a string in another object, it encloses the string in single quotes. So each time you see this single quotes, you should understand that it's a string and not a number (at least for Python)! Step5: Wow, now you should be extreeeeeemely watchful! Step6: Using type() function might be extremely useful during debugging stage. However, quite often a simple print and a little bit of attention to what's printed is enough to figure out what's going on. Step7: As you see, when we print set and frozenset, they look very different. Frozenset, as lots of other objects in python, adds its object name when you print it. That makes really hard to confuse set and frozenset! Step9: Exercise. Now let's use our knowledge to practise and play with a function that takes a list and returns exactly the same list if every element of the list is a string. Otherwise, it returns a new list with all non-string elements converted to string. 
To avoid confusion, the function also returns a flag variable showing whether the list has been modified. Try to add some print statements to investigate the types of elements in the list, how the elements are printed out, and how the whole array looks like before and after type conversion.
<ASSISTANT_TASK:> Python Code: print (42) x = 42 print (x) x_float = float(42) x_scientific = 42e0 x_str = '42' print ('42 as a float', x_float) print ('42 as a float in scientific notation', x_scientific) print ('42 as a string', x_str) x_list = [x, x_str, x_float, x_scientific] print ("All 42 in a list:", x_list) x_tuple = tuple(x_list) x_set = set(x_list) x_dict = {x_str : x, x_tuple : x_list} print ("All 42 in a list:", x_list) print ("All 42 in a tuple:", x_tuple) print ("All 42 in a set:", x_set) print ("A dict of 42 in different flavors", x_dict) print ('42 as integer', x, "variable type is", type(x)) print ('42 as a float', x_float, "variable type is", type(x_float)) print ('42 as a float in scientific notation', x_scientific, "variable type is", type(x_scientific)) print ('42 as a string', x_str, "variable type is", type(x_str)) x_frozenset = frozenset(x_list) print ("Here's a set of 42:\n", x_set) print ("Here's a frozenset of 42:\n", x_frozenset) class my_42: def __init__(self): self.n = 42 def __str__(self): return 'Member of class my_42(' + str(self.n) + ')' print ('Just 42:',42) print ('New class:', my_42()) def list_converter(l): l - input list Returns a list where all elements have been stringified as well a flag to indicate if the list has been modified assert (type(l) == list) flag = False for el in l: # print the type of el if type(el) != str: flag = True if flag: new_list = [] for el in l: # how would be each element printed out? what's the element type? 
new_list.append(str(el)) # print how the new list looks like return new_list, flag else: return l, flag # `list_converter_test`: Test cell l = ['4', 8, 15, '16', 23, 42] l_true = ['4', '8', '15', '16', '23', '42'] new_l, flag = list_converter(l) print ("list_converter({}) -> {} [True: {}], new list flag is {}".format(l, new_l, l_true, flag)) assert new_l == l_true assert flag new_l, flag = list_converter(l_true) print ("list_converter({}) -> {} [True: {}], new list flag is {}".format(l, new_l, l_true, flag)) assert new_l == l_true assert not flag <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Create dataframe Step2: Make plot
<ASSISTANT_TASK:> Python Code: %matplotlib inline import pandas as pd import matplotlib.pyplot as plt import numpy as np raw_data = {'first_name': ['Jason', 'Molly', 'Tina', 'Jake', 'Amy'], 'pre_score': [4, 24, 31, 2, 3], 'mid_score': [25, 94, 57, 62, 70], 'post_score': [5, 43, 23, 23, 51]} df = pd.DataFrame(raw_data, columns = ['first_name', 'pre_score', 'mid_score', 'post_score']) df # Create a list of the mean scores for each variable mean_values = [df['pre_score'].mean(), df['mid_score'].mean(), df['post_score'].mean()] # Create a list of variances, which are set at .25 above and below the score variance = [df['pre_score'].mean() * 0.25, df['pre_score'].mean() * 0.25, df['pre_score'].mean() * 0.25] # Set the bar labels bar_labels = ['Pre Score', 'Mid Score', 'Post Score'] # Create the x position of the bars x_pos = list(range(len(bar_labels))) # Create the plot bars # In x position plt.bar(x_pos, # using the data from the mean_values mean_values, # with a y-error lines set at variance yerr=variance, # aligned in the center align='center', # with color color='#FFC222', # alpha 0.5 alpha=0.5) # add a grid plt.grid() # set height of the y-axis max_y = max(zip(mean_values, variance)) # returns a tuple, here: (3, 5) plt.ylim([0, (max_y[0] + max_y[1]) * 1.1]) # set axes labels and title plt.ylabel('Score') plt.xticks(x_pos, bar_labels) plt.title('Mean Scores For Each Test') plt.show() <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Load the Dataset Step2: Check the head of ad_data Step3: Use info and describe() on ad_data Step4: Let's check for any null values. Step5: Exploratory Data Analysis (EDA) Step6: TODO 1 Step7: TODO 2 Step8: TODO 1 Step9: Logistic Regression Step10: Next, let's define the features and label. Briefly, feature is input; label is output. This applies to both classification and regression problems. Step11: TODO 2 Step12: Train and fit a logistic regression model on the training set. Step13: Predictions and Evaluations Step14: Create a classification report for the model.
<ASSISTANT_TASK:> Python Code: !sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst import os import matplotlib.pyplot as plt import numpy as np import pandas as pd import seaborn as sns %matplotlib inline # TODO 1: Read in the advertising.csv file and set it to a data frame called ad_data. ad_data = pd.read_csv("../advertising.csv") ad_data.head() ad_data.info() ad_data.describe() ad_data.isnull().sum() # TODO 1 sns.set_style("whitegrid") ad_data["Age"].hist(bins=30) plt.xlabel("Age") # TODO 1 sns.jointplot(x="Age", y="Area Income", data=ad_data) # TODO 2 sns.jointplot( x="Age", y="Daily Time Spent on Site", data=ad_data, color="red", kind="kde", ); # TODO 1 sns.jointplot( x="Daily Time Spent on Site", y="Daily Internet Usage", data=ad_data, color="green", ) from sklearn.model_selection import train_test_split X = ad_data[ [ "Daily Time Spent on Site", "Age", "Area Income", "Daily Internet Usage", "Male", ] ] y = ad_data["Clicked on Ad"] # TODO 2 X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.33, random_state=42 ) from sklearn.linear_model import LogisticRegression logmodel = LogisticRegression() logmodel.fit(X_train, y_train) predictions = logmodel.predict(X_test) from sklearn.metrics import classification_report print(classification_report(y_test, predictions)) <END_TASK>
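`classification_report` summarises per-class precision and recall; computing them by hand for one class makes the report's numbers concrete (a minimal sketch, not sklearn's implementation):

```python
def precision_recall(y_true, y_pred, positive=1):
    # Precision: of everything predicted positive, how much was right.
    # Recall: of everything actually positive, how much was found.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

p, r = precision_recall([1, 1, 0, 0], [1, 0, 1, 0])
```

Here one of two positives is found (recall 0.5) and one of two positive predictions is correct (precision 0.5).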
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: A simple function Step2: We will examine the effect of varying phase and frequency Step3: Over a specific spatial area, sampled on a grid Step4: Compact data visualization Step5: As you can see, Image takes a 2D numpy array as input and Curve accepts a list of (x,y) points. With the + operator we can lay these elements out together and with .cols(2) we can arrange them into two columns. Step6: Now we create a high-dimensional HoloMap to explore Step7: You can still compose as many visualization elements as you wish together. Here is a demonstration of how to generate the horizontal cross-section of the circular wave using Curve elements. This is then positioned next to our circular wave Step8: You can then easily export your HoloMap objects to an interactive notebook, video formats, or GIF animations to use on a web page. Step9: Note that the spatial frequency of our curve plot is not affected by the frequency of our wave. That is because the phase always spans exactly one cycle at any of the chosen frequencies. Step10: Note that GIF support requires ImageMagick which is installed by default on many Linux installations and may be installed on OSX using brew. For more information on how to install ImageMagick (including Windows instructions) see the installation page. Step11: (The %% syntax for specifying options will be briefly explained in the next section below.) Step12: For convenience, the IPython-magic syntax for setting options like %%opts is used throughout these tutorials. However, you can use pure Python code to control the options in a similar way, e.g. 
from within an external non-IPython program where you want to render a HoloViews plot straight to a file, though it requires a few more curly brackets and quote marks Step13: Understanding your data Step14: HoloViews ensures that these differences are visible by default, by normalizing across any elements of the same type that are displayed together, and even across the frames of an animation Step15: This default visualization makes it clear that the two patterns differ greatly in amplitude. However, it is difficult to see the structure of the low-amplitude wave in B. If you wish to focus on the spatial structure rather than the amplitude, you can instruct HoloViews to normalize data in different axes separately
<ASSISTANT_TASK:> Python Code: import holoviews as hv import numpy as np hv.notebook_extension() def sine(x, phase=0, freq=100): return np.sin((freq * x + phase)) phases = np.linspace(0,2*np.pi,11) # Explored phases freqs = np.linspace(50,150,5) # Explored frequencies dist = np.linspace(-0.5,0.5,202) # Linear spatial sampling x,y = np.meshgrid(dist, dist) grid = (x**2+y**2) # 2D spatial sampling freq1 = hv.Image(sine(grid, freq=50)) + hv.Curve(zip(dist, sine(dist**2, freq=50))) freq2 = hv.Image(sine(grid, freq=200)) + hv.Curve(zip(dist, sine(dist**2, freq=200))) (freq1 + freq2).cols(2) dimensions = ['Phase', 'Frequency'] keys = [(p,f) for p in phases for f in freqs] items = [(k, hv.Image(sine(grid, *k), vdims=['Amplitude'])) for k in keys] circular_wave = hv.HoloMap(items, kdims=dimensions) circular_wave items = [(k, hv.Curve(zip(dist, sine(dist**2, *k)))) for k in keys] sections = hv.HoloMap(items, kdims=dimensions) circular_wave + sections sample_pos = (0,0.25) annotated = circular_wave * hv.Points([sample_pos]) sample = circular_wave.sample(samples=[sample_pos]).to.curve('Phase', 'Amplitude', ['Frequency']) annotated + sample %output holomap='gif' %%opts Image (cmap='gray') Contours (color='r') from holoviews.operation import contours, threshold, gradient m = hv.HoloMap([(p, hv.Image(sine(grid, phase=p))) for p in phases], kdims=['Phase']) contours(m, levels=[0.5]) + threshold(m, level=0.5) + gradient(m).hist(bin_range=(0,0.7)) %%opts Image (cmap='RdYlGn') Curve (color='g') (freq1 + freq2).cols(2) red_wave = circular_wave(options={'Image':{'style':{'cmap':'RdGy'}}}) red_wave comparison = hv.Image(sine(grid)) + hv.Image(sine(grid, phase=np.pi)*0.02) %%opts Image (cmap='gray') comparison = hv.Image(sine(grid)) + hv.Image(sine(grid, phase=np.pi)*0.02) comparison %%opts Image {+axiswise} (cmap='gray') comparison <END_TASK>
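The `+axiswise` option in the last cell switches from joint normalization (one shared data range across elements, the HoloViews default) to per-element normalization. Numerically the two behave like this sketch (plain numpy, not HoloViews internals):

```python
import numpy as np

def normalise(arrays, axiswise=False):
    # Joint: one shared min/max across all arrays (the default).
    # Axiswise: each array scaled to [0, 1] on its own (+axiswise).
    if axiswise:
        return [(a - a.min()) / (a.max() - a.min()) for a in arrays]
    lo = min(a.min() for a in arrays)
    hi = max(a.max() for a in arrays)
    return [(a - lo) / (hi - lo) for a in arrays]

# Full-amplitude wave vs the 0.02-amplitude wave from `comparison`.
a = np.array([0.0, 1.0])
b = np.array([0.0, 0.02])
joint = normalise([a, b])
per_element = normalise([a, b], axiswise=True)
```

Under joint normalization the weak wave stays nearly invisible (its values map near 0), while axiswise normalization stretches it to the full colour range — exactly the trade-off discussed in Step14/Step15.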
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Problem Step2: Python Libraries Step3: If you've successfully completed the above install, all of the following statements should run. Step4: If any of these libraries are missing or out of date, you will need to install them and restart IPython. Step5: Hello matplotlib Step6: If that last cell complained about the %matplotlib line, you need to update IPython to v1.0, and restart the notebook. See the installation page Step7: Print the maximum, minimum, and mean of the array. This does not require writing a loop. In the code cell below, type x.m&lt;TAB&gt;, to find built-in operations for common array statistics like this Step8: Call the x.max function again, but use the axis keyword to print the maximum of each row in x. Step9: Here's a way to quickly simulate 500 coin "fair" coin tosses (where the probabily of getting Heads is 50%, or 0.5) Step10: Repeat this simulation 500 times, and use the plt.hist() function to plot a histogram of the number of Heads (1s) in each simulation Step12: The Monty Hall Problem Step14: Next, write a function that simulates the contestant's guesses for nsim simulations. Call this function simulate_guess. The specs Step16: Next, write a function, goat_door, to simulate randomly revealing one of the goat doors that a contestant didn't pick. Step18: Write a function, switch_guess, that represents the strategy of always switching a guess after the goat door is opened. Step20: Last function Step21: Now, put it together. Simulate 10000 games where contestant keeps his original guess, and 10000 games where the contestant switches his door after a goat door is revealed. Compute the percentage of time the contestant wins under either strategy. Is one strategy better than the other?
<ASSISTANT_TASK:> Python Code: import sys print sys.version x = [10, 20, 30, 40, 50] for item in x: print "Item is... you guessed it! ", item !pip install BeautifulSoup seaborn pyquery #IPython is what you are using now to run the notebook import IPython print "IPython version: %6.6s (need at least 3.0.0)" % IPython.__version__ # Numpy is a library for working with Arrays import numpy as np print "Numpy version: %6.6s (need at least 1.9.1)" % np.__version__ # SciPy implements many different numerical algorithms import scipy as sp print "SciPy version: %6.6s (need at least 0.15.1)" % sp.__version__ # Pandas makes working with data tables easier import pandas as pd print "Pandas version: %6.6s (need at least 0.16.2)" % pd.__version__ # Module for plotting import matplotlib print "Mapltolib version: %6.6s (need at least 1.4.1)" % matplotlib.__version__ # SciKit Learn implements several Machine Learning algorithms import sklearn print "Scikit-Learn version: %6.6s (need at least 0.16.1)" % sklearn.__version__ # Requests is a library for getting data from the Web import requests print "requests version: %6.6s (need at least 2.0.0)" % requests.__version__ #BeautifulSoup is a library to parse HTML and XML documents import bs4 print "BeautifulSoup version:%6.6s (need at least 4.4)" % bs4.__version__ import pyquery print "Loaded PyQuery" # The %... is an iPython thing, and is not part of the Python language. # In this case we're just telling the plotting library to draw things on # the notebook, instead of on a separate window. %matplotlib inline #this line above prepares IPython notebook for working with matplotlib # See all the "as ..." contructs? They're just aliasing the package names. # That way we can call methods like plt.plot() instead of matplotlib.pyplot.plot(). 
import numpy as np # imports a fast numerical programming library import scipy as sp #imports stats functions, amongst other things import matplotlib as mpl # this actually imports matplotlib import matplotlib.cm as cm #allows us easy access to colormaps import matplotlib.pyplot as plt #sets up plotting under plt import pandas as pd #lets us handle data as dataframes #sets up pandas table display pd.set_option('display.width', 500) pd.set_option('display.max_columns', 100) pd.set_option('display.notebook_repr_html', True) import seaborn as sns #sets up styles and gives us more plotting options x = np.linspace(0, 10, 30) #array of 30 points from 0 to 10 y = np.sin(x) z = y + np.random.normal(size=30) * .2 plt.plot(x, y, 'o-', label='A sine wave') plt.plot(x, z, '--', label='Noisy sine') plt.legend(loc = 'lower right') plt.xlabel("X axis") plt.ylabel("Y axis") print "Make a 3 row x 4 column array of random numbers" x = np.random.random((3, 4)) print x print print "Add 1 to every element" x = x + 1 print x print print "Get the element at row 1, column 2" print x[1, 2] print # The colon syntax is called "slicing" the array. 
print "Get the first row" print x[0, :] print print "Get every 2nd column of the first row" print x[0, ::2] print print "Max is ", x.max() print "Min is ", x.min() print "Mean is ", x.mean() print x.max(axis=1) x = np.random.binomial(500, .5) print "number of heads:", x # 3 ways to run the simulations # loop heads = [] for i in range(500): heads.append(np.random.binomial(500, .5)) # "list comprehension" heads = [np.random.binomial(500, .5) for i in range(500)] print len(heads) # pure numpy heads = np.random.binomial(500, .5, size=500) histogram = plt.hist(heads, bins=10) heads.shape Function -------- simulate_prizedoor Generate a random array of 0s, 1s, and 2s, representing hiding a prize between door 0, door 1, and door 2 Parameters ---------- nsim : int The number of simulations to run Returns ------- sims : array Random array of 0s, 1s, and 2s Example ------- >>> print simulate_prizedoor(3) array([0, 0, 2]) def simulate_prizedoor(nsim): return np.random.randint(0, 3, (nsim)) Function -------- simulate_guess Return any strategy for guessing which door a prize is behind. This could be a random strategy, one that always guesses 2, whatever. Parameters ---------- nsim : int The number of simulations to generate guesses for Returns ------- guesses : array An array of guesses. Each guess is a 0, 1, or 2 Example ------- >>> print simulate_guess(5) array([0, 0, 0, 0, 0]) def simulate_guess(nsim): return np.zeros(nsim, dtype=int) Function -------- goat_door Simulate the opening of a "goat door" that doesn't contain the prize, and is different from the contestant's guess Parameters ---------- prizedoors : array The door that the prize is behind in each simulation guesses : array The door that the contestant guessed in each simulation Returns ------- goats : array The goat door that is opened for each simulation.
Each item is 0, 1, or 2, and is different from both prizedoors and guesses Examples -------- >>> print goat_door(np.array([0, 1, 2]), np.array([1, 1, 1])) >>> array([2, 2, 0]) def goat_door(prizedoors, guesses): #strategy: generate random answers, and #keep updating until they satisfy the rule #that they aren't a prizedoor or a guess result = np.random.randint(0, 3, prizedoors.size) while True: bad = (result == prizedoors) | (result == guesses) if not bad.any(): return result result[bad] = np.random.randint(0, 3, bad.sum()) Function -------- switch_guess The strategy that always switches a guess after the goat door is opened Parameters ---------- guesses : array Array of original guesses, for each simulation goatdoors : array Array of revealed goat doors for each simulation Returns ------- The new door after switching. Should be different from both guesses and goatdoors Examples -------- >>> print switch_guess(np.array([0, 1, 2]), np.array([1, 2, 1])) >>> array([2, 0, 0]) def switch_guess(guesses, goatdoors): result = np.zeros(guesses.size) switch = {(0, 1): 2, (0, 2): 1, (1, 0): 2, (1, 2): 1, (2, 0): 1, (2, 1): 0} for i in [0, 1, 2]: for j in [0, 1, 2]: mask = (guesses == i) & (goatdoors == j) if not mask.any(): continue result = np.where(mask, np.ones_like(result) * switch[(i, j)], result) return result Function -------- win_percentage Calculate the percent of times that a simulation of guesses is correct Parameters ----------- guesses : array Guesses for each simulation prizedoors : array Location of prize for each simulation Returns -------- percentage : number between 0 and 100 The win percentage Examples --------- >>> print win_percentage(np.array([0, 1, 2]), np.array([0, 0, 0])) 33.333 def win_percentage(guesses, prizedoors): return 100 * (guesses == prizedoors).mean() nsim = 10000 #keep guesses print "Win percentage when keeping original door" print win_percentage(simulate_prizedoor(nsim), simulate_guess(nsim)) #switch pd = simulate_prizedoor(nsim) guess = 
simulate_guess(nsim) goats = goat_door(pd, guess) guess = switch_guess(guess, goats) print "Win percentage when switching doors" print win_percentage(pd, guess).mean() <END_TASK>
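An aside on the result above: the switch strategy can be cross-checked without simulating the goat door at all, because switching wins exactly when the first guess was wrong. A minimal numpy sketch (my own cross-check, not part of the original homework):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

prizes = rng.integers(0, 3, n)   # door hiding the prize in each trial
guesses = rng.integers(0, 3, n)  # contestant's first pick

# Stick: win iff the first guess was right (probability 1/3).
stick_rate = (guesses == prizes).mean()

# Switch: the host removes the only other goat door, so switching
# wins iff the first guess was wrong (probability 2/3).
switch_rate = (guesses != prizes).mean()

print(stick_rate, switch_rate)
```

With 100,000 trials both rates land within a few tenths of a percent of 1/3 and 2/3, matching the loop-based simulation.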
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The structure attribute returns a tuple of tuples, where each inner tuple corresponds to that node in the graph (and the column of data learned on). The numbers in that inner tuple correspond to the parents of that node. The results from this structure are that node 3 has node 1 as a parent, that node 2 has node 0 as a parent, and so forth. It seems to faithfully recapture the underlying dependencies in the data.
Step2: These results show that the A* algorithm is both computationally faster and requires far less memory than the traditional algorithm, making it a better default for the 'exact' algorithm. The amount of memory used by the BNSL process is under 'increment', not 'peak memory', as 'peak memory' returns the total memory used by everything, while increment shows the difference in peak memory before and after the function has run.
Step3: Approximate Learning
Step4: Comparison
Step5: We can see the expected results -- that the A* algorithm works faster than the shortest path, the greedy one faster than that, and Chow-Liu the fastest. The purple and cyan lines superimpose on the right plot as they produce graphs with the same score, followed closely by the greedy algorithm and then Chow-Liu performing the worst.
Step6: This network contains three layers, with symptoms on the bottom (low energy, bloating, loss of appetite, vomiting, and abdominal cramps), diseases in the middle (ovarian cancer, lactose intolerance, and pregnancy), and genetic tests on the top for three different genetic mutations. The edges in this graph are constrained such that symptoms are explained by diseases, and diseases can be partially explained by genetic mutations. There are no edges from diseases to genetic conditions, and no edges from genetic conditions to symptoms.
If we were going to design a more efficient search algorithm, we would want to exploit this fact to drastically reduce the search space of graphs.
Step7: All variables corresponding to these categories would be put in their appropriate node. This would define a scaffold for structure learning.
Step8: In this graph, we're saying that variable 0 can be a parent for 1 or 2, and that variable 1 can be a parent for variable 2. In the same way that putting multiple variables in a node of the constraint graph allowed us to define layers, putting a single variable in the nodes of a constraint graph can allow us to define an ordering.
Step9: In this situation we would have to run the exponential time algorithm on the variables in node 0 to find the optimal parents, and then run the independent parents algorithm on the variables in node 1, drawing only from the variables in node 0. To be specific:
Step10: We see that the network was reconstructed perfectly here. Let's see what would happen if we didn't use the exact algorithm.
Step11: It looks like we got three desirable attributes by using a constraint graph. The first is that there was over an order of magnitude speed improvement in finding the optimal graph. The second is that we were able to remove some edges we didn't want in the final Bayesian network, such as those between 11, 13, and 14. We also removed the edges between 1 and 12 and between 1 and 3, which are spurious given the model that we originally defined. The third desired attribute is that we can specify the direction of some of the edges and get a better causal model.
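The order-of-magnitude speedup mentioned in Step 11 comes from the smaller parent-set search space that the layered constraint graph permits. A back-of-the-envelope count (my own rough sketch, not pomegranate's internal accounting; the top layer actually has no candidate parents at all, so this overstates the constrained case):

```python
# 15 binary variables, as in the example: an unconstrained structure search
# lets every variable draw parents from all 14 others, while the layered
# constraint graph restricts each variable's parents to the 5 variables
# in the layer above it.
n, layer = 15, 5

unconstrained = n * 2 ** (n - 1)  # up to 2^14 parent sets per variable
layered = n * 2 ** layer          # at most 2^5 parent sets per variable

print(unconstrained, layered, unconstrained // layered)
```

Even this crude count gives a factor of several hundred fewer candidate parent sets to score.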
<ASSISTANT_TASK:> Python Code: %pylab inline %load_ext memory_profiler from pomegranate import BayesianNetwork import seaborn, time seaborn.set_style('whitegrid') X = numpy.random.randint(2, size=(2000, 7)) X[:,3] = X[:,1] X[:,6] = X[:,1] X[:,0] = X[:,2] X[:,4] = X[:,5] model = BayesianNetwork.from_samples(X, algorithm='exact') print model.structure model.plot() from sklearn.datasets import load_digits X, y = load_digits(10, True) X = X > numpy.mean(X) plt.figure(figsize=(14, 4)) plt.subplot(131) plt.imshow(X[0].reshape(8, 8), interpolation='nearest') plt.grid(False) plt.subplot(132) plt.imshow(X[1].reshape(8, 8), interpolation='nearest') plt.grid(False) plt.subplot(133) plt.imshow(X[2].reshape(8, 8), interpolation='nearest') plt.grid(False) X = X[:,:18] tic = time.time() model = BayesianNetwork.from_samples(X, algorithm='exact-dp') # << BNSL done here! t1 = time.time() - tic p1 = model.log_probability(X).sum() tic = time.time() model = BayesianNetwork.from_samples(X, algorithm='exact') t2 = time.time() - tic p2 = model.log_probability(X).sum() print "Shortest Path" print "Time (s): ", t1 print "P(D|M): ", p1 %memit BayesianNetwork.from_samples(X, algorithm='exact-dp') print print "A* Search" print "Time (s): ", t2 print "P(D|M): ", p2 %memit BayesianNetwork.from_samples(X, algorithm='exact') tic = time.time() model = BayesianNetwork.from_samples(X) # << Default BNSL setting t = time.time() - tic p = model.log_probability(X).sum() print "Greedy" print "Time (s): ", t print "P(D|M): ", p %memit BayesianNetwork.from_samples(X) tic = time.time() model = BayesianNetwork.from_samples(X, algorithm='chow-liu') # << Default BNSL setting t = time.time() - tic p = model.log_probability(X).sum() print "Chow-Liu" print "Time (s): ", t print "P(D|M): ", p %memit BayesianNetwork.from_samples(X, algorithm='chow-liu') X, _ = load_digits(10, True) X = X > numpy.mean(X) t1, t2, t3, t4 = [], [], [], [] p1, p2, p3, p4 = [], [], [], [] n_vars = range(8, 19) for i in n_vars: X_ = 
X[:,:i] tic = time.time() model = BayesianNetwork.from_samples(X_, algorithm='exact-dp') # << BNSL done here! t1.append(time.time() - tic) p1.append(model.log_probability(X_).sum()) tic = time.time() model = BayesianNetwork.from_samples(X_, algorithm='exact') t2.append(time.time() - tic) p2.append(model.log_probability(X_).sum()) tic = time.time() model = BayesianNetwork.from_samples(X_, algorithm='greedy') t3.append(time.time() - tic) p3.append(model.log_probability(X_).sum()) tic = time.time() model = BayesianNetwork.from_samples(X_, algorithm='chow-liu') t4.append(time.time() - tic) p4.append(model.log_probability(X_).sum()) plt.figure(figsize=(14, 4)) plt.subplot(121) plt.title("Time to Learn Structure", fontsize=14) plt.xticks(fontsize=14) plt.yticks(fontsize=14) plt.ylabel("Time (s)", fontsize=14) plt.xlabel("Variables", fontsize=14) plt.plot(n_vars, t1, c='c', label="Exact Shortest") plt.plot(n_vars, t2, c='m', label="Exact A*") plt.plot(n_vars, t3, c='g', label="Greedy") plt.plot(n_vars, t4, c='r', label="Chow-Liu") plt.legend(fontsize=14, loc=2) plt.subplot(122) plt.title("$P(D|M)$ with Resulting Model", fontsize=14) plt.xlabel("Variables", fontsize=14) plt.ylabel("logp", fontsize=14) plt.plot(n_vars, p1, c='c', label="Exact Shortest") plt.plot(n_vars, p2, c='m', label="Exact A*") plt.plot(n_vars, p3, c='g', label="Greedy") plt.plot(n_vars, p4, c='r', label="Chow-Liu") plt.legend(fontsize=14) from pomegranate import DiscreteDistribution, ConditionalProbabilityTable, Node BRCA1 = DiscreteDistribution({0: 0.999, 1: 0.001}) BRCA2 = DiscreteDistribution({0: 0.985, 1: 0.015}) LCT = DiscreteDistribution({0: 0.950, 1: 0.050}) OC = ConditionalProbabilityTable([[0, 0, 0, 0.999], [0, 0, 1, 0.001], [0, 1, 0, 0.750], [0, 1, 1, 0.250], [1, 0, 0, 0.700], [1, 0, 1, 0.300], [1, 1, 0, 0.050], [1, 1, 1, 0.950]], [BRCA1, BRCA2]) LI = ConditionalProbabilityTable([[0, 0, 0.99], [0, 1, 0.01], [1, 0, 0.20], [1, 1, 0.80]], [LCT]) PREG = DiscreteDistribution({0: 0.90, 1: 0.10}) LE 
= ConditionalProbabilityTable([[0, 0, 0.99], [0, 1, 0.01], [1, 0, 0.25], [1, 1, 0.75]], [OC]) BLOAT = ConditionalProbabilityTable([[0, 0, 0, 0.85], [0, 0, 1, 0.15], [0, 1, 0, 0.70], [0, 1, 1, 0.30], [1, 0, 0, 0.40], [1, 0, 1, 0.60], [1, 1, 0, 0.10], [1, 1, 1, 0.90]], [OC, LI]) LOA = ConditionalProbabilityTable([[0, 0, 0, 0.99], [0, 0, 1, 0.01], [0, 1, 0, 0.30], [0, 1, 1, 0.70], [1, 0, 0, 0.95], [1, 0, 1, 0.05], [1, 1, 0, 0.95], [1, 1, 1, 0.05]], [PREG, OC]) VOM = ConditionalProbabilityTable([[0, 0, 0, 0, 0.99], [0, 0, 0, 1, 0.01], [0, 0, 1, 0, 0.80], [0, 0, 1, 1, 0.20], [0, 1, 0, 0, 0.40], [0, 1, 0, 1, 0.60], [0, 1, 1, 0, 0.30], [0, 1, 1, 1, 0.70], [1, 0, 0, 0, 0.30], [1, 0, 0, 1, 0.70], [1, 0, 1, 0, 0.20], [1, 0, 1, 1, 0.80], [1, 1, 0, 0, 0.05], [1, 1, 0, 1, 0.95], [1, 1, 1, 0, 0.01], [1, 1, 1, 1, 0.99]], [PREG, OC, LI]) AC = ConditionalProbabilityTable([[0, 0, 0, 0.95], [0, 0, 1, 0.05], [0, 1, 0, 0.01], [0, 1, 1, 0.99], [1, 0, 0, 0.40], [1, 0, 1, 0.60], [1, 1, 0, 0.20], [1, 1, 1, 0.80]], [PREG, LI]) s1 = Node(BRCA1, name="BRCA1") s2 = Node(BRCA2, name="BRCA2") s3 = Node(LCT, name="LCT") s4 = Node(OC, name="OC") s5 = Node(LI, name="LI") s6 = Node(PREG, name="PREG") s7 = Node(LE, name="LE") s8 = Node(BLOAT, name="BLOAT") s9 = Node(LOA, name="LOA") s10 = Node(VOM, name="VOM") s11 = Node(AC, name="AC") model = BayesianNetwork("Hut") model.add_nodes(s1, s2, s3, s4, s5, s6, s7, s8, s9, s10, s11) model.add_edge(s1, s4) model.add_edge(s2, s4) model.add_edge(s3, s5) model.add_edge(s4, s7) model.add_edge(s4, s8) model.add_edge(s4, s9) model.add_edge(s4, s10) model.add_edge(s5, s8) model.add_edge(s5, s10) model.add_edge(s5, s11) model.add_edge(s6, s9) model.add_edge(s6, s10) model.add_edge(s6, s11) model.bake() plt.figure(figsize=(14, 10)) model.plot() plt.show() import networkx from pomegranate.utils import plot_networkx constraints = networkx.DiGraph() constraints.add_edge('genetic conditions', 'diseases') constraints.add_edge('diseases', 'symptoms') 
plot_networkx(constraints) constraints = networkx.DiGraph() constraints.add_edge(0, 1) constraints.add_edge(1, 2) constraints.add_edge(0, 2) plot_networkx(constraints) constraints = networkx.DiGraph() constraints.add_edge(0, 1) constraints.add_edge(0, 0) plot_networkx(constraints) numpy.random.seed(6) X = numpy.random.randint(2, size=(200, 15)) X[:,1] = X[:,7] X[:,12] = 1 - X[:,7] X[:,5] = X[:,3] X[:,13] = X[:,11] X[:,14] = X[:,11] a = networkx.DiGraph() b = tuple((0, 1, 2, 3, 4)) c = tuple((5, 6, 7, 8, 9)) d = tuple((10, 11, 12, 13, 14)) a.add_edge(b, c) a.add_edge(c, d) print "Constraint Graph" plot_networkx(a) plt.show() print "Learned Bayesian Network" tic = time.time() model = BayesianNetwork.from_samples(X, algorithm='exact', constraint_graph=a) plt.figure(figsize=(16, 8)) model.plot() plt.show() print "pomegranate time: ", time.time() - tic, model.structure tic = time.time() model = BayesianNetwork.from_samples(X, algorithm='exact') plt.figure(figsize=(16, 8)) model.plot() plt.show() print "pomegranate time: ", time.time() - tic, model.structure constraint_times, times = [], [] x = numpy.arange(1, 7) for i in x: symptoms = tuple(range(i)) diseases = tuple(range(i, i*2)) genetic = tuple(range(i*2, i*3)) constraints = networkx.DiGraph() constraints.add_edge(genetic, diseases) constraints.add_edge(diseases, symptoms) X = numpy.random.randint(2, size=(2000, i*3)) tic = time.time() model = BayesianNetwork.from_samples(X, algorithm='exact', constraint_graph=constraints) constraint_times.append( time.time() - tic ) tic = time.time() model = BayesianNetwork.from_samples(X, algorithm='exact') times.append( time.time() - tic ) plt.figure(figsize=(14, 6)) plt.title('Time To Learn Bayesian Network', fontsize=18) plt.xlabel("Number of Variables", fontsize=14) plt.ylabel("Time (s)", fontsize=14) plt.xticks(fontsize=14) plt.yticks(fontsize=14) plt.plot( x*3, times, linewidth=3, color='c', label='Exact') plt.plot( x*3, constraint_times, linewidth=3, color='m', 
label='Constrained') plt.legend(loc=2, fontsize=16) plt.yscale('log') <END_TASK>
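The increment-versus-peak distinction that %memit reports (discussed in Step 2) can be reproduced with the standard library's tracemalloc. A minimal sketch, independent of pomegranate:

```python
import tracemalloc

tracemalloc.start()
before, _ = tracemalloc.get_traced_memory()
x = [0] * (10 ** 6)   # a list buffer of one million pointers, ~8 MB
after, _ = tracemalloc.get_traced_memory()
increment = after - before
tracemalloc.stop()

print(increment)      # roughly 8_000_000 bytes on a 64-bit build
```

Here `increment` is the extra memory attributable to the measured statement alone, while the total traced size also includes everything allocated before the measurement started.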
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Matching between spaces with different dimensions
Step2: We then run OTT's Gromov-Wasserstein solver to find a matching between the points of each geometry. In this tutorial, we define two point clouds (ott.geometry.pointcloud), but general geometries can be used as well (ott.geometry.geometry). The loss between the distance matrices of the two point clouds is by default the squared Euclidean loss.
Step3: The resulting transport matrix between the two point clouds is as follows:
Step4: The larger the regularisation parameter epsilon is, the more diffuse the transport matrix becomes, as we can see in the animation below.
Step5: To better understand the correspondence found by the GW solver, we plot in the same color, for each green, blue or red point in the spiral, the point in the Swiss roll it is most coupled to.
<ASSISTANT_TASK:> Python Code: !pip install ott-jax from IPython import display import jax from jax import numpy as jnp from jax import random import numpy as np from matplotlib import animation from matplotlib import cm import matplotlib.pyplot as plt import mpl_toolkits.mplot3d.axes3d as p3 import ott from ott.core import gromov_wasserstein as gw from ott.geometry import pointcloud # Samples spiral def sample_spiral( n, min_radius, max_radius, key, min_angle=0, max_angle=10, noise=1.): radius = jnp.linspace(min_radius, max_radius, n) angles = jnp.linspace(min_angle, max_angle, n) data = [] noise = jax.random.normal(key, (2, n)) * noise for i in range(n): x = (radius[i] + noise[0, i]) * jnp.cos(angles[i]) y = (radius[i] + noise[1, i]) * jnp.sin(angles[i]) data.append([x, y]) data = jnp.array(data) return data # Samples Swiss roll def sample_swiss_roll( n, min_radius, max_radius, length, key, min_angle=0, max_angle=10, noise=0.1): spiral = sample_spiral(n, min_radius, max_radius, key[0], min_angle, max_angle, noise) third_axis = jax.random.uniform(key[1], (n, 1)) * length swiss_roll = jnp.hstack((spiral[:, 0:1], third_axis, spiral[:, 1:])) return swiss_roll # Plots spiral and Swiss roll def plot(swiss_roll, spiral, colormap_angles_swiss_roll, colormap_angles_spiral): fig = plt.figure(figsize=(11, 5)) ax = fig.add_subplot(1, 3, 1) ax.scatter(spiral[:, 0], spiral[:, 1], c = colormap_angles_spiral) ax.set_ylim(-40, 30) ax.grid() ax = fig.add_subplot(1, 3, (2, 3), projection='3d') ax.view_init(7, -80) ax.scatter(swiss_roll[:, 0], swiss_roll[:, 1], swiss_roll[:, 2], c = colormap_angles_swiss_roll) ax.set_adjustable('box') plt.show() # Data parameters n_spiral = 400 n_swiss_roll = 500 length = 10 min_radius = 6 max_radius = 30 noise = 3 min_angle = 0 max_angle = 9 angle_shift = 3 # Seed seed=14 key = jax.random.PRNGKey(seed) key, *subkey = random.split(key, 4) spiral = sample_spiral(n_spiral, min_radius, max_radius, key=subkey[0], min_angle=min_angle + angle_shift, 
max_angle=max_angle + angle_shift, noise=noise) swiss_roll = sample_swiss_roll(n_swiss_roll, min_radius, max_radius, key=subkey[1:], length=length, min_angle=min_angle, max_angle=max_angle) plot(swiss_roll, spiral, 'blue', 'green') # Applies Gromov Wasserstein geom_x = pointcloud.PointCloud(x=spiral, y=spiral) geom_y = pointcloud.PointCloud(x=swiss_roll, y=swiss_roll) out = gw.gromov_wasserstein(geom_x=geom_x, geom_y=geom_y, epsilon=100., max_iterations=60, jit=True) print(f'The last Sinkhorn iteration has converged: {out.converged_sinkhorn[-1]}') print(f'The final regularised GW cost is: {out.reg_gw_cost:.3f}') transport = out.transport fig = plt.figure(figsize=(8, 6)) plt.imshow(transport, cmap='Purples') plt.xlabel('IDs of samples from the Swiss roll', fontsize=14) # IDs are ordered from center to outer part plt.ylabel('ID of samples from the spiral', fontsize=14) # IDs are ordered from center to outer part plt.colorbar() plt.show() # Animates the transport matrix fig = plt.figure(figsize=(8, 6)) im=plt.imshow(transport, cmap='Purples') plt.xlabel('IDs of samples from the Swiss roll', fontsize=14) # IDs are ordered from center to outer part plt.ylabel('IDs of samples from the spiral', fontsize=14) # IDs are ordered from center to outer part plt.colorbar() # Initialization function def init(): im.set_data(np.zeros(transport.shape)) return [im] # Animation function def animate(i): array = im.get_array() geom_x = pointcloud.PointCloud(x=spiral, y=spiral) geom_y = pointcloud.PointCloud(x=swiss_roll, y=swiss_roll) config = {'threshold': 1e-2} out = gw.gromov_wasserstein(geom_x=geom_x, geom_y=geom_y, epsilon=i, max_iterations=40, jit=True, sinkhorn_kwargs=config) im.set_array(out.transport) im.set_clim(0, jnp.max(out.transport[:])) return [im] # Call the animator anim = animation.FuncAnimation(fig, animate, init_func=init, frames=[100., 500., 750., 1000., 2000., 10000., 50000., 500000., 1000000.], interval=1500, blit=True) html = display.HTML(anim.to_jshtml()) 
display.display(html) plt.close() # For each sample from the spiral, we get the most coupled point from the Swiss roll indices_swiss_roll = jnp.array(np.argmax(transport, axis=1)) # Sets colors to visualise matching of some areas between each shape # IDs of spiral and Swiss roll are ordered from center to outer part colors_input_spiral = ['b']*40 + ['silver']*160 + ['g']*40 + ['silver']*90 + ['r']*40 + ['silver']*30 colors_swiss_roll = np.array(['silver']*500) colors_swiss_roll[indices_swiss_roll[:40]] = 'b' colors_swiss_roll[indices_swiss_roll[200:240]] = 'g' colors_swiss_roll[indices_swiss_roll[330:370]] = 'r' plot(swiss_roll, spiral, colors_swiss_roll, colors_input_spiral) <END_TASK>
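The behaviour described in Step 4, where a larger epsilon gives a more diffuse transport matrix, is a general property of entropic regularisation rather than anything specific to Gromov-Wasserstein. A from-scratch Sinkhorn sketch in plain numpy (uniform marginals assumed; an illustration, not OTT's implementation):

```python
import numpy as np

def sinkhorn_plan(C, epsilon, n_iter=500):
    # Entropic OT plan between uniform marginals for cost matrix C.
    n, m = C.shape
    a, b = np.full(n, 1 / n), np.full(m, 1 / m)
    K = np.exp(-C / epsilon)
    u = np.ones(n)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

def entropy(P):
    p = P[P > 0]
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(0)
x = rng.normal(size=(20, 2))
y = rng.normal(size=(20, 2))
C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)  # squared Euclidean cost

# The entropy of the plan grows with epsilon: the coupling spreads out.
entropies = [entropy(sinkhorn_plan(C, eps)) for eps in (0.1, 1.0, 10.0)]
print(entropies)
```

As epsilon grows the plan approaches the independent coupling of the two marginals, which is exactly the diffuse pattern seen in the animation.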
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We know from the previous questions that everything will have to be repeated several times, so we take care to write each question as a function. The setting is a wedding in a round room. We want to lay out the tables so that they are as far as possible from one another and from the edge of the room. The tables are all round and all the same size.
Step2: Q2 - distance_bord
Step3: Q3 - table_alea
Step4: Another way to do it is with polar coordinates.
Step5: Q4 - n_table_alea
Step6: Q5 - table_proches
Step7: Q6 - distance_n_tables_alea
Step8: Q7 - meilleur_table_alea
Step9: Q8 - numerical result
Step10: Q9 - plot_tables
Step11: Q10 ...
Step12: Q10 - Voronoi
Step13: We add the boundary of the room.
Step14: The Voronoi diagram makes it possible to build a neighborhood of points that can be nudged toward a set of near-equilateral triangles, or to move each table as close as possible to its farthest frontier. After a few tries, I cannot say this was the best inspiration.
Step15: The centers of the clusters are the table locations we are looking for.
Step16: We try a mixture of Gaussians instead.
<ASSISTANT_TASK:> Python Code: from jyquickhelper import add_notebook_menu add_notebook_menu() def distance_table(x1, y1, x2, y2): return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 distance_table(0, 0, 2, 1) def distance_bord(x1, y1, R): dist = distance_table(x1, y1, 0, 0) return R - dist distance_bord(1, 1, 5) distance_bord(10, 1, 5) import random def table_alea(R): R2 = R ** 2 dist = R2 * 2 while dist > R2: x = random.uniform(-R, R) y = random.uniform(-R, R) dist = x ** 2 + y ** 2 return x, y table_alea(5) import math def table_alea_pol(C): t = random.uniform(0, math.pi) y = random.uniform(0, C) x = math.cos(t) * t y = math.sin(t) * t return x, y table_alea_pol(5) def n_table_alea(N, R): return [table_alea(R) for n in range(N)] n_table_alea(3, 5) def table_proches(x1, y1, list_tables, R, skip_i): proche = -1 best = distance_bord(x1, y1, R) for i, table in enumerate(list_tables): if i == skip_i: continue dist = distance_table(x1, y1, table[0], table[1]) if dist < best: best = dist proche = i return proche, best R = 5 list_tables = n_table_alea(3, R) table_proches(1, 1, list_tables, R, None) table_proches(R * 0.9, 0, list_tables, R, None) def distance_n_tables_alea(N, R): distrib = n_table_alea(N, R) best = R ** 2 for i, table in enumerate(distrib): proche, dist = table_proches(table[0], table[1], distrib, R, skip_i=i) if dist < best: best = dist return best, distrib distance_n_tables_alea(3, R) def meilleur_table_alea(k, N, R): dist = 0 best = None for i in range(k): d, distrib = distance_n_tables_alea(N, R) if d > dist: best = distrib dist = d return best, dist meilleur_table_alea(10, 3, R) best, dist = meilleur_table_alea(10, 11, 1) best, dist %matplotlib inline import matplotlib.pyplot as plt from matplotlib.patches import Circle fig, ax = plt.subplots(1, 1, figsize=(4, 4)) R = 1 ax.set_xlim([-R, R]) ax.set_ylim([-R, R]) c = Circle((0, 0), 1, alpha=0.2) ax.add_artist(c) ax.plot([b[0] for b in best], [b[1] for b in best], 'o'); import numpy def 
improve_distrib(iter, tables, R, alpha=0.2): for it in range(iter): # Find the closest pair of tables. best = R ** 2 pair = None for i, table in enumerate(tables): proche, dist = table_proches(table[0], table[1], tables, R, skip_i=i) if dist < best: best = dist pair = i, proche if it % 50 == 0: print(it, "pair", pair, "distance", best) # Pick one of the two tables at random. if pair[0] == -1: i = 1 elif pair[1] == -1: i = 0 else: i = numpy.random.randint(0, 2) pi = pair[i] if pair[1-i] == -1: pjp = (0, 0) sign = 1 else: pjp = tables[pair[1-i]] sign = -1 # Compute the vector between the two tables. dx, dy = (pjp[0] - tables[pi][0], pjp[1] - tables[pi][1]) # A bit of randomness. h = numpy.random.uniform(0, alpha) dx *= h * sign dy *= h * sign # Move the table. table = tables[pi] tables[pi] = (table[0] + dx, table[1] + dy) if distance_bord(tables[pi][0], tables[pi][1], R) < 0: # Table outside the circle tables[pi] = (table[0] - dx, table[1] - dy) R = 1 best_sol, dist = meilleur_table_alea(10, 11, R) improve_distrib(351, best_sol, R, alpha=0.5) fig, ax = plt.subplots(1, 1, figsize=(4, 4)) ax.set_xlim([-R, R]) ax.set_ylim([-R, R]) c = Circle((0, 0), 1, alpha=0.2) ax.add_artist(c) ax.plot([b[0] for b in best_sol], [b[1] for b in best_sol], 'o'); from scipy.spatial import Voronoi, voronoi_plot_2d, Delaunay points = numpy.array(best_sol) vor = Voronoi(points) dela = Delaunay(points) fig, ax = plt.subplots(1, 1, figsize=(4, 4)) c = Circle((0, 0), 1, alpha=0.2) ax.add_artist(c) ax.plot([b[0] for b in
best_sol], [b[1] for b in best_sol], 'o') vor2 = Voronoi(points2) dela2 = Delaunay(points2) voronoi_plot_2d(vor2, ax=ax) ax.triplot(points2[:,0], points2[:,1], dela2.simplices.copy(), "--") ax.set_xlim([-R, R]) ax.set_ylim([-R, R]); def points_in_circle(N, R): points = numpy.empty(((N+1)**2, 2)) pos = 0 for i in range(0, N + 1): for j in range(0, N + 1): points[pos, 0] = 1.0 * i / N * R * 2 - R points[pos, 1] = 1.0 * j / N * R * 2 - R pos += 1 dist = points[:, 0] ** 2 + points[:, 1] ** 2 points = points[dist <= R**2, :] return points R = 1 points = points_in_circle(25, R) fig, ax = plt.subplots(1, 1, figsize=(4, 4)) c = Circle((0, 0), R, alpha=0.2) ax.add_artist(c) ax.plot(points[:, 0], points[:, 1], '.'); from sklearn.cluster import KMeans km = KMeans(n_clusters=11) km.fit(points) pred = km.predict(points) fig, ax = plt.subplots(1, 2, figsize=(10, 4)) c = Circle((0, 0), 1, alpha=0.2) ax[0].add_artist(c) ax[0].set_xlim([-R, R]) ax[0].set_ylim([-R, R]) ax[0].scatter(points[:, 0], points[:, 1], c=pred) c = Circle((0, 0), 1, alpha=0.2) ax[1].add_artist(c) ax[1].set_xlim([-R, R]) ax[1].set_ylim([-R, R]) ax[1].plot(km.cluster_centers_[:, 0], km.cluster_centers_[:, 1], 'o') vor2 = Voronoi(km.cluster_centers_) voronoi_plot_2d(vor2, ax=ax[1]) ax[1].set_title("Centres des clusters - KMeans") ax[1].set_xlim([-R, R]) ax[1].set_ylim([-R, R]); def distance_n_tables(distrib, R): best = R ** 2 for i, table in enumerate(distrib): proche, dist = table_proches(table[0], table[1], distrib, R, skip_i=i) if dist < best: best = dist return best distance_n_tables(km.cluster_centers_, 1), distance_n_tables(best_sol, 1) from sklearn.mixture import GaussianMixture gau = GaussianMixture(11) gau.fit(points) pred = gau.predict(points) fig, ax = plt.subplots(1, 2, figsize=(10, 4)) c = Circle((0, 0), 1, alpha=0.2) ax[0].add_artist(c) ax[0].set_xlim([-R, R]) ax[0].set_ylim([-R, R]) ax[0].scatter(points[:, 0], points[:, 1], c=pred) c = Circle((0, 0), 1, alpha=0.2) ax[1].add_artist(c) 
ax[1].set_xlim([-R, R]) ax[1].set_ylim([-R, R]) ax[1].plot(gau.means_[:, 0], gau.means_[:, 1], 'o') vor2 = Voronoi(gau.means_) voronoi_plot_2d(vor2, ax=ax[1]) ax[1].set_title("Centres des clusters - gaussian mixture") ax[1].set_xlim([-R, R]) ax[1].set_ylim([-R, R]); distance_n_tables(km.cluster_centers_, 1), distance_n_tables(gau.means_, 1) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: 数据维度 Step2: 数据可视化
<ASSISTANT_TASK:> Python Code: from __future__ import print_function from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets("./mnist/", one_hot=True) # Load data x_train = mnist.train.images y_train = mnist.train.labels x_test = mnist.test.images y_test = mnist.test.labels print("x_train: ", x_train.shape) print("y_train: ", y_train.shape) print("x_test: ", x_test.shape) print("y_test: ", y_test.shape) import matplotlib.pyplot as plt import numpy as np %matplotlib inline def plot_mnist(data, classes): for i in range(10): idxs = (classes == i) # get 10 images for class i images = data[idxs][0:10] for j in range(5): plt.subplot(5, 10, i + j*10 + 1) plt.imshow(images[j].reshape(28, 28), cmap='gray') # print a title only once for each class if j == 0: plt.title(i) plt.axis('off') plt.show() classes = np.argmax(y_train, 1) plot_mnist(x_train, classes) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Environment Preparation Step2: Distributed automl for time series forecasting using Chronos AutoTS Step 0 Step3: Step 1 Step4: This is the only place where you need to specify local or distributed mode. View Orca Context for more details. Note that argument init_ray_on_spark must be True for Chronos. Step5: Step 2 Step6: Step 3 Step7: Step 4 Step8: Step 5
<ASSISTANT_TASK:> Python Code: #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Install latest pre-release version of bigdl-chronos # Installing bigdl-chronos from pip will automatically install pyspark, bigdl, and their dependencies. !pip install --pre --upgrade bigdl-chronos[all] !pip uninstall -y torchtext # uninstall torchtext to avoid version conflict exit() # restart the runtime to refresh installed pkg # download the dataset !wget https://raw.githubusercontent.com/numenta/NAB/v1.0/data/realKnownCause/nyc_taxi.csv # load the dataset. The downloaded dataframe contains two columns, "timestamp" and "value". import pandas as pd df = pd.read_csv("nyc_taxi.csv", parse_dates=["timestamp"]) # import necesary libraries and modules from bigdl.orca import init_orca_context, stop_orca_context from bigdl.orca import OrcaContext # recommended to set it to True when running bigdl-chronos in Jupyter notebook OrcaContext.log_output = True # (this will display terminal's stdout and stderr in the Jupyter notebook). 
init_orca_context(cluster_mode="local", cores=4, init_ray_on_spark=True) from bigdl.chronos.data import TSDataset from sklearn.preprocessing import StandardScaler tsdata_train, tsdata_val, tsdata_test = TSDataset.from_pandas(df, # the dataframe to load dt_col="timestamp", # the column name specifying datetime target_col="value", # the column name to predict with_split=True, # split the dataset into 3 parts val_ratio=0.1, # validation set ratio test_ratio=0.1) # test set ratio # for each tsdataset, we # 1. generate datetime feature columns. # 2. impute the dataset with last occured value. # 3. scale the dataset with standard scaler, fit = true for train data. standard_scaler = StandardScaler() for tsdata in [tsdata_train, tsdata_val, tsdata_test]: tsdata.gen_dt_feature()\ .impute(mode="last")\ .scale(standard_scaler, fit=(tsdata is tsdata_train)) import bigdl.orca.automl.hp as hp from bigdl.chronos.autots import AutoTSEstimator auto_estimator = AutoTSEstimator(model='lstm', # the model name used for training search_space='normal', # a default hyper parameter search space past_seq_len=hp.randint(1, 10)) # hp sampling function of past_seq_len for auto-tuning # fit with AutoTSEstimator for a returned TSPipeline ts_pipeline = auto_estimator.fit(data=tsdata_train, # train dataset validation_data=tsdata_val, # validation dataset epochs=5) # number of epochs to train in each trial # predict with the best trial y_pred = ts_pipeline.predict(tsdata_test) # evaluate the result pipeline mse, smape = ts_pipeline.evaluate(tsdata_test, metrics=["mse", "smape"]) print("Evaluate: the mean square error is", mse) print("Evaluate: the smape value is", smape) # plot the result import matplotlib.pyplot as plt lookback = auto_estimator.get_best_config()['past_seq_len'] groundtruth_unscale = tsdata_test.unscale().to_pandas()[lookback - 1:] plt.figure(figsize=(16,6)) plt.plot(groundtruth_unscale["timestamp"], y_pred[:,0,0]) plt.plot(groundtruth_unscale["timestamp"], 
groundtruth_unscale["value"]) plt.legend(["prediction", "ground truth"]) # save the pipeline my_ppl_file_path = "/tmp/saved_pipeline" ts_pipeline.save(my_ppl_file_path) # restore the pipeline for further deployment from bigdl.chronos.autots import TSPipeline loaded_ppl = TSPipeline.load(my_ppl_file_path) # Stop orca context when your program finishes stop_orca_context() # show a tensorboard view %load_ext tensorboard %tensorboard --logdir /tmp/autots_estimator/autots_estimator_leaderboard/ <END_TASK>
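The `from_pandas` call above splits the series chronologically with `val_ratio=0.1` and `test_ratio=0.1`. As a rough illustration of what a ratio-based chronological split works out to (a plain-Python sketch under our own assumptions, not Chronos's actual implementation; the helper name is made up):

```python
# Hypothetical helper: chronological train/val/test split by ratio.
def split_by_ratio(n_rows, val_ratio, test_ratio):
    n_test = int(n_rows * test_ratio)
    n_val = int(n_rows * val_ratio)
    n_train = n_rows - n_val - n_test  # earliest rows form the train set
    return (range(0, n_train),
            range(n_train, n_train + n_val),
            range(n_train + n_val, n_rows))

train_idx, val_idx, test_idx = split_by_ratio(1000, val_ratio=0.1, test_ratio=0.1)
```

With 1000 rows this yields an 800/100/100 split, the validation block sitting between train and test in time order.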
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Filter the DataFrame down to the rows whose Category value appears in filter_list, using DataFrame.query with an @-referenced local variable.
<ASSISTANT_TASK:> Python Code: import pandas as pd df=pd.DataFrame({"Category":['Foo','Bar','Cho','Foo'],'Index':[1,2,3,4]}) filter_list=['Foo','Bar'] def g(df, filter_list): return df.query("Category == @filter_list") result = g(df.copy(), filter_list) <END_TASK>
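The `query("Category == @filter_list")` expression above relies on pandas treating an `==` comparison against an `@`-referenced list as a membership test (equivalent to `df[df["Category"].isin(filter_list)]`). A dependency-free sketch of the same membership filter over rows:

```python
# Plain-Python analogue of the pandas filter above: keep rows whose
# Category is in filter_list.
rows = [
    {"Category": "Foo", "Index": 1},
    {"Category": "Bar", "Index": 2},
    {"Category": "Cho", "Index": 3},
    {"Category": "Foo", "Index": 4},
]
filter_list = {"Foo", "Bar"}  # a set makes each membership test O(1)
result = [r for r in rows if r["Category"] in filter_list]
```

As in the pandas version, original row order is preserved and only the "Cho" row is dropped.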
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: 1. Install CFNCluster Step2: 2. Upgrade CFNCluster Step3: 3. Configure CFNCluster Step4: After you finish configuration, you can call the below function to double check if your settings are correct. Step5: To create a new cluster, you need to set a cluster name and then call the below function. After the creation is complete, you will see the output information about your cluser IP address. Step6: 4. Manage cluster Step7: After the job is done, you can call the below function to close the connection. Step8: To delete the cluster, you just need to set the cluster name and call the below function.
<ASSISTANT_TASK:> Python Code: import os import sys sys.path.append(os.getcwd().replace("notebooks", "cfncluster")) ## Input the AWS account access keys aws_access_key_id = "/**aws_access_key_id**/" aws_secret_access_key = "/**aws_secret_access_key**/" ## CFNCluster name your_cluster_name = "cluster_name" ## The private key pair for accessing cluster. private_key = "/path/to/private_key.pem" ## If delete cfncluster after job is done. delete_cfncluster = False import CFNClusterManager CFNClusterManager.install_cfn_cluster() import CFNClusterManager CFNClusterManager.upgrade_cfn_cluster() import CFNClusterManager ## Configure cfncluster settings CFNClusterManager.insert_access_keys(aws_access_key_id=aws_access_key_id, aws_secret_access_key=aws_secret_access_key) CFNClusterManager.config_key_name(private_key) CFNClusterManager.config_instance_types(master_instance_type="m3.large", compute_instance_type="r3.2xlarge") CFNClusterManager.config_initial_cluster_size(initial_cluster_size="0") CFNClusterManager.config_spot_price(spot_price="0.7") CFNClusterManager.config_volume_size(volume_size="300") CFNClusterManager.config_ebs_snapshot_id(ebs_snapshot_id="snap-5faff708") CFNClusterManager.config_aws_region_name(aws_region_name="us-west-2") CFNClusterManager.config_post_install(post_install="s3://path/to/postinstall.sh") CFNClusterManager.config_vpc_subnet_id(master_subnet_id="subnet-00000000", vpc_id="vpc-00000000") CFNClusterManager.config_s3_resource(s3_read_resource="s3://bucket_name/", s3_read_write_resource="s3://bucket_name/") CFNClusterManager.view_cfncluster_config() CFNClusterManager.list_cfn_cluster() master_ip_address = CFNClusterManager.create_cfn_cluster(cluster_name=your_cluster_name) import ConnectionManager ssh_client = ConnectionManager.connect_master(hostname=master_ip_address, username="ec2-user", private_key_file=private_key) ConnectionManager.close_connection(ssh_client) import CFNClusterManager if delete_cfncluster == True: 
CFNClusterManager.delete_cfn_cluster(cluster_name=your_cluster_name) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Loading The Dataset Step2: Normalising The Data Step3: Printing the shape of the Datasets Step4: ## Reshape To Match The Keras's Expectations Step5: Linear Model Step8: Basic Simple Plot And Evaluation Step9: Activations Look Like What? Step10: Let's see the activation of the 2nd channel of the first layer Step11: Let's plot the activations of the other conv layers as well. Step12: Classifcation Report
<ASSISTANT_TASK:> Python Code: import numpy as np # linear algebra import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv) import matplotlib.pyplot as plt #for plotting from collections import Counter from sklearn.metrics import confusion_matrix import itertools import seaborn as sns from subprocess import check_output print(check_output(["ls", "../input"]).decode("utf8")) %matplotlib inline #loading the dataset.......(Train) train = pd.read_csv("../input/train.csv") print(train.shape) train.head() z_train = Counter(train['label']) z_train sns.countplot(train['label']) #loading the dataset.......(Test) test = pd.read_csv("../input/test.csv") print(test.shape) test.head() x_train = (train.iloc[:,1:].values).astype('float32') # all pixel values y_train = train.iloc[:,0].values.astype('int32') # only labels i.e. target digits x_test = test.values.astype('float32') %matplotlib inline # preview the images first plt.figure(figsize=(12,10)) x, y = 10, 4 for i in range(40): plt.subplot(y, x, i+1) plt.imshow(x_train[i].reshape((28,28)),interpolation='nearest') plt.show() x_train = x_train/255.0 x_test = x_test/255.0 y_train print('x_train shape:', x_train.shape) print(x_train.shape[0], 'train samples') print(x_test.shape[0], 'test samples') X_train = x_train.reshape(x_train.shape[0], 28, 28,1) X_test = x_test.reshape(x_test.shape[0], 28, 28,1) import keras from keras.models import Sequential from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPool2D from keras.layers.normalization import BatchNormalization from keras.preprocessing.image import ImageDataGenerator from keras.callbacks import ReduceLROnPlateau from sklearn.model_selection import train_test_split batch_size = 64 num_classes = 10 epochs = 20 input_shape = (28, 28, 1) # convert class vectors to binary class matrices One Hot Encoding y_train = keras.utils.to_categorical(y_train, num_classes) X_train, X_val, Y_train, Y_val = train_test_split(X_train, y_train, test_size = 0.1, random_state=42)
model = Sequential() model.add(Conv2D(32, kernel_size=(3, 3),activation='relu',kernel_initializer='he_normal',input_shape=input_shape)) model.add(Conv2D(32, kernel_size=(3, 3),activation='relu',kernel_initializer='he_normal')) model.add(MaxPool2D((2, 2))) model.add(Dropout(0.20)) model.add(Conv2D(64, (3, 3), activation='relu',padding='same',kernel_initializer='he_normal')) model.add(Conv2D(64, (3, 3), activation='relu',padding='same',kernel_initializer='he_normal')) model.add(MaxPool2D(pool_size=(2, 2))) model.add(Dropout(0.25)) model.add(Conv2D(128, (3, 3), activation='relu',padding='same',kernel_initializer='he_normal')) model.add(Dropout(0.25)) model.add(Flatten()) model.add(Dense(128, activation='relu')) model.add(BatchNormalization()) model.add(Dropout(0.25)) model.add(Dense(num_classes, activation='softmax')) model.compile(loss=keras.losses.categorical_crossentropy, optimizer=keras.optimizers.RMSprop(), metrics=['accuracy']) learning_rate_reduction = ReduceLROnPlateau(monitor='val_acc', patience=3, verbose=1, factor=0.5, min_lr=0.0001) datagen = ImageDataGenerator( featurewise_center=False, # set input mean to 0 over the dataset samplewise_center=False, # set each sample mean to 0 featurewise_std_normalization=False, # divide inputs by std of the dataset samplewise_std_normalization=False, # divide each input by its std zca_whitening=False, # apply ZCA whitening rotation_range=15, # randomly rotate images in the range (degrees, 0 to 180) zoom_range = 0.1, # Randomly zoom image width_shift_range=0.1, # randomly shift images horizontally (fraction of total width) height_shift_range=0.1, # randomly shift images vertically (fraction of total height) horizontal_flip=False, # randomly flip images vertical_flip=False) # randomly flip images model.summary() datagen.fit(X_train) h = model.fit_generator(datagen.flow(X_train,Y_train, batch_size=batch_size), epochs = epochs, validation_data = (X_val,Y_val), verbose = 1, steps_per_epoch=X_train.shape[0] // batch_size , 
callbacks=[learning_rate_reduction],) final_loss, final_acc = model.evaluate(X_val, Y_val, verbose=0) print("Final loss: {0:.6f}, final accuracy: {1:.6f}".format(final_loss, final_acc)) # Look at confusion matrix #Note, this code is taken straight from the SKLEARN website, a nice way of viewing confusion matrix. def plot_confusion_matrix(cm, classes, normalize=False, title='Confusion matrix', cmap=plt.cm.Blues): """This function prints and plots the confusion matrix. Normalization can be applied by setting `normalize=True`.""" plt.imshow(cm, interpolation='nearest', cmap=cmap) plt.title(title) plt.colorbar() tick_marks = np.arange(len(classes)) plt.xticks(tick_marks, classes, rotation=45) plt.yticks(tick_marks, classes) if normalize: cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis] thresh = cm.max() / 2. for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])): plt.text(j, i, cm[i, j], horizontalalignment="center", color="white" if cm[i, j] > thresh else "black") plt.tight_layout() plt.ylabel('True label') plt.xlabel('Predicted label') # Predict the values from the validation dataset Y_pred = model.predict(X_val) # Convert predictions classes to one hot vectors Y_pred_classes = np.argmax(Y_pred, axis = 1) # Convert validation observations to one hot vectors Y_true = np.argmax(Y_val, axis = 1) # compute the confusion matrix confusion_mtx = confusion_matrix(Y_true, Y_pred_classes) # plot the confusion matrix plot_confusion_matrix(confusion_mtx, classes = range(10)) print(h.history.keys()) accuracy = h.history['acc'] val_accuracy = h.history['val_acc'] loss = h.history['loss'] val_loss = h.history['val_loss'] epochs = range(len(accuracy)) plt.plot(epochs, accuracy, 'bo', label='Training accuracy') plt.plot(epochs, val_accuracy, 'b', label='Validation accuracy') plt.title('Training and validation accuracy') plt.legend() plt.show() plt.figure() plt.plot(epochs, loss, 'bo', label='Training loss') plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss') plt.legend() plt.show() # Errors are difference between predicted labels and true labels errors = (Y_pred_classes - Y_true != 0) Y_pred_classes_errors = Y_pred_classes[errors] Y_pred_errors = Y_pred[errors] Y_true_errors = Y_true[errors] X_val_errors = X_val[errors] def display_errors(errors_index,img_errors,pred_errors, obs_errors): """This function shows 6 images with their predicted and real labels""" n = 0 nrows = 2 ncols = 3 fig, ax = plt.subplots(nrows,ncols,sharex=True,sharey=True) for row in range(nrows): for col in range(ncols): error = errors_index[n] ax[row,col].imshow((img_errors[error]).reshape((28,28))) ax[row,col].set_title("Predicted label :{}\nTrue label :{}".format(pred_errors[error],obs_errors[error])) n += 1 # Probabilities of the wrong predicted numbers Y_pred_errors_prob = np.max(Y_pred_errors,axis = 1) # Predicted probabilities of the true values in the error set true_prob_errors = np.diagonal(np.take(Y_pred_errors, Y_true_errors, axis=1)) # Difference between the probability of the predicted label and the true label delta_pred_true_errors = Y_pred_errors_prob - true_prob_errors # Sorted list of the delta prob errors sorted_dela_errors = np.argsort(delta_pred_true_errors) # Top 6 errors most_important_errors = sorted_dela_errors[-6:] # Show the top 6 errors display_errors(most_important_errors, X_val_errors, Y_pred_classes_errors, Y_true_errors) test_im = X_train[154] plt.imshow(test_im.reshape(28,28), cmap='viridis', interpolation='none') from keras import models layer_outputs = [layer.output for layer in model.layers[:8]] activation_model = models.Model(input=model.input, output=layer_outputs) activations = activation_model.predict(test_im.reshape(1,28,28,1)) first_layer_activation = activations[0] plt.matshow(first_layer_activation[0, :, :, 4], cmap='viridis') model.layers[:-1] # Dropping The Last Dense Layer layer_names = [] for layer in model.layers[:-1]: layer_names.append(layer.name) images_per_row =
16 for layer_name, layer_activation in zip(layer_names, activations): if layer_name.startswith('conv'): n_features = layer_activation.shape[-1] size = layer_activation.shape[1] n_cols = n_features // images_per_row display_grid = np.zeros((size * n_cols, images_per_row * size)) for col in range(n_cols): for row in range(images_per_row): channel_image = layer_activation[0,:, :, col * images_per_row + row] channel_image -= channel_image.mean() channel_image /= channel_image.std() channel_image *= 64 channel_image += 128 channel_image = np.clip(channel_image, 0, 255).astype('uint8') display_grid[col * size : (col + 1) * size, row * size : (row + 1) * size] = channel_image scale = 1. / size plt.figure(figsize=(scale * display_grid.shape[1], scale * display_grid.shape[0])) plt.title(layer_name) plt.grid(False) plt.imshow(display_grid, aspect='auto', cmap='viridis') layer_names = [] for layer in model.layers[:-1]: layer_names.append(layer.name) images_per_row = 16 for layer_name, layer_activation in zip(layer_names, activations): if layer_name.startswith('max'): n_features = layer_activation.shape[-1] size = layer_activation.shape[1] n_cols = n_features // images_per_row display_grid = np.zeros((size * n_cols, images_per_row * size)) for col in range(n_cols): for row in range(images_per_row): channel_image = layer_activation[0,:, :, col * images_per_row + row] channel_image -= channel_image.mean() channel_image /= channel_image.std() channel_image *= 64 channel_image += 128 channel_image = np.clip(channel_image, 0, 255).astype('uint8') display_grid[col * size : (col + 1) * size, row * size : (row + 1) * size] = channel_image scale = 1. 
/ size plt.figure(figsize=(scale * display_grid.shape[1], scale * display_grid.shape[0])) plt.title(layer_name) plt.grid(False) plt.imshow(display_grid, aspect='auto', cmap='viridis') layer_names = [] for layer in model.layers[:-1]: layer_names.append(layer.name) images_per_row = 16 for layer_name, layer_activation in zip(layer_names, activations): if layer_name.startswith('drop'): n_features = layer_activation.shape[-1] size = layer_activation.shape[1] n_cols = n_features // images_per_row display_grid = np.zeros((size * n_cols, images_per_row * size)) for col in range(n_cols): for row in range(images_per_row): channel_image = layer_activation[0,:, :, col * images_per_row + row] channel_image -= channel_image.mean() channel_image /= channel_image.std() channel_image *= 64 channel_image += 128 channel_image = np.clip(channel_image, 0, 255).astype('uint8') display_grid[col * size : (col + 1) * size, row * size : (row + 1) * size] = channel_image scale = 1. / size plt.figure(figsize=(scale * display_grid.shape[1], scale * display_grid.shape[0])) plt.title(layer_name) plt.grid(False) plt.imshow(display_grid, aspect='auto', cmap='viridis') #get the predictions for the test data predicted_classes = model.predict_classes(X_test) #get the indices to be plotted y_true = test.iloc[:, 0] correct = np.nonzero(predicted_classes==y_true)[0] incorrect = np.nonzero(predicted_classes!=y_true)[0] from sklearn.metrics import classification_report target_names = ["Class {}".format(i) for i in range(num_classes)] print(classification_report(y_true, predicted_classes, target_names=target_names)) submissions=pd.DataFrame({"ImageId": list(range(1,len(predicted_classes)+1)), "Label": predicted_classes}) submissions.to_csv("asd.csv", index=False, header=True) model.save('my_model_1.h5') json_string = model.to_json() <END_TASK>
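The activation-grid loops above repeatedly post-process each channel for display: standardize to mean 0 / std 1, scale by 64, shift by 128, and clip to [0, 255] before casting to uint8. A small pure-Python version of that transform makes the arithmetic easy to check (the helper name is ours, not from the notebook):

```python
# Rescale one "channel" of activations to displayable 8-bit values,
# mirroring the mean/std -> *64 -> +128 -> clip pipeline above.
def to_display_units(values):
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    std = var ** 0.5 or 1.0  # guard against flat (zero-variance) channels
    out = []
    for v in values:
        u = (v - mean) / std * 64 + 128
        out.append(int(min(max(u, 0), 255)))  # clip to [0, 255]
    return out

pixels = to_display_units([0.0, 1.0, 2.0, 3.0])
```

A linear input ramp maps onto values centred around 128, one standard deviation spanning 64 grey levels.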
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Visualize the Data Step2: Denoising Step3: Training Step4: Checking out the results
<ASSISTANT_TASK:> Python Code: import torch import numpy as np from torchvision import datasets import torchvision.transforms as transforms # convert data to torch.FloatTensor transform = transforms.ToTensor() # load the training and test datasets train_data = datasets.MNIST(root='data', train=True, download=True, transform=transform) test_data = datasets.MNIST(root='data', train=False, download=True, transform=transform) # Create training and test dataloaders num_workers = 0 # how many samples per batch to load batch_size = 20 # prepare data loaders train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size, num_workers=num_workers) test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size, num_workers=num_workers) import matplotlib.pyplot as plt %matplotlib inline # obtain one batch of training images dataiter = iter(train_loader) images, labels = dataiter.next() images = images.numpy() # get one image from the batch img = np.squeeze(images[0]) fig = plt.figure(figsize = (5,5)) ax = fig.add_subplot(111) ax.imshow(img, cmap='gray') import torch.nn as nn import torch.nn.functional as F # define the NN architecture class ConvDenoiser(nn.Module): def __init__(self): super(ConvDenoiser, self).__init__() ## encoder layers ## ## decoder layers ## ## a kernel of 2 and a stride of 2 will increase the spatial dims by 2 def forward(self, x): ## encode ## ## decode ## return x # initialize the NN model = ConvDenoiser() print(model) # specify loss function criterion = nn.MSELoss() # specify loss function optimizer = torch.optim.Adam(model.parameters(), lr=0.001) # number of epochs to train the model n_epochs = 20 # for adding noise to images noise_factor=0.5 for epoch in range(1, n_epochs+1): # monitor training loss train_loss = 0.0 ################### # train the model # ################### for data in train_loader: # _ stands in for labels, here # no need to flatten images images, _ = data ## add random noise to the input images noisy_imgs 
= images + noise_factor * torch.randn(*images.shape) # Clip the images to be between 0 and 1 noisy_imgs = np.clip(noisy_imgs, 0., 1.) # clear the gradients of all optimized variables optimizer.zero_grad() ## forward pass: compute predicted outputs by passing *noisy* images to the model outputs = model(noisy_imgs) # calculate the loss # the "target" is still the original, not-noisy images loss = criterion(outputs, images) # backward pass: compute gradient of the loss with respect to model parameters loss.backward() # perform a single optimization step (parameter update) optimizer.step() # update running training loss train_loss += loss.item()*images.size(0) # print avg training statistics train_loss = train_loss/len(train_loader) print('Epoch: {} \tTraining Loss: {:.6f}'.format( epoch, train_loss )) # obtain one batch of test images dataiter = iter(test_loader) images, labels = dataiter.next() # add noise to the test images noisy_imgs = images + noise_factor * torch.randn(*images.shape) noisy_imgs = np.clip(noisy_imgs, 0., 1.) # get sample outputs output = model(noisy_imgs) # prep images for display noisy_imgs = noisy_imgs.numpy() # output is resized into a batch of iages output = output.view(batch_size, 1, 28, 28) # use detach when it's an output that requires_grad output = output.detach().numpy() # plot the first ten input images and then reconstructed images fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(25,4)) # input images on top row, reconstructions on bottom for noisy_imgs, row in zip([noisy_imgs, output], axes): for img, ax in zip(noisy_imgs, row): ax.imshow(np.squeeze(img), cmap='gray') ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) <END_TASK>
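The training loop above corrupts each batch with `images + noise_factor * torch.randn(*images.shape)` and then clips to [0, 1], while the loss still targets the clean images. Here is the corruption step alone, in dependency-free Python (a sketch with stdlib `random.gauss` standing in for `torch.randn`):

```python
import random

# Add Gaussian noise to a flat list of pixel intensities and clip the
# result back into the valid [0, 1] range, as the training loop does.
def add_noise(pixels, noise_factor=0.5, seed=0):
    rng = random.Random(seed)  # seeded for reproducibility of the sketch
    noisy = []
    for p in pixels:
        v = p + noise_factor * rng.gauss(0.0, 1.0)
        noisy.append(min(max(v, 0.0), 1.0))  # clip, like np.clip(..., 0., 1.)
    return noisy

noisy = add_noise([0.0, 0.5, 1.0])
```

The clip matters: without it, noisy pixels can leave the range the network's sigmoid-style output can reproduce.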
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Now let's try using a tree. I'll use a $k$-d tree to store SWIRE indices. Step2: Note that this measurement actually depends a lot on leafsize. Smaller values seem to be slower to some extent. I think this is because I am returning many points.
<ASSISTANT_TASK:> Python Code: import csv import time import h5py import numpy CROWDASTRO_H5_PATH = '../crowdastro.h5' CROWDASTRO_CSV_PATH = '../crowdastro.csv' ARCMIN = 0.0166667 with h5py.File(CROWDASTRO_H5_PATH) as f_h5: positions = f_h5['/swire/cdfs/catalogue'][:, :2] times = [] for i in range(1000): now = time.time() sx, sy = f_h5['/atlas/cdfs/positions'][i] lt_x = positions[:, 0] <= sx + ARCMIN gt_x = positions[:, 0] >= sx - ARCMIN lt_y = positions[:, 1] <= sy + ARCMIN gt_y = positions[:, 1] >= sy - ARCMIN enclosed = numpy.all([lt_x, gt_x, lt_y, gt_y], axis=0) potential_hosts = positions[enclosed] total = time.time() - now times.append(total) print('{:.02} +- {:1.1} s'.format(numpy.mean(times), numpy.std(times))) import scipy.spatial with h5py.File(CROWDASTRO_H5_PATH) as f_h5: positions = f_h5['/swire/cdfs/catalogue'][:, :2] tree = scipy.spatial.KDTree(positions, leafsize=100) radius = numpy.sqrt(2) * ARCMIN times = [] for i in range(1000): now = time.time() point = f_h5['/atlas/cdfs/positions'][i] sx, sy = point enclosed = tree.query_ball_point(point, radius) potential_hosts = positions[enclosed] lt_x = potential_hosts[:, 0] <= sx + ARCMIN gt_x = potential_hosts[:, 0] >= sx - ARCMIN lt_y = potential_hosts[:, 1] <= sy + ARCMIN gt_y = potential_hosts[:, 1] >= sy - ARCMIN enclosed = numpy.all([lt_x, gt_x, lt_y, gt_y], axis=0) potential_hosts = potential_hosts[enclosed] total = time.time() - now times.append(total) print('{:.02} +- {:1.1} s'.format(numpy.mean(times), numpy.std(times))) import sklearn.neighbors with h5py.File(CROWDASTRO_H5_PATH) as f_h5: positions = f_h5['/swire/cdfs/catalogue'][:, :2] tree = sklearn.neighbors.KDTree(positions, leaf_size=100) radius = numpy.sqrt(2) * ARCMIN times = [] for i in range(1000): now = time.time() point = f_h5['/atlas/cdfs/positions'][i] sx, sy = point (enclosed,) = tree.query_radius(point.reshape((1, -1)), r=radius) potential_hosts = positions[enclosed] lt_x = potential_hosts[:, 0] <= sx + ARCMIN gt_x = potential_hosts[:, 0] >= sx - ARCMIN lt_y =
potential_hosts[:, 1] <= sy + ARCMIN gt_y = potential_hosts[:, 1] >= sy - ARCMIN enclosed = numpy.all([lt_x, gt_x, lt_y, gt_y], axis=0) potential_hosts = potential_hosts[enclosed] total = time.time() - now times.append(total) print('{:.02} +- {:1.1} s'.format(numpy.mean(times), numpy.std(times))) with h5py.File(CROWDASTRO_H5_PATH) as f_h5: positions = f_h5['/swire/cdfs/catalogue'][:, :2] tree = sklearn.neighbors.KDTree(positions, leaf_size=20) radius = numpy.sqrt(2) * ARCMIN now = time.time() points = f_h5['/atlas/cdfs/positions'][:1000] all_enclosed = tree.query_radius(points, r=radius) # potential_hosts = positions[enclosed] # lt_x = potential_hosts[:, 0] <= sx + ARCMIN # gt_x = potential_hosts[:, 0] >= sx - ARCMIN # lt_y = potential_hosts[:, 1] <= sy + ARCMIN # gt_y = potential_hosts[:, 1] >= sy - ARCMIN # enclosed = numpy.all([lt_x, gt_x, lt_y, gt_y], axis=0) # potential_hosts = potential_hosts[enclosed] total = time.time() - now print('{:.06f} s'.format(total / 1000)) <END_TASK>
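The choice `radius = numpy.sqrt(2) * ARCMIN` in the tree queries above is the circumscribing radius of the 2×ARCMIN search square: the circular radius query must reach the square's corners so it returns a superset of the box, which the component-wise comparisons then trim back down. The geometry checks out numerically:

```python
import math

ARCMIN = 0.0166667
radius = math.sqrt(2) * ARCMIN        # circumscribing circle of the box
corner = math.hypot(ARCMIN, ARCMIN)   # centre-to-corner distance of the box
```

Because `radius` equals the corner distance, no point inside the square can be missed by the circle query.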
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Import data Step2: Cleaning Step3: Observations Step4: Prune outliers (similar approaches Step5: EDA Step6: Conclusions on cleaned data without normalization <a class="tocSkip">
<ASSISTANT_TASK:> Python Code: import numpy as np import pandas as pd import matplotlib.style as style style.use('ggplot') #print(style.available) import matplotlib.pyplot as plt %matplotlib inline import csv import datetime from IPython.core.display import display, HTML data = pd.read_csv("train.csv", header = 0) data.head() print("Dataset length: %i" % len(data)) # --- print("vendor_id: [%s, %s]" % (min(data["vendor_id"]), max(data["vendor_id"]))) print(" ") print("pickup_datetime: \t[%s, %s]" % (min(data["pickup_datetime"]), max(data["pickup_datetime"]))) print("dropoff_datetime: \t[%s, %s]" % (min(data["dropoff_datetime"]), max(data["dropoff_datetime"]))) print(" ") print("pickup_longitude: \t[%s, %s]" % (min(data["pickup_longitude"]), max(data["pickup_longitude"]))) print("pickup_latitude: \t[%s, %s]" % (min(data["pickup_latitude"]), max(data["pickup_latitude"]))) print(" ") print("dropoff_longitude: \t[%s, %s]" % (min(data["dropoff_longitude"]), max(data["dropoff_longitude"]))) print("dropoff_latitude: \t[%s, %s]" % (min(data["dropoff_latitude"]), max(data["dropoff_latitude"]))) print(" ") print("store_and_fwd_flag: [%s, %s]" % (min(data["store_and_fwd_flag"]), max(data["store_and_fwd_flag"]))) print("passenger_count: [%s, %s]" % (min(data["passenger_count"]), max(data["passenger_count"]))) max_trip_d = max(data["trip_duration"]) m, s = divmod(max_trip_d, 60) h, m = divmod(m, 60) max_trip_d_f = "%d:%02d:%02d" % (h, m, s) print("trip_duration: [%s, %s]" % (min(data["trip_duration"]), max_trip_d_f)) print("---max (seconds): ", max_trip_d) #print(data.columns.values) print("vendor_id: %s" % data["vendor_id"].dtypes) print("pickup_datetime: %s" % data["pickup_datetime"].dtypes) print("dropoff_datetime: %s" % data["dropoff_datetime"].dtypes) print() print("pickup_longitude: %s" % data["pickup_longitude"].dtypes) print("pickup_latitude: %s" % data["pickup_latitude"].dtypes) print("dropoff_longitude: %s" % data["dropoff_longitude"].dtypes) print("dropoff_latitude: 
%s" % data["dropoff_latitude"].dtypes) print() print("store_and_fwd_flag: %s" % data["store_and_fwd_flag"].dtypes) print("passenger_count: %s" % data["passenger_count"].dtypes) print("trip_duration: %s" % data["trip_duration"].dtypes) # resolution data_clean = data data_clean["pickup_datetime"] = data_clean["pickup_datetime"].astype("datetime64") data_clean["dropoff_datetime"] = data_clean["dropoff_datetime"].astype("datetime64") # chosen as ~91 min (by analyzing the distribution below beforehand) data_clean = data_clean[data.trip_duration < 5500] display(HTML('<center><h1>Analyzing distribution for the series</h1></center>')) data["pickup_datetime"].groupby(data["pickup_datetime"].dt.month).count().plot(kind="bar", rot=0) data["dropoff_datetime"].groupby(data["dropoff_datetime"].dt.month).count().plot(kind="bar") # initial # xlim = [-80, -70] # ylim = [40, 42] xlim = [-74.2, -73.7] ylim = [40.55, 40.95] data_normalized = data_clean data_normalized = data_normalized[(data_normalized.dropoff_latitude < ylim[1]) & (data_normalized.pickup_latitude < ylim[1])] data_normalized = data_normalized[(data_normalized.dropoff_latitude > ylim[0]) & (data_normalized.pickup_latitude > ylim[0])] data_normalized = data_normalized[(data_normalized.dropoff_longitude < xlim[1]) & (data_normalized.pickup_longitude < xlim[1])] data_normalized = data_normalized[(data_normalized.dropoff_longitude > xlim[0]) & (data_normalized.pickup_longitude > xlim[0])] data_normalized.hist( column=["pickup_longitude", "pickup_latitude", "dropoff_longitude", "dropoff_latitude"], figsize=(10, 10), weights = np.ones_like(data_normalized.index) / len(data_normalized.index), bins = 30 #,sharey=True, sharex=True ) display(HTML('<center><h1>Analyzing distribution for location (normalized)</h1></center>')) data_clean.hist( column=["passenger_count", "trip_duration"], #"store_and_fwd_flag", figsize=(10, 3) ) plt.twiny() display(HTML('<center><h1>Analyzing distribution for the other parameters</h1></center>')) # 
some cleaning needed # some big outliers are clogging the view ## values determined empirically with 0.05 marker plot below xlim = [-74.2, -73.7] # -74.2, -73.85 ylim = [40.55, 40.95] data_viz = data_clean data_viz = data_viz[(data_viz.pickup_longitude > xlim[0]) & (data_viz.pickup_longitude < xlim[1])] data_viz = data_viz[(data_viz.dropoff_longitude> xlim[0]) & (data_viz.dropoff_longitude< xlim[1])] data_viz = data_viz[(data_viz.pickup_latitude > ylim[0]) & (data_viz.pickup_latitude < ylim[1])] data_viz = data_viz[(data_viz.dropoff_latitude > ylim[0]) & (data_viz.dropoff_latitude < ylim[1])] longitude = list(data_viz.pickup_longitude) + list(data_viz.dropoff_longitude) latitude = list(data_viz.pickup_latitude) + list(data_viz.dropoff_latitude) print("longitude: \t[%s, %s]" % (min(longitude), max(longitude))) print("latitude: \t[%s, %s]" % (min(latitude), max(latitude))) display(HTML('<center><h1>Scatter plot for points (estimating map)</h1></center>')) plt.figure(figsize = (10,10)) plt.plot(longitude,latitude,'.', alpha = 0.4, markersize = 0.05) # 0.05 (less time, best for visualization) #10 (more time, best for outliers) plt.show() plt.figure(figsize = (10,5)) plt.subplots_adjust(wspace = 0.6) # wspace=None # wspace = 0.2 # the amount of width reserved for blank space between subplots display(HTML('<center><h1>Examining clusterization density (hex binning)</h1></center>')) plt.subplot(121),\ plt.hexbin(longitude,latitude, gridsize=50, cmap='inferno'),\ plt.title('no log') plt.colorbar().set_label('counts') plt.subplot(122),\ plt.hexbin(longitude,latitude, gridsize=50, bins='log', cmap='inferno'),\ plt.title('Log'),\ #plt.plot(longitude,latitude,'.', alpha = 0.1, c='c', markersize = 0.05) plt.colorbar().set_label('log10(N)') plt.show() <END_TASK>
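The maximum trip duration above is converted to h:mm:ss with two nested `divmod` calls; wrapped as a helper, the same conversion also shows what the 5500-second outlier cutoff used in the cleaning step corresponds to (about an hour and a half):

```python
# Format a duration in seconds as h:mm:ss, exactly as the nested
# divmod calls in the exploration cell do.
def format_seconds(total_seconds):
    m, s = divmod(total_seconds, 60)
    h, m = divmod(m, 60)
    return "%d:%02d:%02d" % (h, m, s)

formatted = format_seconds(5500)  # the trip_duration cutoff applied above
```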
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Here you see that there is a blank figure for chart that you want to use. You can inspect the characteristic of the chart by using its option member. Step2: As soon as Chart objects are build, the chart option will be using this minimum Python dictionary. This template will translate to JSON object that represent ECharts option. Step3: There is no plot visualization since there is no data, but we can still customization like theme and title. This also benefit you for saving the figure later to be used as a basis for other plots.
<ASSISTANT_TASK:>
Python Code:
# Use this when you want to nbconvert the notebook (used by nbviewer)
from krisk import init_notebook; init_notebook()
from krisk import Chart

chart = Chart()
chart
chart.option
chart.set_title('This is a blank visualization', x_pos='center')
chart.set_theme('vintage')
chart.option['series'] = [{'data': [10, 3, 7, 4, 5],
                           'name': 'continent',
                           'type': 'bar'}]
chart.option['xAxis'] = {'data': ['Americas', 'Asia', 'Africa', 'Oceania', 'Europe']}
chart
<END_TASK>
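The `chart.option` dictionary above is what krisk ultimately serializes for ECharts. A stdlib-only sketch of that round trip, using plain `json` so no krisk install is required:

```python
import json

# Minimal stand-in for chart.option after the assignments above.
option = {
    'series': [{'data': [10, 3, 7, 4, 5], 'name': 'continent', 'type': 'bar'}],
    'xAxis': {'data': ['Americas', 'Asia', 'Africa', 'Oceania', 'Europe']},
}

# The Python dict translates directly to the ECharts JSON option and back.
payload = json.dumps(option, sort_keys=True)
roundtrip = json.loads(payload)
print(roundtrip['series'][0]['type'])  # bar
```

Because the option is just a nested dict, any key (theme, title, series) can be edited before serialization, which is all `set_title` / `set_theme` do under the hood.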
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Get acq stats data and clean Step2: Get ASVT data and make it look more like acq stats data Step3: Combine flight acqs and ASVT data Step4: Compute box probit delta term based on box size Step7: Model definition Step9: Model fitting functions Step10: Plotting and validation Step11: Define magnitude bins for fitting and show data Step12: Color != 1.5 fit (this is MOST acq stars) Step13: Define model for color=1.5 stars Step14: Define an analytical approximation for distribution with ad-hoc positive tail Step16: Global model for arbitrary mag, t_ccd, color, and halfwidth Step18: Compare to flight data for halfwidth=120 Step19: Compare model to flight for color != 1.5 stars Step20: Compare model to flight for color = 1.5 stars Step21: Write model as a 3-d grid to a gzipped FITS file Step22: Generate regression data for chandra_aca
<ASSISTANT_TASK:>
Python Code:
import sys
import os
from itertools import count
from pathlib import Path

# Include utils.py for asvt_utils
sys.path.insert(0, str(Path(os.environ['HOME'], 'git', 'skanb', 'pea-test-set')))
import utils as asvt_utils

import numpy as np
import matplotlib.pyplot as plt
from astropy.table import Table, vstack
from astropy.time import Time
import tables
from scipy import stats
from scipy.interpolate import CubicSpline
from Chandra.Time import DateTime
from chandra_aca.star_probs import (get_box_delta, broadcast_arrays,
                                    acq_success_prob, grid_model_acq_prob)
from chandra_aca import star_probs

%matplotlib inline

MODEL_DATE = '2019-08'
MODEL_NAME = f'grid-floor-{MODEL_DATE}'
np.random.seed(0)
SKA = Path(os.environ['SKA'])

# Make a map of AGASC_ID to AGASC 1.7 MAG_ACA.  The acq_stats.h5 file has whatever MAG_ACA
# was in place at the time of planning the loads.
# Define new term `red_mag_err` which is used here in place of the
# traditional COLOR1 == 1.5 test.
with tables.open_file(str(SKA / 'data' / 'agasc' / 'miniagasc_1p7.h5'), 'r') as h5:
    agasc_mag_aca = h5.root.data.col('MAG_ACA')
    agasc_id = h5.root.data.col('AGASC_ID')
    has_color3 = h5.root.data.col('RSV3') != 0
    red_star = np.isclose(h5.root.data.col('COLOR1'), 1.5)
    mag_aca_err = h5.root.data.col('MAG_ACA_ERR') / 100
    red_mag_err = red_star & ~has_color3  # MAG_ACA, MAG_ACA_ERR is potentially inaccurate

agasc1p7_idx = {id: idx for id, idx in zip(agasc_id, count())}
agasc1p7 = Table([agasc_mag_aca, mag_aca_err, red_mag_err],
                 names=['mag_aca', 'mag_aca_err', 'red_mag_err'],
                 copy=False)

acq_file = str(SKA / 'data' / 'acq_stats' / 'acq_stats.h5')
with tables.open_file(str(acq_file), 'r') as h5:
    cols = h5.root.data.cols
    names = {'tstart': 'guide_tstart',
             'obsid': 'obsid',
             'obc_id': 'acqid',
             'halfwidth': 'halfw',
             'warm_pix': 'n100_warm_frac',
             'mag_aca': 'mag_aca',
             'mag_obs': 'mean_trak_mag',
             'known_bad': 'known_bad',
             'color': 'color1',
             'img_func': 'img_func',
             'ion_rad': 'ion_rad',
             'sat_pix': 'sat_pix',
             'agasc_id': 'agasc_id',
             't_ccd': 'ccd_temp',
             'slot': 'slot'}
    acqs = Table([getattr(cols, h5_name)[:] for h5_name in names.values()],
                 names=list(names.keys()))

year_q0 = 1999.0 + 31. / 365.25  # Jan 31 approximately
acqs['year'] = Time(acqs['tstart'], format='cxcsec').decimalyear.astype('f4')
acqs['quarter'] = (np.trunc((acqs['year'] - year_q0) * 4)).astype('f4')

# Create 'fail' column, rewriting history as if the OBC always
# ignored the MS flag in ID'ing acq stars.
#
# CHECK: is ion_rad being ignored on-board?
# Answer: Not as of 2019-09
obc_id = acqs['obc_id']
obc_id_no_ms = (acqs['img_func'] == 'star') & ~acqs['sat_pix'] & ~acqs['ion_rad']
acqs['fail'] = np.where(obc_id | obc_id_no_ms, 0.0, 1.0)

# Re-map acq_stats database magnitudes for AGASC 1.7
acqs['mag_aca'] = [agasc1p7['mag_aca'][agasc1p7_idx[agasc_id]]
                   for agasc_id in acqs['agasc_id']]
acqs['red_mag_err'] = [agasc1p7['red_mag_err'][agasc1p7_idx[agasc_id]]
                       for agasc_id in acqs['agasc_id']]
acqs['mag_aca_err'] = [agasc1p7['mag_aca_err'][agasc1p7_idx[agasc_id]]
                       for agasc_id in acqs['agasc_id']]

# Add a flag to distinguish flight from ASVT data
acqs['asvt'] = False

# Filter for year and mag
year_max = Time(f'{MODEL_DATE}-01').decimalyear
year_min = year_max - 4.5
acq_ok = ((acqs['year'] > year_min) & (acqs['year'] < year_max) &
          (acqs['mag_aca'] > 7.0) & (acqs['mag_aca'] < 11) &
          (~np.isclose(acqs['color'], 0.7)))

# Filter known bad obsids.  NOTE: this is no longer doing anything, but
# consider updating the list of known bad obsids or obtaining programmatically?
print('Filtering known bad obsids, start len = {}'.format(np.count_nonzero(acq_ok)))
bad_obsids = [
    # Venus
    2411, 2414, 6395, 7306, 7307, 7308, 7309, 7311, 7312, 7313, 7314, 7315, 7317, 7318, 7406, 583,
    7310, 9741, 9742, 9743, 9744, 9745, 9746, 9747, 9749, 9752, 9753, 9748, 7316, 15292, 16499,
    16500, 16501, 16503, 16504, 16505, 16506, 16502,
]
for badid in bad_obsids:
    acq_ok = acq_ok & (acqs['obsid'] != badid)
print('Filtering known bad obsids, end len = {}'.format(np.count_nonzero(acq_ok)))

peas = Table.read('pea_analysis_results_2018_299_CCD_temp_performance.csv', format='ascii.csv')
peas = asvt_utils.flatten_pea_test_data(peas)
peas = peas[peas['ccd_temp'] > -10.5]

# Version of ASVT PEA data that is more flight-like
fpeas = Table([peas['star_mag'], peas['ccd_temp'], peas['search_box_hw']],
              names=['mag_aca', 't_ccd', 'halfwidth'])
fpeas['year'] = np.random.uniform(2019.0, 2019.5, size=len(peas))
fpeas['color'] = 1.0
fpeas['quarter'] = (np.trunc((fpeas['year'] - year_q0) * 4)).astype('f4')
fpeas['fail'] = 1.0 - peas['search_success']
fpeas['asvt'] = True
fpeas['red_mag_err'] = False
fpeas['mag_obs'] = 0.0

data_all = vstack([acqs[acq_ok]['year', 'fail', 'mag_aca', 't_ccd', 'halfwidth',
                                'quarter', 'color', 'asvt', 'red_mag_err', 'mag_obs'],
                   fpeas])
data_all.sort('year')

# Adjust probability (in probit space) for box size.
data_all['box_delta'] = get_box_delta(data_all['halfwidth'])

# Put in an ad-hoc penalty on ASVT data that introduces up to a -0.3 shift
# on probit probability.  It goes from 0.0 for mag < 10.1 up to 0.3 at mag=10.4.
ok = data_all['asvt']
box_delta_tweak = (data_all['mag_aca'][ok] - 10.1).clip(0, 0.3)
data_all['box_delta'][ok] -= box_delta_tweak

# Another ad-hoc tweak: the mag=8.0 data show more failures at smaller
# box sizes.  This confounds the fitting.  For this case only just
# set the box deltas to zero and this makes the fit work.
ok = data_all['asvt'] & (data_all['mag_aca'] == 8)
data_all['box_delta'][ok] = 0.0

data_all = data_all.group_by('quarter')
data_all0 = data_all.copy()  # For later augmentation with simulated red_mag_err stars
data_mean = data_all.groups.aggregate(np.mean)

def t_ccd_normed(t_ccd):
    return (t_ccd + 8.0) / 8.0

def p_fail(pars, t_ccd, tc2=None, box_delta=0, rescale=True, probit=False):
    """
    Acquisition probability model

    :param pars: p0, p1, p2 (quadratic in t_ccd) and floor (min p_fail)
    :param t_ccd: t_ccd (degC) or scaled t_ccd if rescale is False.
    :param tc2: (scaled t_ccd) ** 2, this is just for faster fitting
    :param box_delta: delta p_fail for search box size
    :param rescale: rescale t_ccd to about -1 to 1 (makes P0, P1, P2 better-behaved)
    :param probit: return probability as probit instead of 0 to 1.
    """
    p0, p1, p2, floor = pars
    tc = t_ccd_normed(t_ccd) if rescale else t_ccd
    if tc2 is None:
        tc2 = tc ** 2

    # Make sure box_delta has right dimensions
    tc, box_delta = np.broadcast_arrays(tc, box_delta)

    # Compute the model.  Also clip at +10 to avoid values that are
    # exactly 1.0 at 64-bit precision.
    probit_p_fail = (p0 + p1 * tc + p2 * tc2 + box_delta).clip(floor, 10)

    # Possibly transform from probit to linear probability
    out = probit_p_fail if probit else stats.norm.cdf(probit_p_fail)
    return out

def p_acq_fail(data=None):
    """
    Sherpa fit function wrapper to ensure proper use of data in fitting.
    """
    if data is None:
        data = data_all
    tc = t_ccd_normed(data['t_ccd'])
    tc2 = tc ** 2
    box_delta = data['box_delta']

    def sherpa_func(pars, x=None):
        return p_fail(pars, tc, tc2, box_delta, rescale=False)

    return sherpa_func

def calc_binom_stat(data, model, staterror=None, syserror=None, weight=None, bkg=None):
    """
    Calculate log-likelihood for a binomial probability distribution for a
    single trial at each point.  Defining p = model, then probability of
    seeing data == 1 is p and probability of seeing data == 0 is (1 - p).

    Note here that ``data`` is strictly either 0.0 or 1.0, and np.where
    interprets those float values as False or True respectively.
    """
    fit_stat = -np.sum(np.log(np.where(data, model, 1.0 - model)))
    return fit_stat, np.ones(1)

def fit_poly_model(data):
    from sherpa import ui

    comp_names = ['p0', 'p1', 'p2', 'floor']
    data_id = 1
    ui.set_method('simplex')

    # Set up the custom binomial statistics
    ones = np.ones(len(data))
    ui.load_user_stat('binom_stat', calc_binom_stat, lambda x: ones)
    ui.set_stat(binom_stat)

    # Define the user model
    ui.load_user_model(p_acq_fail(data), 'model')
    ui.add_user_pars('model', comp_names)
    ui.set_model(data_id, 'model')
    ui.load_arrays(data_id, np.array(data['year']), np.array(data['fail'], dtype=float))

    # Initial fit values from fit of all data
    fmod = ui.get_model_component('model')

    # Define initial values / min / max
    # This is the p_fail value at t_ccd = -8.0
    fmod.p0 = -2.605
    fmod.p0.min = -10
    fmod.p0.max = 10

    # Linear slope of p_fail
    fmod.p1 = 2.5
    fmod.p1.min = 0.0
    fmod.p1.max = 10

    # Quadratic term.  Only allow negative curvature, and not too much at that.
    fmod.p2 = 0.0
    fmod.p2.min = -1
    fmod.p2.max = 0

    # Floor to p_fail.
    fmod.floor = -2.6
    fmod.floor.min = -2.6
    fmod.floor.max = -0.5

    ui.fit(data_id)
    return ui.get_fit_results()

def plot_fails_mag_aca_vs_t_ccd(mag_bins, data_all, year0=2015.0):
    ok = (data_all['year'] > year0) & ~data_all['fail'].astype(bool)
    da = data_all[ok]
    fuzzx = np.random.uniform(-0.3, 0.3, len(da))
    fuzzy = np.random.uniform(-0.125, 0.125, len(da))
    plt.plot(da['t_ccd'] + fuzzx, da['mag_aca'] + fuzzy, '.C0', markersize=4)

    ok = (data_all['year'] > year0) & data_all['fail'].astype(bool)
    da = data_all[ok]
    fuzzx = np.random.uniform(-0.3, 0.3, len(da))
    fuzzy = np.random.uniform(-0.125, 0.125, len(da))
    plt.plot(da['t_ccd'] + fuzzx, da['mag_aca'] + fuzzy, '.C1', markersize=4, alpha=0.8)

    # plt.xlim(-18, -10)
    # plt.ylim(7.0, 11.1)
    x0, x1 = plt.xlim()
    for y in mag_bins:
        plt.plot([x0, x1], [y, y], '-', color='r', linewidth=2, alpha=0.8)
    plt.xlabel('T_ccd (C)')
    plt.ylabel('Mag_aca')
    plt.title(f'Acq successes (blue) and failures (orange) since {year0}')
    plt.grid()

def plot_fit_grouped(data, group_col, group_bin, log=False, colors='br', label=None, probit=False):
    group = np.trunc(data[group_col] / group_bin)
    data = data.group_by(group)
    data_mean = data.groups.aggregate(np.mean)
    len_groups = np.diff(data.groups.indices)
    data_fail = data_mean['fail']
    model_fail = np.array(data_mean['model'])
    fail_sigmas = np.sqrt(data_fail * len_groups) / len_groups

    # Possibly plot the data and model probabilities in probit space
    if probit:
        dp = stats.norm.ppf(np.clip(data_fail + fail_sigmas, 1e-6, 1 - 1e-6))
        dm = stats.norm.ppf(np.clip(data_fail - fail_sigmas, 1e-6, 1 - 1e-6))
        data_fail = stats.norm.ppf(data_fail)
        model_fail = stats.norm.ppf(model_fail)
        fail_sigmas = np.vstack([data_fail - dm, dp - data_fail])

    plt.errorbar(data_mean[group_col], data_fail, yerr=fail_sigmas,
                 fmt='.' + colors[1], label=label, markersize=8)
    plt.plot(data_mean[group_col], model_fail, '-' + colors[0])

    if log:
        ax = plt.gca()
        ax.set_yscale('log')

def mag_filter(mag0, mag1):
    ok = (data_all['mag_aca'] > mag0) & (data_all['mag_aca'] < mag1)
    return ok

def t_ccd_filter(t_ccd0, t_ccd1):
    ok = (data_all['t_ccd'] > t_ccd0) & (data_all['t_ccd'] < t_ccd1)
    return ok

def wp_filter(wp0, wp1):
    ok = (data_all['warm_pix'] > wp0) & (data_all['warm_pix'] < wp1)
    return ok

mag_centers = np.array([6.3, 8.1, 9.1, 9.55, 9.75, 10.0, 10.25, 10.55, 10.75, 11.0])
mag_bins = (mag_centers[1:] + mag_centers[:-1]) / 2
mag_means = np.array([8.0, 9.0, 9.5, 9.75, 10.0, 10.25, 10.5, 10.75])
for m0, m1, mm in zip(mag_bins[:-1], mag_bins[1:], mag_means):
    ok = (data_all['asvt'] == False) & (data_all['mag_aca'] >= m0) & (data_all['mag_aca'] < m1)
    print(f"m0={m0:.2f} m1={m1:.2f} mean_mag={data_all['mag_aca'][ok].mean():.2f} vs. {mm}")

plt.figure(figsize=(10, 14))
for subplot, halfwidth in enumerate([60, 80, 100, 120, 140, 160, 180]):
    plt.subplot(4, 2, subplot + 1)
    ok = (data_all['halfwidth'] > halfwidth - 10) & (data_all['halfwidth'] <= halfwidth + 10)
    plot_fails_mag_aca_vs_t_ccd(mag_bins, data_all[ok])
    plt.title(f'Acq success (blue) fail (orange) box={halfwidth}')
plt.tight_layout()

# fit = fit_sota_model(data_all['color'] == 1.5, ms_disabled=True)
mask_no_1p5 = ((data_all['red_mag_err'] == False) &
               (data_all['t_ccd'] > -18) &
               (data_all['t_ccd'] < -0.5))

mag0s, mag1s = mag_bins[:-1], mag_bins[1:]
fits = {}
masks = []
for m0, m1 in zip(mag0s, mag1s):
    print(m0, m1)
    mask = mask_no_1p5 & mag_filter(m0, m1)  # & t_ccd_filter(-10.5, 0)
    print(np.count_nonzero(mask))
    masks.append(mask)
    fits[m0, m1] = fit_poly_model(data_all[mask])

colors = [f'kC{i}' for i in range(9)]
plt.figure(figsize=(13, 4))
for subplot in (1, 2):
    plt.subplot(1, 2, subplot)
    probit = (subplot == 2)
    for m0_m1, color, mask, mag_mean in zip(list(fits), colors, masks, mag_means):
        fit = fits[m0_m1]
        data = data_all[mask]
        data['model'] = p_acq_fail(data)(fit.parvals)
        plot_fit_grouped(data, 't_ccd', 2.0, probit=probit,
                         colors=[color, color], label=str(mag_mean))
    plt.grid()
    if probit:
        plt.ylim(-3.5, 2.5)
    plt.ylabel('Probit(p_fail)' if probit else 'p_fail')
    plt.xlabel('T_ccd')
plt.legend(fontsize='small')

# This computes probabilities for 120 arcsec boxes, corresponding to raw data
t_ccds = np.linspace(-16, -0, 20)
plt.figure(figsize=(13, 4))
for subplot in (1, 2):
    plt.subplot(1, 2, subplot)
    probit = (subplot == 2)
    for m0_m1, color, mag_mean in zip(list(fits), colors, mag_means):
        fit = fits[m0_m1]
        probs = p_fail(fit.parvals, t_ccds)
        if probit:
            probs = stats.norm.ppf(probs)
        plt.plot(t_ccds, probs, label=f'{mag_mean:.2f}')
    plt.legend()
    plt.xlabel('T_ccd')
    plt.ylabel('P_fail' if subplot == 1 else 'Probit(p_fail)')
    plt.title('P_fail for halfwidth=120')
    plt.grid()

mag_bin_centers = np.concatenate([[5.0], mag_means, [13.0]])
fit_parvals = []
for fit in fits.values():
    fit_parvals.extend(fit.parvals)
fit_parvals = np.array(fit_parvals).reshape(-1, 4)
parvals_mag12 = [[5, 0, 0, 0]]
parvals_mag5 = [[-5, 0, 0, -3]]
fit_parvals = np.concatenate([parvals_mag5, fit_parvals, parvals_mag12])
fit_parvals = fit_parvals.transpose()

for ps, parname in zip(fit_parvals, fit.parnames):
    plt.plot(mag_bin_centers, ps, '.-', label=parname)
plt.legend(fontsize='small')
plt.title('Model coefficients vs. mag')
plt.xlabel('Mag_aca')
plt.grid()

def plot_mag_errs(acqs, red_mag_err):
    ok = ((acqs['red_mag_err'] == red_mag_err) &
          (acqs['mag_obs'] > 0) &
          (acqs['img_func'] == 'star'))
    dok = acqs[ok]
    dmag = dok['mag_obs'] - dok['mag_aca']

    plt.figure(figsize=(14, 4.5))
    plt.subplot(1, 3, 1)
    plt.plot(dok['mag_aca'], dmag, '.')
    plt.plot(dok['mag_aca'], dmag, ',', alpha=0.3)
    plt.xlabel('mag_aca (catalog)')
    plt.ylabel('Mag err')
    plt.title('Mag err (observed - catalog) vs mag_aca')
    plt.xlim(5, 11.5)
    plt.ylim(-4, 2)
    plt.grid()

    plt.subplot(1, 3, 2)
    plt.hist(dmag, bins=np.arange(-3, 4, 0.2), log=True)
    plt.grid()
    plt.xlabel('Mag err')
    plt.title('Mag err (observed - catalog)')
    plt.xlim(-4, 2)

    plt.subplot(1, 3, 3)
    plt.hist(dmag, bins=100, cumulative=-1, normed=True)
    plt.xlim(-1, 1)
    plt.xlabel('Mag err')
    plt.title('Mag err (observed - catalog)')
    plt.grid()

plot_mag_errs(acqs, red_mag_err=True)
plt.subplot(1, 3, 2)
plt.plot([-2.8, 0], [1, 7000], 'r')
plt.plot([0, 4.0], [7000, 1], 'r')
plt.xlim(-4, 4)

# Define parameters / metadata for floor model
FLOOR = {'fit_parvals': fit_parvals,
         'mag_bin_centers': mag_bin_centers}

def calc_1p5_mag_err_weights():
    x = np.linspace(-2.8, 4, 18)
    ly = 3.8 * (1 - np.abs(x) / np.where(x > 0, 4.0, 2.8))
    y = 10 ** ly
    return x, y / y.sum()

FLOOR['mag_errs_1p5'], FLOOR['mag_err_weights_1p5'] = calc_1p5_mag_err_weights()
plt.semilogy(FLOOR['mag_errs_1p5'], FLOOR['mag_err_weights_1p5'])
plt.grid()

def floor_model_acq_prob(mag, t_ccd, color=0.6, halfwidth=120, probit=False):
    """
    Acquisition probability model

    :param mag: Star magnitude(s)
    :param t_ccd: CCD temperature(s)
    :param color: Star color (compared to 1.5 to decide which p_fail model to use)
    :param halfwidth: Search box size (arcsec)
    :param probit: Return probit of failure probability
    :returns: acquisition failure probability
    """
    parvals = FLOOR['fit_parvals']
    mag_bin_centers = FLOOR['mag_bin_centers']
    mag_errs_1p5 = FLOOR['mag_errs_1p5']
    mag_err_weights_1p5 = FLOOR['mag_err_weights_1p5']

    # Make sure inputs have right dimensions
    is_scalar, t_ccds, mags, halfwidths, colors = broadcast_arrays(t_ccd, mag, halfwidth, color)
    box_deltas = get_box_delta(halfwidths)

    p_fails = []
    for t_ccd, mag, box_delta, color in zip(t_ccds.flat, mags.flat,
                                            box_deltas.flat, colors.flat):
        if np.isclose(color, 1.5):
            pars_list = [[np.interp(mag + mag_err_1p5, mag_bin_centers, ps) for ps in parvals]
                         for mag_err_1p5 in mag_errs_1p5]
            weights = mag_err_weights_1p5
            if probit:
                raise ValueError('cannot use probit=True with color=1.5 stars')
        else:
            pars_list = [[np.interp(mag, mag_bin_centers, ps) for ps in parvals]]
            weights = [1]

        pf = sum(weight * p_fail(pars, t_ccd, box_delta=box_delta, probit=probit)
                 for pars, weight in zip(pars_list, weights))
        p_fails.append(pf)

    out = np.array(p_fails).reshape(t_ccds.shape)
    return out

mags, t_ccds = np.mgrid[8.75:10.75:30j, -16:-4:30j]
plt.figure(figsize=(13, 4))
for subplot, color in enumerate([1.0, 1.5]):
    plt.subplot(1, 2, subplot + 1)
    p_fails = floor_model_acq_prob(mags, t_ccds, probit=False, color=color)
    cs = plt.contour(t_ccds, mags, p_fails,
                     levels=[0.05, 0.1, 0.2, 0.5, 0.75, 0.9],
                     colors=['g', 'g', 'b', 'c', 'm', 'r'])
    plt.clabel(cs, inline=1, fontsize=10)
    plt.grid()
    plt.xlim(-17, -4)
    plt.ylim(8.5, 11.0)
    plt.xlabel('T_ccd (degC)')
    plt.ylabel('Mag_ACA')
    plt.title(f'Failure probability color={color}')

mags = np.linspace(8, 11, 301)
plt.figure()
for t_ccd in np.arange(-16, -0.9, 1):
    p_fails = floor_model_acq_prob(mags, t_ccd, probit=True)
    plt.plot(mags, p_fails)
plt.grid()
plt.xlim(8, 11)

# NOTE this is in chandra_aca.star_probs as of version 4.27
from scipy.stats import binom

def binom_ppf(k, n, conf, n_sample=1000):
    """
    Compute percent point function (inverse of CDF) for binomial, where
    the percentage is with respect to the "p" (binomial probability) parameter
    not the "k" parameter.  The following example returns the 1-sigma
    (0.17 - 0.84) confidence interval on the true binomial probability for
    an experiment with 4 successes in 5 trials.

    Example::

      >>> binom_ppf(4, 5, [0.17, 0.84])
      array([ 0.55463945,  0.87748177])

    :param k: int, number of successes (0 < k <= n)
    :param n: int, number of trials
    :param conf: float, array of floats, percent point values
    :param n_sample: number of PMF samples for interpolation
    :return: percent point function values corresponding to ``conf``
    """
    ps = np.linspace(0, 1, n_sample)
    vals = binom.pmf(k=k, n=n, p=ps)
    return np.interp(conf, xp=np.cumsum(vals) / np.sum(vals), fp=ps)

binom_ppf(4, 5, [0.17, 0.84])

n = 156
k = 127
binom_ppf(k, n, [0.17, 0.84])

def calc_binned_pfail(data_all, mag, dmag, t_ccd, dt, halfwidth=120):
    da = data_all[~data_all['asvt'] & (data_all['halfwidth'] == halfwidth)]
    fail = da['fail'].astype(bool)
    ok = (np.abs(da['mag_aca'] - mag) < dmag) & (np.abs(da['t_ccd'] - t_ccd) < dt)
    n_fail = np.count_nonzero(fail[ok])
    n_acq = np.count_nonzero(ok)
    p_fail = n_fail / n_acq
    p_fail_lower, p_fail_upper = binom_ppf(n_fail, n_acq, [0.17, 0.84])
    mean_t_ccd = np.mean(da['t_ccd'][ok])
    mean_mag = np.mean(da['mag_aca'][ok])
    return p_fail, p_fail_lower, p_fail_upper, mean_t_ccd, mean_mag, n_fail, n_acq

halfwidth = 120
pfs_list = []
for mag in (10.0, 10.3, 10.55):
    pfs = []
    for t_ccd in np.linspace(-15, -10, 6):
        pf = calc_binned_pfail(data_all, mag, 0.2, t_ccd, 0.5, halfwidth=halfwidth)
        pfs.append(pf)
        print(f'mag={mag} mean_mag_aca={pf[4]:.2f} t_ccd={pf[3]:.2f} '
              f'p_fail={pf[-2]}/{pf[-1]}={pf[0]:.2f}')
    pfs_list.append(pfs)

def plot_floor_and_flight(color, halfwidth=120):
    # This computes probabilities for 120 arcsec boxes, corresponding to raw data
    t_ccds = np.linspace(-16, -6, 20)
    mag_acas = np.array([9.5, 10.0, 10.25, 10.5, 10.75])
    for ii, mag_aca in enumerate(reversed(mag_acas)):
        flight_probs = 1 - acq_success_prob(date='2018-05-01T00:00:00', t_ccd=t_ccds,
                                            mag=mag_aca, color=color, halfwidth=halfwidth)
        new_probs = floor_model_acq_prob(mag_aca, t_ccds, color=color, halfwidth=halfwidth)
        plt.plot(t_ccds, flight_probs, '--', color=f'C{ii}')
        plt.plot(t_ccds, new_probs, '-', color=f'C{ii}', label=f'mag_aca={mag_aca}')

    if color != 1.5:
        # pf entries have p_fail, p_fail_lower, p_fail_upper, mean_t_ccd, mean_mag_aca, n_fail, n_acq
        for pfs, clr in zip(pfs_list, ('C3', 'C2', 'C1')):
            for pf in pfs:
                yerr = np.array([pf[0] - pf[1], pf[2] - pf[0]]).reshape(2, 1)
                plt.errorbar(pf[3], pf[0], xerr=0.5, yerr=yerr, color=clr)

    # plt.xlim(-16, None)
    plt.legend()
    plt.xlabel('T_ccd')
    plt.ylabel('P_fail')
    plt.title(f'P_fail (color={color}): new (solid) and flight (dashed)')
    plt.grid()

plot_floor_and_flight(color=1.0)

plt.figure(figsize=(13, 4))
plt.subplot(1, 2, 1)
for m0, m1, color in [(9, 9.5, 'C0'), (9.5, 10, 'C1'), (10, 10.3, 'C2'), (10.3, 10.7, 'C3')]:
    ok = data_all['red_mag_err'] & mag_filter(m0, m1) & t_ccd_filter(-16, -10)
    data = data_all[ok]
    data['model'] = floor_model_acq_prob(data['mag_aca'], data['t_ccd'],
                                         color=1.5, halfwidth=data['halfwidth'])
    plot_fit_grouped(data, 't_ccd', 2.0, probit=False,
                     colors=[color, color], label=f'{m0}-{m1}')
plt.ylim(0, 1.0)
plt.legend(fontsize='small')
plt.grid()
plt.xlabel('T_ccd')
plt.title('COLOR1=1.5 acquisition probabilities')
plt.subplot(1, 2, 2)
plot_floor_and_flight(color=1.5)

def write_model_as_fits(model_name, comment=None,
                        mag0=5, mag1=12, n_mag=141,  # 0.05 mag spacing
                        t_ccd0=-16, t_ccd1=-1, n_t_ccd=31,  # 0.5 degC spacing
                        halfw0=60, halfw1=180, n_halfw=7,  # 20 arcsec spacing
                        ):
    from astropy.io import fits

    mags = np.linspace(mag0, mag1, n_mag)
    t_ccds = np.linspace(t_ccd0, t_ccd1, n_t_ccd)
    halfws = np.linspace(halfw0, halfw1, n_halfw)
    mag, t_ccd, halfw = np.meshgrid(mags, t_ccds, halfws, indexing='ij')

    print('Computing probs, stand by...')
    # COLOR = 1.5 (stars with poor mag estimates)
    p_fails = floor_model_acq_prob(mag, t_ccd, halfwidth=halfw, probit=False, color=1.5)
    p_fails_probit_1p5 = stats.norm.ppf(p_fails)

    # COLOR not 1.5 (most stars)
    p_fails_probit = floor_model_acq_prob(mag, t_ccd, halfwidth=halfw, probit=True, color=1.0)

    hdu = fits.PrimaryHDU()
    if comment:
        hdu.header['comment'] = comment
    hdu.header['date'] = DateTime().fits
    hdu.header['mdl_name'] = model_name
    hdu.header['mag_lo'] = mags[0]
    hdu.header['mag_hi'] = mags[-1]
    hdu.header['mag_n'] = len(mags)
    hdu.header['t_ccd_lo'] = t_ccds[0]
    hdu.header['t_ccd_hi'] = t_ccds[-1]
    hdu.header['t_ccd_n'] = len(t_ccds)
    hdu.header['halfw_lo'] = halfws[0]
    hdu.header['halfw_hi'] = halfws[-1]
    hdu.header['halfw_n'] = len(halfws)

    hdu1 = fits.ImageHDU(p_fails_probit.astype(np.float32))
    hdu1.header['comment'] = 'COLOR1 != 1.5 (good mag estimates)'
    hdu2 = fits.ImageHDU(p_fails_probit_1p5.astype(np.float32))
    hdu2.header['comment'] = 'COLOR1 == 1.5 (poor mag estimates)'

    hdus = fits.HDUList([hdu, hdu1, hdu2])
    hdus.writeto(f'{model_name}.fits.gz', overwrite=True)

comment = f'Created with fit_acq_model-{MODEL_DATE}-binned-poly-binom-floor.ipynb in aca_stats repository'
write_model_as_fits(MODEL_NAME, comment=comment)

# Fudge the chandra_aca.star_probs global STAR_PROBS_DATA_DIR temporarily
# in order to load the dev model that was just created locally
_dir_orig = star_probs.STAR_PROBS_DATA_DIR
star_probs.STAR_PROBS_DATA_DIR = '.'
grid_model_acq_prob(model=MODEL_NAME)
star_probs.STAR_PROBS_DATA_DIR = _dir_orig

# Remake standard plot comparing grouped data to model, but now use
# chandra_aca.star_probs grid_model_acq_prob function with the newly
# generated 3-d FITS model that we just loaded.
colors = [f'kC{i}' for i in range(9)]
plt.figure(figsize=(13, 4))
for subplot in (1, 2):
    plt.subplot(1, 2, subplot)
    probit = (subplot == 2)
    for m0_m1, color, mask, mag_mean in zip(list(fits), colors, masks, mag_means):
        fit = fits[m0_m1]
        data = data_all[mask]
        data['model'] = 1 - grid_model_acq_prob(data['mag_aca'], data['t_ccd'],
                                                halfwidth=data['halfwidth'], model=MODEL_NAME)
        plot_fit_grouped(data, 't_ccd', 2.0, probit=probit,
                         colors=[color, color], label=str(mag_mean))
    plt.grid()
    if probit:
        plt.ylim(-3.5, 2.5)
    plt.ylabel('Probit(p_fail)' if probit else 'p_fail')
    plt.xlabel('T_ccd')
plt.legend(fontsize='small')

# Check chandra_aca implementation vs. native model from this notebook
mags = np.linspace(5, 12, 40)
t_ccds = np.linspace(-16, -1, 40)
halfws = np.linspace(60, 180, 7)
mag, t_ccd, halfw = np.meshgrid(mags, t_ccds, halfws, indexing='ij')

# First color != 1.5
# Notebook
nb_probs = floor_model_acq_prob(mag, t_ccd, halfwidth=halfw, probit=True, color=1.0)
# Chandra_aca.  Note that grid_model returns p_success, so need to negate it.
ca_probs = -grid_model_acq_prob(mag, t_ccd, halfwidth=halfw, probit=True, color=1.0,
                                model=MODEL_NAME)
assert nb_probs.shape == ca_probs.shape
print('Max difference is {:.3f}'.format(np.max(np.abs(nb_probs - ca_probs))))
assert np.allclose(nb_probs, ca_probs, rtol=0, atol=0.1)

d_probs = (nb_probs - ca_probs)[:, :, 3]
plt.imshow(d_probs, origin='lower', extent=[-16, -1, 5, 12], aspect='auto', cmap='jet')
plt.colorbar()
plt.title('Delta between probit p_fail: analytical vs. gridded')

mags = np.linspace(8, 11, 200)
plt.figure()
for ii, t_ccd in enumerate(np.arange(-16, -0.9, 2)):
    p_fails = floor_model_acq_prob(mags, t_ccd, probit=True)
    plt.plot(mags, p_fails, color=f'C{ii}')
    p_success = grid_model_acq_prob(mags, t_ccd, probit=True, model=MODEL_NAME)
    plt.plot(mags, -p_success, color=f'C{ii}')
plt.grid()
plt.xlim(8, 11)

mags = [9, 9.5, 10.5]
t_ccds = [-10, -5]
halfws = [60, 120, 160]
mag, t_ccd, halfw = np.meshgrid(mags, t_ccds, halfws, indexing='ij')
probs = floor_model_acq_prob(mag, t_ccd, halfwidth=halfw, probit=True, color=1.0)
print(repr(probs.round(3).flatten()))
probs = floor_model_acq_prob(mag, t_ccd, halfwidth=halfw, probit=False, color=1.5)
probs = stats.norm.ppf(probs)
print(repr(probs.round(3).flatten()))
<END_TASK>
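The failure model in the notebook above is, at its core, a quadratic in scaled CCD temperature evaluated in probit space, clipped at a floor, then pushed through the normal CDF. A stdlib-only sketch of that shape follows; the coefficient values here are placeholders in the same rough range as the fit's initial guesses, not the fitted parameters:

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF via erf (what scipy.stats.norm.cdf computes)."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def p_fail_sketch(t_ccd, p0=-2.6, p1=2.5, p2=0.0, floor=-2.6, box_delta=0.0):
    """Probit-space quadratic in scaled T_ccd, clipped below at `floor`.
    Coefficients are illustrative placeholders, not the notebook's fit."""
    tc = (t_ccd + 8.0) / 8.0                  # same scaling as t_ccd_normed
    probit = p0 + p1 * tc + p2 * tc * tc + box_delta
    probit = min(max(probit, floor), 10.0)    # clip(floor, 10)
    return norm_cdf(probit)

# Failure probability should rise monotonically with CCD temperature here,
# and the floor keeps it from ever reaching exactly zero.
cold, warm = p_fail_sketch(-16.0), p_fail_sketch(-2.0)
print(cold < warm)  # True
```

The floor is the distinguishing feature of this "grid-floor" model family: even at very cold temperatures the probit value is clipped at `floor`, so the predicted failure probability never collapses below `norm_cdf(floor)`.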
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: 1.2 Train Ridge Regression on training data Step2: 1.3 Train Ridge Regression with optimal $\alpha$ and evaluate model in test data Step3: Step4: 2.2 Classification with a linear SVM Step5: 2.3 Classification with SVM and RBF kernel
<ASSISTANT_TASK:> Python Code: %matplotlib inline import scipy as sp import matplotlib import pylab as pl matplotlib.rcParams.update({'font.size': 15}) from sklearn.linear_model import Ridge from sklearn.svm import SVC from sklearn.model_selection import KFold, StratifiedKFold, GridSearchCV,StratifiedShuffleSplit from sklearn.model_selection import cross_val_score, train_test_split from sklearn.metrics import accuracy_score, mean_squared_error, mean_absolute_error from sklearn.metrics import roc_curve, auc def visualized_variance_bias_tradeoff(hyperp, line_search, optimal_hyperp,classification=False): pl.figure(figsize=(18,7)) if classification: factor=1 else: factor=-1 pl.plot(hyperp,line_search.cv_results_['mean_train_score']*factor,label="Training Error",color="#e67e22") pl.fill_between(hyperp, line_search.cv_results_['mean_train_score']*factor-line_search.cv_results_['std_train_score'], line_search.cv_results_['mean_train_score']*factor+line_search.cv_results_['std_train_score'], alpha=0.3,color="#e67e22") pl.plot(hyperp,line_search.cv_results_['mean_test_score']*factor,label="Validation Error",color="#2980b9") pl.fill_between(hyperp, line_search.cv_results_['mean_test_score']*factor-line_search.cv_results_['std_test_score'], line_search.cv_results_['mean_test_score']*factor+line_search.cv_results_['std_test_score'], alpha=0.3,color="#2980b9") pl.xscale("log") if classification: pl.ylabel("Accuracy") else: pl.ylabel("Mean Squared Error") pl.xlabel("Hyperparameter") pl.legend(frameon=True) pl.grid(True) pl.axvline(x=optimal_hyperp,color='r',linestyle="--") pl.title("Training- vs. 
Validation-Error (Optimal Hyperparameter = %.1e)"%optimal_hyperp); random_state = 42 #Load Data data = sp.loadtxt("data/X.txt") binary_target = sp.loadtxt("data/y_binary.txt") continuous_target = sp.loadtxt("data/y.txt") #Summary of the Data print("Orginal Data") print("Number Patients:\t%d"%data.shape[0]) print("Number Features:\t%d"%data.shape[1]) print() #Split Data into Training and Testing data train_test_data = train_test_split(data, continuous_target, test_size=0.2, random_state=random_state) training_data = train_test_data[0] testing_data = train_test_data[1] training_target = train_test_data[2] testing_target = train_test_data[3] print("Training Data") print("Number Patients:\t%d"%training_data.shape[0]) print("Number Features:\t%d"%training_data.shape[1]) print() print("Testing Data") print("Number Patients:\t%d"%testing_data.shape[0]) print("Number Features:\t%d"%testing_data.shape[1]) #Initialize different alpha values for the Ridge Regression model alphas = sp.logspace(-2,8,11) param_grid = dict(alpha=alphas) #5-fold cross-validation (outer-loop) outer_cv = KFold(n_splits=5,shuffle=True,random_state=random_state) #Line-search to find the optimal alpha value (internal-loop) #Model performance is measured with the negative mean squared error line_search = GridSearchCV(Ridge(random_state=random_state,solver="cholesky"), param_grid=param_grid, scoring="neg_mean_squared_error", return_train_score=True) #Execute nested cross-validation and compute mean squared error score = cross_val_score(line_search,X=training_data,y=training_target,cv=outer_cv,scoring="neg_mean_squared_error") print("5-fold nested cross-validation") print("Mean-Squared-Error:\t\t%.2f (-+ %.2f)"%(score.mean()*(-1),score.std())) print() #Estimate optimal alpha on the full training data line_search.fit(training_data,training_target) optimal_alpha = line_search.best_params_['alpha'] #Visualize training and validation error for different alphas visualized_variance_bias_tradeoff(alphas, 
line_search, optimal_alpha) #Train Ridge Regression on the full training data with optimal alpha model = Ridge(alpha=optimal_alpha,solver="cholesky") model.fit(training_data,training_target) #Use trained model the predict new instances in test data predictions = model.predict(testing_data) print("Prediction results on test data") print("MSE (test data, alpha=optimal):\t%.2f "%(mean_squared_error(testing_target,predictions))) print("Optimal Alpha:\t\t\t%.2f"%optimal_alpha) print() #Split data into training and testing splits, stratified by class-ratios stratiefied_splitter = StratifiedShuffleSplit(n_splits=1,test_size=0.2,random_state=42) for train_index,test_index in stratiefied_splitter.split(data,binary_target): training_data = data[train_index,:] training_target = binary_target[train_index] testing_data = data[test_index,:] testing_target = binary_target[test_index] print("Training Data") print("Number Patients:\t\t%d"%training_data.shape[0]) print("Number Features:\t\t%d"%training_data.shape[1]) print("Number Patients Class 0:\t%d"%(training_target==0).sum()) print("Number Patients Class 1:\t%d"%(training_target==1).sum()) print() print("Testing Data") print("Number Patients:\t\t%d"%testing_data.shape[0]) print("Number Features:\t\t%d"%testing_data.shape[1]) print("Number Patients Class 0:\t%d"%(testing_target==0).sum()) print("Number Patients Class 1:\t%d"%(testing_target==1).sum()) Cs = sp.logspace(-7, 1, 9) param_grid = dict(C=Cs) grid = GridSearchCV(SVC(kernel="linear",random_state=random_state), param_grid=param_grid, scoring="accuracy", n_jobs=4, return_train_score=True) outer_cv = StratifiedKFold(n_splits=5,shuffle=True,random_state=random_state) #Perform 5 Fold cross-validation with internal line-search and report average Accuracy score = cross_val_score(grid,X=training_data,y=training_target,cv=outer_cv,scoring="accuracy") print("5-fold nested cross-validation on training data") print("Average(Accuracy):\t\t\t%.2f (-+ %.2f)"%(score.mean(),score.std())) 
print() grid.fit(training_data,training_target) optimal_C = grid.best_params_['C'] #Plot variance bias tradeoff visualized_variance_bias_tradeoff(Cs, grid, optimal_C,classification=True) #retrain model with optimal C and evaluate on test data model = SVC(C=optimal_C,random_state=random_state,kernel="linear") model.fit(training_data,training_target) predictions = model.predict(testing_data) print("Prediction with optimal C") print("Accuracy (Test data, C=Optimal):\t%.2f "%(accuracy_score(testing_target,predictions))) print("Optimal C:\t\t\t\t%.2e"%optimal_C) print() #Compute ROC FPR, TPR and AUC fpr, tpr, _ = roc_curve(testing_target, model.decision_function(testing_data)) roc_auc = auc(fpr, tpr) #Plot ROC Curve pl.figure(figsize=(8,8)) pl.plot(fpr, tpr, color='darkorange', lw=3, label='ROC curve (AUC = %0.2f)' % roc_auc) pl.plot([0, 1], [0, 1], color='navy', lw=3, linestyle='--') pl.xlim([-0.01, 1.0]) pl.ylim([0.0, 1.05]) pl.xlabel('False Positive Rate (1-Specificity)',fontsize=18) pl.ylabel('True Positive Rate (Sensitivity)',fontsize=18) pl.title('Receiver Operating Characteristic (ROC) Curve',fontsize=18) pl.legend(loc="lower right",fontsize=18) Cs = sp.logspace(-4, 4, 9) gammas = sp.logspace(-7, 1, 9) param_grid = dict(C=Cs,gamma=gammas) grid = GridSearchCV(SVC(kernel="rbf",random_state=42), param_grid=param_grid, scoring="accuracy", n_jobs=4, return_train_score=True) outer_cv = StratifiedKFold(n_splits=5,shuffle=True,random_state=random_state) #Perform 5 Fold cross-validation with internal line-search and report average Accuracy score = cross_val_score(grid,X=training_data,y=training_target,cv=outer_cv,scoring="accuracy") print("5-fold nested cross-validation on training data") print("Average(Accuracy):\t\t\t%.2f (-+ %.2f)"%(score.mean(),score.std())) print() grid.fit(training_data,training_target) optimal_C = grid.best_params_['C'] optimal_gamma = grid.best_params_['gamma'] #Retrain and test model = 
SVC(C=optimal_C,gamma=optimal_gamma,random_state=42,kernel="rbf") model.fit(training_data,training_target) predictions = model.predict(testing_data) print("Prediction with optimal C and Gamma") print("Accuracy (Test Data, C=Optimal):\t%.2f "%(accuracy_score(testing_target,predictions))) print("Optimal C:\t\t\t\t%.2e"%optimal_C) print("Optimal Gamma:\t\t\t\t%.2e"%optimal_gamma) print() #Compute ROC FPR, TPR and AUC fpr, tpr, _ = roc_curve(testing_target, model.decision_function(testing_data)) roc_auc = auc(fpr, tpr) #Plot ROC Curve pl.figure(figsize=(8,8)) pl.plot(fpr, tpr, color='darkorange', lw=3, label='ROC curve (AUC = %0.2f)' % roc_auc) pl.plot([0, 1], [0, 1], color='navy', lw=3, linestyle='--') pl.xlim([-0.01, 1.0]) pl.ylim([0.0, 1.05]) pl.xlabel('False Positive Rate (1-Specificity)',fontsize=18) pl.ylabel('True Positive Rate (Sensitivity)',fontsize=18) pl.title('Receiver Operating Characteristic (ROC) Curve',fontsize=18) pl.legend(loc="lower right",fontsize=18) <END_TASK>
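The nested cross-validation pattern used above (an inner grid search picks C, an outer loop of stratified folds estimates generalization) can be reproduced in isolation. This is a minimal sketch on synthetic data; the dataset, grid, and fold counts are illustrative assumptions, not the notebook's actual patient data.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.svm import SVC

# Illustrative synthetic data standing in for the real feature table
X, y = make_classification(n_samples=200, n_features=10, random_state=42)

# Inner loop: grid search over C on each outer fold's training split
inner_search = GridSearchCV(SVC(kernel="linear", random_state=42),
                            param_grid={"C": np.logspace(-3, 1, 5)},
                            scoring="accuracy")

# Outer loop: 5 stratified folds score the whole search-then-fit procedure
outer_cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
scores = cross_val_score(inner_search, X, y, cv=outer_cv, scoring="accuracy")
print("Average(Accuracy): %.2f (-+ %.2f)" % (scores.mean(), scores.std()))
```

Because the grid search runs inside each outer fold, the reported accuracy is not biased by the hyperparameter selection itself.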
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: And here's my code, using CDFs.
<ASSISTANT_TASK:> Python Code: import numpy as np from scipy.stats import beta import matplotlib.pyplot as plt def quadratic(x): return 4*x*(1-x) N = 100000 x = np.empty(N) # arbitary irrational starting point x[0] = 1/np.sqrt(3) for i in range(1, N): x[i] = quadratic( x[i-1] ) plt.plot(x[0:100]) plt.xlabel("iteration index") plt.show() t = np.linspace(0, 1, 100) plt.hist(x, bins=t, normed=True) plt.xlabel("bins") plt.ylabel("counts") plt.plot(t, beta(0.5,0.5).pdf(t), linewidth=3) plt.legend(["beta(1/2, 1/2)"]) plt.show() from thinkstats2 import Cdf import thinkplot plt.plot(t, beta(0.5,0.5).cdf(t), color='orange') thinkplot.Cdf(Cdf(x), color='blue', linewidth=1) plt.show() <END_TASK>
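As a numerical cross-check of the Beta(1/2, 1/2) claim above: that distribution's mean is 0.5, so the time average of a long orbit of the map x -> 4x(1-x) should land near 0.5. This is a sketch; the orbit length is an arbitrary choice.

```python
import numpy as np

def quadratic(x):
    return 4 * x * (1 - x)

n = 200_000
x = np.empty(n)
x[0] = 1 / np.sqrt(3)          # irrational seed, as in the notebook above
for i in range(1, n):
    x[i] = quadratic(x[i - 1])

# Beta(1/2, 1/2) has mean 1/2, so the orbit's sample mean should be close
print("sample mean:", x.mean())
```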
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: 2. Data Step2: Entering a DataFrame to display it will truncate it if it's too long. Step3: df_raw.tail() will show the last few rows of the DataFrame. By default it shows the Step4: RMSLE is used in the Kaggle competition. So by taking the log of all sale prices, we can just use RMSE later to calculate our loss. RMSLE Step5: 2.2.2 Initial Processing Step6: This dataset contains a mix of continuous and categorical variables. Most ML models (incl. RFs) require numbers -- so we need to convert all our cols to numbers. Step7: Inside fastai.structured is a function called add_datepart, which we'll use to fix this. Step8: Now the datatype for 'saledate' is numerical (int64). If we check the columns of the DataFrame we'll see the new ones added by add_datepart Step9: This isn't enough. One more bit of feature engineering is needed Step10: Now we can access categorical variables via .cat attributes just as we could with .dt for datetime Step11: 'High', 'Low', 'Medium' in UsageBand will be seen by the RF as cats 0, 1, 2. It'll form a split first on either 0 vs 1, 2, or 2 vs 0, 1. That translates to 'High' vs 'Low' & 'Medium' or 'Medium' vs 'High' & 'Low'. That's a bit odd, and though the DT can get to a correct split regardless, by using a sensible ordering we can ensure it gets there in fewer splits - thus improving our model. Step12: 2.2.3 Preprocessing Step13: First we'll save our dataframe to disk in feather format since we have a good starting point. Step14: Now we want to replace the string categories with their numeric codes, handle missing continuous values, and pull out the dependent variable (SalePrice) into a separate variable. The fastai.structured.proc_df is what we'll use to do this.
Step15: 'SalePrice' is now absent from the DataFrame's columns, and all columns with a non-zero value for null-fractions have corresponding '_na' columns. Step16: If we check the DataFrame, we see that everything is now a number Step17: Now we have something we can pass into a Random-Forest Regressor. Step18: NOTE Step19: 3. Random Forests Step20: There was some overfitting going on -- but this 0.252 loss gets into the top 25% of the Kaggle public leaderboard. Step21: Loss Train, | Loss Valid, | R2 Train, | R2 Valid Step22: fastai.structured.draw_tree lets us visualize Decision Trees. model.estimators_[0] returns the 1st estimator from an array. Step23: We have 20k samples at the start of this tree - because that's what we made the training set as when we split our data. Step24: 3.4 Bagging Step25: We'll grab predictions for each individual tree and look at one example. After you've built a RF, each tree is stored in the attribute Step26: Notice that most of the predictions were a bit off, but the mean of all of them is pretty good. 9.459 avg, 9.105 actual. Step27: Also note the plot's flattening out. (tested in original notebook and course nb) Step28: The OoB Score will usually slightly underestimate the generalizability of the model -- the more trees you have, the less the underestimation of the model - but it works well enough anyway. Step29: The basic idea is this Step30: Now with 10 estimators (default) we get an R2 of 0.858. Step31: Increasing to 40 estimators increases our R2 score from 0.858 to 0.877. Step32: Let's get a baseline for this full set to compare to. Step33: Each of the estimators will train all the way down until the leaf nodes have 1 sample in them. NOTE that our OoB score is better than our validation R2 score (.89278) because our validation set is not a random sample Step34: Values of 1, 3, 5, 10, 25 tend to work well for min_samples_leaf. Step35: Our model now has a validation R2 of 0.906.
Our RMSE of log(price) has dropped from 0.233 to 0.229 as well. How good is that? Well on the Kaggle public leaderboard a loss of 0.2289 puts us in the top 20 of the competition. That's with a "totally brainless random forest with some totally brainless minor hyperparameter tuning."
<ASSISTANT_TASK:> Python Code: %load_ext autoreload %autoreload 2 %matplotlib inline from fastai.imports import * from fastai.structured import * from pandas_summary import DataFrameSummary from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier from IPython.display import display from sklearn import metrics PATH = "../../data/competitions/bluebook-for-bulldozers/" !ls {PATH} df_raw = pd.read_csv(f'{PATH}Train.csv', low_memory=False, parse_dates=['saledate']) def display_all(df): with pd.option_context("display.max_rows", 1000): with pd.option_context("display.max_columns", 1000): display(df) # display_all(df_raw.tail().transpose()) # display_all(df_raw.describe(include='all').transpose()) # df_raw.head() df_raw.SalePrice = np.log(df_raw.SalePrice) model = RandomForestRegressor(n_jobs=-1) # n_jobs: number of cores to use. -1 ==> all model.fit(df_raw.drop('SalePrice', axis=1), df_raw.SalePrice) df_raw.saledate[:5] add_datepart(df_raw, 'saledate') df_raw.saleYear.head() df_raw.columns df_raw.head() train_cats(df_raw) df_raw.UsageBand.cat.categories df_raw.UsageBand.cat.set_categories(['High','Medium','Low'], ordered=True, inplace=True) print(df_raw.UsageBand[100:110]) print(df_raw.UsageBand.cat.codes[100:110]) display_all(df_raw.isnull().sum().sort_index()/len(df_raw)) os.makedirs(PATH + 'tmp', exist_ok=True) df_raw.to_feather(PATH + 'tmp/raw') df, y, nans = proc_df(df_raw, 'SalePrice') df.columns df.head() model = RandomForestRegressor(n_jobs=-1) model.fit(df, y) model.score(df, y) def split_vals(a, n): return a[:n].copy(), a[n:].copy() n_valid = 12000 # same as Kaggle's test set size # 12000 rows for val set n_trn = len(df) - n_valid # all else in trn set raw_train, raw_valid = split_vals(df_raw, n_trn) X_train, X_valid = split_vals(df, n_trn) y_train, y_valid = split_vals(y, n_trn) X_train.shape, y_train.shape, X_valid.shape def rmse(x,y): return math.sqrt(((x-y)**2).mean()) def print_score(m): res = [rmse(m.predict(X_train), y_train), 
rmse(m.predict(X_valid), y_valid), m.score(X_train, y_train), m.score(X_valid, y_valid)] if hasattr(m, 'oob_score_'): res.append(m.oob_score_) print(res) model = RandomForestRegressor(n_jobs=-1) %time model.fit(X_train, y_train) print_score(model) df_trn, y_trn, nans = proc_df(df_raw, 'SalePrice', subset=30000) X_train, _ = split_vals(df_trn, 20000) y_train, _ = split_vals(y_trn, 20000) model = RandomForestRegressor(n_jobs=-1) ## initialize Model %time model.fit(X_train, y_train) ## train Model print_score(model) ## run predictions - still using orig valset # A small Deterministic Decision Tree model = RandomForestRegressor(n_estimators=1, max_depth=3, bootstrap=False, n_jobs=-1) model.fit(X_train, y_train) print_score(model) # model.estimators_ draw_tree(model.estimators_[0], df_trn, precision=3) model = RandomForestRegressor(n_estimators=1, bootstrap=False, n_jobs=-1) model.fit(X_train, y_train) print_score(model) model = RandomForestRegressor(n_jobs=-1) # default is 10 estimators model.fit(X_train, y_train) print_score(model) preds = np.stack([t.predict(X_valid) for t in model.estimators_]) preds[:,0], np.mean(preds[:,0]), y_valid[0] # print first tree's predictions preds.shape # 12,000 predictions for each of 10 trees plt.plot([metrics.r2_score(y_valid, np.mean(preds[:i+1], axis=0)) for i in range(10)]); model = RandomForestRegressor(n_estimators=40, n_jobs=-1, oob_score=True) model.fit(X_train, y_train) print_score(model) df_trn, y_trn, nans = proc_df(df_raw, 'SalePrice') X_train, X_valid = split_vals(df_trn, n_trn) y_train, y_valid = split_vals(y_trn, n_trn) set_rf_samples(20000) model = RandomForestRegressor(n_jobs=-1, oob_score=True) %time model.fit(X_train, y_train) print_score(model) model = RandomForestRegressor(n_estimators=40, n_jobs=-1, oob_score=True) %time model.fit(X_train, y_train) print_score(model) reset_rf_samples() model = RandomForestRegressor(n_estimators=40, n_jobs=-1, oob_score=True) %time model.fit(X_train, y_train) print_score(model) 
model = RandomForestRegressor(n_estimators=40, min_samples_leaf=3, n_jobs=-1, oob_score=True) %time model.fit(X_train, y_train) print_score(model) model = RandomForestRegressor(n_estimators=40, min_samples_leaf=3, max_features=0.5, n_jobs=-1, oob_score=True) %time model.fit(X_train, y_train) print_score(model) df_raw.fiProductClassDesc.cat.codes df_raw.fiProductClassDesc.cat.categories <END_TASK>
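The bagging idea that RandomForestRegressor applies internally can be written out by hand: fit each tree on a bootstrap resample, then average the trees' predictions. Below is a sketch on synthetic data (the dataset and tree count are illustrative assumptions, not the bulldozers data).

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=500, n_features=5, noise=10.0, random_state=0)
rng = np.random.default_rng(0)

trees = []
for _ in range(40):
    idx = rng.integers(0, len(X), size=len(X))   # bootstrap: draw with replacement
    trees.append(DecisionTreeRegressor(random_state=0).fit(X[idx], y[idx]))

preds = np.stack([t.predict(X) for t in trees])  # shape: (n_trees, n_samples)
avg_pred = preds.mean(axis=0)                    # the ensemble's prediction
```

Each individual tree overfits its own bootstrap sample; averaging many of them cancels much of that variance, which is why the ensemble's predictions are better behaved than any single tree's.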
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Implementation Step2: Creation of the Library Step3: So far, so good. Now we'll see how easy it is to use this library from within Python thanks to ROOT. Step4: That's it! We can now start exploring the content of the library. If you are wondering what a return code equal to 0 means, ROOT is telling us that the loading of the library happened without problems! Step5: Notice how the "impedance mismatch" generated by the concept of templates is ironed out in this case. The template parameter is specified as a string in parentheses. Step6: Interactivity within C++
<ASSISTANT_TASK:> Python Code: %%python code=''' #include <iostream> #include <typeinfo> /// A trivial class class A { public: A(); ~A(); }; /// A trivial function int CountCharacters(const std::string s); /// A trivial template template<class T> class B { public: B() { std::cout << "The typeid name of the template argument is " << typeid(T).name() << std::endl; } }; ''' with open('myLibrary.h','w') as f_out: f_out.write(code) %%python code=''' #include "myLibrary.h" A::A() { std::cout << "This is the constructor of A" << std::endl; } A::~A() { std::cout << "This is the destructor of A" << std::endl; } int CountCharacters(const std::string s) { return s.size(); } ''' with open('myLibrary.cc','w') as f_out: f_out.write(code) %%bash g++ -o libmyLibrary.so -shared -fPIC myLibrary.cc %%bash ls *so import ROOT ROOT.gInterpreter.ProcessLine('#include "myLibrary.h"') ROOT.gSystem.Load("./libmyLibrary.so") a = ROOT.A() del a b_doublePtr = ROOT.B("double*")() ROOT.CountCharacters("This interactivity without bindings is really impressive.") %%cpp A a; <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Next we define the inverse problem parameters as a dictionary. Parameter's not specified will be filled with default values. Step2: Now we are ready to use different source localization methods. Step3: And visualize the results Step4: Alternatively we can use the optimization methods built on top of CasADi package. And initialize the problem. Step5: And use the optimization method as described in the thesis "Source localization for high-density microelectrode arrays" by Cem Uran.
<ASSISTANT_TASK:> Python Code: # Data path/filename t_ind = 38 data_path = '../data/' file_name = data_path + 'data_sim_low.hdf5' data_options = {'flag_cell': True, 'flag_electode': False} data = data_in(file_name, **data_options) localization_options = {'p_vres':20, 'p_jlen':0, 'p_erad': 5, 't_ind': 38, 'flag_depthweighted': False} loc = data_out(data, **localization_options) loc.cmp_sloreta() loc.xres = loc.res[:, t_ind] vis = visualize(data=data, loc=loc) vis.show_snapshot() optimization_options = {'p_vres':10, 'p_jlen':0, 'p_erad': 10, 'solver': p_solver, 'hessian': p_hessian, 'linsol': p_linsol, 'method': p_method, 't_ind': 35, 't_int': 1, 'sigma': float(p_sparse), 'flag_depthweighted': bool(int(p_norm)), 'flag_parallel': False, 'datafile_name': 'output_file', 'flag_lift_mask': False, 'flag_data_mask': True, 'flag_callback': True, 'flag_callback_plot': True, 'callback_steps': 40, 'p_dyn': float(p_dynamic) } opt = opt_out(data, **optimization_options) opt.set_optimization_variables_thesis() <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Joining two tables using the TF-IDF measure typically consists of six steps Step2: 2. Profiling the tables Step3: Based on the profile output, we find that the 'Title' attribute in both tables does Step4: 4. Applying overlap filter Step5: If you want to include pairs with missing values in the output, Step6: Now, when you apply the filter, pairs with missing values will also Step7: 6. Applying the TF-IDF matcher
<ASSISTANT_TASK:> Python Code: # Import libraries import py_stringsimjoin as ssj import py_stringmatching as sm import pandas as pd import os import sys print('python version: ' + sys.version) print('py_stringsimjoin version: ' + ssj.__version__) print('py_stringmatching version: ' + sm.__version__) print('pandas version: ' + pd.__version__) # construct the path of the tables to be loaded. Since we are loading a # dataset from the package, we need to access the data from the path # where the package is installed. If you need to load your own data, you can directly # provide your table path to the read_csv command. table_A_path = os.sep.join([ssj.get_install_path(), 'datasets', 'data', 'books_table_A.csv.gz']) table_B_path = os.sep.join([ssj.get_install_path(), 'datasets', 'data', 'books_table_B.csv.gz']) # Load csv files as dataframes. Since we are reading a compressed csv file, # we provide the compression argument. If you are reading an uncompressed # csv file, you should not specify the compression argument. A = pd.read_csv(table_A_path, compression='gzip') B = pd.read_csv(table_B_path, compression='gzip') print('Number of records in A: ' + str(len(A))) print('Number of records in B: ' + str(len(B))) A.head(1) B.head(1) # profile attributes in table A ssj.profile_table_for_join(A) # profile attributes in table B ssj.profile_table_for_join(B) # create whitespace tokenizer for tokenizing 'Title' attribute ws = sm.WhitespaceTokenizer() ws.tokenize('The Maze Runner Series Complete Collection') # create overlap filter with whitespace tokenizer and threshold of 1. 
of = ssj.OverlapFilter(ws, 1) # apply overlap filter to tables A and B to find tuple pairs # sharing at least 1 token in Title attribute C = of.filter_tables(A, B, 'ID', 'ID', 'Title', 'Title', n_jobs=-1) len(C) C.head(5) of = ssj.OverlapFilter(ws, 1, allow_missing=True) # create a list of tokens A_tokens = A['Title'].apply(ws.tokenize).tolist() B_tokens = B['Title'].apply(ws.tokenize).tolist() # merge both the lists of tokens to create the corpus corpus = A_tokens + B_tokens # create tf-idf object with the generated corpus tfidf = sm.TfIdf(corpus, dampen=True) # apply the matcher with a threshold of 0.5. This will find pairs from C # with TF-IDF score >= 0.5. Setting n_jobs=-1 exploits all CPU cores available. output_pairs = ssj.apply_matcher(C, 'l_ID', 'r_ID', A, B, 'ID', 'ID', 'Title', 'Title', ws, tfidf.get_sim_score, 0.5, l_out_attrs=['Title'], r_out_attrs=['Title'], n_jobs=-1) len(output_pairs) output_pairs.head() <END_TASK>
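For intuition about the scores the matcher produces, here is a from-scratch sketch of TF-IDF cosine similarity over tokenized titles. It is a simplified variant and is not guaranteed to match py_stringmatching's exact formula (which, for example, dampens term frequencies when dampen=True); the tiny corpus is an illustrative assumption.

```python
import math
from collections import Counter

def tfidf_sim(tokens_a, tokens_b, corpus):
    """Cosine similarity of raw-tf * idf vectors over a token corpus (sketch)."""
    n_docs = len(corpus)
    def idf(tok):
        df = sum(tok in doc for doc in corpus)
        return math.log(n_docs / df) if df else 0.0
    def vec(tokens):
        tf = Counter(tokens)
        return {t: tf[t] * idf(t) for t in tf}
    va, vb = vec(tokens_a), vec(tokens_b)
    dot = sum(w * vb.get(t, 0.0) for t, w in va.items())
    norm = (math.sqrt(sum(w * w for w in va.values()))
            * math.sqrt(sum(w * w for w in vb.values())))
    return dot / norm if norm else 0.0

corpus = [["maze", "runner"], ["maze", "cutter"], ["deep", "work"]]
score = tfidf_sim(["maze", "runner"], ["maze", "cutter"], corpus)
print(score)  # shared token "maze" gives a small positive similarity
```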
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Units Step2: From the representation above, we can already see that the units are in solar radii. We can access the units directly via get_default_unit. Step3: Calling get_value returns only the float of the value in these units. Step4: Alternatively, you can access an astropy quantity object that contains the value and unit by calling get_quantity. Step5: Both get_value and get_quantity also accept a unit argument which will return the value or quantity in the requested units (if able to convert). This unit argument takes either a unit object (we imported a forked version of astropy units from within PHOEBE) or a string representation that can be parsed. Step6: Similarly, when setting the value, you can provide either a Quantity object or a value and unit. These will still be stored within PHOEBE according to the default_unit of the Parameter object. Step7: If for some reason you want to change the default units, you can, but just be careful that this could cause some floating-point precision issues.
<ASSISTANT_TASK:> Python Code: #!pip install -I "phoebe>=2.4,<2.5" import phoebe from phoebe import u,c logger = phoebe.logger(clevel='WARNING') b = phoebe.default_binary() b.get_parameter(qualifier='sma', component='binary', context='component') b.get_parameter(qualifier='sma', component='binary', context='component').get_default_unit() b.get_parameter(qualifier='sma', component='binary', context='component').get_value() b.get_parameter(qualifier='sma', component='binary', context='component').get_quantity() b.get_parameter(qualifier='sma', component='binary', context='component').get_value(unit=u.km) b.get_parameter(qualifier='sma', component='binary', context='component').get_quantity(unit='km') b.get_parameter(qualifier='sma', component='binary', context='component').set_value(3800000*u.km) b.get_parameter(qualifier='sma', component='binary', context='component').get_quantity() b.get_parameter(qualifier='sma', component='binary', context='component').set_value(3900000, unit='km') b.get_parameter(qualifier='sma', component='binary', context='component').get_quantity() b.get_parameter(qualifier='sma', component='binary', context='component').set_default_unit('mm') b.get_parameter(qualifier='sma', component='binary', context='component').get_quantity() b.get_parameter(qualifier='sma', component='binary', context='component').get_quantity(unit='solRad') <END_TASK>
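To see the value/unit bookkeeping the Parameter machinery performs, here is a toy quantity class with a single hard-coded conversion. Real code should rely on the astropy units bundled with PHOEBE; the nominal solar radius below (695,700 km, the IAU 2015 nominal value) is an assumption of this sketch.

```python
KM_PER_SOLRAD = 695_700.0   # assumed nominal solar radius in km (IAU 2015)

class ToyQuantity:
    """A minimal value+unit pair supporting solRad <-> km conversion (sketch)."""
    _factors = {("solRad", "km"): KM_PER_SOLRAD,
                ("km", "solRad"): 1.0 / KM_PER_SOLRAD}

    def __init__(self, value, unit):
        self.value, self.unit = value, unit

    def to(self, unit):
        if unit == self.unit:
            return ToyQuantity(self.value, unit)
        return ToyQuantity(self.value * self._factors[(self.unit, unit)], unit)

sma = ToyQuantity(5.3, "solRad")
print(sma.to("km").value)   # roughly 3.69e6 km
```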
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Document Authors Step2: Document Contributors Step3: Document Publication Step4: Document Table of Contents Step5: 1.2. Model Name Step6: 1.3. Ice Albedo Step7: 1.4. Atmospheric Coupling Variables Step8: 1.5. Oceanic Coupling Variables Step9: 1.6. Prognostic Variables Step10: 2. Key Properties --> Software Properties Step11: 2.2. Code Version Step12: 2.3. Code Languages Step13: 3. Grid Step14: 3.2. Adaptive Grid Step15: 3.3. Base Resolution Step16: 3.4. Resolution Limit Step17: 3.5. Projection Step18: 4. Glaciers Step19: 4.2. Description Step20: 4.3. Dynamic Areal Extent Step21: 5. Ice Step22: 5.2. Grounding Line Method Step23: 5.3. Ice Sheet Step24: 5.4. Ice Shelf Step25: 6. Ice --> Mass Balance Step26: 7. Ice --> Mass Balance --> Basal Step27: 7.2. Ocean Step28: 8. Ice --> Mass Balance --> Frontal Step29: 8.2. Melting Step30: 9. Ice --> Dynamics Step31: 9.2. Approximation Step32: 9.3. Adaptive Timestep Step33: 9.4. Timestep
<ASSISTANT_TASK:> Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'bnu', 'sandbox-3', 'landice') # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.ice_albedo') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "prescribed" # "function of ice age" # "function of ice density" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "ice velocity" # "ice thickness" # "ice temperature" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.grid.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.grid.adaptive_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.grid.base_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.grid.resolution_limit') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.grid.projection') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.glaciers.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.glaciers.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.landice.ice.grounding_line_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "grounding line prescribed" # "flux prescribed (Schoof)" # "fixed grid size" # "moving grid" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.ice_sheet') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.ice_shelf') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.dynamics.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.landice.ice.dynamics.approximation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "SIA" # "SAA" # "full stokes" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.dynamics.timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Using different formulations of plotting positions Step2: Normal vs Weibull scales and Cunnane vs Weibull plotting positions Step3: Now let's create probability plots on both Weibull and normal probability scales. Step4: This demonstrates that the different formulations of the plotting positions vary most at the extreme values of the dataset. Step5: Summary Step6: Again, the different values of α and β don't significantly alter the shape of the probability plot between -- say -- the lower and upper quartiles.
<ASSISTANT_TASK:> Python Code: %matplotlib inline import warnings warnings.simplefilter('ignore') import numpy from matplotlib import pyplot from scipy import stats import seaborn clear_bkgd = {'axes.facecolor':'none', 'figure.facecolor':'none'} seaborn.set(style='ticks', context='talk', color_codes=True, rc=clear_bkgd) import probscale def format_axes(ax1, ax2): """Sets axes labels and grids""" for ax in (ax1, ax2): if ax is not None: ax.set_ylim(bottom=1, top=99) ax.set_xlabel('Values of Data') seaborn.despine(ax=ax) ax.yaxis.grid(True) ax1.legend(loc='upper left', numpoints=1, frameon=False) ax1.set_ylabel('Normal Probability Scale') if ax2 is not None: ax2.set_ylabel('Weibull Probability Scale') numpy.random.seed(0) # reproducible data = numpy.random.normal(loc=5, scale=1.25, size=37) # simple weibull distribution weibull = stats.weibull_min(2) w_opts = {'label': 'Weibull (α=0, β=0)', 'marker': 'o', 'markeredgecolor': 'b'} c_opts = {'label': 'Cunnane (α=0.4, β=0.4)', 'marker': 's', 'markeredgecolor': 'g'} common_opts = { 'markerfacecolor': 'none', 'markeredgewidth': 1.25, 'linestyle': 'none' } fig, (ax1, ax2) = pyplot.subplots(figsize=(10, 8), ncols=2, sharex=True, sharey=False) for dist, ax in zip([None, weibull], [ax1, ax2]): for opts, postype in zip([w_opts, c_opts,], ['weibull', 'cunnane']): probscale.probplot(data, ax=ax, dist=dist, probax='y', scatter_kws={**opts, **common_opts}, pp_kws={'postype': postype}) format_axes(ax1, ax2) fig.tight_layout() h_opts = {'label': 'Hazen (α=0.5, β=0.5)', 'marker': '^', 'markeredgecolor': 'r'} fig, (ax1, ax2) = pyplot.subplots(figsize=(10, 8), ncols=2, sharex=True, sharey=False) for dist, ax in zip([None, weibull], [ax1, ax2]): for opts, postype in zip([c_opts, h_opts,], ['cunnane', 'Hazen']): probscale.probplot(data, ax=ax, dist=dist, probax='y', scatter_kws={**opts, **common_opts}, pp_kws={'postype': postype}) format_axes(ax1, ax2) fig.tight_layout() fig, ax1 = pyplot.subplots(figsize=(6, 8)) for opts, postype in
zip([w_opts, c_opts, h_opts,], ['weibull', 'cunnane', 'hazen']): probscale.probplot(data, ax=ax1, dist=None, probax='y', scatter_kws={**opts, **common_opts}, pp_kws={'postype': postype}) format_axes(ax1, None) fig.tight_layout() # weibull plotting positions and sorted data w_probs, _ = probscale.plot_pos(data, postype='weibull') # normal plotting positions, returned "data" is identical to above c_probs, _ = probscale.plot_pos(data, postype='cunnane') # type 4 plot positions h_probs, _ = probscale.plot_pos(data, postype='hazen') # convert to percentages w_probs *= 100 c_probs *= 100 h_probs *= 100 print('Weibull: ', numpy.round(w_probs[:10], 2)) print('Cunnane: ', numpy.round(c_probs[:10], 2)) print('Hazen: ', numpy.round(h_probs[:10], 2)) <END_TASK>
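All three postype options above are instances of one standard rule, p_i = (i - alpha) / (n + 1 - alpha - beta) for i = 1..n; this sketch computes the three variants directly so their values at the extremes can be compared by hand.

```python
import numpy as np

def plot_pos(n, alpha, beta):
    """General plotting-position formula: (i - alpha) / (n + 1 - alpha - beta)."""
    i = np.arange(1, n + 1)
    return (i - alpha) / (n + 1 - alpha - beta)

n = 37                              # same sample size as the notebook's data
weibull = plot_pos(n, 0.0, 0.0)     # reduces to i / (n + 1)
cunnane = plot_pos(n, 0.4, 0.4)
hazen   = plot_pos(n, 0.5, 0.5)     # reduces to (i - 0.5) / n
print(weibull[0], cunnane[0], hazen[0])   # largest spread is at the extremes
```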
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK>
<USER_TASK:> Description:
Step1: Built-in magic commands start with
Step2: Character encoding
Step3: Other than these two optional lines in the beginning of a Python code file, no additional code is required for initializing a program.
Step4: This includes the whole module and makes it available for use later in the program. For example, we can do
Step5: Alternatively, we can choose to import all symbols (functions and variables) in a module to the current namespace (so that we don't need to use the prefix "math." every time we use something from the math module
Step6: This pattern can be very convenient, but in large programs that include many modules it is often a good idea to keep the symbols from each module in their own namespaces, by using the import math pattern. This would eliminate potentially confusing problems with namespace collisions.
Step7: Looking at what a module contains, and its documentation
Step8: And using the function help we can get a description of each function (almost ... not all functions have docstrings, as they are technically called, but the vast majority of functions are documented this way).
Step9: We can also use the help function directly on modules
Step10: Whitespace is ignored inside parentheses and brackets
Step11: So use that to make your code easier to read
Step14: This may make it hard to cut and paste code since the indentation may have to be adjusted to the block
Step15: You can create short anonymous functions or lambdas, or even assign lambdas to variables, but it is better to use a def
Step16: Assignment
Step17: Although not explicitly specified, a variable does have a type associated with it. The type is derived from the value that was assigned to it.
Step18: If we assign a new value to a variable, its type can change.
Step19: If we try to use a variable that has not yet been defined we get a NameError
Step20: Fundamental types
Step21: Type utility functions
Step22: We can also use the isinstance method for testing types of variables
Step23: Type casting
Step24: Complex variables cannot be cast to floats or integers. We need to use z.real or z.imag to extract the part of the complex number we want
Step25: Operators and comparisons
Step26: Note
Step27: Comparison operators >, <, >= (greater or equal), <= (less or equal), == equality, is identical.
Step28: Compound types
Step29: We can index a character in a string using []
Step30: Heads up, MATLAB users
Step31: If we omit either (or both) of start or stop from [start
Step32: We can also define the step size using the syntax [start
Step33: This technique is called slicing. Read more about the syntax here
Step34: List
Step35: We can use the same slicing techniques to manipulate lists as we could use on strings
Step36: Heads up, MATLAB users
Step37: Elements in a list do not all have to be of the same type
Step38: Python lists can be inhomogeneous and arbitrarily nested
Step39: Lists play a very important role in Python. For example, they are used in loops and other flow control structures (discussed below). There are a number of convenient functions for generating lists of various types, for example the range function
Step40: Adding, inserting, modifying, and removing elements from lists
Step41: We can modify lists by assigning new values to elements in the list. In technical jargon, lists are mutable.
Step42: Insert an element at a specific index using insert
Step43: Remove the first element with a specific value using 'remove'
Step44: Remove an element at a specific location using del
Step45: See help(list) for more details, or read the online documentation
Step46: We can unpack a tuple by assigning it to a comma-separated list of variables
Step47: If we try to assign a new value to an element in a tuple we get an error
Step48: Dictionaries
Step49: Control Flow
Step50: For the first time, here we encountered a peculiar and unusual aspect of the Python programming language
Step51: Loops
Step52: The for loop iterates over the elements of the supplied list, and executes the containing block once for each element. Any kind of list can be used in the for loop. For example
Step53: Note
Step54: To iterate over key-value pairs of a dictionary
Step55: Sometimes it is useful to have access to the indices of the values when iterating over a list. We can use the enumerate function for this
Step56: List comprehensions
Step57: while loops
Step58: Note that the print("done") statement is not part of the while loop body because of the difference in indentation.
Step60: Optionally, but highly recommended, we can define a so-called "docstring", which is a description of the function's purpose and behavior. The docstring should follow directly after the function definition, before the code in the function body.
Step62: Functions that return a value use the return keyword
Step64: We can return multiple values from a function using tuples (see above)
Step65: Default arguments and keyword arguments
Step66: If we don't provide a value for the debug argument when calling the function myfunc, it defaults to the value provided in the function definition
Step67: If we explicitly list the names of the arguments in the function calls, they do not need to come in the same order as in the function definition.
This is called keyword arguments, and is often very useful in functions that take a lot of optional arguments.
Step68: Unnamed functions (lambda functions)
Step69: This technique is useful, for example, when we want to pass a simple function as an argument to another function, like this
Step73: Classes
Step74: To create a new instance of a class
Step75: To invoke a class method in the class instance p
Step80: Note that calling class methods can modify the state of that particular class instance, but does not affect other class instances or any global variables.
Step81: We can import the module mymodule into our Python program using import
Step82: Use help(module) to get a summary of what the module provides
Step83: If we make changes to the code in mymodule.py, we need to reload it using reload
Step84: Exceptions
Step85: A typical use of exceptions is to abort functions when some error condition occurs, for example
Step86: To get information about the error, we can access the Exception class instance that describes the exception by using, for example
Step87: Counter
Step88: Sorting
Step89: List Comprehensions
Step91: Generators and Iterators
Step92: You need to recreate the lazy generator to use it a second time, or use a list
Step93: Lecture 2 continued
<ASSISTANT_TASK:> Python Code:
ls ..\..\Scripts\hello-world*.py

%%sh
cat ../../Scripts/hello-world.py

!python ..\..\Scripts\hello-world.py

%%sh
cat ../../Scripts/hello-world-in-swedish.py

!python ../../Scripts/hello-world-in-swedish.py

import math

import math
x = math.cos(2 * math.pi)
print(x)

from math import *
x = cos(2 * pi)
print(x)

from math import cos, pi
x = cos(2 * pi)
print(x)

import math
print(dir(math))

help(math.log)

log(10)

log(10, 2)

for i in [1, 2, 3, 4]:
    print i
print "done looping"

long_winded_computation = (1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 + 10 + 11 + 13 + 14 + 15)
long_winded_computation

list_of_lists = [[1,2,3],[4,5,6],[7,8,9]]
easier_to_read_list_of_lists = [[1, 2, 3],
                                [4, 5, 6],
                                [7, 8, 9]]

def double(x):
    """this is where you put in the docstring that explains what the function does:
    this function multiplies its input by 2"""
    return x * 2

def apply_to_one(f):
    """calls the function f with 1 as its argument"""
    return f(1)

apply_to_one(double)

apply_to_one(lambda x: x + 4)

another_double = lambda x: 2 * x

def yet_another_double(x):
    return 2 * x

# variable assignments
x = 1.0
my_variable = 12.2

type(x)

x = 1
type(x)

print(y)

# integers
x = 1
type(x)

# float
x = 1.0
type(x)

# boolean
b1 = True
b2 = False
type(b1)

# complex numbers: note the use of `j` to specify the imaginary part
x = 1.0 - 1.0j
type(x)

print(x)
print(x.real, x.imag)

import types
# print all types defined in the `types` module
print(dir(types))

x = 1.0

# check if the variable x is a float
type(x) is float

# check if the variable x is an int
type(x) is int

isinstance(x, float)

x = 1.5
print(x, type(x))

x = int(x)
print(x, type(x))

z = complex(x)
print(z, type(z))

x = float(z)

y = bool(z.real)
print(z.real, " -> ", y, type(y))
y = bool(z.imag)
print(z.imag, " -> ", y, type(y))

1 + 2, 1 - 2, 1 * 2, 1 / 2

1.0 + 2.0, 1.0 - 2.0, 1.0 * 2.0, 1.0 / 2.0

# Integer division of float numbers
3.0 // 2.0

# Note! The power operator in python isn't ^, but **
2 ** 2

True and False

not False

True or False

2 > 1, 2 < 1

2 > 2, 2 < 2

2 >= 2, 2 <= 2

# equality
[1,2] == [1,2]

# objects identical?
l1 = l2 = [1,2]
l1 is l2

s = "Hello world"
type(s)

# length of the string: the number of characters
len(s)

# replace a substring in a string with something else
s2 = s.replace("world", "test")
print(s2)

s[0]

s[0:5]

s[4:5]

s[:5]

s[6:]

s[:]

s[::1]

s[::2]

print("str1", "str2", "str3")  # The print statement concatenates strings with a space

print("str1", 1.0, False, -1j)  # The print statement converts all arguments to strings

print("str1" + "str2" + "str3")  # strings added with + are concatenated without space

print("value = %f" % 1.0)  # we can use C-style string formatting

# this formatting creates a string
s2 = "value1 = %.2f. value2 = %d" % (3.1415, 1.5)
print(s2)

# alternative, more intuitive way of formatting a string
s3 = 'value1 = {0}, value2 = {1}'.format(3.1415, 1.5)
print(s3)

l = [1,2,3,4]
print(type(l))
print(l)

print(l)
print(l[1:3])
print(l[::2])

l[0]

l = [1, 'a', 1.0, 1-1j]
print(l)

nested_list = [1, [2, [3, [4, [5]]]]]
nested_list

start = 10
stop = 30
step = 2
range(start, stop, step)

# in python 3 range generates an iterator, which can be converted to a list using 'list(...)'.
# It has no effect in python 2
list(range(start, stop, step))

list(range(-10, 10))

s

# convert a string to a list by type casting:
s2 = list(s)
s2

# sorting lists
s2.sort()
print(s2)

# create a new empty list
l = []

# add elements using `append`
l.append("A")
l.append("d")
l.append("d")

print(l)

l[1] = "p"
l[2] = "p"
print(l)

l[1:3] = ["d", "d"]
print(l)

l.insert(0, "i")
l.insert(1, "n")
l.insert(2, "s")
l.insert(3, "e")
l.insert(4, "r")
l.insert(5, "t")
print(l)

l.remove("A")
print(l)

del l[7]
del l[6]
print(l)

point = (10, 20)
print(point, type(point))

point = 10, 20
print(point, type(point))

x, y = point
print("x =", x)
print("y =", y)

point[0] = 20

params = {"parameter1" : 1.0,
          "parameter2" : 2.0,
          "parameter3" : 3.0,}
print(type(params))
print(params)

print("parameter1 = " + str(params["parameter1"]))
print("parameter2 = " + str(params["parameter2"]))
print("parameter3 = " + str(params["parameter3"]))

params["parameter1"] = "A"
params["parameter2"] = "B"

# add a new entry
params["parameter4"] = "D"

print("parameter1 = " + str(params["parameter1"]))
print("parameter2 = " + str(params["parameter2"]))
print("parameter3 = " + str(params["parameter3"]))
print("parameter4 = " + str(params["parameter4"]))

statement1 = False
statement2 = False

if statement1:
    print("statement1 is True")
elif statement2:
    print("statement2 is True")
else:
    print("statement1 and statement2 are False")

statement1 = statement2 = True

if statement1:
    if statement2:
        print("both statement1 and statement2 are True")

# Bad indentation!
if statement1:
    if statement2:
    print("both statement1 and statement2 are True")  # this line is not properly indented

statement1 = False

if statement1:
    print("printed if statement1 is True")

    print("still inside the if block")

if statement1:
    print("printed if statement1 is True")

print("now outside the if block")

for x in [1,2,3]:
    print(x)

for x in range(4):  # by default range starts at 0
    print(x)

for x in range(-3,3):
    print(x)

for word in ["scientific", "computing", "with", "python"]:
    print(word)

for key, value in params.items():
    print(key + " = " + str(value))

for idx, x in enumerate(range(-3,3)):
    print(idx, x)

l1 = [x**2 for x in range(0,5)]
print(l1)

i = 0
while i < 5:
    print(i)
    i = i + 1
print("done")

def func0():
    print("test")

func0()

def func1(s):
    """Print a string 's' and tell how many characters it has"""
    print(s + " has " + str(len(s)) + " characters")

help(func1)

func1("test")

def square(x):
    """Return the square of x."""
    return x ** 2

square(4)

def powers(x):
    """Return a few powers of x."""
    return x ** 2, x ** 3, x ** 4

powers(3)

x2, x3, x4 = powers(3)
print(x3)

def myfunc(x, p=2, debug=False):
    if debug:
        print("evaluating myfunc for x = " + str(x) + " using exponent p = " + str(p))
    return x**p

myfunc(5)

myfunc(5, debug=True)

myfunc(p=3, debug=True, x=7)

f1 = lambda x: x**2

# is equivalent to
def f2(x):
    return x**2

f1(2), f2(2)

# map is a built-in python function
map(lambda x: x**2, range(-3,4))

# in python 3 we can use `list(...)` to convert the iterator to an explicit list
list(map(lambda x: x**2, range(-3,4)))

class Point:
    """Simple class for representing a point in a Cartesian coordinate system."""

    def __init__(self, x, y):
        """Create a new Point at x, y."""
        self.x = x
        self.y = y

    def translate(self, dx, dy):
        """Translate the point by dx and dy in the x and y direction."""
        self.x += dx
        self.y += dy

    def __str__(self):
        return("Point at [%f, %f]" % (self.x, self.y))

p1 = Point(0, 0)  # this will invoke the __init__ method in the Point class
print(p1)         # this will invoke the __str__ method

p2 = Point(1, 1)
p1.translate(0.25, 1.5)
print(p1)
print(p2)

%%file mymodule.py
"""
Example of a python module. Contains a variable called my_variable,
a function called my_function, and a class called MyClass.
"""

my_variable = 0

def my_function():
    """Example function"""
    return my_variable

class MyClass:
    """Example class."""

    def __init__(self):
        self.variable = my_variable

    def set_variable(self, new_value):
        """Set self.variable to a new value"""
        self.variable = new_value

    def get_variable(self):
        return self.variable

import mymodule

help(mymodule)

mymodule.my_variable

mymodule.my_function()

my_class = mymodule.MyClass()
my_class.set_variable(10)
my_class.get_variable()

reload(mymodule)  # works only in python 2

raise Exception("description of the error")

try:
    print("test")
    # generate an error: the variable test is not defined
    print(test)
except:
    print("Caught an exception")

try:
    print("test")
    # generate an error: the variable test is not defined
    print(test)
except Exception as e:
    print("Caught an exception:" + str(e))

import collections as coll
c = coll.Counter([0,1,2,0])
print c

document = "This is a test document with a lot of different words but at least one duplicate".split(" ")
word_counts = coll.Counter(document)
print word_counts

x = [4,1,2,3]
y = sorted(x)
x.sort()
print "X=" + str(x)
print "y=" + str(y)

even_numbers = [x for x in range(5) if x % 2 == 0]
squares = [x * x for x in range(5)]
even_squared = [x*x for x in even_numbers]
print even_squared

square_dict = {x: x*x for x in range(5)}
print square_dict

range(10)  # works great, but sometimes we only need the values one at a time

def lazy_range(n):
    """a lazy version of the range function to only create the value when
    evaluating it; important when the range gets really big"""
    i = 0
    while i < n:
        yield i
        i += 1

for i in lazy_range(10):
    print str(double(i))

### randomness
import random as rand
[rand.random() for _ in range(4)]

import this
<END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK>
<USER_TASK:> Description:
Step1: <br/>
Step2: <br/>
Step3: <br/>
Step4: <br/>
Step5: <br/>
Step6: Generate CSV file for Scoreboard
<ASSISTANT_TASK:> Python Code:
#!pip install --user --upgrade watson-developer-cloud

# Making a local folder to put my data.
# NOTE: YOU MUST do something like this on a Spark Enterprise cluster at the hackathon so that
# you can put your data into a separate local file space. Otherwise, you'll likely collide with
# your fellow participants.
my_team_name_data_folder = 'my_team_name_data_folder'

mydatafolder = os.environ['PWD'] + '/' + my_team_name_data_folder + '/zipfiles'
if os.path.exists(mydatafolder) is False:
    os.makedirs(mydatafolder)

!ls my_team_name_data_folder/zipfiles

from __future__ import division

import cStringIO
import glob
import json
import os
import requests
import time
import timeit
import zipfile
import copy

from random import randint

import matplotlib.pyplot as plt
import numpy as np

import ibmseti

from watson_developer_cloud import VisualRecognitionV3

apiVer = VisualRecognitionV3.latest_version  # '2016-05-20'
classifier_prefix = 'setisignals'

# You can sign up with Watson VR through Bluemix to get a key.
# However, Hackathon participants will be provided with a Watson VR key that has more free API calls per day.
apiKey = 'WATSON-VISUAL-RECOGNITION-API-KEY'

vr = VisualRecognitionV3(apiVer, api_key=apiKey)

## View all of your classifiers
classifiers = vr.list_classifiers()
print json.dumps(classifiers, indent=2)

## Run this cell ONLY IF you want to REMOVE all classifiers
# Otherwise, the subsequent cell will append images to the `classifier_prefix` classifier
classifiers = vr.list_classifiers()
for c in classifiers['classifiers']:
    vr.delete_classifier(c['classifier_id'])

classifiers = vr.list_classifiers()
print json.dumps(classifiers, indent=2)

# Create a new classifier, or get the ID for the latest SETISIGNALS classifier
classifier_id = None
classifier = None

classifiers = vr.list_classifiers()
for c in classifiers['classifiers']:
    if c['status'] == 'ready' and (classifier_prefix in c['classifier_id']):
        classifier_id = c['classifier_id']

if classifier_id is not None:
    classifier = vr.get_classifier(classifier_id)
    print '\r\nFound classifier:\r\n\r\n{}'.format(json.dumps(classifier, indent=2))
else:
    print 'No custom classifier available\r\n'
    print(json.dumps(classifiers, indent=2))

squiggle = sorted(glob.glob('{}/classification_*_squiggle.zip'.format(mydatafolder)))
narrowband = sorted(glob.glob('{}/classification_*_narrowband.zip'.format(mydatafolder)))
narrowbanddrd = sorted(glob.glob('{}/classification_*_narrowbanddrd.zip'.format(mydatafolder)))
noise = sorted(glob.glob('{}/classification_*_noise.zip'.format(mydatafolder)))

sq = len(squiggle)
nb = len(narrowband)
nd = len(narrowbanddrd)
ns = len(noise)

## Possible todo here: Try using the 'noise' as a "negative" example when training Watson. See the Watson documentation.
num = max(sq, nb, nd, ns)
# num = max(sq, nb, nd)

if classifier_id is None:
    print 'Adding custom classifier ... this may take awhile'
else:
    print 'Updating custom classifier {} ... this may take awhile'.format(classifier_id)

for i in range(num):

    squiggle_p = open(squiggle[i], 'rb') if i < sq else None
    narrowband_p = open(narrowband[i], 'rb') if i < nb else None
    narrowbanddrd_p = open(narrowbanddrd[i], 'rb') if i < nd else None
    noise_p = open(noise[i], 'rb') if i < ns else None

    if classifier_id is None:
        # print 'Creating with\r\n{}\r\n{}\r\n{}\r'.format(squiggle_p, narrowband_p, narrowbanddrd_p)  # use this line if going to use 'noise' as negative example
        print 'Creating with\r\n{}\r\n{}\r\n{}\r\n{}\r'.format(squiggle_p, narrowband_p, narrowbanddrd_p, noise_p)
        classifier = vr.create_classifier(
            classifier_prefix,
            squiggle_positive_examples = squiggle_p,
            narrowband_positive_examples = narrowband_p,
            narrowbanddrd_positive_examples = narrowbanddrd_p,
            noise_positive_examples = noise_p  # remove this if going to use noise as 'negative' examples
        )
        classifier_id = classifier['classifier_id']
    else:
        print 'Updating with\r\n{}\r\n{}\r\n{}\r\n{}\r'.format(squiggle_p, narrowband_p, narrowbanddrd_p, noise_p)
        # print 'Updating with\r\n{}\r\n{}\r\n{}\r'.format(squiggle_p, narrowband_p, narrowbanddrd_p)  # use this line if going to use 'noise' as negative example
        classifier = vr.update_classifier(
            classifier_id,
            squiggle_positive_examples = squiggle_p,
            narrowband_positive_examples = narrowband_p,
            narrowbanddrd_positive_examples = narrowbanddrd_p,
            noise_positive_examples = noise_p  # remove this if going to use noise as 'negative' examples
        )

    if squiggle_p is not None: squiggle_p.close()
    if narrowband_p is not None: narrowband_p.close()
    if narrowbanddrd_p is not None: narrowbanddrd_p.close()
    if noise_p is not None: noise_p.close()

    if classifier is not None:
        print('Classifier: {}'.format(classifier_id))
        status = classifier['status']
        startTimer = timeit.default_timer()
        while status in ['training', 'retraining']:
            print('Status: {}'.format(status))
            time.sleep(10)
            classifier = vr.get_classifier(classifier_id)
            status = classifier['status']
        stopTimer = timeit.default_timer()
        print '{} took {} minutes'.format('Training' if i == 0 else 'Retraining', int(stopTimer - startTimer) / 60)

print(json.dumps(vr.get_classifier(classifier_id), indent=2))

zz = zipfile.ZipFile(mydatafolder + '/' + 'testset_narrowband.zip')
test_list = zz.namelist()

randomSignal = zz.open(test_list[10], 'r')

from IPython.display import Image
squigImg = randomSignal.read()
Image(squigImg)

# note - have to 'open' this again because it was already .read() out in the line above
randomSignal = zz.open(test_list[10], 'r')
url_result = vr.classify(images_file=randomSignal, classifier_ids=classifier_id, threshold=0.0)
print(json.dumps(url_result, indent=2))

# Create a dictionary object to store results from Watson
from collections import defaultdict

class_list = ['squiggle', 'noise', 'narrowband', 'narrowbanddrd']

results_group_by_class = {}
for classification in class_list:
    results_group_by_class[classification] = defaultdict(list)

failed_to_classify_uuid_list = []

print classifier_id

results_group_by_class

### NOTE. If this breaks due to a requests timeout or other error: **just restart this cell**
# The processing should pick up where it left off.

## NOTE: This code could be more efficient and make fewer HTTP calls to Watson. I could have dumped the testset_<class>.zip into
# smaller zip files (testset_<class>_N.zip for N = 1,2,3,4...) and then made a single call to Watson with each smaller zip file
#
# Example:
# with open(mydatafolder + '/' + 'testset_squiggle_1.zip', 'rb') as squigglezips:
#     url_result = vr.classify(images_file=squigglezips, classifier_ids=classifier_id, threshold=0.0)

# The 'testset_squiggle.zip' files are too large to make a single call to Watson, and so this code goes through
# each file one by one.

### ALSO, I could have farmed this out to the Spark executor nodes as well.

for sigclass in class_list:

    passed = 0

    zz = zipfile.ZipFile(mydatafolder + '/' + 'testset_{}.zip'.format(sigclass))
    zzlist = zz.namelist()

    ### REDUCING TESTING to only the first 30 signals in the test set -- to keep this demonstration code faster.
    zzlist = zzlist[:30]
    zzlistsize = len(zzlist)

    startTimer = timeit.default_timer()

    resdict = results_group_by_class[classification]

    print 'Running test ({} images) for {}... this may take a while.'.format(zzlistsize, sigclass)

    for fn in zzlist:

        pngfilename = fn.split('/')[-1]
        uuid = pngfilename.split('.')[0]
        classification = sigclass

        if uuid in resdict['uuid'] or uuid in failed_to_classify_uuid_list:
            print "  have already classified {}".format(uuid)
            continue

        classify_result = vr.classify(images_file=zz.open(fn, 'r'), classifier_ids=classifier_id, threshold=0.0)

        maxscore = 0
        maxscoreclass = None

        classifiers_arr = classify_result['images'][0]['classifiers']

        score_list = []
        for classifier_result in classifiers_arr:
            for class_result in classifier_result['classes']:
                score_list.append((class_result['class'], class_result['score']))
                if class_result['score'] > maxscore:
                    maxscore = class_result['score']
                    maxscoreclass = class_result['class']

        # sort alphabetically
        score_list.sort(key = lambda x: x[0])
        score_list = map(lambda x: x[1], score_list)

        if maxscoreclass is None:
            print 'Failed: {} - Actual: {}, No classification returned'.format(pngfilename, classification)
            # print(json.dumps(classify_result, indent=2))
        elif maxscoreclass != classification:
            print 'Failed: {} - Actual: {}, Watson Predicted: {} ({})'.format(pngfilename, classification, maxscoreclass, maxscore)
        else:
            passed += 1
            print 'Passed: {} - Actual: {}, Watson Predicted: {} ({})'.format(pngfilename, classification, maxscoreclass, maxscore)

        if maxscoreclass is not None:
            resdict['signal_classification'].append(classification)
            resdict['uuid'].append(uuid)
            resdict['watson_class'].append(maxscoreclass)
            resdict['watson_class_score'].append(maxscore)
            resdict['scores'].append(score_list)
        else:
            # add to failed list
            failed_to_classify_uuid_list.append(uuid)

    stopTimer = timeit.default_timer()

    print 'Test Score: {}% ({} of {} Passed)'.format(int((float(passed) / zzlistsize) * 100), passed, zzlistsize)
    print 'Tested {} images in {} minutes'.format(zzlistsize, int(stopTimer - startTimer) / 60)

print "DONE"

import pickle
pickle.dump(results_group_by_class, open(mydatafolder + '/' + "watson_results.pickle", "w"))

watson_results = pickle.load(open(mydatafolder + '/' + "watson_results.pickle", "r"))

# reorganize the watson_results dictionary to extract
# a list of [true_class, [scores], estimated_class] and
# use these for measuring our model's performance
class_scores = []
for k in watson_results.keys():
    class_scores += zip(watson_results[k]['uuid'],
                        watson_results[k]['signal_classification'],
                        watson_results[k]['scores'],
                        watson_results[k]['watson_class'])

class_scores[100]

from sklearn.metrics import classification_report
import sklearn

y_train = [x[1] for x in class_scores]
y_pred = [x[3] for x in class_scores]
y_prob = [x[2] for x in class_scores]

# we normalize the Watson score values to 1 in order to use them in the log_loss calculation
# even though the Watson VR scores are not true class prediction probabilities
y_prob = map(lambda x: (x, sum(x)), y_prob)
y_prob = map(lambda x: [y / x[1] for y in x[0]], y_prob)

print sklearn.metrics.classification_report(y_train, y_pred)
print sklearn.metrics.confusion_matrix(y_train, y_pred)
print("Classification accuracy: %0.6f" % sklearn.metrics.accuracy_score(y_train, y_pred))
print("Log Loss: %0.6f" % sklearn.metrics.log_loss(y_train, y_prob))

import csv
my_output_results = my_team_name_data_folder + '/' + 'watson_scores.csv'
with open(my_output_results, 'w') as csvfile:
    fwriter = csv.writer(csvfile, delimiter=',')
    for row in class_scores:
        fwriter.writerow([row[0]] + row[2])

!cat my_team_name_data_folder/watson_scores.csv
<END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK>
<USER_TASK:> Description:
Step 1
Step1: Make sure to restart your kernel to ensure this change has taken place.
Step2: To start, we'll create our list of users, movies and features. While the users and movies represent elements in our database, for a content-based filtering method the features of the movies are likely hand-engineered and rely on domain knowledge to provide the best embedding space. Here we use the categories of Action, Sci-Fi, Comedy, Cartoon, and Drama to describe our movies (and thus our users).
Step3: Initialize our users, movie ratings, and features
Step4: Computing the user feature matrix
Step5: Next we normalize each user feature vector to sum to 1. Normalizing isn't strictly necessary, but it makes it so that rating magnitudes will be comparable between users.
Step6: Ranking feature relevance for each user
Step7: Determining movie recommendations.
Step8: The computation above finds the similarity measure between each user and each movie in our database. To focus only on the ratings for new movies, we apply a mask to the all_users_ratings matrix.
Step9: Finally, let's grab and print out the top 2 rated movies for each user.
<ASSISTANT_TASK:> Python Code:
!pip3 install tensorflow

import numpy as np
import tensorflow as tf

print(tf.__version__)

users = ['Ryan', 'Danielle', 'Vijay', 'Chris']
movies = [
    'Star Wars', 'The Dark Knight', 'Shrek', 'The Incredibles', 'Bleu',
    'Memento'
]
features = ['Action', 'Sci-Fi', 'Comedy', 'Cartoon', 'Drama']

num_users = len(users)
num_movies = len(movies)
num_feats = len(features)
num_recommendations = 2

# Each row represents a user's rating for the different movies.
users_movies = tf.constant([
    [4, 6, 8, 0, 0, 0],
    [0, 0, 10, 0, 8, 3],
    [0, 6, 0, 0, 3, 7],
    [10, 9, 0, 5, 0, 2]], dtype=tf.float32)

# Features of the movies one-hot encoded.
# e.g. columns could represent
# ['Action', 'Sci-Fi', 'Comedy', 'Cartoon', 'Drama']
movies_feats = tf.constant([
    [1, 1, 0, 0, 1],
    [1, 1, 0, 0, 0],
    [0, 0, 1, 1, 0],
    [1, 0, 1, 1, 0],
    [0, 0, 0, 0, 1],
    [1, 0, 0, 0, 1]], dtype=tf.float32)

users_feats = tf.matmul(users_movies, movies_feats)
users_feats

users_feats = users_feats / tf.reduce_sum(users_feats, axis=1, keepdims=True)
users_feats

top_users_features = tf.nn.top_k(users_feats, num_feats)[1]
top_users_features

for i in range(num_users):
    feature_names = [features[int(index)] for index in top_users_features[i]]
    print('{}: {}'.format(users[i], feature_names))

users_ratings = tf.matmul(users_feats, tf.transpose(movies_feats))
users_ratings

users_unseen_movies = tf.equal(users_movies, tf.zeros_like(users_movies))
ignore_matrix = tf.zeros_like(tf.cast(users_movies, tf.float32))

users_ratings_new = tf.where(
    users_unseen_movies,
    users_ratings,
    ignore_matrix)
users_ratings_new

top_movies = tf.nn.top_k(users_ratings_new, num_recommendations)[1]
top_movies

for i in range(num_users):
    movie_names = [movies[index] for index in top_movies[i]]
    print('{}: {}'.format(users[i], movie_names))
<END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK>
<USER_TASK:> Description:
Step 1
Step1: Independent Graph Assumption
Step2: From the above, we conclude that the assumption that the graphs were independent is false. This is because the off-diagonal components of the covariance are highly significant in the cross-graph covariance matrix.
Step3: From the above we observe that, since the elbow of the BIC curve lies at 6, our data may not have been sampled identically from one distribution. This assumption, based on the evidence provided, is also false.
Step4: From the above, we can conclude that the edges are not independent of one another, as the ratio of on- to off-diagonal covariance is very small. This assumption is false.
Step5: Looking at the BIC curve, we observe that the optimal number of clusters is 5, indicating that edges are not in fact identically distributed. This assumption appears to be false.
<ASSISTANT_TASK:> Python Code: # change working dir path = "/Users/albertlee/claritycontrol/code/scripts" # use your own path import os os.chdir(path) import numpy as np import matplotlib.pyplot as plt import os import csv import igraph as ig %matplotlib inline # Initializing dataset names dnames = list(['../data/hist']) print "Dataset: " + ", ".join(dnames) # Getting graph names fs = list() for dd in dnames: fs.extend([root+'/'+file for root, dir, files in os.walk(dd) for file in files]) fs = fs[1:] def loadGraphs(filenames, rois, printer=False): A = np.zeros((rois, rois, len(filenames))) for idx, files in enumerate(filenames): if printer: print "Loading: " + files g = ig.Graph.Read_GraphML(files) tempg = g.get_adjacency(attribute='weight') A[:,:,idx] = np.asarray(tempg.data) return A # Load X X = loadGraphs(fs, 70) print X.shape # Load Y ys = csv.reader(open('../data/points/Fear199.csv')) y = [y[5] for y in ys] y = [1 if x=='F' else 0 for x in y[1:]] vectorized = np.reshape(X, (X.shape[0]**2, X.shape[2])).T covar = np.cov(vectorized) plt.figure(figsize=(7,7)) plt.imshow(covar) plt.title('Covariance of KKI2009 dataset') plt.colorbar() plt.show() diag = covar.diagonal()*np.eye(covar.shape[0]) hollow = covar-diag d_det = np.linalg.det(diag) h_det = np.linalg.det(hollow) plt.figure(figsize=(11,8)) plt.subplot(121) plt.imshow(diag) plt.clim([0, np.max(covar)]) plt.title('Determinant of on-diagonal: ' + str(d_det)) plt.subplot(122) plt.imshow(hollow) plt.clim([0, np.max(covar)]) plt.title('Determinant of off-diagonal: ' + str(h_det)) plt.show() print "Ratio of on- and off-diagonal determinants: " + str(d_det/h_det) import sklearn.mixture i = np.linspace(1,15,15,dtype='int') print i bic = np.array(()) for idx in i: print "Fitting and evaluating model with " + str(idx) + " clusters." 
gmm = sklearn.mixture.GMM(n_components=idx,n_iter=1000,covariance_type='diag') gmm.fit(vectorized) bic = np.append(bic, gmm.bic(vectorized)) plt.figure(figsize=(7,7)) plt.plot(i, 1.0/bic) plt.title('BIC') plt.ylabel('score') plt.xlabel('number of clusters') plt.show() print bic vect = np.reshape(X, (X.shape[0]**2, X.shape[2])) covar = np.cov(vect) plt.figure(figsize=(7,7)) plt.imshow(covar) plt.title('Covariance of KKI2009 dataset') plt.colorbar() plt.show() diag = covar.diagonal()*np.eye(covar.shape[0]) hollow = covar-diag d_det = np.sum(diag) h_det = np.sum(hollow) plt.figure(figsize=(11,8)) plt.subplot(121) plt.imshow(diag) plt.clim([0, np.max(covar)]) plt.title('Sum of on-diagonal: ' + str(d_det)) plt.subplot(122) plt.imshow(hollow) plt.clim([0, np.max(covar)]) plt.title('Sum of off-diagonal: ' + str(h_det)) plt.show() print "Ratio of on- and off-diagonal covariance sums: " + str(d_det/h_det) import sklearn.mixture i = np.linspace(1,15,15,dtype='int') print i bic2 = np.array(()) for idx in i: print "Fitting and evaluating model with " + str(idx) + " clusters." 
gmm = sklearn.mixture.GMM(n_components=idx,n_iter=1000,covariance_type='diag') gmm.fit(vect.T) bic2 = np.append(bic2, gmm.bic(vect.T)) plt.figure(figsize=(7,7)) plt.plot(i, 1.0 / bic2) plt.title('BIC') plt.ylabel('score') plt.xlabel('number of clusters') plt.show() print bic2 import scipy.stats as ss ya = np.array(y) # clf = sklm.LinearRegression() # print vectorized.T.shape edgeprob = 1.0*np.sum(1.0*(vectorized.T>0),1)/4900 # print edgeprob # print ya.shape # vals = clf.fit(edgeprob.T, ya) # slope = clf.coef_ # intercept = clf.intercept_ vals = ss.linregress(edgeprob, ya) m = vals[0] c = vals[1] def comp_value(m, c, data): return m.T*data + c resi = np.array(()) for idx, subj in enumerate(ya): temp = comp_value(m, c, edgeprob[idx]) resi = np.append(resi, subj - temp) plt.figure(figsize=(7,7)) plt.scatter(edgeprob, resi) plt.title('Residual assignment error') plt.xlabel('edge probability') plt.ylabel('error') plt.show() <END_TASK>
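An aside on the task above (not part of the original notebook): it selects the number of clusters with `sklearn.mixture.GMM`, which has since been removed from scikit-learn. A minimal, runnable sketch of the same BIC-based model selection with the current `GaussianMixture` API, on synthetic data with two well-separated clusters:

```python
# Sketch only: BIC-based cluster-count selection with the modern scikit-learn API.
# The data here is synthetic (two Gaussian blobs), not the graph data from the task.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Two well-separated 2-D clusters, 50 points each.
data = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(8, 1, (50, 2))])

bics = []
for k in range(1, 6):
    gmm = GaussianMixture(n_components=k, covariance_type='diag', random_state=0)
    gmm.fit(data)
    bics.append(gmm.bic(data))

# BIC is minimized (not maximized) at the preferred model.
best_k = 1 + int(np.argmin(bics))
print(best_k)
```

Note that the notebook plots `1.0 / bic` and looks for an elbow; with raw BIC values the convention is simply to take the minimum, as above.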
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Visualise the data Step2: Now I'm starting to see a pattern here. Let's see how many females survived. Step3: Looks like the majority of people who survived were female.
<ASSISTANT_TASK:> Python Code: %matplotlib inline import pandas as pd import numpy as np train = pd.read_csv('train.csv') test = pd.read_csv('test.csv') train.shape train.head() train.Age.hist() train.Age.describe() train[train['Age'] > 60][['Sex', 'Pclass', 'Age', 'Survived']] females = train[train['Sex'] == 'female'] females_who_survived = females[females['Survived'] == 1] females_who_survived.shape males = train[train['Sex'] == 'male'] males_who_survived = males[males['Survived'] == 1] males_who_survived.shape import pylab as pl from sklearn.cross_validation import train_test_split from sklearn.ensemble import RandomForestRegressor from sklearn.ensemble import RandomForestClassifier from sklearn.metrics import r2_score test.head() cols = ['Age', 'Pclass'] notnull_age = train[cols][train['Age'].notnull()] notnull_survived = train['Survived'][train['Age'].notnull()] notnull_age.head() clf = RandomForestClassifier(n_estimators=20, max_features=2, min_samples_split=5) clf.fit(notnull_age, notnull_survived) notnull_test = test[cols][test['Age'].notnull()] clf.predict(notnull_test) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network. Step2: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever. Step3: And we can see the characters encoded as integers. Step4: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from. Step5: Making training mini-batches Step6: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps. Step7: If you implemented get_batches correctly, the above output should look something like Step8: LSTM Cell Step9: RNN Output Step10: Training loss Step11: Optimizer Step12: Build the network Step13: Hyperparameters Step14: Time for training Step15: Saved checkpoints Step16: Sampling Step17: Here, pass in the path to a checkpoint and sample from the network.
<ASSISTANT_TASK:> Python Code: import time from collections import namedtuple import numpy as np import tensorflow as tf with open('anna.txt', 'r') as f: text=f.read() vocab = set(text) vocab_to_int = {c: i for i, c in enumerate(vocab)} int_to_vocab = dict(enumerate(vocab)) encoded = np.array([vocab_to_int[c] for c in text], dtype=np.int32) text[:100] encoded[:100] len(vocab) def get_batches(arr, n_seqs, n_steps): '''Create a generator that returns batches of size n_seqs x n_steps from arr. Arguments --------- arr: Array you want to make batches from n_seqs: Batch size, the number of sequences per batch n_steps: Number of sequence steps per batch ''' # Get the batch size and number of batches we can make batch_size = len(arr) // n_seqs n_batches = batch_size // n_steps # Keep only enough characters to make full batches arr = arr[:n_batches * n_seqs * n_steps] # Reshape into n_seqs rows arr = arr.reshape((n_seqs, -1)) for n in range(0, arr.shape[1], n_steps): # The features x = arr[:, n: n + n_steps] # The targets, shifted by one y = np.concatenate((arr[:, n +1 : n + n_steps], arr[:, n:n+1]), axis=1) yield x, y batches = get_batches(encoded, 10, 50) x, y = next(batches) print('x\n', x[:10, :10]) print('\ny\n', y[:10, :10]) def build_inputs(batch_size, num_steps): ''' Define placeholders for inputs, targets, and dropout Arguments --------- batch_size: Batch size, number of sequences per batch num_steps: Number of sequence steps in a batch ''' # Declare placeholders we'll feed into the graph inputs = tf.placeholder(tf.int32, [batch_size, num_steps]) targets = tf.placeholder(tf.int32, [batch_size, num_steps]) # Keep probability placeholder for drop out layers keep_prob = tf.placeholder(tf.float32) return inputs, targets, keep_prob def build_lstm(lstm_size, num_layers, batch_size, keep_prob): ''' Build LSTM cell. 
Arguments --------- keep_prob: Scalar tensor (tf.placeholder) for the dropout keep probability lstm_size: Size of the hidden layers in the LSTM cells num_layers: Number of LSTM layers batch_size: Batch size ''' ### Build the LSTM Cell # Use a basic LSTM cell lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size) # Add dropout to the cell outputs drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob = keep_prob) # Stack up multiple LSTM layers, for deep learning cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers) initial_state = cell.zero_state(batch_size, tf.float32) return cell, initial_state def build_output(lstm_output, in_size, out_size): ''' Build a softmax layer, return the softmax output and logits. Arguments --------- lstm_output: List of output tensors from the LSTM layer in_size: Size of the input tensor, for example, size of the LSTM cells out_size: Size of this softmax layer ''' L = in_size N = out_size # Reshape output so it's a bunch of rows, one row for each step for each sequence. # Concatenate lstm_output over axis 1 (the columns) seq_output = tf.concat(lstm_output, axis=1) # Reshape seq_output to a 2D tensor with lstm_size columns x = tf.reshape(seq_output, [-1, L]) # Connect the RNN outputs to a softmax layer with tf.variable_scope('softmax'): # Create the weight and bias variables here softmax_w = tf.Variable(tf.truncated_normal((L, N), stddev = 0.1)) softmax_b = tf.Variable(tf.zeros([out_size])) # Since output is a bunch of rows of RNN cell outputs, logits will be a bunch # of rows of logit outputs, one for each step and sequence logits = tf.matmul(x, softmax_w) + softmax_b # Use softmax to get the probabilities for predicted characters out = tf.nn.softmax(logits) return out, logits def build_loss(logits, targets, lstm_size, num_classes): ''' Calculate the loss from the logits and the targets. 
Arguments --------- logits: Logits from final fully connected layer targets: Targets for supervised learning lstm_size: Number of LSTM hidden units num_classes: Number of classes in targets ''' # One-hot encode targets and reshape to match logits, one row per sequence per step y_one_hot = tf.one_hot(targets, num_classes) y_reshaped = tf.reshape(y_one_hot, logits.get_shape()) # Softmax cross entropy loss loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = logits, labels = y_reshaped)) return loss def build_optimizer(loss, learning_rate, grad_clip): ''' Build optmizer for training, using gradient clipping. Arguments: loss: Network loss learning_rate: Learning rate for optimizer ''' # Optimizer for training, using gradient clipping to control exploding gradients tvars = tf.trainable_variables() grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvars), grad_clip) train_op = tf.train.AdamOptimizer(learning_rate) optimizer = train_op.apply_gradients(zip(grads, tvars)) return optimizer class CharRNN: def __init__(self, num_classes, batch_size=64, num_steps=50, lstm_size=128, num_layers=2, learning_rate=0.001, grad_clip=5, sampling=False): # When we're using this network for sampling later, we'll be passing in # one character at a time, so providing an option for that if sampling == True: batch_size, num_steps = 1, 1 else: batch_size, num_steps = batch_size, num_steps tf.reset_default_graph() # Build the input placeholder tensors self.inputs, self.targets, self.keep_prob = build_inputs(batch_size, num_steps) # Build the LSTM cell cell, self.initial_state = build_lstm(lstm_size, num_layers, batch_size, self.keep_prob) ### Run the data through the RNN layers # First, one-hot encode the input tokens x_one_hot = tf.one_hot(self.inputs, num_classes) # Run each sequence step through the RNN with tf.nn.dynamic_rnn outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state = self.initial_state) self.final_state = state # Get softmax predictions and 
logits self.prediction, self.logits = build_output(outputs, lstm_size, num_classes) # Loss and optimizer (with gradient clipping) self.loss = build_loss(self.logits, self.targets, lstm_size, num_classes) self.optimizer = build_optimizer(self.loss, learning_rate, grad_clip) batch_size = 10 # Sequences per batch num_steps = 50 # Number of sequence steps per batch lstm_size = 128 # Size of hidden layers in LSTMs num_layers = 2 # Number of LSTM layers learning_rate = 0.01 # Learning rate keep_prob = 0.5 # Dropout keep probability epochs = 20 # Save every N iterations save_every_n = 200 model = CharRNN(len(vocab), batch_size=batch_size, num_steps=num_steps, lstm_size=lstm_size, num_layers=num_layers, learning_rate=learning_rate) saver = tf.train.Saver(max_to_keep=100) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) # Use the line below to load a checkpoint and resume training #saver.restore(sess, 'checkpoints/______.ckpt') counter = 0 for e in range(epochs): # Train network new_state = sess.run(model.initial_state) loss = 0 for x, y in get_batches(encoded, batch_size, num_steps): counter += 1 start = time.time() feed = {model.inputs: x, model.targets: y, model.keep_prob: keep_prob, model.initial_state: new_state} batch_loss, new_state, _ = sess.run([model.loss, model.final_state, model.optimizer], feed_dict=feed) end = time.time() print('Epoch: {}/{}... '.format(e+1, epochs), 'Training Step: {}... '.format(counter), 'Training loss: {:.4f}... 
'.format(batch_loss), '{:.4f} sec/batch'.format((end-start))) if (counter % save_every_n == 0): saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size)) saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size)) tf.train.get_checkpoint_state('checkpoints') def pick_top_n(preds, vocab_size, top_n=5): p = np.squeeze(preds) p[np.argsort(p)[:-top_n]] = 0 p = p / np.sum(p) c = np.random.choice(vocab_size, 1, p=p)[0] return c def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "): samples = [c for c in prime] model = CharRNN(len(vocab), lstm_size=lstm_size, sampling=True) saver = tf.train.Saver() with tf.Session() as sess: saver.restore(sess, checkpoint) new_state = sess.run(model.initial_state) for c in prime: x = np.zeros((1, 1)) x[0,0] = vocab_to_int[c] feed = {model.inputs: x, model.keep_prob: 1., model.initial_state: new_state} preds, new_state = sess.run([model.prediction, model.final_state], feed_dict=feed) c = pick_top_n(preds, len(vocab)) samples.append(int_to_vocab[c]) for i in range(n_samples): x[0,0] = c feed = {model.inputs: x, model.keep_prob: 1., model.initial_state: new_state} preds, new_state = sess.run([model.prediction, model.final_state], feed_dict=feed) c = pick_top_n(preds, len(vocab)) samples.append(int_to_vocab[c]) return ''.join(samples) tf.train.latest_checkpoint('checkpoints') checkpoint = tf.train.latest_checkpoint('checkpoints') samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far") print(samp) checkpoint = 'checkpoints/i200_l512.ckpt' samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far") print(samp) checkpoint = 'checkpoints/i600_l512.ckpt' samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far") print(samp) checkpoint = 'checkpoints/i1200_l512.ckpt' samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far") print(samp) <END_TASK>
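An aside on the character-RNN task above (not part of the original notebook): the `get_batches` generator is the one piece that can be sanity-checked without TensorFlow. A self-contained sketch of the same batching logic, using `np.roll` to produce the one-step-shifted targets that the original builds with `np.concatenate`:

```python
# Sketch of the mini-batch generator from the task above, numpy-only.
import numpy as np

def get_batches(arr, n_seqs, n_steps):
    chars_per_batch = n_seqs * n_steps
    n_batches = len(arr) // chars_per_batch
    # Trim to full batches, then lay out one sequence per row.
    arr = arr[:n_batches * chars_per_batch].reshape((n_seqs, -1))
    for n in range(0, arr.shape[1], n_steps):
        x = arr[:, n:n + n_steps]
        # Targets are inputs shifted one step left, wrapping within the slice,
        # matching the concatenate-based construction in the original.
        y = np.roll(x, -1, axis=1)
        yield x, y

data = np.arange(40)
x, y = next(get_batches(data, n_seqs=2, n_steps=5))
print(x.shape, y.shape)
```

With 40 characters, a batch size of 2, and 5 steps, each batch is a 2x5 array and each target row is its input row shifted by one position.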
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: The next step is to set up the IceStick board. We import the class IceStick from Loam. Step2: Now that the IceStick setup is done, we can define the main circuit on the FPGA. Step3: We then compile the program to Verilog. This step also creates a PCF (physical constraints file). Step4: Now we run the open source tools for the Lattice ICE40. Step5: You can view the Verilog file generated by Magma. Step6: Notice that the top-level module contains two arguments (ports).
<ASSISTANT_TASK:> Python Code: import magma as m m.set_mantle_target("ice40") from loam.boards.icestick import IceStick # Create an instance of an IceStick board icestick = IceStick() # Turn on the Clock # The clock must turned on because we are using a synchronous counter icestick.Clock.on() # Turn on the LED D5 icestick.D5.on(); from mantle import Counter N = 22 # Define the main Magma Circuit on the FPGA on the IceStick main = icestick.DefineMain() # Instance a 22-bit counter counter = Counter(N) # Wire bit 21 of the counter's output to D5. main.D5 <= counter.O[N-1] # End main m.EndDefine() m.compile('build/blink', main) %%bash cd build yosys -q -p 'synth_ice40 -top main -blif blink.blif' blink.v arachne-pnr -q -d 1k -o blink.txt -p blink.pcf blink.blif icepack blink.txt blink.bin #iceprog blink.bin %cat build/blink.v %cat build/blink.pcf <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: The optimum way to use storage depends on whether you're doing production or analysis. For analysis, you should open the file as an AnalysisStorage object. This makes the analysis much faster. Step2: Reaction rates Step3: The self-rates (the rate of returning to the initial state) are undefined, and return not-a-number. Step4: We normally look at these on a log scale Step5: Flux Step6: Conditional transition probability Step7: Path ensemble properties Step8: Interface crossing probabilities Step9: Path length histograms Step10: Sampling properties Step11: Replica exchange sampling Step12: Replica exchange mixing matrix Step13: Replica exchange graph Step14: Replica exchange flow
<ASSISTANT_TASK:> Python Code: from __future__ import print_function %matplotlib inline import matplotlib.pyplot as plt import openpathsampling as paths import numpy as np %%time storage = paths.AnalysisStorage("ala_mstis_production.nc") print("PathMovers:", len(storage.pathmovers)) print("Engines:", len(storage.engines)) print("Samples:", len(storage.samples)) print("Ensembles:", len(storage.ensembles)) print("SampleSets:", len(storage.samplesets)) print("Snapshots:", len(storage.snapshots)) print("Trajectories:", len(storage.trajectories)) print("Networks:", len(storage.networks)) %%time mstis = storage.networks[0] %%time for cv in storage.cvs: print(cv.name, cv._store_dict) mstis.hist_args['max_lambda'] = { 'bin_width' : 2, 'bin_range' : (0.0, 90) } mstis.hist_args['pathlength'] = { 'bin_width' : 5, 'bin_range' : (0, 100) } %%time mstis.rate_matrix(storage.steps, force=True) stateA = storage.volumes["A"] stateB = storage.volumes["B"] stateC = storage.volumes["C"] tcp_AB = mstis.transitions[(stateA, stateB)].tcp tcp_AC = mstis.transitions[(stateA, stateC)].tcp tcp_BC = mstis.transitions[(stateB, stateC)].tcp tcp_BA = mstis.transitions[(stateB, stateA)].tcp tcp_CA = mstis.transitions[(stateC, stateA)].tcp tcp_CB = mstis.transitions[(stateC, stateB)].tcp plt.plot(tcp_AB.x, tcp_AB) plt.plot(tcp_CA.x, tcp_CA) plt.plot(tcp_BC.x, tcp_BC) plt.plot(tcp_AC.x, tcp_AC) # same as tcp_AB in MSTIS plt.plot(tcp_AB.x, np.log(tcp_AB)) plt.plot(tcp_CA.x, np.log(tcp_CA)) plt.plot(tcp_BC.x, np.log(tcp_BC)) import pandas as pd flux_matrix = pd.DataFrame(columns=mstis.states, index=mstis.states) for state_pair in mstis.transitions: transition = mstis.transitions[state_pair] flux_matrix.set_value(state_pair[0], state_pair[1], transition._flux) flux_matrix outer_ctp_matrix = pd.DataFrame(columns=mstis.states, index=mstis.states) for state_pair in mstis.transitions: transition = mstis.transitions[state_pair] outer_ctp_matrix.set_value(state_pair[0], state_pair[1], 
transition.ctp[transition.ensembles[-1]]) outer_ctp_matrix hists_A = mstis.transitions[(stateA, stateB)].histograms hists_B = mstis.transitions[(stateB, stateC)].histograms hists_C = mstis.transitions[(stateC, stateB)].histograms for hist in [hists_A, hists_B, hists_C]: for ens in hist['max_lambda']: normalized = hist['max_lambda'][ens].normalized() plt.plot(normalized.x, normalized) # add visualization of the sum for hist in [hists_A, hists_B, hists_C]: for ens in hist['max_lambda']: reverse_cumulative = hist['max_lambda'][ens].reverse_cumulative() plt.plot(reverse_cumulative.x, reverse_cumulative) for hist in [hists_A, hists_B, hists_C]: for ens in hist['max_lambda']: reverse_cumulative = hist['max_lambda'][ens].reverse_cumulative() plt.plot(reverse_cumulative.x, np.log(reverse_cumulative)) for hist in [hists_A, hists_B, hists_C]: for ens in hist['pathlength']: normalized = hist['pathlength'][ens].normalized() plt.plot(normalized.x, normalized) for ens in hists_A['pathlength']: normalized = hists_A['pathlength'][ens].normalized() plt.plot(normalized.x, normalized) scheme = storage.schemes[0] scheme.move_summary(storage.steps) scheme.move_summary(storage.steps, 'shooting') scheme.move_summary(storage.steps, 'minus') scheme.move_summary(storage.steps, 'repex') scheme.move_summary(storage.steps, 'pathreversal') repx_net = paths.ReplicaNetwork(scheme, storage.steps) repx_net.mixing_matrix() repxG = paths.ReplicaNetworkGraph(repx_net) repxG.draw('spring') import openpathsampling.visualize as vis #reload(vis) from IPython.display import SVG tree = vis.PathTree( [step for step in storage.steps if not isinstance(step.change, paths.EmptyMoveChange)], vis.ReplicaEvolution(replica=3, accepted=False) ) tree.options.css['width'] = 'inherit' SVG(tree.svg()) decorrelated = tree.generator.decorrelated print ("We have " + str(len(decorrelated)) + " decorrelated trajectories.") <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Create some data to learn from Step2: Select activation and loss functions Step3: Initialize the weights Step5: Forward propagation Step7: Backward propagation Step8: Train the neural network! Step10: Validate
<ASSISTANT_TASK:> Python Code: import numpy as np # Set up the data and network: n_outputs = 5 # We're attempting to learn XOR in this example, so our inputs and outputs will be the same. n_hidden_units = 10 # We'll use a single hidden layer with this number of hidden units in it. n_obs = 500 # How many observations of the XOR input to output vector will we use for learning? # How quickly do we want to update our weights? learning_rate = 0.1 # How many times will we try to use each observation to improve the weights? epochs = 10 # Think of this as iterations if you like. # Set random seed so that the exercise works out the same way for everyone: np.random.seed(42) # Create the inputs: training_vectors = np.random.binomial(1, 0.5, (n_obs, n_outputs)) # Each row is a binary vector to learn from. print('One instance with ' + str(n_outputs) + ' features: ' + str(training_vectors[0])) # Create the correct XOR outputs (t is for target): xor_training_vectors = training_vectors ^ 1 # This is just XOR, everything is deterministic. print('Correct label (simply XOR): ' + str(xor_training_vectors[0])) # Define an activation function and its derivative: def activ(x): # We'll use a sigmoid function: return 1.0 / (1.0 + np.exp(-x)) def activ_prime(x): # Derivative of the sigmoid function: # d/dx 1 / (1 + exp(-x)) = -(-exp(-x)) * (1 + exp(-x)) ^ (-2) return np.exp(-x) / ((1.0 + np.exp(-x)) ** 2) # Define a loss function and its derivative wrt predictions: def loss(prediction, truth): # We'll choose cross entropy loss for this demo. return -np.mean(truth * np.log(prediction) + (1 - truth) * np.log(1 - prediction)) def loss_prime(prediction, truth): # Derivative (elementwise) of cross entropy loss wrt prediction. 
# d/dy (-t log(y) - (1-t)log(1-y)) = -t/y + (1-t)/(1-y) = -(t-ty-y+ty) = y - t return prediction - truth # Simplest way to initialize is to choose weights uniformly at random between -1 and 1: weights1 = np.random.uniform(low=-1, high=1, size=(n_outputs, n_hidden_units)) weights2 = np.random.uniform(low=-1, high=1, size=(n_hidden_units, n_outputs)) # Note: there are much better ways to initialize weights, but our goal is simplicity here. def forward_prop(x, w1, w2): Implements forward propagation. Args: x: the input vector. w1: first set of weights mapping the input to layer 1. w2: second set of weights mapping layer 1 to layer 2. Returns: u1: unactivated unit values from layer 1 in forward prop u2: unactivated unit values from layer 2 in forward prop a1: activated unit values from layer 1 in forward prop a2: activated unit values from layer 2 in forward prop lab: the output label u1 = np.dot(x, w1) # u for unactivated weighted sum unit (other authors might prefer to call it z) a1 = activ(u1) # a for activated unit u2 = np.dot(a1, w2) a2 = activ(u2) # Let's output predicted labels too, but converting continuous a2 to binary: lab = (a2 > 0.5).astype(int) return u1, u2, a1, a2, lab def back_prop(x, t, u1, u2, a1, a2, w1, w2): Implements backward propagation. Args: x: the input vector t: the desired output vector. u1: unactivated unit values from layer 1 in forward prop u2: unactivated unit values from layer 2 in forward prop a1: activated unit values from layer 1 in forward prop a2: activated unit values from layer 2 in forward prop w1: first set of weights mapping the input to layer 1. w2: second set of weights mapping layer 1 to layer 2. 
Returns: d1: gradients for weights w1, used for updating w1 d2: gradients for weights w2, used for updating w2 e2 = loss_prime(a2, t) # e is for error; this is the "error" effect in the final layer d2 = np.outer(a1, e2) # d is for delta; this is the gradient value for updating weights w2 e1 = np.dot(w2, e2) * activ_prime(u1) # e is for error d1 = np.outer(x, e1) # d is for delta; this is the gradient update for the first set of weights w1 return d1, d2 # We only need the updates outputted # Train for epoch in range(epochs): loss_tracker = [] for i in range(training_vectors.shape[0]): # Input one obs at a time to become x = training_vectors[i] (inputs) and t = xor_training_vectors[i] (targets) # Forward propagation: u1, u2, a1, a2, labels = forward_prop(training_vectors[i], weights1, weights2) # Backward propagation: d1, d2 = back_prop(training_vectors[i], xor_training_vectors[i], u1, u2, a1, a2, weights1, weights2) # Update the weights: weights1 -= learning_rate * d1 weights2 -= learning_rate * d2 loss_tracker.append(loss(prediction=a2, truth=xor_training_vectors[i])) print('Epoch: %d, Average Loss: %.8f' % (epoch+1, np.mean(loss_tracker))) # Print performance to screen: def get_performance(n_valid, w1, w2): Computes performance and prints it to screen. Args: n_valid: number of validation instances we'd like to simulate. w1: first set of weights mapping the input to layer 1. w2: second set of weights mapping layer 1 to layer 2.
Returns: None flawless_tracker = [] validation_vectors = np.random.binomial(1, 0.5, (n_valid, n_outputs)) xor_validation_vectors = validation_vectors ^ 1 for i in range(n_valid): u1, u2, a1, a2, labels = forward_prop(validation_vectors[i], w1, w2) if i < 3: print('********') print('Challenge ' + str(i + 1) + ': ' + str(validation_vectors[i])) print('Predicted ' + str(i + 1) + ': ' + str(labels)) print('Correct ' + str(i + 1) + ': ' + str(xor_validation_vectors[i])) instance_score = (np.array_equal(labels, xor_validation_vectors[i])) flawless_tracker.append(instance_score) print('\nProportion of flawless instances on ' + str(n_valid) + ' new examples: ' + str(round(100*np.mean(flawless_tracker),0)) + '%') get_performance(5000, weights1, weights2) <END_TASK>
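An aside on the backpropagation task above (not part of the original notebook): a standard way to validate hand-derived gradients like `back_prop` is a centered finite-difference check. The sketch below applies that idea to a single sigmoid unit with the same cross-entropy loss used in the task; the variable names (`grad`, `num`) and the single-layer setup are illustrative simplifications, not the task's two-layer network.

```python
# Gradient check sketch: analytic gradient of cross-entropy loss for one
# sigmoid unit vs. a centered finite-difference approximation.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def loss(w, x, t):
    y = sigmoid(np.dot(x, w))
    return -np.mean(t * np.log(y) + (1 - t) * np.log(1 - y))

def grad(w, x, t):
    # Analytic gradient: dL/dw = x^T (y - t) / m for sigmoid + cross-entropy.
    y = sigmoid(np.dot(x, w))
    return np.dot(x.T, y - t) / len(t)

rng = np.random.default_rng(1)
x = rng.normal(size=(20, 3))
t = rng.integers(0, 2, size=20).astype(float)
w = rng.normal(size=3)

eps = 1e-6
num = np.zeros_like(w)
for i in range(len(w)):
    wp, wm = w.copy(), w.copy()
    wp[i] += eps
    wm[i] -= eps
    num[i] = (loss(wp, x, t) - loss(wm, x, t)) / (2 * eps)

# The two gradients should agree to within finite-difference error.
print(np.max(np.abs(num - grad(w, x, t))))
```

The same check, applied per weight matrix, is how one would gain confidence in the `d1`/`d2` updates of the full two-layer network before training.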
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Visualise the data Step2: 2 - Data preprocessing Step3: The second part of the training set is a 5000-dimensional vector y that contains labels for the training set. Step4: One problem Step5: One hot encoding Step6: Split into train and test sets Step7: 3 - Deep Neural Network for Image Classification Step8: Build the 3-layer neural network Step10: Now we can put together all the functions to build a 3-layer neural network with this structure Step11: We will now train the model as a 3-layer neural network. Step13: 4. Results analysis Step15: 5 - Initializing parameters Step17: 6 - L2 Regularization Step19: Of course, because we changed the cost, we have to change backward propagation as well! Step21: Putting it all together Step22: Let's check the new accuracy values Step26: 7 - Dropout
<ASSISTANT_TASK:> Python Code: from scipy.io import loadmat dataset = loadmat('../datasets/mnist-data.mat') # comes as dictionary dataset.keys() import matplotlib.pyplot as plt %matplotlib inline import numpy as np # Example of a picture indexImage = 4000 # try any index between 0 and 4999. They are sorted, from 1 to 10 (=0) renderImage = np.reshape(dataset['X'][indexImage], (20,20)) labelImage = dataset['y'][indexImage] plt.imshow(renderImage, cmap='gray') print ("Label: this is a ", labelImage) X = dataset['X'] # the images X.shape y = dataset['y'] # the labels y.shape y[499] list_y = [0 if i == 10 else i for i in y] # apply to each item in y y = np.asarray(list_y) y = y.reshape(-1,1) y.shape y[0:10] # verify that the label is now zero n_classes = 10 # 10 digits = 10 classes/labels # np.eye(n) creates an identity matrix of shape (n,n) OHE_y = np.eye(n_classes)[y.reshape(-1)] OHE_y.shape OHE_y[1000] # this is the new encoding for e.g. label = 2 from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, OHE_y, test_size=0.2, random_state=7) input_layer_size = X.shape[1] num_px = np.sqrt(input_layer_size) # 400 = 20x20 Input Images of Digits n_y = y_train.shape[1] m_train = X_train.shape[0] m_test = X_test.shape[0] print ("Dataset dimensions:") print ("Number of training examples = " + str(m_train)) print ("Number of testing examples = " + str(m_test)) print ("Height/Width of each image: num_px = " + str(num_px)) print ("Each image is of size: <" + str(num_px) + ", " + str(num_px) + ">") print ("X train shape: " + str(X_train.shape)) print ("y train shape: " + str(y_train.shape)) ### CONSTANTS DEFINING THE MODEL #### # we define a neural network with total 3 layers, x, y and 1 hidden: n_h = 25 nn_layers = [input_layer_size, n_h, n_y] # length is 3 (layers) from nn_helpers import * # automatically reload the imported module in case of changes %load_ext autoreload %autoreload 2 nn_layers np.random.seed(1) train_set_x = 
X_train.T train_set_x.shape # y is the original output array, with labels # train_set_y is that set, one-hot-encoded train_set_y = y_train.T train_set_y.shape # FUNCTION: L_layer_model def simpleNN_model(X, Y, layers_dims, learning_rate = 0.0075, num_iterations = 3000, print_cost=False): Implements a L-layer neural network: [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID. Arguments: X -- data, numpy array of shape (number of examples, num_px * num_px) Y -- true "label" vector (containing 0 or 1), of shape (10, number of examples) layers_dims -- list containing the input size and each layer size, of length (number of layers + 1). learning_rate -- learning rate of the gradient descent update rule num_iterations -- number of iterations of the optimisation loop print_cost -- if True, it prints the cost every 200 steps Returns: parameters -- parameters learnt by the model. They can then be used to predict. costs = [] # keep track of cost iterations2cost = 200 # Print the cost every these iterations # Parameters initialization. parameters = initialise_parameters(layers_dims) # Loop (gradient descent) for i in range(0, num_iterations): # Forward propagation: [LINEAR -> RELU]*(L-1) -> LINEAR -> SIGMOID. AL, caches = L_model_forward(X, parameters) # Compute cost. cost = compute_cost(AL, Y) # Backward propagation. grads = L_model_backward(AL, Y, caches) # Update parameters. 
parameters = update_parameters(parameters, grads, learning_rate) # Print the cost every iterations2cost training example if print_cost and i % iterations2cost == 0: print ("Cost after iteration %i: %f" %(i, cost)) if print_cost and i % iterations2cost == 0: costs.append(cost) if print_cost: # plot the cost fig, ax = plt.subplots(1,1) plt.plot(np.squeeze(costs)) ticks = ax.get_xticks() ax.locator_params(axis='x', nticks=len(costs)) ax.set_xticklabels([int(x*iterations2cost) for x in ticks]) plt.ylabel('cost') plt.xlabel('iterations') plt.title("Learning rate =" + str(learning_rate)) plt.show() return parameters fit_params = simpleNN_model(train_set_x, train_set_y, nn_layers, learning_rate = 0.3, num_iterations = 3500, print_cost = True) def predict(X, yOHE, parameters): This function is used to predict the results of a L-layer neural network. It also checks them against the true labels and print the accuracy Arguments: X -- data set of examples you would like to label yOHE -- the true labels, as multi-class vectors parameters -- parameters of the trained model Returns: p -- predictions (the label) for the given dataset X m = X.shape[1] nLabels = yOHE.shape[1] n = len(parameters) // 2 # number of layers in the neural network p = np.zeros((1, m)) # the predicted output, initialised to zero y = np.zeros((1, m)) # the actual output # Forward propagation probas, caches = L_model_forward(X, parameters) # probas is a matrix of shape [nLabels, m] (one-hot-encoded) assert (probas.shape[1] == m) for i in range(0, m): # convert probs to label predictions: # just take the label with max prob p[0,i] = np.argmax(probas[:,i]) # convert expected results into label: takes the value with one y[0,i] = np.argmax(yOHE[:,i]) # print results print("Accuracy: " + str(np.sum((p == y)/m))) return p print ("On the training set:") predictions_train = predict(train_set_x, train_set_y, fit_params) print ("On the test set:") predictions_test = predict(X_test.T, y_test.T, fit_params) # FUNCTION: 
initialize_parameters def initialise_parameters_he(layer_dims): Arguments: layer_dims -- python array (list) containing the dimensions of each layer in our network Returns: parameters -- python dictionary containing the parameters "W1", "b1", ..., "WL", "bL": Wl -- weight matrix of shape (layer_dims[l], layer_dims[l-1]) bl -- bias vector of shape (layer_dims[l], 1) parameters = {} L = len(layer_dims) # number of layers in the network for l in range(1, L): parameters['W' + str(l)] = np.random.randn(layer_dims[l], layer_dims[l-1])*np.sqrt(2./layer_dims[l-1]) parameters['b' + str(l)] = np.zeros((layer_dims[l], 1)) # unit tests assert(parameters['W' + str(l)].shape == (layer_dims[l], layer_dims[l-1])) assert(parameters['b' + str(l)].shape == (layer_dims[l], 1)) return parameters # GRADED FUNCTION: compute_cost_with_regularization def compute_cost_with_regularisation(A3, Y, parameters, lambdaHyper): Implement the cost function with L2 regularization. See formula (2) above. Arguments: A3 -- post-activation, output of forward propagation, of shape (output size, number of examples) Y -- "true" labels vector, of shape (output size, number of examples) parameters -- python dictionary containing parameters of the model lambdaHyper -- the lambda regularisation hyper-parameter. 
Returns: cost - value of the regularized loss function (formula (2)) # This gives you the cross-entropy part of the cost cross_entropy_cost = compute_cost(A3, Y) sum_regularization_cost = 0 m = Y.shape[1] L = len(parameters) // 2 # number of layers (2 because we have W and b) for i in range(1, L+1): W_i = parameters['W' + str(i)] sum_regularization_cost += np.sum(np.square(W_i)) regularization_cost = (1/m)*(lambdaHyper/2)*(sum_regularization_cost) cost = cross_entropy_cost + regularization_cost return cost def compute_cost_with_regularisation_test_case(): np.random.seed(1) Y_assess = np.array([[1, 1, 0, 1, 0]]) W1 = np.random.randn(2, 3) b1 = np.random.randn(2, 1) W2 = np.random.randn(3, 2) b2 = np.random.randn(3, 1) W3 = np.random.randn(1, 3) b3 = np.random.randn(1, 1) parameters = {"W1": W1, "b1": b1, "W2": W2, "b2": b2, "W3": W3, "b3": b3} a3 = np.array([[ 0.40682402, 0.01629284, 0.16722898, 0.10118111, 0.40682402]]) cost = compute_cost_with_regularisation(a3, Y_assess, parameters, lambdaHyper = 0.1) np.testing.assert_approx_equal(cost, 1.78649, significant=5) return "OK" compute_cost_with_regularisation_test_case() def backward_propagation_with_regularisation(X, Y, Yhat, caches, lambdaHyper): Implements the backward propagation of our baseline model to which we added an L2 regularization. 
Arguments: X -- input dataset, of shape (input size, number of examples) Yhat -- "true" labels vector, of shape (output size, number of examples) caches -- cache output from forward_propagation() lambdaHyper -- regularization hyperparameter, scalar Returns: gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables m = X.shape[1] L = len(caches) # the number of layers gradients = {} last_layer_cache = caches[L-1] ((A, W, b), Z) = last_layer_cache assert (Yhat.shape == Y.shape) dZ = Yhat - Y gradients["dZ" + str(L)] = dZ for i in reversed(range(L-1)): current_layer_cache = caches[i] ((A_prev, W_prev, b_prev), Z_prev) = current_layer_cache dW_entropy = 1./m * np.dot(dZ, A.T) dW_reg = (lambdaHyper/m)*W dW = dW_entropy + dW_reg db = 1./m * np.sum(dZ, axis=1, keepdims = True) dA_prev = np.dot(W.T, dZ) dZ_prev = np.multiply(dA_prev, np.int64(A > 0)) gradients["dW" + str(i + 2)] = dW gradients["db" + str(i + 2)] = db gradients["dA" + str(i + 1)] = dA_prev gradients["dZ" + str(i + 1)] = dZ_prev ((A, W, b), Z) = ((A_prev, W_prev, b_prev), Z_prev) dZ = dZ_prev # finally add the gradients for the first layer dW_prev = 1./m * np.dot(dZ_prev, X.T) + (lambdaHyper/m)*W_prev db_prev = 1./m * np.sum(dZ_prev, axis=1, keepdims = True) gradients["dW1"] = dW_prev gradients["db1"] = db_prev return gradients def NN_model(X, Y, layers_dims, learning_rate = 0.0075, num_iterations = 3000, print_cost=False, lambdaHyper = 0, init="standard"): Implements a L-layer neural network: [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID. Arguments: X -- data, numpy array of shape (number of examples, num_px * num_px * 3) Y -- true "label" vector (containing 0 if cat, 1 if non-cat), of shape (1, number of examples) layers_dims -- list containing the input size and each layer size, of length (number of layers + 1). 
learning_rate -- learning rate of the gradient descent update rule num_iterations -- number of iterations of the optimization loop print_cost -- if True, it prints the cost every 100 steps lambdaHyper -- regularisation hyperparameter, scalar init -- type of initialisation: standard or He. Returns: parameters -- parameters learnt by the model. They can then be used to predict. costs = [] # keep track of cost iterations2cost = 200 # Print the cost every these iterations # Parameters initialization. if init == "he": parameters = initialise_parameters_he(layers_dims) else: parameters = initialise_parameters(layers_dims) # Loop (gradient descent) for i in range(0, num_iterations): # Forward propagation: [LINEAR -> RELU]*(L-1) -> LINEAR -> SIGMOID. Yhat, caches = L_model_forward(X, parameters) # Compute cost. if lambdaHyper == 0: cost = compute_cost(Yhat, Y) else: cost = compute_cost_with_regularisation(Yhat, Y, parameters, lambdaHyper) # Backward propagation. if lambdaHyper == 0: grads = L_model_backward(Yhat, Y, caches) else: grads = backward_propagation_with_regularisation(X, Y, Yhat, caches, lambdaHyper) # Update parameters. 
parameters = update_parameters(parameters, grads, learning_rate) # Print the cost every iterations2cost training example if print_cost and i % iterations2cost == 0: print ("Cost after iteration %i: %f" %(i, cost)) if print_cost and i % iterations2cost == 0: costs.append(cost) if print_cost: # plot the cost fig, ax = plt.subplots(1,1) plt.plot(np.squeeze(costs)) ticks = ax.get_xticks() ax.locator_params(axis='x', nticks=len(costs)) ax.set_xticklabels([int(x*iterations2cost) for x in ticks]) plt.ylabel('cost') plt.xlabel('iterations') plt.title("Learning rate =" + str(learning_rate)) plt.show() return parameters fit_params_reg = NN_model(train_set_x, train_set_y, nn_layers, learning_rate = 0.3, num_iterations = 3500, print_cost = True, lambdaHyper = 5, init="he") print ("On the training set:") predictions_train = predict(train_set_x, train_set_y, fit_params_reg) print ("On the test set:") predictions_test = predict(X_test.T, y_test.T, fit_params_reg) def forward_propagation_with_dropout(X, parameters, keep_prob = 0.5): Implements the forward propagation: LINEAR -> RELU + DROPOUT -> LINEAR -> RELU + DROPOUT -> LINEAR -> SIGMOID. Arguments: X -- input dataset, of shape (2, number of examples) parameters -- python dictionary containing the parameters of a 3-layers network. keep_prob - probability of keeping a neuron active during drop-out, scalar Returns: A2 -- last activation value, output of the forward propagation, of shape (1,1) cache -- tuple, information stored for computing the backward propagation L = len(parameters) // 2 # number of layers in the neural network # retrieve parameters W1 = parameters["W1"] b1 = parameters["b1"] W2 = parameters["W2"] b2 = parameters["b2"] # LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID Z1 = np.dot(W1, X) + b1 A1,cache_temp = relu(Z1) D1 = np.random.rand(A1.shape[0], A1.shape[1]) # Step 1: initialize matrix D1 = np.random.rand(..., ...) 
D1 = D1 < keep_prob # Step 2: convert entries of D1 to 0 or 1 (using keep_prob as the threshold) A1 = A1*D1 # Step 3: shut down some neurons of A1 A1 = A1 / keep_prob # Step 4: scale the value of neurons that haven't been shut down Z2 = np.dot(W2, A1) + b2 A2, cache_temp = sigmoid(Z2) caches = (Z1, D1, A1, W1, b1, Z2, A2, W2, b2) return A2, caches def backward_propagation_with_dropout(X, Y, cache, keep_prob): Implements the backward propagation of our baseline model to which we added dropout. Arguments: X -- input dataset, of shape (2, number of examples) Y -- "true" labels vector, of shape (output size, number of examples) cache -- cache output from forward_propagation_with_dropout() keep_prob - probability of keeping a neuron active during drop-out, scalar Returns: gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables m = X.shape[1] (Z1, D1, A1, W1, b1, Z2, A2, W2, b2) = cache dZ2 = A2 - Y dW2 = 1./m * np.dot(dZ2, A1.T) db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True) dA1 = np.dot(W2.T, dZ2) dA1 = dA1*D1 # Step 1: Apply mask D1 to shut down the same neurons as during the forward propagation dA1 = dA1 / keep_prob # Step 2: Scale the value of neurons that haven't been shut down dZ1 = np.multiply(dA1, np.int64(A1 > 0)) dW1 = 1./m * np.dot(dZ1, X.T) db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True) gradients = {"dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1, "dZ1": dZ1, "dW1": dW1, "db1": db1} return gradients def NN_model_drop(X, Y, layers_dims, learning_rate = 0.0075, num_iterations = 3000, print_cost=False, keep_prob = 1, init="standard"): Implements a L-layer neural network: [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID. Arguments: X -- data, numpy array of shape (number of examples, num_px * num_px * 3) Y -- true "label" vector (containing 0 if cat, 1 if non-cat), of shape (1, number of examples) layers_dims -- list containing the input size and each layer size, of length (number of layers + 1). 
learning_rate -- learning rate of the gradient descent update rule num_iterations -- number of iterations of the optimization loop print_cost -- if True, it prints the cost every 100 steps Returns: parameters -- parameters learnt by the model. They can then be used to predict. costs = [] # keep track of cost iterations2cost = 200 # Print the cost every these iterations # Parameters initialization. if init == "he": parameters = initialise_parameters_he(layers_dims) else: parameters = initialise_parameters(layers_dims) # Loop (gradient descent) for i in range(0, num_iterations): # Forward propagation: [LINEAR -> RELU]*(L-1) -> LINEAR -> SIGMOID. Yhat, caches = forward_propagation_with_dropout(X, parameters, keep_prob) # Compute cost. cost = compute_cost(Yhat, Y) # Backward propagation. grads = backward_propagation_with_dropout(X, Y, caches, keep_prob) # Update parameters. parameters = update_parameters(parameters, grads, learning_rate) # Print the cost every iterations2cost training example if print_cost and i % iterations2cost == 0: print ("Cost after iteration %i: %f" %(i, cost)) if print_cost and i % iterations2cost == 0: costs.append(cost) if print_cost: # plot the cost fig, ax = plt.subplots(1,1) plt.plot(np.squeeze(costs)) ticks = ax.get_xticks() ax.locator_params(axis='x', nticks=len(costs)) ax.set_xticklabels([int(x*iterations2cost) for x in ticks]) plt.ylabel('cost') plt.xlabel('iterations') plt.title("Learning rate =" + str(learning_rate)) plt.show() return parameters fit_params_drop = NN_model_drop(train_set_x, train_set_y, nn_layers, learning_rate = 0.3, num_iterations = 3500, print_cost = True, keep_prob = 0.8, init="he") print ("On the train set:") predictions_train = predict(train_set_x, train_set_y, fit_params_drop) print ("On the test set:") predictions_test = predict(X_test.T, y_test.T, fit_params_drop) <END_TASK>
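The inverted-dropout scaling used in the forward/backward dropout functions above can be isolated into a tiny standalone sketch (the function names, shapes, and RNG seed here are illustrative, not part of the notebook):

```python
import numpy as np

def dropout_forward(A, keep_prob, rng):
    # Inverted dropout: zero out roughly (1 - keep_prob) of the units, then
    # divide the survivors by keep_prob so the expected activation is unchanged.
    D = rng.random(A.shape) < keep_prob      # boolean keep-mask
    return (A * D) / keep_prob, D

def dropout_backward(dA, D, keep_prob):
    # The gradient flows only through the units that were kept,
    # with the same 1/keep_prob rescaling as in the forward pass.
    return (dA * D) / keep_prob

rng = np.random.default_rng(0)
A = np.ones((4, 5))
A_drop, D = dropout_forward(A, keep_prob=0.8, rng=rng)
dA = dropout_backward(np.ones_like(A), D, keep_prob=0.8)
```

Dropped units are exactly zero in both passes, and kept units are scaled by 1/keep_prob = 1.25, which is why no extra scaling is needed at test time.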
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK>
<USER_TASK:>
Description:
Step1: These look good, but I want the format of my ratings matrix to be one row per user and one column per movie. I'll pivot ratings_df to get that and call the new variable R.
Step2: The last thing I need to do is de-mean the data (normalize by each user's mean) and convert it from a dataframe to a numpy array.
Step3: Singular Value Decomposition
Step4: Making Predictions from the Decomposed Matrices
Step5: Making Movie Recommendations
Step6: So, how'd I do?
Step7: Pretty cool! These look like pretty good recommendations. It's also good to see that, though I didn't actually use the genre of the movie as a feature, the truncated matrix factorization features "picked up" on the underlying tastes and preferences of the user. I've recommended some film-noirs, crime, drama, and war movies - all of which were genres of some of this user's top rated movies.
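The de-meaning described in Step2 is easy to sanity-check on a toy matrix (the numbers below are made up, not MovieLens data): each row of the demeaned matrix averages to zero, which is what lets the per-user mean be added back after the SVD reconstruction.

```python
import numpy as np

R_demo = np.array([[5., 3., 0.],
                   [4., 0., 0.],
                   [1., 1., 5.]])
user_means = R_demo.mean(axis=1, keepdims=True)  # one mean per user (row)
R_demeaned_demo = R_demo - user_means            # subtract it from every rating
restored = R_demeaned_demo + user_means          # adding it back recovers R
```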
<ASSISTANT_TASK:> Python Code: import pandas as pd import numpy as np import os import matplotlib.pyplot as plt !wget http://files.grouplens.org/datasets/movielens/ml-100k.zip !ls !unzip ml-100k folder = "ml-100k" !wget http://files.grouplens.org/datasets/movielens/ml-1m.zip !unzip ml-1m !ls folder = "ml-1m" ratings_list = [ [int(x) for x in i.strip().split("::")] for i in open(os.path.join(folder, "ratings.dat"), "r").readlines() ] users_list = [i.strip().split("::") for i in open(os.path.join(folder, "users.dat"), "r").readlines()] movies_list = [ i.strip().split("::") for i in open(os.path.join(folder, "movies.dat"), "r", encoding="latin-1").readlines() ] ratings_df = pd.DataFrame(ratings_list, columns=["UserID", "MovieID", "Rating", "Timestamp"], dtype=int) movies_df = pd.DataFrame(movies_list, columns=["MovieID", "Title", "Genres"]) movies_df["MovieID"] = movies_df["MovieID"].apply(pd.to_numeric) movies_df.head() def get_movie_name(movies_df, movie_id_str): ndx = movies_df["MovieID"] == int(movie_id_str) name = movies_df["Title"][ndx].to_numpy()[0] return name print(get_movie_name(movies_df, 1)) print(get_movie_name(movies_df, "527")) def get_movie_genres(movies_df, movie_id_str): ndx = movies_df["MovieID"] == int(movie_id_str) name = movies_df["Genres"][ndx].to_numpy()[0] return name print(get_movie_genres(movies_df, 1)) print(get_movie_genres(movies_df, "527")) ratings_df.head() R_df = ratings_df.pivot(index="UserID", columns="MovieID", values="Rating").fillna(0) R_df.head() R = R_df.to_numpy() user_ratings_mean = np.mean(R, axis=1) R_demeaned = R - user_ratings_mean.reshape(-1, 1) print(R.shape) print(np.count_nonzero(R)) from scipy.sparse.linalg import svds U, sigma, Vt = svds(R_demeaned, k=50) sigma = np.diag(sigma) latents = [10, 20, 50] errors = [] for latent_dim in latents: U, sigma, Vt = svds(R_demeaned, k=latent_dim) sigma = np.diag(sigma) Rpred = np.dot(np.dot(U, sigma), Vt) + user_ratings_mean.reshape(-1, 1) Rpred[Rpred < 0] = 0 Rpred[Rpred > 5] = 
5 err = np.sqrt(np.sum(np.power(R - Rpred, 2))) errors.append(err) print(errors) all_user_predicted_ratings = np.dot(np.dot(U, sigma), Vt) + user_ratings_mean.reshape(-1, 1) preds_df = pd.DataFrame(all_user_predicted_ratings, columns=R_df.columns) preds_df.head() def recommend_movies(preds_df, userID, movies_df, original_ratings_df, num_recommendations=5): # Get and sort the user's predictions user_row_number = userID - 1 # UserID starts at 1, not 0 sorted_user_predictions = preds_df.iloc[user_row_number].sort_values(ascending=False) # UserID starts at 1 # Get the user's data and merge in the movie information. user_data = original_ratings_df[original_ratings_df.UserID == (userID)] user_full = user_data.merge(movies_df, how="left", left_on="MovieID", right_on="MovieID").sort_values( ["Rating"], ascending=False ) print("User {0} has already rated {1} movies.".format(userID, user_full.shape[0])) print("Recommending highest {0} predicted ratings movies not already rated.".format(num_recommendations)) # Recommend the highest predicted rating movies that the user hasn't seen yet. 
    recommendations = (
        movies_df[~movies_df["MovieID"].isin(user_full["MovieID"])]
        .merge(pd.DataFrame(sorted_user_predictions).reset_index(), how="left", left_on="MovieID", right_on="MovieID")
        .rename(columns={user_row_number: "Predictions"})
        .sort_values("Predictions", ascending=False)
        .iloc[:num_recommendations, :-1]
    )

    return user_full, recommendations

already_rated, predictions = recommend_movies(preds_df, 837, movies_df, ratings_df, 10)
already_rated.head(10)
df = already_rated[["MovieID", "Title", "Genres"]].copy()
df.head(10)
predictions
Rpred = all_user_predicted_ratings
Rpred[Rpred < 0] = 0
Rpred[Rpred > 5] = 5
print(np.linalg.norm(R - Rpred, ord="fro"))
print(np.sqrt(np.sum(np.power(R - Rpred, 2))))
import matplotlib.pyplot as plt

nusers = 20
nitems = 20
plt.figure(figsize=(10, 10))
plt.imshow(R[:nusers, :nitems], cmap="jet")
plt.xlabel("item")
plt.ylabel("user")
plt.title("True ratings")
plt.colorbar()

plt.figure(figsize=(10, 10))
plt.imshow(Rpred[:nusers, :nitems], cmap="jet")
plt.xlabel("item")
plt.ylabel("user")
plt.title("Predicted ratings")
plt.colorbar()
<END_TASK>
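The notebook uses scipy.sparse.linalg.svds for the truncated decomposition; the same rank-k behaviour can be sketched with NumPy's full SVD on a toy matrix (illustrative data and helper name, not the ratings matrix). Keeping more singular values can only lower the Frobenius reconstruction error:

```python
import numpy as np

rng = np.random.default_rng(42)
M = rng.random((6, 8))                     # stand-in for R_demeaned

def rank_k_error(M, k):
    # Full SVD, then keep only the k largest singular values.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
    return np.linalg.norm(M - approx)      # Frobenius norm of the residual

err3 = rank_k_error(M, 3)
err5 = rank_k_error(M, 5)
```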
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK>
<USER_TASK:>
Description:
Step1: 2. For a bit of EDA
Step2: 3. Create a function to compute support for a given itemset from a transaction
Step3: 4. Write an apriori function that handles the pruning and generation of frequent itemsets
Step4: 5. Create a function that uses the frequent itemsets to generate association rules
Step5: 6. Create a scatter plot to analyze the association rules
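The pruning mentioned in Step3 rests on the downward-closure property: a (k+1)-itemset can only be frequent if every one of its k-subsets is frequent. A minimal, self-contained sketch of that candidate filter (toy itemsets, illustrative names):

```python
from itertools import combinations

# 2-itemsets assumed frequent (toy data).
frequent_2 = {frozenset(p) for p in [('A', 'B'), ('A', 'C'), ('B', 'C'), ('B', 'D')]}
items = ['A', 'B', 'C', 'D']

# Keep a 3-itemset as a candidate only if all of its 2-subsets are frequent.
candidates_3 = [
    frozenset(c) for c in combinations(items, 3)
    if all(frozenset(pair) in frequent_2 for pair in combinations(c, 2))
]
```

Only {A, B, C} survives: {A, B, D} and {A, C, D} contain the infrequent pair {A, D}, and {B, C, D} contains {C, D}.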
<ASSISTANT_TASK:> Python Code: import numpy as np import pandas as pd import matplotlib.pyplot as plt %matplotlib inline with open('itemsets.dat') as f: transactions = [] for row in f: transactions.append(row.strip().split(',')) transactions[0:5] items_counts = {} for tran in transactions: for item in tran: if item in items_counts: items_counts[item] += 1 else: items_counts[item] = 1 plt.bar(items_counts.keys(), items_counts.values()); itemsets_sizes = {} for tran in transactions: tran = frozenset(tran) if tran in itemsets_sizes: itemsets_sizes[tran] += 1 else: itemsets_sizes[tran] = 1 fig, ax = plt.subplots(figsize=(16, 8)) ax.bar(itemsets_sizes.keys(), itemsets_sizes.values()) ax.set_xticklabels([list(itemset) for itemset in itemsets_sizes.keys()], rotation='vertical'); def support_count(trans, itemset): count = 0 for tran in trans: if set(itemset).issubset(set(tran)): count += 1 return count print(support_count(transactions, ['A'])) print(support_count(transactions, ['A', 'C'])) print(support_count(transactions, ['F'])) def support(trans, itemset): return support_count(trans, itemset) / len(trans) print(support(transactions, ['A'])) print(support(transactions, ['A', 'C'])) print(support(transactions, ['F'])) def generate_supsets(items, sets): sups = {} for s in sets: for item in items: sup = frozenset(s.union(item)) if sup != s and sup not in sups: sups[sup] = 1 return list(sups.keys()) def apriori(trans, minsupp): frequent_itemsets = {} # get items list items = list(set([item for tran in trans for item in tran])) # initialize list of itemsets to check curr_sets = items # iterate till max itemset length for i in range(len(items)): # print(i, curr_sets) # initialize candidates itemsets for generation of supsets next_sets = [] # initialize current iteration unfrequent itemsets list unfrequent = [] for s in curr_sets: supp = support(trans, s) # print(s, supp) # if we are over minsupp add itemset to frequent list and to supsets generation candidates if supp >= 
minsupp: frequent_itemsets[frozenset(s)] = supp next_sets.append(frozenset(s)) #else add to unfrequent list else: unfrequent.append(frozenset(s)) # if this is the first iteration update items list in order to optimize supsets generation if i == 0: items = [item for item in items if item not in unfrequent] # generate supsets and exclude those that contain an unfrequent itemset curr_sets = generate_supsets(items, next_sets) for unfr in unfrequent: curr_sets = [s for s in curr_sets if not unfr.issubset(s)] # print(next_sets) # print(unfrequent) # print(curr_sets) if len(curr_sets) == 0: break return frequent_itemsets apriori(transactions, 0.1) def calc_metrics(trans, X, Y): supp = support(trans, X.union(Y)) supp_X = support(trans, X) supp_Y = support(trans, Y) conf = supp / supp_X lift = supp / (supp_X*supp_Y) return conf, lift, supp def generate_subsets(items, sets): subs = {} for s in sets: for item in items: sup = frozenset(s.difference(item)) if sup != s and sup not in subs: subs[sup] = 1 return list(subs.keys()) def association_rules(trans, frequent, minconf): rules = {} # get items list items = list(frequent) # initialize list of antecedents to check curr_antecedents = generate_subsets(items, [frequent]) # iterate till itemset length - 1 for i in range(len(frequent)-1): # print(i, curr_rules) # initialize candidates itemsets for generation of subsets next_antecedents = [] # initialize current iteration unfrequent itemsets list unconfident = [] for ant in curr_antecedents: cons = set(items).difference(ant) conf, lift, supp = calc_metrics(trans, ant, cons) # print(ant, conf) # if we are over minconf add rule to rules list and to subsets generation candidates if conf >= minconf: rule_ant = ', '.join('{}'.format(a) for a in (list(ant))) rule_cons = ', '.join('{}'.format(c) for c in (list(cons))) rule = '{{{}}}->{{{}}}'.format(rule_ant, rule_cons) metrics = {} metrics['confidence'] = conf metrics['lift'] = lift metrics['support'] = supp rules[rule] = metrics 
next_antecedents.append(frozenset(ant)) #else add to unconfident list else: unconfident.append(frozenset(ant)) # generate subsets and exclude those that are contained in an unconfident rule curr_antecedents = generate_subsets(items, next_antecedents) for uncf in unconfident: curr_antecedents = [ant for ant in curr_antecedents if not uncf.issuperset(ant)] # print(next_antecedents) # print(unconfident) # print(curr_antecedents) if len(curr_antecedents) == 0: break return rules association_rules(transactions, frozenset({'A', 'B', 'C'}), 0.2) def get_all_rules(trans, frequent_itemsets, minconf): rules_dict = {} for frequent in frequent_itemsets: rules = association_rules(trans, frequent, minconf) rules_dict.update(rules) rules_df = pd.DataFrame(rules_dict).T.reset_index() rules_df['antecedent'] = rules_df['index'].apply(lambda x: x.split('->')[0]) rules_df['consequent'] = rules_df['index'].apply(lambda x: x.split('->')[1]) rules_df.drop('index', axis=1, inplace=True) return rules_dict, rules_df frequent_itemsets = apriori(transactions, 0.1) rules_dict, rules_df = get_all_rules(transactions, frequent_itemsets, 0.3) rules_dict rules_df x = rules_df['antecedent'].astype('category') y = rules_df['consequent'].astype('category') fig, ax = plt.subplots(figsize=(12, 10)) sct = ax.scatter(x.cat.codes, y.cat.codes, s=rules_df['support']*10000, c=rules_df['lift']) ax.xaxis.set_major_locator(plt.MaxNLocator(len(x.unique()))) ax.xaxis.set_ticklabels(np.append([''], x.cat.categories)) ax.yaxis.set_major_locator(plt.MaxNLocator(len(y.unique()))) ax.yaxis.set_ticklabels(np.append([''], y.cat.categories)) fig.colorbar(sct); <END_TASK>
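As a hand-checkable sanity check on the metric formulas in `calc_metrics` above (support; confidence = supp(X∪Y)/supp(X); lift = supp(X∪Y)/(supp(X)·supp(Y))), here is a five-transaction toy example, separate from the notebook's data:

```python
transactions_demo = [['A', 'B', 'C'], ['A', 'B'], ['A', 'C'], ['B', 'C'], ['A', 'B', 'C']]

def supp(itemset):
    # Fraction of transactions containing every item of the itemset.
    return sum(set(itemset) <= set(t) for t in transactions_demo) / len(transactions_demo)

# Rule {A} -> {B}
support_AB = supp(['A', 'B'])                       # 3 of 5 transactions
confidence = support_AB / supp(['A'])               # 0.6 / 0.8
lift = support_AB / (supp(['A']) * supp(['B']))     # 0.6 / (0.8 * 0.8)
```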
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: We will be running a rodded 8x8 assembly with Gadolinia fuel pins. Let's create all the elemental data we would need for this case. Step2: With the elements we defined, we will now create the materials we will use later. Step3: We can now create a Materials object that can be exported to an actual XML file. Step4: Now let's move on to the geometry. The first step is to define some constants which will be used to set our dimensions and then we can start creating the surfaces and regions for the problem, the 8x8 lattice, the rods and the control blade. Step5: With the surfaces defined, we can now construct regions with these surfaces before we use those to create cells Step6: We will begin building the 8x8 assembly. To do that we will have to build the cells and universe for each pin type (fuel, gadolinia-fuel, and water). Step7: Let's use this pin information to create our 8x8 assembly. Step8: So far we have the rods and water within the assembly , but we still need the control blade and the water which fills the rest of the space. We will create those cells now Step9: OpenMC requires that there is a "root" universe. Let us create our root universe and fill it with the cells just defined. Step10: What do you do after you create your model? Check it! We will use the plotting capabilities of the Python API to do this for us. Step11: Looks pretty good to us! Step12: With the geometry and materials finished, we now just need to define simulation parameters, including how to run the model and what we want to learn from the model (i.e., define the tallies). We will start with our simulation parameters in the next block. Step13: Create an MGXS Library Step14: Next, we will instantiate an openmc.mgxs.Library for the energy groups with our the problem geometry. 
This library will use the default setting of isotropically-weighting the multi-group cross sections.
Step15: Now, we must specify to the Library which types of cross sections to compute. OpenMC's multi-group mode can accept isotropic flux-weighted cross sections or angle-dependent cross sections, as well as supporting anisotropic scattering represented by either Legendre polynomials, histogram, or tabular angular distributions.
Step16: Now we must specify the type of domain over which we would like the Library to compute multi-group cross sections. The domain type corresponds to the type of tally filter to be used in the tallies created to compute multi-group cross sections. At the present time, the Library supports "material", "cell", "universe", and "mesh" domain types.
Step17: Now we will set the scattering treatment that we wish to use.
Step18: Ok, we made our isotropic library with histogram-scattering!
Step19: Now that our libraries have been set up, let's make sure they contain the types of cross sections which meet the needs of OpenMC's multi-group solver. Note that this step is done automatically when writing the Multi-Group Library file later in the process (as part of mgxs_lib.write_mg_library()), but it is good practice to also run this before spending all the time running OpenMC to generate the cross sections.
Step20: Lastly, we use our two Library objects to construct the tallies needed to compute all of the requested multi-group cross sections in each domain.
Step21: The tallies within the libraries can now be exported to a "tallies.xml" input file for OpenMC.
Step22: In addition, we instantiate a fission rate mesh tally for eventual comparison of results.
Step23: Time to run the calculation and get our results!
Step24: To make the files available and not be over-written when running the multi-group calculation, we will now rename the statepoint and summary files.
Step25: Tally Data Processing
Step26: In addition to the statepoint file, our simulation also created a summary file which encapsulates information about the materials and geometry. This is necessary for the openmc.Library to properly process the tally data. We first create a Summary object and link it with the statepoint. Normally this would not need to be performed, but since we have renamed our summary file to avoid conflicts with the Multi-Group calculation's summary file, we will load this in explicitly.
Step27: The statepoint is now ready to be analyzed. To create our libraries we simply have to load the tallies from the statepoint into each Library and our MGXS objects will compute the cross sections for us under-the-hood.
Step28: The next step will be to prepare the input for OpenMC to use our newly created multi-group data.
Step29: Next, we can make the changes we need to the settings file.
Step30: Let's clear up the tallies file so it doesn't include all the extra tallies for re-generating a multi-group library.
Step31: Before running the calculation let's look at our meshed model. It might not be interesting, but let's take a look anyways.
Step32: So, we see a 10x10 grid with a different color for every material, sounds good!
Step33: Before we go to the angle-dependent case, let's save the StatePoint and Summary files so they don't get over-written.
Step34: Angle-Dependent Multi-Group OpenMC Calculation
Step35: At this point, the problem is set up and we can run the multi-group calculation.
Step36: Results Comparison
Step37: Eigenvalue Comparison
Step38: Let's compare the eigenvalues in units of pcm.
Step39: We see a large reduction in error by switching to the usage of angle-dependent multi-group cross sections!
Step40: With this colormap, dark blue is the lowest power and dark red is the highest power.
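The multi-group cross sections that openmc.mgxs.Library tallies are, conceptually, flux-weighted averages over each energy group. A NumPy-only sketch of that collapse with made-up numbers (this is not the OpenMC API, just the underlying arithmetic):

```python
import numpy as np

fine_xs = np.array([10.0, 8.0, 2.0, 1.0])   # fine-group cross sections (toy values)
flux    = np.array([1.0, 3.0, 4.0, 2.0])    # fine-group scalar flux weights (toy values)
coarse_map = [slice(0, 2), slice(2, 4)]     # 4 fine groups -> 2 coarse groups

coarse_xs = np.array([
    (fine_xs[g] * flux[g]).sum() / flux[g].sum()  # reaction-rate preserving average
    for g in coarse_map
])
```

The weighting preserves the total reaction rate within each coarse group, which is the same principle behind the tallies the Library sets up.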
<ASSISTANT_TASK:> Python Code: import os import matplotlib.pyplot as plt import numpy as np import openmc %matplotlib inline # Instantiate some elements elements = {} for elem in ['H', 'O', 'U', 'Zr', 'Gd', 'B', 'C', 'Fe']: elements[elem] = openmc.Element(elem) materials = {} # Fuel materials['Fuel'] = openmc.Material(name='Fuel') materials['Fuel'].set_density('g/cm3', 10.32) materials['Fuel'].add_element(elements['O'], 2) materials['Fuel'].add_element(elements['U'], 1, enrichment=3.) # Gadolinia bearing fuel materials['Gad'] = openmc.Material(name='Gad') materials['Gad'].set_density('g/cm3', 10.23) materials['Gad'].add_element(elements['O'], 2) materials['Gad'].add_element(elements['U'], 1, enrichment=3.) materials['Gad'].add_element(elements['Gd'], .02) # Zircaloy materials['Zirc2'] = openmc.Material(name='Zirc2') materials['Zirc2'].set_density('g/cm3', 6.55) materials['Zirc2'].add_element(elements['Zr'], 1) # Boiling Water materials['Water'] = openmc.Material(name='Water') materials['Water'].set_density('g/cm3', 0.6) materials['Water'].add_element(elements['H'], 2) materials['Water'].add_element(elements['O'], 1) # Boron Carbide for the Control Rods materials['B4C'] = openmc.Material(name='B4C') materials['B4C'].set_density('g/cm3', 0.7 * 2.52) materials['B4C'].add_element(elements['B'], 4) materials['B4C'].add_element(elements['C'], 1) # Steel materials['Steel'] = openmc.Material(name='Steel') materials['Steel'].set_density('g/cm3', 7.75) materials['Steel'].add_element(elements['Fe'], 1) # Instantiate a Materials object materials_file = openmc.Materials(materials.values()) # Export to "materials.xml" materials_file.export_to_xml() # Set constants for the problem and assembly dimensions fuel_rad = 0.53213 clad_rad = 0.61341 Np = 8 pin_pitch = 1.6256 length = float(Np + 2) * pin_pitch assembly_width = length - 2. * pin_pitch rod_thick = 0.47752 / 2. + 0.14224 rod_span = 7. 
* pin_pitch

surfaces = {}

# Create boundary planes to surround the geometry
surfaces['Global x-'] = openmc.XPlane(x0=0., boundary_type='reflective')
surfaces['Global x+'] = openmc.XPlane(x0=length, boundary_type='reflective')
surfaces['Global y-'] = openmc.YPlane(y0=0., boundary_type='reflective')
surfaces['Global y+'] = openmc.YPlane(y0=length, boundary_type='reflective')

# Create cylinders for the fuel and clad
surfaces['Fuel Radius'] = openmc.ZCylinder(R=fuel_rad)
surfaces['Clad Radius'] = openmc.ZCylinder(R=clad_rad)

surfaces['Assembly x-'] = openmc.XPlane(x0=pin_pitch)
surfaces['Assembly x+'] = openmc.XPlane(x0=length - pin_pitch)
surfaces['Assembly y-'] = openmc.YPlane(y0=pin_pitch)
surfaces['Assembly y+'] = openmc.YPlane(y0=length - pin_pitch)

# Set surfaces for the control blades
surfaces['Top Blade y-'] = openmc.YPlane(y0=length - rod_thick)
surfaces['Top Blade x-'] = openmc.XPlane(x0=pin_pitch)
surfaces['Top Blade x+'] = openmc.XPlane(x0=rod_span)
surfaces['Left Blade x+'] = openmc.XPlane(x0=rod_thick)
surfaces['Left Blade y-'] = openmc.YPlane(y0=length - rod_span)
surfaces['Left Blade y+'] = openmc.YPlane(y0=9. * pin_pitch)

# Set regions for geometry building
regions = {}
regions['Global'] = \
    (+surfaces['Global x-'] & -surfaces['Global x+'] &
     +surfaces['Global y-'] & -surfaces['Global y+'])
regions['Assembly'] = \
    (+surfaces['Assembly x-'] & -surfaces['Assembly x+'] &
     +surfaces['Assembly y-'] & -surfaces['Assembly y+'])
regions['Fuel'] = -surfaces['Fuel Radius']
regions['Clad'] = +surfaces['Fuel Radius'] & -surfaces['Clad Radius']
regions['Water'] = +surfaces['Clad Radius']
regions['Top Blade'] = \
    (+surfaces['Top Blade y-'] & -surfaces['Global y+']) & \
    (+surfaces['Top Blade x-'] & -surfaces['Top Blade x+'])
regions['Top Steel'] = \
    (+surfaces['Global x-'] & -surfaces['Top Blade x-']) & \
    (+surfaces['Top Blade y-'] & -surfaces['Global y+'])
regions['Left Blade'] = \
    (+surfaces['Left Blade y-'] & -surfaces['Left Blade y+']) & \
    (+surfaces['Global x-'] & -surfaces['Left Blade x+'])
regions['Left Steel'] = \
    (+surfaces['Left Blade y+'] & -surfaces['Top Blade y-']) & \
    (+surfaces['Global x-'] & -surfaces['Left Blade x+'])
regions['Corner Blade'] = \
    regions['Left Steel'] | regions['Top Steel']
regions['Water Fill'] = \
    regions['Global'] & ~regions['Assembly'] & \
    ~regions['Top Blade'] & ~regions['Left Blade'] & \
    ~regions['Corner Blade']

universes = {}
cells = {}
for name, mat in zip(['Fuel Pin', 'Gd Pin'],
                     [materials['Fuel'], materials['Gad']]):
    universes[name] = openmc.Universe(name=name)

    cells[name] = openmc.Cell(name=name)
    cells[name].fill = mat
    cells[name].region = regions['Fuel']
    universes[name].add_cell(cells[name])

    cells[name + ' Clad'] = openmc.Cell(name=name + ' Clad')
    cells[name + ' Clad'].fill = materials['Zirc2']
    cells[name + ' Clad'].region = regions['Clad']
    universes[name].add_cell(cells[name + ' Clad'])

    cells[name + ' Water'] = openmc.Cell(name=name + ' Water')
    cells[name + ' Water'].fill = materials['Water']
    cells[name + ' Water'].region = regions['Water']
    universes[name].add_cell(cells[name + ' Water'])

universes['Hole'] = openmc.Universe(name='Hole')
cells['Hole'] = openmc.Cell(name='Hole')
cells['Hole'].fill = materials['Water']
universes['Hole'].add_cell(cells['Hole'])

# Create fuel assembly Lattice
universes['Assembly'] = openmc.RectLattice(name='Assembly')
universes['Assembly'].pitch = (pin_pitch, pin_pitch)
universes['Assembly'].lower_left = [pin_pitch, pin_pitch]

f = universes['Fuel Pin']
g = universes['Gd Pin']
h = universes['Hole']
lattices = [[f, f, f, f, f, f, f, f],
            [f, f, f, f, f, f, f, f],
            [f, f, f, g, f, g, f, f],
            [f, f, g, h, h, f, g, f],
            [f, f, f, h, h, f, f, f],
            [f, f, g, f, f, f, g, f],
            [f, f, f, g, f, g, f, f],
            [f, f, f, f, f, f, f, f]]

# Store the array of lattice universes
universes['Assembly'].universes = lattices

cells['Assembly'] = openmc.Cell(name='Assembly')
cells['Assembly'].fill = universes['Assembly']
cells['Assembly'].region = regions['Assembly']

# The top portion of the blade, poisoned with B4C
cells['Top Blade'] = openmc.Cell(name='Top Blade')
cells['Top Blade'].fill = materials['B4C']
cells['Top Blade'].region = regions['Top Blade']

# The left portion of the blade, poisoned with B4C
cells['Left Blade'] = openmc.Cell(name='Left Blade')
cells['Left Blade'].fill = materials['B4C']
cells['Left Blade'].region = regions['Left Blade']

# The top-left corner portion of the blade, with no poison
cells['Corner Blade'] = openmc.Cell(name='Corner Blade')
cells['Corner Blade'].fill = materials['Steel']
cells['Corner Blade'].region = regions['Corner Blade']

# Water surrounding all other cells and our assembly
cells['Water Fill'] = openmc.Cell(name='Water Fill')
cells['Water Fill'].fill = materials['Water']
cells['Water Fill'].region = regions['Water Fill']

# Create root Universe
universes['Root'] = openmc.Universe(name='root universe', universe_id=0)
universes['Root'].add_cells([cells['Assembly'],
                             cells['Top Blade'], cells['Corner Blade'],
                             cells['Left Blade'], cells['Water Fill']])

universes['Root'].plot(origin=(length / 2., length / 2., 0.),
                       pixels=(500, 500), width=(length, length),
                       color_by='material',
                       colors={materials['Fuel']: (1., 0., 0.),
                               materials['Gad']: (1., 1., 0.),
                               materials['Zirc2']: (0.5, 0.5, 0.5),
                               materials['Water']: (0.0, 0.0, 1.0),
                               materials['B4C']: (0.0, 0.0, 0.0),
                               materials['Steel']: (0.4, 0.4, 0.4)})

# Create Geometry and set root universe
geometry = openmc.Geometry(universes['Root'])

# Export to "geometry.xml"
geometry.export_to_xml()

# OpenMC simulation parameters
batches = 1000
inactive = 20
particles = 1000

# Instantiate a Settings object
settings_file = openmc.Settings()
settings_file.batches = batches
settings_file.inactive = inactive
settings_file.particles = particles
settings_file.output = {'tallies': False}
settings_file.verbosity = 4

# Create an initial uniform spatial source distribution over fissionable zones
bounds = [pin_pitch, pin_pitch, 10, length - pin_pitch, length - pin_pitch, 10]
uniform_dist = openmc.stats.Box(bounds[:3], bounds[3:], only_fissionable=True)
settings_file.source = openmc.source.Source(space=uniform_dist)

# Export to "settings.xml"
settings_file.export_to_xml()

# Instantiate a 2-group EnergyGroups object
groups = openmc.mgxs.EnergyGroups()
groups.group_edges = np.array([0., 0.625, 20.0e6])

# Initialize a 2-group Isotropic MGXS Library for OpenMC
iso_mgxs_lib = openmc.mgxs.Library(geometry)
iso_mgxs_lib.energy_groups = groups

# Specify multi-group cross section types to compute
iso_mgxs_lib.mgxs_types = ['total', 'absorption', 'nu-fission', 'fission',
                           'nu-scatter matrix', 'multiplicity matrix', 'chi']

# Instantiate a tally Mesh
mesh = openmc.Mesh()
mesh.type = 'regular'
mesh.dimension = [10, 10]
mesh.lower_left = [0., 0.]
mesh.upper_right = [length, length]

# Specify a "mesh" domain type for the cross section tally filters
iso_mgxs_lib.domain_type = "mesh"

# Specify the mesh over which to compute multi-group cross sections
iso_mgxs_lib.domains = [mesh]

# Set the scattering format to histogram and then define the number of bins
# Avoid a warning that corrections don't make sense with histogram data
iso_mgxs_lib.correction = None

# Set the histogram data
iso_mgxs_lib.scatter_format = 'histogram'
iso_mgxs_lib.histogram_bins = 11

# Let's repeat all of the above for an angular MGXS library so we can gather
# that in the same continuous-energy calculation
angle_mgxs_lib = openmc.mgxs.Library(geometry)
angle_mgxs_lib.energy_groups = groups
angle_mgxs_lib.mgxs_types = ['total', 'absorption', 'nu-fission', 'fission',
                             'nu-scatter matrix', 'multiplicity matrix', 'chi']
angle_mgxs_lib.domain_type = "mesh"
angle_mgxs_lib.domains = [mesh]
angle_mgxs_lib.correction = None
angle_mgxs_lib.scatter_format = 'histogram'
angle_mgxs_lib.histogram_bins = 11

# Set the angular bins to 8
angle_mgxs_lib.num_azimuthal = 8

# Check the libraries - if no errors are raised, then the library is satisfactory
iso_mgxs_lib.check_library_for_openmc_mgxs()
angle_mgxs_lib.check_library_for_openmc_mgxs()

# Construct all tallies needed for the multi-group cross section library
iso_mgxs_lib.build_library()
angle_mgxs_lib.build_library()

# Create a "tallies.xml" file for the MGXS Library
tallies_file = openmc.Tallies()
iso_mgxs_lib.add_to_tallies_file(tallies_file, merge=True)
angle_mgxs_lib.add_to_tallies_file(tallies_file, merge=True)

# Instantiate tally Filter
mesh_filter = openmc.MeshFilter(mesh)

# Instantiate the Tally
tally = openmc.Tally(name='mesh tally')
tally.filters = [mesh_filter]
tally.scores = ['fission']

# Add tally to collection
tallies_file.append(tally, merge=True)

# Export all tallies to a "tallies.xml" file
tallies_file.export_to_xml()

# Run OpenMC
openmc.run()

# Move the StatePoint File
ce_spfile = './statepoint_ce.h5'
os.rename('statepoint.' + str(batches) + '.h5', ce_spfile)

# Move the Summary file
ce_sumfile = './summary_ce.h5'
os.rename('summary.h5', ce_sumfile)

# Load the statepoint file, but not the summary file, as it is a different
# filename than expected
sp = openmc.StatePoint(ce_spfile, autolink=False)
su = openmc.Summary(ce_sumfile)
sp.link_with_summary(su)

# Initialize MGXS Library with OpenMC statepoint data
iso_mgxs_lib.load_from_statepoint(sp)
angle_mgxs_lib.load_from_statepoint(sp)

# Allow the API to create our Library, materials, and geometry file
iso_mgxs_file, materials_file, geometry_file = iso_mgxs_lib.create_mg_mode()

# Tell the materials file what we want to call the multi-group library
materials_file.cross_sections = 'mgxs.h5'

# Write our newly-created files to disk
iso_mgxs_file.export_to_hdf5('mgxs.h5')
materials_file.export_to_xml()
geometry_file.export_to_xml()

# Set the energy mode
settings_file.energy_mode = 'multi-group'

# Export to "settings.xml"
settings_file.export_to_xml()

# Create a "tallies.xml" file for the MGXS Library
tallies_file = openmc.Tallies()

# Add our fission rate mesh tally
tallies_file.append(tally)

# Export to "tallies.xml"
tallies_file.export_to_xml()

geometry_file.root_universe.plot(origin=(length / 2., length / 2., 0.),
                                 pixels=(300, 300), width=(length, length),
                                 color_by='material')

# Execute the Isotropic MG OpenMC Run
openmc.run()

# Move the StatePoint File
iso_mg_spfile = './statepoint_mg_iso.h5'
os.rename('statepoint.' + str(batches) + '.h5', iso_mg_spfile)

# Move the Summary file
iso_mg_sumfile = './summary_mg_iso.h5'
os.rename('summary.h5', iso_mg_sumfile)

# Let's repeat for the angle-dependent case
angle_mgxs_lib.load_from_statepoint(sp)
angle_mgxs_file, materials_file, geometry_file = angle_mgxs_lib.create_mg_mode()
angle_mgxs_file.export_to_hdf5()

# Execute the angle-dependent OpenMC Run
openmc.run()

# Load the isotropic statepoint file
iso_mgsp = openmc.StatePoint(iso_mg_spfile, autolink=False)
iso_mgsum = openmc.Summary(iso_mg_sumfile)
iso_mgsp.link_with_summary(iso_mgsum)

# Load the angle-dependent statepoint file
angle_mgsp = openmc.StatePoint('statepoint.' + str(batches) + '.h5')

ce_keff = sp.k_combined
iso_mg_keff = iso_mgsp.k_combined
angle_mg_keff = angle_mgsp.k_combined

# Find eigenvalue bias
iso_bias = 1.0E5 * (ce_keff - iso_mg_keff)
angle_bias = 1.0E5 * (ce_keff - angle_mg_keff)

print('Isotropic to CE Bias [pcm]: {0:1.1f}'.format(iso_bias.nominal_value))
print('Angle to CE Bias [pcm]: {0:1.1f}'.format(angle_bias.nominal_value))

sp_files = [sp, iso_mgsp, angle_mgsp]
titles = ['Continuous-Energy', 'Isotropic Multi-Group',
          'Angle-Dependent Multi-Group']
fiss_rates = []

fig = plt.figure(figsize=(12, 6))
for i, (case, title) in enumerate(zip(sp_files, titles)):
    # Get our mesh tally information
    mesh_tally = case.get_tally(name='mesh tally')
    fiss_rates.append(mesh_tally.get_values(scores=['fission']))

    # Reshape the array
    fiss_rates[-1].shape = mesh.dimension

    # Normalize the fission rates
    fiss_rates[-1] /= np.mean(fiss_rates[-1][fiss_rates[-1] > 0.])

    # Set 0s to NaNs so they show as white
    fiss_rates[-1][fiss_rates[-1] == 0.] = np.nan

    fig = plt.subplot(1, len(titles), i + 1)

    # Plot only the fueled regions
    plt.imshow(fiss_rates[-1][1:-1, 1:-1], cmap='jet', origin='lower',
               vmin=0.4, vmax=4.)
    plt.title(title + '\nFission Rates')

# Calculate and plot the ratios of MG to CE for each of the 2 MG cases
ratios = []
fig, axes = plt.subplots(figsize=(12, 6), nrows=1, ncols=2)
for i, (case, title, axis) in enumerate(zip(sp_files[1:], titles[1:],
                                            axes.flat)):
    # Get our ratio relative to the CE (in fiss_rates[0])
    ratios.append(np.divide(fiss_rates[i + 1], fiss_rates[0]))

    # Plot only the fueled regions
    im = axis.imshow(ratios[-1][1:-1, 1:-1], cmap='bwr', origin='lower',
                     vmin=0.9, vmax=1.1)
    axis.set_title(title + '\nFission Rates Relative\nto Continuous-Energy')

# Add a color bar
fig.subplots_adjust(right=0.8)
cbar_ax = fig.add_axes([0.85, 0.15, 0.05, 0.7])
fig.colorbar(im, cax=cbar_ax)
<END_TASK>
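The eigenvalue-bias arithmetic in the record above scales a k-eff difference into pcm (per cent mille, 1 pcm = 1e-5 in k-eff). A minimal standalone sketch of that step; the k-eff values below are made-up placeholders for illustration, not results from the simulation:

```python
# Illustrative only: hypothetical k-eff values, not outputs of the OpenMC run above
ce_keff = 1.10012   # continuous-energy eigenvalue (placeholder)
mg_keff = 1.09985   # multi-group eigenvalue (placeholder)

# 1 pcm corresponds to 1e-5 in k-eff, so scale the difference by 1e5
bias_pcm = 1.0e5 * (ce_keff - mg_keff)
print('Bias [pcm]: {0:1.1f}'.format(bias_pcm))
```

In the record itself the eigenvalues are `ufloat`-like objects carrying uncertainty, which is why `.nominal_value` is used when printing.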
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step 1
Step1: Step 2
Step2: Step 3
Step3: Step 4
Step4: Step 5
<ASSISTANT_TASK:> Python Code: from pprint import pprint
from time import sleep
from pynq import PL
from pynq import Overlay
from pynq.drivers import Trace_Buffer
from pynq.iop import Pmod_OLED
from pynq.iop import PMODA
from pynq.iop import PMODB
from pynq.iop import ARDUINO

ol = Overlay("base.bit")
ol.download()
pprint(PL.ip_dict)

oled = Pmod_OLED(PMODB)

tr_buf = Trace_Buffer(PMODB, "spi", samplerate=20000000)

# Start the trace buffer
tr_buf.start()

# Write characters
oled.write("1 2 3 4 5 6")

# Stop the trace buffer
tr_buf.stop()

# Configuration for PMODB
start = 20000
stop = 40000
tri_sel = [0x80000 << 32, 0x40000 << 32, 0x20000 << 32, 0x10000 << 32]
tri_0 = [0x8 << 32, 0x4 << 32, 0x2 << 32, 0x1 << 32]
tri_1 = [0x800 << 32, 0x400 << 32, 0x200 << 32, 0x100 << 32]
mask = 0x0

# Parsing and decoding
tr_buf.parse("spi_trace.csv", start, stop, mask, tri_sel, tri_0, tri_1)
tr_buf.set_metadata(['CLK', 'NC', 'MOSI', 'CS'])
tr_buf.decode("spi_trace.pd", options=':wordsize=8:cpol=0:cpha=0')

s0 = 10000
s1 = 15000
tr_buf.display(s0, s1)
<END_TASK>
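The hex constants passed to `tr_buf.parse` in the record above are 64-bit masks built by shifting a single flag bit from the low word into the upper 32-bit word. A small standalone sketch of that construction (the bit values come from the record; the list-comprehension form is only an illustration):

```python
# Rebuild the trace-buffer tristate masks from the record above: each low-word
# flag bit is shifted by 32 so it lands in the upper word of a 64-bit mask
tri_sel = [bit << 32 for bit in (0x80000, 0x40000, 0x20000, 0x10000)]
tri_0 = [bit << 32 for bit in (0x8, 0x4, 0x2, 0x1)]
tri_1 = [bit << 32 for bit in (0x800, 0x400, 0x200, 0x100)]

# 0x80000 is bit 19, so after the shift the first mask is a single bit at position 51
print([hex(m) for m in tri_sel])
```

Writing the masks this way makes the single-bit-per-entry structure explicit instead of repeating `<< 32` on each literal.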