Dataset columns: Unnamed: 0 (int64, values 0 to 15.9k), cleaned_code (string, lengths 67 to 124k), cleaned_prompt (string, lengths 168 to 30.3k).
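The loading sketch below is an editor's illustration, not part of the dataset itself: the file name, the CSV format, and the pandas calls are all assumptions, included only to make the column schema and string-length statistics above concrete.

import pandas as pd

# Assumed file name and format; adjust to the actual export
# (e.g. pd.read_parquet for a parquet dump).
df = pd.read_csv("cleaned_code_text_pairs.csv")

# Column dtypes: 'Unnamed: 0' should come back as int64,
# the two text columns as object (strings).
print(df.dtypes)

# Reproduce the string-length ranges quoted above
# (roughly 67-124k and 168-30.3k characters).
for col in ("cleaned_code", "cleaned_prompt"):
    lengths = df[col].str.len()
    print(col, int(lengths.min()), int(lengths.max()))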
3,200
<ASSISTANT_TASK:> Python Code: mod = pysces.model('lin4_fb') mod.doLoad() # this method call is necessary to ensure that future `doLoad` method calls are executed correctly sc = psctb.Symca(mod) sc.do_symca() sc.cc_results sc.cc_results.ccJR1_R4 sc.cc_results.ccJR1_R4.expression sc.cc_results.ccJR1_R4.numerator sc.cc_results.ccJR1_R4.denominator sc.cc_results.ccJR1_R4.value sc.cc_results.ccJR1_R4.CP001 sc.cc_results.ccJR1_R4.CP002 sc.cc_results.ccJR1_R4.CP001.percentage sc.cc_results.ccJR1_R4.CP002.percentage mod.doLoad() # mod.Vf_4 has a default value of 50 mod.Vf_4 = 0.1 # calculating new steady state mod.doMca() # now ccJR1_R4 and its two control patterns should have new values sc.cc_results.ccJR1_R4 # original value was 0.000 sc.cc_results.ccJR1_R4.CP001 # original value was 0.964 sc.cc_results.ccJR1_R4.CP002 # resetting to default Vf_4 value and recalculating mod.doLoad() mod.doMca() # This path leads to the provided layout file path_to_layout = '~/Pysces/psc/lin4_fb.dict' # Correct path depending on platform - necessary for platform independent scripts if platform == 'win32': path_to_layout = psctb.utils.misc.unix_to_windows_path(path_to_layout) else: path_to_layout = path.expanduser(path_to_layout) sc.cc_results.ccJR1_R4.highlight_patterns(height = 350, pos_dic=path_to_layout) # clicking on CP002 shows that this control pattern representing # the chain of effects passing through the feedback loop # is totally responsible for the observed control coefficient value. sc.cc_results.ccJR1_R4.highlight_patterns(height = 350, pos_dic=path_to_layout) # clicking on CP001 shows that this control pattern representing # the chain of effects of the main pathway does not contribute # at all to the control coefficient value. sc.cc_results.ccJR1_R4.highlight_patterns(height = 350, pos_dic=path_to_layout) percentage_scan_data = sc.cc_results.ccJR1_R4.do_par_scan(parameter='Vf_4', scan_range=numpy.logspace(-1,3,200), scan_type='percentage') percentage_scan_plot = percentage_scan_data.plot() # set the x-axis to a log scale percentage_scan_plot.ax.semilogx() # enable all the lines percentage_scan_plot.toggle_category('Control Patterns', True) percentage_scan_plot.toggle_category('CP001', True) percentage_scan_plot.toggle_category('CP002', True) # display the plot percentage_scan_plot.interact() value_scan_data = sc.cc_results.ccJR1_R4.do_par_scan(parameter='Vf_4', scan_range=numpy.logspace(-1,3,200), scan_type='value') value_scan_plot = value_scan_data.plot() # set the x-axis to a log scale value_scan_plot.ax.semilogx() # enable all the lines value_scan_plot.toggle_category('Control Coefficients', True) value_scan_plot.toggle_category('ccJR1_R4', True) value_scan_plot.toggle_category('Control Patterns', True) value_scan_plot.toggle_category('CP001', True) value_scan_plot.toggle_category('CP002', True) # display the plot value_scan_plot.interact() # Create a variant of mod with 'C' fixed at its steady-state value mod_fixed_S3 = psctb.modeltools.fix_metabolite_ss(mod, 'S3') # Instantiate Symca object the 'internal_fixed' argument set to 'True' sc_fixed_S3 = psctb.Symca(mod_fixed_S3,internal_fixed=True) # Run the 'do_symca' method (internal_fixed can also be set to 'True' here) sc_fixed_S3.do_symca() sc_fixed_S3.cc_results_1 sc_fixed_S3.cc_results_0 sc.save_results() # the following code requires `pandas` to run import pandas as pd # load csv file at default path results_path = '~/Pysces/lin4_fb/symca/cc_summary_0.csv' # Correct path depending on platform - necessary for platform independent scripts if 
platform == 'win32': results_path = psctb.utils.misc.unix_to_windows_path(results_path) else: results_path = path.expanduser(results_path) saved_results = pd.read_csv(results_path) # show first 20 lines saved_results.head(n=20) # saving session sc.save_session() # create new Symca object and load saved results new_sc = psctb.Symca(mod) new_sc.load_session() # display saved results new_sc.cc_results <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Additionally Symca has the following arguments Step2: do_symca has the following arguments Step3: Inspecting an individual control coefficient yields a symbolic expression together with a value Step4: In the above example, the expression of the control coefficient consists of two numerator terms and a common denominator shared by all the control coefficient expression signified by $\Sigma$. Step5: Numerator expression (as a SymPy expression) Step6: Denominator expression (as a SymPy expression) Step7: Value (as a float64) Step8: Additional, less pertinent, attributes are abs_value, latex_expression, latex_expression_full, latex_numerator, latex_name, name and denominator_object. Step9: Each control pattern is numbered arbitrarily starting from 001 and has similar properties as the control coefficient object (i.e., their expression, numerator, value etc. can also be accessed). Step10: Unlike conventional percentages, however, these values are calculated as percentage contribution towards the sum of the absolute values of all the control coefficients (rather than as the percentage of the total control coefficient value). This is done to account for situations where control pattern values have different signs. Step11: Control pattern graphs Step12: highlight_patterns has the following optional arguments Step13: Parameter scans Step14: As previously described, these data can be displayed using ScanFig by calling the plot method of percentage_scan_data. Furthermore, lines can be enabled/disabled using the toggle_category method of ScanFig or by clicking on the appropriate buttons Step15: A value plot can similarly be generated and displayed. In this case, however, an additional line indicating $C^{J}_{4}$ will also be present Step16: Fixed internal metabolites Step17: The normal sc_fixed_S3.cc_results object is still generated, but will be invalid for the fixed model. Each additional cc_results_N contains control coefficient expressions that have the same common denominator and corresponds to a specific reaction block. These cc_results_N objects are numbered arbitrarily, but consistantly accross different sessions. Each results object accessed and utilised in the same way as the normal cc_results object. Step18: cc_results_0 contains the control coefficients describing the sensitivity of flux and concentrations of either reaction block towards reactions in the other reaction block (i.e., all control coefficients here should be zero). Due to the fact that the S3 demand block consists of a single reaction, this object also contains the control coefficient of R4 on J_R4, which is equal to one. This results object is useful confirming that the results were generated as expected. Step19: If the demand block of S3 in this pathway consisted of multiple reactions, rather than a single reaction, there would have been an additional cc_results_N object containing the control coefficients of that reaction block. Step20: save_results has the following optional arguments Step21: Saving/loading sessions
3,201
<ASSISTANT_TASK:> Python Code: from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets("MNIST_data/", reshape=False) X_train, y_train = mnist.train.images, mnist.train.labels X_validation, y_validation = mnist.validation.images, mnist.validation.labels X_test, y_test = mnist.test.images, mnist.test.labels assert(len(X_train) == len(y_train)) assert(len(X_validation) == len(y_validation)) assert(len(X_test) == len(y_test)) print() print("Image Shape: {}".format(X_train[0].shape)) print() print("Training Set: {} samples".format(len(X_train))) print("Validation Set: {} samples".format(len(X_validation))) print("Test Set: {} samples".format(len(X_test))) import numpy as np # Pad images with 0s X_train = np.pad(X_train, ((0,0),(2,2),(2,2),(0,0)), 'constant') X_validation = np.pad(X_validation, ((0,0),(2,2),(2,2),(0,0)), 'constant') X_test = np.pad(X_test, ((0,0),(2,2),(2,2),(0,0)), 'constant') print("Updated Image Shape: {}".format(X_train[0].shape)) import random import matplotlib.pyplot as plt %matplotlib inline index = random.randint(0, len(X_train)) image = X_train[index].squeeze() plt.figure(figsize=(1,1)) plt.imshow(image, cmap="gray") print(y_train[index]) from sklearn.utils import shuffle X_train, y_train = shuffle(X_train, y_train) import tensorflow as tf EPOCHS = 10 BATCH_SIZE = 64 from tensorflow.contrib.layers import flatten def LeNet(x): # Hyperparameters mu = 0 sigma = 0.1 # SOLUTION: Layer 1: Convolutional. Input = 32x32x1. Output = 28x28x6. conv1_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 1, 6), mean = mu, stddev = sigma)) conv1_b = tf.Variable(tf.zeros(6)) conv1 = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b # SOLUTION: Activation. conv1 = tf.nn.relu(conv1) # SOLUTION: Pooling. Input = 28x28x6. Output = 14x14x6. conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID') # SOLUTION: Layer 2: Convolutional. Output = 10x10x16. conv2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 6, 16), mean = mu, stddev = sigma)) conv2_b = tf.Variable(tf.zeros(16)) conv2 = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b # SOLUTION: Activation. conv2 = tf.nn.relu(conv2) # SOLUTION: Pooling. Input = 10x10x16. Output = 5x5x16. conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID') # SOLUTION: Flatten. Input = 5x5x16. Output = 400. fc0 = flatten(conv2) # SOLUTION: Layer 3: Fully Connected. Input = 400. Output = 120. fc1_W = tf.Variable(tf.truncated_normal(shape=(400, 120), mean = mu, stddev = sigma)) fc1_b = tf.Variable(tf.zeros(120)) fc1 = tf.matmul(fc0, fc1_W) + fc1_b # SOLUTION: Activation. fc1 = tf.nn.relu(fc1) # SOLUTION: Layer 4: Fully Connected. Input = 120. Output = 84. fc2_W = tf.Variable(tf.truncated_normal(shape=(120, 84), mean = mu, stddev = sigma)) fc2_b = tf.Variable(tf.zeros(84)) fc2 = tf.matmul(fc1, fc2_W) + fc2_b # SOLUTION: Activation. fc2 = tf.nn.relu(fc2) # SOLUTION: Layer 5: Fully Connected. Input = 84. Output = 10. 
fc3_W = tf.Variable(tf.truncated_normal(shape=(84, 10), mean = mu, stddev = sigma)) fc3_b = tf.Variable(tf.zeros(10)) logits = tf.matmul(fc2, fc3_W) + fc3_b return logits x = tf.placeholder(tf.float32, (None, 32, 32, 1)) y = tf.placeholder(tf.int32, (None)) with tf.device('/cpu:0'): one_hot_y = tf.one_hot(y, 10) rate = 0.001 logits = LeNet(x) cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits, one_hot_y) loss_operation = tf.reduce_mean(cross_entropy) optimizer = tf.train.AdamOptimizer(learning_rate = rate) training_operation = optimizer.minimize(loss_operation) correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1)) accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) saver = tf.train.Saver() def evaluate(X_data, y_data): num_examples = len(X_data) total_accuracy = 0 sess = tf.get_default_session() for offset in range(0, num_examples, BATCH_SIZE): batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE] accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y}) total_accuracy += (accuracy * len(batch_x)) return total_accuracy / num_examples print("Complete") with tf.Session() as sess: sess.run(tf.global_variables_initializer()) num_examples = len(X_train) print("Training...") print() for i in range(EPOCHS): X_train, y_train = shuffle(X_train, y_train) for offset in range(0, num_examples, BATCH_SIZE): end = offset + BATCH_SIZE batch_x, batch_y = X_train[offset:end], y_train[offset:end] sess.run(training_operation, feed_dict={x: batch_x, y: batch_y}) validation_accuracy = evaluate(X_validation, y_validation) print("EPOCH {} ...".format(i+1)) print("Validation Accuracy = {:.3f}".format(validation_accuracy)) print() saver.save(sess, '.\lenet') print("Model saved") with tf.Session() as sess: saver.restore(sess, tf.train.latest_checkpoint('.')) test_accuracy = evaluate(X_test, y_test) print("Test Accuracy = {:.3f}".format(test_accuracy)) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: The MNIST data that TensorFlow pre-loads comes as 28x28x1 images. Step2: Visualize Data Step3: Preprocess Data Step4: Setup TensorFlow Step5: SOLUTION Step6: Features and Labels Step7: Training Pipeline Step8: Model Evaluation Step9: Train the Model Step10: Evaluate the Model
3,202
<ASSISTANT_TASK:> Python Code: # Import all necessary libraries, this is a configuration step for the exercise. # Please run it before the simulation code! import numpy as np import matplotlib.pyplot as plt import matplotlib.animation as animation # Show the plots in the Notebook. plt.switch_backend("nbagg") # Initialization of setup # -------------------------------------------------------------------------- nx = 800 # number of grid points c0 = 2500 # acoustic velocity in m/s rho = 2500 # density in kg/m^3 Z0 = rho*c0 # impedance mu = rho*c0**2 # shear modulus rho0 = rho # density mu0 = mu # shear modulus xmax = 10000 # in m eps = 0.5 # CFL tmax = 1.5 # simulation time in s isnap = 10 # plotting rate sig = 200 # argument in the inital condition x0 = 2500 # position of the initial condition # Finite Differences setup # -------------------------------------------------------------------------- dx = xmax/(nx-1) # calculate space increment xfd = np.arange(0, nx)*dx # initialize space mufd = np.zeros(xfd.size) + mu0 # initialize shear modulus rhofd = np.zeros(xfd.size) + rho0 # initialize density # Introduce inhomogeneity mufd[int((nx-1)/2) + 1:nx] = mufd[int((nx-1)/2) + 1:nx]*4 # initialize fields s = np.zeros(xfd.size) v = np.zeros(xfd.size) dv = np.zeros(xfd.size) ds = np.zeros(xfd.size) s = np.exp(-1./sig**2 * (xfd-x0)**2) # Initial condition # Finite Volumes setup # -------------------------------------------------------------------------- A = np.zeros((2,2,nx)) Z = np.zeros((1,nx)) c = np.zeros((1,nx)) # Initialize velocity c = c + c0 c[int(nx/2):nx] = c[int(nx/2):nx]*2 Z = rho*c # Initialize A for each cell for i in range(1,nx): A0 = np.array([[0, -mu], [-1/rho, 0]]) if i > nx/2: A0= np.array([[0, -4*mu], [-1/rho, 0]]) A[:,:,i] = A0 # Initialize Space x, dx = np.linspace(0,xmax,nx,retstep=True) # use wave based CFL criterion dt = eps*dx/np.max(c) # calculate tim step from stability criterion # Simulation time nt = int(np.floor(tmax/dt)) # Initialize wave fields Q = np.zeros((2,nx)) Qnew = np.zeros((2,nx)) # Initial condition #---------------------------------------------------------------- sx = np.exp(-1./sig**2 * (x-x0)**2) Q[0,:] = sx # --------------------------------------------------------------- # Plot initial condition # --------------------------------------------------------------- plt.plot(x, sx, color='r', lw=2, label='Initial condition') plt.ylabel('Amplitude', size=16) plt.xlabel('x', size=16) plt.legend() plt.grid(True) plt.show() # Initialize animated plot # --------------------------------------------------------------- fig = plt.figure(figsize=(10,6)) ax1 = fig.add_subplot(2,1,1) ax2 = fig.add_subplot(2,1,2) ax1.axvspan(((nx-1)/2+1)*dx, nx*dx, alpha=0.2, facecolor='b') ax2.axvspan(((nx-1)/2+1)*dx, nx*dx, alpha=0.2, facecolor='b') ax1.set_xlim([0, xmax]) ax2.set_xlim([0, xmax]) ax1.set_ylabel('Stress') ax2.set_ylabel('Velocity') ax2.set_xlabel(' x ') line1 = ax1.plot(x, Q[0,:], 'k', x, s, 'r--') line2 = ax2.plot(x, Q[1,:], 'k', x, v, 'r--') plt.suptitle('Heterogeneous F. 
volume - Lax-Wendroff method', size=16) ax1.text(0.1*xmax, 0.8*max(sx), '$\mu$ = $\mu_{o}$') ax1.text(0.8*xmax, 0.8*max(sx), '$\mu$ = $4\mu_{o}$') plt.ion() # set interective mode plt.show() # --------------------------------------------------------------- # Time extrapolation # --------------------------------------------------------------- for j in range(nt): # Finite Volume Extrapolation scheme------------------------- for i in range(1,nx-1): # Lax-Wendroff method dQl = Q[:,i] - Q[:,i-1] dQr = Q[:,i+1] - Q[:,i] Qnew[:,i] = Q[:,i] - dt/(2*dx)*A[:,:,i] @ (dQl + dQr)\ + 1/2*(dt/dx)**2 *A[:,:,i] @ A[:,:,i] @ (dQr - dQl) # Absorbing boundary conditions Qnew[:,0] = Qnew[:,1] Qnew[:,nx-1] = Qnew[:,nx-2] Q, Qnew = Qnew, Q # Finite Difference Extrapolation scheme--------------------- # Stress derivative for i in range(1, nx-1): ds[i] = (s[i+1] - s[i])/dx # Velocity extrapolation v = v + dt*ds/rhofd # Velocity derivative for i in range(1, nx-1): dv[i] = (v[i] - v[i-1])/dx # Stress extrapolation s = s + dt*mufd*dv # -------------------------------------- # Animation plot. Display solutions if not j % isnap: for l in line1: l.remove() del l for l in line2: l.remove() del l line1 = ax1.plot(x, Q[0,:], 'k', x, s, 'r--') line2 = ax2.plot(x, Q[1,:], 'k', x, v, 'r--') plt.legend(iter(line2), ('F. Volume', 'f. Diff')) plt.gcf().canvas.draw() <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: 1. Initialization of setup Step2: 2. Finite Differences setup Step3: 3. Finite Volumes setup Step4: 4. Initial condition Step5: 4. Solution for the inhomogeneous problem
3,203
<ASSISTANT_TASK:> Python Code: from __future__ import print_function # We'll need numpy for some mathematical operations import numpy as np # matplotlib for displaying the output import matplotlib.pyplot as plt import matplotlib.style as ms ms.use('seaborn-muted') %matplotlib inline # and IPython.display for audio output import IPython.display # Librosa for audio import librosa # And the display module for visualization import librosa.display audio_path = librosa.util.example_audio_file() # or uncomment the line below and point it at your favorite song: # # audio_path = '/path/to/your/favorite/song.mp3' y, sr = librosa.load(audio_path) # Let's make and display a mel-scaled power (energy-squared) spectrogram S = librosa.feature.melspectrogram(y, sr=sr, n_mels=128) # Convert to log scale (dB). We'll use the peak power as reference. log_S = librosa.logamplitude(S, ref_power=np.max) # Make a new figure plt.figure(figsize=(12,4)) # Display the spectrogram on a mel scale # sample rate and hop length parameters are used to render the time axis librosa.display.specshow(log_S, sr=sr, x_axis='time', y_axis='mel') # Put a descriptive title on the plot plt.title('mel power spectrogram') # draw a color bar plt.colorbar(format='%+02.0f dB') # Make the figure layout compact plt.tight_layout() y_harmonic, y_percussive = librosa.effects.hpss(y) # What do the spectrograms look like? # Let's make and display a mel-scaled power (energy-squared) spectrogram S_harmonic = librosa.feature.melspectrogram(y_harmonic, sr=sr) S_percussive = librosa.feature.melspectrogram(y_percussive, sr=sr) # Convert to log scale (dB). We'll use the peak power as reference. log_Sh = librosa.logamplitude(S_harmonic, ref_power=np.max) log_Sp = librosa.logamplitude(S_percussive, ref_power=np.max) # Make a new figure plt.figure(figsize=(12,6)) plt.subplot(2,1,1) # Display the spectrogram on a mel scale librosa.display.specshow(log_Sh, sr=sr, y_axis='mel') # Put a descriptive title on the plot plt.title('mel power spectrogram (Harmonic)') # draw a color bar plt.colorbar(format='%+02.0f dB') plt.subplot(2,1,2) librosa.display.specshow(log_Sp, sr=sr, x_axis='time', y_axis='mel') # Put a descriptive title on the plot plt.title('mel power spectrogram (Percussive)') # draw a color bar plt.colorbar(format='%+02.0f dB') # Make the figure layout compact plt.tight_layout() # We'll use a CQT-based chromagram here. An STFT-based implementation also exists in chroma_cqt() # We'll use the harmonic component to avoid pollution from transients C = librosa.feature.chroma_cqt(y=y_harmonic, sr=sr) # Make a new figure plt.figure(figsize=(12,4)) # Display the chromagram: the energy in each chromatic pitch class as a function of time # To make sure that the colors span the full range of chroma values, set vmin and vmax librosa.display.specshow(C, sr=sr, x_axis='time', y_axis='chroma', vmin=0, vmax=1) plt.title('Chromagram') plt.colorbar() plt.tight_layout() # Next, we'll extract the top 13 Mel-frequency cepstral coefficients (MFCCs) mfcc = librosa.feature.mfcc(S=log_S, n_mfcc=13) # Let's pad on the first and second deltas while we're at it delta_mfcc = librosa.feature.delta(mfcc) delta2_mfcc = librosa.feature.delta(mfcc, order=2) # How do they look? 
We'll show each in its own subplot plt.figure(figsize=(12, 6)) plt.subplot(3,1,1) librosa.display.specshow(mfcc) plt.ylabel('MFCC') plt.colorbar() plt.subplot(3,1,2) librosa.display.specshow(delta_mfcc) plt.ylabel('MFCC-$\Delta$') plt.colorbar() plt.subplot(3,1,3) librosa.display.specshow(delta2_mfcc, sr=sr, x_axis='time') plt.ylabel('MFCC-$\Delta^2$') plt.colorbar() plt.tight_layout() # For future use, we'll stack these together into one matrix M = np.vstack([mfcc, delta_mfcc, delta2_mfcc]) # Now, let's run the beat tracker. # We'll use the percussive component for this part plt.figure(figsize=(12, 6)) tempo, beats = librosa.beat.beat_track(y=y_percussive, sr=sr) # Let's re-draw the spectrogram, but this time, overlay the detected beats plt.figure(figsize=(12,4)) librosa.display.specshow(log_S, sr=sr, x_axis='time', y_axis='mel') # Let's draw transparent lines over the beat frames plt.vlines(librosa.frames_to_time(beats), 1, 0.5 * sr, colors='w', linestyles='-', linewidth=2, alpha=0.5) plt.axis('tight') plt.colorbar(format='%+02.0f dB') plt.tight_layout() print('Estimated tempo: %.2f BPM' % tempo) print('First 5 beat frames: ', beats[:5]) # Frame numbers are great and all, but when do those beats occur? print('First 5 beat times: ', librosa.frames_to_time(beats[:5], sr=sr)) # We could also get frame numbers from times by librosa.time_to_frames() # feature.sync will summarize each beat event by the mean feature vector within that beat M_sync = librosa.util.sync(M, beats) plt.figure(figsize=(12,6)) # Let's plot the original and beat-synchronous features against each other plt.subplot(2,1,1) librosa.display.specshow(M) plt.title('MFCC-$\Delta$-$\Delta^2$') # We can also use pyplot *ticks directly # Let's mark off the raw MFCC and the delta features plt.yticks(np.arange(0, M.shape[0], 13), ['MFCC', '$\Delta$', '$\Delta^2$']) plt.colorbar() plt.subplot(2,1,2) # librosa can generate axis ticks from arbitrary timestamps and beat events also librosa.display.specshow(M_sync, x_axis='time', x_coords=librosa.frames_to_time(librosa.util.fix_frames(beats))) plt.yticks(np.arange(0, M_sync.shape[0], 13), ['MFCC', '$\Delta$', '$\Delta^2$']) plt.title('Beat-synchronous MFCC-$\Delta$-$\Delta^2$') plt.colorbar() plt.tight_layout() # Beat synchronization is flexible. # Instead of computing the mean delta-MFCC within each beat, let's do beat-synchronous chroma # We can replace the mean with any statistical aggregation function, such as min, max, or median. C_sync = librosa.util.sync(C, beats, aggregate=np.median) plt.figure(figsize=(12,6)) plt.subplot(2, 1, 1) librosa.display.specshow(C, sr=sr, y_axis='chroma', vmin=0.0, vmax=1.0, x_axis='time') plt.title('Chroma') plt.colorbar() plt.subplot(2, 1, 2) librosa.display.specshow(C_sync, y_axis='chroma', vmin=0.0, vmax=1.0, x_axis='time', x_coords=librosa.frames_to_time(librosa.util.fix_frames(beats))) plt.title('Beat-synchronous Chroma (median aggregation)') plt.colorbar() plt.tight_layout() <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: By default, librosa will resample the signal to 22050Hz. Step2: Harmonic-percussive source separation Step3: Chromagram Step4: MFCC Step5: Beat tracking Step6: By default, the beat tracker will trim away any leading or trailing beats that don't appear strong enough. Step7: Beat-synchronous feature aggregation
3,204
<ASSISTANT_TASK:> Python Code: %%javascript // From https://github.com/kmahelona/ipython_notebook_goodies $.getScript('https://kmahelona.github.io/ipython_notebook_goodies/ipython_notebook_toc.js') def add(n1, n2): return n1 + n2 def multiply(n1, n2): return n1 * n2 def exponentiate(n1, n2): Raise n1 to the power of n2 import math return math.pow(n1, n2) def is_number(n): Return True iff n is a number. # A number can always be converted to a float try: float(n) return True except ValueError: return False def add(n1, n2): if not (is_number(n1) and is_number(n2)): print("Arguments must be numbers!") return return n1 + n2 def multiply(n1, n2): if not (is_number(n1) and is_number(n2)): print("Arguments must be numbers!") return return n1 * n2 def exponentiate(n1, n2): Raise n1 to the power of n2 if not (is_number(n1) and is_number(n2)): print("Arguments must be numbers!") return import math return math.pow(n1, n2) def validate_two_arguments(n1, n2): Returns True if n1 and n2 are both numbers. if not (is_number(n1) and is_number(n2)): return False return True def add(n1, n2): if validate_two_arguments(n1, n2): return n1 + n2 def multiply(n1, n2): if validate_two_arguments(n1, n2): return n1 * n2 def exponentiate(n1, n2): Raise n1 to the power of n2 if validate_two_arguments(n1, n2): import math return math.pow(n1, n2) # The decorator: takes a function. def validate_arguments(func): # The decorator will be returning wrapped_func, a function that has the # same signature as add, multiply, etc. def wrapped_func(n1, n2): # If we don't have two numbers, we don't want to run the function. # Best practice ("be explicit") is to raise an error here # instead of just returning None. if not validate_two_arguments(n1, n2): raise Exception("Arguments must be numbers!") # We've passed our checks, so we can call the function with the passed in arguments. # If you like, think of this as # result = func(n1, n2) # return result # to distinguish it from the outer return where we're returning a function. return func(n1, n2) # This is where we return the function that has the same signature. return wrapped_func @validate_arguments def add(n1, n2): return n1 + n2 # Don't forget, the @ syntax just means # add = validate_decorator(add) print(add(1, 3)) try: add(2, 'hi') except Exception as e: print("Caught Exception: {}".format(e)) @validate_arguments # Won't work! def add3(n1, n2, n3): return n1 + n2 + n3 add3(1, 2, 3) # The decorator: takes a function. def validate_arguments(func): # Note the *args! Think of this as representing "as many arguments as you want". # So this function will take an arbitrary number of arguments. def wrapped_func(*args): # We just want to apply the check to each argument. for arg in args: if not is_number(arg): raise Exception("Arguments must be numbers!") # We also want to make sure there's at least two arguments. if len(args) < 2: raise Exception("Must specify at least 2 arguments!") # We've passed our checks, so we can call the function with the # passed-in arguments. # Right now, args is a tuple of all the different arguments passed in # (more explanation below), so we want to expand them back out when # calling the function. 
return func(*args) return wrapped_func @validate_arguments # This works def add3(n1, n2, n3): return n1 + n2 + n3 add3(1, 2, 3) @validate_arguments # And so does this def addn(*args): Add an arbitrary number of numbers together cumu = 0 for arg in args: cumu += arg return cumu print(addn(1, 2, 3, 4, 5)) # range(n) gives a list, so we expand the list into positional arguments... print(addn(*range(10))) def foo(*args): print("foo args: {}".format(args)) print("foo args type: {}".format(type(args))) # So foo can take an arbitrary number of arguments print("First call:") foo(1, 2, 'a', 3, True) # Which can be written using the * syntax to expand an iterable print("\nSecond call:") l = [1, 2, 'a', 3, True] foo(*l) def bar(**kwargs): print("bar kwargs: {}".format(kwargs)) # bar takes an arbitrary number of keyword arguments print("First call:") bar(location='US-PAO', ldap='awan', age=None) # Which can also be written using the ** syntax to expand a dict print("\nSecond call:") d = {'location': 'US-PAO', 'ldap': 'awan', 'age': None} bar(**d) def baz(*args, **kwargs): print("baz args: {}. kwargs: {}".format(args, kwargs)) # Calling baz with a mixture of positional and keyword arguments print("First call:") baz(1, 3, 'hi', name='Joe', age=37, occupation='Engineer') # Which is the same as print("\nSecond call:") l = [1, 3, 'hi'] d = {'name': 'Joe', 'age': 37, 'occupation': 'Engineer'} baz(*l, **d) def convert_arguments(func): Convert func arguments to floats. # Introducing the leading underscore: (weakly) marks a private # method/property that should not be accessed outside the defining # scope. Look up PEP 8 for more. def _wrapped_func(*args): new_args = [float(arg) for arg in args] return func(*new_args) return _wrapped_func @convert_arguments @validate_arguments def divide_n(*args): cumu = args[0] for arg in args[1:]: cumu = cumu / arg return cumu # The user doesn't need to think about integer division! divide_n(103, 2, 8) def convert_arguments_to(to_type=float): Convert arguments to the given to_type by casting them. def _wrapper(func): def _wrapped_func(*args): new_args = [to_type(arg) for arg in args] return func(*new_args) return _wrapped_func return _wrapper @validate_arguments def divide_n(*args): cumu = args[0] for arg in args[1:]: cumu = cumu / arg return cumu @convert_arguments_to(to_type=int) def divide_n_as_integers(*args): return divide_n(*args) @convert_arguments_to(to_type=float) def divide_n_as_float(*args): return divide_n(*args) print(divide_n_as_float(7, 3)) print(divide_n_as_integers(7, 3)) @validate_arguments def foo(*args): foo frobs bar pass print(foo.__name__) print(foo.__doc__) from functools import wraps def better_validate_arguments(func): @wraps(func) def wrapped_func(*args): for arg in args: if not is_number(arg): raise Exception("Arguments must be numbers!") if len(args) < 2: raise Exception("Must specify at least 2 arguments!") return func(*args) return wrapped_func @better_validate_arguments def bar(*args): bar frobs foo pass print(bar.__name__) print(bar.__doc__) def jedi_mind_trick(func): def _jedi_func(): return "Not the droid you're looking for" return _jedi_func @jedi_mind_trick def get_droid(): return "Found the droid!" get_droid() <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step2: Basics Step5: Well, we only want these functions to work if both inputs are numbers. So we could do Step8: But this is yucky Step9: This is definitely better. But there's still some repeated logic. Like, what if we want to return an error if we don't get numbers, or print something before running the code? We'd still have to make the changes in multiple places. The code isn't DRY. Step10: This pattern is nice because we've even refactored out all the validation logic (even the "if blah then blah" part) into the decorator. Step12: We can't decorate this because the wrapped function expects 2 arguments. Step13: <a id='args'>*args</a> Step14: Back to the decorator Step15: And in case your head doesn't hurt yet, we can do both together Step17: Advanced decorators Step19: But now let's say we want to define a divide_n_as_integers function. We could write a new decorator, or we could alter our decorator so that we can specify what we want to convert the arguments to. Let's try the latter. Step21: Did you notice the tricky thing about creating a decorator that takes arguments? We had to create a function to "return a decorator". The outermost function, convert_arguments_to, returns a function that takes a function, which is what we've been calling a "decorator". Step23: functools.wraps solves this problem. Use it as follows Step24: Think of the @wraps decorator making it so that wrapped_func knows what function it originally wrapped.
3,205
<ASSISTANT_TASK:> Python Code: import bs4 # read in the xml file soup = bs4.BeautifulSoup(open('Ode.xml'), 'html.parser') # get the text content inside the "EEBO" tag text = soup.find('eebo').get_text() # print the text print(text) import bs4 # read in the xml file soup = bs4.BeautifulSoup(open('Ode.xml'), 'html.parser') # get a list of the div1 tags elems = soup.find_all('div1') # iterate over the div1 tags in soup for i in elems: # only proceed if the current tag has the attribute type="ode" if i['type'] == 'ode': # print the text content of this div1 element print(i.get_text()) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1:
3,206
<ASSISTANT_TASK:> Python Code: import xgboost as xgb import shap from sklearn.model_selection import train_test_split import pandas as pd X,y = shap.datasets.boston() X.head() print(y.shape) # predict house price y[4:10] y = pd.DataFrame(y) y.head() # for regression method, I can not use stratify split with this method X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=410, train_size=0.75, test_size=0.25) X_train.reset_index(drop=True, inplace=True) X_test.reset_index(drop=True, inplace=True) y_train.reset_index(drop=True, inplace=True) y_test.reset_index(drop=True, inplace=True) param_dist = {'learning_rate':0.01} model = xgb.XGBRegressor(**param_dist) model.fit(X_train, y_train, eval_set=[(X_train, y_train), (X_test, y_test)], verbose=False) explainer = shap.TreeExplainer(model) shap_values = explainer.shap_values(X) print(shap_values.shape) shap_values # If you JS load successfully, this will generate interactive visualization shap.initjs() check_row = 7 # check each individual case shap.force_plot(explainer.expected_value, shap_values[check_row,:], X.iloc[check_row,:]) shap.summary_plot(shap_values, X) X.iloc[7:9, :] shap_values[7:9, :] shap.summary_plot(shap_values[7:9, :], X.iloc[7:9, :]) # comparing with xgboost feature importance (by default it's using gain to rank fearure importance) print(X.columns) model.feature_importances_ # using absolute mean value of SHAP values to rank the features shap.summary_plot(shap_values, X, plot_type="bar") <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Summarize Feature Importance Step2: Check Individual Cases
3,207
<ASSISTANT_TASK:> Python Code: # A bit of setup import numpy as np import matplotlib.pyplot as plt from cs231n.classifiers.neural_net import TwoLayerNet %matplotlib inline plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' # for auto-reloading external modules # see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython %load_ext autoreload %autoreload 2 def rel_error(x, y): returns relative error return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y)))) # Create a small net and some toy data to check your implementations. # Note that we set the random seed for repeatable experiments. input_size = 4 hidden_size = 10 num_classes = 3 num_inputs = 5 def init_toy_model(): np.random.seed(0) return TwoLayerNet(input_size, hidden_size, num_classes, std=1e-1) def init_toy_data(): np.random.seed(1) X = 10 * np.random.randn(num_inputs, input_size) y = np.array([0, 1, 2, 2, 1]) return X, y net = init_toy_model() X, y = init_toy_data() scores = net.loss(X) print 'Your scores:' print scores print print 'correct scores:' correct_scores = np.asarray([ [-0.81233741, -1.27654624, -0.70335995], [-0.17129677, -1.18803311, -0.47310444], [-0.51590475, -1.01354314, -0.8504215 ], [-0.15419291, -0.48629638, -0.52901952], [-0.00618733, -0.12435261, -0.15226949]]) print correct_scores print # The difference should be very small. We get < 1e-7 print 'Difference between your scores and correct scores:' print np.sum(np.abs(scores - correct_scores)) loss, _ = net.loss(X, y, reg=0.1) correct_loss = 1.30378789133 # should be very small, we get < 1e-12 print 'Difference between your loss and correct loss:' print np.sum(np.abs(loss - correct_loss)) from cs231n.gradient_check import eval_numerical_gradient # Use numeric gradient checking to check your implementation of the backward pass. # If your implementation is correct, the difference between the numeric and # analytic gradients should be less than 1e-8 for each of W1, W2, b1, and b2. loss, grads = net.loss(X, y, reg=0.1) # these should all be less than 1e-8 or so for param_name in grads: f = lambda W: net.loss(X, y, reg=0.1)[0] param_grad_num = eval_numerical_gradient(f, net.params[param_name], verbose=False) print '%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name])) net = init_toy_model() stats = net.train(X, y, X, y, learning_rate=1e-1, reg=1e-5, num_iters=100, verbose=False) print 'Final training loss: ', stats['loss_history'][-1] # plot the loss history plt.plot(stats['loss_history']) plt.xlabel('iteration') plt.ylabel('training loss') plt.title('Training Loss history') plt.show() from cs231n.data_utils import load_CIFAR10 def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000): Load the CIFAR-10 dataset from disk and perform preprocessing to prepare it for the two-layer neural net classifier. These are the same steps as we used for the SVM, but condensed to a single function. 
# Load the raw CIFAR-10 data cifar10_dir = 'cs231n/datasets/cifar-10-batches-py' X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir) # Subsample the data mask = range(num_training, num_training + num_validation) X_val = X_train[mask] y_val = y_train[mask] mask = range(num_training) X_train = X_train[mask] y_train = y_train[mask] mask = range(num_test) X_test = X_test[mask] y_test = y_test[mask] # Normalize the data: subtract the mean image mean_image = np.mean(X_train, axis=0) X_train -= mean_image X_val -= mean_image X_test -= mean_image # Reshape data to rows X_train = X_train.reshape(num_training, -1) X_val = X_val.reshape(num_validation, -1) X_test = X_test.reshape(num_test, -1) return X_train, y_train, X_val, y_val, X_test, y_test # Invoke the above function to get our data. X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data() print 'Train data shape: ', X_train.shape print 'Train labels shape: ', y_train.shape print 'Validation data shape: ', X_val.shape print 'Validation labels shape: ', y_val.shape print 'Test data shape: ', X_test.shape print 'Test labels shape: ', y_test.shape input_size = 32 * 32 * 3 hidden_size = 50 num_classes = 10 net = TwoLayerNet(input_size, hidden_size, num_classes) # Train the network stats = net.train(X_train, y_train, X_val, y_val, num_iters=1000, batch_size=200, learning_rate=1e-4, learning_rate_decay=0.95, reg=0.5, verbose=True) # Predict on the validation set val_acc = (net.predict(X_val) == y_val).mean() print 'Validation accuracy: ', val_acc # Plot the loss function and train / validation accuracies plt.subplot(2, 1, 1) plt.plot(stats['loss_history']) plt.title('Loss history') plt.xlabel('Iteration') plt.ylabel('Loss') plt.subplot(2, 1, 2) plt.plot(stats['train_acc_history'], label='train') plt.plot(stats['val_acc_history'], label='val') plt.title('Classification accuracy history') plt.xlabel('Epoch') plt.ylabel('Clasification accuracy') plt.show() from cs231n.vis_utils import visualize_grid # Visualize the weights of the network def show_net_weights(net): W1 = net.params['W1'] W1 = W1.reshape(32, 32, 3, -1).transpose(3, 0, 1, 2) plt.imshow(visualize_grid(W1, padding=3).astype('uint8')) plt.gca().axis('off') plt.show() show_net_weights(net) best_net = None # store the best model into this ################################################################################# # TODO: Tune hyperparameters using the validation set. Store your best trained # # model in best_net. # # # # To help debug your network, it may help to use visualizations similar to the # # ones we used above; these visualizations will have significant qualitative # # differences from the ones we saw above for the poorly tuned network. # # # # Tweaking hyperparameters by hand can be fun, but you might find it useful to # # write code to sweep through possible combinations of hyperparameters # # automatically like we did on the previous exercises. # learning_rates = [1e-4, 2e-4] regularization_strengths = [1,1e4] # results is dictionary mapping tuples of the form # (learning_rate, regularization_strength) to tuples of the form # (training_accuracy, validation_accuracy). The accuracy is simply the fraction # of data points that are correctly classified. results = {} best_val = -1 # The highest validation accuracy that we have seen so far. 
for learning_rate in learning_rates: for regularization_strength in regularization_strengths: net = TwoLayerNet(input_size,hidden_size,num_classes) net.train(X_train, y_train,X_val,y_val, learning_rate= learning_rate, reg=regularization_strength, num_iters=1500) y_train_predict = net.predict(X_train) y_val_predict = net.predict(X_val) accuracy_train = np.mean(y_train_predict == y_train) accuracy_validation = np.mean(y_val_predict == y_val) results[(learning_rate,regularization_strength)] = (accuracy_train,accuracy_validation) if accuracy_validation > best_val: best_val = accuracy_validation best_net = net ################################################################################ # END OF YOUR CODE # ################################################################################ # Print out results. for lr, reg in sorted(results): train_accuracy, val_accuracy = results[(lr, reg)] print 'lr %e reg %e train accuracy: %f val accuracy: %f' % ( lr, reg, train_accuracy, val_accuracy) print 'best validation accuracy achieved during cross-validation: %f' % best_val # visualize the weights of the best network show_net_weights(best_net) test_acc = (best_net.predict(X_test) == y_test).mean() print 'Test accuracy: ', test_acc <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Implementing a Neural Network Step2: We will use the class TwoLayerNet in the file cs231n/classifiers/neural_net.py to represent instances of our network. The network parameters are stored in the instance variable self.params where keys are string parameter names and values are numpy arrays. Below, we initialize toy data and a toy model that we will use to develop your implementation. Step3: Forward pass Step4: Forward pass Step5: Backward pass Step6: Train the network Step8: Load the data Step9: Train a network Step10: Debug the training Step11: Tune your hyperparameters Step12: Run on the test set
3,208
<ASSISTANT_TASK:> Python Code: import numpy as np import pandas as pd import torch a, b = load_data() c = (a[:, -1:] + b[:, :1]) / 2 result = torch.cat((a[:, :-1], c, b[:, 1:]), dim=1) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description:
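Because the prompt field of the row above is empty and its load_data() helper is never defined, here is a hedged, self-contained illustration of what that snippet computes; the dummy tensors below are assumptions standing in for load_data()'s return values, not part of the dataset.

import torch

# Stand-ins for the undefined load_data(): two 2x4 float tensors.
a = torch.arange(8.).reshape(2, 4)
b = torch.arange(8., 16.).reshape(2, 4)

# Average a's last column with b's first column to form a shared boundary column.
c = (a[:, -1:] + b[:, :1]) / 2

# Stitch together: a without its last column, the averaged column, b without its first column.
result = torch.cat((a[:, :-1], c, b[:, 1:]), dim=1)
print(result.shape)  # torch.Size([2, 7]) -> 3 + 1 + 3 columns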
3,209
<ASSISTANT_TASK:> Python Code: import warnings warnings.filterwarnings('ignore') from tardis import run_tardis import tardis tardis.logger.setLevel(0) tardis.logging.captureWarnings(False) def display_table(sim): '''Display a table of velocities and radiative temperatures at each iteration ''' # We have direct access to the attributes of the simulation columns = zip(sim.model.v_inner[::5].to('km/s'), sim.model.t_rad[::5].to('K')) print("Iteration:", sim.iterations_executed) print(" {:<15} {:<15}".format('v_inner', 't_rad')) format_string = " {0.value:<8.2f} {0.unit:<6s}\ {1.value:<8.2f} {1.unit:<6s}" for velocity, temperature in columns: print(format_string.format(velocity, temperature)) sim = run_tardis('tardis_example.yml', simulation_callbacks=[[display_table]]) def append_t_rad_to_table(sim, table): '''append the array for the radiative temperature at each iteration to a given table''' table.append(sim.model.t_rad.copy()) t_rad_table = [] # list to store t_rad at each iteration callbacks = [[display_table], [append_t_rad_to_table, t_rad_table]] sim = run_tardis('tardis_example.yml', simulation_callbacks=callbacks) %pylab notebook for t_rad in t_rad_table: plot(t_rad) ylabel(r'$T_{rad}\ [K]$') xlabel('Iteration Number') <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: The command run_tardis allows users to provide a set of callbacks to the simulation. These callbacks are called at the end of each iteration. This example will show you how to create a custom callback and run a model with TARDIS. As an example, we create a custom callback that prints out some basic information about our model at every iteration. Specifically, we'll print out a table of the inner velocities of each shell as well as the radiative temperature of each shell. The first thing to note is that the callback function must have the simulation object as the first argument. This grants the user access to the state of the simulation at each iteration. Step2: Now we give the callback to run_tardis. run_tardis offers the keyword argument simulation_callbacks which takes a list of lists containing the callback as well as any optional arguments you wish to include with your callback. For this example our function requires no extra arguments and we only have a single callback, so we give run_tardis a 2D list containing the callback as its only element. Step3: Running Callbacks with Extra Arguments Step4: In order to add our new callback, we just create another entry in our list of callbacks. Since append_t_rad_to_table takes an extra argument, we will provide that argument in the inner list containing the callback. Step5: Now we can look at the way the radiative temperature changes in each shell every iteration.
3,210
<ASSISTANT_TASK:> Python Code: from bravado.client import SwaggerClient client = SwaggerClient.from_url('https://www.genomenexus.org/v2/api-docs', config={"validate_requests":False,"validate_responses":False,"validate_swagger_spec":False}) print(client) dir(client) for a in dir(client): client.__setattr__(a[:-len('-controller')], client.__getattr__(a)) variant = client.annotation.fetchVariantAnnotationGET(variant='17:g.41242962_41242963insGA').result() dir(variant) tc1 = variant.transcript_consequences[0] dir(tc1) print(tc1) import seaborn as sns %matplotlib inline sns.set_style("white") sns.set_context('talk') import matplotlib.pyplot as plt cbioportal = SwaggerClient.from_url('https://www.cbioportal.org/api/api-docs', config={"validate_requests":False,"validate_responses":False}) print(cbioportal) for a in dir(cbioportal): cbioportal.__setattr__(a.replace(' ', '_').lower(), cbioportal.__getattr__(a)) dir(cbioportal) muts = cbioportal.mutations.getMutationsInMolecularProfileBySampleListIdUsingGET( molecularProfileId="msk_impact_2017_mutations", # {study_id}_mutations gives default mutations profile for study sampleListId="msk_impact_2017_all", # {study_id}_all includes all samples projection="DETAILED" # include gene info ).result() import pandas as pd mdf = pd.DataFrame([dict(m.__dict__['_Model__dict'], **m.__dict__['_Model__dict']['gene'].__dict__['_Model__dict']) for m in muts]) mdf.groupby('uniqueSampleKey').studyId.count().plot(kind='hist', bins=400, xlim=(0,30)) plt.xlabel('Number of mutations in sample') plt.ylabel('Number of samples') plt.title('Number of mutations across samples in MSK-IMPACT (2017)') sns.despine(trim=True) mdf.variantType.astype(str).value_counts().plot(kind='bar') plt.title('Types of mutations in MSK-IMPACT (2017)') sns.despine(trim=False) snvs = mdf[(mdf.variantType == 'SNP') & (mdf.variantAllele != '-') & (mdf.referenceAllele != '-')].copy() # need query string like 9:g.22125503G>C snvs['hgvs_for_gn'] = snvs.chromosome.astype(str) + ":g." 
+ snvs.startPosition.astype(str) + snvs.referenceAllele + '>' + snvs.variantAllele assert(snvs['hgvs_for_gn'].isnull().sum() == 0) import time qvariants = list(set(snvs.hgvs_for_gn)) gn_results = [] chunk_size = 500 print("Querying {} variants".format(len(qvariants))) for n, qvar in enumerate([qvariants[i:i + chunk_size] for i in range(0, len(qvariants), chunk_size)]): try: gn_results += client.annotation.fetchVariantAnnotationPOST(variants=qvar,fields=['hotspots']).result() print("Querying [{}, {}]: Success".format(n*chunk_size, min(len(qvariants), n*chunk_size+chunk_size))) except Exception as e: print("Querying [{}, {}]: Failed".format(n*chunk_size, min(len(qvariants), n*chunk_size+chunk_size))) pass time.sleep(1) # add a delay, to not overload server gn_dict = {v.id:v for v in gn_results} def is_sift_high(variant): return variant in gn_dict and \ len(list(filter(lambda x: x.sift_prediction == 'deleterious', gn_dict[variant].transcript_consequences))) > 0 def is_polyphen_high(variant): return variant in gn_dict and \ len(list(filter(lambda x: x.polyphen_prediction == 'probably_damaging', gn_dict[variant].transcript_consequences))) > 0 snvs['is_sift_high'] = snvs.hgvs_for_gn.apply(is_sift_high) snvs['is_polyphen_high'] = snvs.hgvs_for_gn.apply(is_polyphen_high) from matplotlib_venn import venn2 venn2(subsets=((snvs.is_sift_high & (~snvs.is_polyphen_high)).sum(), (snvs.is_polyphen_high & (~snvs.is_sift_high)).sum(), (snvs.is_polyphen_high & snvs.is_sift_high).sum()), set_labels=["SIFT","PolyPhen-2"]) plt.title("Variants as predicted to have a high impact in MSK-IMPACT (2017)") <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Connect with cBioPortal API Step2: Annotate cBioPortal mutations with Genome Nexus Step3: Check overlap SIFT/PolyPhen-2
3,211
<ASSISTANT_TASK:> Python Code: import matplotlib.pyplot as plt import numpy as np import pints import pints.plot import pints.toy # Define model parameters parameters = [2, 0.015, 500, 10, 1.1, 0.05] f_0, r, k, sigma_base, eta, sigma_rel = parameters # Instantiate logistic growth model with f(t=0) = f_0 model = pints.toy.LogisticModel(initial_population_size=f_0) # Define measurement time points times = np.linspace(start=0, stop=1000, num=50) # Solve logistic growth model model_output = model.simulate(parameters=[r, k], times=times) # Add noise to the model output according to the combined Gaussian error model # Draw a standard Gaussian random variable for each model output gauss = np.random.normal(loc=0.0, scale=1.0, size=len(model_output)) # Scale standard Gaussian noise according to error model error = (sigma_base + sigma_rel * model_output**eta) * gauss # Add noise to model output observations = model_output + error # Save data as time-observation tuples data = np.vstack([times, observations]) # Create figure plt.figure(figsize=(12, 6)) # Plot model output (no noise) plt.plot(data[0, :], model_output, label='model output') # Plot generated data plt.scatter(data[0, :], data[1, :], label='data', edgecolors='black', alpha=0.5) # Create X and Y axis title plt.xlabel('Time [dimensionless]') plt.ylabel('Population size [dimensionless]') # Create legend plt.legend() # Show figure plt.show() # Get true initial population size and carrying capacity f_0 = parameters[0] k = parameters[2] # Forget about f_0 and k (we won't infer those parameters) true_parameters = np.hstack([parameters[1:2], parameters[3:]]) # Create a wrapper around the logistic model class Model(pints.ForwardModel): def __init__(self, f_0, k): self._k = k self._model = pints.toy.LogisticModel(initial_population_size=f_0) def simulate(self, parameters, times): return self._model.simulate(parameters=[parameters[0], self._k], times=times) def n_parameters(self): return 1 # Create an inverse problem which links the logistic growth model to the data problem = pints.SingleOutputProblem(model=Model(f_0=f_0, k=k), times=data[0, :], values=data[1, :]) # Create the constant and multiplicative Gaussian error log-likelihood log_likelihood = pints.ConstantAndMultiplicativeGaussianLogLikelihood(problem) # Create uniform priors for [r, sigma_base, eta, sigma_rel] log_prior_r = pints.UniformLogPrior( [0.005], [0.02] ) log_prior_sigma_base = pints.UniformLogPrior( [1], [20] ) log_prior_eta = pints.UniformLogPrior( [0.5], [1.5] ) log_prior_sigma_rel = pints.UniformLogPrior( [0.001], [1] ) log_prior = pints.ComposedLogPrior( log_prior_r, log_prior_sigma_base, log_prior_eta, log_prior_sigma_rel) # Create posterior log_posterior = pints.LogPosterior(log_likelihood, log_prior) # Choose starting points for mcmc chains xs = [ true_parameters * 1.01, true_parameters * 0.9, true_parameters * 1.15, ] # Create MCMC routine mcmc = pints.MCMCController( log_pdf=log_posterior, chains=len(xs), # number of chains x0=xs, # starting points method=pints.HaarioACMC) # Add stopping criterion mcmc.set_max_iterations(10000) # Set up modest logging mcmc.set_log_to_screen(False) # Run! 
print('Running...') hacmc_chains = mcmc.run() print('Done!') # Show diagnostics summary results = pints.MCMCSummary(chains=hacmc_chains, time=mcmc.time(), parameter_names=['r', 'sigma_base', 'eta', 'sigma_rel']) print(results) # Show trace and histogram pints.plot.trace(samples=hacmc_chains, parameter_names=['r', 'sigma_base', 'eta', 'sigma_rel'], ref_parameters=true_parameters) plt.show() # Show predicted time series for the first chain pints.plot.series(hacmc_chains[0, 200:], problem, true_parameters) plt.show() class LogPosteriorWrapper(pints.LogPDF): def __init__(self, log_pdf, eta): self._log_pdf = log_pdf self._eta = eta def __call__(self, parameters): # Create parameter container params = np.empty(shape=len(parameters)+1) # Fill container with parameters # (This solution is specific to the above presented problem) params[:2] = np.asarray(parameters[:2]) params[2] = self._eta params[3] = parameters[2] return self._log_pdf(params) def n_parameters(self): return self._log_pdf.n_parameters() - 1 # Get true initial population size and carrying capacity f_0 = parameters[0] k = parameters[2] # Forget about f_0 and k (we won't infer those parameters) true_parameters = np.hstack([parameters[1:2], parameters[3:]]) # Create a wrapper around the logistic model class Model(pints.ForwardModel): def __init__(self, f_0, k): self._k = k self._model = pints.toy.LogisticModel(initial_population_size=f_0) def simulate(self, parameters, times): return self._model.simulate(parameters=[parameters[0], self._k], times=times) def n_parameters(self): return 1 # Create an inverse problem which links the logistic growth model to the data problem = pints.SingleOutputProblem(model=Model(f_0=f_0, k=k), times=data[0, :], values=data[1, :]) # Create the constant and multiplicative Gaussian error log-likelihood log_likelihood = pints.ConstantAndMultiplicativeGaussianLogLikelihood(problem) # Create uniform priors for [r, sigma_base, eta, sigma_rel] log_prior_r = pints.UniformLogPrior( [0.005], [0.02] ) log_prior_sigma_base = pints.UniformLogPrior( [1], [20] ) log_prior_eta = pints.UniformLogPrior( [0.5], [1.5] ) log_prior_sigma_rel = pints.UniformLogPrior( [0.001], [1] ) log_prior = pints.ComposedLogPrior( log_prior_r, log_prior_sigma_base, log_prior_eta, log_prior_sigma_rel) # Create posterior (free eta) log_posterior = pints.LogPosterior(log_likelihood, log_prior) # Fix eta eta = true_parameters[2] log_posterior_fixed_eta = LogPosteriorWrapper(log_posterior, eta) # Extract unfixed parameters from true parameters true_parameters_fixed_eta = np.empty(shape=len(true_parameters)-1) true_parameters_fixed_eta[0:2] = true_parameters[0:2] true_parameters_fixed_eta[2] = true_parameters[3] # Choose starting points for mcmc chains xs = [ true_parameters_fixed_eta * 1.01, true_parameters_fixed_eta * 0.9, true_parameters_fixed_eta * 1.15, ] # Create MCMC routine mcmc = pints.MCMCController( log_pdf=log_posterior_fixed_eta, chains=len(xs), # number of chains x0=xs, # starting points method=pints.HaarioACMC) # Add stopping criterion mcmc.set_max_iterations(10000) # Set up modest logging mcmc.set_log_to_screen(False) # Run! 
print('Running...') hacmc_chains_fixed_eta = mcmc.run() print('Done!') # Show diagnostics summary results = pints.MCMCSummary(chains=hacmc_chains_fixed_eta, time=mcmc.time(), parameter_names=['r', 'sigma_base', 'sigma_rel']) print(results) # Show trace and histogram pints.plot.trace(samples=hacmc_chains_fixed_eta, parameter_names=['r', 'sigma_base', 'sigma_rel'], ref_parameters=true_parameters_fixed_eta) plt.show() # Show predicted time series for the first chain pints.plot.series(hacmc_chains_fixed_eta[0, 200:], problem, true_parameters_fixed_eta) plt.show() <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Inference of model parameters Step2: Infer parameters with Haario Adaptive Covariance MCMC Step3: Show quantitative and visual diagnostics of MCMC runs Step4: We can see that although all three MCMC runs behave very similarly ($\hat R$ close to 1, and the sampled posteriors almost entirely lie on top of each other), the sampled posteriors largely fail to recover the true model parameters. While AMCMC was able to infer the growth rate $r$ quite well, the noise posteriors are unlikely to reproduce the data-generating parameter values. It is also worth noting that the effective sample size (ESS) is significantly smaller for all noise parameters than for $r$. Step5: Create log-posterior with fixed $\eta$ Step6: Infer parameters with Haario Adaptive Covariance MCMC Step7: Show quantitative and visual diagnostics of MCMC runs
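A note on the error model used in the row above: pints.ConstantAndMultiplicativeGaussianLogLikelihood is generally described as a Gaussian likelihood whose standard deviation combines a constant floor with a term that grows with the model output, roughly sigma_base + sigma_rel * |f|**eta. The sketch below writes that log-likelihood out in plain NumPy so the roles of sigma_base, eta and sigma_rel are easier to see; it is an illustrative approximation with invented data, not the library's implementation, and the exact parameterisation should be checked against the PINTS documentation.

import numpy as np

def const_mult_gaussian_loglik(f, y, sigma_base, eta, sigma_rel):
    # Noise floor plus an output-dependent term, mirroring the likelihood named above.
    sigma = sigma_base + sigma_rel * np.abs(f) ** eta
    resid = y - f
    return np.sum(-0.5 * np.log(2.0 * np.pi * sigma ** 2) - 0.5 * (resid / sigma) ** 2)

# Invented model output and noisy observations, just to exercise the function.
f = np.linspace(10.0, 450.0, 50)
y = f + np.random.normal(0.0, 5.0 + 0.05 * f, size=f.shape)
print(const_mult_gaussian_loglik(f, y, sigma_base=5.0, eta=1.0, sigma_rel=0.05))

Fixing eta, as the LogPosteriorWrapper above does, leaves one fewer strongly correlated noise parameter to sample, which is one common reason to pin it at a known value.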
3,212
<ASSISTANT_TASK:> Python Code: import os SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data') import shogun as sg import numpy as np import matplotlib.pyplot as plt %matplotlib inline #number of data points. n=100 #generate a random 2d line(y1 = mx1 + c) m = np.random.randint(1,10) c = np.random.randint(1,10) x1 = np.random.random_integers(-20,20,n) y1=m*x1+c #generate the noise. noise=np.random.random_sample([n]) * np.random.random_integers(-35,35,n) #make the noise orthogonal to the line y=mx+c and add it. x=x1 + noise*m/np.sqrt(1+np.square(m)) y=y1 + noise/np.sqrt(1+np.square(m)) twoD_obsmatrix=np.array([x,y]) #to visualise the data we must plot it. plt.rcParams['figure.figsize'] = 7, 7 figure, ax = plt.subplots(1,1) plt.xlim(-50,50) plt.ylim(-50,50) ax.plot(twoD_obsmatrix[0,:],twoD_obsmatrix[1,:],'o',color='green',markersize=6) #the line from which we generated the data is plotted in red ax.plot(x1[:],y1[:],linewidth=0.3,color='red') plt.title('One-Dimensional sub-space with noise') plt.xlabel("x axis") plt.ylabel("y axis") plt.show() #convert the observation matrix into dense feature matrix. train_features = sg.create_features(twoD_obsmatrix) #PCA(EVD) is choosen since N=100 and D=2 (N>D). #However we can also use PCA(AUTO) as it will automagically choose the appropriate method. preprocessor = sg.create_transformer('PCA', method='EVD') #since we are projecting down the 2d data, the target dim is 1. But here the exhaustive method is detailed by #setting the target dimension to 2 to visualize both the eigen vectors. #However, in future examples we will get rid of this step by implementing it directly. preprocessor.put('target_dim', 2) #Centralise the data by subtracting its mean from it. preprocessor.fit(train_features) #get the mean for the respective dimensions. mean_datapoints=preprocessor.get('mean_vector') mean_x=mean_datapoints[0] mean_y=mean_datapoints[1] #Get the eigenvectors(We will get two of these since we set the target to 2). E = preprocessor.get('transformation_matrix') #Get all the eigenvalues returned by PCA. eig_value=preprocessor.get('eigenvalues_vector') e1 = E[:,0] e2 = E[:,1] eig_value1 = eig_value[0] eig_value2 = eig_value[1] #find out the M eigenvectors corresponding to top M number of eigenvalues and store it in E #Here M=1 #slope of e1 & e2 m1=e1[1]/e1[0] m2=e2[1]/e2[0] #generate the two lines x1=range(-50,50) x2=x1 y1=np.multiply(m1,x1) y2=np.multiply(m2,x2) #plot the data along with those two eigenvectors figure, ax = plt.subplots(1,1) plt.xlim(-50, 50) plt.ylim(-50, 50) ax.plot(x[:], y[:],'o',color='green', markersize=5, label="green") ax.plot(x1[:], y1[:], linewidth=0.7, color='black') ax.plot(x2[:], y2[:], linewidth=0.7, color='blue') p1 = plt.Rectangle((0, 0), 1, 1, fc="black") p2 = plt.Rectangle((0, 0), 1, 1, fc="blue") plt.legend([p1,p2],["1st eigenvector","2nd eigenvector"],loc='center left', bbox_to_anchor=(1, 0.5)) plt.title('Eigenvectors selection') plt.xlabel("x axis") plt.ylabel("y axis") plt.show() #The eigenvector corresponding to higher eigenvalue(i.e eig_value2) is choosen (i.e e2). #E is the feature vector. E=e2 #transform all 2-dimensional feature matrices to target-dimensional approximations. yn=preprocessor.transform(train_features).get('feature_matrix') #Since, here we are manually trying to find the eigenvector corresponding to the top eigenvalue. #The 2nd row of yn is choosen as it corresponds to the required eigenvector e2. 
yn1=yn[1,:] x_new=(yn1 * E[0]) + np.tile(mean_x,[n,1]).T[0] y_new=(yn1 * E[1]) + np.tile(mean_y,[n,1]).T[0] figure, ax = plt.subplots(1,1) plt.xlim(-50, 50) plt.ylim(-50, 50) ax.plot(x[:], y[:],'o',color='green', markersize=5, label="green") ax.plot(x_new, y_new, 'o', color='blue', markersize=5, label="red") plt.title('PCA Projection of 2D data into 1D subspace') plt.xlabel("x axis") plt.ylabel("y axis") #add some legend for information p1 = plt.Rectangle((0, 0), 1, 1, fc="r") p2 = plt.Rectangle((0, 0), 1, 1, fc="g") p3 = plt.Rectangle((0, 0), 1, 1, fc="b") plt.legend([p1,p2,p3],["normal projection","2d data","1d projection"],loc='center left', bbox_to_anchor=(1, 0.5)) #plot the projections in red: for i in range(n): ax.plot([x[i],x_new[i]],[y[i],y_new[i]] , color='red') plt.rcParams['figure.figsize'] = 8,8 #number of points n=100 #generate the data a=np.random.randint(1,20) b=np.random.randint(1,20) c=np.random.randint(1,20) d=np.random.randint(1,20) x1=np.random.random_integers(-20,20,n) y1=np.random.random_integers(-20,20,n) z1=-(a*x1+b*y1+d)/c #generate the noise noise=np.random.random_sample([n])*np.random.random_integers(-30,30,n) #the normal unit vector is [a,b,c]/magnitude magnitude=np.sqrt(np.square(a)+np.square(b)+np.square(c)) normal_vec=np.array([a,b,c]/magnitude) #add the noise orthogonally x=x1+noise*normal_vec[0] y=y1+noise*normal_vec[1] z=z1+noise*normal_vec[2] threeD_obsmatrix=np.array([x,y,z]) #to visualize the data, we must plot it. from mpl_toolkits.mplot3d import Axes3D fig = plt.figure() ax=fig.add_subplot(111, projection='3d') #plot the noisy data generated by distorting a plane ax.scatter(x, y, z,marker='o', color='g') ax.set_xlabel('x label') ax.set_ylabel('y label') ax.set_zlabel('z label') plt.legend([p2],["3d data"],loc='center left', bbox_to_anchor=(1, 0.5)) plt.title('Two dimensional subspace with noise') xx, yy = np.meshgrid(range(-30,30), range(-30,30)) zz=-(a * xx + b * yy + d) / c #convert the observation matrix into dense feature matrix. train_features = sg.create_features(threeD_obsmatrix) #PCA(EVD) is choosen since N=100 and D=3 (N>D). #However we can also use PCA(AUTO) as it will automagically choose the appropriate method. preprocessor = sg.create_transformer('PCA', method='EVD') #If we set the target dimension to 2, Shogun would automagically preserve the required 2 eigenvectors(out of 3) according to their #eigenvalues. preprocessor.put('target_dim', 2) preprocessor.fit(train_features) #get the mean for the respective dimensions. mean_datapoints=preprocessor.get('mean_vector') mean_x=mean_datapoints[0] mean_y=mean_datapoints[1] mean_z=mean_datapoints[2] #get the required eigenvectors corresponding to top 2 eigenvalues. E = preprocessor.get('transformation_matrix') #This can be performed by shogun's PCA preprocessor as follows: yn=preprocessor.transform(train_features).get('feature_matrix') new_data=np.dot(E,yn) x_new=new_data[0,:]+np.tile(mean_x,[n,1]).T[0] y_new=new_data[1,:]+np.tile(mean_y,[n,1]).T[0] z_new=new_data[2,:]+np.tile(mean_z,[n,1]).T[0] #all the above points lie on the same plane. To make it more clear we will plot the projection also. 
fig=plt.figure() ax=fig.add_subplot(111, projection='3d') ax.scatter(x, y, z,marker='o', color='g') ax.set_xlabel('x label') ax.set_ylabel('y label') ax.set_zlabel('z label') plt.legend([p1,p2,p3],["normal projection","3d data","2d projection"],loc='center left', bbox_to_anchor=(1, 0.5)) plt.title('PCA Projection of 3D data into 2D subspace') for i in range(100): ax.scatter(x_new[i], y_new[i], z_new[i],marker='o', color='b') ax.plot([x[i],x_new[i]],[y[i],y_new[i]],[z[i],z_new[i]],color='r') plt.rcParams['figure.figsize'] = 10, 10 import os def get_imlist(path): Returns a list of filenames for all jpg images in a directory return [os.path.join(path,f) for f in os.listdir(path) if f.endswith('.pgm')] #set path of the training images path_train=os.path.join(SHOGUN_DATA_DIR, 'att_dataset/training/') #set no. of rows that the images will be resized. k1=100 #set no. of columns that the images will be resized. k2=100 filenames = get_imlist(path_train) filenames = np.array(filenames) #n is total number of images that has to be analysed. n=len(filenames) # we will be using this often to visualize the images out there. def showfig(image): imgplot=plt.imshow(image, cmap='gray') imgplot.axes.get_xaxis().set_visible(False) imgplot.axes.get_yaxis().set_visible(False) from PIL import Image # to get a hang of the data, lets see some part of the dataset images. fig = plt.figure() plt.title('The Training Dataset') for i in range(49): fig.add_subplot(7,7,i+1) train_img=np.array(Image.open(filenames[i]).convert('L')) train_img=np.array(Image.fromarray(train_img).resize([k1,k2])) showfig(train_img) #To form the observation matrix obs_matrix. #read the 1st image. train_img = np.array(Image.open(filenames[0]).convert('L')) #resize it to k1 rows and k2 columns train_img=np.array(Image.fromarray(train_img).resize([k1,k2])) #since features accepts only data of float64 datatype, we do a type conversion train_img=np.array(train_img, dtype='double') #flatten it to make it a row vector. train_img=train_img.flatten() # repeat the above for all images and stack all those vectors together in a matrix for i in range(1,n): temp=np.array(Image.open(filenames[i]).convert('L')) temp=np.array(Image.fromarray(temp).resize([k1,k2])) temp=np.array(temp, dtype='double') temp=temp.flatten() train_img=np.vstack([train_img,temp]) #form the observation matrix obs_matrix=train_img.T train_features = sg.create_features(obs_matrix) preprocessor= sg.create_transformer('PCA', method='AUTO') preprocessor.put('target_dim', 100) preprocessor.fit(train_features) mean=preprocessor.get('mean_vector') #get the required eigenvectors corresponding to top 100 eigenvalues E = preprocessor.get('transformation_matrix') #lets see how these eigenfaces/eigenvectors look like: fig1 = plt.figure() plt.title('Top 20 Eigenfaces') for i in range(20): a = fig1.add_subplot(5,4,i+1) eigen_faces=E[:,i].reshape([k1,k2]) showfig(eigen_faces) #we perform the required dot product. yn=preprocessor.transform(train_features).get('feature_matrix') re=np.tile(mean,[n,1]).T[0] + np.dot(E,yn) #lets plot the reconstructed images. 
fig2 = plt.figure() plt.title('Reconstructed Images from 100 eigenfaces') for i in range(1,50): re1 = re[:,i].reshape([k1,k2]) fig2.add_subplot(7,7,i) showfig(re1) #set path of the training images path_train=os.path.join(SHOGUN_DATA_DIR, 'att_dataset/testing/') test_files=get_imlist(path_train) test_img=np.array(Image.open(test_files[0]).convert('L')) plt.rcParams.update({'figure.figsize': (3, 3)}) #we plot the test image , for which we have to identify a good match from the training images we already have fig = plt.figure() plt.title('The Test Image') showfig(test_img) #We flatten out our test image just the way we have done for the other images test_img=np.array(Image.fromarray(test_img).resize([k1,k2])) test_img=np.array(test_img, dtype=np.float64) test_img=test_img.flatten() #We centralise the test image by subtracting the mean from it. test_f=test_img-mean #We have already projected our training images into pca subspace as yn. train_proj = yn #Projecting our test image into pca subspace test_proj = np.dot(E.T, test_f) #To get Eucledian Distance as the distance measure use EuclideanDistance. workfeat = sg.create_features(np.mat(train_proj)) testfeat = sg.create_features(np.mat(test_proj).T) RaRb = sg.create_distance('EuclideanDistance') RaRb.init(testfeat, workfeat) #The distance between one test image w.r.t all the training is stacked in matrix d. d=np.empty([n,1]) for i in range(n): d[i]= RaRb.distance(0,i) # The one having the minimum distance is found out min_distance_index = d.argmin() iden=np.array(Image.open(filenames[min_distance_index])) plt.title('Identified Image') showfig(iden) <END_TASK>
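The recognition step in the row above projects the centred test image into the eigenface space and picks the training image with the smallest Euclidean distance. Below is a small NumPy-only sketch of that same projection-plus-nearest-neighbour step; the array shapes and random data are purely illustrative, and it is not meant to reproduce Shogun's EuclideanDistance API.

import numpy as np

def nearest_face(test_vec, mean, E, train_proj):
    # Centre the test image, project it onto the eigenface basis E,
    # then return the index of the closest training projection.
    test_proj = E.T.dot(test_vec - mean)
    dists = np.linalg.norm(train_proj - test_proj[:, None], axis=0)
    return int(np.argmin(dists))

# Illustrative shapes: 10000-pixel images, 100 eigenfaces, 400 training images.
rng = np.random.default_rng(0)
E = rng.normal(size=(10000, 100))
mean = rng.normal(size=10000)
train_proj = rng.normal(size=(100, 400))
test_vec = rng.normal(size=10000)
print(nearest_face(test_vec, mean, E, train_proj))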
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Some Formal Background (Skip if you just want code examples) Step2: Step 2 Step3: Step 3 Step4: Step 5 Step5: In the above figure, the blue line is a good fit of the data. It shows the most significant relationship between the data dimensions. Step6: Step 6 Step7: Step 5 and Step 6 can be applied directly with Shogun's PCA preprocessor (from next example). It has been done manually here to show the exhaustive nature of Principal Component Analysis. Step 7 Step8: The new data is plotted below Step9: PCA on a 3d data. Step1 Step10: Step 2 Step11: Step 3 & Step 4 Step12: Steps 5 Step13: Step 7 Step15: PCA Performance Step16: Lets have a look on the data Step17: Represent every image $I_i$ as a vector $\Gamma_i$ Step18: Step 2 Step19: Step 3 & Step 4 Step20: These 20 eigenfaces are not sufficient for a good image reconstruction. Having more eigenvectors gives us the most flexibility in the number of faces we can reconstruct. Though we are adding vectors with low variance, they are in directions of change nonetheless, and an external image that is not in our database could in fact need these eigenvectors to get even relatively close to it. But at the same time we must also keep in mind that adding excessive eigenvectors results in addition of little or no variance, slowing down the process. Step 5 Step21: Step 7 Step22: Recognition part. Step23: Here we have to project our training image as well as the test image on the PCA subspace. Step24: Shogun's way of doing things
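For readers who want the "formal background" steps from this description in one compact function, here is a minimal NumPy sketch of PCA by eigendecomposition of the covariance matrix: centre the data, keep the top eigenvectors, project, and optionally back-project. It follows the same D x N (one observation per column) convention as the Shogun example above, but it is only a sketch and not a substitute for the library's PCA preprocessor.

import numpy as np

def pca_project(X, n_components):
    # X is D x N: one column per observation, as in the Shogun example.
    mean = X.mean(axis=1, keepdims=True)
    Xc = X - mean
    cov = Xc.dot(Xc.T) / (X.shape[1] - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)            # eigenvalues in ascending order
    E = eigvecs[:, ::-1][:, :n_components]            # keep the top components
    scores = E.T.dot(Xc)                              # coordinates in the subspace
    reconstruction = E.dot(scores) + mean             # back-projection into D dimensions
    return scores, reconstruction

# 2D toy data projected onto one principal component.
t = np.linspace(0, 10, 50)
X = np.vstack([t, 2 * t + np.random.normal(0, 1, 50)])
scores, recon = pca_project(X, 1)
print(scores.shape, recon.shape)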
3,213
<ASSISTANT_TASK:> Python Code: __version__ = '0.1.0' __status__ = 'Development' __date__ = '2017-May-25' __author__ = 'Jay Narhan' import os import pandas as pd import numpy as np from collections import Counter META_ROOT = os.path.realpath('../../Meta_Data_Files') + '/' DDSM_META = META_ROOT + 'Ddsm_png.csv' MIAS_META = META_ROOT + 'MIAS.txt' ddsm = pd.read_csv(DDSM_META, skiprows=[0,2]) ddsm = ddsm.replace(np.NAN, "None") ddsm = ddsm.drop(['Type', 'AbType', 'Scanner', 'SubFolder'], axis=1) ddsm['Pathology'].replace('None', 'NORMAL', inplace=True) ddsm['Detection_Res'] = np.where( ddsm['Pathology'].str.match(r'NORMAL'), 'NORMAL', 'ABNORMAL' ) ddsm['View'] = np.where( ddsm['Name'].str.contains(r'CC'), 'CC', 'MLO') ddsm['Patient_ID'] = ddsm['Name'].str.extract(r'^([^.]*)', expand=False) ddsm['Orientation'] = np.where( ddsm['Name'].str.contains(r'LEFT'), 'LEFT', 'RIGHT') ddsm = ddsm.rename(columns= {'Name': 'Image_Name', 'LesionType': 'Lesion_Type', 'Pathology': 'Pathology_Res'}) order = ['Patient_ID', 'Image_Name', 'Orientation', 'View', 'Lesion_Type', 'Detection_Res', 'Pathology_Res'] ddsm = ddsm[order] print 'Length of DDSM meta information (not necessarily files available): {}'.format(len(ddsm)) ddsm.head(n=8) pat_ids = [pat_id for pat_id in ddsm['Patient_ID']] if len(pat_ids) % 4 != 0: # this = number of images and should be divisible by 4! print 'Missing DDSM data!' print 'Number of patients: {}'.format(len(set(pat_ids))) print 'Number of images: {}'.format(len(pat_ids)) # Missing data: counts = Counter(pat_ids) for k,v in counts.iteritems(): if v != 4: print 'DDSM Patient: {0} only has {1} images'.format(k, v) with open(MIAS_META) as f: content = f.readlines() mias = pd.DataFrame(columns=order) mias_patient = 0 for i, row in enumerate(content): line = row.split(' ') img_name = 'mdb' + str(i+1).zfill(3) + '.png' # text file has error in names i.e. 
line[0] - do not use lesion = line[2] if lesion == 'NORM': lesion = 'None' elif lesion == 'CALC': lesion = 'CALCIFICATION' if line[3] != '\n': pathology = line[3] if pathology == 'B': pathology = 'BENIGN' else: pathology = 'MALIGNANT' else: pathology = 'NORMAL' if pathology == 'NORMAL': detection = 'NORMAL' else: detection = 'ABNORMAL' patient_id = 'MIAS_' + str(mias_patient) if i%2 == 0: mias.loc[i] = [patient_id, img_name, 'LEFT', 'MLO', lesion, detection, pathology] else: mias.loc[i] = [patient_id, img_name, 'RIGHT', 'MLO', lesion, detection, pathology] mias_patient += 1 mias.head(n=8) print 'Length of MIAS meta information (not necessarily files): {}'.format(len(mias)) all_data = pd.DataFrame(columns=order) all_data = all_data.append(ddsm) all_data = all_data.append(mias) all_data.to_csv(path_or_buf=META_ROOT +'meta_data_all.csv', index=False) all_data.head() print 'Unique Detection_Res values: {}'.format(set(all_data.Detection_Res)) print 'Unique Pathology_Res values: {}'.format(set(all_data.Pathology_Res)) meta_left = all_data.query('Orientation == "LEFT"') meta_right = all_data.query('Orientation == "RIGHT"') meta = meta_left.merge(meta_right, how='inner', on=['Patient_ID', 'View']) # Long to wide on Patients and type of view del meta['Image_Name_x'] del meta['Image_Name_y'] meta['Image_Name'] = meta.Patient_ID + '_' + meta.View + '.png' print 'Number of records in meta: {:>10}'.format(meta['Patient_ID'].count()) meta.head() meta = pd.DataFrame(meta, columns=('Patient_ID', 'Image_Name', 'View', 'Orientation_x', 'Lesion_Type_x', 'Detection_Res_x', 'Pathology_Res_x', 'Orientation_y', 'Lesion_Type_y', 'Detection_Res_y', 'Pathology_Res_y')) meta.head() malignants = np.where( ((meta['Pathology_Res_x'] == 'MALIGNANT') & (meta['Pathology_Res_y'] == 'MALIGNANT')) | ((meta['Pathology_Res_x'] == 'NORMAL') & (meta['Pathology_Res_y'] == 'MALIGNANT')) | ((meta['Pathology_Res_x'] == 'MALIGNANT') & (meta['Pathology_Res_y'] == 'NORMAL')) ) benigns = np.where( ((meta['Pathology_Res_x'] == 'BENIGN') & (meta['Pathology_Res_y'] == 'BENIGN')) | ((meta['Pathology_Res_x'] == 'NORMAL') & (meta['Pathology_Res_y'] == 'BENIGN')) | ((meta['Pathology_Res_x'] == 'BENIGN') & (meta['Pathology_Res_y'] == 'NORMAL')) | ((meta['Pathology_Res_x'] == 'BENIGN_WITHOUT_CALLBACK') & (meta['Pathology_Res_y'] == 'BENIGN_WITHOUT_CALLBACK')) | ((meta['Pathology_Res_x'] == 'BENIGN_WITHOUT_CALLBACK') & (meta['Pathology_Res_y'] == 'NORMAL')) | ((meta['Pathology_Res_x'] == 'NORMAL') & (meta['Pathology_Res_y'] == 'BENIGN_WITHOUT_CALLBACK')) | ((meta['Pathology_Res_x'] == 'BENIGN') & (meta['Pathology_Res_y'] == 'BENIGN_WITHOUT_CALLBACK')) | ((meta['Pathology_Res_x'] == 'BENIGN_WITHOUT_CALLBACK') & (meta['Pathology_Res_y'] == 'BENIGN')) ) both = np.where( ((meta['Pathology_Res_x'] == 'BENIGN') & (meta['Pathology_Res_y'] == 'MALIGNANT')) | ((meta['Pathology_Res_x'] == 'MALIGNANT') & (meta['Pathology_Res_y'] == 'BENIGN')) | ((meta['Pathology_Res_x'] == 'BENIGN_WITHOUT_CALLBACK') & (meta['Pathology_Res_y'] == 'MALIGNANT')) | ((meta['Pathology_Res_x'] == 'MALIGNANT') & (meta['Pathology_Res_y'] == 'BENIGN_WITHOUT_CALLBACK')) ) normals = np.where( (meta['Pathology_Res_x'] == 'NORMAL') & (meta['Pathology_Res_y'] == 'NORMAL') ) unproven = np.where( (meta['Pathology_Res_x'] == 'UNPROVEN') | (meta['Pathology_Res_y'] == 'UNPROVEN') ) def add_diagnosis(row_indxs, df, label): for item in row_indxs: df.loc[item, 'Pathology_Res'] = label # Pass by reference, no return needed add_diagnosis(malignants, meta, 'MALIGNANT') add_diagnosis(benigns, meta, 
'BENIGN') add_diagnosis(both, meta, 'BENIGN+MALIGNANT') add_diagnosis(normals, meta, 'NORMAL') add_diagnosis(unproven, meta, 'UNPROVEN') meta[9:15] meta['Pathology_Res'].value_counts() meta['Detection_Res'] = np.where(meta['Pathology_Res']=='NORMAL', 'NORMAL', 'ABNORMAL') meta['Detection_Res'].value_counts() meta['Detection_Res'] = np.where(meta['Pathology_Res']=='UNPROVEN', 'UNPROVEN', meta['Detection_Res']) meta['Detection_Res'].value_counts() meta[9:15] cols = ['Patient_ID', 'Image_Name', 'View', 'Detection_Res', 'Pathology_Res'] meta = meta[cols] meta[9:15] mask = [] for f in meta.Image_Name: mask.append(os.path.isfile('/Users/jnarhan/Projects/CUNY_698/Docker-Shared/Data_Diff_Images/ALL_IMGS/' + f)) meta = meta[mask] print 'Number of records in meta: {:>10}'.format(meta['Patient_ID'].count()) print 'Number of Abnormals: {:>4}'.format(sum((meta.Detection_Res == 'ABNORMAL'))) print 'Number of Normals: {:>6}'.format(sum((meta.Detection_Res == 'NORMAL'))) print 'Number of Unproven: {:>5}'.format(sum((meta.Detection_Res == 'UNPROVEN'))) print 'Total: {:18}'.format(meta.Detection_Res.count()) mask = meta['Detection_Res'].isin(['ABNORMAL', 'NORMAL']) detect_meta = meta[mask] set(detect_meta.Detection_Res) print 'Number of Abnormals: {:>4}'.format(sum((detect_meta.Detection_Res == 'ABNORMAL'))) print 'Number of Normals: {:>6}'.format(sum((detect_meta.Detection_Res == 'NORMAL'))) print 'Number of Unproven: {:>5}'.format(sum((detect_meta.Pathology_Res == 'UNPROVEN'))) print 'Total: {:18}'.format(detect_meta.Detection_Res.count()) detect_meta.to_csv(path_or_buf=META_ROOT +'meta_data_detection.csv', index=False) print 'Number of Normals: {:>6}'.format(sum((meta.Pathology_Res == 'NORMAL'))) print 'Number of B&Ms: {:>9}'.format(sum((meta.Pathology_Res == 'BENIGN+MALIGNANT'))) print 'Number of Bs: {:>11}'.format(sum((meta.Pathology_Res == 'BENIGN'))) print 'Number of Ms: {:>11}'.format(sum((meta.Pathology_Res == 'MALIGNANT'))) print 'Number of Unproven: {:>5}'.format(sum((meta.Pathology_Res == 'UNPROVEN'))) print 'Total: {:18}'.format(meta.Pathology_Res.count()) #mask = meta['Detection_Res'].isin(['BENIGN', 'MALIGNANT', 'NORMAL']) mask = meta['Pathology_Res'].isin(['BENIGN', 'MALIGNANT']) diagnosis_meta = meta[mask] set(diagnosis_meta.Pathology_Res) print 'Number of Benigns: {:>7}'.format(sum((diagnosis_meta.Pathology_Res == 'BENIGN'))) print 'Number of Malignants: {:}'.format(sum((diagnosis_meta.Pathology_Res == 'MALIGNANT'))) print 'Total: {:19}'.format(diagnosis_meta.Pathology_Res.count()) diagnosis_meta.to_csv(path_or_buf=META_ROOT +'meta_data_diagnosis.csv', index=False) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: <h2>Read DDSM Meta Data Step2: <h2>Read in MIAS Meta Data Step3: <h3>Create meta_data_all.csv</h3> Step4: Step5: <h2>Creating Meta Data for Differencing Use</h2> Step6: Important Step7: Detection Labels Step8: Drop unproven cases, as it is uncertain whether an abnormality exists. Step9: Diagnosis Labels Step10: B&M's (92) and Unproven (16) will need to be removed from the differencing analysis when it is run, as the pathology cannot be proven in the latter case (in one or both of the breasts), and because handling B&M's through differencing is unclear.
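The combined left/right pathology label above is built from a long chain of np.where conditions. As a hedged aside rather than part of the original notebook, the same multi-way mapping can be written with np.select, which keeps the condition/label pairs side by side; the toy frame below covers only the NORMAL/BENIGN/MALIGNANT/both cases and deliberately ignores the UNPROVEN handling.

import numpy as np
import pandas as pd

# Toy frame with one pathology column per breast, mirroring Pathology_Res_x / _y above.
df = pd.DataFrame({
    'Pathology_Res_x': ['NORMAL', 'BENIGN', 'MALIGNANT', 'BENIGN'],
    'Pathology_Res_y': ['NORMAL', 'NORMAL', 'BENIGN', 'BENIGN'],
})

is_mal_x = df['Pathology_Res_x'].eq('MALIGNANT')
is_mal_y = df['Pathology_Res_y'].eq('MALIGNANT')
is_ben_x = df['Pathology_Res_x'].isin(['BENIGN', 'BENIGN_WITHOUT_CALLBACK'])
is_ben_y = df['Pathology_Res_y'].isin(['BENIGN', 'BENIGN_WITHOUT_CALLBACK'])

conditions = [
    (is_mal_x & is_ben_y) | (is_ben_x & is_mal_y),   # both labels present across the two sides
    is_mal_x | is_mal_y,                              # malignant on at least one side
    is_ben_x | is_ben_y,                              # benign on at least one side
]
choices = ['BENIGN+MALIGNANT', 'MALIGNANT', 'BENIGN']
df['Pathology_Res'] = np.select(conditions, choices, default='NORMAL')
print(df)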
3,214
<ASSISTANT_TASK:> Python Code: import codecs with codecs.open("imdb_labelled.txt", "r", "utf-8") as arquivo: vetor = [] for linha in arquivo: vetor.append(linha) with codecs.open("amazon_cells_labelled.txt", "r", "utf-8") as arquivo: for linha in arquivo: vetor.append(linha) with codecs.open("yelp_labelled.txt", "r", "utf-8") as arquivo: for linha in arquivo: vetor.append(linha) vetor = [ x[:-1] for x in vetor ] vetor = ([s.replace('&', '').replace(' - ', '').replace('.', '').replace(',', '').replace('!', ''). replace('+', '')for s in vetor]) TextosQuebrados = [ x[:-4] for x in vetor ] TextosQuebrados = map(lambda X:X.lower(),TextosQuebrados) #TextosQuebrados = [x.split(' ') for x in TextosQuebrados] TextosQuebrados = [nltk.tokenize.word_tokenize(frase) for frase in TextosQuebrados] import nltk stopwords = nltk.corpus.stopwords.words('english') stemmer = nltk.stem.RSLPStemmer() dicionario = set() for comentarios in TextosQuebrados: validas = [stemmer.stem(palavra) for palavra in comentarios if palavra not in stopwords and len(palavra) > 0] dicionario.update(validas) totalDePalavras = len(dicionario) tuplas = zip(dicionario, xrange(totalDePalavras)) tradutor = {palavra:indice for palavra,indice in tuplas} def vetorizar_texto(texto, tradutor, stemmer): vetor = [0] * len(tradutor) for palavra in texto: if len(palavra) > 0: raiz = stemmer.stem(palavra) if raiz in tradutor: posicao = tradutor[raiz] vetor[posicao] += 1 return vetor vetoresDeTexto = [vetorizar_texto(texto, tradutor,stemmer) for texto in TextosQuebrados] X = vetoresDeTexto Y = [ x[-1:] for x in vetor ] porcentagem_de_treino = 0.8 tamanho_do_treino = porcentagem_de_treino * len(Y) tamanho_de_validacao = len(Y) - tamanho_do_treino treino_dados = X[0:int(tamanho_do_treino)] treino_marcacoes = Y[0:int(tamanho_do_treino)] validacao_dados = X[int(tamanho_do_treino):] validacao_marcacoes = Y[int(tamanho_do_treino):] fim_de_teste = tamanho_do_treino + tamanho_de_validacao teste_dados = X[int(tamanho_do_treino):int(fim_de_teste)] teste_marcacoes = Y[int(tamanho_do_treino):int(fim_de_teste)] from sklearn import svm from sklearn.model_selection import cross_val_score k = 10 # Implement poly SVC poly_svc = svm.SVC(kernel='linear') accuracy_poly_svc = cross_val_score(poly_svc, treino_dados, treino_marcacoes, cv=k, scoring='accuracy') print('poly_svc: ', accuracy_poly_svc.mean()) def fit_and_predict(modelo, treino_dados, treino_marcacoes, teste_dados, teste_marcacoes): modelo.fit(treino_dados, treino_marcacoes) resultado = modelo.predict(teste_dados) acertos = (resultado == teste_marcacoes) total_de_acertos = sum(acertos) total_de_elementos = len(teste_dados) taxa_de_acerto = float(total_de_acertos) / float(total_de_elementos) print(taxa_de_acerto) return taxa_de_acerto resultados = {} from sklearn.naive_bayes import MultinomialNB modeloMultinomial = MultinomialNB() resultadoMultinomial = fit_and_predict(modeloMultinomial, treino_dados, treino_marcacoes, teste_dados, teste_marcacoes) resultados[resultadoMultinomial] = modeloMultinomial from sklearn.ensemble import GradientBoostingClassifier classificador = GradientBoostingClassifier(n_estimators=100, learning_rate=1.0, max_depth=1, random_state=0).fit(treino_dados, treino_marcacoes) resultado = fit_and_predict(classificador, treino_dados, treino_marcacoes, teste_dados, teste_marcacoes) from sklearn.naive_bayes import GaussianNB classificador = GaussianNB() resultado = fit_and_predict(classificador, treino_dados, treino_marcacoes, teste_dados, teste_marcacoes) from sklearn.naive_bayes import 
BernoulliNB classificador = BernoulliNB() resultado = fit_and_predict(classificador, treino_dados, treino_marcacoes, teste_dados, teste_marcacoes) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Then we must remove the line break at the end of each line, i.e. the '\n'. Step2: Next, we strip the last two characters, leaving only our comment, and convert it to lowercase. Step4: A poly SVC approach was chosen Step5: Result - Poly Step6: With further data refinement Step7: GradientBoostingClassifier Step8: Gaussian
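The row above builds its bag-of-words vectors by hand (a stemmed dictionary plus vetorizar_texto). As an illustrative alternative rather than a claim about the original notebook, scikit-learn's CountVectorizer produces the same kind of document-term matrix in a couple of lines; the sentences and labels below are invented.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB

texts = ['great phone, works fine', 'terrible battery life', 'loved it, would buy again',
         'waste of money', 'screen is excellent', 'stopped working after a week',
         'very comfortable to hold', 'awful customer service']
labels = [1, 0, 1, 0, 1, 0, 1, 0]

vectorizer = CountVectorizer(stop_words='english', lowercase=True)
X = vectorizer.fit_transform(texts)          # sparse document-term matrix

X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.25, random_state=0)
model = MultinomialNB().fit(X_train, y_train)
print(model.score(X_test, y_test))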
3,215
<ASSISTANT_TASK:> Python Code: class Mesa(object): cantidad_de_patas = None color = None material = None mi_mesa = Mesa() mi_mesa.cantidad_de_patas = 4 mi_mesa.color = 'Marrón' mi_mesa.material = 'Madera' print 'Tendo una mesa de {0.cantidad_de_patas} patas de color {0.color} y esta hecha de {0.material}'.format(mi_mesa) class Mesa(object): cantidad_de_patas = None color = None material = None def __init__(self, patas, color, material): self.cantidad_de_patas = patas self.color = color self.material = material mi_mesa = Mesa(4, 'Marrón', 'Madera') print 'Tendo una mesa de {0.cantidad_de_patas} patas de color {0.color} y esta hecha de {0.material}'.format(mi_mesa) class TablaRectangular(object): base = None altura = None def __init__(self, base, altura): self.base = base self.altura = altura class TablaRedonda(object): radio = None def __init__(self, radio): self.radio = radio class Pata(object): altura = None def __init__(self, altura): self.altura = altura class Mesa(object): tabla = None patas = None def __init__(self, tabla, patas): self.tabla = tabla self.patas = patas tabla = TablaRectangular(100, 150) pata_1 = Pata(90) pata_2 = Pata(90) pata_3 = Pata(90) pata_4 = Pata(90) mi_mesa = Mesa(tabla, [pata_1, pata_2, pata_3, pata_4]) import math class TablaRectangular(object): base = None altura = None def __init__(self, base, altura): self.base = base self.altura = altura def calcular_superficie(self): return self.base * self.altura class TablaRedonda(object): radio = None def __init__(self, radio): self.radio = radio def calcular_superficie(self): return math.pi * self.radio**2 class Pata(object): altura = None def __init__(self, altura): self.altura = altura class Mesa(object): tabla = None patas = None def __init__(self, tabla, patas): self.tabla = tabla self.patas = patas def obtener_superficie_de_apoyo(self): return self.tabla.calcular_superficie() tabla = TablaRectangular(100, 150) pata_1 = Pata(90) pata_2 = Pata(90) pata_3 = Pata(90) pata_4 = Pata(90) mi_mesa = Mesa(tabla, [pata_1, pata_2, pata_3, pata_4]) sup = mi_mesa.obtener_superficie_de_apoyo() print 'La superficie de la mesa es {} cm2'.format(sup) class Alumno(object): def __init__(self, padron, nombre, apellido): self.padron = padron self.nombre = nombre self.apellido = apellido self.parciales = [] self.tps = [] self.coloquios = [] def rendir_parcial(self, nota): self.parciales.append(nota) def entregar_trabajo_practico(self, nota): self.tps.append(nota) def rendir_coloquio(self, nota): self.coloquios.append(nota) def aprobo_algun_parcial(self): aprobo_alguno = False for nota in self.parciales: if nota >= 4: aprobo_alguno = True return aprobo_alguno def aprobo_todos_los_tp(self): aprobo_todos = True for nota in self.tps: if nota < 4: aprobo_todos = False return aprobo_todos def puede_rendir_coloquio(self): return self.aprobo_algun_parcial() and self.aprobo_todos_los_tp() alum = Alumno(12345, 'Juan', 'Perez') alum.rendir_parcial(2) alum.entregar_trabajo_practico(7) alum.entregar_trabajo_practico(9) if alum.puede_rendir_coloquio(): print 'El alumno puede rendir coloquio' else: print 'El alumno no puede rendor coloquio' print '¿Y si después rinde el parcial y se saca un 7?' alum.rendir_parcial(7) if alum.puede_rendir_coloquio(): print 'El alumno puede rendir coloquio' else: print 'El alumno no puede rendor coloquio' <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Now, if I am always going to have to define those characteristics of the table in order to use it, the most convenient thing is to define the __init__ method, which is used to initialize the object Step2: As we can see, the __init__ method (although in fact the same happens with almost every method of the class) receives as its first parameter one called self. The name does not actually have to be that one, but it is used by convention. <br> Step3: And as we said before, an object does not only group its characteristics, it also groups the methods that let us work with it, such as, for example, computing its supporting surface area Step4: In this case, it is important to see not only how a method of an object is invoked (by writing the object's name, a dot, and the method's name followed by all its parameters in parentheses) but also how the use of several objects can be combined. <br>
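To round off the composition example in this description, the sketch below adds a hypothetical altura_total method that combines the composed Pata objects; the method name and the "shortest leg" rule are inventions for illustration and are not part of the original tutorial.

class Pata(object):
    def __init__(self, altura):
        self.altura = altura

class Mesa(object):
    def __init__(self, tabla, patas):
        self.tabla = tabla
        self.patas = patas

    def altura_total(self):
        # A wobbly table stands only as tall as its shortest leg allows.
        return min(pata.altura for pata in self.patas)

patas = [Pata(90), Pata(90), Pata(88), Pata(90)]
mesa = Mesa(tabla=None, patas=patas)
print('Altura de la mesa: {} cm'.format(mesa.altura_total()))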
3,216
<ASSISTANT_TASK:> Python Code: from bruges.transform import CoordTransform corner_ix = [[0, 0], [0, 3], [3, 0]] corner_xy = [[5000, 6000], [5000-23.176, 6000+71.329], [5000+142.658, 6000+46.353]] transform = CoordTransform(corner_ix, corner_xy) for i in range(4): for j in range(4): print(transform([i, j])) import pandas as pd import xarray as xr df = pd.read_csv('data.csv') df = df.set_index(['iline', 'xline']) da = xr.DataArray.from_series(df.z) da.plot() from bruges.transform import CoordTransform corner_ix = [[0, 0], [0, 3], [3, 0]] corner_xy = [[5000, 6000], [5000-23.176, 6000+71.329], [5000+142.658, 6000+46.353]] transform = CoordTransform(corner_ix, corner_xy) for i in range(4): for j in range(4): print(transform([i, j])) arr <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Adding (x, y) coordinates
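The CoordTransform call above maps (inline, crossline) indices to (x, y) coordinates from three control points. For intuition, here is a plain-NumPy sketch of the same idea: fit an affine map from the corner indices to the corner coordinates and apply it to every grid node. It is not bruges' implementation, just the underlying affine transform.

import numpy as np

corner_ix = np.array([[0, 0], [0, 3], [3, 0]], dtype=float)
corner_xy = np.array([[5000.0, 6000.0],
                      [5000.0 - 23.176, 6000.0 + 71.329],
                      [5000.0 + 142.658, 6000.0 + 46.353]])

# Solve [i, j, 1] @ coeffs = (x, y) from the three control points.
A = np.hstack([corner_ix, np.ones((3, 1))])
coeffs, *_ = np.linalg.lstsq(A, corner_xy, rcond=None)

def to_xy(i, j):
    return np.array([i, j, 1.0]).dot(coeffs)

for i in range(4):
    for j in range(4):
        print(i, j, to_xy(i, j))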
3,217
<ASSISTANT_TASK:> Python Code: %matplotlib inline import numpy as np import matplotlib.pyplot as plt np.random.seed(10) dossageEffectiveness = abs(np.random.normal(5.0, 1.5, 1000)) repurchaseRate = (dossageEffectiveness + np.random.normal(0, 0.1, 1000)) * 3 repurchaseRate/=np.max(repurchaseRate) plt.scatter(dossageEffectiveness, repurchaseRate) plt.show() from scipy import stats slope, intercept, r_value, p_value, std_err = stats.linregress(dossageEffectiveness, repurchaseRate) r_value ** 2 def predict(x): return slope * x + intercept fitLine = predict(dossageEffectiveness) plt.scatter(dossageEffectiveness, repurchaseRate) plt.plot(dossageEffectiveness, fitLine, c='r') plt.show() repurchaseRate = np.random.normal(1, 0.1, 1000)*dossageEffectiveness**2 poly = np.poly1d(np.polyfit(dossageEffectiveness, repurchaseRate, 4)) xPoly = np.linspace(0, 7, 100) plt.scatter(dossageEffectiveness, repurchaseRate) plt.plot(xPoly, poly(xPoly), c='r') plt.show() dossageEffectiveness = np.sort(dossageEffectiveness) repurchaseRate = (dossageEffectiveness + np.random.normal(0, 1, 1000)) * 3 repurchaseRate/=np.max(repurchaseRate) angles = np.sort(np.random.uniform(0,np.pi,1000)) cs = np.sin(angles) repurchaseRateComplicated = repurchaseRate+(cs*100) repurchaseRateComplicated/=np.max(repurchaseRateComplicated) poly = np.poly1d(np.polyfit(dossageEffectiveness, repurchaseRateComplicated, 9)) xPoly = np.linspace(0, 7, 100) plt.scatter(dossageEffectiveness, repurchaseRateComplicated) plt.plot(xPoly, poly(xPoly), c='r') plt.show() ## for more code and details, see NaiveBayes.ipynb import os import io import numpy from pandas import DataFrame from sklearn.feature_extraction.text import CountVectorizer from sklearn.naive_bayes import MultinomialNB ## *** Read in the emails and their classification *** def readFiles(path): # NO CODE HERE, JUST READ IN FILES FROM A DIR # AND RETURN: FULL PATH AND MESSAGES BODY def dataFrameFromDirectory(path, classification): rows = [] index = [] for filename, message in readFiles(path): rows.append({'message': message, 'class': classification}) index.append(filename) return DataFrame(rows, index=index) data = DataFrame({'message': [], 'class': []}) data = data.append(dataFrameFromDirectory('spamdir', 'spam')) #not real file/dir, just ex data = data.append(dataFrameFromDirectory('hamdir','ham'))#not real file/dir, just ex ## *** Done reading in data *** # vectorize email contents to numbers vectorizer = CountVectorizer() counts = vectorizer.fit_transform(data['message'].values) # make multinomial Naive Bayes object/func classifier = MultinomialNB() targets = data['class'].values # fit vectorized emails classifier.fit(counts, targets) # Check it worked with obviouse test casese examples = ['Free Viagra now!!!', "Hi Bob, how about a game of golf tomorrow?"] example_counts = vectorizer.transform(examples) predictions = classifier.predict(example_counts) predictions !pip install --upgrade graphviz import numpy as np import pandas as pd from sklearn import tree input_file = "PastHires.csv" df = pd.read_csv(input_file, header = 0) d = {'Y': 1, 'N': 0} df['Hired'] = df['Hired'].map(d) d = {'BS': 0, 'MS': 1, 'PhD': 2} df['Level of Education'] = df['Level of Education'].map(d) features = list(df.columns[:6]) clf = tree.DecisionTreeClassifier() clf = clf.fit(features,decisions) from sklearn.ensemble import RandomForestClassifier clf = RandomForestClassifier(n_estimators=10) clf = clf.fit(features,decisions) #Predict employment of an employed 10-year veteran print clf.predict([[10, 1, 4, 0, 0, 0]]) 
#...and an unemployed 10-year veteran print clf.predict([[10, 0, 4, 0, 0, 0]]) from sklearn import svm, datasets C = 1.0 #error penalty. 1 is default. svc = svm.SVC(kernel='linear', C=C).fit(features, classifications) #Check for another set of features svc.predict([[200000, 40]]) #output will be classification for those features import pandas as pd ## see 'SimilarMovies.ipynb' for basics of finding similar movies ## see 'ItemBasedCF.ipynb' for improved filtering and results ratings = pd.read_csv('ratingsData') # not a real file, just ex movies = movies = pd.read_csv('items')# not a real file, just ex userRatings = ratings.pivot_table(index=['user_id'],columns=['title'],values='rating') # Calculate item correlations corrMatrix = userRatings.corr() # ex of simple cleaning corrMatrix = userRatings.corr(method='pearson', min_periods=100) # group results and return top matches simCandidates = simCandidates.groupby(simCandidates.index).sum() simCandidates.sort_values(inplace = True, ascending = False) simCandidates.head(10) # filter out those current user has seen or bought filteredSims = simCandidates.drop(myRatings.index) filteredSims.head(10) ## further filtering ideas for this example at bottom of ItemBasedCF.ipynb import pandas as pd ### for more real code, see KNN.ipynb # bring in the data ratings = pd.read_csv("data")# not a real file, just ex # group by features of interst movieProperties = ratings.groupby('movie_id').agg({'rating': [np.size, np.mean]}) # normalize features of interest for classification movieNumRatings = pd.DataFrame(movieProperties['rating']['size']) movieNormalizedNumRatings = movieNumRatings.apply(lambda x: (x - np.min(x)) / (np.max(x) - np.min(x))) from scipy import spatial def ComputeDistance(a, b): Function to comput distance between two items. genresA = a[1] genresB = b[1] genreDistance = spatial.distance.cosine(genresA, genresB) popularityA = a[2] popularityB = b[2] popularityDistance = abs(popularityA - popularityB) return genreDistance + popularityDistance import operator def getNeighbors(movieID, K): Get KNN and return sorted neighbors. distances = [] for movie in movieDict: if (movie != movieID): dist = ComputeDistance(movieDict[movieID], movieDict[movie]) distances.append((movie, dist)) distances.sort(key=operator.itemgetter(1)) neighbors = [] for x in range(K): neighbors.append(distances[x][0]) return neighbors ## again, see KNN.ipynb to see how the results can be ## displayed to see how it went from sklearn.datasets import load_iris from sklearn.decomposition import PCA import pylab as pl from itertools import cycle # load data iris = load_iris() #numSamples, numFeatures = iris.data.shape # apply PCA X = iris.data pca = PCA(n_components=2, whiten=True).fit(X) X_pca = pca.transform(X) print pca.components_ #check remaining variance print pca.explained_variance_ratio_ print sum(pca.explained_variance_ratio_) #1.0 would implies 100% variance kept ## see rest of PCA.ipynb to see how to plot resuls <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Modify this to a multivariate/polynomial regression example Step2: Make distribution more complicated to see if scikit-learn can fit it Step3: With a high-N polynomial, it is unlikely to hold up to future testing and only fits the test data well. Step4: K-Means Clustering Step5: Decision Trees Step6: push these into a 'features' list Step7: use graphviz to display resulting tree Step8: Ensemble Learning Step9: Recommender Systems Steps Step10: Calculate correlation between rating/frequency bought with pandas Step11: Use cleaned correlations array(s) to make recommendations Step14: K-Nearest Neighbours (KNN) Steps Step15: Principle Component Analysis (PCA)
3,218
<ASSISTANT_TASK:> Python Code: %matplotlib nbagg import numpy as np import matplotlib.pyplot as plt from plots import plot_tree_interactive plot_tree_interactive() from plots import plot_forest_interactive plot_forest_interactive() from sklearn import grid_search from sklearn.datasets import load_digits from sklearn.cross_validation import train_test_split from sklearn.ensemble import RandomForestClassifier digits = load_digits() X, y = digits.data, digits.target X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42) rf = RandomForestClassifier(n_estimators=200, n_jobs=-1) parameters = {'max_features':['sqrt', 'log2'], 'max_depth':[5, 7, 9]} clf_grid = grid_search.GridSearchCV(rf, parameters) clf_grid.fit(X_train, y_train) clf_grid.score(X_train, y_train) clf_grid.score(X_test, y_test) # %load solutions/forests.py <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Decision Tree Classification Step2: Random Forests Step3: Selecting the Optimal Estimator via Cross-Validation Step4: Exercises
3,219
<ASSISTANT_TASK:> Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'nerc', 'sandbox-2', 'aerosol') # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.scheme_scope') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "troposhere" # "stratosphere" # "mesosphere" # "mesosphere" # "whole atmosphere" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.basic_approximations') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "3D mass/volume ratio for aerosols" # "3D number concenttration for aerosols" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.family_approach') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Uses atmospheric chemistry time stepping" # "Specific timestepping (operator splitting)" # "Specific timestepping (integrated)" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Explicit" # "Implicit" # "Semi-implicit" # "Semi-analytic" # "Impact solver" # "Back Euler" # "Newton Raphson" # "Rosenbrock" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.resolution.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.transport.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.aerosol.transport.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Uses Atmospheric chemistry transport scheme" # "Specific transport scheme (eulerian)" # "Specific transport scheme (semi-lagrangian)" # "Specific transport scheme (eulerian and semi-lagrangian)" # "Specific transport scheme (lagrangian)" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Uses Atmospheric chemistry transport scheme" # "Mass adjustment" # "Concentrations positivity" # "Gradients monotonicity" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.transport.convention') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Uses Atmospheric chemistry transport scheme" # "Convective fluxes connected to tracers" # "Vertical velocities connected to tracers" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Prescribed (climatology)" # "Prescribed CMIP6" # "Prescribed above surface" # "Interactive" # "Interactive above surface" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.sources') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Vegetation" # "Volcanos" # "Bare ground" # "Sea surface" # "Lightning" # "Fires" # "Aircraft" # "Anthropogenic" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Interannual" # "Annual" # "Monthly" # "Daily" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.other_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.concentrations.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.model.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.model.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Dry deposition" # "Sedimentation" # "Wet deposition (impaction scavenging)" # "Wet deposition (nucleation scavenging)" # "Coagulation" # "Oxidation (gas phase)" # "Oxidation (in cloud)" # "Condensation" # "Ageing" # "Advection (horizontal)" # "Advection (vertical)" # "Heterogeneous chemistry" # "Nucleation" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.model.coupling') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Radiation" # "Land surface" # "Heterogeneous chemistry" # "Clouds" # "Ocean" # "Cryosphere" # "Gas phase chemistry" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.model.gas_phase_precursors') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "DMS" # "SO2" # "Ammonia" # "Iodine" # "Terpene" # "Isoprene" # "VOC" # "NOx" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.model.scheme_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Bulk" # "Modal" # "Bin" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.model.bulk_scheme_species') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Sulphate" # "Nitrate" # "Sea salt" # "Dust" # "Ice" # "Organic" # "Black carbon / soot" # "SOA (secondary organic aerosols)" # "POM (particulate organic matter)" # "Polar stratospheric ice" # "NAT (Nitric acid trihydrate)" # "NAD (Nitric acid dihydrate)" # "STS (supercooled ternary solution aerosol particule)" # "Other: [Please specify]" # TODO - please enter value(s) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Document Authors Step2: Document Contributors Step3: Document Publication Step4: Document Table of Contents Step5: 1.2. Model Name Step6: 1.3. Scheme Scope Step7: 1.4. Basic Approximations Step8: 1.5. Prognostic Variables Form Step9: 1.6. Number Of Tracers Step10: 1.7. Family Approach Step11: 2. Key Properties --&gt; Software Properties Step12: 2.2. Code Version Step13: 2.3. Code Languages Step14: 3. Key Properties --&gt; Timestep Framework Step15: 3.2. Split Operator Advection Timestep Step16: 3.3. Split Operator Physical Timestep Step17: 3.4. Integrated Timestep Step18: 3.5. Integrated Scheme Type Step19: 4. Key Properties --&gt; Meteorological Forcings Step20: 4.2. Variables 2D Step21: 4.3. Frequency Step22: 5. Key Properties --&gt; Resolution Step23: 5.2. Canonical Horizontal Resolution Step24: 5.3. Number Of Horizontal Gridpoints Step25: 5.4. Number Of Vertical Levels Step26: 5.5. Is Adaptive Grid Step27: 6. Key Properties --&gt; Tuning Applied Step28: 6.2. Global Mean Metrics Used Step29: 6.3. Regional Metrics Used Step30: 6.4. Trend Metrics Used Step31: 7. Transport Step32: 7.2. Scheme Step33: 7.3. Mass Conservation Scheme Step34: 7.4. Convention Step35: 8. Emissions Step36: 8.2. Method Step37: 8.3. Sources Step38: 8.4. Prescribed Climatology Step39: 8.5. Prescribed Climatology Emitted Species Step40: 8.6. Prescribed Spatially Uniform Emitted Species Step41: 8.7. Interactive Emitted Species Step42: 8.8. Other Emitted Species Step43: 8.9. Other Method Characteristics Step44: 9. Concentrations Step45: 9.2. Prescribed Lower Boundary Step46: 9.3. Prescribed Upper Boundary Step47: 9.4. Prescribed Fields Mmr Step48: 9.5. Prescribed Fields Mmr Step49: 10. Optical Radiative Properties Step50: 11. Optical Radiative Properties --&gt; Absorption Step51: 11.2. Dust Step52: 11.3. Organics Step53: 12. Optical Radiative Properties --&gt; Mixtures Step54: 12.2. Internal Step55: 12.3. Mixing Rule Step56: 13. Optical Radiative Properties --&gt; Impact Of H2o Step57: 13.2. Internal Mixture Step58: 14. Optical Radiative Properties --&gt; Radiative Scheme Step59: 14.2. Shortwave Bands Step60: 14.3. Longwave Bands Step61: 15. Optical Radiative Properties --&gt; Cloud Interactions Step62: 15.2. Twomey Step63: 15.3. Twomey Minimum Ccn Step64: 15.4. Drizzle Step65: 15.5. Cloud Lifetime Step66: 15.6. Longwave Bands Step67: 16. Model Step68: 16.2. Processes Step69: 16.3. Coupling Step70: 16.4. Gas Phase Precursors Step71: 16.5. Scheme Type Step72: 16.6. Bulk Scheme Species
3,220
<ASSISTANT_TASK:> Python Code: import os import sys # Modify the path sys.path.append("..") import yellowbrick as yb import matplotlib.pyplot as plt from download import download_all from sklearn.datasets.base import Bunch ## The path to the test data sets FIXTURES = os.path.join(os.getcwd(), "data") ## Dataset loading mechanisms datasets = { "hobbies": os.path.join(FIXTURES, "hobbies") } def load_data(name, download=True): Loads and wrangles the passed in text corpus by name. If download is specified, this method will download any missing files. # Get the path from the datasets path = datasets[name] # Check if the data exists, otherwise download or raise if not os.path.exists(path): if download: download_all() else: raise ValueError(( "'{}' dataset has not been downloaded, " "use the download.py module to fetch datasets" ).format(name)) # Read the directories in the directory as the categories. categories = [ cat for cat in os.listdir(path) if os.path.isdir(os.path.join(path, cat)) ] files = [] # holds the file names relative to the root data = [] # holds the text read from the file target = [] # holds the string of the category # Load the data from the files in the corpus for cat in categories: for name in os.listdir(os.path.join(path, cat)): files.append(os.path.join(path, cat, name)) target.append(cat) with open(os.path.join(path, cat, name), 'r') as f: data.append(f.read()) # Return the data bunch for use similar to the newsgroups example return Bunch( categories=categories, files=files, data=data, target=target, ) from yellowbrick.text import TSNEVisualizer from sklearn.feature_extraction.text import TfidfVectorizer # Load the data and create document vectors corpus = load_data('hobbies') tfidf = TfidfVectorizer() docs = tfidf.fit_transform(corpus.data) labels = corpus.target # Create the visualizer and draw the vectors tsne = TSNEVisualizer() tsne.fit(docs, labels) tsne.poof() # Only visualize the sports, cinema, and gaming classes tsne = TSNEVisualizer(classes=['sports', 'cinema', 'gaming']) tsne.fit(docs, labels) tsne.poof() # Don't color points with their classes tsne = TSNEVisualizer() tsne.fit(docs) tsne.poof() # Apply clustering instead of class names. 
from sklearn.cluster import MiniBatchKMeans clusters = MiniBatchKMeans(n_clusters=5) clusters.fit(docs) tsne = TSNEVisualizer() tsne.fit(docs, ["c{}".format(c) for c in clusters.labels_]) tsne.poof() from yellowbrick.text.freqdist import FreqDistVisualizer from sklearn.feature_extraction.text import CountVectorizer vectorizer = CountVectorizer() docs = vectorizer.fit_transform(corpus.data) features = vectorizer.get_feature_names() visualizer = FreqDistVisualizer() visualizer.fit(docs, features) visualizer.poof() vectorizer = CountVectorizer(stop_words='english') docs = vectorizer.fit_transform(corpus.data) features = vectorizer.get_feature_names() visualizer = FreqDistVisualizer() visualizer.fit(docs, features) visualizer.poof() hobby_types = {} for category in corpus['categories']: texts = [] for idx in range(len(corpus['data'])): if corpus['target'][idx] == category: texts.append(corpus['data'][idx]) hobby_types[category] = texts vectorizer = CountVectorizer(stop_words='english') docs = vectorizer.fit_transform(text for text in hobby_types['cooking']) features = vectorizer.get_feature_names() visualizer = FreqDistVisualizer() visualizer.fit(docs, features) visualizer.poof() vectorizer = CountVectorizer(stop_words='english') docs = vectorizer.fit_transform(text for text in hobby_types['gaming']) features = vectorizer.get_feature_names() visualizer = FreqDistVisualizer() visualizer.fit(docs, features) visualizer.poof() <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step2: Load Text Corpus for Example Code Step3: t-SNE Step4: Frequency Distribution Visualization Step5: Note that the FreqDistVisualizer does not perform any normalization or vectorization, and it expects text that has already been count vectorized. Step6: Visualizing Stopwords Removal Step7: Visualizing tokens across corpora
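For readers without Yellowbrick available, a minimal, hedged sketch of the equivalent plain scikit-learn/matplotlib pipeline behind such a t-SNE text plot follows; the toy corpus, the SVD compression step and every parameter value are illustrative assumptions, not part of the original notebook.
import matplotlib.pyplot as plt
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.manifold import TSNE

# Toy corpus standing in for the hobbies data (assumption)
corpus = ["the cat sat on the mat", "dogs chase cats", "stocks rallied today",
          "the market fell sharply", "cats and dogs as pets", "bond yields rose"]
labels = ["pets", "pets", "finance", "finance", "pets", "finance"]

docs = TfidfVectorizer().fit_transform(corpus)              # sparse document-term matrix
reduced = TruncatedSVD(n_components=5).fit_transform(docs)  # compress before t-SNE
coords = TSNE(n_components=2, perplexity=2, random_state=0).fit_transform(reduced)

for lab in sorted(set(labels)):                             # one colour per class
    idx = [i for i, l in enumerate(labels) if l == lab]
    plt.scatter(coords[idx, 0], coords[idx, 1], label=lab)
plt.legend()
plt.show()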
3,221
<ASSISTANT_TASK:> Python Code: from __future__ import unicode_literals, print_function from axon.api import loads, dumps from axon.objects import node, attribute, Attribute, Node from axon.objects import Builder, register_builder from axon import dump_as_str, as_unicode, factory, reduce from xml.etree import ElementTree import json from io import StringIO @reduce(ElementTree.Element) def element_reduce(elem): children = elem.getchildren() children = children[:] if elem.text and elem.text.strip(): children.append(elem.text) return node(elem.tag, elem.attrib, children) @reduce(ElementTree.ElementTree) def etree_reduce(element): return element_reduce(element.getroot()) class ElementTreeBuilder(Builder): def node(self, name, attrs, vals): str_type = type(u'') if type(vals[-1]) is str_type: text = vals.pop(-1) else: text = None attribs = {} children = [] if attrs: for name, val in attrs.items(): attribs[name] = val if vals: for val in vals: children.append(val) e = ElementTree.Element(name, attribs) if children: e.extend(children) if text: e.text = text return e register_builder('etree', ElementTreeBuilder()) xml_text = u <person> <name>John Smith</name> <age>25</age> <address type="home"> <street>21 2nd Street</street> <city>New York</city> <state>NY</state> </address> <address type="current"> <street>1410 NE Campus Parkway</street> <city>Seattle</city> <state>WA</state> </address> <phone type="home">212-555-1234</phone> <phone type="fax">646-555-4567</phone> </person> tree = ElementTree.parse(StringIO(xml_text)) ElementTree.dump(tree) ElementTree.dump(tree) axon_text = dumps([tree], pretty=1, braces=1) print(axon_text) xml_tree = loads(axon_text, mode='etree')[0] print(xml_tree) ElementTree.dump(xml_tree) axon_compact_text = dumps([xml_tree], braces=1) print(axon_compact_text) json_text = u {"person": { "name": "John Smith", "age": 25, "address": [ {"type": "home", "street": "21 2nd Street", "city": "New York", "state": "NY" }, {"type": "current", "street": "1410 NE Campus Parkway", "city": "Seattle", "state": "WA" } ], "phone": [ {"type": "home", "number": "212-555-1234"}, {"type": "fax", "number": "646-555-4567"} ] }} json.dumps(json.loads(json_text)) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: There are reduce functions for the ElementTree.Element and ElementTree.ElementTree types from the xml.etree package. These functions will be used for dumping an ElementTree into AXON text. Step2: There is the class ElementTreeBuilder for constructing an ElementTree from AXON text. Step3: Let's register the ElementTree builder under the name etree. This is a new value for the mode parameter of the load/loads functions. Step5: Let's consider some XML text Step6: Let's parse it into an ElementTree that represents the XML document. Step7: Here we dump the ElementTree object into AXON text. Step8: And load it again from AXON text into an ElementTree object Step9: There is the AXON compact representation for comparison Step11: There is the JSON representation for comparison too
3,222
<ASSISTANT_TASK:> Python Code: import sys # system module import pandas as pd # data package import matplotlib.pyplot as plt # graphics module import datetime as dt # date and time module import numpy as np # foundation for Pandas import seaborn.apionly as sns # fancy matplotlib graphics (no styling) from pandas_datareader import data, wb # plotly imports from plotly.offline import iplot, iplot_mpl # plotting functions import plotly.graph_objs as go # ditto import plotly # just to print version and init notebook import cufflinks as cf # gives us df.iplot that feels like df.plot cf.set_config_file(offline=True, offline_show_link=False) # these lines make our graphics show up in the notebook %matplotlib inline plotly.offline.init_notebook_mode() # check versions (overkill, but why not?) print('Python version:', sys.version) print('Pandas version: ', pd.__version__) print('Plotly version: ', plotly.__version__) print('Today: ', dt.date.today()) url = 'http://home.cc.gatech.edu/ice-gt/uploads/556/DetailedStateInfoAP-CS-A-2006-2013-with-PercentBlackAndHIspanicByState-fixed.xlsx' ap0 = pd.read_excel(url, sheetname=1, skiprows=[51, 52, 53, 54, 55], header=0) ap0.shape ap = ap0.drop("Unnamed: 2", 1) ap = ap.replace({"":0,"*":0}) ap['# male']=ap["Total #"]-ap["# female"] ap['# male passed']=ap['# passed']-ap['# female passed'] ap['% fem passed']=ap["# female passed"]/ap["# female"] ap["% male passed"]=ap["# male passed"]/ap["# male"] ap.set_index("2013 data") ap.head() sns.set(style="whitegrid") f, ax = plt.subplots(figsize=(20,25)) sns.set_color_codes("pastel") sns.barplot(x="Total #", y="2013 data", data=ap, label="Male Test Takers", color="b") sns.set_color_codes("muted") sns.barplot(x="# female", y="2013 data", data=ap, label="Female Test Takers", color="g") ax.legend(ncol=2, loc="lower right", frameon=True) ax.set(xlim=(0, 5000), ylabel="", xlabel="Female Test Takers as a Proportion of Total Test Takers") sns.despine(left=True, bottom=True) sns.set(style="whitegrid") sns.set_color_codes("pastel") sns.factorplot(x="# male", y="% male passed", data=ap, label="Male Test Takers", color="b") sns.set_color_codes("muted") sns.factorplot(x="# female", y="% fem passed", data=ap, label="Female Test Takers", color="g") ap1= sns.swarmplot(y="# schools", x="# female", data=ap, split = True) fig_cluster = ap1.get_figure() ap2= sns.swarmplot(y="yield per teacher", x="# female", data=ap, split = True) fig_cluster = ap1.get_figure() sns.set(style="whitegrid") f, ax = plt.subplots(figsize=(20,25)) sns.set_color_codes("pastel") sns.barplot(x="# female", y="2013 data", data=ap, label="Female Not Passed", color="r") sns.set_color_codes("muted") sns.barplot(x="# female passed", y="2013 data", data=ap, label="Female Passed", color="r") ax.legend(ncol=2, loc="lower right", frameon=True) ax.set(xlim=(0, 1000), ylabel="", xlabel="Passed Females as a Proportion of Not Passed Females") sns.despine(left=True, bottom=True) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Data Step2: Analysis Step3: Does size matter Step4: Do clusters of CompSci programs in a state make female participation more likely? Step5: Do larger or smaller classes encourage female participation better? Step6: Girl Power
3,223
<ASSISTANT_TASK:> Python Code: from petal_helper import * # Detect TPU, return appropriate distribution strategy try: tpu = tf.distribute.cluster_resolver.TPUClusterResolver() print('Running on TPU ', tpu.master()) except ValueError: tpu = None if tpu: tf.config.experimental_connect_to_cluster(tpu) tf.tpu.experimental.initialize_tpu_system(tpu) strategy = tf.distribute.experimental.TPUStrategy(tpu) else: strategy = tf.distribute.get_strategy() print("REPLICAS: ", strategy.num_replicas_in_sync) ds_train = get_training_dataset() ds_valid = get_validation_dataset() ds_test = get_test_dataset() print("Training:", ds_train) print ("Validation:", ds_valid) print("Test:", ds_test) print("Number of classes: {}".format(len(CLASSES))) print("First five classes, sorted alphabetically:") for name in sorted(CLASSES)[:5]: print(name) print ("Number of training images: {}".format(NUM_TRAINING_IMAGES)) print("Training data shapes:") for image, label in ds_train.take(3): print(image.numpy().shape, label.numpy().shape) print("Training data label examples:", label.numpy()) print("Test data shapes:") for image, idnum in ds_test.take(3): print(image.numpy().shape, idnum.numpy().shape) print("Test data IDs:", idnum.numpy().astype('U')) # U=unicode string one_batch = next(iter(ds_train.unbatch().batch(20))) display_batch_of_images(one_batch) with strategy.scope(): pretrained_model = tf.keras.applications.VGG16( weights='imagenet', include_top=False , input_shape=[*IMAGE_SIZE, 3] ) pretrained_model.trainable = False model = tf.keras.Sequential([ # To a base pretrained on ImageNet to extract features from images... pretrained_model, # ... attach a new head to act as a classifier. tf.keras.layers.GlobalAveragePooling2D(), tf.keras.layers.Dense(len(CLASSES), activation='softmax') ]) model.compile( optimizer='adam', loss = 'sparse_categorical_crossentropy', metrics=['sparse_categorical_accuracy'], ) model.summary() # Define the batch size. This will be 16 with TPU off and 128 with TPU on BATCH_SIZE = 16 * strategy.num_replicas_in_sync # Define training epochs for committing/submitting. 
(TPU on) EPOCHS = 12 STEPS_PER_EPOCH = NUM_TRAINING_IMAGES // BATCH_SIZE history = model.fit( ds_train, validation_data=ds_valid, epochs=EPOCHS, steps_per_epoch=STEPS_PER_EPOCH, ) display_training_curves( history.history['loss'], history.history['val_loss'], 'loss', 211, ) display_training_curves( history.history['sparse_categorical_accuracy'], history.history['val_sparse_categorical_accuracy'], 'accuracy', 212, ) cmdataset = get_validation_dataset(ordered=True) images_ds = cmdataset.map(lambda image, label: image) labels_ds = cmdataset.map(lambda image, label: label).unbatch() cm_correct_labels = next(iter(labels_ds.batch(NUM_VALIDATION_IMAGES))).numpy() cm_probabilities = model.predict(images_ds) cm_predictions = np.argmax(cm_probabilities, axis=-1) labels = range(len(CLASSES)) cmat = confusion_matrix( cm_correct_labels, cm_predictions, labels=labels, ) cmat = (cmat.T / cmat.sum(axis=1)).T # normalize score = f1_score( cm_correct_labels, cm_predictions, labels=labels, average='macro', ) precision = precision_score( cm_correct_labels, cm_predictions, labels=labels, average='macro', ) recall = recall_score( cm_correct_labels, cm_predictions, labels=labels, average='macro', ) display_confusion_matrix(cmat, score, precision, recall) dataset = get_validation_dataset() dataset = dataset.unbatch().batch(20) batch = iter(dataset) images, labels = next(batch) probabilities = model.predict(images) predictions = np.argmax(probabilities, axis=-1) display_batch_of_images((images, labels), predictions) test_ds = get_test_dataset(ordered=True) print('Computing predictions...') test_images_ds = test_ds.map(lambda image, idnum: image) probabilities = model.predict(test_images_ds) predictions = np.argmax(probabilities, axis=-1) print(predictions) print('Generating submission.csv file...') # Get image ids from test set and convert to integers test_ids_ds = test_ds.map(lambda image, idnum: idnum).unbatch() test_ids = next(iter(test_ids_ds.batch(NUM_TEST_IMAGES))).numpy().astype('U') # Write the submission file np.savetxt( 'submission.csv', np.rec.fromarrays([test_ids, predictions]), fmt=['%s', '%d'], delimiter=',', header='id,label', comments='', ) # Look at the first few predictions !head submission.csv <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Create Distribution Strategy Step2: Loading the Competition Data Step3: Explore the Data Step4: Examine the shape of the data. Step5: Peek at training data. Step6: Define Model Step7: Train Model Step8: Examine training curves. Step9: Validation Step10: Look at examples from the dataset, with true and predicted classes. Step11: Test Predictions
3,224
<ASSISTANT_TASK:> Python Code: %%bigquery -- LIMIT 0 is a free query; this allows us to check that the table exists. SELECT * FROM babyweight.babyweight_data_train LIMIT 0 %%bigquery -- LIMIT 0 is a free query; this allows us to check that the table exists. SELECT * FROM babyweight.babyweight_data_eval LIMIT 0 %%bigquery CREATE OR REPLACE MODEL babyweight.model_1 OPTIONS ( MODEL_TYPE="LINEAR_REG", INPUT_LABEL_COLS=["weight_pounds"], L2_REG=0.1, DATA_SPLIT_METHOD="NO_SPLIT") AS SELECT # TODO: Add base features and label ML.FEATURE_CROSS( # TODO: Cross categorical features ) AS gender_plurality_cross FROM babyweight.babyweight_data_train %%bigquery SELECT * FROM ML.EVALUATE(MODEL babyweight.model_1, ( SELECT # TODO: Add same features and label as training FROM babyweight.babyweight_data_eval )) %%bigquery SELECT # TODO: Select just the calculated RMSE FROM ML.EVALUATE(MODEL babyweight.model_1, ( SELECT # TODO: Add same features and label as training FROM babyweight.babyweight_data_eval )) %%bigquery CREATE OR REPLACE MODEL babyweight.model_2 OPTIONS ( MODEL_TYPE="LINEAR_REG", INPUT_LABEL_COLS=["weight_pounds"], L2_REG=0.1, DATA_SPLIT_METHOD="NO_SPLIT") AS SELECT weight_pounds, is_male, mother_age, plurality, gestation_weeks, ML.FEATURE_CROSS( STRUCT( is_male, ML.BUCKETIZE( # TODO: Bucketize mother_age ) AS bucketed_mothers_age, plurality, ML.BUCKETIZE( # TODO: Bucketize gestation_weeks ) AS bucketed_gestation_weeks ) ) AS crossed FROM babyweight.babyweight_data_train %%bigquery SELECT * FROM ML.TRAINING_INFO(MODEL babyweight.model_2) %%bigquery SELECT * FROM ML.EVALUATE(MODEL babyweight.model_2, ( SELECT # TODO: Add same features and label as training FROM babyweight.babyweight_data_eval)) %%bigquery SELECT SQRT(mean_squared_error) AS rmse FROM ML.EVALUATE(MODEL babyweight.model_2, ( SELECT # TODO: Add same features and label as training FROM babyweight.babyweight_data_eval)) %%bigquery CREATE OR REPLACE MODEL babyweight.model_3 TRANSFORM( # TODO: Add base features and label as you would in select # TODO: Add transformed features as you would in select ) OPTIONS ( MODEL_TYPE="LINEAR_REG", INPUT_LABEL_COLS=["weight_pounds"], L2_REG=0.1, DATA_SPLIT_METHOD="NO_SPLIT") AS SELECT * FROM babyweight.babyweight_data_train %%bigquery SELECT * FROM ML.TRAINING_INFO(MODEL babyweight.model_3) %%bigquery SELECT * FROM ML.EVALUATE(MODEL babyweight.model_3, ( SELECT * FROM babyweight.babyweight_data_eval )) %%bigquery SELECT SQRT(mean_squared_error) AS rmse FROM ML.EVALUATE(MODEL babyweight.model_3, ( SELECT * FROM babyweight.babyweight_data_eval )) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Lab Task #1 Step2: Create two SQL statements to evaluate the model. Step3: Lab Task #2 Step4: Create three SQL statements to EVALUATE the model. Step5: We now evaluate our model on our eval dataset Step6: Let's select the mean_squared_error from the evaluation table we just computed and take its square root to obtain the RMSE. Step7: Lab Task #3 Step8: Let's retrieve the training statistics Step9: We now evaluate our model on our eval dataset Step10: Let's select the mean_squared_error from the evaluation table we just computed and take its square root to obtain the RMSE.
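As a hedged illustration (not the official lab solution), the RMSE step can equally be done client-side in Python: pull the ML.EVALUATE result into a dataframe and take the square root of mean_squared_error. The project id below is a placeholder assumption; the model and table names follow the record above.
from math import sqrt
from google.cloud import bigquery

client = bigquery.Client(project="your-gcp-project-here")  # placeholder project id
eval_sql = """
SELECT mean_squared_error
FROM ML.EVALUATE(MODEL babyweight.model_3,
    (SELECT * FROM babyweight.babyweight_data_eval))
"""
# Fetch the evaluation row and derive RMSE as the square ROOT of the MSE
mse = client.query(eval_sql).to_dataframe()["mean_squared_error"].iloc[0]
print("RMSE:", sqrt(mse))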
3,225
<ASSISTANT_TASK:> Python Code: import pandas as pd id=["Train A","Train A","Train A","Train B","Train B","Train B"] arrival_time = ["0"," 2016-05-19 13:50:00","2016-05-19 21:25:00","0","2016-05-24 18:30:00","2016-05-26 12:15:00"] departure_time = ["2016-05-19 08:25:00","2016-05-19 16:00:00","2016-05-20 07:45:00","2016-05-24 12:50:00","2016-05-25 23:00:00","2016-05-26 19:45:00"] df = pd.DataFrame({'id': id, 'arrival_time':arrival_time, 'departure_time':departure_time}) import numpy as np def g(df): df['arrival_time'] = pd.to_datetime(df['arrival_time'].replace('0', np.nan)) df['departure_time'] = pd.to_datetime(df['departure_time']) df['Duration'] = df['arrival_time'] - df.groupby('id')['departure_time'].shift() return df df = g(df.copy()) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description:
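Since this record ships without step notes, here is a hedged, self-contained illustration of what the groupby/shift line computes: each row's Duration is that train's arrival time minus the same train's previous departure time. The trimmed-down frame below is an assumption for demonstration only.
import pandas as pd
import numpy as np

demo = pd.DataFrame({
    "id": ["Train A", "Train A", "Train B"],
    "arrival_time": ["0", "2016-05-19 13:50:00", "0"],
    "departure_time": ["2016-05-19 08:25:00", "2016-05-19 16:00:00", "2016-05-24 12:50:00"],
})
demo["arrival_time"] = pd.to_datetime(demo["arrival_time"].replace("0", np.nan))
demo["departure_time"] = pd.to_datetime(demo["departure_time"])
# shift() moves each train's departure down one row within its own group,
# so the second Train A row is paired with the first Train A departure.
demo["Duration"] = demo["arrival_time"] - demo.groupby("id")["departure_time"].shift()
print(demo[["id", "Duration"]])
# Expected: NaT, 0 days 05:25:00 (13:50 minus 08:25), NaT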
3,226
<ASSISTANT_TASK:> Python Code: import pandas as pd import numpy as np df = pd.read_csv('totaal.csv') df = df.set_index('id') df['start'] = pd.to_datetime(df['start']) # Convert start times to datetimes df['einde'] = pd.to_datetime(df['einde']) # Convert end times to datetimes df['duur'] = df['einde'] - df['start'] # How long does everyone park? df.head() start = df['start'] hours = start.map(lambda x: x.hour) hours.hist() einde = df['einde'] hours = einde.map(lambda x: x.hour) hours.hist() duur = df['duur'] hours = duur.map(lambda x: x / np.timedelta64(1, 'm')) hours = hours[hours > 0] # Filter negatives hours = hours[hours < 1440] # Filter longer than a day. hours.hist(bins=100) kosten = df['kosten'] kosten.hist(bins=50) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Next, we check that the data was loaded correctly Step2: Start times Step3: End times Step4: Duration Step5: Interesting! The distribution already tells you a lot: you can see that people almost always park for close to a whole hour (those are the peaks sticking out), and the bump at roughly 850 minutes is usually an overnight stay, with the car leaving again the next morning.
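A hedged aside on the unit conversion used above, shown on synthetic values: dividing a timedelta Series by np.timedelta64(1, 'm') yields minutes as floats, after which negative and longer-than-a-day stays can be filtered out.
import pandas as pd
import numpy as np

# Synthetic parking durations (assumption): 55 min, 850 min (an overnight stay), and a bad record
duur = pd.Series(pd.to_timedelta(["0 days 00:55:00", "0 days 14:10:00", "-1 days +23:00:00"]))
minutes = duur / np.timedelta64(1, "m")        # -> 55.0, 850.0, -60.0
minutes = minutes[(minutes > 0) & (minutes < 1440)]
print(minutes.tolist())                        # [55.0, 850.0]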
3,227
<ASSISTANT_TASK:> Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'ec-earth-consortium', 'sandbox-3', 'ocnbgchem') # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.model_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Geochemical" # "NPZD" # "PFT" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Fixed" # "Variable" # "Mix of both" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.damping') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "use ocean model transport time step" # "use specific time step" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "use ocean model transport time step" # "use specific time step" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Offline" # "Online" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Use that of ocean model" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "from file (climatology)" # "from file (interannual variations)" # "from Atmospheric Chemistry model" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "from file (climatology)" # "from file (interannual variations)" # "from Land Surface model" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OMIP protocol" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OMIP protocol" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OMIP protocol" # "Other protocol" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Sea water" # "Free" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocnbgchem.tracers.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Nitrogen (N)" # "Phosphorous (P)" # "Silicium (S)" # "Iron (Fe)" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Nitrates (NO3)" # "Amonium (NH4)" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Dentrification" # "N fixation" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Generic" # "PFT including size based (specify both below)" # "Size based only (specify below)" # "PFT only (specify below)" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Diatoms" # "Nfixers" # "Calcifiers" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Microphytoplankton" # "Nanophytoplankton" # "Picophytoplankton" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Generic" # "Size based (specify below)" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Microzooplankton" # "Mesozooplankton" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Labile" # "Semi-labile" # "Refractory" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.particules.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Diagnostic" # "Diagnostic (Martin profile)" # "Diagnostic (Balast)" # "Prognostic" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "POC" # "PIC (calcite)" # "PIC (aragonite" # "BSi" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "No size spectrum used" # "Full size spectrum" # "Discrete size classes (specify which below)" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Function of particule size" # "Function of particule type (balast)" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "C13" # "C14)" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Prognostic" # "Diagnostic)" # TODO - please enter value(s) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Document Authors Step2: Document Contributors Step3: Document Publication Step4: Document Table of Contents Step5: 1.2. Model Name Step6: 1.3. Model Type Step7: 1.4. Elemental Stoichiometry Step8: 1.5. Elemental Stoichiometry Details Step9: 1.6. Prognostic Variables Step10: 1.7. Diagnostic Variables Step11: 1.8. Damping Step12: 2. Key Properties --&gt; Time Stepping Framework --&gt; Passive Tracers Transport Step13: 2.2. Timestep If Not From Ocean Step14: 3. Key Properties --&gt; Time Stepping Framework --&gt; Biology Sources Sinks Step15: 3.2. Timestep If Not From Ocean Step16: 4. Key Properties --&gt; Transport Scheme Step17: 4.2. Scheme Step18: 4.3. Use Different Scheme Step19: 5. Key Properties --&gt; Boundary Forcing Step20: 5.2. River Input Step21: 5.3. Sediments From Boundary Conditions Step22: 5.4. Sediments From Explicit Model Step23: 6. Key Properties --&gt; Gas Exchange Step24: 6.2. CO2 Exchange Type Step25: 6.3. O2 Exchange Present Step26: 6.4. O2 Exchange Type Step27: 6.5. DMS Exchange Present Step28: 6.6. DMS Exchange Type Step29: 6.7. N2 Exchange Present Step30: 6.8. N2 Exchange Type Step31: 6.9. N2O Exchange Present Step32: 6.10. N2O Exchange Type Step33: 6.11. CFC11 Exchange Present Step34: 6.12. CFC11 Exchange Type Step35: 6.13. CFC12 Exchange Present Step36: 6.14. CFC12 Exchange Type Step37: 6.15. SF6 Exchange Present Step38: 6.16. SF6 Exchange Type Step39: 6.17. 13CO2 Exchange Present Step40: 6.18. 13CO2 Exchange Type Step41: 6.19. 14CO2 Exchange Present Step42: 6.20. 14CO2 Exchange Type Step43: 6.21. Other Gases Step44: 7. Key Properties --&gt; Carbon Chemistry Step45: 7.2. PH Scale Step46: 7.3. Constants If Not OMIP Step47: 8. Tracers Step48: 8.2. Sulfur Cycle Present Step49: 8.3. Nutrients Present Step50: 8.4. Nitrous Species If N Step51: 8.5. Nitrous Processes If N Step52: 9. Tracers --&gt; Ecosystem Step53: 9.2. Upper Trophic Levels Treatment Step54: 10. Tracers --&gt; Ecosystem --&gt; Phytoplankton Step55: 10.2. Pft Step56: 10.3. Size Classes Step57: 11. Tracers --&gt; Ecosystem --&gt; Zooplankton Step58: 11.2. Size Classes Step59: 12. Tracers --&gt; Disolved Organic Matter Step60: 12.2. Lability Step61: 13. Tracers --&gt; Particules Step62: 13.2. Types If Prognostic Step63: 13.3. Size If Prognostic Step64: 13.4. Size If Discrete Step65: 13.5. Sinking Speed If Prognostic Step66: 14. Tracers --&gt; Dic Alkalinity Step67: 14.2. Abiotic Carbon Step68: 14.3. Alkalinity
3,228
<ASSISTANT_TASK:> Python Code: import tensorflow as tf print(tf.__version__) mnist = tf.keras.datasets.fashion_mnist (training_images, training_labels), (test_images, test_labels) = mnist.load_data() import matplotlib.pyplot as plt plt.imshow(training_images[0]) print(training_labels[0]) print(training_images[0]) training_images = training_images / 255.0 test_images = test_images / 255.0 model = tf.keras.models.Sequential([tf.keras.layers.Flatten(), tf.keras.layers.Dense(128, activation=tf.nn.relu), tf.keras.layers.Dense(10, activation=tf.nn.softmax)]) model.compile(optimizer = tf.train.AdamOptimizer(), loss = 'sparse_categorical_crossentropy', metrics=['accuracy']) model.fit(training_images, training_labels, epochs=5) model.evaluate(test_images, test_labels) classifications = model.predict(test_images) print(classifications[0]) print(test_labels[0]) import tensorflow as tf print(tf.__version__) mnist = tf.keras.datasets.mnist (training_images, training_labels) , (test_images, test_labels) = mnist.load_data() training_images = training_images/255.0 test_images = test_images/255.0 model = tf.keras.models.Sequential([tf.keras.layers.Flatten(), tf.keras.layers.Dense(1024, activation=tf.nn.relu), tf.keras.layers.Dense(10, activation=tf.nn.softmax)]) model.compile(optimizer = 'adam', loss = 'sparse_categorical_crossentropy') model.fit(training_images, training_labels, epochs=5) model.evaluate(test_images, test_labels) classifications = model.predict(test_images) print(classifications[0]) print(test_labels[0]) import tensorflow as tf print(tf.__version__) mnist = tf.keras.datasets.mnist (training_images, training_labels) , (test_images, test_labels) = mnist.load_data() training_images = training_images/255.0 test_images = test_images/255.0 model = tf.keras.models.Sequential([#tf.keras.layers.Flatten(), tf.keras.layers.Dense(64, activation=tf.nn.relu), tf.keras.layers.Dense(10, activation=tf.nn.softmax)]) model.compile(optimizer = 'adam', loss = 'sparse_categorical_crossentropy') model.fit(training_images, training_labels, epochs=5) model.evaluate(test_images, test_labels) classifications = model.predict(test_images) print(classifications[0]) print(test_labels[0]) import tensorflow as tf print(tf.__version__) mnist = tf.keras.datasets.mnist (training_images, training_labels) , (test_images, test_labels) = mnist.load_data() training_images = training_images/255.0 test_images = test_images/255.0 model = tf.keras.models.Sequential([tf.keras.layers.Flatten(), tf.keras.layers.Dense(64, activation=tf.nn.relu), tf.keras.layers.Dense(5, activation=tf.nn.softmax)]) model.compile(optimizer = 'adam', loss = 'sparse_categorical_crossentropy') model.fit(training_images, training_labels, epochs=5) model.evaluate(test_images, test_labels) classifications = model.predict(test_images) print(classifications[0]) print(test_labels[0]) import tensorflow as tf print(tf.__version__) mnist = tf.keras.datasets.mnist (training_images, training_labels) , (test_images, test_labels) = mnist.load_data() training_images = training_images/255.0 test_images = test_images/255.0 model = tf.keras.models.Sequential([tf.keras.layers.Flatten(), tf.keras.layers.Dense(512, activation=tf.nn.relu), tf.keras.layers.Dense(256, activation=tf.nn.relu), tf.keras.layers.Dense(5, activation=tf.nn.softmax)]) model.compile(optimizer = 'adam', loss = 'sparse_categorical_crossentropy') model.fit(training_images, training_labels, epochs=5) model.evaluate(test_images, test_labels) classifications = model.predict(test_images) print(classifications[0]) 
print(test_labels[0]) import tensorflow as tf print(tf.__version__) mnist = tf.keras.datasets.mnist (training_images, training_labels) , (test_images, test_labels) = mnist.load_data() training_images = training_images/255.0 test_images = test_images/255.0 model = tf.keras.models.Sequential([tf.keras.layers.Flatten(), tf.keras.layers.Dense(128, activation=tf.nn.relu), tf.keras.layers.Dense(5, activation=tf.nn.softmax)]) model.compile(optimizer = 'adam', loss = 'sparse_categorical_crossentropy') model.fit(training_images, training_labels, epochs=30) model.evaluate(test_images, test_labels) classifications = model.predict(test_images) print(classifications[34]) print(test_labels[34]) import tensorflow as tf print(tf.__version__) mnist = tf.keras.datasets.mnist (training_images, training_labels), (test_images, test_labels) = mnist.load_data() training_images=training_images/255.0 test_images=test_images/255.0 model = tf.keras.models.Sequential([ tf.keras.layers.Flatten(), tf.keras.layers.Dense(512, activation=tf.nn.relu), tf.keras.layers.Dense(10, activation=tf.nn.softmax) ]) model.compile(optimizer='adam', loss='sparse_categorical_crossentropy') model.fit(training_images, training_labels, epochs=5) model.evaluate(test_images, test_labels) classifications = model.predict(test_images) print(classifications[0]) print(test_labels[0]) import tensorflow as tf print(tf.__version__) class myCallback(tf.keras.callbacks.Callback): def on_epoch_end(self, epoch, logs={}): if(logs.get('loss')<0.4): print("\nReached 60% accuracy so cancelling training!") self.model.stop_training = True callbacks = myCallback() mnist = tf.keras.datasets.fashion_mnist (training_images, training_labels), (test_images, test_labels) = mnist.load_data() training_images=training_images/255.0 test_images=test_images/255.0 model = tf.keras.models.Sequential([ tf.keras.layers.Flatten(), tf.keras.layers.Dense(512, activation=tf.nn.relu), tf.keras.layers.Dense(10, activation=tf.nn.softmax) ]) model.compile(optimizer='adam', loss='sparse_categorical_crossentropy') model.fit(training_images, training_labels, epochs=5, callbacks=[callbacks]) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: The Fashion MNIST data is available directly in the tf.keras datasets API. You load it like this Step2: Calling load_data on this object will give you two sets of two lists: these will be the training and testing values for the graphics that contain the clothing items and their labels. Step3: What do these values look like? Let's print a training image, and a training label to see...Experiment with different indices in the array. For example, also take a look at index 42...that's a different boot than the one at index 0 Step4: You'll notice that all of the values in the image are between 0 and 255. If we are training a neural network, for various reasons it's easier if we treat all values as between 0 and 1, a process called 'normalizing'...and fortunately in Python it's easy to normalize a list like this without looping. You do it like this Step5: Now you might be wondering why there are 2 sets...training and testing -- remember we spoke about this in the intro? The idea is to have 1 set of data for training, and then another set of data...that the model hasn't yet seen...to see how good it would be at classifying values. After all, when you're done, you're going to want to try it out with data that it hadn't previously seen! Step6: Sequential Step7: Once it's done training -- you should see an accuracy value at the end of the final epoch. It might look something like 0.9098. This tells you that your neural network is about 91% accurate in classifying the training data. That is, it figured out a pattern match between the image and the labels that worked 91% of the time. Not great, but not bad considering it was only trained for 5 epochs and done quite quickly. Step8: For me, that returned an accuracy of about 0.8838, which means it was about 88% accurate. As expected, it probably would not do as well with unseen data as it did with data it was trained on! As you go through this course, you'll look at ways to improve this. Step9: Hint Step10: What does this list represent? Step11: Question 1. Increase to 1024 Neurons -- What's the impact? Step12: Exercise 4 Step13: Exercise 5 Step14: Exercise 6 Step15: Exercise 7 Step16: Exercise 8
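A small hedged sketch of the Step4 point that normalization needs no loop: dividing a whole NumPy array by 255.0 rescales every pixel at once. The array below is an illustrative stand-in for an image, not data from the notebook.
import numpy as np

pixels = np.array([[0, 64, 128], [191, 255, 32]], dtype=np.float32)
scaled = pixels / 255.0            # one vectorised operation, no Python loop
print(scaled.min(), scaled.max())  # 0.0 1.0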
3,229
<ASSISTANT_TASK:> Python Code: from nipype import Node, JoinNode, Workflow # Specify fake input node A a = Node(interface=A(), name="a") # Iterate over fake node B's input 'in_file' b = Node(interface=B(), name="b") b.iterables = ('in_file', [file1, file2]) # Pass results on to fake node C c = Node(interface=C(), name="c") # Join forked execution workflow in fake node D d = JoinNode(interface=D(), joinsource="b", joinfield="in_files", name="d") # Put everything into a workflow as usual workflow = Workflow(name="workflow") workflow.connect([(a, b, [('subject', 'subject')]), (b, c, [('out_file', 'in_file')]), (c, d, [('out_file', 'in_files')]) ]) from nipype import JoinNode, Node, Workflow from nipype.interfaces.utility import Function, IdentityInterface # Create iteration node from nipype import IdentityInterface iternode = Node(IdentityInterface(fields=['number_id']), name="iternode") iternode.iterables = [('number_id', [1, 4, 9])] # Create join node - compute square root for each element in the joined list def compute_sqrt(numbers): from math import sqrt return [sqrt(e) for e in numbers] joinnode = JoinNode(Function(input_names=['numbers'], output_names=['sqrts'], function=compute_sqrt), name='joinnode', joinsource='iternode', joinfield=['numbers']) # Create the workflow and run it joinflow = Workflow(name='joinflow') joinflow.connect(iternode, 'number_id', joinnode, 'numbers') res = joinflow.run() res.nodes()[0].result.outputs res.nodes()[0].inputs <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: As you can see, setting up a JoinNode is rather simple. The only difference from a normal Node is the pair of arguments joinsource and joinfield. joinsource specifies the node from which the information to join comes, and joinfield specifies the input field of the JoinNode that the joined information enters. Step2: Now, let's look at the input and output of the joinnode
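For intuition, a hedged, nipype-free sketch of the same fan-out/fan-in pattern: iterables fan a node out into one copy per value (a map), and the JoinNode gathers the branch outputs back into a single list input (a reduce). This is conceptual only and not the nipype API.
from math import sqrt

iterable_values = [1, 4, 9]                             # what iternode.iterables expands into
branch_outputs = [value for value in iterable_values]   # each forked branch forwards its value
joined = branch_outputs                                 # the join step collects every branch's output
print([sqrt(e) for e in joined])                        # the joinnode's Function -> [1.0, 2.0, 3.0]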
3,230
<ASSISTANT_TASK:> Python Code: %matplotlib widget !pip install nanslice import urllib.request import tarfile url = 'https://osf.io/hmtyr/download' urllib.request.urlretrieve(url, 'nanslice_example.tar.gz') tgz = tarfile.open('nanslice_example.tar.gz') tgz.extractall() tgz.close() data_dir = 'nanslice_example/' import nanslice.jupyter as ns ns.three_plane(data_dir + 'template_T2w.nii.gz', title='An Image') ns.three_plane(data_dir + 'template_T2w.nii.gz', interactive=True, title='Drag the Sliders') base = ns.Layer(data_dir + 'template_T2w.nii.gz', mask=data_dir + 'study_mask.nii.gz') ns.three_plane(base, title='Mask') base.cmap = 'viridis' ns.three_plane(base, cbar=True, title='Colormap') base.cmap = 'gist_gray' pval = ns.Layer(data_dir + 'T1_tfce_p_tstat1.nii.gz', cmap='Reds', clim=(0.95,1.0), label='1-p', mask_threshold=0.95) ns.three_plane([base, pval],cbar=1, title='P-Value Overlay') dual = ns.Layer(data_dir + 'T1_difference.nii.gz', cmap='RdYlBu_r', clim=(-100, 100), scale=1000, label='T1 Difference (ms)', alpha=data_dir + 'T1_tfce_p_tstat1.nii.gz', alpha_lim=(0.5, 1.0), alpha_label='1-p') ns.three_plane([base, dual], cbar=1, contour=0.95, title='All the Bells & Whistles') ns.slice_axis([base, dual], nrows=2, ncols=5, slice_axis='z', slice_lims=(0.3, 0.75), cbar=1, contour=0.95, title='Two Rows') slice_ax = ['x','y','z','x','y','z','x','y','z'] slice_pos = [0.2, 0.2, 0.2, 0.3, 0.3, 0.3, 0.5, 0.5, 0.5] ns.slices([base, dual], nrows=3, ncols=3, slice_axes=slice_ax, slice_pos=slice_pos, cbar=1, contour=0.95, title='3x3 Grid') <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Basic Slicing Step2: However, if you are going to use the same image multiple times, e.g. a structural template image, then it makes sense to load it first Step3: Now that we have a Layer object, we can re-use it for future plots without having to reload the image from disk. For example, we can change the colormap (this can also be specified when creating the Layer). We can also add a colormap to the plot. Step4: Overlays Step5: One of the key reasons nanslice was written was to demonstrate dual-coded overlays, where both color and transparency have meaning - see https Step6: Controlling Slices
3,231
<ASSISTANT_TASK:> Python Code: import os import matplotlib as mpl import matplotlib.pyplot as plt import numpy as np import pandas as pd import tensorflow as tf from google.cloud import bigquery from tensorflow.keras.utils import to_categorical from tensorflow.keras.models import Sequential from tensorflow.keras.layers import (Dense, DenseFeatures, Conv1D, MaxPool1D, Reshape, RNN, LSTM, GRU, Bidirectional) from tensorflow.keras.callbacks import TensorBoard, ModelCheckpoint from tensorflow.keras.optimizers import Adam # To plot pretty figures %matplotlib inline mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # For reproducible results. from numpy.random import seed seed(1) tf.random.set_seed(2) PROJECT = "your-gcp-project-here" # REPLACE WITH YOUR PROJECT NAME BUCKET = "your-gcp-bucket-here" # REPLACE WITH YOUR BUCKET REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1 %env PROJECT = PROJECT BUCKET = BUCKET REGION = REGION %%time bq = bigquery.Client(project=PROJECT) bq_query = ''' #standardSQL SELECT symbol, Date, direction, close_values_prior_260 FROM `stock_market.eps_percent_change_sp500` LIMIT 100 ''' df_stock_raw = bq.query(bq_query).to_dataframe() df_stock_raw.head() def clean_data(input_df): Cleans data to prepare for training. Args: input_df: Pandas dataframe. Returns: Pandas dataframe. df = input_df.copy() # TF doesn't accept datetimes in DataFrame. df['Date'] = pd.to_datetime(df['Date'], errors='coerce') df['Date'] = df['Date'].dt.strftime('%Y-%m-%d') # TF requires numeric label. df['direction_numeric'] = df['direction'].apply(lambda x: {'DOWN': 0, 'STAY': 1, 'UP': 2}[x]) return df df_stock = clean_data(df_stock_raw) df_stock.head() STOCK_HISTORY_COLUMN = 'close_values_prior_260' COL_NAMES = ['day_' + str(day) for day in range(0, 260)] LABEL = 'direction_numeric' def _scale_features(df): z-scale feature columns of Pandas dataframe. Args: features: Pandas dataframe. Returns: Pandas dataframe with each column standardized according to the values in that column. avg = df.mean() std = df.std() return (df - avg) / std def create_features(df, label_name): Create modeling features and label from Pandas dataframe. Args: df: Pandas dataframe. label_name: str, the column name of the label. Returns: Pandas dataframe # Expand 1 column containing a list of close prices to 260 columns. time_series_features = df[STOCK_HISTORY_COLUMN].apply(pd.Series) # Rename columns. time_series_features.columns = COL_NAMES time_series_features = _scale_features(time_series_features) # Concat time series features with static features and label. label_column = df[LABEL] return pd.concat([time_series_features, label_column], axis=1) df_features = create_features(df_stock, LABEL) df_features.head() ix_to_plot = [0, 1, 9, 5] fig, ax = plt.subplots(1, 1, figsize=(15, 8)) for ix in ix_to_plot: label = df_features['direction_numeric'].iloc[ix] example = df_features[COL_NAMES].iloc[ix] ax = example.plot(label=label, ax=ax) ax.set_ylabel('scaled price') ax.set_xlabel('prior days') ax.legend() def _create_split(phase): Create string to produce train/valid/test splits for a SQL query. Args: phase: str, either TRAIN, VALID, or TEST. Returns: String. 
floor, ceiling = '2002-11-01', '2010-07-01' if phase == 'VALID': floor, ceiling = '2010-07-01', '2011-09-01' elif phase == 'TEST': floor, ceiling = '2011-09-01', '2012-11-30' return ''' WHERE Date >= '{0}' AND Date < '{1}' '''.format(floor, ceiling) def create_query(phase): Create SQL query to create train/valid/test splits on subsample. Args: phase: str, either TRAIN, VALID, or TEST. sample_size: str, amount of data to take for subsample. Returns: String. basequery = #standardSQL SELECT symbol, Date, direction, close_values_prior_260 FROM `stock_market.eps_percent_change_sp500` return basequery + _create_split(phase) bq = bigquery.Client(project=PROJECT) for phase in ['TRAIN', 'VALID', 'TEST']: # 1. Create query string query_string = create_query(phase) # 2. Load results into DataFrame df = bq.query(query_string).to_dataframe() # 3. Clean, preprocess dataframe df = clean_data(df) df = create_features(df, label_name='direction_numeric') # 3. Write DataFrame to CSV if not os.path.exists('../data'): os.mkdir('../data') df.to_csv('../data/stock-{}.csv'.format(phase.lower()), index_label=False, index=False) print("Wrote {} lines to {}".format( len(df), '../data/stock-{}.csv'.format(phase.lower()))) ls -la ../data N_TIME_STEPS = 260 N_LABELS = 3 Xtrain = pd.read_csv('../data/stock-train.csv') Xvalid = pd.read_csv('../data/stock-valid.csv') ytrain = Xtrain.pop(LABEL) yvalid = Xvalid.pop(LABEL) ytrain_categorical = to_categorical(ytrain.values) yvalid_categorical = to_categorical(yvalid.values) def plot_curves(train_data, val_data, label='Accuracy'): Plot training and validation metrics on single axis. Args: train_data: list, metrics obtrained from training data. val_data: list, metrics obtained from validation data. label: str, title and label for plot. Returns: Matplotlib plot. 
plt.plot(np.arange(len(train_data)) + 0.5, train_data, "b.-", label="Training " + label) plt.plot(np.arange(len(val_data)) + 1, val_data, "r.-", label="Validation " + label) plt.gca().xaxis.set_major_locator(mpl.ticker.MaxNLocator(integer=True)) plt.legend(fontsize=14) plt.xlabel("Epochs") plt.ylabel(label) plt.grid(True) sum(yvalid == ytrain.value_counts().idxmax()) / yvalid.shape[0] # TODO 1a model = Sequential() model.add(Dense(units=N_LABELS, activation='softmax', kernel_regularizer=tf.keras.regularizers.l1(l=0.1))) model.compile(optimizer=Adam(lr=0.001), loss='categorical_crossentropy', metrics=['accuracy']) history = model.fit(x=Xtrain.values, y=ytrain_categorical, batch_size=Xtrain.shape[0], validation_data=(Xvalid.values, yvalid_categorical), epochs=30, verbose=0) plot_curves(history.history['loss'], history.history['val_loss'], label='Loss') plot_curves(history.history['accuracy'], history.history['val_accuracy'], label='Accuracy') np.mean(history.history['val_accuracy'][-5:]) # TODO 1b dnn_hidden_units = [16, 8] model = Sequential() for layer in dnn_hidden_units: model.add(Dense(units=layer, activation="relu")) model.add(Dense(units=N_LABELS, activation="softmax", kernel_regularizer=tf.keras.regularizers.l1(l=0.1))) model.compile(optimizer=Adam(lr=0.001), loss='categorical_crossentropy', metrics=['accuracy']) history = model.fit(x=Xtrain.values, y=ytrain_categorical, batch_size=Xtrain.shape[0], validation_data=(Xvalid.values, yvalid_categorical), epochs=10, verbose=0) plot_curves(history.history['loss'], history.history['val_loss'], label='Loss') plot_curves(history.history['accuracy'], history.history['val_accuracy'], label='Accuracy') np.mean(history.history['val_accuracy'][-5:]) # TODO 1c model = Sequential() # Convolutional layer model.add(Reshape(target_shape=[N_TIME_STEPS, 1])) model.add(Conv1D(filters=5, kernel_size=5, strides=2, padding="valid", input_shape=[None, 1])) model.add(MaxPool1D(pool_size=2, strides=None, padding='valid')) # Flatten the result and pass through DNN. model.add(tf.keras.layers.Flatten()) model.add(Dense(units=N_TIME_STEPS//4, activation="relu")) model.add(Dense(units=N_LABELS, activation="softmax", kernel_regularizer=tf.keras.regularizers.l1(l=0.1))) model.compile(optimizer=Adam(lr=0.01), loss='categorical_crossentropy', metrics=['accuracy']) history = model.fit(x=Xtrain.values, y=ytrain_categorical, batch_size=Xtrain.shape[0], validation_data=(Xvalid.values, yvalid_categorical), epochs=10, verbose=0) plot_curves(history.history['loss'], history.history['val_loss'], label='Loss') plot_curves(history.history['accuracy'], history.history['val_accuracy'], label='Accuracy') np.mean(history.history['val_accuracy'][-5:]) # TODO 2a model = Sequential() # Reshape inputs to pass through RNN layer. model.add(Reshape(target_shape=[N_TIME_STEPS, 1])) model.add(LSTM(N_TIME_STEPS // 8, activation='relu', return_sequences=False)) model.add(Dense(units=N_LABELS, activation='softmax', kernel_regularizer=tf.keras.regularizers.l1(l=0.1))) # Create the model. 
model.compile(optimizer=Adam(lr=0.001), loss='categorical_crossentropy', metrics=['accuracy']) history = model.fit(x=Xtrain.values, y=ytrain_categorical, batch_size=Xtrain.shape[0], validation_data=(Xvalid.values, yvalid_categorical), epochs=40, verbose=0) plot_curves(history.history['loss'], history.history['val_loss'], label='Loss') plot_curves(history.history['accuracy'], history.history['val_accuracy'], label='Accuracy') np.mean(history.history['val_accuracy'][-5:]) # TODO 2b rnn_hidden_units = [N_TIME_STEPS // 16, N_TIME_STEPS // 32] model = Sequential() # Reshape inputs to pass through RNN layer. model.add(Reshape(target_shape=[N_TIME_STEPS, 1])) for layer in rnn_hidden_units[:-1]: model.add(GRU(units=layer, activation='relu', return_sequences=True)) model.add(GRU(units=rnn_hidden_units[-1], return_sequences=False)) model.add(Dense(units=N_LABELS, activation="softmax", kernel_regularizer=tf.keras.regularizers.l1(l=0.1))) model.compile(optimizer=Adam(lr=0.001), loss='categorical_crossentropy', metrics=['accuracy']) history = model.fit(x=Xtrain.values, y=ytrain_categorical, batch_size=Xtrain.shape[0], validation_data=(Xvalid.values, yvalid_categorical), epochs=50, verbose=0) plot_curves(history.history['loss'], history.history['val_loss'], label='Loss') plot_curves(history.history['accuracy'], history.history['val_accuracy'], label='Accuracy') np.mean(history.history['val_accuracy'][-5:]) # TODO 3a model = Sequential() # Reshape inputs for convolutional layer model.add(Reshape(target_shape=[N_TIME_STEPS, 1])) model.add(Conv1D(filters=20, kernel_size=4, strides=2, padding="valid", input_shape=[None, 1])) model.add(MaxPool1D(pool_size=2, strides=None, padding='valid')) model.add(LSTM(units=N_TIME_STEPS//2, return_sequences=False, kernel_regularizer=tf.keras.regularizers.l1(l=0.1))) model.add(Dense(units=N_LABELS, activation="softmax")) model.compile(optimizer=Adam(lr=0.001), loss='categorical_crossentropy', metrics=['accuracy']) history = model.fit(x=Xtrain.values, y=ytrain_categorical, batch_size=Xtrain.shape[0], validation_data=(Xvalid.values, yvalid_categorical), epochs=30, verbose=0) plot_curves(history.history['loss'], history.history['val_loss'], label='Loss') plot_curves(history.history['accuracy'], history.history['val_accuracy'], label='Accuracy') np.mean(history.history['val_accuracy'][-5:]) # TODO 3b rnn_hidden_units = [N_TIME_STEPS // 32, N_TIME_STEPS // 64] model = Sequential() # Reshape inputs and pass through RNN layer. model.add(Reshape(target_shape=[N_TIME_STEPS, 1])) for layer in rnn_hidden_units: model.add(LSTM(layer, return_sequences=True)) # Apply 1d convolution to RNN outputs. model.add(Conv1D(filters=5, kernel_size=3, strides=2, padding="valid")) model.add(MaxPool1D(pool_size=4, strides=None, padding='valid')) # Flatten the convolution output and pass through DNN. 
model.add(tf.keras.layers.Flatten()) model.add(Dense(units=N_TIME_STEPS // 32, activation="relu", kernel_regularizer=tf.keras.regularizers.l1(l=0.1))) model.add(Dense(units=N_LABELS, activation="softmax", kernel_regularizer=tf.keras.regularizers.l1(l=0.1))) model.compile(optimizer=Adam(lr=0.001), loss='categorical_crossentropy', metrics=['accuracy']) history = model.fit(x=Xtrain.values, y=ytrain_categorical, batch_size=Xtrain.shape[0], validation_data=(Xvalid.values, yvalid_categorical), epochs=80, verbose=0) plot_curves(history.history['loss'], history.history['val_loss'], label='Loss') plot_curves(history.history['accuracy'], history.history['val_accuracy'], label='Accuracy') np.mean(history.history['val_accuracy'][-5:]) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Explore time series data Step3: The function clean_data below does three things Step6: Read data and preprocessing Step7: Let's plot a few examples and see that the preprocessing steps were implemented correctly. Step11: Make train-eval-test split Step12: Modeling Step14: To monitor training progress and compare evaluation metrics for different models, we'll use the function below to plot metrics captured from the training job such as training and validation loss or accuracy. Step15: Baseline Step16: Ok. So just naively guessing the most common outcome UP will give about 29.5% accuracy on the validation set. Step17: The accuracy seems to level out pretty quickly. To report the accuracy, we'll average the accuracy on the validation set across the last few epochs of training. Step18: Deep Neural Network Step19: Convolutional Neural Network Step20: Recurrent Neural Network Step21: Multi-layer RNN Step22: Combining CNN and RNN architecture Step23: We can also try building a hybrid model which uses a 1-dimensional CNN to create features from the outputs of an RNN.
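The ~29.5% figure quoted in Step16 is the majority-class baseline computed in the code above; restated a little more explicitly (reusing the notebook's ytrain and yvalid names), the same calculation is:

import numpy as np

# Predict the most frequent training label for every validation sample and
# measure the resulting accuracy.
majority_label = ytrain.value_counts().idxmax()
baseline_accuracy = np.mean(yvalid == majority_label)
print('Majority-class baseline accuracy: {:.3f}'.format(baseline_accuracy))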
3,232
<ASSISTANT_TASK:> Python Code: from pyspark import SparkContext sc = SparkContext(master = 'local') from pyspark.sql import SparkSession spark = SparkSession.builder \ .appName("Python Spark SQL basic example") \ .config("spark.some.config.option", "some-value") \ .getOrCreate() cuse = spark.read.csv('data/cuse_binary.csv', header=True, inferSchema=True) cuse.show(5) from pyspark.ml.feature import StringIndexer, OneHotEncoder, VectorAssembler from pyspark.ml import Pipeline # categorical columns categorical_columns = cuse.columns[0:3] stringindexer_stages = [StringIndexer(inputCol=c, outputCol='strindexed_' + c) for c in categorical_columns] # encode label column and add it to stringindexer_stages stringindexer_stages += [StringIndexer(inputCol='y', outputCol='label')] onehotencoder_stages = [OneHotEncoder(inputCol='strindexed_' + c, outputCol='onehot_' + c) for c in categorical_columns] feature_columns = ['onehot_' + c for c in categorical_columns] vectorassembler_stage = VectorAssembler(inputCols=feature_columns, outputCol='features') # all stages all_stages = stringindexer_stages + onehotencoder_stages + [vectorassembler_stage] pipeline = Pipeline(stages=all_stages) pipeline_model = pipeline.fit(cuse) final_columns = feature_columns + ['features', 'label'] cuse_df = pipeline_model.transform(cuse).\ select(final_columns) cuse_df.show(5) training, test = cuse_df.randomSplit([0.8, 0.2], seed=1234) from pyspark.ml.regression import GeneralizedLinearRegression from pyspark.ml.classification import LogisticRegression, DecisionTreeClassifier dt = DecisionTreeClassifier(featuresCol='features', labelCol='label') from pyspark.ml.tuning import ParamGridBuilder param_grid = ParamGridBuilder().\ addGrid(dt.maxDepth, [2,3,4,5]).\ build() from pyspark.ml.evaluation import BinaryClassificationEvaluator evaluator = BinaryClassificationEvaluator(rawPredictionCol="rawPrediction", metricName="areaUnderROC") from pyspark.ml.tuning import CrossValidator cv = CrossValidator(estimator=dt, estimatorParamMaps=param_grid, evaluator=evaluator, numFolds=4) cv_model = cv.fit(cuse_df) show_columns = ['features', 'label', 'prediction', 'rawPrediction', 'probability'] pred_training_cv = cv_model.transform(training) pred_training_cv.select(show_columns).show(5, truncate=False) pred_test_cv = cv_model.transform(test) pred_test_cv.select(show_columns).show(5, truncate=False) label_and_pred = cv_model.transform(cuse_df).select('label', 'prediction') label_and_pred.rdd.zipWithIndex().countByKey() print('The best MaxDepth is:', cv_model.bestModel._java_obj.getMaxDepth()) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Decision tree classification with pyspark Step2: Process categorical columns Step3: Build StringIndexer stages Step4: Build OneHotEncoder stages Step5: Build VectorAssembler stage Step6: Build pipeline model Step7: Fit pipeline model Step8: Transform data Step9: Split data into training and test datasets Step10: Build cross-validation model Step11: Parameter grid Step12: Evaluator Step13: Build cross-validation model Step14: Fit cross-validation model Step15: Prediction Step16: Prediction on training data Step17: Prediction on test data Step18: Confusion matrix Step19: Parameters from the best model
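As a small follow-up to the prediction steps, the same evaluator used for model selection can also score the held-out predictions; reusing the evaluator and pred_test_cv names from the code above, a sketch would be:

# Area under ROC on the held-out test predictions, using the evaluator that
# the CrossValidator optimised during model selection.
test_auc = evaluator.evaluate(pred_test_cv)
print('Test AUC:', test_auc)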
3,233
<ASSISTANT_TASK:> Python Code: from __future__ import unicode_literals, print_function import boto3 import json import numpy as np import pandas as pd import spacy from verta import Client client = Client('http://localhost:3000/') proj = client.set_project('Tweet Classification') expt = client.set_experiment('SpaCy') S3_BUCKET = "verta-starter" S3_KEY = "positive-english-tweets.csv" FILENAME = S3_KEY boto3.client('s3').download_file(S3_BUCKET, S3_KEY, FILENAME) import utils data = pd.read_csv(FILENAME).sample(frac=1).reset_index(drop=True) utils.clean_data(data) data.head() from verta.code import Notebook from verta.configuration import Hyperparameters from verta.dataset import S3 from verta.environment import Python code_ver = Notebook() # Notebook & git environment config_ver = Hyperparameters({'n_iter': 20}) dataset_ver = S3("s3://{}/{}".format(S3_BUCKET, S3_KEY)) env_ver = Python(Python.read_pip_environment()) # pip environment and Python version repo = client.set_repository('Tweet Classification') commit = repo.get_commit(branch='master') commit.update("notebooks/tweet-analysis", code_ver) commit.update("config/hyperparams", config_ver) commit.update("data/tweets", dataset_ver) commit.update("env/python", env_ver) commit.save("Update tweet dataset") commit nlp = spacy.load('en_core_web_sm') import training training.train(nlp, data, n_iter=20) run = client.set_experiment_run() run.log_model(nlp) run.log_commit( commit, { 'notebook': "notebooks/tweet-analysis", 'hyperparameters': "config/hyperparams", 'training_data': "data/tweets", 'python_env': "env/python", }, ) commit commit.revert() commit <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: ...and instantiate Verta's ModelDB Client. Step2: Prepare Data Step3: Capture and Version Model Ingredients Step4: You may verify through the Web App that this commit updates the dataset, as well as the Notebook. Step5: Revert Commit
3,234
<ASSISTANT_TASK:> Python Code: import numpy as np n=200 x_tr = np.linspace(0.0, 2.0, n) y_tr = np.exp(3*x_tr) import random mu, sigma = 0,50 random.seed(1) y = y_tr + np.random.normal(loc=mu, scale= sigma, size=len(x_tr)) import matplotlib.pyplot as plt %matplotlib inline plt.plot(x_tr,y,".",mew=3); plt.plot(x_tr, y_tr,"--r",lw=3); plt.xlabel('Explanatory variable (x)') plt.ylabel('Dependent variable (y)') ignored=plt.hist(y,30, color="g") import sklearn.linear_model as lm lr=lm.LinearRegression() #We can see that the dimensions indicated are different #In fact, the data in the second expression is "reshape" #This is necessary if we want to use the linear regression command with scikit learn #Otherwise, python send us a message error print np.shape(x_tr) print np.shape(x_tr[:, np.newaxis]) #We regress y on x, then estimate y lr.fit(x_tr[:, np.newaxis],y) y_hat=lr.predict(x_tr[:, np.newaxis]) plt.plot(x_tr,y,".",mew=2) plt.plot(x_tr, y_hat,"-g",lw=4, label='Estimations with linear regression') plt.xlabel('Explanatory variable (x)') plt.ylabel('Dependent variable (y)') plt.legend(bbox_to_anchor=(1.8, 1.03)) #And then fit the model lr.fit(x_tr[:, np.newaxis]**2,y) y_hat2=lr.predict(x_tr[:, np.newaxis]**2) #Let's check it out plt.plot(x_tr,y,".",mew=2); plt.plot(x_tr, y_hat,"-g",lw=4, label='Estimations with linear regression') plt.plot(x_tr, y_hat2,"-r",lw=4, label='Estimations with linear regression (Quadratic term)'); plt.xlabel('Explanatory variable (x)') plt.ylabel('Dependent variable (y)') plt.legend(bbox_to_anchor=(2.1, 1.03)) index=y>90 z=(1*(y>90)-0.5)*2 #print index, z #The tilt symbol ~ below means the opposite of the boolean value plt.figure() plt.plot(x_tr[index],z[index],".r",mew=3) plt.plot(x_tr[~index],z[~index],".b",mew=3) plt.ylim(-1.5,1.5) plt.xlabel('Explanatory variable (x)') plt.ylabel('Dependent variable (y)') lr.fit(x_tr[:, np.newaxis],z) z_hat=lr.predict(x_tr[:, np.newaxis]) #We define a threshold overwhat the z estimation will be considered as 1 threshold = 0 z_class= 2*(z_hat>threshold) - 1 #This function simply calculate the classification rate on the training set def plotbc(x, y, z): #Plot the classification plt.plot(x[z==1],z[z==1],".r", markersize=3, label='True positive') plt.plot(x[z==-1],z[z==-1],".b", markersize=3, label='True negative') #Plot the classification errors plt.plot(x[(z==-1) & (y==1)],z[(z==-1) & (y==1)],"^y", markersize=10, label='False negative') plt.plot(x[(z==1) & (y==-1)],z[(z==1) & (y==-1)],"^c", markersize=10, label='False positive') plt.legend(bbox_to_anchor=(1.55, 1.03)) plt.ylim(-1.5,1.5) #This function simply calculate the classification rate on the training set def precision(y, z): print "The classification rate is :" print np.mean(y==z) plotbc(x_tr, z, z_class) plt.plot(x_tr,z_hat,"-g",lw=1, label='Predictions by the linear regression model'); plt.legend(bbox_to_anchor=(2, 1.03)) plt.xlabel('Explanatory variable (x)') plt.ylabel('Dependent variable (y)') precision(z_class, z) from sklearn.metrics import confusion_matrix confusion_matrix(z,z_class)/float(len(z)) from sklearn import linear_model, datasets #The C parameter (Strictly positive) controls the regularization strength #Smaller values specify stronger regularization logreg = linear_model.LogisticRegression(C=1e5) logreg.fit(x_tr[:, np.newaxis], z) z_hat=logreg.predict(x_tr[:, np.newaxis]) plotbc(x_tr, z, z_hat) plt.plot(x_tr,z_hat,"-g",lw=1, label='Predictions by the logistic regression'); plt.legend(bbox_to_anchor=(2.3, 1.03)) confusion_matrix(z,z_hat)/float(len(z)) from 
sklearn.model_selection import train_test_split x_train, x_valid, z_train, z_valid = train_test_split(x_tr, z, test_size=0.2, random_state=3) clf = logreg.fit(x_train[:, np.newaxis], z_train) #z_hat_train=logreg.predict(x_train[:, np.newaxis]) #z_hat_test=logreg.predict(x_test[:, np.newaxis]) score_train = clf.score(x_train[:, np.newaxis], z_train) score_valid = clf.score(x_valid[:, np.newaxis], z_valid) print("The prediction error rate on the train set is : ") print(score_train) print("The prediction error rate on the test set is : ") print(score_valid) #Number of iterations n=1000 score_train_vec_log = np.zeros(n) score_valid_vec_log = np.zeros(n) #Loop of iterations for k in np.arange(n): x_train, x_valid, z_train, z_valid = train_test_split(x_tr, z, test_size=0.2, random_state=k) clf = logreg.fit(x_train[:, np.newaxis], z_train) score_train_vec_log[k] = clf.score(x_train[:, np.newaxis], z_train) score_valid_vec_log[k] = clf.score(x_valid[:, np.newaxis], z_valid) print("The average prediction error rate on the train set is : ") print(np.mean(score_train_vec_log)) print("The average prediction error rate on the test set is : ") print(np.mean(score_valid_vec_log)) img = plt.imread("../data/hyperplanes.png") plt.imshow(img) plt.axis("off") img = plt.imread("../data/maximal.margin.png") plt.imshow(img) plt.axis("off") img = plt.imread("../data/non.separable.png") plt.imshow(img) plt.axis("off") img = plt.imread("../data/support.vector.png") plt.imshow(img) plt.axis("off") img = plt.imread("../data/kernel.example.1.png") plt.imshow(img) plt.axis("off") img = plt.imread("../data/kernel.example.2.png") plt.imshow(img) plt.axis("off") img = plt.imread("../data/kernel.example.3.png") plt.imshow(img) plt.axis("off") n=100 np.random.seed(0) X=np.vstack((np.random.multivariate_normal([1,1],[[1,0],[0,1]] ,n), np.random.multivariate_normal([3,3],[[1,0],[0,1]] ,n))) Y =np.array([0] * n + [1] * n) index=(Y==0) plt.scatter(X[index,0], X[index,1], color="r", label='X1 distribution') plt.scatter(X[~index,0], X[~index,1], color="b", label='X2 distribution') plt.xlabel('First dimension') plt.ylabel('Second dimension') plt.legend(bbox_to_anchor=(1.5, 1.03)) from sklearn import svm clf = svm.SVC(kernel="rbf", gamma=2 ,C=10).fit(X,Y) Z=clf.predict(X) index=(Z==0) plt.scatter(X[index,0], X[index,1], edgecolors="b") plt.scatter(X[~index,0], X[~index,1], edgecolors="r") xx, yy = np.meshgrid(np.linspace(-3, 6, 500), np.linspace(-3, 6, 500)) Z = clf.decision_function(np.c_[xx.ravel(), yy.ravel()]) Z = Z.reshape(xx.shape) plt.contourf(xx, yy, Z, alpha=1, cmap=plt.cm.seismic) plt.scatter(X[:, 0], X[:, 1], c=Y, s=2, alpha=0.9, cmap=plt.cm.spectral) #Number of iterations n=1000 score_train_vec_svm = np.zeros(n) score_valid_vec_svm = np.zeros(n) #Loop of iterations for k in np.arange(n): x_train, x_valid, z_train, z_valid = train_test_split(x_tr, z, test_size=0.2, random_state=k) #Command for the SVM clf = svm.SVC(kernel='rbf', C=.1, gamma=3.2).fit(x_train[:, np.newaxis], z_train) score_train_vec_svm[k] = clf.score(x_train[:, np.newaxis], z_train) score_valid_vec_svm[k] = clf.score(x_valid[:, np.newaxis], z_valid) print("The SVM's average prediction error rate on the train set is : ") print(np.mean(score_train_vec_svm)) print("The SVM's average prediction error rate on the test set is : ") print(np.mean(score_valid_vec_svm)) print("The logistic regression's average prediction error rate on the train set is : ") print(np.mean(score_train_vec_log)) print("The logistic regression's average prediction error rate on the test 
set is : ") print(np.mean(score_valid_vec_log)) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: The red curve is defined by the function Step2: Let's fit a simple linear model on $y$ and $x$. Step3: Well, that's not really good... We can do better! Step4: Question Step5: Linear regression Step6: We create 2 functions. The first one, called plotbc, should plot the predictions made (and their accuracy) by the linear regression. The second one calculates the classification rate. Step7: We now call the functions previously defined. Step8: Let's compute the classification rate. Step9: But maybe we could get more information with the confusion matrix! Step10: Logistic regression Step11: The classification rate seems slightly better... Step12: This being said, we can now calculate the prediction error rate on the train and the test sets. Step13: We created the train and validation sets randomly. Hence, considering that the original dataset has a small number of observations, 200, the division of the data may favor either the train or the validation dataset. Step14: Support Vector Machines (SVM) Step15: The maximal margin hyperplane is shown as a solid black line. The margin is the distance from the solid line to either of the dashed lines. The two blue points and the purple point that lie on the dashed lines are the support vectors. The blue and the purple grid indicate the decision rule made by a classifier based on this separating hyperplane. Step16: Some motivations behind the SVM method have their roots in the linearly separable concept. Sometimes, the data is not linearly separable. Thus we can't use a maximal margin classifier. Step17: A good strategy could be to consider a classifier based on a hyperplane that does not perfectly separate the two classes. Thus, it could be worthwhile to misclassify some observations in order to do a better job in classifying the remaining observations. We call this technique the support vector classifier (with soft margin). Step18: Sometimes, good margins don't even exist and support vector classifiers are useless. Step19: In this specific case, a smart strategy would be to enlarge the feature space with a non-linear transformation. Then, find a good margin. Step20: The new margin (in $\mathbb{R}^2$) corresponds to the following margins in $\mathbb{R}$. Step21: Support Vector Machines (SVM) with RBF kernel Step22: For the list of colormap options see colormap help, and to learn more about SVM and related options check svm tutorial and support vector classification (svc) examples.
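The notebook above estimates generalisation with a manual loop over 1000 random splits; a hedged alternative sketch using scikit-learn's built-in cross-validation (reusing x_tr and z from the code above, and the same hyperparameters) could look like this. It is an alternative, not what the notebook does.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

# k-fold cross-validated accuracy for the two classifiers compared above.
X = x_tr[:, np.newaxis]
log_scores = cross_val_score(LogisticRegression(C=1e5), X, z, cv=10)
svm_scores = cross_val_score(SVC(kernel='rbf', C=0.1, gamma=3.2), X, z, cv=10)
print('Logistic regression CV accuracy: %.3f' % log_scores.mean())
print('RBF-SVM CV accuracy: %.3f' % svm_scores.mean())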
3,235
<ASSISTANT_TASK:> Python Code: %matplotlib inline from matplotlib import pyplot as plt import desolver as de import desolver.backend as D D.set_float_fmt('float64') def Fij(ri, rj, G): rel_r = rj - ri return G*(1/D.norm(rel_r, ord=2)**3)*rel_r def rhs(t, state, masses, G): total_acc = D.zeros_like(state) for idx, (ri, mi) in enumerate(zip(state, masses)): for jdx, (rj, mj) in enumerate(zip(state[idx+1:], masses[idx+1:])): partial_force = Fij(ri[:3], rj[:3], G) total_acc[idx, 3:] += partial_force * mj total_acc[idx+jdx+1, 3:] -= partial_force * mi total_acc[:, :3] = state[:, 3:] return total_acc Msun = 1.98847*10**30 ## Mass of the Sun, kg AU = 149597871e3 ## 1 Astronomical Unit, m year = 365.25*24*3600 ## 1 year, s G = 4*D.pi**2 ## in solar masses, AU, years V = D.sqrt(G) ## Speed scale corresponding to the orbital speed required for a circular orbit at 1AU with a period of 1yr initial_state = D.array([ [0.0, 0.0, 1.0, 0.0, -1.0, 0.0], [1.0, 0.0, 0.0, 0.0, 1.0, 0.0], [0.25, 0.9682458365518543, 0.0, 0.9682458365518543*0, -0.25*0, 0.0], [-0.5, -0.8660254037844386, 0.0, -0.8660254037844386*0, 0.5*0, 0.0], ]) masses = D.array([ 1, 1, 1, 1, ]) rhs(0.0, initial_state, masses, G) a = de.OdeSystem(rhs, y0=initial_state, dense_output=True, t=(0, 2.0), dt=0.00001, rtol=1e-14, atol=1e-14, constants=dict(G=G, masses=masses)) a.method = "RK1412" a.integrate() fig = plt.figure(figsize=(16,16)) com_motion = D.sum(a.y[:, :, :] * masses[None, :, None], axis=1) / D.sum(masses) fig = plt.figure(figsize=(16,16)) ax1 = fig.add_subplot(131, aspect=1) ax2 = fig.add_subplot(132, aspect=1) ax3 = fig.add_subplot(133, aspect=1) ax1.set_xlabel("x (AU)") ax1.set_ylabel("y (AU)") ax2.set_xlabel("y (AU)") ax2.set_ylabel("z (AU)") ax3.set_xlabel("z (AU)") ax3.set_ylabel("x (AU)") for i in range(a.y.shape[1]): ax1.plot(a.y[:, i, 0], a.y[:, i, 1], color=f"C{i}") ax2.plot(a.y[:, i, 1], a.y[:, i, 2], color=f"C{i}") ax3.plot(a.y[:, i, 2], a.y[:, i, 0], color=f"C{i}") ax1.scatter(com_motion[:, 0], com_motion[:, 1], color='k') ax2.scatter(com_motion[:, 1], com_motion[:, 2], color='k') ax3.scatter(com_motion[:, 2], com_motion[:, 0], color='k') plt.tight_layout() def close_encounter(t, state, masses, G): distances_between_bodies = [] total_mass = D.sum(masses) center_of_mass = D.sum(state[:, :3] * masses[:, None], axis=1) / total_mass com_distances = D.norm(state[:, :3] - center_of_mass[:, None], axis=1) hill_radii = com_distances * D.pow(masses/(3*total_mass), 1/3) for idx,ri in enumerate(state[:, :3]): for jdx, rj in enumerate(state[idx+1:, :3]): distances_between_bodies.append(D.norm(ri - rj) - D.min([hill_radii[idx], hill_radii[jdx]])/2.0) return D.min(distances_between_bodies) a.reset() a.integrate(events=close_encounter) fig = plt.figure(figsize=(16,16)) com_motion = D.sum(a.y[:, :, :] * masses[None, :, None], axis=1) / D.sum(masses) fig = plt.figure(figsize=(16,16)) ax1 = fig.add_subplot(131, aspect=1) ax2 = fig.add_subplot(132, aspect=1) ax3 = fig.add_subplot(133, aspect=1) ax1.set_xlabel("x (AU)") ax1.set_ylabel("y (AU)") ax2.set_xlabel("y (AU)") ax2.set_ylabel("z (AU)") ax3.set_xlabel("z (AU)") ax3.set_ylabel("x (AU)") for i in range(a.y.shape[1]): ax1.plot(a.y[:, i, 0], a.y[:, i, 1], color=f"C{i}", alpha=0.33) ax2.plot(a.y[:, i, 1], a.y[:, i, 2], color=f"C{i}", alpha=0.33) ax3.plot(a.y[:, i, 2], a.y[:, i, 0], color=f"C{i}", alpha=0.33) for j in a.events: ax1.scatter(j.y[i, 0], j.y[i, 1], c=f"C{i}", marker='x', alpha=1.0) ax2.scatter(j.y[i, 1], j.y[i, 2], c=f"C{i}", marker='x', alpha=1.0) ax3.scatter(j.y[i, 2], j.y[i, 
0], c=f"C{i}", marker='x', alpha=1.0) ax1.scatter(com_motion[:, 0], com_motion[:, 1], color='k') ax2.scatter(com_motion[:, 1], com_motion[:, 2], color='k') ax3.scatter(com_motion[:, 2], com_motion[:, 0], color='k') plt.tight_layout() from matplotlib import animation, rc # set to location of ffmpeg to get animations working # For Linux or Mac # plt.rcParams['animation.ffmpeg_path'] = '/usr/bin/ffmpeg' # For Windows plt.rcParams['animation.ffmpeg_path'] = 'C:\\ProgramData\\chocolatey\\bin\\ffmpeg.exe' from IPython.display import HTML %%capture # This magic command prevents the creation of a static figure image so that we can view the animation in the next cell t = a.t all_states = a.y planets = [all_states[:, i, :] for i in range(all_states.shape[1])] com_motion = D.sum(all_states * masses[None, :, None], axis=1) / D.sum(masses) plt.ioff() fig = plt.figure(figsize=(16,8)) ax1 = fig.add_subplot(131, aspect=1) ax2 = fig.add_subplot(132, aspect=1) ax3 = fig.add_subplot(133, aspect=1) ax1.set_xlabel("x (AU)") ax1.set_ylabel("y (AU)") ax2.set_xlabel("y (AU)") ax2.set_ylabel("z (AU)") ax3.set_xlabel("z (AU)") ax3.set_ylabel("x (AU)") xlims = D.abs(a.y[:, :, 0]).max() ylims = D.abs(a.y[:, :, 1]).max() zlims = D.abs(a.y[:, :, 2]).max() ax1.set_xlim(-xlims-0.25, xlims+0.25) ax2.set_xlim(-ylims-0.25, ylims+0.25) ax3.set_xlim(-zlims-0.25, zlims+0.25) ax1.set_ylim(-ylims-0.25, ylims+0.25) ax2.set_ylim(-zlims-0.25, zlims+0.25) ax3.set_ylim(-xlims-0.25, xlims+0.25) planets_pos_xy = [] planets_pos_yz = [] planets_pos_zx = [] planets_xy = [] planets_yz = [] planets_zx = [] com_xy, = ax1.plot([], [], color='k', linestyle='', marker='o', markersize=5.0, zorder=10) com_yz, = ax2.plot([], [], color='k', linestyle='', marker='o', markersize=5.0, zorder=10) com_zx, = ax3.plot([], [], color='k', linestyle='', marker='o', markersize=5.0, zorder=10) event_counter = 0 close_encounter_xy = [] close_encounter_yz = [] close_encounter_zx = [] for i in range(len(planets)): close_encounter_xy.append(ax1.plot([], [], color=f"k", marker='x', markersize=3.0, linestyle='', zorder=9)[0]) close_encounter_yz.append(ax2.plot([], [], color=f"k", marker='x', markersize=3.0, linestyle='', zorder=9)[0]) close_encounter_zx.append(ax3.plot([], [], color=f"k", marker='x', markersize=3.0, linestyle='', zorder=9)[0]) for i in range(a.y.shape[1]): planets_xy.append(ax1.plot([], [], color=f"C{i}", zorder=8)[0]) planets_yz.append(ax2.plot([], [], color=f"C{i}", zorder=8)[0]) planets_zx.append(ax3.plot([], [], color=f"C{i}", zorder=8)[0]) planets_pos_xy.append(ax1.plot([], [], color=f"C{i}", linestyle='', marker='.', zorder=8)[0]) planets_pos_yz.append(ax2.plot([], [], color=f"C{i}", linestyle='', marker='.', zorder=8)[0]) planets_pos_zx.append(ax3.plot([], [], color=f"C{i}", linestyle='', marker='.', zorder=8)[0]) def init(): global event_counter for i in range(len(planets)): planets_xy[i].set_data([], []) planets_yz[i].set_data([], []) planets_zx[i].set_data([], []) planets_pos_xy[i].set_data([], []) planets_pos_yz[i].set_data([], []) planets_pos_zx[i].set_data([], []) com_xy.set_data([], []) com_yz.set_data([], []) com_zx.set_data([], []) for i in range(len(planets)): close_encounter_xy[i].set_data(a.events[event_counter].y[i, 0], a.events[event_counter].y[i, 1]) close_encounter_yz[i].set_data(a.events[event_counter].y[i, 1], a.events[event_counter].y[i, 2]) close_encounter_zx[i].set_data(a.events[event_counter].y[i, 2], a.events[event_counter].y[i, 0]) return tuple(planets_xy + planets_yz + planets_zx + planets_pos_xy + planets_pos_yz + 
planets_pos_zx + [com_xy, com_yz, com_zx] + [close_encounter_xy, close_encounter_yz, close_encounter_zx]) def animate(frame_num): global event_counter for i in range(len(planets)): planets_xy[i].set_data(planets[i][max(frame_num-5, 0):frame_num, 0], planets[i][max(frame_num-5, 0):frame_num, 1]) planets_yz[i].set_data(planets[i][max(frame_num-5, 0):frame_num, 1], planets[i][max(frame_num-5, 0):frame_num, 2]) planets_zx[i].set_data(planets[i][max(frame_num-5, 0):frame_num, 2], planets[i][max(frame_num-5, 0):frame_num, 0]) planets_pos_xy[i].set_data(planets[i][frame_num:frame_num+1, 0], planets[i][frame_num:frame_num+1, 1]) planets_pos_yz[i].set_data(planets[i][frame_num:frame_num+1, 1], planets[i][frame_num:frame_num+1, 2]) planets_pos_zx[i].set_data(planets[i][frame_num:frame_num+1, 2], planets[i][frame_num:frame_num+1, 0]) com_xy.set_data(com_motion[frame_num:frame_num+1, 0], com_motion[frame_num:frame_num+1, 1]) com_yz.set_data(com_motion[frame_num:frame_num+1, 1], com_motion[frame_num:frame_num+1, 2]) com_zx.set_data(com_motion[frame_num:frame_num+1, 2], com_motion[frame_num:frame_num+1, 0]) if t[frame_num] >= a.events[event_counter].t and event_counter + 1 < len(a.events): event_counter += 1 for i in range(len(planets)): close_encounter_xy[i].set_data(a.events[event_counter].y[i, 0], a.events[event_counter].y[i, 1]) close_encounter_yz[i].set_data(a.events[event_counter].y[i, 1], a.events[event_counter].y[i, 2]) close_encounter_zx[i].set_data(a.events[event_counter].y[i, 2], a.events[event_counter].y[i, 0]) return tuple(planets_xy + planets_yz + planets_zx + planets_pos_xy + planets_pos_yz + planets_pos_zx + [com_xy, com_yz, com_zx] + [close_encounter_xy, close_encounter_yz, close_encounter_zx]) ani = animation.FuncAnimation(fig, animate, list(range(1, len(t))), interval=1500./60., blit=False, init_func=init) rc('animation', html='html5') # Uncomment to save an mp4 video of the animation # ani.save('Nbodies.mp4', fps=60) display(ani) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Specifying the Dynamical System Step2: NOTE Step3: I've added 3 massive bodies at the ends of a scalene triangle Step4: The Numerical Integration Step5: Close Encounters and Event Detection Step6: We see that there are many close encounters and furthermore, the encounters are not restricted to any particular pairs of bodies, but sometimes happen with three bodies simultaneously. We will see this better in the next section where we look at an animation of the bodies. Step7: Here we see that the animation slows down whenever the bodies come close to each other and this is due to the adaptive timestepping of the numerical integration which takes more steps whenever there is a close encounter. Each "x" marks the point of all the bodies whenever there is a close encounter.
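The close-encounter event in the code above triggers when two bodies come within half of the smaller of their Hill radii; as a standalone reminder of that quantity (scalar inputs assumed), the Hill radius used there is:

def hill_radius(d_com, m, m_total):
    # Hill radius of a body of mass m at distance d_com from the system's
    # centre of mass, with m_total the total system mass (same formula as the
    # hill_radii line in the event function above).
    return d_com * (m / (3.0 * m_total)) ** (1.0 / 3.0)

# e.g. an Earth-mass body at 1 AU around a solar-mass star (units: AU, Msun)
print(hill_radius(1.0, 3.0e-6, 1.0))   # roughly 0.01 AU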
3,236
<ASSISTANT_TASK:> Python Code: %matplotlib inline import matplotlib.pyplot as plt import seaborn as sns import pandas as pd import numpy as np from sklearn import datasets, metrics, model_selection, preprocessing, pipeline import warnings warnings.simplefilter(action='ignore', category=FutureWarning) import autosklearn.classification wine = datasets.load_wine() print(wine.DESCR) X = pd.DataFrame(wine.data, columns=wine.feature_names) y = wine.target X_train, X_test, y_train, y_test = model_selection.train_test_split(X, y, train_size=0.5, stratify=y) df_train = pd.DataFrame(y_train, columns=['target']) df_train['type'] = 'train' df_test = pd.DataFrame(y_test, columns=['target']) df_test['type'] = 'test' df_set = df_train.append(df_test) _ = sns.countplot(x='target', hue='type', data=df_set) print('train samples:', len(X_train)) print('test samples', len(X_test)) model = autosklearn.classification.AutoSklearnClassifier(time_left_for_this_task=30, ensemble_size=3) %%capture # ignore oput from model fit with capture magic command model.fit(X_train, y_train) for m in model.get_models_with_weights(): print(m) predicted = model.predict(X_test) confusion_matrix = pd.DataFrame(metrics.confusion_matrix(y_test, predicted)) confusion_matrix _ = sns.heatmap(confusion_matrix, annot=True, cmap="Blues") print("accuracy: {:.3f}".format(metrics.accuracy_score(y_test, predicted))) print("precision: {:.3f}".format(metrics.precision_score(y_test, predicted, average='weighted'))) print("recall: {:.3f}".format(metrics.recall_score(y_test, predicted, average='weighted'))) print("f1 score: {:.3f}".format(metrics.f1_score(y_test, predicted, average='weighted'))) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Note Step2: Print the final ensemble constructed by auto-sklearn
3,237
<ASSISTANT_TASK:> Python Code: %matplotlib inline import pandas as pd import numpy as np from scipy import stats import seaborn as sns from matplotlib import pyplot as plt sns.set_style('white') data = pd.io.stata.read_stata('data/us_job_market_discrimination.dta') # number of callbacks for black-sounding names print(sum(data[data.race=='b'].call)) # number of callbacks for white-sounding names print(sum(data[data.race=='w'].call)) # difference sum(data[data.race=='w'].call) - sum(data[data.race=='b'].call) sns.countplot(data.race) plt.show() sns.countplot(data.call) plt.show() print(sum(data.race == 'w')) print(sum(data.race == 'b')) # 1. A permutation test to see whether the difference can be based on coincidence. # No, CLT does not apply, there are only 2 values, not multiple values from which you extract a mean and std. # We can use the permuted distribution which will be normally distributed and CLT will apply there. (more than 30 samples) # On the other hand we could see it as a proportion of callbacks for two populations with n=2435 and k=#calls # In that way CLT does apply. n>30, hence assume normal distribution. So do Z-test. # Can't find a package that made a Z-test, hence I'll be using the T-test instead (gives similar results with many samples) # 2. H0: Race has no effect on callbask. H1: race has an effect on callback. # The question is whether race has a significant impact, not whether being black has a significant impact, # hence test is two-sided. from numpy.random import permutation def permutate(X): new_array = permutation(X) return sum(new_array[0:2435]) - sum(new_array[2435::]) # calculate difference between first group and second difference = [] for i in range(0,100000): difference.append(permutate(data.call)) # Confidence interval 95%, # our result is very much outside the confidence interval of the difference between two groups print(np.percentile(difference, [2.5, 97.5])) sns.distplot(difference) # permuted data, normally distributed (CLT applies on this) # Margin of error with Z-table # Critical value is 1.96 in the Z-statistic for 0.95% (more than 30 samples and not skewed, hence normally distributed) print(1.96 * np.std(difference)) # hence our value is outside the margin of error # margin of error is the difference between the border of the confidence interval and the mean, # which is 0 in this case, hence margin of error that calculated that way is 38. np.percentile(difference, [2.5, 97.5])[1] - np.mean(difference) diff = sum(data[data.race=='w'].call) - sum(data[data.race=='b'].call) times = sum(difference > diff) + sum(difference < -diff) # times the difference is bigger than the found difference print(times) print(times / 100000) # p-value, hence clearly significant # Alle measurements lead to the conclusion that it's very unlikely that our value would come from the permuted distribution. # Therefore race is concluded to have an effect on callback. nw = sum(data.race == 'w') nb = sum(data.race == 'b') kw = sum(data[data.race=='w'].call) kb = sum(data[data.race=='b'].call) pw = kw/nw pb = kb/nb pw - pb # difference in means # You should actually use the Z-test, since it's normally distributed and over 30 samples, # but T-test gives similar results. 
from scipy.stats import ttest_ind ttest_ind(data[data.race=='w'].call, data[data.race=='b'].call) # p-value clearly significant # 95% confidence interval print((pw - pb) - 1.96 * np.sqrt(((pw*(1-pw))/nw) + ((pb*(1-pb))/nb))) # lower limit print((pw - pb) + 1.96 * np.sqrt(((pw*(1-pw))/nw) + ((pb*(1-pb))/nb))) # upper limit # No difference: 0, lies outside the confidence interval, hence race seems to have an effect # margin of error (pw - pb) + 1.96 * np.sqrt(((pw*(1-pw))/nw) + ((pb*(1-pb))/nb)) - (pw - pb) # Our mean is 0.03, hence outside the margin of error, if the true mean would be 0. # Also from this calculation it's clear that it's very likely that race has an effect on callback. <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Permutation Step2: T-test
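The comments in the code above note that a t-test was used only because no z-test helper was at hand; for reference, statsmodels ships a two-proportion z-test, and a sketch reusing the kw, kb, nw and nb counts from the code would be:

from statsmodels.stats.proportion import proportions_ztest

# Two-proportion z-test for callback rates of white- vs black-sounding names.
counts = [kw, kb]   # callbacks per group
nobs = [nw, nb]     # resumes sent per group
z_stat, p_value = proportions_ztest(counts, nobs)
print('z = %.3f, p = %.3g' % (z_stat, p_value))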
3,238
<ASSISTANT_TASK:> Python Code: w, h, b, d, c1, c2, k1, k2, r_sys, r_ref = symbols("w, h, b, d, c_1, c_2, k_1, k_2, r_{sys}, r_{ref}", real=True) # Constraints for hyperboloids: k1_constraint = k1 > 2 k2_constraint = k2 > 2 c1_constraint = c1 > 0 c2_constraint = c2 > 0 xw, yw, zw = symbols("x_w, y_w, z_w", real=True) # Local image: # Image(filename='../images/geometric_model.png') h_eqn = abs((c1/2)+(c2/2) - d + sqrt((k1-2)*(c1)**(2)/(4*k1)+ (w**2)*(1+k1/2))+ sqrt((k2-2)*(c2)**(2)/(4*k2)+ (w**2)*(1+k2/2))) h_eqn display(simplify(h_eqn)) baseline_eqn = b-(c1 + c2 - d) pp(baseline_eqn) #print(sympy.latex(h_eqn)) #h_eqn #print(sympy.latex(baseline_eqn)) baseline_sln = solve(baseline_eqn, b) display(baseline_sln) x1, y1, z1 = symbols("x_1, y_1, z_1", real=True) mirror_1_eqn = (z1 - c1/2)**2 - (x1**2 + y1**2)*(k1/2 - 1) - (c1**2/4)*((k1 - 2)/k1) display(expand(mirror_1_eqn)) mirror_1_eqn_top = z1 - c1/2 - abs(sqrt((x1**2 + y1**2)*(k1/2 - 1) + (c1**2/4)*((k1 - 2)/k1))) display(mirror_1_eqn_top) display(expand(mirror_1_eqn_top)) lambda1, s1 = symbols("lambda_1, s_1", real=True) pos_F1 = (0, 0, c1) x1_on_line = xw + s1 * (pos_F1[0]-xw) display(x1_on_line) y1_on_line = yw + s1 * (pos_F1[1]-yw) display(y1_on_line) z1_on_line = zw + s1 * (pos_F1[2]-zw) display(z1_on_line) s1_eqn = 1 - lambda1 xr1_on_line = simplify(x1_on_line.subs({s1: s1_eqn})) display(xr1_on_line) yr1_on_line = simplify(y1_on_line.subs({s1: s1_eqn})) display(yr1_on_line) zr1_on_line = expand(z1_on_line.subs({s1: s1_eqn})) display(zr1_on_line) mirror_1_intersection = mirror_1_eqn_top.subs({x1: xr1_on_line, y1: yr1_on_line, z1:zr1_on_line}) display(mirror_1_intersection) lambda1_sln = solve([mirror_1_intersection], lambda1) lambda1_sln_minus = simplify(lambda1_sln[0]) display(lambda1_sln_minus) lambda1_sln_plus = together(lambda1_sln[1]) display(lambda1_sln_plus) lambda1_func_minus = lambdify([c1, k1, xw, yw, zw], lambda1_sln_minus, modules=['numpy']) lambda1_func_plus = lambdify([c1, k1, xw, yw, zw], lambda1_sln_plus, modules=['numpy']) L1w = sqrt(xw**2 + yw**2 + (c1-zw)**2) lambda1_simple = c1/(L1w*sqrt(k1*(k1-2)) + k1*(c1-zw)) display(lambda1_simple) lambda1_simple_den = (abs(sqrt(k1*(k1-2)*xw**2 + yw**2 + (c1-zw)**2)) + k1*(c1-zw)) den1_simple = lambda1_simple_den*lambda1_simple_den x2, y2, z2, = symbols("x_2, y_2, z_2", real=True) mirror_2_eqn = (z2 - (d-c2/2))**2 - (x2**2 + y2**2)*(k2/2 - 1) - (c2**2/4)*((k2 - 2)/k2) display(expand(mirror_2_eqn)) mirror_2_eqn_bottom = z2 - d + c2/2 + abs(sqrt((x2**2 + y2**2)*(k2/2 - 1) + (c2**2/4)*((k2 - 2)/k2))) display(mirror_2_eqn_bottom) lambda2, s2 = symbols("lambda_2, s_2", real=True) pos_F2 = (0, 0, d - c2) x2_on_line = xw + s2 * (pos_F2[0]-xw) display(x2_on_line) y2_on_line = yw + s2 * (pos_F2[1]-yw) display(y2_on_line) z2_on_line = zw + s2 * (pos_F2[2] - zw) display(z2_on_line) s2_eqn = 1 - lambda2 xr2_on_line = simplify(x2_on_line.subs({s2: s2_eqn})) display(xr2_on_line) yr2_on_line = simplify(y2_on_line.subs({s2: s2_eqn})) display(yr2_on_line) zr2_on_line = simplify(z2_on_line.subs({s2: s2_eqn})) display(zr2_on_line) display(expand(zr2_on_line)) display(zw*lambda2 + (d-c2)*(1-lambda2)) mirror_2_intersection = mirror_2_eqn_bottom.subs({x2: xr2_on_line, y2: yr2_on_line, z2:zr2_on_line}) display(mirror_2_intersection) lambda2_sln = solve([mirror_2_intersection], lambda2) lambda2_sln_minus = simplify(lambda2_sln[0]) display(lambda2_sln_minus) lambda2_sln_plus = simplify(lambda2_sln[1]) display(lambda2_sln_plus) display(lambda1_sln_plus) lambda2_func_minus = lambdify([c2, k2, xw, yw, zw, 
d], lambda2_sln_minus, modules=['numpy']) lambda2_func_plus = lambdify([c2, k2, xw, yw, zw, d], lambda2_sln_plus, modules=['numpy']) display(expand(d - lambda2*zw - (d - c2)*(1 - lambda2))) display(expand(c2 + lambda2*(d - c2 - zw))) theta_1_max_eqn = -atan((c1* sqrt(k1)-sqrt(-2+k1)* sqrt(c1**2 + 2* k1 * r_sys**2))/(2*sqrt(k1) * r_sys)) display(ratsimp(theta_1_max_eqn)) theta_1_min_eqn = atan((-d/2+c1)/(r_ref)) display(cancel(theta_1_min_eqn)) theta_2_min_eqn = atan((-c2/2 + sqrt((c2**2 *(-2+k2))/(4*k2)+r_sys**2 * (-1+k2/2)))/r_sys) display(trigsimp(theta_2_min_eqn)) display(cancel(theta_2_min_eqn)) q1, xq1, yq1, zq1, t1, theta1, phi1 = symbols("q_1, x_q1, y_q1, z_q1, t_1, theta_1, phi_1", real=True) x1_bp_line_eqn = t1*xq1 y1_bp_line_eqn = t1*yq1 z1_bp_line_eqn = t1 mirror1_bp_intersection = mirror_1_eqn_top.subs({x1: x1_bp_line_eqn, y1: y1_bp_line_eqn, z1:z1_bp_line_eqn}) display(mirror1_bp_intersection) t1_sln = solve([mirror1_bp_intersection], t1) t1_sln_minus = simplify(t1_sln[0]) display(t1_sln_minus) t1_sln_plus = together(t1_sln[1]) display(t1_sln_plus) q2, xq2, yq2, zq2, t2, theta2, phi2 = symbols("q_2, x_q2, y_q2, z_q2, t_2, theta_2, phi_2", real=True) x2_bp_line_eqn = t2*xq2 y2_bp_line_eqn = t2*yq2 z2_bp_line_eqn = d-t2 mirror2_bp_intersection = mirror_2_eqn_bottom.subs({x2: x2_bp_line_eqn, y2: y2_bp_line_eqn, z2:z2_bp_line_eqn}) display(mirror2_bp_intersection) t2_sln = solve([mirror2_bp_intersection], t2) t2_sln_minus = simplify(t2_sln[0]) display(t2_sln_minus) t2_sln_plus = together(t2_sln[1]) display(t2_sln_plus) theta1, theta2 = symbols("theta_1, theta_2", real=True) D_eqn = simplify(-B*cos(theta1)*cos(theta2)/(sin(theta2)*cos(theta1)+sin(theta1)*cos(theta2))) display(D_eqn) D_eqn_2 = B*sin(pi/2-theta1)*sin(pi/2-theta2)/sin(pi-theta1-theta2) display(D_eqn_2) D_eqn == D_eqn_2 <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: System Height Step2: Lens Hole Radius Step3: Line passing through points $P_w$ and $F_1$ Step4: Let $\lambda_1 = 1 - s_1$, so that Step5: Solving for intersection point $P_1$ Step6: $\lambda_1$ can be simplified to Step7: Mirror 2 (Bottom) Step8: Line passing through points $P_w$ and $F_2$ Step9: Let $\lambda_2 = 1 - s_2$, so that Step10: Solving for intersection point $P_2$ Step11: $\lambda_2$ can be simplified to Step12: Field of View Angles Step13: From Geometry Expressions, we obtain Step14: From Geometry Expressions, we obtain Step15: Back Projections Step16: Simplifying the + solution of $t_1$, we get Step17: Test and Plot Forward Projections
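For readability, the simplified form that the code stores in lambda1_simple (the result referenced in Step6) can be written out explicitly; the symbols are the same SymPy symbols defined in the code above:

$$\lambda_1 = \frac{c_1}{L_{1w}\,\sqrt{k_1 (k_1 - 2)} + k_1\,(c_1 - z_w)}, \qquad L_{1w} = \sqrt{x_w^2 + y_w^2 + (c_1 - z_w)^2}$$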
3,239
<ASSISTANT_TASK:> Python Code: def fit_normal_to_hist(h): if not all(h==0): bins =np.array([-2.0,-1.0,-0.5,0.0,0.5,1.0,1.5,2.0,2.5,3.0,3.5,4.0,5.0]) orig_hist = np.array(h).astype(float) norm_hist = orig_hist/float(sum(orig_hist)) mid_points = (bins[1:] + bins[:-1])/2 popt,pcov = opt.curve_fit(lambda x,mu,sig: stats.norm.pdf(x,mu,sig), mid_points,norm_hist) else: popt = [float('nan'),float('nan')] return popt[1] def ZL_std(h): intervals =[[-2.0,-1.1],[-1.0,-0.6],[-0.5,-0.1],[0.0,0.4],[0.5,0.9],[1.0,1.4], [1.5,1.9],[2.0,2.4],[2.5,2.9],[3.0,3.4],[3.5,3.9],[4.0,5.0]] if not all(h==0): sum_i1 = 0 sum_i2 = 0 for i in range(1,len(h)): p = h[i]/100 v1,v2 = intervals[i] sum_i1 += p*(v2**3 - v1**3)/(3*(v2-v1)) sum_i2 += p*(v2**2 - v1**2)/(2*(v2-v1)) zl_std = np.sqrt(sum_i1 - sum_i2**2) else: zl_std = float('nan') return zl_std def Hist_std(h): if not all(h==0): bins =np.array([-2.0,-1.0,-0.5,0.0,0.5,1.0,1.5,2.0,2.5,3.0,3.5,4.0,5.0]) orig_hist = np.array(h).astype(float) norm_hist = orig_hist/float(sum(orig_hist)) mid_points = (bins[1:] + bins[:-1])/2 MeanCrude = np.dot(norm_hist,mid_points) VarCrude = np.dot(norm_hist,(mid_points-MeanCrude)**2) bin_widths = np.diff(bins) BinWidth = bin_widths.mean() VarSheppard = VarCrude - (BinWidth**2)/12 #variance, Sheppard's correction hist_std = np.sqrt(VarSheppard) else: hist_std = float('nan') return hist_std mask = df.columns.str.contains(',') mask df.columns[mask] df['GA_std'] = df.iloc[:,mask].apply(fit_normal_to_hist,axis=1) df['ZL_std'] = df.iloc[:,mask].apply(ZL_std,axis=1) df['Hist_std'] = df.iloc[:,mask].apply(Hist_std,axis=1) df.head(10) dfList = [] writer = pd.ExcelWriter(out_data + 'PointForecasts.xlsx') years = [2014,2015,2016] quarters = [1,2,3,4] for year in years: for q in quarters: f = str(year) + 'Q' + str(q) fname = f + '.csv' if os.path.isfile(raw_data_path + '\\' + fname): raw_df = pd.read_csv(raw_data_path + '\\' + fname,header = True) # find the row where the growth expectations start dum = raw_df[raw_df['TARGET_PERIOD'] == 'GROWTH EXPECTATIONS; YEAR-ON-YEAR CHANGE IN REAL GDP'].index[0] mask_columns = ~raw_df.columns.str.contains('Unnamed') df = raw_df.iloc[0:dum-1,[0,1,2]] df['source'] = str(year) + '-Q' + str(q) df = df.rename(columns={'TARGET_PERIOD':'target','FCT_SOURCE':'id','POINT':'point'}) df = df[['source','target','id','point']] df['id'] = df['id'].astype('int') df['point'] = df['point'].astype('float32') df.to_excel(writer,f,index=False) dfList.append(df) writer.save() <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: bins =np.array([-2.0,-1.0,-0.5,0.0,0.5,1.0,1.5,2.0,2.5,3.0,3.5,4.0,5.0]) Step2: h = df.iloc[10,mask]
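The Hist_std helper in the code above estimates the standard deviation from binned percentages and applies Sheppard's correction; written out, with $h$ the common bin width, the correction used there is:

$$\sigma^2_{\text{Sheppard}} = \sigma^2_{\text{crude}} - \frac{h^2}{12}$$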
3,240
<ASSISTANT_TASK:> Python Code: from pprint import * import pyspark from pyspark import SparkConf, SparkContext sc = None print(pyspark.status) conf = (SparkConf() .setMaster("local") .setAppName("MyApp") .set("spark.executor.memory", "1g")) if sc is None: sc = SparkContext(conf = conf) print(type(sc)) print(sc) print(sc.applicationId) print(conf) conf_kv = conf.getAll() pprint(conf_kv) fl = sc.textFile("../data/muenchen.osm_node.json") for node in fl.collect()[0:2]: node_dict = eval(node) pprint(node_dict) lines = fl.filter(lambda line: "soemisch" in line) print(lines.count()) print(lines.collect()[0]) from pyspark.sql import SQLContext sqlContext = SQLContext(sc) nodeDF = sqlContext.read.json("../data/muenchen.osm_node.json") #print(nodeDF) nodeDF.printSchema() nodeDF.select("id","lat","lon","timestamp").show(10,True) #help(nodeDF.show) wayDF = sqlContext.read.json("../data/muenchen.osm_way.json") wayDF.printSchema() wayDF.select("id","tag","nd").show(10,True) def sepator(): print("===============================================================") #### Extract the node ID list from the given way's nd objects and build a filter string for the SQL query. def nodelist_way(nd_list): print("WayID:",nd_list["id"],"\tNode count:",len(nd_list["nd"])) ndFilter = "(" for nd in nd_list["nd"]: ndFilter = ndFilter + nd["ref"] + "," ndFilter = ndFilter.strip(',') + ")" print(ndFilter) return ndFilter #### Use the way's node IDs to pull the node records (lat/lon and other coordinate fields) from nodeDF. def nodecoord_way(nodeID_list): nodeDF.registerTempTable("nodeDF") nodeset = sqlContext.sql("select id,lat,lon,timestamp from nodeDF where nodeDF.id in " + nodeID_list) nodeset.show(10,True) for wayset in wayDF.select("id","nd").collect()[4:6]: ndFilter = nodelist_way(wayset) nodecoord_way(ndFilter) #pprint(nd_list["nd"]) #sepator() relationDF = sqlContext.read.json("../data/muenchen.osm_relation.json") #print(relationDF) relationDF.printSchema() relationDF.show(10,True) def myFunc(s): words = s.split() return len(words) #wc = fl.map(myFunc).collect() wc = fl.map(myFunc).collect() wc #df = sqlContext.read.format("com.databricks.spark.xml").option("rowTag", "result").load("../data/muenchen.osm") #df <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Configure SparkConf and create the SparkContext runtime object. Step2: Display Spark's configuration information. Step3: Text RDD operations in Spark. Step4: Keyword lookup over the RDD treated as plain text. Step5: Spark DataFrame operations. Step6: The Spark DataFrame select() operation; the show() method can limit the maximum number of records displayed. Step7: Read the OSM way table. Step8: Inspect the data in the way table. Step9: Build the geometry objects for the ways. Step10: Query the node information for several ways. Step11: Convert the latitude/longitude coordinates into a GeoJSON geometry representation and store it back into the way's geometry field. Step12: Search for a given keyword.
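Step11 describes converting the queried node coordinates into a GeoJSON geometry and storing it back on the way; the code shown above stops before that step, so the following is only a hedged sketch of what such a conversion could look like. The coordinate list and its (lon, lat) ordering are assumptions.

def way_to_geojson(coords):
    # coords is assumed to be a list of (lon, lat) tuples gathered from the
    # node records of one way.
    return {
        "type": "LineString",
        "coordinates": [[float(lon), float(lat)] for lon, lat in coords]
    }

print(way_to_geojson([(11.575, 48.137), (11.576, 48.138)]))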
3,241
<ASSISTANT_TASK:> Python Code: import numpy as np import holoviews as hv hv.notebook_extension('matplotlib') fractal = hv.Image(np.load('mandelbrot.npy')) ((fractal * hv.HLine(y=0)).hist() + fractal.sample(y=0)) %%opts Points [scaling_factor=50] Contours (color='w') dots = np.linspace(-0.45, 0.45, 19) layouts = {y: (fractal * hv.Points(fractal.sample([(i,y) for i in dots])) + fractal.sample(y=y) + hv.operation.threshold(fractal, level=np.percentile(fractal.sample(y=y)['z'], 90)) + hv.operation.contours(fractal, levels=[np.percentile(fractal.sample(y=y)['z'], 60)])) for y in np.linspace(-0.3, 0.3, 21)} hv.HoloMap(layouts, kdims=['Y']).collate().cols(2) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Fundamentally, a HoloViews object is just a thin wrapper around your data, with the data always being accessible in its native numerical format, but with the data displaying itself automatically whether alone or alongside or overlaid with other HoloViews objects as shown above. The actual rendering is done using a separate library like matplotlib or bokeh, but all of the HoloViews objects can be used without any plotting library available, so that you can easily create, save, load, and manipulate HoloViews objects from within your own programs for later analysis. HoloViews objects support arbitrarily high dimensions, using continuous, discrete, or categorical indexes and values, with flat or hierarchical organizations, and sparse or dense data formats. The objects can then be flexibly combined, selected, sliced, sorted, sampled, or animated, all by specifying what data you want to see rather than by writing plotting code. The goal is to put the plotting code into the background, as an implementation detail to be written once and reused often, letting you focus clearly on your data itself in daily work.
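As a small illustration of the "thin wrapper" idea described above (using the same hv.Image, sample and HLine calls as the code, with a random array standing in for real data), wrapping an array and composing objects declaratively looks like:

import numpy as np
import holoviews as hv
hv.notebook_extension('matplotlib')

img = hv.Image(np.random.rand(50, 50))   # the object wraps the array directly
# '*' overlays objects, '+' lays them out side by side; no plotting code needed.
(img * hv.HLine(y=0)) + img.sample(y=0)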
3,242
<ASSISTANT_TASK:> Python Code: import cvxpy as cp import numpy as np import scipy as scipy # Fix random number generator so we can repeat the experiment. np.random.seed(0) # Dimension of matrix. n = 10 # Number of samples, y_i N = 1000 # Create sparse, symmetric PSD matrix S A = np.random.randn(n, n) # Unit normal gaussian distribution. A[scipy.sparse.rand(n, n, 0.85).todense().nonzero()] = 0 # Sparsen the matrix. Strue = A.dot(A.T) + 0.05 * np.eye(n) # Force strict pos. def. # Create the covariance matrix associated with S. R = np.linalg.inv(Strue) # Create samples y_i from the distribution with covariance R. y_sample = scipy.linalg.sqrtm(R).dot(np.random.randn(n, N)) # Calculate the sample covariance matrix. Y = np.cov(y_sample) # The alpha values for each attempt at generating a sparse inverse cov. matrix. alphas = [10, 2, 1] # Empty list of result matrixes S Ss = [] # Solve the optimization problem for each value of alpha. for alpha in alphas: # Create a variable that is constrained to the positive semidefinite cone. S = cp.Variable(shape=(n,n), PSD=True) # Form the logdet(S) - tr(SY) objective. Note the use of a set # comprehension to form a set of the diagonal elements of S*Y, and the # native sum function, which is compatible with cvxpy, to compute the trace. # TODO: If a cvxpy trace operator becomes available, use it! obj = cp.Maximize(cp.log_det(S) - sum([(S*Y)[i, i] for i in range(n)])) # Set constraint. constraints = [cp.sum(cp.abs(S)) <= alpha] # Form and solve optimization problem prob = cp.Problem(obj, constraints) prob.solve(solver=cp.CVXOPT) if prob.status != cp.OPTIMAL: raise Exception('CVXPY Error') # If the covariance matrix R is desired, here is how it to create it. R_hat = np.linalg.inv(S.value) # Threshold S element values to enforce exact zeros: S = S.value S[abs(S) <= 1e-4] = 0 # Store this S in the list of results for later plotting. Ss += [S] print('Completed optimization parameterized by alpha = {}, obj value = {}'.format(alpha, obj.value)) import matplotlib.pyplot as plt # Show plot inline in ipython. %matplotlib inline # Plot properties. plt.rc('text', usetex=True) plt.rc('font', family='serif') # Create figure. plt.figure() plt.figure(figsize=(12, 12)) # Plot sparsity pattern for the true covariance matrix. plt.subplot(2, 2, 1) plt.spy(Strue) plt.title('Inverse of true covariance matrix', fontsize=16) # Plot sparsity pattern for each result, corresponding to a specific alpha. for i in range(len(alphas)): plt.subplot(2, 2, 2+i) plt.spy(Ss[i]) plt.title('Estimated inv. cov matrix, $\\alpha$={}'.format(alphas[i]), fontsize=16) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Solve for several $\alpha$ values Step2: Result plots
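The code above carries a TODO about a cvxpy trace operator; recent cvxpy releases do ship cp.trace, so the penalized log-likelihood objective can be written without the diagonal comprehension. A hedged sketch assuming cvxpy 1.x (where matrix products are written with @), reusing n, Y and alpha from the surrounding code:

import cvxpy as cp

# maximize  log det(S) - tr(S Y)   subject to   sum |S_ij| <= alpha
S = cp.Variable((n, n), PSD=True)
objective = cp.Maximize(cp.log_det(S) - cp.trace(S @ Y))
problem = cp.Problem(objective, [cp.sum(cp.abs(S)) <= alpha])
problem.solve()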
3,243
<ASSISTANT_TASK:> Python Code: from __future__ import print_function, division % matplotlib inline import warnings warnings.filterwarnings('ignore') import math import numpy as np from thinkbayes2 import Pmf, Cdf, Suite, Joint, EvalNormalPdf import thinkplot import pandas as pd import matplotlib.pyplot as plt df = pd.read_csv('ageVsHeight.csv', skiprows=0, delimiter='\t') df ages = np.array(df['age']) heights = np.array(df['height']) plt.plot(ages, heights, 'o', label='Original data', markersize=10) def leastSquares(x, y): leastSquares takes in two arrays of values. Then it returns the slope and intercept of the least squares of the two. Args: x (numpy array): numpy array of values. y (numpy array): numpy array of values. Returns: slope, intercept (tuple): returns a tuple of floats. A = np.vstack([x, np.ones(len(x))]).T slope, intercept = np.linalg.lstsq(A, y)[0] return slope, intercept slope, intercept = leastSquares(ages, heights) print(slope, intercept) alpha_range = .03 * intercept beta_range = .05 * slope plt.plot(ages, heights, 'o', label='Original data', markersize=10) plt.plot(ages, slope*ages + intercept, 'r', label='Fitted line') plt.legend() plt.show() alphas = np.linspace(intercept - alpha_range, intercept + alpha_range, 20) betas = np.linspace(slope - beta_range, slope + beta_range, 20) sigmas = np.linspace(2, 4, 15) hypos = ((alpha, beta, sigma) for alpha in alphas for beta in betas for sigma in sigmas) data = [(age, height) for age in ages for height in heights] class leastSquaresHypos(Suite, Joint): def Likelihood(self, data, hypo): Likelihood calculates the probability of a particular line (hypo) based on data (ages Vs height) of our original dataset. This is done with a normal pmf as each hypo also contains a sigma. Args: data (tuple): tuple that contains ages (float), heights (float) hypo (tuple): intercept (float), slope (float), sigma (float) Returns: P(data|hypo) intercept, slope, sigma = hypo total_likelihood = 1 for age, measured_height in data: hypothesized_height = slope * age + intercept error = measured_height - hypothesized_height total_likelihood *= EvalNormalPdf(error, mu=0, sigma=sigma) return total_likelihood LeastSquaresHypos = leastSquaresHypos(hypos) for item in data: LeastSquaresHypos.Update([item]) LeastSquaresHypos[LeastSquaresHypos.MaximumLikelihood()] marginal_intercepts = LeastSquaresHypos.Marginal(0) thinkplot.hist(marginal_intercepts) marginal_slopes = LeastSquaresHypos.Marginal(1) thinkplot.hist(marginal_slopes) marginal_sigmas = LeastSquaresHypos.Marginal(2) thinkplot.hist(marginal_sigmas) def getHeights(hypo_samples, random_months): getHeights takes in random hypos and random months and returns the corresponding random height random_heights = np.zeros(len(random_months)) for i in range(len(random_heights)): intercept = hypo_samples[i][0] slope = hypo_samples[i][1] sigma = hypo_samples[i][2] month = random_months[i] random_heights[i] = np.random.normal((slope * month + intercept), sigma, 1) return random_heights def getRandomData(start_month, end_month, n, LeastSquaresHypos): start_month (int): Starting x range of our data end_month (int): Ending x range of our data n (int): Number of samples LeastSquaresHypos (Suite): Contains the hypos we want to sample random_hypos = LeastSquaresHypos.Sample(n) random_months = np.random.uniform(start_month, end_month, n) random_heights = getHeights(random_hypos, random_months) return random_months, random_heights num_samples = 10000 random_months, random_heights = getRandomData(18, 40, num_samples, 
LeastSquaresHypos) num_buckets = 70 #num_buckets^2 is actual number # create horizontal and vertical linearly spaced ranges as buckets. hori_range, hori_step = np.linspace(18, 40 , num_buckets, retstep=True) vert_range, vert_step = np.linspace(65, 100, num_buckets, retstep=True) hori_step = hori_step / 2 vert_step = vert_step / 2 # store each bucket as a tuple in a the buckets dictionary. buckets = dict() keys = [(hori, vert) for hori in hori_range for vert in vert_range] # set each bucket as empty for key in keys: buckets[key] = 0 # loop through the randomly sampled data for month, height in zip(random_months, random_heights): # check each bucket and see if randomly sampled data for key in buckets: if month > key[0] - hori_step and month < key[0] + hori_step: if height > key[1] - vert_step and height < key[1] + vert_step: buckets[key] += 1 break # can only fit in a single bucket pcolor_months = [] pcolor_heights = [] pcolor_intensities = [] for key in buckets: pcolor_months.append(key[0]) pcolor_heights.append(key[1]) pcolor_intensities.append(buckets[key]) print(len(pcolor_months), len(pcolor_heights), len(pcolor_intensities)) plt.plot(random_months, random_heights, 'o', label='Random Sampling') plt.plot(ages, heights, 'o', label='Original data', markersize=10) plt.plot(ages, slope*ages + intercept, 'r', label='Fitted line') # plt.legend() plt.show() def append_to_file(path, data): append_to_file appends a line of data to specified file. Then adds new line Args: path (string): the file path Return: VOID with open(path, 'a') as file: file.write(data + '\n') def delete_file_contents(path): delete_file_contents deletes the contents of a file Args: path: (string): the file path Return: VOID with open(path, 'w'): pass def intensityCSV(x, y, z): file_name = 'intensityData.csv' delete_file_contents(file_name) for xi, yi, zi in zip(x, y, z): append_to_file(file_name, "{}, {}, {}".format(xi, yi, zi)) def monthHeightCSV(ages, heights): file_name = 'monthsHeights.csv' delete_file_contents(file_name) for month, height in zip(ages, heights): append_to_file(file_name, "{}, {}".format(month, height)) def fittedLineCSV(ages, slope, intercept): file_name = 'fittedLineCSV.csv' delete_file_contents(file_name) for age in ages: append_to_file(file_name, "{}, {}".format(age, slope*age + intercept)) def makeCSVData(pcolor_months, pcolor_heights, pcolor_intensities, ages, heights, slope, intercept): intensityCSV(pcolor_months, pcolor_heights, pcolor_intensities) monthHeightCSV(ages, heights) fittedLineCSV(ages, slope, intercept) makeCSVData(pcolor_months, pcolor_heights, pcolor_intensities, ages, heights, slope, intercept) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: From Step2: Next, let's create vectors of our ages and heights. Step3: Now let's visualize our data to make sure that linear regression is appropriate for predicting its distributions. Step5: Our data looks pretty linear. We can now calculate the slope and intercept of the line of least squares. We abstract numpy's least squares function using a function of our own. Step6: To use our leastSquares function, we input our age and height vectors as our x and y arguments. Next, let's call leastSquares to get the slope and intercept, and use the slope and intercept to calculate the size of our alpha (intercept) and beta (slope) ranges. Step7: Now we can visualize the slope and intercept on the same plot as the data to make sure it is working correctly. Step8: Looks great! Based on the plot above, we are confident that bayesian linear regression will give us reasonable distributions for predicting future values. Now we need to create our hypotheses. Each hypothesis will consist of a range of intercepts (alphas), slopes (betas) and sigmas. Step10: Next make a least squares class that inherits from Suite and Joint where likelihood is calculated based on error from data. The likelihood function will depend on the data and normal distributions for each hypothesis. Step11: Now instantiate a LeastSquaresHypos suite with our hypos. Step12: And update the suite with our data. Step13: We can now plot marginal distributions to visualize the probability distribution for each of our hypotheses for intercept, slope, and sigma values. Our hypotheses were carefully picked based on ranges that we found worked well, which is why all the intercepts, slopes, and sigmas that are important to this dataset are included in our hypotheses. Step16: Next, we want to sample random data from our hypotheses. To do this, we will make two functions, getHeights and getRandomData. getRandomData calls getHeights to obtain random height values. Step17: Now we take 10000 random samples of pairs of months and heights. Here we want at least 10000 items so that we can get very smooth sampling. Step18: Next, we want to get the intensity of the data at locations. We do that by adding the randomly sampled values to buckets. This gives us intensity values for a grid of pixels in our sample range. Step21: Since density plotting is much simpler in Mathematica, we have written these funcitons to export all our data to csv files and plot them in Mathematica.
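The likelihood step described above (Step 10) amounts to scoring each (intercept, slope, sigma) hypothesis by the normal density of its residuals. Here is a self-contained sketch of that computation with plain numpy/scipy, independent of the thinkbayes2 classes used in the notebook; the sample ages and heights are made up purely for illustration.

import numpy as np
from scipy.stats import norm

def line_log_likelihood(intercept, slope, sigma, ages, heights):
    # log P(data | hypothesis): sum of log-normal densities of the residuals,
    # computed in log space for numerical stability.
    residuals = heights - (slope * ages + intercept)
    return norm.logpdf(residuals, loc=0, scale=sigma).sum()

ages = np.array([18.0, 24.0, 30.0, 36.0])
heights = np.array([76.0, 80.0, 85.0, 89.0])
print(line_log_likelihood(intercept=64.0, slope=0.7, sigma=3.0, ages=ages, heights=heights))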
3,244
<ASSISTANT_TASK:> Python Code: # from typing import Callable, Sequence # used ? import flax from flax import linen as nn # Simple module with matmul layer. Note that we could build this in many # different ways using the `scope` for parameter handling. class Matmul: def __init__(self, features): self.features = features def kernel_init(self, key, shape): return jax.random.normal(key, shape) def __call__(self, scope, x): kernel = scope.param( "kernel", self.kernel_init, (x.shape[1], self.features) ) return x @ kernel class Model: def __init__(self, features): self.matmuls = [Matmul(f) for f in features] def __call__(self, scope, x): x = x.reshape([len(x), -1]) for i, matmul in enumerate(self.matmuls): x = scope.child(matmul, f"matmul_{i + 1}")(x) if i < len(self.matmuls) - 1: x = jax.nn.relu(x) x = jax.nn.log_softmax(x) return x model = Model([ds_info.features["label"].num_classes]) y, variables = flax.core.init(model)(key, train_images[:1]) assert (y == flax.core.apply(model)(variables, train_images[:1])).all() # YOUR ACTION REQUIRED: # Check out the parameter structure, try adding/removing "layers" and see how it # changes ##-snip model = Model([50, ds_info.features["label"].num_classes]) _, variables = flax.core.init(model)(key, train_images[:1]) jax.tree_map(jnp.shape, variables) # YOUR ACTION REQUIRED: # Redefine loss_fun(), update_step(), and train() from above to train the new # model. ##-snip @jax.jit def update_step(variables, inputs, targets): def loss_fun(variables): logits = flax.core.apply(model)(variables, inputs) logprobs = logits - jax.scipy.special.logsumexp( logits, axis=-1, keepdims=True ) return -logprobs[jnp.arange(len(targets)), targets].mean() loss, grads = jax.value_and_grad(loss_fun)(variables) updated_variables = jax.tree_multimap( lambda variable, grad: variable - 0.05 * grad, variables, grads ) return updated_variables, loss def train(variables, steps, batch_size=128): losses = [] steps_per_epoch = len(train_images) // batch_size for step in range(steps): i0 = (step % steps_per_epoch) * batch_size variables, loss = update_step( variables, train_images[i0 : i0 + batch_size], train_labels[i0 : i0 + batch_size], ) losses.append(float(loss)) return variables, jnp.array(losses) learnt_variables, losses = train(variables, steps=1_000) plt.plot(losses) print("final loss:", np.mean(losses[-100])) # Reimplementation of above model using the Linen API. class Model(nn.Module): num_classes: int def setup(self): self.dense = nn.Dense(self.num_classes) def __call__(self, x): x = x.reshape([len(x), -1]) x = self.dense(x) x = nn.log_softmax(x) return x model = Model(num_classes=ds_info.features["label"].num_classes) variables = model.init(jax.random.PRNGKey(0), train_images[:1]) jax.tree_map(jnp.shape, variables) # YOUR ACTION REQUIRED: # 1. Rewrite above model using the @nn.compact notation. # 2. Extend the model to use additional layers, see e.g. 
# convolutions in # http://google3/third_party/py/flax/linen/linear.py ##-snip class Model(nn.Module): num_classes: int @nn.compact def __call__(self, x): x = nn.Conv(features=32, kernel_size=(3, 3))(x) x = nn.relu(x) x = nn.avg_pool(x, window_shape=(2, 2), strides=(2, 2)) x = nn.Conv(features=64, kernel_size=(3, 3))(x) x = nn.relu(x) x = nn.avg_pool(x, window_shape=(2, 2), strides=(2, 2)) x = x.reshape((x.shape[0], -1)) # flatten x = nn.Dense(features=256)(x) x = nn.relu(x) x = nn.Dense(features=10)(x) x = nn.log_softmax(x) return x model = Model(ds_info.features["label"].num_classes) variables = model.init(key, train_images[:1]) jax.tree_map(jnp.shape, variables) # Reimplementation of training loop using a Flax optimizer. @jax.jit def update_step_optim(optim, inputs, targets): def loss_fun(params): logits = model.apply(dict(params=params), inputs) logprobs = logits - jax.scipy.special.logsumexp( logits, axis=-1, keepdims=True ) return -logprobs[jnp.arange(len(targets)), targets].mean() loss, grads = jax.value_and_grad(loss_fun)(optim.target) return optim.apply_gradient(grads), loss def train_optim(optim, steps, batch_size=128): losses = [] steps_per_epoch = len(train_images) // batch_size for step in range(steps): i0 = (step % steps_per_epoch) * batch_size optim, loss = update_step_optim( optim, train_images[i0 : i0 + batch_size], train_labels[i0 : i0 + batch_size], ) losses.append(float(loss)) return optim, jnp.array(losses) optim = flax.optim.adam.Adam(learning_rate=0.01).create(variables["params"]) learnt_optim, losses = train_optim(optim, steps=1_000) plt.plot(losses) print("final loss:", np.mean(losses[-100])) # Re-evaluate accuracy. ( model.apply(dict(params=learnt_optim.target), test_images).argmax(axis=-1) == test_labels ).mean() # Let's add batch norm! # I'm not saying it's a good idea here, but it will allow us study the changes # we need to make for models that have state. class Model(nn.Module): num_classes: int @nn.compact def __call__(self, x, *, train): x = x.reshape([len(x), -1]) x = nn.BatchNorm(use_running_average=not train)(x) x = nn.Dense(self.num_classes)(x) x = nn.log_softmax(x) return x model = Model(num_classes=ds_info.features["label"].num_classes) variables = model.init(jax.random.PRNGKey(0), train_images[:1], train=True) jax.tree_map(jnp.shape, variables) # Note the new "batch_stats" collection ! # YOUR ACTION REQUIRED: # Check below code and add comments for every change compared to the model above # without state. 
@jax.jit def update_step_optim(optim, batch_stats, inputs, targets): def loss_fun(params): logits, mutated_state = model.apply( dict(params=params, batch_stats=batch_stats), inputs, mutable="batch_stats", train=True, ) logprobs = logits - jax.scipy.special.logsumexp( logits, axis=-1, keepdims=True ) return ( -logprobs[jnp.arange(len(targets)), targets].mean(), variables["batch_stats"], ) (loss, state), grads = jax.value_and_grad(loss_fun, has_aux=True)( optim.target ) return optim.apply_gradient(grads), batch_stats, loss def train_optim(optim, batch_stats, steps, batch_size=128): losses = [] steps_per_epoch = len(train_images) // batch_size for step in range(steps): i0 = (step % steps_per_epoch) * batch_size optim, batch_stats, loss = update_step_optim( optim, batch_stats, train_images[i0 : i0 + batch_size], train_labels[i0 : i0 + batch_size], ) losses.append(float(loss)) return optim, batch_stats, jnp.array(losses) optim = flax.optim.adam.Adam(learning_rate=0.01).create(variables["params"]) learnt_optim, batch_stats, losses = train_optim( optim, variables["batch_stats"], steps=1_000 ) plt.plot(losses) print("final loss:", np.mean(losses[-100])) # YOUR ACTION REQUIRED: # Make predictions with above model with state ##-snip ( model.apply( dict(params=learnt_optim.target, batch_stats=batch_stats), test_images, train=False, ).argmax(axis=-1) == test_labels ).mean() # YOUR ACTION REQURIED: # Store the Colab in your personal drive and modify it to use the dataset from # above. # While this might sound boring, you will learn the following things: # - how to load files in public Colab from Github, modify them in the UI and # optionally store them on your personal Google Drive. # - how to use inline TensorBoard on public Colab and export it to tensorboard.dev <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Functional core Step2: Stateless Linen module Step3: Linen module with state Step4: Modify MNIST example
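Because the stateful-module step above is the easiest to get wrong, here is a minimal sketch of the Linen state-handling pattern in isolation: a BatchNorm-only module whose 'batch_stats' collection must be declared mutable during training. The module and variable names are illustrative, not part of the original exercise.

import jax
import jax.numpy as jnp
from flax import linen as nn

class Normalized(nn.Module):
    @nn.compact
    def __call__(self, x, train: bool):
        # BatchNorm keeps running statistics in the 'batch_stats' collection.
        return nn.BatchNorm(use_running_average=not train)(x)

model = Normalized()
x = jnp.ones((4, 8))
variables = model.init(jax.random.PRNGKey(0), x, train=True)

# Training: the state collection is declared mutable and returned with the output.
y, new_state = model.apply(variables, x, train=True, mutable=['batch_stats'])

# Evaluation: the stored statistics are used and nothing is mutated.
y_eval = model.apply({**variables, **new_state}, x, train=False)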
3,245
<ASSISTANT_TASK:> Python Code: # Run some setup code for this notebook. import random import numpy as np from cs231n.data_utils import load_CIFAR10 import matplotlib.pyplot as plt import numpy.linalg as la import seaborn as sns import itertools import pandas as pd sns.set_style('whitegrid') # create a palette generator palette = itertools.cycle(sns.color_palette()) # This is a bit of magic to make matplotlib figures appear inline in the notebook # rather than in a new window. %matplotlib inline plt.rcParams['figure.figsize'] = (12.0, 12.0) # set default size of plots plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' # Some more magic so that the notebook will reload external python modules; # see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython %load_ext autoreload %autoreload 2 # Load the raw CIFAR-10 data. cifar10_dir = 'cs231n/datasets/cifar-10-batches-py' X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir) # As a sanity check, we print out the size of the training and test data. print 'Training data shape: ', X_train.shape print 'Training labels shape: ', y_train.shape print 'Test data shape: ', X_test.shape print 'Test labels shape: ', y_test.shape # Visualize some examples from the dataset. # We show a few examples of training images from each class. classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'] num_classes = len(classes) samples_per_class = 7 for y, cls in enumerate(classes): idxs = np.flatnonzero(y_train == y) idxs = np.random.choice(idxs, samples_per_class, replace=False) for i, idx in enumerate(idxs): plt_idx = i * num_classes + y + 1 plt.subplot(samples_per_class, num_classes, plt_idx) plt.imshow(X_train[idx].astype('uint8')) plt.axis('off') if i == 0: plt.title(cls) plt.show() # Subsample the data for more efficient code execution in this exercise num_training = 5000 mask = range(num_training) X_train = X_train[mask] y_train = y_train[mask] num_test = 500 mask = range(num_test) X_test = X_test[mask] y_test = y_test[mask] # Reshape the image data into rows X_train = np.reshape(X_train, (X_train.shape[0], -1)) X_test = np.reshape(X_test, (X_test.shape[0], -1)) print X_train.shape, X_test.shape from cs231n.classifiers import KNearestNeighbor # Create a kNN classifier instance. # Remember that training a kNN classifier is a noop: # the Classifier simply remembers the data and does no further processing classifier = KNearestNeighbor() classifier.train(X_train, y_train) # Open cs231n/classifiers/k_nearest_neighbor.py and implement # compute_distances_two_loops. # Test your implementation: dists = classifier.compute_distances_two_loops(X_test) dists2 = classifier.compute_distances_one_loop(X_test) dists3 = classifier.compute_distances_no_loops(X_test) distances = [dists, dists2, dists3] names = ['two loop', 'one loop', 'no loop'] for distance, name in zip(distances, names): print(name) plt.imshow(dists, interpolation='none') plt.show() # Now implement the function predict_labels and run the code below: # We use k = 1 (which is Nearest Neighbor). y_test_pred = classifier.predict_labels(dists, k=1) # Compute and print the fraction of correctly predicted examples num_correct = np.sum(y_test_pred == y_test) accuracy = float(num_correct) / num_test print('Got {} / {} correct => accuracy: {:.3f}'.format(num_correct, num_test, accuracy)) # Now implement the function predict_labels and run the code below: # We use k = 1 (which is Nearest Neighbor). 
y_test_pred = classifier.predict_labels(dists3, k=1) # Compute and print the fraction of correctly predicted examples num_correct = np.sum(y_test_pred == y_test) accuracy = float(num_correct) / num_test print('Got {} / {} correct => accuracy: {:.3f}'.format(num_correct, num_test, accuracy)) y_test_pred = classifier.predict_labels(dists, k=5) num_correct = np.sum(y_test_pred == y_test) accuracy = float(num_correct) / num_test print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy) # Now lets speed up distance matrix computation by using partial vectorization # with one loop. Implement the function compute_distances_one_loop and run the # code below: dists_one = classifier.compute_distances_one_loop(X_test) # To ensure that our vectorized implementation is correct, we make sure that it # agrees with the naive implementation. There are many ways to decide whether # two matrices are similar; one of the simplest is the Frobenius norm. In case # you haven't seen it before, the Frobenius norm of two matrices is the square # root of the squared sum of differences of all elements; in other words, reshape # the matrices into vectors and compute the Euclidean distance between them. difference = np.linalg.norm(dists - dists_one, ord='fro') print 'Difference was: %f' % (difference, ) if difference < 0.001: print 'Good! The distance matrices are the same' else: print 'Uh-oh! The distance matrices are different' # Now implement the fully vectorized version inside compute_distances_no_loops # and run the code dists_two = classifier.compute_distances_no_loops(X_test) # check that the distance matrix agrees with the one we computed before: difference = np.linalg.norm(dists - dists_two, ord='fro') print 'Difference was: %f' % (difference, ) if difference < 0.001: print 'Good! The distance matrices are the same' else: print 'Uh-oh! The distance matrices are different' # Let's compare how fast the implementations are: def time_function(f, *args): Call a function f with args and return the time (in seconds) that it took to execute. import time tic = time.time() f(*args) toc = time.time() return toc - tic two_loop_time = time_function(classifier.compute_distances_two_loops, X_test) print 'Two loop version took %f seconds' % two_loop_time one_loop_time = time_function(classifier.compute_distances_one_loop, X_test) print 'One loop version took %f seconds' % one_loop_time no_loop_time = time_function(classifier.compute_distances_no_loops, X_test) print 'No loop version took %f seconds' % no_loop_time # you should see significantly faster performance with the fully vectorized implementation def run_knn(X_train, y_train, X_validation, y_validation, k): # initalize KNN classifer = KNearestNeighbor() # train the classifer on training set classifier.train(X_train, y_train) # get distance for X validation dist = classifier.compute_distances_no_loops(X_validation) # make prediction based on k y_pred = classifier.predict_labels(dist, k=k) # get the number of correct predictions num_correct = np.sum(y_pred == y_validation) # score the classifer accuracy = float(num_correct)/len(y_validation) return accuracy num_folds = 5 k_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100] X_train_folds = [] y_train_folds = [] n = X_train.shape[0] ################################################################################ # TODO: # # Split up the training data into folds. 
After splitting, X_train_folds and # # y_train_folds should each be lists of length num_folds, where # # y_train_folds[i] is the label vector for the points in X_train_folds[i]. # # Hint: Look up the numpy array_split function. # ################################################################################ indices = range(n) cv_indices = np.array_split(np.array(indices), num_folds) ################################################################################ # END OF YOUR CODE # ################################################################################ # A dictionary holding the accuracies for different values of k that we find # when running cross-validation. After running cross-validation, # k_to_accuracies[k] should be a list of length num_folds giving the different # accuracy values that we found when using that value of k. k_to_accuracies = {} ################################################################################ # TODO: # # Perform k-fold cross validation to find the best value of k. For each # # possible value of k, run the k-nearest-neighbor algorithm num_folds times, # # where in each case you use all but one of the folds as training data and the # # last fold as a validation set. Store the accuracies for all fold and all # # values of k in the k_to_accuracies dictionary. # ################################################################################ for k in k_choices: k_to_accuracies[k] = [run_knn(X_train[np.setdiff1d(cv_indices, subset)], y_train[np.setdiff1d(cv_indices, subset)], X_train[subset], y_train[subset], k) for subset in cv_indices] ################################################################################ # END OF YOUR CODE # ################################################################################ # Print out the computed accuracies for k in sorted(k_to_accuracies): for accuracy in k_to_accuracies[k]: print 'k = %d, accuracy = %f' % (k, accuracy) # plot the raw observations plt.figure(figsize=(14,6)) for k in k_choices: accuracies = k_to_accuracies[k] plt.scatter([k] * len(accuracies), accuracies, label=k, c=next(palette), lw=.25) # plot the trend line with error bars that correspond to standard deviation accuracies_mean = np.array([np.mean(v) for k,v in sorted(k_to_accuracies.items())]) accuracies_std = np.array([np.std(v) for k,v in sorted(k_to_accuracies.items())]) plt.errorbar(k_choices, accuracies_mean, yerr=accuracies_std) plt.title('Cross-validation on k') plt.xlabel('k') plt.ylabel('Cross-validation accuracy') plt.legend() plt.show() # eye ball the variance with a heatmap k_to_accuracies_df = pd.DataFrame(k_to_accuracies) plt.figure(figsize=(10,4)) sns.heatmap(k_to_accuracies_df, annot=True, linecolor='white', linewidths=.005) plt.ylabel("Fold") plt.xlabel("K") plt.title("Cross-validation accuracy"); print(k_to_accuracies_df.describe().T) # Based on the cross-validation results above, choose the best value for k, # retrain the classifier using all the training data, and test it on the test # data. You should be able to get above 28% accuracy on the test data. best_k = 10 classifier = KNearestNeighbor() classifier.train(X_train, y_train) y_test_pred = classifier.predict(X_test, k=best_k) # Compute and display the accuracy num_correct = np.sum(y_test_pred == y_test) accuracy = float(num_correct) / num_test print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: We would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps Step2: Inline Question #1 Step3: You should expect to see approximately 27% accuracy. Now lets try out a larger k, say k = 5 Step5: You should expect to see a slightly better performance than with k = 1. Step6: Cross-validation
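The fully vectorized distance computation referenced above lives in the external k_nearest_neighbor.py file and is only called from the notebook, so for reference here is a hedged sketch of the standard trick such an implementation typically uses: expanding ||a - b||^2 = ||a||^2 - 2 a.b + ||b||^2 and broadcasting over all test/train pairs. The function name is illustrative, not the assignment's actual code.

import numpy as np

def l2_distances_no_loops(X_test, X_train):
    # Returns a (num_test, num_train) matrix of Euclidean distances with no Python loops.
    test_sq = np.sum(X_test ** 2, axis=1)[:, np.newaxis]    # (num_test, 1)
    train_sq = np.sum(X_train ** 2, axis=1)[np.newaxis, :]  # (1, num_train)
    cross = X_test.dot(X_train.T)                           # (num_test, num_train)
    sq = np.maximum(test_sq - 2 * cross + train_sq, 0)      # clamp tiny negative round-off
    return np.sqrt(sq)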
3,246
<ASSISTANT_TASK:> Python Code: import subprocess import numpy as np from IPython.display import Image PI = np.pi POV_SCENE_FILE = "hopf_fibration.pov" POV_DATA_FILE = "torus-data.inc" POV_EXE = "povray" COMMAND = "{} +I{} +W500 +H500 +Q11 +A0.01 +R2".format(POV_EXE, POV_SCENE_FILE) IMG = POV_SCENE_FILE[:-4] + ".png" def hopf_inverse(phi, psi, theta): Inverse map of Hopf fibration. It's a circle in 4d parameterized by theta. return np.array([np.cos((theta + psi) / 2) * np.sin(phi / 2), np.sin((theta + psi) / 2) * np.sin(phi / 2), np.cos((theta - psi) / 2) * np.cos(phi / 2), np.sin((theta - psi) / 2) * np.cos(phi / 2)]) def stereo_projection(v): Stereographic projection of a 4d vector with pole at (0, 0, 0, 1). v = normalize(v) x, y, z, w = v return np.array([x, y, z]) / (1 + 1e-8 - w) def normalize(v): Normalize a vector. return np.array(v) / np.linalg.norm(v) def norm2(v): Return squared Euclidean norm of a vector. return np.dot(v, v) def get_circle(A, B, C): Compute the center, radius and normal of the circle passes through 3 given points (A, B, C) in 3d space. See "https://en.wikipedia.org/wiki/Circumscribed_circle" a = A - C b = B - C axb = np.cross(a, b) center = C + np.cross((norm2(a) * b - norm2(b) * a), axb) / (2 * norm2(axb)) radius = np.sqrt(norm2(a) * norm2(b) * norm2(a - b) / (4 * norm2(axb))) normal = normalize(axb) return center, radius, normal def pov_vector(v): Convert a vector to POV-Ray format. return "<{}>".format(", ".join([str(x) for x in v])) def pov_matrix(M): Convert a 3x3 matrix to a POV-Ray 3x3 array. return "array[3]{{{}}}\n".format(", ".join([pov_vector(v) for v in M])) # write a test to see if they work as expected: v = (1, 0, 0) print("POV-Ray format of {}: {}".format(v, pov_vector(v))) M = np.eye(3) print("POV-Ray format of {}: {}".format(M, pov_matrix(M))) def transform_matrix(v): Return a 3x3 orthogonal matrix that transforms y-axis (0, 1, 0) to v. This matrix is not uniquely determined, we simply choose one with a simple form. y = normalize(v) a, b, c = y if a == 0: x = [1, 0, 0] else: x = normalize([-b, a, 0]) z = np.cross(x, y) return np.array([x, y, z]) def export_fiber(phi, psi, color): Export the data of a fiber to POV-Ray format. A, B, C = [stereo_projection(hopf_inverse(phi, psi, theta)) for theta in (0, PI/2, PI)] center, radius, normal = get_circle(A, B, C) matrix = transform_matrix(normal) return "Torus({}, {}, {}, {})\n".format(pov_vector(center), radius, pov_matrix(matrix), pov_vector(color)) def draw_random_fibers(N): Draw fibers of some random points on the 2-sphere. `N` is the number of fibers. phi_range = (PI / 6, PI * 4 / 5) psi_range = (0, 2 * PI) phi_list = np.random.random(N) * (phi_range[1] - phi_range[0]) + phi_range[0] psi_list = np.random.random(N) * (psi_range[1] - psi_range[0]) + psi_range[0] with open(POV_DATA_FILE, "w") as f: for phi, psi in zip(phi_list, psi_list): color = np.random.random(3) f.write(export_fiber(phi, psi, color)) subprocess.call(COMMAND, shell=True) draw_random_fibers(N=200) Image(IMG) def draw_flower(petals=7, fattness=0.5, amp=-PI/7, lat=PI/2, num_fibers=200): parameters ---------- petals: controls the number of petals. fattness: controls the fattness of the petals. amp: controls the amplitude of the polar angle range. lat: controls latitude of the flower. 
with open(POV_DATA_FILE, "w") as f: for t in np.linspace(0, 1, num_fibers): phi = amp * np.sin(petals * 2 * PI * t) + lat psi = PI * 2 * t + fattness * np.cos(petals * 2 * PI * t) color = np.random.random(3) f.write(export_fiber(phi, psi, color)) subprocess.call(COMMAND, shell=True) draw_flower() Image(IMG) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step3: Hopf inverse map and stereographic projection Step7: Circle passes through three points Step10: Convert vector/matrix to POV-Ray format Step12: Orient a circle in 3d space Step14: Export data to POV-Ray Step16: Let's draw some examples! Step18: And also a flower pattern
3,247
<ASSISTANT_TASK:> Python Code: import itertools import os import sys os.environ['OPENBLAS_NUM_THREADS'] = '1' import numpy as np import pandas as pd from scipy import sparse import content_wmf import batched_inv_joblib import rec_eval DATA_DIR = '/hdd2/dawen/data/ml-20m/pro/' unique_uid = list() with open(os.path.join(DATA_DIR, 'unique_uid.txt'), 'r') as f: for line in f: unique_uid.append(line.strip()) unique_sid = list() with open(os.path.join(DATA_DIR, 'unique_sid.txt'), 'r') as f: for line in f: unique_sid.append(line.strip()) n_items = len(unique_sid) n_users = len(unique_uid) print n_users, n_items def load_data(csv_file, shape=(n_users, n_items)): tp = pd.read_csv(csv_file) timestamps, rows, cols = np.array(tp['timestamp']), np.array(tp['uid']), np.array(tp['sid']) seq = np.concatenate((rows[:, None], cols[:, None], np.ones((rows.size, 1), dtype='int'), timestamps[:, None]), axis=1) data = sparse.csr_matrix((np.ones_like(rows), (rows, cols)), dtype=np.int16, shape=shape) return data, seq train_data, train_raw = load_data(os.path.join(DATA_DIR, 'train.csv')) vad_data, vad_raw = load_data(os.path.join(DATA_DIR, 'validation.csv')) num_factors = 100 num_iters = 50 batch_size = 1000 n_jobs = 4 lam_theta = lam_beta = 1e-5 best_ndcg = -np.inf U_best = None V_best = None best_alpha = 0 for alpha in [2, 5, 10, 30, 50]: S = content_wmf.linear_surplus_confidence_matrix(train_data, alpha=alpha) U, V, vad_ndcg = content_wmf.factorize(S, num_factors, vad_data=vad_data, num_iters=num_iters, init_std=0.01, lambda_U_reg=lam_theta, lambda_V_reg=lam_beta, dtype='float32', random_state=98765, verbose=False, recompute_factors=batched_inv_joblib.recompute_factors_batched, batch_size=batch_size, n_jobs=n_jobs) if vad_ndcg > best_ndcg: best_ndcg = vad_ndcg U_best = U.copy() V_best = V.copy() best_alpha = alpha print best_alpha, best_ndcg test_data, _ = load_data(os.path.join(DATA_DIR, 'test.csv')) test_data.data = np.ones_like(test_data.data) # alpha = 10 gives the best validation performance print 'Test Recall@20: %.4f' % rec_eval.recall_at_k(train_data, test_data, U_best, V_best, k=20, vad_data=vad_data) print 'Test Recall@50: %.4f' % rec_eval.recall_at_k(train_data, test_data, U_best, V_best, k=50, vad_data=vad_data) print 'Test NDCG@100: %.4f' % rec_eval.normalized_dcg_at_k(train_data, test_data, U_best, V_best, k=100, vad_data=vad_data) print 'Test MAP@100: %.4f' % rec_eval.map_at_k(train_data, test_data, U_best, V_best, k=100, vad_data=vad_data) np.savez('WMF_K100_ML20M.npz', U=U_best, V=V_best) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Load pre-processed data Step2: Train the model
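The factorization itself is delegated above to the external content_wmf module, whose internals are not shown. Purely for orientation, here is a hedged sketch of the classic weighted-ALS update for a single user factor (in the style of Hu, Koren & Volinsky) under linear confidence weighting; this is an illustrative reimplementation under stated assumptions, not the library's actual code.

import numpy as np

def update_user_factor(c_u, p_u, V, lam):
    # c_u: confidence vector (e.g. 1 + alpha * play_count), shape (n_items,)
    # p_u: binary preference vector, shape (n_items,)
    # V:   item factor matrix, shape (n_items, k)
    # Solves (V^T C_u V + lam * I) u = V^T C_u p_u for this user's latent vector u.
    k = V.shape[1]
    A = (V * c_u[:, None]).T @ V + lam * np.eye(k)
    b = V.T @ (c_u * p_u)
    return np.linalg.solve(A, b)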
3,248
<ASSISTANT_TASK:> Python Code: # Authors: Marijn van Vliet <w.m.vanvliet@gmail.com> # Ezequiel Mikulan <e.mikulan@gmail.com> # Manorama Kadwani <manorama.kadwani@gmail.com> # # License: BSD-3-Clause import os import shutil import mne data_path = mne.datasets.sample.data_path() subjects_dir = data_path / 'subjects' bem_dir = subjects_dir / 'sample' / 'bem' / 'flash' surf_dir = subjects_dir / 'sample' / 'surf' # Put the converted surfaces in a separate 'conv' folder conv_dir = subjects_dir / 'sample' / 'conv' os.makedirs(conv_dir, exist_ok=True) # Load the inner skull surface and create a problem # The metadata is empty in this example. In real study, we want to write the # original metadata to the fixed surface file. Set read_metadata=True to do so. coords, faces = mne.read_surface(bem_dir / 'inner_skull.surf') coords[0] *= 1.1 # Move the first vertex outside the skull # Write the inner skull surface as an .obj file that can be imported by # Blender. mne.write_surface(conv_dir / 'inner_skull.obj', coords, faces, overwrite=True) # Also convert the outer skull surface. coords, faces = mne.read_surface(bem_dir / 'outer_skull.surf') mne.write_surface(conv_dir / 'outer_skull.obj', coords, faces, overwrite=True) coords, faces = mne.read_surface(conv_dir / 'inner_skull.obj') coords[0] /= 1.1 # Move the first vertex back inside the skull mne.write_surface(conv_dir / 'inner_skull_fixed.obj', coords, faces, overwrite=True) # Read the fixed surface coords, faces = mne.read_surface(conv_dir / 'inner_skull_fixed.obj') # Backup the original surface shutil.copy(bem_dir / 'inner_skull.surf', bem_dir / 'inner_skull_orig.surf') # Overwrite the original surface with the fixed version # In real study you should provide the correct metadata using ``volume_info=`` # This could be accomplished for example with: # # _, _, vol_info = mne.read_surface(bem_dir / 'inner_skull.surf', # read_metadata=True) # mne.write_surface(bem_dir / 'inner_skull.surf', coords, faces, # volume_info=vol_info, overwrite=True) # Load the fixed surface coords, faces = mne.read_surface(bem_dir / 'outer_skin.surf') # Make sure we are in the correct directory head_dir = bem_dir.parent # Remember to backup the original head file in advance! # Overwrite the original head file # # mne.write_head_bem(head_dir / 'sample-head.fif', coords, faces, # overwrite=True) # If ``-head-dense.fif`` does not exist, you need to run # ``mne make_scalp_surfaces`` first. # [0] because a list of surfaces is returned surf = mne.read_bem_surfaces(head_dir / 'sample-head.fif')[0] # For consistency only coords = surf['rr'] faces = surf['tris'] # Write the head as an .obj file for editing mne.write_surface(conv_dir / 'sample-head.obj', coords, faces, overwrite=True) # Usually here you would go and edit your meshes. # # Here we just use the same surface as if it were fixed # Read in the .obj file coords, faces = mne.read_surface(conv_dir / 'sample-head.obj') # Remember to backup the original head file in advance! # Overwrite the original head file # # mne.write_head_bem(head_dir / 'sample-head.fif', coords, faces, # overwrite=True) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Exporting surfaces to Blender Step2: Editing in Blender Step3: Back in Python, you can read the fixed .obj files and save them as Step4: Editing the head surfaces Step5: High-resolution head
3,249
<ASSISTANT_TASK:> Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'fio-ronm', 'sandbox-3', 'aerosol') # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.scheme_scope') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "troposhere" # "stratosphere" # "mesosphere" # "mesosphere" # "whole atmosphere" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.basic_approximations') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "3D mass/volume ratio for aerosols" # "3D number concenttration for aerosols" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.family_approach') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Uses atmospheric chemistry time stepping" # "Specific timestepping (operator splitting)" # "Specific timestepping (integrated)" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Explicit" # "Implicit" # "Semi-implicit" # "Semi-analytic" # "Impact solver" # "Back Euler" # "Newton Raphson" # "Rosenbrock" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.resolution.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.transport.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.aerosol.transport.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Uses Atmospheric chemistry transport scheme" # "Specific transport scheme (eulerian)" # "Specific transport scheme (semi-lagrangian)" # "Specific transport scheme (eulerian and semi-lagrangian)" # "Specific transport scheme (lagrangian)" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Uses Atmospheric chemistry transport scheme" # "Mass adjustment" # "Concentrations positivity" # "Gradients monotonicity" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.transport.convention') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Uses Atmospheric chemistry transport scheme" # "Convective fluxes connected to tracers" # "Vertical velocities connected to tracers" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Prescribed (climatology)" # "Prescribed CMIP6" # "Prescribed above surface" # "Interactive" # "Interactive above surface" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.sources') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Vegetation" # "Volcanos" # "Bare ground" # "Sea surface" # "Lightning" # "Fires" # "Aircraft" # "Anthropogenic" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Interannual" # "Annual" # "Monthly" # "Daily" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.other_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.concentrations.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.model.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.model.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Dry deposition" # "Sedimentation" # "Wet deposition (impaction scavenging)" # "Wet deposition (nucleation scavenging)" # "Coagulation" # "Oxidation (gas phase)" # "Oxidation (in cloud)" # "Condensation" # "Ageing" # "Advection (horizontal)" # "Advection (vertical)" # "Heterogeneous chemistry" # "Nucleation" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.model.coupling') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Radiation" # "Land surface" # "Heterogeneous chemistry" # "Clouds" # "Ocean" # "Cryosphere" # "Gas phase chemistry" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.model.gas_phase_precursors') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "DMS" # "SO2" # "Ammonia" # "Iodine" # "Terpene" # "Isoprene" # "VOC" # "NOx" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.model.scheme_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Bulk" # "Modal" # "Bin" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.model.bulk_scheme_species') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Sulphate" # "Nitrate" # "Sea salt" # "Dust" # "Ice" # "Organic" # "Black carbon / soot" # "SOA (secondary organic aerosols)" # "POM (particulate organic matter)" # "Polar stratospheric ice" # "NAT (Nitric acid trihydrate)" # "NAD (Nitric acid dihydrate)" # "STS (supercooled ternary solution aerosol particule)" # "Other: [Please specify]" # TODO - please enter value(s) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Document Authors Step2: Document Contributors Step3: Document Publication Step4: Document Table of Contents Step5: 1.2. Model Name Step6: 1.3. Scheme Scope Step7: 1.4. Basic Approximations Step8: 1.5. Prognostic Variables Form Step9: 1.6. Number Of Tracers Step10: 1.7. Family Approach Step11: 2. Key Properties --&gt; Software Properties Step12: 2.2. Code Version Step13: 2.3. Code Languages Step14: 3. Key Properties --&gt; Timestep Framework Step15: 3.2. Split Operator Advection Timestep Step16: 3.3. Split Operator Physical Timestep Step17: 3.4. Integrated Timestep Step18: 3.5. Integrated Scheme Type Step19: 4. Key Properties --&gt; Meteorological Forcings Step20: 4.2. Variables 2D Step21: 4.3. Frequency Step22: 5. Key Properties --&gt; Resolution Step23: 5.2. Canonical Horizontal Resolution Step24: 5.3. Number Of Horizontal Gridpoints Step25: 5.4. Number Of Vertical Levels Step26: 5.5. Is Adaptive Grid Step27: 6. Key Properties --&gt; Tuning Applied Step28: 6.2. Global Mean Metrics Used Step29: 6.3. Regional Metrics Used Step30: 6.4. Trend Metrics Used Step31: 7. Transport Step32: 7.2. Scheme Step33: 7.3. Mass Conservation Scheme Step34: 7.4. Convention Step35: 8. Emissions Step36: 8.2. Method Step37: 8.3. Sources Step38: 8.4. Prescribed Climatology Step39: 8.5. Prescribed Climatology Emitted Species Step40: 8.6. Prescribed Spatially Uniform Emitted Species Step41: 8.7. Interactive Emitted Species Step42: 8.8. Other Emitted Species Step43: 8.9. Other Method Characteristics Step44: 9. Concentrations Step45: 9.2. Prescribed Lower Boundary Step46: 9.3. Prescribed Upper Boundary Step47: 9.4. Prescribed Fields Mmr Step48: 9.5. Prescribed Fields Mmr Step49: 10. Optical Radiative Properties Step50: 11. Optical Radiative Properties --&gt; Absorption Step51: 11.2. Dust Step52: 11.3. Organics Step53: 12. Optical Radiative Properties --&gt; Mixtures Step54: 12.2. Internal Step55: 12.3. Mixing Rule Step56: 13. Optical Radiative Properties --&gt; Impact Of H2o Step57: 13.2. Internal Mixture Step58: 14. Optical Radiative Properties --&gt; Radiative Scheme Step59: 14.2. Shortwave Bands Step60: 14.3. Longwave Bands Step61: 15. Optical Radiative Properties --&gt; Cloud Interactions Step62: 15.2. Twomey Step63: 15.3. Twomey Minimum Ccn Step64: 15.4. Drizzle Step65: 15.5. Cloud Lifetime Step66: 15.6. Longwave Bands Step67: 16. Model Step68: 16.2. Processes Step69: 16.3. Coupling Step70: 16.4. Gas Phase Precursors Step71: 16.5. Scheme Type Step72: 16.6. Bulk Scheme Species
3,250
<ASSISTANT_TASK:> Python Code: 
def prime(n):
    # Trial-division primality test over 6k +/- 1 candidates
    if n <= 1:
        return False
    if n <= 3:
        return True
    if n % 2 == 0 or n % 3 == 0:
        return False
    i = 5
    while i * i <= n:
        if n % i == 0 or n % (i + 2) == 0:
            return False
        i += 6
    return True

def isVowel(c):
    # Case-insensitive check for a single vowel character
    return c.lower() in ('a', 'e', 'i', 'o', 'u')

def isValidString(word):
    # A string is "valid" when the number of vowels it contains is prime
    cnt = sum(1 for ch in word if isVowel(ch))
    return prime(cnt)

if __name__ == "__main__":
    s = "geeksforgeeks"
    print("YES" if isValidString(s) else "NO")
 <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description:
3,251
<ASSISTANT_TASK:> Python Code: import pandas as pd # proposition = "PROPOSITION 064- MARIJUANA LEGALIZATION. INITIATIVE STATUTE." proposition = "PROPOSITION 062- DEATH PENALTY. INITIATIVE STATUTE." props = pd.read_csv("http://www.firstpythonnotebook.org/_static/committees.csv") contribs = pd.read_csv("http://www.firstpythonnotebook.org/_static/contributions.csv") props.prop_name.value_counts() prop = props[props.prop_name == proposition] merged = pd.merge(prop, contribs, on="calaccess_committee_id") merged[["contributor_firstname", "contributor_lastname", "contributor_city", "contributor_state", "amount", "committee_name_x", "committee_position"]].head() merged.amount.sum() merged.committee_position.value_counts().reset_index() support = merged[merged.committee_position == "SUPPORT"] oppose = merged[merged.committee_position == "OPPOSE"] support.amount.sum() oppose.amount.sum() support.amount.sum() / merged.amount.sum() support.sort_values("amount", ascending=False)[["contributor_firstname", "contributor_lastname", "contributor_city", "contributor_state", "amount", "committee_name_x"]].head() oppose.sort_values("amount", ascending=False)[["contributor_firstname", "contributor_lastname", "contributor_city", "contributor_state", "amount", "committee_name_x"]].head() merged.groupby(["committee_name_x", "committee_position"]).amount.sum().reset_index().sort_values("amount", ascending=False) top_contributors = merged.fillna({'contributor_firstname': 'NotAFirstName', 'contributor_lastname': 'NotALastName'}) top_contributors = top_contributors.groupby(["contributor_firstname", "contributor_lastname", "contributor_city", "committee_position"]).amount.sum().reset_index().sort_values("amount", ascending=False) top_contributors.head(10) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Read in data on committees and contributions Step2: Number of committees per proposition Step3: Filter for proposition of interest Step4: All contributions to all committees for and against the proposition Step5: Total contributions Step6: Number of committees for and against the proposition Step7: Total contributions for the proposition Step8: Total contributions against the proposition Step9: Percentage of total contributions given to support the proposition Step10: Top contributions in support of the proposition Step11: Top contributions in opposition to the proposition Step12: Total intake, by committee, for and against the proposition Step13: Top individual and organizational contributors for and against the proposition
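A brief sketch of how the support share in Step9 can be computed for both positions at once, reusing the merged dataframe and column names from the code above; the groupby framing is an assumption, not taken from the original notebook.

# Fraction of all money raised, per committee position (SUPPORT vs OPPOSE)
position_share = merged.groupby("committee_position").amount.sum() / merged.amount.sum()
print(position_share)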
3,252
<ASSISTANT_TASK:> Python Code: import pandas as pd from opengrid.library import misc from opengrid.library import houseprint from opengrid.library import caching import charts hp = houseprint.Houseprint() cache_water = caching.Cache(variable='water_daily_min') df_cache = cache_water.get(sensors=hp.get_sensors(sensortype='water')) charts.plot(df_cache.ix[-8:], stock=True, show='inline') hp.sync_tmpos() start = pd.Timestamp('now') - pd.Timedelta(weeks=1) df_water = hp.get_data(sensortype='water', head=start, ) df_water.info() daily_min = analysis.DailyAgg(df_water, agg='min').result daily_min.info() daily_min cache_water.update(daily_min) sensors = hp.get_sensors(sensortype='water') # sensor objects charts.plot(cache_water.get(sensors=sensors, start=start, end=None), show='inline', stock=True) import pandas as pd from opengrid.library import misc from opengrid.library import houseprint from opengrid.library import caching from opengrid.library import analysis import charts hp = houseprint.Houseprint() #hp.sync_tmpos() sensors = hp.get_sensors(sensortype='water') caching.cache_results(hp=hp, sensors=sensors, resultname='water_daily_min', AnalysisClass=analysis.DailyAgg, agg='min') cache = caching.Cache('water_daily_min') daily_min = cache.get(sensors = sensors, start = '20151201') charts.plot(daily_min, stock=True, show='inline') <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: We demonstrate the caching for the minimal daily water consumption (should be close to zero unless there is a water leak). We create a cache object by specifying what we like to store and retrieve through this object. The cached data is saved as a single csv per sensor in a folder specified in the opengrid.cfg. Add the path to a folder where you want these csv-files to be stored as follows to your opengrid.cfg Step2: If this is the first time you run this demo, no cached data will be found, and you get an empty graph. Step3: We use the method daily_min() from the analysis module to obtain a dataframe with daily minima for each sensor. Step4: Now we can get the daily water minima from the cache directly. Pass a start or end date to limit the returned dataframe. Step5: A high-level cache function
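The daily-minimum cache above is built with opengrid's DailyAgg; as a rough plain-pandas sketch of the same idea (assuming df_water has a datetime index), the daily minimum per sensor column could also be computed with a resample.

# Plain-pandas equivalent of a daily minimum, assuming a datetime-indexed dataframe
daily_min_sketch = df_water.resample('D').min()
print(daily_min_sketch.tail())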
3,253
<ASSISTANT_TASK:> Python Code: import numpy as np import dask.array as da from fmks.data.cahn_hilliard import generate_cahn_hilliard_data import dask.threaded import dask.multiprocessing def time_ch(num_workers, get, shape=(48, 200, 200), chunks=(1, 200, 200), n_steps=100): generate_cahn_hilliard_data(shape, chunks=chunks, n_steps=n_steps)[1].compute(num_workers=num_workers, get=get) for n_proc in (8, 4, 2, 1): print(n_proc, "thread(s)") %timeit time_ch(n_proc, dask.threaded.get) for n_proc in (8, 4, 2, 1): print(n_proc, "process(es)") %timeit time_ch(n_proc, dask.multiprocessing.get) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: The function time_ch calls generate_cahn_hilliard_data to generate the data. generate_cahn_hilliard_data returns the microstructure and response as a tuple, and compute is called on the response field with a given number of workers and a given scheduler. Step2: Threaded Timings Step3: Multiprocessing Timings
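For readers unfamiliar with dask schedulers, a tiny self-contained sketch of the threaded-versus-multiprocessing comparison on a generic dask array; it uses the modern scheduler= keyword rather than the older get= argument used above, and the array and chunk sizes are arbitrary.

import dask.array as da

# Arbitrary chunked array so the reduction can run in parallel
x = da.random.random((4000, 4000), chunks=(1000, 1000))

# The same task graph executed by two different schedulers
mean_threads = x.mean().compute(scheduler="threads", num_workers=4)
mean_processes = x.mean().compute(scheduler="processes", num_workers=4)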
3,254
<ASSISTANT_TASK:> Python Code: from pymatgen.entries.computed_entries import ComputedEntry from pymatgen.entries.compatibility import MaterialsProjectCompatibility, \ MaterialsProject2020Compatibility from pymatgen.ext.matproj import MPRester # retrieve with MPRester() as m: entries = m.get_entries_in_chemsys("Cl-Mo-O") entry = entries[0] entries[25].energy_adjustments compat = MaterialsProjectCompatibility() entries = compat.process_entries(entries) entries[25].energy_adjustments entries[25].energy_per_atom entries[25].correction_per_atom entries[25].energy_adjustments = [] entries[25].energy_per_atom entries[25].correction_per_atom # retrieve with MPRester() as m: entries = m.get_entries_in_chemsys("Cl-Mo-O", compatible_only=False) entry = entries[0] entries[25].energy_adjustments <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Default behavior - MaterialsProject2020Compatibility Step2: You can examine the energy corrections via the energy_adjustments attribute Step3: If you want even more detail, you can examine an individual EnergyAdjustment (one element of the list) Step4: Notice how the energy adjustments have changed. The class name, description and values are all different. You will also notice that the descriptions of the legacy corrections are less verbose than those of the modern MaterialsProject2020Compatibility corrections. Step5: Alternatively, you can simply pass compatible_only=False to the MPRester call when you download data.
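A short sketch of applying the newer correction scheme explicitly, mirroring the MaterialsProjectCompatibility call in the code above; MaterialsProject2020Compatibility is already imported there, and printing the adjustments of one processed entry is just for inspection.

# Apply the 2020 correction scheme explicitly (sketch)
compat_2020 = MaterialsProject2020Compatibility()
entries_2020 = compat_2020.process_entries(entries)

# Inspect the adjustments attached to one processed entry
for adjustment in entries_2020[0].energy_adjustments:
    print(adjustment)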
3,255
<ASSISTANT_TASK:> Python Code: from __future__ import division %pylab inline from scipy import stats import numpy as np b= stats.bernoulli(.5) # fair coin distribution nsamples = 100 # flip it nsamples times for 200 estimates xs = b.rvs(nsamples*200).reshape(nsamples,-1) phat = np.mean(xs,axis=0) # estimated p # edge of 95% confidence interval epsilon_n=np.sqrt(np.log(2/0.05)/2/nsamples) pct=np.logical_and(phat-epsilon_n<=0.5, 0.5 <= (epsilon_n +phat) ).mean()*100 print 'Interval trapped correct value ', pct,'% of the time' # compute estimated se for all trials se=np.sqrt(phat*(1-phat)/xs.shape[0]) # generate random variable for trial 0 rv=stats.norm(0, se[0]) # compute 95% confidence interval for that trial 0 np.array(rv.interval(0.95))+phat[0] def compute_CI(i): return stats.norm.interval(0.95,loc=i, scale=np.sqrt(i*(1-i)/xs.shape[0])) lower,upper = compute_CI(phat) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: In a previous coin-flipping discussion, we discussed estimation of the Step2: (code listing included from src-statistics/Confidence_Intervals.py)
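A worked numeric check of the Hoeffding-style half-width used above, epsilon_n = sqrt(ln(2/alpha) / (2 n)); for n = 100 samples and alpha = 0.05 it comes out near 0.136, so the interval is phat +/- 0.136.

import numpy as np

n, alpha = 100, 0.05
epsilon_n = np.sqrt(np.log(2 / alpha) / (2 * n))
print(epsilon_n)  # approximately 0.1358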
3,256
<ASSISTANT_TASK:> Python Code: import gzip import pickle import numpy as np import sklearn.svm as svm def load_data(): with gzip.open('../mnist.pkl.gz', 'rb') as f: train, validate, test = pickle.load(f, encoding="latin1") X_train = np.array([np.reshape(x, (784, )) for x in train[0]]) X_test = np.array([np.reshape(x, (784, )) for x in test [0]]) Y_train = np.array(train[1]) Y_test = np.array(test [1]) return (X_train, X_test, Y_train, Y_test) X_train, X_test, Y_train, Y_test = load_data() X_train.shape, X_test.shape, Y_train.shape, Y_test.shape M = svm.SVC(kernel='rbf', gamma=0.05, C=5) %%time M.fit(X_train, Y_train) M.score(X_train, Y_train) M.score(X_test, Y_test) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: The function $\texttt{load_data}()$ returns a pair of the form Step2: Let us see what we have read Step3: We define a support vector machine with a Gaussian kernel.
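A small follow-up sketch, reusing the M, X_test and Y_test names defined above, that compares a handful of predictions with the true labels; purely illustrative.

# Predict a few test digits and compare with the ground-truth labels
print(M.predict(X_test[:10]))
print(Y_test[:10])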
3,257
<ASSISTANT_TASK:> Python Code: import numpy as np import scipy as sp from sklearn import datasets iris = datasets.load_iris() digits = datasets.load_digits() boston = datasets.load_boston() from sklearn import svm model = svm.SVC(gamma=0.002, C=100.) print(model.gamma) model.set_params(gamma=.001) print(model.gamma) model.fit(digits.data[:-1], digits.target[:-1]) model.predict([digits.data[-1]]) import pylab as pl %matplotlib inline pl.imshow(digits.images[-1], cmap=pl.cm.gray_r) iris = datasets.load_iris() iris_X = iris.data iris_y = iris.target # Split iris data in train and test data # A random permutation, to split the data randomly np.random.seed(0) indices = np.random.permutation(len(iris_X)) iris_X_train = iris_X[indices[:-10]] iris_y_train = iris_y[indices[:-10]] iris_X_test = iris_X[indices[-10:]] iris_y_test = iris_y[indices[-10:]] # Create and fit a nearest-neighbor classifier from sklearn.neighbors import KNeighborsClassifier knn = KNeighborsClassifier() knn.fit(iris_X_train, iris_y_train) print(knn.predict(iris_X_test)) print(iris_y_test) knn.score(iris_X_test, iris_y_test) from sklearn import linear_model logistic = linear_model.LogisticRegression(C=1e5) logistic.fit(iris_X_train, iris_y_train) print(logistic.predict(iris_X_test)) print(iris_y_test) logistic.score(iris_X_test, iris_y_test) scores = [] for k in range(10): indices = np.random.permutation(len(iris_X)) iris_X_train = iris_X[indices[:-10]] iris_y_train = iris_y[indices[:-10]] iris_X_test = iris_X[indices[-10:]] iris_y_test = iris_y[indices[-10:]] knn = KNeighborsClassifier() knn.fit(iris_X_train, iris_y_train) scores.append(knn.score(iris_X_test, iris_y_test)) print(scores) X_digits = digits.data y_digits = digits.target svc = svm.SVC(C=1, kernel='linear') N = 10 X_folds = np.array_split(X_digits, N) y_folds = np.array_split(y_digits, N) scores = list() for k in range(N): # We use 'list' to copy, in order to 'pop' later on X_train = list(X_folds) X_test = X_train.pop(k) X_train = np.concatenate(X_train) y_train = list(y_folds) y_test = y_train.pop(k) y_train = np.concatenate(y_train) scores.append(svc.fit(X_train, y_train).score(X_test, y_test)) scores from sklearn import model_selection k_fold = cross_validation.KFold(n=6, n_folds=3) for train_indices, test_indices in k_fold: print('Train: %s | test: %s' % (train_indices, test_indices)) kfold = cross_validation.KFold(len(X_digits), n_folds=N) [svc.fit(X_digits[train], y_digits[train]).score( X_digits[test], y_digits[test]) for train, test in kfold] cross_validation.cross_val_score( svc, X_digits, y_digits, cv=kfold, n_jobs=-1) import numpy as np from sklearn import cross_validation, datasets, svm digits = datasets.load_digits() X = digits.data y = digits.target svc = svm.SVC(kernel='linear') C_s = np.logspace(-10, 0, 10) scores = list() scores_std = list() for C in C_s: svc.C = C this_scores = cross_validation.cross_val_score(svc, X, y, n_jobs=1) scores.append(np.mean(this_scores)) scores_std.append(np.std(this_scores)) # Do the plotting import matplotlib.pyplot as plt plt.figure(1, figsize=(4, 3)) plt.clf() plt.semilogx(C_s, scores) plt.semilogx(C_s, np.array(scores) + np.array(scores_std), 'b--') plt.semilogx(C_s, np.array(scores) - np.array(scores_std), 'b--') locs, labels = plt.yticks() plt.yticks(locs, list(map(lambda x: "%g" % x, locs))) plt.ylabel('CV score') plt.xlabel('Parameter C') plt.ylim(0, 1.1) plt.show() from sklearn.model_selection import GridSearchCV, cross_val_score Cs = np.logspace(-6, -1, 10) clf = GridSearchCV(estimator=svc, 
param_grid=dict(C=Cs), n_jobs=-1) clf.fit(X_digits[:1000], y_digits[:1000]) print(clf.best_score_) print(clf.best_estimator_.C) # Prediction performance on test set is not as good as on train set print(clf.score(X_digits[1000:], y_digits[1000:])) from sklearn import linear_model, decomposition, datasets from sklearn.pipeline import Pipeline from sklearn.model_selection import GridSearchCV logistic = linear_model.LogisticRegression() pca = decomposition.PCA() pipe = Pipeline(steps=[('pca', pca), ('logistic', logistic)]) digits = datasets.load_digits() X_digits = digits.data y_digits = digits.target ############################################################################### # Plot the PCA spectrum pca.fit(X_digits) plt.figure(1, figsize=(4, 3)) plt.clf() plt.axes([.2, .2, .7, .7]) plt.plot(pca.explained_variance_, linewidth=2) plt.axis('tight') plt.xlabel('n_components') plt.ylabel('explained_variance_') ############################################################################### # Prediction n_components = [20, 40, 64] Cs = np.logspace(-4, 4, 3) #Parameters of pipelines can be set using ‘__’ separated parameter names: estimator = GridSearchCV(pipe, dict(pca__n_components=n_components, logistic__C=Cs)) estimator.fit(X_digits, y_digits) plt.axvline(estimator.best_estimator_.named_steps['pca'].n_components, linestyle=':', label='n_components chosen') plt.legend(prop=dict(size=12)) =================================================== Faces recognition example using eigenfaces and SVMs =================================================== The dataset used in this example is a preprocessed excerpt of the "Labeled Faces in the Wild", aka LFW_: http://vis-www.cs.umass.edu/lfw/lfw-funneled.tgz (233MB) .. _LFW: http://vis-www.cs.umass.edu/lfw/ Expected results for the top 5 most represented people in the dataset: ================== ============ ======= ========== ======= precision recall f1-score support ================== ============ ======= ========== ======= Ariel Sharon 0.67 0.92 0.77 13 Colin Powell 0.75 0.78 0.76 60 Donald Rumsfeld 0.78 0.67 0.72 27 George W Bush 0.86 0.86 0.86 146 Gerhard Schroeder 0.76 0.76 0.76 25 Hugo Chavez 0.67 0.67 0.67 15 Tony Blair 0.81 0.69 0.75 36 avg / total 0.80 0.80 0.80 322 ================== ============ ======= ========== ======= from __future__ import print_function from time import time import logging import matplotlib.pyplot as plt from sklearn.model_selection import train_test_split from sklearn.model_selection import GridSearchCV from sklearn.datasets import fetch_lfw_people from sklearn.metrics import classification_report from sklearn.metrics import confusion_matrix from sklearn.decomposition import PCA from sklearn.svm import SVC print(__doc__) # Display progress logs on stdout logging.basicConfig(level=logging.INFO, format='%(asctime)s %(message)s') ############################################################################### # Download the data, if not already on disk and load it as numpy arrays lfw_people = fetch_lfw_people(min_faces_per_person=70, resize=0.4) # introspect the images arrays to find the shapes (for plotting) n_samples, h, w = lfw_people.images.shape # for machine learning we use the 2 data directly (as relative pixel # positions info is ignored by this model) X = lfw_people.data n_features = X.shape[1] # the label to predict is the id of the person y = lfw_people.target target_names = lfw_people.target_names n_classes = target_names.shape[0] print("Total dataset size:") print("n_samples: %d" % n_samples) print("n_features: %d" 
% n_features) print("n_classes: %d" % n_classes) ############################################################################### # Split into a training set and a test set using a stratified k fold # split into a training and testing set X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.25, random_state=42) ############################################################################### # Compute a PCA (eigenfaces) on the face dataset (treated as unlabeled # dataset): unsupervised feature extraction / dimensionality reduction n_components = 150 print("Extracting the top %d eigenfaces from %d faces" % (n_components, X_train.shape[0])) t0 = time() pca = PCA(n_components=n_components, svd_solver='randomized', whiten=True).fit(X_train) print("done in %0.3fs" % (time() - t0)) eigenfaces = pca.components_.reshape((n_components, h, w)) print("Projecting the input data on the eigenfaces orthonormal basis") t0 = time() X_train_pca = pca.transform(X_train) X_test_pca = pca.transform(X_test) print("done in %0.3fs" % (time() - t0)) ############################################################################### # Train a SVM classification model print("Fitting the classifier to the training set") t0 = time() param_grid = {'C': [1e3, 5e3, 1e4, 5e4, 1e5], 'gamma': [0.0001, 0.0005, 0.001, 0.005, 0.01, 0.1], } clf = GridSearchCV(SVC(kernel='rbf', class_weight='balanced'), param_grid) clf = clf.fit(X_train_pca, y_train) print("done in %0.3fs" % (time() - t0)) print("Best estimator found by grid search:") print(clf.best_estimator_) ############################################################################### # Quantitative evaluation of the model quality on the test set print("Predicting people's names on the test set") t0 = time() y_pred = clf.predict(X_test_pca) print("done in %0.3fs" % (time() - t0)) print(classification_report(y_test, y_pred, target_names=target_names)) print(confusion_matrix(y_test, y_pred, labels=range(n_classes))) ############################################################################### # Qualitative evaluation of the predictions using matplotlib def plot_gallery(images, titles, h, w, n_row=3, n_col=4): Helper function to plot a gallery of portraits plt.figure(figsize=(1.8 * n_col, 2.4 * n_row)) plt.subplots_adjust(bottom=0, left=.01, right=.99, top=.90, hspace=.35) for i in range(n_row * n_col): plt.subplot(n_row, n_col, i + 1) plt.imshow(images[i].reshape((h, w)), cmap=plt.cm.gray) plt.title(titles[i], size=12) plt.xticks(()) plt.yticks(()) # plot the result of the prediction on a portion of the test set def title(y_pred, y_test, target_names, i): pred_name = target_names[y_pred[i]].rsplit(' ', 1)[-1] true_name = target_names[y_test[i]].rsplit(' ', 1)[-1] return 'predicted: %s\ntrue: %s' % (pred_name, true_name) prediction_titles = [title(y_pred, y_test, target_names, i) for i in range(y_pred.shape[0])] plot_gallery(X_test, prediction_titles, h, w) # plot the gallery of the most significative eigenfaces eigenface_titles = ["eigenface %d" % i for i in range(eigenfaces.shape[0])] plot_gallery(eigenfaces, eigenface_titles, h, w) plt.show() <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Even before scikit-learn Step2: We can look at the image. Step3: Something to know about (but for another day) Step4: The simplest classifier imaginable is called kNN. With scikit-learn, it is easy. (Visualisation to follow.) Step5: Logistic regression is an important classification algorithm in machine learning. Here it is on the same data Step6: Exercise Step7: What we have just done is called "cross validation". It can be done more easily Step8: In cross validation, the higher the score, the better. Step9: Grid search Step10: Pipelining Step13: Eigenfaces
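Note that the cross_validation module used in the code above was removed from modern scikit-learn; a minimal sketch of the equivalent calls with model_selection, reusing the svc, X_digits and y_digits names from above.

from sklearn.model_selection import KFold, cross_val_score

k_fold = KFold(n_splits=3)
for train_indices, test_indices in k_fold.split(X_digits):
    print('Train: %s | test: %s' % (train_indices, test_indices))

print(cross_val_score(svc, X_digits, y_digits, cv=k_fold, n_jobs=-1))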
3,258
<ASSISTANT_TASK:> Python Code: df['citizenship'].value_counts().head() df.groupby('citizenship')['networthusbillion'].sum().sort_values(ascending=False) us_pop = 318.9 #billion (2014) us_bill = df[df['citizenship'] == 'United States'] print("There are", us_pop/len(us_bill), "billionaires per billion people in the United States.") germ_pop = 0.08062 #(2013) germ_bill = df[df['citizenship'] == 'Germany'] print("There are", germ_pop/len(germ_bill), "billionaires per billion people in Germany.") china_pop = 1.357 #(2013) china_bill = df[df['citizenship'] == 'China'] print("There are", china_pop/len(china_bill), "billionaires per billion people in China.") russia_pop = 0.1435 #(2013) russia_bill = df[df['citizenship'] == 'Russia'] print("There are", russia_pop/len(russia_bill), "billionaires per billion people in Russia.") japan_pop = 0.1273 # 2013 japan_bill = df[df['citizenship'] == 'Japan'] print("There are", japan_pop/len(japan_bill), "billionaires per billion people in Japan.") print(df.columns) recent = df[df['year'] == 2014] # if it is not recent then there are duplicates for diff years recent.sort_values('rank').head(10) recent['networthusbillion'].describe() print("The average wealth of a billionaire is", recent['networthusbillion'].mean(), "billion dollars") male = recent[(recent['gender'] == 'male')] female = recent[(recent['gender'] == 'female')] print("The average wealth of a male billionaire is", male['networthusbillion'].mean(), "billion dollars") print("The average wealth of a female billionaire is", female['networthusbillion'].mean(), "billion dollars") recent.sort_values('networthusbillion').head(1) # Who are the top 10 poorest billionaires? # Who are the top 10 poorest billionaires recent.sort_values('networthusbillion').head(10) # 'What is relationship to company'? And what are the most common relationships? #top 10 most common relationships to company df['relationshiptocompany'].value_counts().head(10) # Most common source of wealth? Male vs. female? # Most common source of wealth? Male vs. female print("The most common source of wealth is", df['sourceofwealth'].value_counts().head(1)) print("The most common source of wealth for males is", male['sourceofwealth'].value_counts().head(1)) print("The most common source of wealth for females is", female['sourceofwealth'].value_counts().head(1)) #need to figure out how to extract just the number nd not the data type 'Name: sourceofwealth, dtype: int64' richest = df[df['citizenship'] == 'United States'].sort_values('rank').head(1)['networthusbillion'].to_dict() # richest['networthusbillion'] richest[282] ## I JUST WANT THE VALUE -- 18.5. ## 16.77 TRILLION US_GDP = 1.677 * (10^13) US_GDP recent['sector'].value_counts().head(10) df.groupby('sector')['networthusbillion'].sum() (recent['selfmade'] == 'self-made').value_counts() # recent['age'].value_counts().sort_values() print("The average billionnaire is", round(recent['age'].mean()), "years old.") df.groupby('selfmade')['age'].mean() # or different industries? df.groupby('sector')['age'].mean() #youngest billionnaires recent.sort_values('age').head(10) #oldest billionnaires recent.sort_values('age', ascending =False).head(10) #Age distribution - maybe make a graph about it? import matplotlib.pyplot as plt %matplotlib inline # This will scream we don't have matplotlib. 
his = df['age'].hist(range=[0, 100]) his.set_title('Distribution of Age Amongst Billionaires') his.set_xlabel('Age(years)') his.set_ylabel('# of Billionnaires') # Maybe just made a graph about how wealthy they are in general? import matplotlib.pyplot as plt %matplotlib inline # This will scream we don't have matplotlib. his = df['networthusbillion'].hist(range=[0, 45]) his.set_title('Distribution of Wealth Amongst Billionaires') his.set_xlabel('Wealth(Billions)') his.set_ylabel('# of Billionnaires') recent.plot(kind='scatter', x='networthusbillion', y='age') recent.plot(kind='scatter', x='age', y='networthusbillion', alpha = 0.2) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Who are the top 10 richest billionaires? Step2: What's the average wealth of a billionaire? Male? Female? Step3: Who is the poorest billionaire? Step4: Given the richest person in a country, what % of the GDP is their wealth? Step5: Add up the wealth of all of the billionaires in a given country (or a few countries) and then compare it to the GDP of the country, or other billionaires, so like pit the US vs India Step6: How many self made billionaires vs. others? Step7: How old are billionaires? Step8: How old are billionaires self made vs. non self made? Step9: Maybe plot their net worth vs age (scatterplot)
3,259
<ASSISTANT_TASK:> Python Code: import dateutils import dateutil.parser import pandas as pd parking_df = pd.read_csv("small-violations.csv") parking_df parking_df.dtypes import datetime parking_df.head()['Issue Date'].astype(datetime.datetime) import pandas as pd parking_df = pd.read_csv("small-violations.csv") parking_df col_plateid = { 'Plate ID': 'str', } violations_df = pd.read_csv("small-violations.csv", dtype=col_plateid) violations_df.head(20) print("The data type is",(type(violations_df['Plate ID'][0]))) type(parking_df['Vehicle Year'][0]) # DISCOVERY - pass value as [0] rather than 0 col_types = { 'Vehicle Year': [0] } test_df = pd.read_csv("violations.csv", na_values=col_types, nrows=10) test_df.head(10) violations_df['Vehicle Year'] = violations_df['Vehicle Year'].replace("0","NaN") violations_df.head(10) type(violations_df['Issue Date'][0]) violate_df = pd.read_csv("small-violations.csv", parse_dates=True, infer_datetime_format=True, keep_date_col=True, date_parser=True, dayfirst=True, nrows=10) #violate_df['Vehicle Year'] = test1_df['Vehicle Year'].replace("0","NaN") violate_df.head() yourdate = dateutil.parser.parse(violate_df['Issue Date'][0]) yourdate violate_df.head()['Issue Date'].astype(datetime.datetime) violate_df.columns # changing it to string because it later needs to be converted into Python time. col_observ = { 'Date First Observed': 'str', } test2_df = pd.read_csv("violations.csv", dtype=col_observ, nrows=10) test2_df.head() # defining conversion into python time def to_date(num): if num == "0": return num.replace("0","NaN") else: yourdate = dateutil.parser.parse(num) date_in_py = yourdate.strftime("%Y %B %d") return date_in_py to_date("20140324") # confirming its string. type(test2_df['Date First Observed'][0]) test2_df['Date First Observed'].apply(to_date) #replacing Date First Observed with Date First Observed column as already there are so many columns. test2_df['Date First Observed'] = test2_df['Date First Observed'].apply(to_date) violate_df['Violation Time'].head(5) type(violate_df['Violation Time'][0]) # am replacing A and P with AM and PM to def str_to_time(time_str): s = time_str.replace("P"," PM").replace("A"," AM") x = x = s[:2] + ":" + s[2:] return x str_to_time("1239P") test2_df['Violation Time'] = test2_df['Violation Time'].apply(str_to_time) def vio_date(time_str): parsed_date = dateutil.parser.parse(time_str) date_vio = parsed_date.strftime("%H:%M %p") return date_vio #return parsed_date.hour print(vio_date("12:32 PM")) test2_df['Violation Time'].apply(vio_date) #replacing Violation Time with Date Violation Time column as already there are so many columns. 
test2_df['Violation Time'] = test2_df['Violation Time'].apply(vio_date) test2_df['Violation Time'] #violate_df['Vehicle Color'].count_values() violate_df.groupby('Vehicle Color').describe() def to_color(color_str): if color_str == "WH": return str(color_str.replace("WH","White")) if color_str == "WHT": return str(color_str.replace("WHT","White")) if color_str == "RD": return str(color_str.replace("RD","Red")) if color_str == "BLK": return str(color_str.replace("BLK","BLACK")) if color_str == "BK": return str(color_str.replace("BK","BLACK")) if color_str == "BR": return str(color_str.replace("BR","Brown")) if color_str == "BRW": return str(color_str.replace("BRW","Brown")) if color_str == "GN": return str(color_str.replace("GN","Green")) if color_str == "GRY": return str(color_str.replace("GRY","Gray")) if color_str == "GY": return str(color_str.replace("GY","Gray")) if color_str == "BL": return str(color_str.replace("BL","Blue")) if color_str == "SILVR": return str(color_str.replace("SILVR","Silver")) if color_str == "SILVE": return str(color_str.replace("SILVE","Silver")) if color_str == "MAROO": return str(color_str.replace("MAROO","Maroon")) to_color("WHT") test2_df['Vehicle Color'].apply(to_color) #replacing Vehicle Color with Vehicle Color column as already there are so many columns. test2_df['Vehicle Color'] = test2_df['Vehicle Color'].apply(to_color) test2_df['Vehicle Color'].head() df_code = pd.read_csv("DOF_Parking_Violation_Codes.csv") df_code.head(10) violate_df['Violation Legal Code'].head() test2_df.join(df_code, on='Violation Code', how='left') for ammount in df_code["All Other Areas"]: try: money_to_int(ammount) except: print(ammount) print(type(ammount)) def money_to_int(money_str): return int(money_str.replace("$","").replace(",","")) print(money_to_int("$115")) import re ammount_list = [] other_area = df_code["All Other Areas"] for ammount in other_area: try: x = money_to_int(ammount) ammount_list.append(x) #print(amount_list) except: print("made it to except") if isinstance(ammount,str): print("is a string!") clean = re.findall(r"\d{3}", ammount) z = [int(i) for i in clean] #print(type(z[0])) #print(clean) if len(z) > 1: print("z is greater than 1") avg = int(sum(z) / len(z)) print(type(avg)) #print(avg) ammount_list.append(avg) elif len(z) == 1: print("only one item in list!") print("Let's append", str(z[0])) ammount_list.append(z[0]) #print(amount_list) else: ammount_list.append(None) else: ammount_list.append(None) len(ammount_list) df_code['new_areas'] = ammount_list df_code #df_code['new_areas'].sum() #since I am unable to read the entire data set using the subset to calculate the sum. 
test3_df = pd.read_csv("small-violations.csv", dtype=col_observ) test3_df test3_df.join(df_code, on='Violation Code', how='left') # joining with the violation dataset new_data = test3_df.join(df_code, on='Violation Code', how='left') new_data['new_areas'].sum() new_data.columns new_data['Violation Code'].value_counts() new_data[new_data['Violation Code'] == 21] most_frequent = new_data[new_data['Violation Code'] == 21].head(1) print("The most frequent violation is", most_frequent['DEFINITION']) columns_to_show = ['Violation Code','new_areas'] new_data[columns_to_show] lucrative_df = new_data[columns_to_show] freq_df = new_data #df.sort_values('length', ascending=False).head(3) lucrative_df.groupby('Violation Code')['new_areas'].sum().sort_values(ascending=False) new_data[new_data['Violation Code'] == 14].head(1) most_lucrative = new_data[new_data['Violation Code'] == 14].head(1) print("The most lucrative is Violation Code 14 which corresponds to", most_frequent['DEFINITION']) columns_to_show = ['Registration State','new_areas'] new_data[columns_to_show] df_reg = new_data[columns_to_show] df_reg[df_reg['Registration State'] != "NY"] df_nonNY = df_reg[df_reg['Registration State'] != "NY"] print("The total money that NYC make off of all non-New York vehicles is", df_nonNY['new_areas'].sum()) df_nonNY.groupby('Registration State')['new_areas'].sum().sort_values(ascending=False).head(10) import matplotlib.pyplot as plt %matplotlib inline df_nonNY.groupby('Registration State')['new_areas'].sum().sort_values(ascending=False).head(10).plot(kind='bar',x='Registration State', color='green') new_data.columns test2_df['Violation Time'].head() #new_data['Violation Time'] = test2_df['Violation Time'] violate_df['Violation Time'] type(new_data['Violation Time'][0]) v_time = new_data['Violation Time'] v_time.head(10) def vio_date(time_str): parsed_date = dateutil.parser.parse(time_str) date_vio = parsed_date.strftime("%H:%M %p") return date_vio #return parsed_date.hour print(vio_date("12:32 PM")) # 12am-6am, 6am-12pm, 12pm-6pm, 6pm-12am count1 = [] count2 = [] count3 = [] count4 = [] count5 = [] z = [i for i in v_time] #print(z) for i in z: if i != None: #print(i) try: vio_date(i) #print(type(i)) #print("finished printing i") except: pass #print(type(z[0])) for item in z: item = str(item) #print(type(item)) if item < "06.00 AM": count1.append(item) #print(len(count1)) if item < "12.00 PM": count2.append(item) if item < "06.00 PM": count3.append(item) if item < "12.00 AM": count4.append(item) #else: #count5.append(item) #print(len(count5)) print(len(count4)) print(len(count3)) print(len(count2)) print(len(count1)) #gives the Registration State wise ticket cost (new_areas) df_reg df_reg.describe() new_data.columns # parsing to the daytime format. did earlier (dateutil.parser.parse(violate_df['Issue Date'][0]) new_data['Issue Date'] = test3_df['Issue Date'] new_data['Issue Date'].value_counts().head(10) new_data['Issue Date'].value_counts().head(10).plot(kind='bar',x='Issue Date', color='orange') # since new data issue date is showing so many nan. 
i am going back to old data new_data['Issue Date'] = test3_df['Issue Date'] new_data['Issue Date'] columns_to_show = ['Issue Date','new_areas'] new_data[columns_to_show].head() new_data.groupby('Issue Date')['new_areas'].sum().sort_values(ascending=False).head(10).plot(kind='bar',x='Issue Date', color='green') df = pd.read_csv("borough.csv") df # bronx, queens, manhattan, staten island, brooklyn o df[58: 63] NYC = df[58: 63] NYC['code'] = ["BX", "K", "NYC", "Q", "R"] NYC columns_to_show = ['Violation County','new_areas'] new_data[columns_to_show] columns_to_show = ['Violation County','new_areas'] new_data[columns_to_show] county = new_data[columns_to_show] county.groupby('Violation County')['new_areas'].sum().sort_values(ascending=False).head(10) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: 1. I want to make sure my Plate ID is a string. Can't lose the leading zeroes! Step2: 2. I don't think anyone's car was built in 0AD. Discard the '0's as NaN. Step3: 3. I want the dates to be dates! Read the read_csv documentation to find out how to make pandas automatically parse dates. Step4: 4. "Date first observed" is a pretty weird column, but it seems like it has a date hiding inside. Using a function with .apply, transform the string (e.g. "20140324") into a Python date. Make the 0's show up as NaN. Step5: 5. "Violation time" is... not a time. Make it a time Step6: 6. There sure are a lot of colors of cars, too bad so many of them are the same. Make "BLK" and "BLACK", "WT" and "WHITE", and any other combinations that you notice. Step7: 7. Join the data with the Parking Violations Code dataset from the NYC Open Data site. Step8: 8. How much money did NYC make off of parking violations? Step9: 9. What's the most lucrative kind of parking violation? The most frequent? Step10: 10. New Jersey has bad drivers, but does it have bad parkers, too? How much money does NYC make off of all non-New York vehicles? Step11: 11. Make a chart of the top few. Step12: 12. What time of day do people usually get their tickets? You can break the day up into several blocks - for example 12am-6am, 6am-12pm, 12pm-6pm, 6pm-12am. Step13: 13. What's the average ticket cost in NYC? Step14: 14. Make a graph of the number of tickets per day. Step15: 15. Make a graph of the amount of revenue collected per day. Step16: 16. Manually construct a dataframe out of https Step17: 17. What's the parking-ticket-$-per-licensed-driver in each borough of NYC? Do this with pandas and the dataframe you just made, not with your head!
3,260
<ASSISTANT_TASK:> Python Code: %matplotlib inline %config InlineBackend.figure_format = 'retina' import numpy as np import pandas as pd import matplotlib.pyplot as plt data_path = 'Bike-Sharing-Dataset/hour.csv' rides = pd.read_csv(data_path) rides.head() rides[:24*10].plot(x='dteday', y='cnt', figsize=(10,4)) day_rides = pd.read_csv('Bike-Sharing-Dataset/day.csv') day_rides = day_rides.set_index(['dteday']) day_rides.loc['2011-08-01':'2011-12-31'].plot(y='cnt', figsize=(10,4)) day_rides.loc['2012-08-01':'2012-12-31'].plot(y='cnt', figsize=(10,4)) dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday'] for each in dummy_fields: dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False) rides = pd.concat([rides, dummies], axis=1) fields_to_drop = ['instant', 'dteday', 'season', 'weathersit', 'weekday', 'atemp', 'mnth', 'workingday', 'hr'] data = rides.drop(fields_to_drop, axis=1) data.head() quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed'] # Store scalings in a dictionary so we can convert back later scaled_features = {} for each in quant_features: mean, std = data[each].mean(), data[each].std() scaled_features[each] = [mean, std] data.loc[:, each] = (data[each] - mean)/std # Save data for approximately the last 21 days test_data = data[-21*24:] # Now remove the test data from the data set data = data[:-21*24] # Separate the data into features and targets target_fields = ['cnt', 'casual', 'registered'] features, targets = data.drop(target_fields, axis=1), data[target_fields] test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields] # Hold out the last 60 days or so of the remaining data as a validation set train_features, train_targets = features[:-60*24], targets[:-60*24] val_features, val_targets = features[-60*24:], targets[-60*24:] class NeuralNetwork(object): def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate): # Set number of nodes in input, hidden and output layers. self.input_nodes = input_nodes self.hidden_nodes = hidden_nodes self.output_nodes = output_nodes # Initialize weights self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5, (self.input_nodes, self.hidden_nodes)) self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5, (self.hidden_nodes, self.output_nodes)) self.lr = learning_rate #### TODO: Set self.activation_function to your implemented sigmoid function #### # # Note: in Python, you can define a function with a lambda expression, # as shown below. self.activation_function = lambda x : 1 / (1 + np.exp(-x)) # Replace 0 with your sigmoid calculation. ### If the lambda code above is not something you're familiar with, # You can uncomment out the following three lines and put your # implementation there instead. # # def sigmoid(x): # return 1 / (1 + np.exp(-x)) # Replace 0 with your sigmoid calculation here # self.activation_function = sigmoid def train(self, features, targets): ''' Train the network on batch of features and targets. 
Arguments --------- features: 2D array, each row is one data record, each column is a feature targets: 1D array of target values ''' #print('features',features) #print('targets',targets) # nCount = 0 n_records = features.shape[0] delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape) delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape) for X, y in zip(features, targets): # nCount += 1 # if(nCount > 1): # break #print('#######################################') #### Implement the forward pass here #### ### Forward pass ### # TODO: Hidden layer - Replace these values with your calculations. #print('X.shape',X.shape) #print('X',X) #print('X[None,:].shape', X[None,:].shape) #print('X[None,:]', X[None,:]) #print('y.shape',y.shape) #print('y',y) #print('weights_input_to_hidden.shape', self.weights_input_to_hidden.shape) #print('weights_hidden_to_output.shape', self.weights_hidden_to_output.shape) hidden_inputs = X[None,:] @ self.weights_input_to_hidden # signals into hidden layer #print('hidden_inputs.shape', hidden_inputs.shape) hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer #print('hidden_outputs.shape',hidden_outputs.shape) # TODO: Output layer - Replace these values with your calculations. final_inputs = hidden_outputs @ self.weights_hidden_to_output # signals into final output layer final_outputs = final_inputs # signals from final output layer #print('final_inputs.shape', final_inputs.shape) #print('final_outputs.shape', final_outputs.shape) #### Implement the backward pass here #### ### Backward pass ### # TODO: Output error - Replace this value with your calculations. error = y - final_outputs # Output layer error is the difference between desired target and actual output. #print('error.shape', error.shape) #print('y',y) #print('final_outputs',final_outputs) #print('error',error) output_error_term = error * 1 #print('output_error_term',output_error_term) # TODO: Calculate the hidden layer's contribution to the error hidden_error = output_error_term @ self.weights_hidden_to_output.T # hidden_error = output_error_term * self.weights_hidden_to_output.T #print('hidden_error.shape',hidden_error.shape) # print('hidden_error1',hidden_error1) # print('hidden_error',hidden_error) # TODO: Backpropagated error terms - Replace these values with your calculations. #output_error_term = None hidden_error_term = hidden_error * hidden_outputs * (1 - hidden_outputs) #print('hidden_error_term.shape', hidden_error_term.shape) # Weight step (input to hidden) tmp = X[:,None] @ hidden_error_term #print('X[:,None]',X[:,None]) #print('hidden_error_term',hidden_error_term) #print('tmp.shape',tmp.shape) #print('tmp',tmp) delta_weights_i_h += tmp #print('delta_weights_i_h.shape',delta_weights_i_h.shape) #print('-------------------') # Weight step (hidden to output) #print('hidden_outputs', hidden_outputs) #print('output_error_term', output_error_term) tmp = hidden_outputs.T * output_error_term #print('tmp.shape', tmp.shape) #print('tmp', tmp) delta_weights_h_o += tmp #print('delta_weights_h_o.shape', delta_weights_h_o.shape) # TODO: Update the weights - Replace these values with your calculations. 
#print('self.lr, n_records',self.lr, n_records) self.weights_hidden_to_output += self.lr * delta_weights_h_o / n_records # update hidden-to-output weights with gradient descent step #print('self.weights_hidden_to_output', self.weights_hidden_to_output) self.weights_input_to_hidden += self.lr * delta_weights_i_h / n_records # update input-to-hidden weights with gradient descent step #print('self.weights_input_to_hidden', self.weights_input_to_hidden) def run(self, features): ''' Run a forward pass through the network with input features Arguments --------- features: 1D array of feature values ''' #### Implement the forward pass here #### # TODO: Hidden layer - replace these values with the appropriate calculations. # #print('features.shape', features.shape) # #print(features) hidden_inputs = features @ self.weights_input_to_hidden # signals into hidden layer # #print('hedden_inputs', hidden_inputs) hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer # #print('hidden_outputs', hidden_outputs) # TODO: Output layer - Replace these values with the appropriate calculations. final_inputs = hidden_outputs @ self.weights_hidden_to_output # signals into final output layer # #print('final_inputs', final_inputs) final_outputs = final_inputs # signals from final output layer # #print('final_outputs', final_outputs) return final_outputs def MSE(y, Y): return np.mean((y-Y)**2) import unittest inputs = np.array([[0.5, -0.2, 0.1]]) targets = np.array([[0.4]]) test_w_i_h = np.array([[0.1, -0.2], [0.4, 0.5], [-0.3, 0.2]]) test_w_h_o = np.array([[0.3], [-0.1]]) class TestMethods(unittest.TestCase): ########## # Unit tests for data loading ########## def test_data_path(self): # Test that file path to dataset has been unaltered self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv') def test_data_loaded(self): # Test that data frame loaded self.assertTrue(isinstance(rides, pd.DataFrame)) ########## # Unit tests for network functionality ########## def test_activation(self): network = NeuralNetwork(3, 2, 1, 0.5) # Test that the activation function is a sigmoid self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5)))) def test_train(self): # Test that weights are updated correctly on training network = NeuralNetwork(3, 2, 1, 0.5) network.weights_input_to_hidden = test_w_i_h.copy() network.weights_hidden_to_output = test_w_h_o.copy() network.train(inputs, targets) self.assertTrue(np.allclose(network.weights_hidden_to_output, np.array([[ 0.37275328], [-0.03172939]]))) self.assertTrue(np.allclose(network.weights_input_to_hidden, np.array([[ 0.10562014, -0.20185996], [0.39775194, 0.50074398], [-0.29887597, 0.19962801]]))) def test_run(self): # Test correctness of run method network = NeuralNetwork(3, 2, 1, 0.5) network.weights_input_to_hidden = test_w_i_h.copy() network.weights_hidden_to_output = test_w_h_o.copy() self.assertTrue(np.allclose(network.run(inputs), 0.09998924)) suite = unittest.TestLoader().loadTestsFromModule(TestMethods()) unittest.TextTestRunner().run(suite) import sys ### Set the hyperparameters here ### iterations = 4000 learning_rate = 0.5 hidden_nodes = 20 output_nodes = 1 N_i = train_features.shape[1] network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate) losses = {'train':[], 'validation':[]} for ii in range(iterations): # Go through a random batch of 128 records from the training data set batch = np.random.choice(train_features.index, size=128) X, y = train_features.ix[batch].values, 
train_targets.ix[batch]['cnt'] network.train(X, y) # Printing out the training progress train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values) val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values) sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \ + "% ... Training loss: " + str(train_loss)[:5] \ + " ... Validation loss: " + str(val_loss)[:5]) sys.stdout.flush() losses['train'].append(train_loss) losses['validation'].append(val_loss) axes = plt.gca() axes.plot(losses['train'], label='Training loss') axes.plot(losses['validation'], label='Validation loss') axes.legend() _ = axes.set_ylim([0,3]) fig, ax = plt.subplots(figsize=(8,4)) mean, std = scaled_features['cnt'] predictions = network.run(test_features).T*std + mean ax.plot(predictions[0], label='Prediction') ax.plot((test_targets['cnt']*std + mean).values, label='Data') ax.set_xlim(right=len(predictions)) ax.legend() dates = pd.to_datetime(rides.ix[test_data.index]['dteday']) dates = dates.apply(lambda d: d.strftime('%b %d')) ax.set_xticks(np.arange(len(dates))[12::24]) _ = ax.set_xticklabels(dates[12::24], rotation=45) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: 加载和准备数据 Step2: 数据简介 Step3: 查看每天的骑行数据,对比2011年和2012年 Step4: 虚拟变量(哑变量) Step5: 调整目标变量 Step6: 我们将数据拆分为两个数据集,一个用作训练,一个在网络训练完后用来验证网络。因为数据是有时间序列特性的,所以我们用历史数据进行训练,然后尝试预测未来数据(验证数据集)。 Step7: 开始构建网络 Step8: 单元测试 Step9: 训练网络 Step10: 检查预测结果
3,261
<ASSISTANT_TASK:> Python Code: # Define your group, for this exercise mygroup = "A" # <- change the letter in quotes # Import Python libraries import os # This lets us interact with the operating system import pandas as pd # This allows us to use dataframes import seaborn as sns # This gives us pretty graphics options # Load the data datafile = os.path.join('data', 'correlations', mygroup, 'expn.tab') data = pd.read_csv(datafile, sep="\t") # Show the first few lines of the data data.head() # Show summary statistics of the dataframe # Show the Pearson correlation coefficients between columns in the dataset # The line below allows plots to be rendered in the notebook # This is very useful for literate programming, and for producing reports %matplotlib inline # Show a scatter plot of transcript levels for gene1 and gene2 <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: After executing the code cell, you should see a table of values. The table has columns named gene1 and gene2, and rows that are indexed starting at zero (it is typical in many programming languages to start counting at zero). Step2: <a id="correlations"></a> Step3: You can now make a quantitative estimate of whether these two genes are likely to be coregulated.
3,262
<ASSISTANT_TASK:> Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'ncc', 'noresm2-mm', 'atmos') # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.overview.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.overview.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.overview.model_family') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "AGCM" # "ARCM" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "primitive equations" # "non-hydrostatic" # "anelastic" # "Boussinesq" # "hydrostatic" # "quasi-hydrostatic" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.high_top') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.orography.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "present day" # "modified" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.key_properties.orography.changes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "related to ice sheets" # "related to tectonics" # "modified mean" # "modified variance if taken into account in model (cf gravity waves)" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "spectral" # "fixed grid" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "finite elements" # "finite volumes" # "finite difference" # "centered finite difference" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "second" # "third" # "fourth" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "filter" # "pole rotation" # "artificial island" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Gaussian" # "Latitude-Longitude" # "Cubed-Sphere" # "Icosahedral" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "isobaric" # "sigma" # "hybrid sigma-pressure" # "hybrid pressure" # "vertically lagrangian" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Adams-Bashforth" # "explicit" # "implicit" # "semi-implicit" # "leap frog" # "multi-step" # "Runge Kutta fifth order" # "Runge Kutta second order" # "Runge Kutta third order" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "surface pressure" # "wind components" # "divergence/curl" # "temperature" # "potential temperature" # "total water" # "water vapour" # "water liquid" # "water ice" # "total water moments" # "clouds" # "radiation" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "sponge layer" # "radiation boundary condition" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "sponge layer" # "radiation boundary condition" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "iterated Laplacian" # "bi-harmonic" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Heun" # "Roe and VanLeer" # "Roe and Superbee" # "Prather" # "UTOPIA" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Eulerian" # "modified Euler" # "Lagrangian" # "semi-Lagrangian" # "cubic semi-Lagrangian" # "quintic semi-Lagrangian" # "mass-conserving" # "finite volume" # "flux-corrected" # "linear" # "quadratic" # "quartic" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "dry mass" # "tracer mass" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "conservation fixer" # "Priestley algorithm" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "VanLeer" # "Janjic" # "SUPG (Streamline Upwind Petrov-Galerkin)" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "2nd order" # "4th order" # "cell-centred" # "staggered grid" # "semi-staggered grid" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Arakawa B-grid" # "Arakawa C-grid" # "Arakawa D-grid" # "Arakawa E-grid" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Angular momentum" # "Horizontal momentum" # "Enstrophy" # "Mass" # "Total energy" # "Vorticity" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "conservation fixer" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.aerosols') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "sulphate" # "nitrate" # "sea salt" # "dust" # "ice" # "organic" # "BC (black carbon / soot)" # "SOA (secondary organic aerosols)" # "POM (particulate organic matter)" # "polar stratospheric ice" # "NAT (nitric acid trihydrate)" # "NAD (nitric acid dihydrate)" # "STS (supercooled ternary solution aerosol particle)" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "wide-band model" # "correlated-k" # "exponential sum fitting" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "two-stream" # "layer interaction" # "bulk" # "adaptive" # "multi-stream" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CO2" # "CH4" # "N2O" # "CFC-11 eq" # "CFC-12 eq" # "HFC-134a eq" # "Explicit ODSs" # "Explicit other fluorinated gases" # "O3" # "H2O" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CFC-12" # "CFC-11" # "CFC-113" # "CFC-114" # "CFC-115" # "HCFC-22" # "HCFC-141b" # "HCFC-142b" # "Halon-1211" # "Halon-1301" # "Halon-2402" # "methyl chloroform" # "carbon tetrachloride" # "methyl chloride" # "methylene chloride" # "chloroform" # "methyl bromide" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "HFC-134a" # "HFC-23" # "HFC-32" # "HFC-125" # "HFC-143a" # "HFC-152a" # "HFC-227ea" # "HFC-236fa" # "HFC-245fa" # "HFC-365mfc" # "HFC-43-10mee" # "CF4" # "C2F6" # "C3F8" # "C4F10" # "C5F12" # "C6F14" # "C7F16" # "C8F18" # "c-C4F8" # "NF3" # "SF6" # "SO2F2" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "bi-modal size distribution" # "ensemble of ice crystals" # "mean projected area" # "ice water path" # "crystal asymmetry" # "crystal aspect ratio" # "effective crystal radius" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "T-matrix" # "geometric optics" # "finite difference time domain (FDTD)" # "Mie theory" # "anomalous diffraction approximation" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "cloud droplet number concentration" # "effective cloud droplet radii" # "droplet size distribution" # "liquid water path" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "geometric optics" # "Mie theory" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Monte Carlo Independent Column Approximation" # "Triplecloud" # "analytic" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "number concentration" # "effective radii" # "size distribution" # "asymmetry" # "aspect ratio" # "mixing state" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "T-matrix" # "geometric optics" # "finite difference time domain (FDTD)" # "Mie theory" # "anomalous diffraction approximation" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "wide-band model" # "correlated-k" # "exponential sum fitting" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "two-stream" # "layer interaction" # "bulk" # "adaptive" # "multi-stream" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CO2" # "CH4" # "N2O" # "CFC-11 eq" # "CFC-12 eq" # "HFC-134a eq" # "Explicit ODSs" # "Explicit other fluorinated gases" # "O3" # "H2O" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CFC-12" # "CFC-11" # "CFC-113" # "CFC-114" # "CFC-115" # "HCFC-22" # "HCFC-141b" # "HCFC-142b" # "Halon-1211" # "Halon-1301" # "Halon-2402" # "methyl chloroform" # "carbon tetrachloride" # "methyl chloride" # "methylene chloride" # "chloroform" # "methyl bromide" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "HFC-134a" # "HFC-23" # "HFC-32" # "HFC-125" # "HFC-143a" # "HFC-152a" # "HFC-227ea" # "HFC-236fa" # "HFC-245fa" # "HFC-365mfc" # "HFC-43-10mee" # "CF4" # "C2F6" # "C3F8" # "C4F10" # "C5F12" # "C6F14" # "C7F16" # "C8F18" # "c-C4F8" # "NF3" # "SF6" # "SO2F2" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "bi-modal size distribution" # "ensemble of ice crystals" # "mean projected area" # "ice water path" # "crystal asymmetry" # "crystal aspect ratio" # "effective crystal radius" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "T-matrix" # "geometric optics" # "finite difference time domain (FDTD)" # "Mie theory" # "anomalous diffraction approximation" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "cloud droplet number concentration" # "effective cloud droplet radii" # "droplet size distribution" # "liquid water path" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "geometric optics" # "Mie theory" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Monte Carlo Independent Column Approximation" # "Triplecloud" # "analytic" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "number concentration" # "effective radii" # "size distribution" # "asymmetry" # "aspect ratio" # "mixing state" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "T-matrix" # "geometric optics" # "finite difference time domain (FDTD)" # "Mie theory" # "anomalous diffraction approximation" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Mellor-Yamada" # "Holtslag-Boville" # "EDMF" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "TKE prognostic" # "TKE diagnostic" # "TKE coupled with water" # "vertical profile of Kz" # "non-local diffusion" # "Monin-Obukhov similarity" # "Coastal Buddy Scheme" # "Coupled with convection" # "Coupled with gravity waves" # "Depth capped at cloud base" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "mass-flux" # "adjustment" # "plume ensemble" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CAPE" # "bulk" # "ensemble" # "CAPE/WFN based" # "TKE/CIN based" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "vertical momentum transport" # "convective momentum transport" # "entrainment" # "detrainment" # "penetrative convection" # "updrafts" # "downdrafts" # "radiative effect of anvils" # "re-evaporation of convective precipitation" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "tuning parameter based" # "single moment" # "two moment" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "mass-flux" # "cumulus-capped boundary layer" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "same as deep (unified)" # "included in boundary layer turbulence" # "separate diagnosis" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "convective momentum transport" # "entrainment" # "detrainment" # "penetrative convection" # "re-evaporation of convective precipitation" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "tuning parameter based" # "single moment" # "two moment" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "liquid rain" # "snow" # "hail" # "graupel" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "mixed phase" # "cloud droplets" # "cloud ice" # "ice nucleation" # "water vapour deposition" # "effect of raindrops" # "effect of snow" # "effect of graupel" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "atmosphere_radiation" # "atmosphere_microphysics_precipitation" # "atmosphere_turbulence_convection" # "atmosphere_gravity_waves" # "atmosphere_solar" # "atmosphere_volcano" # "atmosphere_cloud_simulator" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "entrainment" # "detrainment" # "bulk cloud" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "cloud amount" # "liquid" # "ice" # "rain" # "snow" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "random" # "maximum" # "maximum-random" # "exponential" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "coupled with deep" # "coupled with shallow" # "not coupled with convection" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "coupled with deep" # "coupled with shallow" # "not coupled with convection" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "no adjustment" # "IR brightness" # "visible optical depth" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "lowest altitude level" # "highest altitude level" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Inline" # "Offline" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "surface" # "space borne" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "ice spheres" # "ice non-spherical" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "max" # "random" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Rayleigh friction" # "Diffusive sponge layer" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.gravity_waves.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "continuous spectrum" # "discrete spectrum" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "effect on drag" # "effect on lifting" # "enhanced topography" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "linear mountain waves" # "hydraulic jump" # "envelope orography" # "low level flow blocking" # "statistical sub-grid scale variance" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "non-linear calculation" # "more than two cardinal directions" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "linear theory" # "non-linear theory" # "includes boundary layer ducting" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "total wave" # "single wave" # "spectral" # "linear" # "wave saturation vs Richardson number" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "convection" # "precipitation" # "background spectrum" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "spatially dependent" # "temporally dependent" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "linear theory" # "non-linear theory" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "total wave" # "single wave" # "spectral" # "linear" # "wave saturation vs Richardson number" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.solar.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "SW radiation" # "precipitating energetic particles" # "cosmic rays" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.solar_constant.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "fixed" # "transient" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.orbital_parameters.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "fixed" # "transient" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Berger 1978" # "Laskar 2004" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.volcanos.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "high frequency solar constant anomaly" # "stratospheric aerosols optical thickness" # "Other: [Please specify]" # TODO - please enter value(s) <END_TASK>
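For context, the cell above only registers each CMIP6 property and leaves its value as a TODO. A minimal sketch of how a single property pair is completed (set the ID, then supply a value drawn from the listed valid choices) could look like the following; every author name and value below is a placeholder for illustration only, not actual NorESM2-MM metadata.

# Illustrative sketch only: the author and property values below are placeholders,
# not real NorESM2-MM documentation entries.
DOC.set_author("Jane Doe", "jane.doe@example.org")   # hypothetical author
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
DOC.set_value("AGCM")                                 # one of the listed valid choices
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
DOC.set_value(32)                                     # placeholder integer, not the true level count
DOC.set_publication_status(0)                         # keep unpublished until the TODOs are filled in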
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Document Authors Step2: Document Contributors Step3: Document Publication Step4: Document Table of Contents Step5: 1.2. Model Name Step6: 1.3. Model Family Step7: 1.4. Basic Approximations Step8: 2. Key Properties --> Resolution Step9: 2.2. Canonical Horizontal Resolution Step10: 2.3. Range Horizontal Resolution Step11: 2.4. Number Of Vertical Levels Step12: 2.5. High Top Step13: 3. Key Properties --> Timestepping Step14: 3.2. Timestep Shortwave Radiative Transfer Step15: 3.3. Timestep Longwave Radiative Transfer Step16: 4. Key Properties --> Orography Step17: 4.2. Changes Step18: 5. Grid --> Discretisation Step19: 6. Grid --> Discretisation --> Horizontal Step20: 6.2. Scheme Method Step21: 6.3. Scheme Order Step22: 6.4. Horizontal Pole Step23: 6.5. Grid Type Step24: 7. Grid --> Discretisation --> Vertical Step25: 8. Dynamical Core Step26: 8.2. Name Step27: 8.3. Timestepping Type Step28: 8.4. Prognostic Variables Step29: 9. Dynamical Core --> Top Boundary Step30: 9.2. Top Heat Step31: 9.3. Top Wind Step32: 10. Dynamical Core --> Lateral Boundary Step33: 11. Dynamical Core --> Diffusion Horizontal Step34: 11.2. Scheme Method Step35: 12. Dynamical Core --> Advection Tracers Step36: 12.2. Scheme Characteristics Step37: 12.3. Conserved Quantities Step38: 12.4. Conservation Method Step39: 13. Dynamical Core --> Advection Momentum Step40: 13.2. Scheme Characteristics Step41: 13.3. Scheme Staggering Type Step42: 13.4. Conserved Quantities Step43: 13.5. Conservation Method Step44: 14. Radiation Step45: 15. Radiation --> Shortwave Radiation Step46: 15.2. Name Step47: 15.3. Spectral Integration Step48: 15.4. Transport Calculation Step49: 15.5. Spectral Intervals Step50: 16. Radiation --> Shortwave GHG Step51: 16.2. ODS Step52: 16.3. Other Flourinated Gases Step53: 17. Radiation --> Shortwave Cloud Ice Step54: 17.2. Physical Representation Step55: 17.3. Optical Methods Step56: 18. Radiation --> Shortwave Cloud Liquid Step57: 18.2. Physical Representation Step58: 18.3. Optical Methods Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity Step60: 20. Radiation --> Shortwave Aerosols Step61: 20.2. Physical Representation Step62: 20.3. Optical Methods Step63: 21. Radiation --> Shortwave Gases Step64: 22. Radiation --> Longwave Radiation Step65: 22.2. Name Step66: 22.3. Spectral Integration Step67: 22.4. Transport Calculation Step68: 22.5. Spectral Intervals Step69: 23. Radiation --> Longwave GHG Step70: 23.2. ODS Step71: 23.3. Other Flourinated Gases Step72: 24. Radiation --> Longwave Cloud Ice Step73: 24.2. Physical Reprenstation Step74: 24.3. Optical Methods Step75: 25. Radiation --> Longwave Cloud Liquid Step76: 25.2. Physical Representation Step77: 25.3. Optical Methods Step78: 26. Radiation --> Longwave Cloud Inhomogeneity Step79: 27. Radiation --> Longwave Aerosols Step80: 27.2. Physical Representation Step81: 27.3. Optical Methods Step82: 28. Radiation --> Longwave Gases Step83: 29. Turbulence Convection Step84: 30. Turbulence Convection --> Boundary Layer Turbulence Step85: 30.2. Scheme Type Step86: 30.3. Closure Order Step87: 30.4. Counter Gradient Step88: 31. Turbulence Convection --> Deep Convection Step89: 31.2. Scheme Type Step90: 31.3. Scheme Method Step91: 31.4. Processes Step92: 31.5. Microphysics Step93: 32.
Turbulence Convection --> Shallow Convection Step94: 32.2. Scheme Type Step95: 32.3. Scheme Method Step96: 32.4. Processes Step97: 32.5. Microphysics Step98: 33. Microphysics Precipitation Step99: 34. Microphysics Precipitation --> Large Scale Precipitation Step100: 34.2. Hydrometeors Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics Step102: 35.2. Processes Step103: 36. Cloud Scheme Step104: 36.2. Name Step105: 36.3. Atmos Coupling Step106: 36.4. Uses Separate Treatment Step107: 36.5. Processes Step108: 36.6. Prognostic Scheme Step109: 36.7. Diagnostic Scheme Step110: 36.8. Prognostic Variables Step111: 37. Cloud Scheme --> Optical Cloud Properties Step112: 37.2. Cloud Inhomogeneity Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution Step114: 38.2. Function Name Step115: 38.3. Function Order Step116: 38.4. Convection Coupling Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution Step118: 39.2. Function Name Step119: 39.3. Function Order Step120: 39.4. Convection Coupling Step121: 40. Observation Simulation Step122: 41. Observation Simulation --> Isscp Attributes Step123: 41.2. Top Height Direction Step124: 42. Observation Simulation --> Cosp Attributes Step125: 42.2. Number Of Grid Points Step126: 42.3. Number Of Sub Columns Step127: 42.4. Number Of Levels Step128: 43. Observation Simulation --> Radar Inputs Step129: 43.2. Type Step130: 43.3. Gas Absorption Step131: 43.4. Effective Radius Step132: 44. Observation Simulation --> Lidar Inputs Step133: 44.2. Overlap Step134: 45. Gravity Waves Step135: 45.2. Sponge Layer Step136: 45.3. Background Step137: 45.4. Subgrid Scale Orography Step138: 46. Gravity Waves --> Orographic Gravity Waves Step139: 46.2. Source Mechanisms Step140: 46.3. Calculation Method Step141: 46.4. Propagation Scheme Step142: 46.5. Dissipation Scheme Step143: 47. Gravity Waves --> Non Orographic Gravity Waves Step144: 47.2. Source Mechanisms Step145: 47.3. Calculation Method Step146: 47.4. Propagation Scheme Step147: 47.5. Dissipation Scheme Step148: 48. Solar Step149: 49. Solar --> Solar Pathways Step150: 50. Solar --> Solar Constant Step151: 50.2. Fixed Value Step152: 50.3. Transient Characteristics Step153: 51. Solar --> Orbital Parameters Step154: 51.2. Fixed Reference Date Step155: 51.3. Transient Method Step156: 51.4. Computation Method Step157: 52. Solar --> Insolation Ozone Step158: 53. Volcanos Step159: 54. Volcanos --> Volcanoes Treatment
3,263
<ASSISTANT_TASK:> Python Code: # Authors: Tal Linzen <linzen@nyu.edu> # Denis A. Engemann <denis.engemann@gmail.com> # # License: BSD (3-clause) import numpy as np import mne from mne.datasets import sample from mne.stats.regression import linear_regression print(__doc__) data_path = sample.data_path() raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif' event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif' tmin, tmax = -0.2, 0.5 event_id = dict(aud_l=1, aud_r=2) # Setup for reading the raw data raw = mne.io.read_raw_fif(raw_fname) events = mne.read_events(event_fname) picks = mne.pick_types(raw.info, meg='mag', eeg=False, stim=False, eog=False, exclude='bads') # Reject some epochs based on amplitude reject = dict(mag=5e-12) epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True, picks=picks, baseline=(None, 0), preload=True, reject=reject) names = ['intercept', 'trial-count'] intercept = np.ones((len(epochs),), dtype=float) design_matrix = np.column_stack([intercept, # intercept np.linspace(0, 1, len(intercept))]) # also accepts source estimates lm = linear_regression(epochs, design_matrix, names) def plot_topomap(x, unit): x.plot_topomap(ch_type='mag', scale=1, size=1.5, vmax=np.max, unit=unit, times=np.linspace(0.1, 0.2, 5)) trial_count = lm['trial-count'] plot_topomap(trial_count.beta, unit='z (beta)') plot_topomap(trial_count.t_val, unit='t') plot_topomap(trial_count.mlog10_p_val, unit='-log10 p') plot_topomap(trial_count.stderr, unit='z (error)') <END_TASK>
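Under the hood, the `linear_regression` call fits an ordinary least-squares model of the epoch data on the design matrix independently for every sensor and time point. The following standalone NumPy sketch only illustrates that computation and its shapes; it is an assumption about the idea, not the library's actual implementation, and the array sizes are made up.

# OLS sketch: solve X @ beta = y for every (channel, time) pair at once.
import numpy as np

n_epochs, n_channels, n_times = 50, 102, 106              # hypothetical shapes
data = np.random.randn(n_epochs, n_channels, n_times)     # stand-in for epochs.get_data()
X = np.column_stack([np.ones(n_epochs),                   # intercept regressor
                     np.linspace(0, 1, n_epochs)])        # trial-count regressor

Y = data.reshape(n_epochs, -1)                            # (n_epochs, n_channels * n_times)
beta, _, _, _ = np.linalg.lstsq(X, Y, rcond=None)         # (2, n_channels * n_times)
beta_maps = beta.reshape(2, n_channels, n_times)          # one coefficient map per regressor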
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Set parameters and read data Step2: Run regression
3,264
<ASSISTANT_TASK:> Python Code: %load_ext autoreload %autoreload 2 %matplotlib inline %config InlineBackend.figure_format = 'retina' import numpy as np import matplotlib.pyplot as pl from astropy.wcs import WCS from scipy import constants import cygrid np.set_printoptions(precision=1) def gaincurve(elev, a0, a1, a2): ''' Radio telescope sensitivity is usually a function of elevation (parametrized as parabula). ''' return a0 + a1 * elev + a2 * elev * elev def dv_to_df(restfreq, velo_kms): ''' Convert velocity resolution to frequency resolution. ''' return restfreq * velo_kms * 1.e3 / constants.c def setup_header(mapcenter, mapsize, beamsize_fwhm): ''' Produce a FITS header that contains the target field. ''' # define target grid (via fits header according to WCS convention) # a good pixel size is a third of the FWHM of the PSF (avoids aliasing) pixsize = beamsize_fwhm / 3. dnaxis1 = int(mapsize[0] / pixsize) dnaxis2 = int(mapsize[1] / pixsize) header = { 'NAXIS': 2, 'NAXIS1': dnaxis1, 'NAXIS2': dnaxis2, 'CTYPE1': 'RA---SIN', 'CTYPE2': 'DEC--SIN', 'CUNIT1': 'deg', 'CUNIT2': 'deg', 'CDELT1': -pixsize, 'CDELT2': pixsize, 'CRPIX1': (dnaxis1 + 1) / 2., 'CRPIX2': (dnaxis2 + 1) / 2., 'CRVAL1': mapcenter[0], 'CRVAL2': mapcenter[1], } return header dual_pol = True restfreq = 23.7e9 # Hz opacity = 0.07 # assume reasonably good weather Tsys_zenith = 60. T_amb = 290. # K T_atm = T_amb - 17. Gamma = 1.12 # K/Jy eta_MB = 0.79 # main beam efficiency Gamma_MB = Gamma / eta_MB Ta_to_Tb = 1. / eta_MB nchan = 2 ** 16 # 64 k bandwidth = 5e8 # 500 MHz spec_reso = bandwidth / nchan * 1.16 # true spectral resolution 16% worse than channel width print('spec_reso = {:.1f} kHz'.format(spec_reso * 1.e-3)) desired_vel_resolution = 1. # km/s desired_freq_resolution = dv_to_df(restfreq, desired_vel_resolution) print('desired_freq_resolution = {:.1f} kHz'.format(desired_freq_resolution * 1.e-3)) smooth_nbin = int(desired_freq_resolution / spec_reso + 0.5) print('smooth_nbin', smooth_nbin) exposure = 60. # seconds elevations = np.array([10, 20, 30, 40, 50, 60, 90]) AM = 1. / np.sin(np.radians(elevations)) gain_correction = gaincurve(elevations, 0.954, 3.19E-3, -5.42E-5) Tsys_corr = Tsys_zenith + T_atm * (np.exp(opacity * AM) - np.exp(opacity * 1)) print('Tsys_corr', Tsys_corr) # Tsys_corr = Tsys_zenith + opacity * T_atm * (AM - 1) # approximate formula, for small opacity * AM # print(Tsys_corr) Ta_rms = Tsys_corr / np.sqrt(spec_reso * smooth_nbin * exposure) if dual_pol: Ta_rms /= np.sqrt(2.) Ta_rms *= np.sqrt(2.) Tb_rms = Ta_to_Tb * Ta_rms / gain_correction S_rms = Tb_rms / Gamma_MB atm_atten = np.exp(-opacity * AM) print('{0:>8s} {1:>8s} {2:>10s} {3:>10s} {4:>10s} {5:>10s} {6:>10s} {7:>10s} {8:>10s}'.format( 'Elev', 'Airmass', 'Tsys', 'Ta RMS', 'Tb RMS', 'S RMS', 'AtmAtten', 'Tb_eff RMS', 'S_eff RMS' )) print('{0:>8s} {1:>8s} {2:>10s} {3:>10s} {4:>10s} {5:>10s} {6:>10s} {7:>10s} {8:>10s}'.format( '[d]', '', '[K]', '[K]', '[K]', '[Jy]', '', '[K]', '[Jy]' )) for idx in range(len(elevations)): print( '{0:>8.2f} {1:>8.2f} {2:>10.4f} {3:>10.4f} {4:>10.4f} ' '{5:>10.4f} {6:>10.4f} {7:>10.4f} {8:>10.4f}'.format( elevations[idx], AM[idx], Tsys_corr[idx], Ta_rms[idx], Tb_rms[idx], S_rms[idx], atm_atten[idx], Tb_rms[idx] / atm_atten[idx], S_rms[idx] / atm_atten[idx], )) print('Ta RMS = Antenna temp. noise') print('Tb RMS = Brightness temp. 
noise') print('S RMS = Flux density noise') Tb_eff_rms_desired = 0.01 # 10 mK Ta_eff_rms_desired = Tb_eff_rms_desired * gain_correction / Ta_to_Tb exposure = (Tsys_corr / Ta_eff_rms_desired) ** 2 / (spec_reso * smooth_nbin) if dual_pol: exposure /= 2. exposure *= 2. print('Exposure time needed to reach an effective MB brightness temperature noise level of {:.1f} mK'.format( Tb_eff_rms_desired * 1.e3)) print('{0:>8s} {1:>10s}'.format('Elev [d]', 'Time [min]')) for idx in range(len(elevations)): print('{0:>8.2f} {1:>10.1f}'.format( elevations[idx], (exposure / 60.)[idx] )) map_width, map_height = 100., 100. # arcsec beamsize_fwhm = 38. # arcsec; at the frequency given in our example num_scan_lines = int(3 * map_height / beamsize_fwhm + 0.5) print('num_scan_lines', num_scan_lines) sampling_interval = 1. # s (== 4 x 250 ms at Effelsberg) max_speed = beamsize_fwhm / 3 / sampling_interval print('max_speed = {:.2f} arcsec per s'.format(max_speed)) min_duration = map_width / max_speed print('min_duration = {:.1f} s'.format(min_duration)) scanline_duration = 90. # seconds samples_per_scanline = int(scanline_duration / sampling_interval + 0.5) print('samples_per_scanline', samples_per_scanline) refpos_duration = 90. # seconds refpos_interval = 2 duty_cycle = 15. # s total_on_time = (scanline_duration + duty_cycle) * num_scan_lines total_ref_time = (refpos_duration + duty_cycle) * (num_scan_lines // refpos_interval) total_time = total_on_time + total_ref_time print('Total time necessary for map: {:.1f} min'.format(total_time / 60.)) num_maps = 128 dummy_tsys = 1. on_noise = dummy_tsys / np.sqrt(spec_reso * smooth_nbin * sampling_interval) ref_noise = dummy_tsys / np.sqrt(spec_reso * smooth_nbin * refpos_duration) print('on_noise = {:.2e}, ref_noise = {:.2e}'.format(on_noise, ref_noise)) reduced_specs = np.empty((num_scan_lines, samples_per_scanline, num_maps)) xcoords, ycoords = np.empty((2, num_scan_lines, samples_per_scanline)) lons = np.linspace(-map_width / 2, map_width / 2, samples_per_scanline) lats = np.linspace(-map_height / 2, map_height / 2, num_scan_lines) for scan_line in range(num_scan_lines): if scan_line % refpos_interval == 0: ref_spec = np.random.normal(0., ref_noise, num_maps) + dummy_tsys on_specs = np.random.normal(0., on_noise, (samples_per_scanline, num_maps)) + dummy_tsys reduced_specs[scan_line] = dummy_tsys * (on_specs - ref_spec) / ref_spec xcoords[scan_line] = lons ycoords[scan_line] = np.repeat(lats[scan_line], lons.size) tmp_map = reduced_specs[..., 0] cabs = np.max(np.abs(tmp_map)) fig = pl.figure(figsize=(10, 5)) ax = fig.add_axes((0.1, 0.1, 0.8, 0.8)) _ = ax.scatter( xcoords, ycoords, c=tmp_map, cmap='bwr', edgecolor='none', vmin=-cabs, vmax=cabs, ) target_header = setup_header((0, 0), (map_width / 3600., map_height / 3600.), beamsize_fwhm / 3600.) target_wcs = WCS(target_header) # print(target_header) gridder = cygrid.WcsGrid(target_header, naxis3=num_maps) kernelsize_fwhm = beamsize_fwhm / 2 kernelsize_fwhm /= 3600. # need to convert to degree kernelsize_sigma = kernelsize_fwhm / np.sqrt(8 * np.log(2)) support_radius = 4. * kernelsize_sigma healpix_reso = kernelsize_sigma / 2. 
gridder.set_kernel( 'gauss1d', (kernelsize_sigma,), support_radius, healpix_reso, ) gridder.grid(xcoords.flatten() / 3600, ycoords.flatten() / 3600, reduced_specs.reshape((-1, num_maps))) cygrid_cube = gridder.get_datacube() tmp_map = cygrid_cube[0] cabs = np.max(np.abs(tmp_map)) fig = pl.figure(figsize=(10, 10)) ax = fig.add_subplot(111, projection=target_wcs.celestial) ax.imshow(tmp_map, cmap='bwr', interpolation='nearest', origin='lower', vmin=-cabs, vmax=cabs) lon, lat = ax.coords lon.set_axislabel('R.A. [deg]') lat.set_axislabel('Dec [deg]') pl.show() rms_cube = np.std(cygrid_cube, ddof=1) rms_plane = np.mean(np.std(cygrid_cube, ddof=1, axis=(1, 2))) print('rms_cube', rms_cube, 'rms_plane', rms_plane) elevations = np.array([10, 20, 30, 40, 50, 60, 90]) AM = 1. / np.sin(np.radians(elevations)) gain_correction = gaincurve(elevations, 0.954, 3.19E-3, -5.42E-5) Tsys_corr = Tsys_zenith + T_atm * (np.exp(opacity * AM) - np.exp(opacity * 1)) print('Tsys_corr', Tsys_corr) Ta_rms = rms_cube * Tsys_corr if dual_pol: Ta_rms /= np.sqrt(2.) Tb_rms = Ta_to_Tb * Ta_rms / gain_correction S_rms = Tb_rms / Gamma_MB atm_atten = np.exp(-opacity * AM) print('-' * 95) print('RMS per map') print('-' * 95) print('{0:>8s} {1:>8s} {2:>10s} {3:>10s} {4:>10s} {5:>10s} {6:>10s} {7:>10s} {8:>10s}'.format( 'Elev', 'Airmass', 'Tsys', 'Ta RMS', 'Tb RMS', 'S RMS', 'AtmAtten', 'Tb_eff RMS', 'S_eff RMS' )) print('{0:>8s} {1:>8s} {2:>10s} {3:>10s} {4:>10s} {5:>10s} {6:>10s} {7:>10s} {8:>10s}'.format( '[d]', '', '[K]', '[K]', '[K]', '[Jy]', '', '[K]', '[Jy]' )) for idx in range(len(elevations)): print( '{0:>8.2f} {1:>8.2f} {2:>10.4f} {3:>10.4f} {4:>10.4f} ' '{5:>10.4f} {6:>10.4f} {7:>10.4f} {8:>10.4f}'.format( elevations[idx], AM[idx], Tsys_corr[idx], Ta_rms[idx], Tb_rms[idx], S_rms[idx], atm_atten[idx], Tb_rms[idx] / atm_atten[idx], S_rms[idx] / atm_atten[idx], )) print('Ta RMS = Antenna temp. noise') print('Tb RMS = Brightness temp. noise') print('S RMS = Flux density noise') <END_TASK>
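All of the noise figures in this example follow from the radiometer equation, $\Delta T = T_\mathrm{sys}/\sqrt{\Delta\nu\,\tau}$, with a factor $1/\sqrt{2}$ for averaging two polarisations and a factor $\sqrt{2}$ for the (On - Ref) position switch. A small standalone sketch of that relation; the 70 K, 79 kHz and 60 s fed in below are round-number assumptions close to the values used above.

# Minimal radiometer-equation sketch (spectrometer assumptions as in the code above).
import numpy as np

def radiometer_rms(tsys, delta_nu, tau, dual_pol=True, position_switch=True):
    '''
    Antenna-temperature RMS [K] for system temperature tsys [K],
    effective channel width delta_nu [Hz] and integration time tau [s].
    '''
    rms = tsys / np.sqrt(delta_nu * tau)
    if dual_pol:
        rms /= np.sqrt(2.)    # averaging two polarisations
    if position_switch:
        rms *= np.sqrt(2.)    # (On - Ref) subtraction doubles the variance
    return rms

# e.g. Tsys ~ 70 K at mid elevation, ~79 kHz effective resolution, 60 s per position:
print(radiometer_rms(70., 79.e3, 60.))    # roughly 0.03 K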
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Introduction Step2: Atmospheric temperature is approximately given by ambient temperature at ground. Step3: Calculate telescope sensitivity (aka Kelvins per Jansky). Step4: The conversion between antenna temperatures and main-beam brightness temperatures is given by Step5: Define spectrometer properties Step6: If a certain velocity resolution is desired, we first have to infer the desired spectral resolution. Step7: This means, we can bin the original spectrum by a factor of Step8: which will further decrease the noise. Step9: Case 1 Step10: Note, Tsys is higher for low elevation (more air mass). Step11: Calculate raw $T_\mathrm{A}$ noise Step12: For dual polarization observations, we can divide by $\sqrt{2}$. Step13: We also have to account for the position switch (division by noisy reference spectrum). Step14: Finally, we convert to main-beam brightness temperature, $\Delta T_\mathrm{B}$, and flux-density, $\Delta S$, noise Step15: The astronomical signal is furthermore attenuated by the atmosphere. There are two ways to handle this. (a) If the signal strength is known (or can be expected to have a certain value), just apply the attenuation factor and compare to the noise levels. (b) calculate an effective noise level, by increasing Tb and flux-density noise accordingly. Here, we follow the second approach, as the true signal level is unknown. The effective RMS values are not true RMS estimates, but merely serve to indicate the impact of Earth's atmosphere on the sensitivity. Close to zenith, the effect is a few percent only, but for very low elevations, almost half of the signal is lost! Step16: Case 2 Step17: Again, divide by two for dual polarization Step18: and account for PSW (division by noisy reference spectrum) Step19: On-the-fly mapping Step20: The spacing between the scans should not be larger than half the beamwidth, a third is even better. Step21: Likewise, the scan velocity must not be too high. This is determined by the sampling rate and the beam size (one wants at least three independent samples, per beam, otherwise the resulting map will have some smearing along the scan lines). Step22: With this, we can calculate the minimal duration of one scan line Step23: However, each telescope will have a duty cycle between two scan lines, which is defined by the time the telescope needs for re-positioning. At Effelsberg, this is about 15 s, which means that the duration per scanline should at least be one or two minutes, otherwise the observing efficiency would become really poor. Step24: We now have to choose the time spent on the reference position. Furthermore, it is possible to do the reference scan only after every $n$ scanlines (to save time). Step25: Such a map would need the following total observing time. Step26: Now, we create the raw data. Note, the absolute value of the RMS in counts (for the P quantities) is not important, but the RMS ratio between On and Ref spectra is (the absolute number is unimportant, because it gets calibrated away in the (On-Ref/Ref) equation). Step27: Here, a dummy $T_\mathrm{sys}$ value is used. Later, the map (and thus the RMS) can simply be scaled to match the true $T_\mathrm{sys}$. Step28: We will now create spectral noise (in arbitrary units). This must be much smaller than $T_\mathrm{sys}$ to avoid numerical problems. 
It is also important that each of the raw-data spectra has an offset (the $T_\mathrm{sys}$ level in counts, if you will). If this were neglected, there would be a division by very small numbers in the line where the reduced spectra are calculated. The magnitude of the noise is furthermore tied to the $T_\mathrm{sys}$ level - it is based on the "radiometer" factor $\left(1/\sqrt{\tau\Delta f}\right)$ Step29: We can test this by just plotting the "raw" data for one channel. Step30: Now do the real gridding. First, prepare a WCS header Step31: We already define a WCS object for later use in our plots Step32: Set up the gridder and define kernel sizes (half the beamsize is always a good choice). Step33: The gridder needs the coordinates as flat arrays. The data to be gridded must be a 2D array (the first dimension has to match the number of coordinate samples, the second dimension is the number of channels/maps in the desired data cube). Step34: Again, as a sanity check, we plot one of the channel maps. Step35: Last but not least, we can measure the noise. There are two possibilities to do this. First, we can compute the RMS over the whole cube. Step36: Second, we can calculate the RMS per plane (and the average over all planes) Step37: The ultimate question is what the noise level in such a map is, once we account for the real $T_\mathrm{sys}$, atmospheric effects, etc. Step38: The $T_\mathrm{A}$ noise is now the measured noise from the gridded maps multiplied by $T_\mathrm{sys}$ Step39: Note that we don't need to account for the position switch again (the division by the noisy reference spectrum was already performed when the reduced-spectra array was created)!
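The noise steps above repeatedly fold in the radiometer equation. A minimal, self-contained sketch of that calculation follows; the Tsys, bandwidth and integration-time values are illustrative assumptions, not numbers taken from this notebook.

import numpy as np

def radiometer_rms(tsys_k, delta_f_hz, tau_s, dual_pol=True, position_switched=True):
    # Basic radiometer equation: sigma_T = Tsys / sqrt(tau * delta_f)
    rms = tsys_k / np.sqrt(tau_s * delta_f_hz)
    if dual_pol:
        rms /= np.sqrt(2.0)   # averaging two polarisations
    if position_switched:
        rms *= np.sqrt(2.0)   # division by a noisy reference spectrum
    return rms

# e.g. Tsys = 30 K, 50 kHz channels, 10 s integration (assumed numbers)
print(radiometer_rms(30.0, 50e3, 10.0))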
3,265
<ASSISTANT_TASK:> Python Code: # You can use any Python source file as a module by executing an import statement in some other Python source file # The import statement combines two operations; it searches for the named module, then it binds the # results of that search to a name in the local scope. import os, json, math # Import data processing libraries like Numpy and TensorFlow import numpy as np import tensorflow as tf # Python shutil module enables us to operate with file objects easily and without diving into file objects a lot. import shutil # Show the currently installed version of TensorFlow print("TensorFlow version: ",tf.version.VERSION) os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' # SET TF ERROR LOG VERBOSITY # `ls` is a Linux shell command that lists directory contents # `l` flag list all the files with permissions and details !ls -l ../data/toy_data/*.csv # Define columns of data CSV_COLUMNS = ['fare_amount', 'pickup_datetime', 'pickup_longitude', 'pickup_latitude', 'dropoff_longitude', 'dropoff_latitude', 'passenger_count', 'key'] LABEL_COLUMN = 'fare_amount' DEFAULTS = [[0.0],['na'],[0.0],[0.0],[0.0],[0.0],[0.0],['na']] # Define features you want to use def features_and_labels(row_data): for unwanted_col in ['pickup_datetime', 'key']: row_data.pop(unwanted_col) label = row_data.pop(LABEL_COLUMN) return row_data, label # features, label # load the training data def load_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL): dataset = (tf.data.experimental.make_csv_dataset(pattern, batch_size, CSV_COLUMNS, DEFAULTS) .map(features_and_labels) # features, label ) if mode == tf.estimator.ModeKeys.TRAIN: dataset = dataset.shuffle(1000).repeat() dataset = dataset.prefetch(1) # take advantage of multi-threading; 1=AUTOTUNE return dataset # Build a simple Keras DNN using its Functional API def rmse(y_true, y_pred): return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true))) def build_dnn_model(): INPUT_COLS = ['pickup_longitude', 'pickup_latitude', 'dropoff_longitude', 'dropoff_latitude', 'passenger_count'] # TODO 2 # input layer inputs = { colname : tf.keras.layers.Input(name=colname, shape=(), dtype='float32') for colname in INPUT_COLS } # tf.feature_column.numeric_column() represents real valued or numerical features. feature_columns = { colname : tf.feature_column.numeric_column(colname) for colname in INPUT_COLS } # the constructor for DenseFeatures takes a list of numeric columns # The Functional API in Keras requires that you specify: LayerConstructor()(inputs) dnn_inputs = tf.keras.layers.DenseFeatures(feature_columns.values())(inputs) # two hidden layers of [32, 8] just in like the BQML DNN h1 = tf.keras.layers.Dense(32, activation='relu', name='h1')(dnn_inputs) h2 = tf.keras.layers.Dense(8, activation='relu', name='h2')(h1) # final output is a linear activation because this is regression output = tf.keras.layers.Dense(1, activation='linear', name='fare')(h2) model = tf.keras.models.Model(inputs, output) model.compile(optimizer='adam', loss='mse', metrics=[rmse, 'mse']) return model print("Here is our DNN architecture so far:\n") model = build_dnn_model() print(model.summary()) # tf.keras.utils.plot_model() Converts a Keras model to dot format and save to a file. 
tf.keras.utils.plot_model(model, 'dnn_model.png', show_shapes=False, rankdir='LR') TRAIN_BATCH_SIZE = 32 NUM_TRAIN_EXAMPLES = 10000 * 5 # training dataset repeats, so it will wrap around NUM_EVALS = 32 # how many times to evaluate NUM_EVAL_EXAMPLES = 10000 # enough to get a reasonable sample, but not so much that it slows down trainds = load_dataset('../data/toy_data/taxi-traffic-train*', TRAIN_BATCH_SIZE, tf.estimator.ModeKeys.TRAIN) evalds = load_dataset('../data/toy_data/taxi-traffic-valid*', 1000, tf.estimator.ModeKeys.EVAL).take(NUM_EVAL_EXAMPLES//1000) steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS) # Model Fit history = model.fit(trainds, validation_data=evalds, epochs=NUM_EVALS, steps_per_epoch=steps_per_epoch) # plot # Use matplotlib for visualizing the model import matplotlib.pyplot as plt nrows = 1 ncols = 2 # The .figure() method will create a new figure, or activate an existing figure. fig = plt.figure(figsize=(10, 5)) for idx, key in enumerate(['loss', 'rmse']): ax = fig.add_subplot(nrows, ncols, idx+1) # The .plot() is a versatile function, and will take an arbitrary number of arguments. For example, to plot x versus y. plt.plot(history.history[key]) plt.plot(history.history['val_{}'.format(key)]) # The .title() method sets a title for the axes. plt.title('model {}'.format(key)) plt.ylabel(key) plt.xlabel('epoch') # The .legend() method will place a legend on the axes. plt.legend(['train', 'validation'], loc='upper left'); # TODO 5 # Use the model to do prediction with `model.predict()` model.predict({ 'pickup_longitude': tf.convert_to_tensor([-73.982683]), 'pickup_latitude': tf.convert_to_tensor([40.742104]), 'dropoff_longitude': tf.convert_to_tensor([-73.983766]), 'dropoff_latitude': tf.convert_to_tensor([40.755174]), 'passenger_count': tf.convert_to_tensor([3.0]), }, steps=1) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Locating the CSV files Step2: Lab Task 1 Step3: Next, let's define the features we want to use and our label(s), and then load in the dataset for training. Step4: Lab Task 2 Step5: Lab Task 3 Step6: Lab Task 4 Step7: Visualize the model loss curve Step8: Lab Task 5
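As a quick, hedged sanity check of the custom RMSE metric defined in the code above, the snippet below evaluates it on tiny hand-made tensors; the numbers are arbitrary.

import tensorflow as tf

def rmse(y_true, y_pred):
    return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))

y_true = tf.constant([10.0, 12.0, 8.0])
y_pred = tf.constant([9.0, 13.0, 8.5])
print(rmse(y_true, y_pred).numpy())  # sqrt(mean([1, 1, 0.25])) ~= 0.866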
3,266
<ASSISTANT_TASK:> Python Code: odd_1000 = [x**2 for x in range(0, 1000) if x % 2 == 1] # 리스트의 처음 다섯 개 항목 odd_1000[:5] odd_3x7 = [x for x in range(0, 1000) if x % 2 == 1 and x % 7 == 0] # 리스트의 처음 다섯 개 항목 odd_3x7[:5] def square_plus1(x): return x**2 + 1 odd_3x7_spl = [square_plus1(x) for x in odd_3x7] # 리스트의 처음 다섯 개 항목 odd_3x7_spl[:5] import csv with open('Seoul_pop2.csv', 'rb') as f: reader = csv.reader(f) for row in reader: if len(row) == 0 or row[0][0] == '#': continue else: print(row) np.arange(3, 10, 3) np.zeros((2,3)) np.ones((2,)) np.diag([1, 2, 3, 4]) np.ones((3,3)) * 2 np.diag(np.ones((3,))*2) np.diag(np.arange(2, 7, 2)) xs = np.linspace(0, 3, 30) xs np.linspace(0,1, 10) ** 2 data = np.loadtxt('populations.txt') year, hares, lynxes, carrots = data.T plt.axes([0.2, 0.1, 0.5, 0.8]) plt.plot(year, hares, year, lynxes, year, carrots) plt.legend(('Hare', 'Lynx', 'Carrot'), loc=(1.05, 0.5)) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Exercise Step2: Exercise Step3: Reading in a CSV file Step4: Exercise Step5: Exercise Step6: Exercise Step7: Using NumPy's linspace() function Step8: Exercise Step9: NumPy basics, part 2 Step10: Exercise
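One of the exercises above leaves the chess-board array unfinished. A hedged sketch of one possible solution using NumPy slicing is given below; the exact pattern expected by the exercise is an assumption.

import numpy as np

chess_board = np.zeros((8, 8), dtype=int)
chess_board[1::2, ::2] = 1   # odd rows, even columns
chess_board[::2, 1::2] = 1   # even rows, odd columns
print(chess_board)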
3,267
<ASSISTANT_TASK:> Python Code: import torch import torch.nn as nn # we'll use this a lot going forward! import numpy as np import matplotlib.pyplot as plt %matplotlib inline X = torch.linspace(1,50,50).reshape(-1,1) # Equivalent to # X = torch.unsqueeze(torch.linspace(1,50,50), dim=1) torch.manual_seed(71) # to obtain reproducible results e = torch.randint(-8,9,(50,1),dtype=torch.float) print(e.sum()) y = 2*X + 1 + e print(y.shape) plt.scatter(X.numpy(), y.numpy()) plt.ylabel('y') plt.xlabel('x'); torch.manual_seed(59) model = nn.Linear(in_features=1, out_features=1) print(model.weight) print(model.bias) class Model(nn.Module): def __init__(self, in_features, out_features): super().__init__() self.linear = nn.Linear(in_features, out_features) def forward(self, x): y_pred = self.linear(x) return y_pred torch.manual_seed(59) model = Model(1, 1) print(model) print('Weight:', model.linear.weight.item()) print('Bias: ', model.linear.bias.item()) for name, param in model.named_parameters(): print(name, '\t', param.item()) x = torch.tensor([2.0]) print(model.forward(x)) # equivalent to print(model(x)) x1 = np.array([X.min(),X.max()]) print(x1) w1,b1 = model.linear.weight.item(), model.linear.bias.item() print(f'Initial weight: {w1:.8f}, Initial bias: {b1:.8f}') print() y1 = x1*w1 + b1 print(y1) plt.scatter(X.numpy(), y.numpy()) plt.plot(x1,y1,'r') plt.title('Initial Model') plt.ylabel('y') plt.xlabel('x'); criterion = nn.MSELoss() optimizer = torch.optim.SGD(model.parameters(), lr = 0.001) # You'll sometimes see this as # optimizer = torch.optim.SGD(model.parameters(), lr = 1e-3) epochs = 50 losses = [] for i in range(epochs): i+=1 y_pred = model.forward(X) loss = criterion(y_pred, y) losses.append(loss) print(f'epoch: {i:2} loss: {loss.item():10.8f} weight: {model.linear.weight.item():10.8f} \ bias: {model.linear.bias.item():10.8f}') optimizer.zero_grad() loss.backward() optimizer.step() plt.plot(range(epochs), losses) plt.ylabel('Loss') plt.xlabel('epoch'); w1,b1 = model.linear.weight.item(), model.linear.bias.item() print(f'Current weight: {w1:.8f}, Current bias: {b1:.8f}') print() y1 = x1*w1 + b1 print(x1) print(y1) plt.scatter(X.numpy(), y.numpy()) plt.plot(x1,y1,'r') plt.title('Current Model') plt.ylabel('y') plt.xlabel('x'); <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Create a column matrix of X values Step2: Create a "random" array of error values Step3: Create a column matrix of y values Step4: Plot the results Step5: Note that when we created tensor $X$, we did not pass requires_grad=True. This means that $y$ doesn't have a gradient function, and y.backward() won't work. Since PyTorch is not tracking operations, it doesn't know the relationship between $X$ and $y$. Step6: Without seeing any data, the model sets a random weight of 0.1060 and a bias of 0.9638. Step7: NOTE Step8: As models become more complex, it may be better to iterate over all the model parameters Step9: NOTE Step10: which is confirmed with $f(x) = (0.1060)(2.0)+(0.9638) = 1.1758$ Step11: Set the loss function Step12: Set the optimization Step13: Train the model Step14: Plot the loss values Step15: Plot the result
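To make the optimizer step above concrete, here is a hedged, self-contained sketch of a single SGD update written out by hand on toy data; the learning rate and data values are illustrative only.

import torch

w = torch.tensor([0.5], requires_grad=True)
b = torch.tensor([0.0], requires_grad=True)
x = torch.tensor([1.0, 2.0, 3.0])
y = torch.tensor([3.0, 5.0, 7.0])          # roughly y = 2x + 1

loss = torch.mean((w * x + b - y) ** 2)    # MSE, like nn.MSELoss
loss.backward()                            # populates w.grad and b.grad
with torch.no_grad():
    lr = 0.001
    w -= lr * w.grad                       # what optimizer.step() does
    b -= lr * b.grad
print(w.item(), b.item())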
3,268
<ASSISTANT_TASK:> Python Code: import pints import pints.plot import pints.toy import matplotlib.pyplot as plt import numpy as np model = pints.toy.GoodwinOscillatorModel() real_parameters = model.suggested_parameters() times = model.suggested_times() values = model.simulate(real_parameters, times) plt.figure() plt.subplot(3, 1, 1) plt.plot(times, values[:, 0], 'b') plt.subplot(3, 1, 2) plt.plot(times, values[:, 1], 'g') plt.subplot(3, 1, 3) plt.plot(times, values[:, 2], 'r') plt.show() noise1 = 0.001 noise2 = 0.01 noise3 = 0.1 noisy_values = np.array(values, copy=True) noisy_values[:, 0] += np.random.normal(0, noise1, len(times)) noisy_values[:, 1] += np.random.normal(0, noise2, len(times)) noisy_values[:, 2] += np.random.normal(0, noise3, len(times)) plt.figure() plt.subplot(3, 1, 1) plt.plot(times, noisy_values[:, 0], 'b') plt.subplot(3, 1, 2) plt.plot(times, noisy_values[:, 1], 'g') plt.subplot(3, 1, 3) plt.plot(times, noisy_values[:, 2], 'r') plt.show() # Create an object with links to the model and time series problem = pints.MultiOutputProblem(model, times, values) # Create a log posterior log_prior = pints.UniformLogPrior([1, 1, 0.01, 0.01, 0.01], [10, 10, 1, 1, 1]) log_likelihood = pints.GaussianKnownSigmaLogLikelihood(problem, [noise1, noise2, noise3]) log_posterior = pints.LogPosterior(log_likelihood, log_prior) # Run MCMC on the noisy data x0 = [[5, 5, 0.5, 0.5, 0.5]]*3 mcmc = pints.MCMCController(log_posterior, 3, x0) mcmc.set_max_iterations(5000) mcmc.set_log_to_screen(False) print('Running') chains = mcmc.run() print('Done!') results = pints.MCMCSummary( chains=chains, time=mcmc.time(), parameter_names=['k2', 'k3', 'm1', 'm2', 'm3'] ) print(results) pints.plot.trace(chains, ref_parameters=real_parameters) plt.show() # Fit to the noisy data parameters = [] opt = pints.OptimisationController(log_posterior, x0[0], method=pints.XNES) opt.set_log_to_screen(False) parameters, fbest = opt.run() print('') print(' p1 p2 p3 p4 p5') print('real ' + ' '.join(['{: 8.4g}'.format(float(x)) for x in real_parameters])) print('found ' + ' '.join(['{: 8.4g}'.format(x) for x in parameters])) problem = pints.MultiOutputProblem(model, times, noisy_values) # Create a log-likelihood function (adds an extra parameter!) log_likelihood = pints.GaussianLogLikelihood(problem) # Create a uniform prior over both the parameters and the new noise variable log_prior = pints.UniformLogPrior( [0, 0, 0, 0, 0, 0, 0, 0], [10, 10, 1, 1, 1, 1, 1, 1] ) # Create a posterior log-likelihood (log(likelihood * prior)) log_posterior = pints.LogPosterior(log_likelihood, log_prior) # Choose starting points for 3 mcmc chains real_parameters1 = np.array(real_parameters.tolist() + [noise1, noise2, noise3]) xs = [ real_parameters1 * 1.1, real_parameters1 * 0.9, real_parameters1 * 1.15, real_parameters1 * 1.2, ] # Create mcmc routine mcmc = pints.MCMCController(log_posterior, 4, xs, method=pints.RelativisticMCMC) # Add stopping criterion mcmc.set_max_iterations(200) # Run in parallel mcmc.set_parallel(True) mcmc.set_log_interval(1) # Tune the samplers' hyper-parameters for sampler in mcmc.samplers(): sampler.set_leapfrog_step_size([0.1, 0.5, 0.002, 0.002, 0.002, 0.0005, 0.001, 0.01]) sampler.set_leapfrog_steps(10) # Run! 
print('Running...') chains = mcmc.run() print('Done!') results = pints.MCMCSummary( chains=chains, time=mcmc.time(), parameter_names=['k2', 'k3', 'm1', 'm2', 'm3', 'sigma_x', 'sigma_y', 'sigma_z']) print(results) pints.plot.trace(chains, ref_parameters=real_parameters1) plt.show() pints.plot.series(np.vstack(chains), problem) plt.show() <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: The model also provides suggested parameters and sampling times, allowing us to run a simulation Step2: This gives us all we need to create a plot of the model outputs versus time Step3: Now we will add some noise to generate some fake "experimental" data and try to recover the original parameters. Step4: Now we can try to infer the original parameters Step5: We can use an MCMCSummary to display the results Step6: Now we can inspect the resulting chains Step7: This is a pretty hard problem! Step8: Sampling using relativistic HMC Step9: Display the results Step10: Plot the posterior predictive distribution.
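A small hedged sketch of a convergence sanity check on the chains: comparing per-chain posterior means after discarding warm-up. The chain array shape (n_chains, n_iterations, n_parameters) matches what MCMCController.run() returns; the random stand-in data here is an assumption.

import numpy as np

def per_chain_means(chains, warmup_fraction=0.5):
    n_warmup = int(chains.shape[1] * warmup_fraction)
    return chains[:, n_warmup:, :].mean(axis=1)   # one row of means per chain

fake_chains = np.random.normal(size=(3, 1000, 5))  # stand-in for the real output
print(per_chain_means(fake_chains))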
3,269
<ASSISTANT_TASK:> Python Code: import numpy as np import pandas as pd import matplotlib.pyplot as plt from sklearn.model_selection import LeaveOneOut from sklearn import linear_model, neighbors %matplotlib inline plt.style.use('ggplot') # dataset path data_dir = "." sample_data = pd.read_csv(data_dir+"/hw1.csv", delimiter=',') sample_data.head() X = np.array(sample_data.iloc[:,range(1,5)]) y = np.array(sample_data.iloc[:,0]) def loo_risk(X,y,regmod): Construct the leave-one-out square error risk for a regression model Input: design matrix, X, response vector, y, a regression model, regmod Output: scalar LOO risk loo = LeaveOneOut() loo_losses = [] for train_index, test_index in loo.split(X): X_train, X_test = X[train_index], X[test_index] y_train, y_test = y[train_index], y[test_index] regmod.fit(X_train,y_train) y_hat = regmod.predict(X_test) loss = np.sum((y_hat - y_test)**2) loo_losses.append(loss) return np.mean(loo_losses) def emp_risk(X,y,regmod): Return the empirical risk for square error loss Input: design matrix, X, response vector, y, a regression model, regmod Output: scalar empirical risk regmod.fit(X,y) y_hat = regmod.predict(X) return np.mean((y_hat - y)**2) lin1 = linear_model.LinearRegression(fit_intercept=True) print('LOO Risk: '+ str(loo_risk(X,y,lin1))) print('Emp Risk: ' + str(emp_risk(X,y,lin1))) LOOs = [] MSEs = [] K=60 Ks = range(1,K+1) for k in Ks: knn = neighbors.KNeighborsRegressor(n_neighbors=k) LOOs.append(loo_risk(X,y,knn)) MSEs.append(emp_risk(X,y,knn)) plt.plot(Ks,LOOs,'r',label="LOO risk") plt.title("Risks for kNN Regression") plt.plot(Ks,MSEs,'b',label="Emp risk") plt.legend() _ = plt.xlabel('k') min(LOOs) print('optimal k:' + str(LOOs.index(min(LOOs)))) n,p = X.shape rem = set(range(p)) supp = [] LOOs = [] while len(supp) < p: rem = list(set(range(p)) - set(supp)) ERMs = [emp_risk(X[:,supp+[j]],y,linear_model.LinearRegression(fit_intercept=True)) for j in rem] jmin = rem[np.argmin(ERMs)] supp.append(jmin) LOOs.append(loo_risk(X[:,supp],y,linear_model.LinearRegression(fit_intercept=True))) for i,s,loo in zip(range(p),supp,LOOs): print("Step {} added variable {} with LOO: {}".format(i,s,loo)) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step3: The response variable is quality. Step4: Exercise 2.1 (5 pts) Compare the leave-one-out risk with the empirical risk for linear regression, on this dataset. Step5: Exercise 2.2 (10 pts) Perform kNN regression and compare the leave-one-out risk with the empirical risk for k from 1 to 50. Remark on the tradeoff between bias and variance for this dataset and compare against linear regression. Step6: Conclusion Comparing the performance of kNN and linear regression, we see that 16-nearest neighbors achieves a LOO risk of 233.2 which is lower than that for linear regression (243.5).
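Leave-one-out refits the model n times, which becomes expensive for larger datasets. A hedged sketch of a cheaper K-fold estimate of the same square-error risk is shown below, run on synthetic data (the data-generating coefficients are made up).

import numpy as np
from sklearn.model_selection import KFold
from sklearn import linear_model

def kfold_risk(X, y, regmod, k=10):
    losses = []
    for train_idx, test_idx in KFold(n_splits=k, shuffle=True, random_state=0).split(X):
        regmod.fit(X[train_idx], y[train_idx])
        losses.append(np.mean((regmod.predict(X[test_idx]) - y[test_idx]) ** 2))
    return np.mean(losses)

X_demo = np.random.rand(100, 4)
y_demo = X_demo @ np.array([1.0, -2.0, 0.5, 3.0]) + 0.1 * np.random.randn(100)
print(kfold_risk(X_demo, y_demo, linear_model.LinearRegression()))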
3,270
<ASSISTANT_TASK:> Python Code: %matplotlib inline import matplotlib.pyplot as plt import pandas as pd plt.style.use('ggplot') # Anaconda on Windows will get warning df=pd.read_csv('train.csv') df.head() df.hist( figsize=(16, 10)) df.describe() df.keys() df2=df.drop(['Name','PassengerId','Cabin','Ticket'],1) df2.head() code={'female':0,'male':1} for k,i in df2.iterrows(): df2.loc[k,"sexc"]=code[i['Sex']] df2.head() df2["Embarked"].value_counts() code={'C':0,'Q':1, 'S':2} for k,i in df2.iterrows(): if i['Embarked']==i['Embarked']: df2.loc[k,"embarkedc"]=code[i['Embarked']] df2.head(10) #remove the original column df3=df2.drop(['Sex','Embarked'],1) df4=df3.dropna(axis=0) df4.head(10) df5=df3[ df3['Age'].notnull() ] df5.head() X=df4.as_matrix([df4.columns[1:]]) y=df4.as_matrix(["Survived"]) y.shape y=y.reshape(-1) y.shape X.shape from sklearn.ensemble import ExtraTreesClassifier from sklearn.cross_validation import cross_val_score forest = ExtraTreesClassifier(n_estimators=250,random_state=0) cross_val_score(forest, X, y) forest.fit(X,y) forest.predict([3,22.0,1,0,7.2500,1.0,2.0]) forest.predict([1,38.0,1,0,71.2833,0.0,0.0]) #Pclass=3 (lower), Age=25 (guessing), SibSp=0, Parch=0, Fare=0.0 (free trip), sexc=1 (male), embarkedc=1 (Queenstown, guessing) forest.predict([3,25.0,0,0,0.0,1,1]) forest.predict([1,23.0,0,1,50.0,0,1]) %%HTML <img src="https://upload.wikimedia.org/wikipedia/en/b/bb/Titanic_breaks_in_half.jpg"> <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Loading data from CSV Step2: Show first 5 records Step3: Histogram Step4: Show statistics Step5: Show list of column names Step6: Drop some unnecessary columns Step7: From column 'Sex', create another numeric column (sexc) Step8: Count number of records of each category Step9: From column 'Embarked', create another numeric column (embarkedc) Step10: Drop every NaN row Step11: Alternative approach--drop every NaN row using masking Step12: Convert DataFrame to NumPy array for training Step13: Construct classifier Step14: Lazy k-fold cross-validation Step15: Training Step16: Prediction Step17: Dummy Jack Step18: --The prediction output is 0, which means that, sadly, Jack does not survive. Step19: --But Rose has to continue.
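Once the forest is fitted, its feature importances show which of the engineered columns drive the survival prediction. The sketch below is hedged: it uses random stand-in data with the same column names as above rather than the real training set.

import numpy as np
from sklearn.ensemble import ExtraTreesClassifier

feature_names = ['Pclass', 'Age', 'SibSp', 'Parch', 'Fare', 'sexc', 'embarkedc']
X_demo = np.random.rand(50, len(feature_names))
y_demo = np.random.randint(0, 2, 50)

forest = ExtraTreesClassifier(n_estimators=50, random_state=0).fit(X_demo, y_demo)
for name, imp in sorted(zip(feature_names, forest.feature_importances_),
                        key=lambda pair: -pair[1]):
    print('{:10s} {:.3f}'.format(name, imp))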
3,271
<ASSISTANT_TASK:> Python Code: from IPython.display import HTML HTML('<iframe src="http://conda.pydata.org/docs/_downloads/conda-cheatsheet.pdf" width="700" height="400"></iframe>') # importing numpy # performance list sum # performance array sum %timeit np.sum(array) one_dim_array = two_dim_array = # size & shape # data type # usual arrays # changing the shape # linspace one_dim_array two_dim_array # Chess board chess_board = np.zeros([8, 8], dtype=int) # your code chess_board # drawing the chessboard # numpy functions x = y = # plotting # another function # transpose two_dim_array = # matrix multiplication # matrix vector # inv # eigenvectors & eigenvalues from IPython.display import HTML HTML('<iframe src="http://www.mambiente.munimadrid.es/sica/scripts/index.php" \ width="700" height="400"></iframe>') # Linux command !head ./data/barrio_del_pilar-20160322.csv # Windows # !gc log.txt | select -first 10 # head # loading the data # ./data/barrio_del_pilar-20160322.csv data2016 = # mean # masking invalid data data2015 = from IPython.display import HTML HTML('<iframe src="http://ccaa.elpais.com/ccaa/2015/12/24/madrid/1450960217_181674.html" width="700" height="400"></iframe>') # http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.convolve.html def moving_average(x, N=8): return np.convolve(x, np.ones(N)/N, mode='same') HTML('<iframe src="http://eportal.magrama.gob.es/websiar/Ficha.aspx?IdProvincia=28&IdEstacion=1" width="700" height="400"></iframe>') !head data/M01_Center_Finca_temperature_data_2004_2015.csv # Loading the data temp_data = # Importing SciPy stats # Applying some functions: describe, mode, mean... temp_data2 = np.zeros([365, 3, 12]) # Calculating mean of mean temp # max of max # min of min plt.style.available # plotting max_max, min_min, mean_mean # mean vs mean_mean # and max, min 2015 #we will use numpy functions in order to work with numpy arrays def funcion(x,y): return # 0D: works! funcion(3,5) # 1D: works! x = np. plt.plot( , ) #We can create the X and Y matrices by hand, or use a function designed to make ir easy: #we create two 1D arrays of the desired lengths: x_1d = np.linspace(0, 5, 5) y_1d = np.linspace(-2, 4, 7) #And we use the meshgrid function to create the X and Y matrices! X, Y = X Y #Using Numpy arrays, calculating the function value at the points is easy! Z #Let's plot it! x_1d = np.???(0, 5, 100) y_1d = np.???(-2, 4, 100) X, Y = np.???( , ) Z = funcion(X,Y) plt.contour(X, Y, Z) plt.colorbar() plt.contourf( , , , ,cmap=plt.cm.Spectral) #With cmap, a color map is specified plt.colorbar() plt.contourf( , , , ,cmap=plt.cm.Spectral) plt.colorbar() #We can even combine them! plt.contourf(X, Y, Z, np.linspace(-2, 2, 100),cmap=plt.cm.Spectral) plt.colorbar() cs = plt.???(X, Y, Z, np.linspace(-2, 2, 9), colors='k') plt.clabel(cs) time_vector = np. ('data/ligo_tiempos.txt') frequency_vector = np. ('data/ligo_frecuencias.txt') intensity_matrix = np. ('data/ligo_datos.txt') time_2D, freq_2D = np. plt. ( ) #We can manually adjust the sice of the picture plt. 
( , , ,np.linspace(0, 0.02313, 200),cmap='bone') plt.xlabel('time (s)') plt.ylabel('Frequency (Hz)') plt.colorbar() plt.figure(figsize=(10,6)) plt.contourf(time_2D, freq_2D,intensity_matrix,np.linspace(0, 0.02313, 200),cmap = plt.cm.Spectral) plt.colorbar() plt.contour(time_2D, freq_2D,intensity_matrix,np.linspace(0, 0.02313, 9), colors='k') plt.xlabel('time (s)') plt.ylabel('Frequency (Hz)') plt.axis([9.9, 10.05, 0, 300]) from ipywidgets import interact #Lets define a extremely simple function: def ejemplo(x): print(x) #Try changing the value of x to True, 'Hello' or ['hello', 'world'] #We can control the slider values with more precission: x = np.linspace(-1, 7, 1000) fig = plt.figure() fig.tight_layout() plt.subplot(211)#This allows us to display multiple sub-plots, and where to put them plt.plot(x, np.sin(x)) plt.grid(False) plt.title("Audio signal: modulator") plt.subplot(212) plt.plot(x, np.sin(50 * x)) plt.grid(False) plt.title("Radio signal: carrier") #Am modulation simply works like this: am_wave = np.sin(50 * x) * (0.5 + 0.5 * np.sin(x)) plt.plot(x, am_wave) def am_mod (f_carr=50, f_mod=1, depth=0.5): #The default values will be the starting points of the sliders interact(am_mod, f_carr = (1,100,2), f_mod = (0.2, 2, 0.1), depth = (0, 1, 0.1)) # Importación from sympy import init_session init_session(use_latex='matplotlib') #We must start calling this function coef_traccion = w = W = w, W x, y, z, t = symbols('x y z t', real=True) x.assumptions0 expr = expr #We can substitute pieces of the expression: expr. #We can particularize on a certain value: (sin(x) + 3 * x). #We can evaluate the numerical value with a certain precission: (sin(x) + 3 * x). expr1 = (x ** 3 + 3 * y + 2) ** 2 expr1 expr1. expr = cos(2*x) expr. expr_xy = y ** 3 * sin(x) ** 2 + x ** 2 * cos(y) expr_xy int2 = 1 / sin(x) x, a = symbols('x a', real=True) int3 = 1 / (x**2 + a**2)**2 a, x, t, C = symbols('a, x, t, C', real=True) ecuacion = ecuacion x = symbols('x') f = Function('y') ecuacion_dif = ecuacion_dif # Notebook style from IPython.core.display import HTML css_file = './static/style.css' HTML(open(css_file, "r").read()) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Main objectives of this workshop Step2: Array creation Step3: Basic slicing Step4: [start Step5: 2. Drawing Step6: Operations & linalg Step7: Air quality data Step8: Loading the data Step9: Dealing with missing values Step10: Plotting the data Step11: CO Step12: O3 Step13: The file contains data from 2004 to 2015 (inclusive). Each row corresponds to a day of the year, so every 365 lines contain data from a whole year* Step14: We can also get information about percentiles! Step15: Let's visualize the data! Step16: Let's see if 2015 was a normal year... Step17: But the power of Matplotlib does not end here! Step18: In order to plot the 2D function, we will need a grid. Step19: Note that with the meshgrid function we can only create rectangular grids Step20: We can try a little more resolution... Step21: The contourf function is similar, but it also colours the areas between the lines. In both functions, we can manually adjust the number of lines/zones we want to differentiate on the plot. Step22: These functions can be enormously useful when you want to visualize something. Step23: The time and frequency vectors contain the values at which the instrument was reading, and the intensity matrix contains the postprocessed strength measured for each frequency at each time. Step24: Wow! What is that? Let's zoom into it! Step25: IPython Widgets Step26: If you want a dropdown menu that passes non-string values to the Python function, you can pass a dictionary. The keys in the dictionary are used for the names in the dropdown menu UI and the values are the arguments that are passed to the underlying Python function. Step27: In order to interact with it, we will need to transform it into a function Step28: Other options... Step29: The basic unit of this package is the symbol. A symbol object has a name and a graphic representation, which can be different Step30: By default, SymPy treats symbols as complex numbers. That can lead to unexpected results with certain operations, like logarithms. We can explicitly state that a symbol is real when we create it. We can also create several symbols at a time. Step31: Expressions can be created from symbols Step32: We can manipulate the expression in several ways. For example Step33: We can differentiate and integrate Step34: We also have equations and differential equations Step35: Data Analysis with pandas
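The "dealing with missing values" step above hinges on NumPy masked arrays. A hedged, self-contained sketch follows; the sentinel value -999 is an assumption, not necessarily the marker used in the air-quality files.

import numpy as np

raw = np.array([1.2, -999.0, 3.4, -999.0, 5.6])
masked = np.ma.masked_values(raw, -999.0)
print(masked.mean())   # statistics ignore the masked entries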
3,272
<ASSISTANT_TASK:> Python Code: df.fillna('n/a',inplace=True) su=df[df['type_of_property'].str.contains('Apartment')] mu=df[df['type_of_property'].str.contains('Apartments')] print(len(mu)) print(len(su)) su['propertyinfo_value'] len(su[~(su['propertyinfo_value'].str.contains('bd') | su['propertyinfo_value'].str.contains('Studio'))]) len(su[su['propertyinfo_value'].str.contains('bd') & ~su['propertyinfo_value'].str.contains('ba')]) no_baths=su[~su['propertyinfo_value'].str.contains('ba')] sucln=su[~su.index.isin(no_baths.index)] sucln def parse_info(row): print(row) br,ba,sqft=row.split('·')[:3] #_,br=br.split('mo')[:2] rent,br=br.split('mo')[:2] return pd.Series({'Beds':br,'Baths':ba,'Sqft':sqft,'Rent':rent+'mo'}) attr=sucln['propertyinfo_value'].apply(parse_info) attr sujnd=sucln.join(attr) sujnd.T <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Listings with bedrooms but no bathrooms Step2: Here we use a custom function to parse and categorize the data: what comes out of sucln['xxx'] is a Series, and the row parameter read inside parse_info is a single line of text from that Series
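After parsing, the Beds/Baths/Sqft/Rent columns still hold strings. A hedged sketch of converting them to numbers is shown below; the sample string formats are assumptions about what the scraped values look like.

import pandas as pd

attr = pd.DataFrame({'Beds': [' 2 bd '], 'Baths': [' 1 ba '],
                     'Sqft': [' 850 sqft '], 'Rent': ['$1,500/mo']})
attr_num = (attr.apply(lambda col: col.str.replace(r'[^\d.]', '', regex=True))
                .apply(pd.to_numeric, errors='coerce'))
print(attr_num)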
3,273
<ASSISTANT_TASK:> Python Code: from modsim import System # If this doesn't work, move this file into your /code folder. # It needs to be in the same folder as modsim.py. def func1(input1, input2): print("Input 1 = ", input1) print("Input 2 = ", input2) output = input1 + input2 print("Output = ", output) func1(1, 2.5) print(output) output = 5 print(output) func1(1, 2.5) # will print input1, input2, and output print("Output also =", output) def func2(input1, input2): print("Input 1 = ", input1) print("Input 2 = ", input2) print("Output = ", output) # this will be the global value from earlier, since it's not locally computed func1(1, 2.5) func2(1, 2.5) def forgetful_func(input1, input2): print(input1, input2) forgetful_func(1, 6) print("Now we try to print those:") print(input1, input2) def useful_func(input1, input2): print(input1 + input2) useful_func(2, 3) # This should make sense. You pass in these parameters, they get used, they aren't stored after printing. input1 = 1.5 input2 = 4 useful_func(input1, input2) cat = 5 dog = 10 useful_func(cat, dog) useful_func(dog, dog) useful_func(input2, dog) def will_you_remember_me(thing1, thing2): new_thing = thing1 + thing2 return new_thing will_you_remember_me(3, 8) print(new_thing) I_will_remember_you = will_you_remember_me(6, 7) print(I_will_remember_you) # Please, never name things like this. Please. print(will_you_remember_me(5, 6)) solar_system = System(planets=8, central_mass="Sun") print(solar_system.planets, solar_system.central_mass) other_system = System(planets=5, central_mass="Me") print(other_system.planets, other_system.central_mass) friends = 1000 center_of_universe = " " # Type your name here? personal_universe = System(friends=friends, center_of_universe=center_of_universe) print(personal_universe.friends, personal_universe.center_of_universe) apples = 1 bananas = 1 cherries = 7 grapes = 13 fruit_salad = System(a=apples, b=bananas, c=cherries, d=grapes) print(fruit_salad.a, fruit_salad.b, fruit_salad.c, fruit_salad.d) print(fruit_salad.apples) print(apples) print(a) global_a = fruit_salad.a # This is a name I chose, it doesn't 'make' the variable global. print(global_a) b = fruit_salad.b b += 10 fruit_salad.b += 100 print(b, fruit_salad.b) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Let's write a function, like they do in ModSim notebooks all the time. We'll give it some parameters just to make it feel important. Step2: All it does is add two numbers and print a lot of stuff. Printing is fun, let's do it some more. Step3: Well, shucks. That didn't work. output isn't defined? But we defined it in that super technical function! Step4: So, output was defined in the function, but doesn't exist outside of the function executing. What happens if you run the function again and print output? Step5: What? There are 2 versions of output? Well ya see, this is where we come to global and local variables. Local variables only exist within a function or class* that is using them (they are temporary for a function/specific to a class). Global variables can be accessed by any function or class. Step6: Here we're running two similar functions one after another, and they print their results one after another. Step7: Why isn't the second print command (outside of the function) working? Because input1 and input2 still aren't global. We getting this? Good. Step8: Now, I dare you to do something w_i_l_d Step9: Cool, so that's inputs (mostly). What about outputs? That poor little small-town output variable wants to be a global star. Step10: Foiled again! But we used return and everything! Well guess what, a return value that doesn't get assigned to anything is like a letter without an address (cue joke about The Twitter and snail mail implying that I'm old or something). Step11: If you're lazy you can do this, but your variable will not be saved so be careful. Step12: You can do the above as a shortcut, but remember that your return value will be LOST FOREVER after that. Step13: So, you've created two systems by directly setting parameters, and one by passing in premade variables. In the book those variables usually have the same name as their target, which obscures what's actually happening. Step14: Oh dear, did that last one fail? Of course it did, apples isn't an attribute of fruit_salad. You can print plain old apples, because it's global Step15: You can't print plain old a, because that's an attribute of fruit_salad and isn't defined outside of that. Step16: You can, however, do this Step17: And you could just as well call global_a something else, like a, and there would be a global variable a as well as a property a, and if you changed one the other would not be affected.
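Under the hood, System behaves much like a plain namespace object whose constructor keyword arguments become attributes. The hedged sketch below uses types.SimpleNamespace as a stand-in to illustrate that behaviour; it is not the actual modsim implementation.

from types import SimpleNamespace

fruit_salad = SimpleNamespace(a=1, b=1, c=7, d=13)
fruit_salad.b += 100            # changes the attribute on the object
b = fruit_salad.b               # copies the value into a global name
b += 10                         # does NOT change fruit_salad.b
print(fruit_salad.b, b)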
3,274
<ASSISTANT_TASK:> Python Code: import holoviews as hv hv.notebook_extension(bokeh=True) hv.Element(None, group='Value', label='Label') import numpy as np points = [(0.1*i, np.sin(0.1*i)) for i in range(100)] hv.Curve(points) np.random.seed(7) points = [(0.1*i, np.sin(0.1*i)) for i in range(100)] errors = [(0.1*i, np.sin(0.1*i), np.random.rand()/2) for i in np.linspace(0, 100, 11)] hv.Curve(points) * hv.ErrorBars(errors) %%opts ErrorBars points = [(0.1*i, np.sin(0.1*i)) for i in range(100)] errors = [(0.1*i, np.sin(0.1*i), np.random.rand()/2, np.random.rand()/4) for i in np.linspace(0, 100, 11)] hv.Curve(points) * hv.ErrorBars(errors, vdims=['y', 'yerrneg', 'yerrpos']) xs = np.linspace(0, np.pi*4, 40) hv.Area((xs, np.sin(xs))) X = np.linspace(0,3,200) Y = X**2 + 3 Y2 = np.exp(X) + 2 Y3 = np.cos(X) hv.Area((X, Y, Y2), vdims=['y', 'y2']) * hv.Area((X, Y, Y3), vdims=['y', 'y3']) np.random.seed(42) xs = np.linspace(0, np.pi*2, 20) err = 0.2+np.random.rand(len(xs)) hv.Spread((xs, np.sin(xs), err)) %%opts Spread (fill_color='indianred' fill_alpha=1) xs = np.linspace(0, np.pi*2, 20) hv.Spread((xs, np.sin(xs), 0.1+np.random.rand(len(xs)), 0.1+np.random.rand(len(xs))), vdims=['y', 'yerrneg', 'yerrpos']) data = [('one',8),('two', 10), ('three', 16), ('four', 8), ('five', 4), ('six', 1)] bars = hv.Bars(data, kdims=[hv.Dimension('Car occupants', values='initial')], vdims=['Count']) bars + bars[['one', 'two', 'three']] %%opts Bars [group_index=0 stack_index=1] from itertools import product np.random.seed(3) groups, stacks = ['A', 'B'], ['a', 'b'] keys = product(groups, stacks) hv.Bars([k+(np.random.rand()*100.,) for k in keys], kdims=['Group', 'Stack'], vdims=['Count']) hv.BoxWhisker(np.random.randn(200), kdims=[], vdims=['Value']) %%opts BoxWhisker [invert_axes=True width=600] groups = [chr(65+g) for g in np.random.randint(0, 3, 200)] hv.BoxWhisker((groups, np.random.randint(0, 5, 200), np.random.randn(200)), kdims=['Group', 'Category'], vdims=['Value']).sort() np.random.seed(1) data = [np.random.normal() for i in range(10000)] frequencies, edges = np.histogram(data, 20) hv.Histogram(frequencies, edges) %%opts Histogram [projection='polar' show_grid=True] data = [np.random.rand()*np.pi*2 for i in range(100)] frequencies, edges = np.histogram(data, 20) hv.Histogram(frequencies, edges, kdims=['Angle']) %%opts Scatter (color='k', marker='s', s=10) np.random.seed(42) points = [(i, np.random.random()) for i in range(20)] hv.Scatter(points) + hv.Scatter(points)[12:20] np.random.seed(12) points = np.random.rand(50,2) hv.Points(points) + hv.Points(points)[0.6:0.8,0.2:0.5] for o in [hv.Points(points,name="Points "), hv.Scatter(points,name="Scatter")]: for d in ['key','value']: print("%s %s_dimensions: %s " % (o.name, d, o.dimensions(d,label=True))) %%opts Points [color_index=2 size_index=3 scaling_factor=50] (cmap='jet') np.random.seed(10) data = np.random.rand(100,4) points = hv.Points(data, vdims=['z', 'alpha']) points + points[0.3:0.7, 0.3:0.7].hist() %%opts Spikes (line_alpha=0.4) [spike_length=0.1] xs = np.random.rand(50) ys = np.random.rand(50) hv.Points((xs, ys)) * hv.Spikes(xs) %%opts Spikes (cmap='Reds') hv.Spikes(np.random.rand(20, 2), kdims=['Mass'], vdims=['Intensity']) %%opts Spikes [spike_length=0.1] NdOverlay [show_legend=False] hv.NdOverlay({i: hv.Spikes(np.random.randint(0, 100, 10), kdims=['Time'])(plot=dict(position=0.1*i)) for i in range(10)})(plot=dict(yticks=[((i+1)*0.1-0.05, i) for i in range(10)])) %%opts Spikes (line_alpha=0.2) points = hv.Points(np.random.randn(500, 2)) points << 
hv.Spikes(points['y']) << hv.Spikes(points['x']) x,y = np.mgrid[-10:10,-10:10] * 0.25 sine_rings = np.sin(x**2+y**2)*np.pi+np.pi exp_falloff = 1/np.exp((x**2+y**2)/8) vector_data = [x,y,sine_rings, exp_falloff] hv.VectorField(vector_data) %%opts VectorField.A [color_dim='angle'] VectorField.M [color_dim='magnitude'] hv.VectorField(vector_data, group='A') import numpy as np np.random.seed(42) points = [(i, np.random.normal()) for i in range(800)] hv.Scatter(points).hist() %%opts Surface (cmap='jet' rstride=20, cstride=2) hv.Surface(np.sin(np.linspace(0,100*np.pi*2,10000)).reshape(100,100)) %%opts Scatter3D [azimuth=40 elevation=20] x,y = np.mgrid[-5:5, -5:5] * 0.1 heights = np.sin(x**2+y**2) hv.Scatter3D(zip(x.flat,y.flat,heights.flat)) %%opts Trisurface [fig_size=200] (cmap='hot_r') hv.Trisurface((x.flat,y.flat,heights.flat)) x,y = np.mgrid[-50:51, -50:51] * 0.1 hv.Raster(np.sin(x**2+y**2)) n = 21 xs = np.logspace(1, 3, n) ys = np.linspace(1, 10, n) hv.QuadMesh((xs, ys, np.random.rand(n-1, n-1))) coords = np.linspace(-1.5,1.5,n) X,Y = np.meshgrid(coords, coords); Qx = np.cos(Y) - np.cos(X) Qz = np.sin(Y) + np.sin(X) Z = np.sqrt(X**2 + Y**2) hv.QuadMesh((Qx, Qz, Z)) data = {(chr(65+i),chr(97+j)): i*j for i in range(5) for j in range(5) if i!=j} hv.HeatMap(data).sort() x,y = np.mgrid[-50:51, -50:51] * 0.1 bounds=(-1,-1,1,1) # Coordinate system: (left, bottom, top, right) (hv.Image(np.sin(x**2+y**2), bounds=bounds) + hv.Image(np.sin(x**2+y**2), bounds=bounds)[-0.5:0.5, -0.5:0.5]) x,y = np.mgrid[-50:51, -50:51] * 0.1 r = 0.5*np.sin(np.pi +3*x**2+y**2)+0.5 g = 0.5*np.sin(x**2+2*y**2)+0.5 b = 0.5*np.sin(np.pi/2+x**2+y**2)+0.5 hv.RGB(np.dstack([r,g,b])) %%opts Image (cmap='gray') hv.Image(r,label="R") + hv.Image(g,label="G") + hv.Image(b,label="B") %%opts Image (cmap='gray') mask = 0.5*np.sin(0.2*(x**2+y**2))+0.5 rgba = hv.RGB(np.dstack([r,g,b,mask])) bg = hv.Image(0.5*np.cos(x*3)+0.5, label="Background") * hv.VLine(x=0,label="Background") overlay = bg*rgba overlay.label="RGBA Overlay" bg + hv.Image(mask,label="Mask") + overlay x,y = np.mgrid[-50:51, -50:51] * 0.1 h = 0.5 + np.sin(0.2*(x**2+y**2)) / 2.0 s = 0.5*np.cos(y*3)+0.5 v = 0.5*np.cos(x*3)+0.5 hsv = hv.HSV(np.dstack([h, s, v])) hsv %%opts Image (cmap='gray') hv.Image(h, label="H") + hv.Image(s, label="S") + hv.Image(v, label="V") hv.ItemTable([('Age', 10), ('Weight',15), ('Height','0.8 meters')]) keys = [('M',10), ('M',16), ('F',12)] values = [(15, 0.8), (18, 0.6), (10, 0.8)] table = hv.Table(zip(keys,values), kdims = ['Gender', 'Age'], vdims=['Weight', 'Height']) table table.select(Gender='M') + table.select(Gender='M', Age=10) table.select(Gender='M').to.curve(kdims=["Age"], vdims=["Weight"]) scene = hv.RGB.load_image('../assets/penguins.png') scene * hv.VLine(-0.05) + scene * hv.HLine(-0.05) points = [(-0.3, -0.3), (0,0), (0.25, -0.25), (0.3, 0.3)] codes = [1,4,4,4] scene * hv.Spline((points,codes)) * hv.Curve(points) scene * hv.Text(0, 0.2, 'Adult\npenguins') + scene * hv.Arrow(0,-0.1, 'Baby penguin', 'v') angle = np.linspace(0, 2*np.pi, 100) baby = list(zip(0.15*np.sin(angle), 0.2*np.cos(angle)-0.2)) adultR = [(0.25, 0.45), (0.35,0.35), (0.25, 0.25), (0.15, 0.35), (0.25, 0.45)] adultL = [(-0.3, 0.4), (-0.3, 0.3), (-0.2, 0.3), (-0.2, 0.4),(-0.3, 0.4)] scene * hv.Path([adultL, adultR, baby]) * hv.Path([baby]) x,y = np.mgrid[-50:51, -50:51] * 0.1 def circle(radius, x=0, y=0): angles = np.linspace(0, 2*np.pi, 100) return np.array( list(zip(x+radius*np.sin(angles), y+radius*np.cos(angles)))) hv.Image(np.sin(x**2+y**2)) * 
hv.Contours([circle(0.22)], level=0) * hv.Contours([circle(0.33)], level=1) %%opts Polygons (cmap='hot' line_color='black' line_width=2) np.random.seed(35) hv.Polygons([np.random.rand(4,2)], level=0.5) *\ hv.Polygons([np.random.rand(4,2)], level=1.0) *\ hv.Polygons([np.random.rand(4,2)], level=1.5) *\ hv.Polygons([np.random.rand(4,2)], level=2.0) def rectangle(x=0, y=0, width=1, height=1): return np.array([(x,y), (x+width, y), (x+width, y+height), (x, y+height)]) (hv.Polygons([rectangle(width=2), rectangle(x=6, width=2)])(style={'fill_color': '#a50d0d'}) * hv.Polygons([rectangle(x=2, height=2), rectangle(x=5, height=2)])(style={'fill_color': '#ffcc00'}) * hv.Polygons([rectangle(x=3, height=2, width=2)])(style={'fill_color': 'cyan'})) scene * hv.Bounds(0.2) * hv.Bounds((0.2, 0.2, 0.45, 0.45,)) scene * hv.Box( -0.25, 0.3, 0.3, aspect=0.5) * hv.Box( 0, -0.2, 0.1) + \ scene * hv.Ellipse(-0.25, 0.3, 0.3, aspect=0.5) * hv.Ellipse(0, -0.2, 0.1) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: In addition, Element has key dimensions (kdims), value dimensions (vdims), and constant dimensions (cdims) to describe the semantics of indexing within the Element, the semantics of the underlying data contained by the Element, and any constant parameters associated with the object, respectively. Step2: A Curve is a set of values provided for some set of keys from a continuously indexable 1D coordinate system, where the plotted values will be connected up because they are assumed to be samples from a continuous relation. Step3: ErrorBars is a set of x-/y-coordinates with associated error values. Error values may be either symmetric or asymmetric, and thus can be supplied as an Nx3 or Nx4 array (or any of the alternative constructors Chart Elements allow). Step4: Area <a id='Area'></a> Step5: * Area between curves * Step6: Spread <a id='Spread'></a> Step7: Asymmetric Step8: Bars <a id='Bars'></a> Step9: Bars is an NdElement type, so by default it is sorted. To preserve the initial ordering specify the Dimension with values set to 'initial', or you can supply an explicit list of valid dimension keys. Step10: BoxWhisker <a id='BoxWhisker'></a> Step11: BoxWhisker Elements support any number of dimensions and may also be rotated. To style the boxes and whiskers, supply boxprops, whiskerprops, and flierprops. Step12: Histogram <a id='Histogram'></a> Step13: Histograms partition the x axis into discrete (but not necessarily regular) bins, showing counts in each as a bar. Step14: Scatter <a id='Scatter'></a> Step15: Scatter is the discrete equivalent of Curve, showing y values for discrete x values selected. See Points for more information. Step16: As you can see, Points is very similar to Scatter, and can produce some plots that look identical. However, the two Elements are very different semantically. For Scatter, the dots each show a dependent variable y for some x, such as in the Scatter example above where we selected regularly spaced values of x and then created a random number as the corresponding y. I.e., for Scatter, the y values are the data; the xs are just where the data values are located. For Points, both x and y are independent variables, known as key_dimensions in HoloViews Step17: The Scatter object expresses a dependent relationship between x and y, making it useful for combining with other similar Chart types, while the Points object expresses the relationship of two independent keys x and y with optional vdims (zero in this case), which makes Points objects meaningful to combine with the Raster types below. Step18: Such a plot wouldn't be meaningful for Scatter, but is a valid use for Points, where the x and y locations are independent variables representing coordinates, and the "data" is conveyed by the size and color of the dots. Step19: When supplying two dimensions to the Spikes object, the second dimension will be mapped onto the line height. Optionally, you may also supply a cmap and color_index to map color onto one of the dimensions. This way we can, for example, plot a mass spectrogram Step20: Another possibility is to draw a number of spike trains as you would encounter in neuroscience. Here we generate 10 separate random spike trains and distribute them evenly across the space by setting their position. 
By also declaring some yticks, each spike train can be labeled individually Step21: Finally, we may use Spikes to visualize marginal distributions as adjoined plots using the << adjoin operator Step22: VectorField <a id='VectorField'></a> Step23: As you can see above, here the x and y positions are chosen to make a regular grid. The arrow angles follow a sinusoidal ring pattern, and the arrow lengths fall off exponentially from the center, so this plot has four dimensions of data (direction and length for each x,y position). Step24: SideHistogram <a id='SideHistogram'></a> Step25: Chart3D Elements <a id='Chart3D Elements'></a> Step26: Surface is used for a set of gridded points whose associated value dimension represents samples from a continuous surface; it is the equivalent of a Curve but with two key dimensions instead of just one. Step27: Scatter3D is the equivalent of Scatter but for two key dimensions, rather than just one. Step28: Raster Elements <a id='Raster Elements'></a> Step29: QuadMesh <a id='QuadMesh'></a> Step30: QuadMesh may also be used to represent an arbitrary mesh of quadrilaterals by supplying three separate 2D arrays representing the coordinates of each quadrilateral in a 2D space. Note that when using QuadMesh in this mode, slicing and indexing semantics and most operations will currently not work. Step31: HeatMap <a id='HeatMap'></a> Step32: Image <a id='Image'></a> Step33: Notice how, because our declared coordinate system is continuous, we can slice with any floating-point value we choose. The appropriate range of the samples in the input numpy array will always be displayed, whether or not there are samples at those specific floating-point values. Step34: You can see how the RGB object is created from the original channels Step35: RGB also supports an optional alpha channel, which will be used as a mask revealing or hiding any Elements it is overlaid on top of Step36: HSV <a id='HSV'></a> Step37: You can see how this is created from the original channels Step38: Tabular Elements <a id='Tabular Elements'></a> Step39: Table <a id='Table'></a> Step40: Note that you can use select with tables, and once you select using a full, multidimensional key, you get an ItemTable (shown on the right) Step41: The Table is used as a common data structure that may be converted to any other HoloViews data structure using the TableConversion class. Step42: Annotation Elements <a id='Annotation Elements'></a> Step43: VLine and HLine <a id='VLine'></a><a id='HLine'></a> Step44: Spline <a id='Spline'></a> Step45: Text and Arrow <a id='Text'></a><a id='Arrow'></a> Step46: Paths <a id='Path Elements'></a> Step47: Contours <a id='Contours'></a> Step48: Polygons <a id='Polygons'></a> Step49: Polygons without a value are useful as annotation, but also allow us to draw arbitrary shapes. Step50: Bounds <a id='Bounds'></a> Step51: Box <a id='Box'></a> and Ellipse <a id='Ellipse'></a>
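A short hedged reminder of the two composition operators used throughout the element gallery above: `+` builds a side-by-side Layout, while `*` overlays elements on shared axes. The toy data is arbitrary and no rendering backend is assumed.

import numpy as np
import holoviews as hv

xs = np.linspace(0, 2 * np.pi, 50)
curve = hv.Curve((xs, np.sin(xs)))
dots = hv.Scatter((xs, np.sin(xs)))

layout = curve + dots    # hv.Layout: two separate plots
overlay = curve * dots   # hv.Overlay: both elements on one plot
print(type(layout).__name__, type(overlay).__name__)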
3,275
<ASSISTANT_TASK:> Python Code: def naivesum_list(N): Naively sum the first N integers A = 0 for i in list(range(N + 1)): A += i return A %load_ext memory_profiler %memit naivesum_list(10**4) %memit naivesum_list(10**5) %memit naivesum_list(10**6) %memit naivesum_list(10**7) %memit naivesum_list(10**8) def naivesum(N): Naively sum the first N integers A = 0 for i in range(N + 1): A += i return A %memit naivesum(10**4) %memit naivesum(10**5) %memit naivesum(10**6) %memit naivesum(10**7) %memit naivesum(10**8) def all_primes(N): Return all primes less than or equal to N. Parameters ---------- N : int Maximum number Returns ------- prime : generator Prime numbers primes = [] for n in range(2, N+1): is_n_prime = True for p in primes: if n%p == 0: is_n_prime = False break if is_n_prime: primes.append(n) yield n print("All prime numbers less than or equal to 20:") for p in all_primes(20): print(p) a = all_primes(10) next(a) next(a) next(a) next(a) next(a) for p in all_primes(100): if (1+p)%4 == 0: print("The prime {} is 4 * {} - 1.".format(p, int((1+p)/4))) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Iterators and Generators Step2: We will now see how much memory this uses Step4: We see that the memory usage is growing very rapidly - as the list gets large it's growing as $N$. Step6: We see that the memory usage is unchanged with $N$, making it practical to run much larger calculations. Step7: This code needs careful examination. First it defines the list of all prime numbers that it currently knows, primes (which is initially empty). Then it loops through all integers $n$ from $2$ to $N$ (ignoring $1$ as we know it's not prime). Step8: To see what the generator is actually doing, we can step through it one call at a time using the built in next function Step9: So, when the generator gets to the end of its iteration it raises an exception. As seen in previous sections, we could surround the next call with a try block to capture the StopIteration so that we can continue after it finishes. This is effectively what the for loop is doing.
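The memory argument above can also be seen directly with sys.getsizeof: a generator expression stays a small fixed-size object while the equivalent list grows with the number of elements. A small hedged sketch:

import sys

squares_list = [i * i for i in range(10000)]
squares_gen = (i * i for i in range(10000))

print(sys.getsizeof(squares_list))   # grows with the number of elements
print(sys.getsizeof(squares_gen))    # small, fixed-size generator object
print(sum(squares_gen))              # the generator still yields every value once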
3,276
<ASSISTANT_TASK:> Python Code: from __future__ import division, print_function # отключим всякие предупреждения Anaconda import warnings warnings.filterwarnings('ignore') import numpy as np import pandas as pd %matplotlib inline import seaborn as sns from matplotlib import pyplot as plt plt.rcParams['figure.figsize'] = (6,4) xx = np.linspace(0,1,50) plt.plot(xx, [2 * x * (1-x) for x in xx], label='gini') plt.plot(xx, [4 * x * (1-x) for x in xx], label='2*gini') plt.plot(xx, [-x * np.log2(x) - (1-x) * np.log2(1 - x) for x in xx], label='entropy') plt.plot(xx, [1 - max(x, 1-x) for x in xx], label='missclass') plt.plot(xx, [2 - 2 * max(x, 1-x) for x in xx], label='2*missclass') plt.xlabel('p+') plt.ylabel('criterion') plt.title('Критерии качества как функции от p+ (бинарная классификация)') plt.legend(); # первый класс np.random.seed(7) train_data = np.random.normal(size=(100, 2)) train_labels = np.zeros(100) # добавляем второй класс train_data = np.r_[train_data, np.random.normal(size=(100, 2), loc=2)] train_labels = np.r_[train_labels, np.ones(100)] def get_grid(data, eps=0.01): x_min, x_max = data[:, 0].min() - 1, data[:, 0].max() + 1 y_min, y_max = data[:, 1].min() - 1, data[:, 1].max() + 1 return np.meshgrid(np.arange(x_min, x_max, eps), np.arange(y_min, y_max, eps)) plt.rcParams['figure.figsize'] = (10,8) plt.scatter(train_data[:, 0], train_data[:, 1], c=train_labels, s=100, cmap='autumn', edgecolors='black', linewidth=1.5) plt.plot(range(-2,5), range(4,-3,-1)); from sklearn.tree import DecisionTreeClassifier # параметр min_samples_leaf указывает, при каком минимальном количестве # элементов в узле он будет дальше разделяться rs = 17 clf_tree = DecisionTreeClassifier(criterion='entropy', max_depth=3, random_state=rs) # обучаем дерево clf_tree.fit(train_data, train_labels) # немного кода для отображения разделяющей поверхности xx, yy = get_grid(train_data) predicted = clf_tree.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape) plt.pcolormesh(xx, yy, predicted, cmap='autumn') plt.scatter(train_data[:, 0], train_data[:, 1], c=train_labels, s=100, cmap='autumn', edgecolors='black', linewidth=1.5); # используем .dot формат для визуализации дерева from sklearn.tree import export_graphviz export_graphviz(clf_tree, feature_names=['x1', 'x2'], out_file='small_tree.dot', filled=True) !dot -Tpng small_tree.dot -o small_tree.png !rm small_tree.dot data = pd.DataFrame({'Возраст пилота': [19,64,18,20,38,49,55,25,29,31,33], 'Задержка рейса': [1,0,1,0,1,0,0,1,1,0,1]}) data data.sort_values('Возраст пилота') age_tree = DecisionTreeClassifier(random_state=17) age_tree.fit(data['Возраст пилота'].values.reshape(-1, 1), data['Задержка рейса'].values) export_graphviz(age_tree, feature_names=['Возраст пилота'], out_file='age_tree.dot', filled=True) !dot -Tpng age_tree.dot -o age_tree.png data2 = pd.DataFrame({'Возраст пилота': [19,64,18,20,38,49,55,25,29,31,33], 'Зарплата пилота': [25,80,22,36,37,59,74,70,33,102,88], 'Задержка рейса': [1,0,1,0,1,0,0,1,1,0,1]}) data2 data2.sort_values('Возраст пилота') data2.sort_values('Зарплата пилота') age_sal_tree = DecisionTreeClassifier(random_state=17) age_sal_tree.fit(data2[['Возраст пилота', 'Зарплата пилота']].values, data2['Задержка рейса'].values); export_graphviz(age_sal_tree, feature_names=['Возраст пилота', 'Зарплата пилота'], out_file='age_sal_tree.dot', filled=True) !dot -Tpng age_sal_tree.dot -o age_sal_tree.png from sklearn.utils import shuffle from sklearn.model_selection import train_test_split df_k = 
pd.read_csv('/Users/Nonna/Desktop/BananaML/BananaML/kaggle_flight/train_dataset.csv') df_k = shuffle(df_k) df_k = df_k.head(250) train_df = df_k[['Month', 'DayofMonth', 'DayOfWeek', 'UniqueCarrier', 'target']] train_df = train_df.fillna(train_df.mean()) train_df = pd.get_dummies(train_df, columns = ['Month', 'DayofMonth', 'DayOfWeek', 'UniqueCarrier']) x_train, x_test, y_train, y_test = train_test_split(train_df.drop('target', axis = 1), train_df.target, test_size=0.3, random_state=42) print(x_train.shape, x_test.shape) from sklearn.model_selection import GridSearchCV, cross_val_score from sklearn.tree import DecisionTreeClassifier from sklearn.metrics import roc_auc_score tree = DecisionTreeClassifier(max_depth=5, random_state=17) tree_params = {'max_depth': range(1,11), 'max_features': range(4,19)} tree_grid = GridSearchCV(tree, tree_params, cv=5, n_jobs=-1, verbose=True, scoring='roc_auc') tree_grid.fit(x_train, y_train) tree_grid.best_params_ tree_grid.best_score_ roc_auc_score(y_test, tree_grid.predict(x_test)) from sklearn.ensemble import RandomForestClassifier forest = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=17) print(np.mean(cross_val_score(forest, x_train, y_train, cv=5))) forest_params = {'max_depth': range(1,11), 'max_features': range(4,19)} forest_grid = GridSearchCV(forest, forest_params, cv=5, n_jobs=-1, verbose=True, scoring='roc_auc') forest_grid.fit(x_train, y_train) forest_grid.best_params_, forest_grid.best_score_ roc_auc_score(y_test, forest_grid.predict(x_test)) from sklearn.tree import export_graphviz export_graphviz(tree_grid.best_estimator_, feature_names=train_df.columns[:-1], out_file='flight_tree.dot', filled=True) !dot -Tpng flight_tree.dot -o flight_tree.png from sklearn.datasets import load_digits data = load_digits() X, y = data.data, data.target X[0,:].reshape([8,8]) f, axes = plt.subplots(1, 4, sharey=True, figsize=(16,6)) for i in range(4): axes[i].imshow(X[i,:].reshape([8,8])); np.bincount(y) X_train, X_holdout, y_train, y_holdout = train_test_split(X, y, test_size=0.3, random_state=17) tree = DecisionTreeClassifier(max_depth=5, random_state=17) %%time tree.fit(X_train, y_train) from sklearn.metrics import accuracy_score tree_pred = tree.predict(X_holdout) accuracy_score(y_holdout, tree_pred) tree_params = {'max_depth': [1, 2, 3, 5, 10, 20, 25, 30, 40, 50, 64], 'max_features': [1, 2, 3, 5, 10, 20 ,30, 50, 64]} tree_grid = GridSearchCV(tree, tree_params, cv=5, n_jobs=-1, verbose=True, scoring='accuracy') tree_grid.fit(X_train, y_train) tree_grid.best_params_, tree_grid.best_score_ accuracy_score(y_holdout, tree_grid.predict(X_holdout)) np.mean(cross_val_score(RandomForestClassifier(random_state=17), X_train, y_train, cv=5)) rf = RandomForestClassifier(random_state=17, n_jobs=-1).fit(X_train, y_train) accuracy_score(y_holdout, rf.predict(X_holdout)) def form_linearly_separable_data(n=500, x1_min=0, x1_max=30, x2_min=0, x2_max=30): data, target = [], [] for i in range(n): x1, x2 = np.random.randint(x1_min, x1_max), np.random.randint(x2_min, x2_max) if np.abs(x1 - x2) > 0.5: data.append([x1, x2]) target.append(np.sign(x1 - x2)) return np.array(data), np.array(target) X, y = form_linearly_separable_data() plt.scatter(X[:, 0], X[:, 1], c=y, cmap='autumn', edgecolors='black'); tree = DecisionTreeClassifier(random_state=17).fit(X, y) xx, yy = get_grid(X, eps=.05) predicted = tree.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape) plt.pcolormesh(xx, yy, predicted, cmap='autumn') plt.scatter(X[:, 0], X[:, 1], c=y, s=100, 
cmap='autumn', edgecolors='black', linewidth=1.5) plt.title('Easy task. Decision tree compexifies everything'); export_graphviz(tree, feature_names=['x1', 'x2'], out_file='deep_toy_tree.dot', filled=True) !dot -Tpng deep_toy_tree.dot -o deep_toy_tree.png ! jupyter nbconvert Desicion_trees_practise.ipynb --to html <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: And now a practical example Step2: Let's write a helper function that will return a grid for nicer visualization later on. Step3: Let's plot the data. Informally, the classification task in this case is to build some "good" boundary separating the two classes (the red points from the yellow ones). Intuition suggests that some smooth boundary separating the two classes, or at least just a straight line (a hyperplane in the $n$-dimensional case), will work well on new data. Step4: Let's try to separate these two classes by training a decision tree. In the tree we will use the max_depth parameter, which limits the depth of the tree. Let's visualize the resulting class-separating boundary. Step5: And what does the tree itself look like? We see that the tree "slices" the space into 7 rectangles (the tree has 7 leaves). In each such rectangle the tree's prediction is constant, determined by the prevailing class among its objects. Step6: <img src='small_tree.png'> Step7: Let's sort it by age in ascending order. Step8: Let's train a decision tree on this data (with no depth limit) and take a look at it. Step9: We see that the tree made use of 5 values against which the age is compared Step10: <img src='age_tree.png'> Step11: If we sort by age, the target class ("Задержка рейса", flight delay) changes (from 1 to 0 or back) 5 times. If we sort by salary, it changes 7 times. How will the tree choose features now? Let's see. Step12: <img src='age_sal_tree.png'> Step13: Now let's tune the tree's parameters with cross-validation. We will tune the maximum depth and the maximum number of features used at each split. The essence of how GridSearchCV works Step14: The best combination of parameters and the corresponding mean proportion of correct answers on cross-validation Step15: Let's draw the resulting tree Step16: <img src='flight_tree.png'> Step17: Loading the data. Step18: Here the images are represented by an 8 x 8 matrix (the intensity of white for each pixel). This matrix is then "unrolled" into a vector of length 64, which gives the feature description of an object. Step19: Let's draw a few of the handwritten digits; we see that they are recognizable. Step20: Let's look at the class balance in the sample; we see roughly equal numbers of zeros, ones, ..., nines. Step21: We set aside 70% of the sample (X_train, y_train) for training, and 30% will be the hold-out set (X_holdout, y_holdout). The hold-out set will not take part in tuning the model parameters in any way; at the very end, after that tuning, we will use it to assess the quality of the resulting model. Step22: Let's train a decision tree, again picking the parameters at random for now. Step23: Let's make predictions for the hold-out set. We see that the nearest neighbors method did much better. But so far we have been choosing the parameters at random. Step24: Now, just as before, let's tune the model parameters with cross-validation Step25: The best combination of parameters and the corresponding mean proportion of correct answers on cross-validation Step26: This is no longer 66%, but it is not 97% either. Step27: Results of the experiment Step28: However, the decision tree builds an overly complex boundary and itself turns out to be deep. Moreover, imagine how poorly the tree will generalize to the space outside the $30 \times 30$ square framing the training set. Step29: Such a complicated construction, even though the solution (a good separating surface) is just the straight line $x_1 = x_2$.
Step30: <img src='deep_toy_tree.png'>
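Step13 of the description above refers to the essence of how GridSearchCV works without spelling it out. The sketch below is a minimal illustration of what GridSearchCV automates, a loop over candidate parameter values scored by cross-validation; it assumes scikit-learn's standard API, and the helper name manual_grid_search and the reuse of x_train/y_train from the flight-delay code above are my own additions, not part of the original notebook.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def manual_grid_search(X, y, depths=range(1, 11), cv=5):
    # Try every candidate max_depth, cross-validate, keep the best one.
    best_depth, best_score = None, -np.inf
    for depth in depths:
        clf = DecisionTreeClassifier(max_depth=depth, random_state=17)
        score = cross_val_score(clf, X, y, cv=cv).mean()
        if score > best_score:
            best_depth, best_score = depth, score
    return best_depth, best_score

# Usage sketch (x_train, y_train as built in the code above):
# best_depth, best_score = manual_grid_search(x_train, y_train)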
3,277
<ASSISTANT_TASK:> Python Code: # Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr> # Denis Engemann <denis.engemann@gmail.com> # # License: BSD (3-clause) import matplotlib.pyplot as plt import mne from mne.datasets import spm_face from mne.preprocessing import ICA, create_eog_epochs from mne import io, combine_evoked from mne.minimum_norm import make_inverse_operator, apply_inverse print(__doc__) data_path = spm_face.data_path() subjects_dir = data_path + '/subjects' raw_fname = data_path + '/MEG/spm/SPM_CTF_MEG_example_faces%d_3D.ds' raw = io.read_raw_ctf(raw_fname % 1, preload=True) # Take first run # Here to save memory and time we'll downsample heavily -- this is not # advised for real data as it can effectively jitter events! raw.resample(120., npad='auto') picks = mne.pick_types(raw.info, meg=True, exclude='bads') raw.filter(1, 30, method='fir', fir_design='firwin') events = mne.find_events(raw, stim_channel='UPPT001') # plot the events to get an idea of the paradigm mne.viz.plot_events(events, raw.info['sfreq']) event_ids = {"faces": 1, "scrambled": 2} tmin, tmax = -0.2, 0.6 baseline = None # no baseline as high-pass is applied reject = dict(mag=5e-12) epochs = mne.Epochs(raw, events, event_ids, tmin, tmax, picks=picks, baseline=baseline, preload=True, reject=reject) # Fit ICA, find and remove major artifacts ica = ICA(n_components=0.95, random_state=0).fit(raw, decim=1, reject=reject) # compute correlation scores, get bad indices sorted by score eog_epochs = create_eog_epochs(raw, ch_name='MRT31-2908', reject=reject) eog_inds, eog_scores = ica.find_bads_eog(eog_epochs, ch_name='MRT31-2908') ica.plot_scores(eog_scores, eog_inds) # see scores the selection is based on ica.plot_components(eog_inds) # view topographic sensitivity of components ica.exclude += eog_inds[:1] # we saw the 2nd ECG component looked too dipolar ica.plot_overlay(eog_epochs.average()) # inspect artifact removal ica.apply(epochs) # clean data, default in place evoked = [epochs[k].average() for k in event_ids] contrast = combine_evoked(evoked, weights=[-1, 1]) # Faces - scrambled evoked.append(contrast) for e in evoked: e.plot(ylim=dict(mag=[-400, 400])) plt.show() # estimate noise covarariance noise_cov = mne.compute_covariance(epochs, tmax=0, method='shrunk') # The transformation here was aligned using the dig-montage. It's included in # the spm_faces dataset and is named SPM_dig_montage.fif. trans_fname = data_path + ('/MEG/spm/SPM_CTF_MEG_example_faces1_3D_' 'raw-trans.fif') maps = mne.make_field_map(evoked[0], trans_fname, subject='spm', subjects_dir=subjects_dir, n_jobs=1) evoked[0].plot_field(maps, time=0.170) # Make source space src = data_path + '/subjects/spm/bem/spm-oct-6-src.fif' bem = data_path + '/subjects/spm/bem/spm-5120-5120-5120-bem-sol.fif' forward = mne.make_forward_solution(contrast.info, trans_fname, src, bem) snr = 3.0 lambda2 = 1.0 / snr ** 2 method = 'dSPM' inverse_operator = make_inverse_operator(contrast.info, forward, noise_cov, loose=0.2, depth=0.8) # Compute inverse solution on contrast stc = apply_inverse(contrast, inverse_operator, lambda2, method, pick_ori=None) # stc.save('spm_%s_dSPM_inverse' % contrast.comment) # Plot contrast in 3D with PySurfer if available brain = stc.plot(hemi='both', subjects_dir=subjects_dir, initial_time=0.170, views=['ven'], clim={'kind': 'value', 'lims': [3., 5.5, 9.]}) # brain.save_image('dSPM_map.png') <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Load and filter data, set up epochs Step2: Visualize fields on MEG helmet Step3: Compute forward model Step4: Compute inverse solution
3,278
<ASSISTANT_TASK:> Python Code: import numpy as np import matplotlib.pyplot as plt %matplotlib inline import pyqg # create the model object m = pyqg.BTModel(L=2.*np.pi, nx=256, beta=0., H=1., rek=0., rd=None, tmax=40, dt=0.001, taveint=1, ntd=4) # in this example we used ntd=4, four threads # if your machine has more (or fewer) cores available, you could try changing it # generate McWilliams 84 IC condition fk = m.wv != 0 ckappa = np.zeros_like(m.wv2) ckappa[fk] = np.sqrt( m.wv2[fk]*(1. + (m.wv2[fk]/36.)**2) )**-1 nhx,nhy = m.wv2.shape Pi_hat = np.random.randn(nhx,nhy)*ckappa +1j*np.random.randn(nhx,nhy)*ckappa Pi = m.ifft( Pi_hat[np.newaxis,:,:] ) Pi = Pi - Pi.mean() Pi_hat = m.fft( Pi ) KEaux = m.spec_var( m.wv*Pi_hat ) pih = ( Pi_hat/np.sqrt(KEaux) ) qih = -m.wv2*pih qi = m.ifft(qih) # initialize the model with that initial condition m.set_q(qi) # define a quick function for plotting and visualize the initial condition def plot_q(m, qmax=40): fig, ax = plt.subplots() pc = ax.pcolormesh(m.x,m.y,m.q.squeeze(), cmap='RdBu_r') pc.set_clim([-qmax, qmax]) ax.set_xlim([0, 2*np.pi]) ax.set_ylim([0, 2*np.pi]); ax.set_aspect(1) plt.colorbar(pc) plt.title('Time = %g' % m.t) plt.show() plot_q(m) for _ in m.run_with_snapshots(tsnapstart=0, tsnapint=10): plot_q(m) energy = m.get_diagnostic('KEspec') enstrophy = m.get_diagnostic('Ensspec') # this makes it easy to calculate an isotropic spectrum from pyqg import diagnostic_tools as tools kr, energy_iso = tools.calc_ispec(m,energy.squeeze()) _, enstrophy_iso = tools.calc_ispec(m,enstrophy.squeeze()) ks = np.array([3.,80]) es = 5*ks**-4 plt.loglog(kr,energy_iso) plt.loglog(ks,es,'k--') plt.text(2.5,.0001,r'$k^{-4}$',fontsize=20) plt.ylim(1.e-10,1.e0) plt.xlabel('wavenumber') plt.title('Energy Spectrum') ks = np.array([3.,80]) es = 5*ks**(-5./3) plt.loglog(kr,enstrophy_iso) plt.loglog(ks,es,'k--') plt.text(5.5,.01,r'$k^{-5/3}$',fontsize=20) plt.ylim(1.e-3,1.e0) plt.xlabel('wavenumber') plt.title('Enstrophy Spectrum') <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: McWilliams performed freely-evolving 2D turbulence ($R_d = \infty$, $\beta = 0$) experiments on a $2\pi\times 2\pi$ periodic box. Step2: Initial condition Step3: Running the model Step4: The genius of McWilliams (1984) was that he showed that the initial random vorticity field organizes itself into strong coherent vortices. This is true in a significant part of the parameter space. This was previously suspected but unproven, mainly because people did not have the computer resources to run the simulation long enough. Thirty years later we can perform such simulations in a couple of minutes on a laptop!
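Step4 above states that the random initial vorticity organizes itself into strong coherent vortices. One standard way to see this quantitatively, used in the original McWilliams (1984) study, is the kurtosis of the vorticity field, which sits near 3 for a Gaussian field and grows once coherent vortices dominate. The helper below is a sketch of that diagnostic, assuming only that the model object m from the code above exposes its vorticity as m.q; the function name is my own.

import numpy as np

def vorticity_kurtosis(q):
    # Ku = <q^4> / <q^2>^2: about 3 for a Gaussian field,
    # much larger once isolated coherent vortices dominate.
    q = np.asarray(q).squeeze()
    return (q ** 4).mean() / (q ** 2).mean() ** 2

# Usage sketch, e.g. inside the snapshot loop above:
# print(m.t, vorticity_kurtosis(m.q))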
3,279
<ASSISTANT_TASK:> Python Code: # Import some libraries that will be necessary for working with data and displaying plots # To visualize plots in the notebook %matplotlib inline import matplotlib import matplotlib.pyplot as plt import numpy as np import scipy.io # To read matlab files import pylab from test_helper import Test x = [5, 4, 3, 4] print type(x[0]) # Create a list of floats containing the same elements as in x # x_f = <FILL IN> x_f = map(float, x) Test.assertTrue(np.all(x == x_f), 'Elements of both lists are not the same') Test.assertTrue(((type(x[-2])==int) & (type(x_f[-2])==float)),'Type conversion incorrect') # Numpy arrays can be created from numeric lists or using different numpy methods y = np.arange(8)+1 x = np.array(x_f) # Check the different data types involved print 'El tipo de la variable x_f es ', type(x_f) print 'El tipo de la variable x es ', type(x) print 'El tipo de la variable y es ', type(y) # Print the shapes of the numpy arrays print 'La variable y tiene dimensiones ', y.shape print 'La variable x tiene dimensiones ', x.shape #Complete the following exercises # Convert x into a variable x_matrix, of type `numpy.matrixlib.defmatrix.matrix` using command # np.matrix(). The resulting matrix should be of dimensions 4x1 x_matrix = np.matrix(x).T #x_matrix = <FILL IN> # Convert x into a variable x_array, of type `ndarray`, and dimensions 4x2 x_array = x[:,np.newaxis] #x_array = <FILL IN> # Reshape array y into a 4x2 matrix using command np.reshape() y = y.reshape((4,2)) #y = <FILL IN> Test.assertEquals(type(x_matrix),np.matrixlib.defmatrix.matrix,'x_matrix is not defined as a matrix') Test.assertEqualsHashed(x_matrix,'f4239d385605dc62b73c9a6f8945fdc65e12e43b','Incorrect variable x_matrix') Test.assertEquals(type(x_array),np.ndarray,'x_array is not defined as a numpy ndarray') Test.assertEqualsHashed(x_array,'f4239d385605dc62b73c9a6f8945fdc65e12e43b','Incorrect variable x_array') Test.assertEquals(type(y),np.ndarray,'y is not defined as a numpy ndarray') Test.assertEqualsHashed(y,'66d90401cb8ed9e1b888b76b0f59c23c8776ea42','Incorrect variable y') print 'Uso de flatten sobre la matriz x_matrix (de tipo matrix)' print 'x_matrix.flatten(): ', x_matrix.flatten() print 'Su tipo es: ', type(x_matrix.flatten()) print 'Sus dimensiones son: ', x_matrix.flatten().shape print '\nUso de flatten sobre la matriz y (de tipo ndarray)' print 'x_matrix.flatten(): ', y.flatten() print 'Su tipo es: ', type(y.flatten()) print 'Sus dimensiones son: ', y.flatten().shape print '\nUso de tolist sobre la matriz x_matrix (de tipo matrix) y el vector (2D) y (de tipo ndarray)' print 'x_matrix.tolist(): ', x_matrix.tolist() print 'y.tolist(): ', y.tolist() # Try to run the following command on variable x_matrix, and see what happens print x_array**2 # Try to run the following command on variable x_matrix, and see what happens print 'Remember that the shape of x_array is ', x_array.shape print 'Remember that the shape of y is ', y.shape # Complete the following exercises. 
You can print the partial results to visualize them # Multiply the 2-D array `y` by 2 y_by2 = y * 2 #y_by2 = <FILL IN> # Multiply each of the columns in `y` by the column vector x_array z_4_2 = x_array * y #z_4_2 = <FILL IN> # Obtain the matrix product of the transpose of x_array and y x_by_y = x_array.T.dot(y) #x_by_y = <FILL IN> # Repeat the previous calculation, this time using x_matrix (of type numpy matrix) instead of x_array # Note that in this case you do not need to use method dot() x_by_y2 = x_matrix.T * y #x_by_y2 = <FILL IN> # Multiply vector x_array by its transpose to obtain a 4 x 4 matrix x_4_4 = x_array.dot(x_array.T) #x_4_4 = <FILL IN> # Multiply the transpose of vector x_array by vector x_array. The result is the squared-norm of the vector x_norm2 = x_array.T.dot(x_array) #x_norm2 = <FILL IN> Test.assertEqualsHashed(y_by2,'120a3a46cdf65dc239cc9b128eb1336886c7c137','Incorrect result for variable y_by2') Test.assertEqualsHashed(z_4_2,'607730d96899ee27af576ecc7a4f1105d5b2cfed','Incorrect result for variable z_4_2') Test.assertEqualsHashed(x_by_y,'a3b24f229d1e02fa71e940adc0a4135779864358','Incorrect result for variable x_by_y') Test.assertEqualsHashed(x_by_y2,'a3b24f229d1e02fa71e940adc0a4135779864358','Incorrect result for variable x_by_y2') Test.assertEqualsHashed(x_4_4,'fff55c032faa93592e5d27bf13da9bb49c468687','Incorrect result for variable x_4_4') Test.assertEqualsHashed(x_norm2,'6eacac8f346bae7b5c72bcc3381c7140eaa98b48','Incorrect result for variable x_norm2') print z_4_2.shape print np.mean(z_4_2) print np.mean(z_4_2,axis=0) print np.mean(z_4_2,axis=1) # Previous check that you are working with the right matrices Test.assertEqualsHashed(z_4_2,'607730d96899ee27af576ecc7a4f1105d5b2cfed','Wrong value for variable z_4_2') Test.assertEqualsHashed(x_array,'f4239d385605dc62b73c9a6f8945fdc65e12e43b','Wrong value for variable x_array') # Vertically stack matrix z_4_2 with itself ex1_res = np.vstack((z_4_2,z_4_2)) #ex1_res = <FILL IN> # Horizontally stack matrix z_4_2 and vector x_array ex2_res = np.hstack((z_4_2,x_array)) #ex2_res = <FILL IN> # Horizontally stack a column vector of ones with the result of the first exercise (variable ex1_res) X = np.hstack((np.ones((8,1)),ex1_res)) #X = <FILL IN> Test.assertEqualsHashed(ex1_res,'31e60c0fa3e3accedc7db24339452085975a6659','Wrong value for variable ex1_res') Test.assertEqualsHashed(ex2_res,'189b90c5b2113d2415767915becb58c6525519b7','Wrong value for variable ex2_res') Test.assertEqualsHashed(X,'426c2708350ac469bc2fc4b521e781b36194ba23','Wrong value for variable X') # Keep last row of matrix X X_sub1 = X[-1,] #X_sub1 = <FILL IN> # Keep first column of the three first rows of X X_sub2 = X[:3,0] #X_sub2 = <FILL IN> # Keep first two columns of the three first rows of X X_sub3 = X[:3,:2] #X_sub3 = <FILL IN> # Invert the order of the rows of X X_sub4 = X[::-1,:] #X_sub4 = <FILL IN> Test.assertEqualsHashed(X_sub1,'0bcf8043a3dd569b31245c2e991b26686305b93f','Wrong value for variable X_sub1') Test.assertEqualsHashed(X_sub2,'7c43c1137480f3bfea7454458fcfa2bc042630ce','Wrong value for variable X_sub2') Test.assertEqualsHashed(X_sub3,'3cddc950ea2abc256192461728ef19d9e1d59d4c','Wrong value for variable X_sub3') Test.assertEqualsHashed(X_sub4,'33190dec8f3cbe3ebc9d775349665877d7b892dd','Wrong value for variable X_sub4') print X.shape print X.dot(X.T) print X.T.dot(X) print np.linalg.inv(X.T.dot(X)) #print np.linalg.inv(X.dot(X.T)) Test.assertEqualsHashed(X,'426c2708350ac469bc2fc4b521e781b36194ba23','Wrong value for variable X') # Obtain matrix Z Z = 
np.hstack((X,np.log(X[:,1:]))) #Z = <FILL IN> Test.assertEqualsHashed(Z,'d68d0394b57b4583ba95fc669c1c12aeec782410','Incorrect matrix Z') def log_transform(x): return np.hstack((x,np.log(x[1]),np.log(x[2]))) #return <FILL IN> Z_map = np.array(map(log_transform,X)) Test.assertEqualsHashed(Z_map,'d68d0394b57b4583ba95fc669c1c12aeec782410','Incorrect matrix Z') Z_lambda = np.array(map(lambda x: np.hstack((x,np.log(x[1]),np.log(x[2]))),X)) #Z_lambda = np.array(map(lambda x: <FILL IN>,X)) Test.assertEqualsHashed(Z_lambda,'d68d0394b57b4583ba95fc669c1c12aeec782410','Incorrect matrix Z') # Calculate variable Z_poly, using any method that you want Z_poly = np.array(map(lambda x: np.array([x[1]**k for k in range(4)]),X)) #Z_poly = <FILL IN> Test.assertEqualsHashed(Z_poly,'ba0f38316dffe901b6c7870d13ccceccebd75201','Wrong variable Z_poly') w_log = np.array([3.3, 0.5, -2.4, 3.7, -2.9]) w_poly = np.array([3.2, 4.5, -3.2, 0.7]) f_log = Z_lambda.dot(w_log) f_poly = Z_poly.dot(w_poly) #f_log = <FILL IN> #f_poly = <FILL IN> Test.assertEqualsHashed(f_log,'cf81496c5371a0b31931625040f460ed3481fb3d','Incorrect evaluation of the logarithmic model') Test.assertEqualsHashed(f_poly,'05307e30124daa103c970044828f24ee8b1a0bb9','Incorrect evaluation of the polynomial model') # Import additional libraries for this part from pyspark.mllib.linalg import DenseVector from pyspark.mllib.linalg import SparseVector from pyspark.mllib.regression import LabeledPoint # We create a sparse vector of length 900, with only 25 non-zero values Z = np.eye(30, k=5).flatten() print 'The dimension of array Z is ', Z.shape # Create a DenseVector containing the elements of array Z dense_V = DenseVector(Z) #dense_V = <FILL IN> #Create a SparseVector containing the elements of array Z #Nonzero elements are indexed by the following variable idx_nonzero idx_nonzero = np.nonzero(Z)[0] sparse_V = SparseVector(Z.shape[0], idx_nonzero, Z[idx_nonzero]) #sparse_V = <FILL IN> #Standard matrix operations can be computed on DenseVectors and SparseVectors #Calculate the square norm of vector sparse_V, by multiplying sparse_V by the transponse of dense_V print 'The norm of vector Z is', sparse_V.dot(dense_V) #print sparse_V #print dense_V Test.assertEqualsHashed(dense_V,'b331f43b23fda1ac19f5c29ee2c843fab6e34dfa', 'Incorrect vector dense_V') Test.assertEqualsHashed(sparse_V,'954fe70f3f9acd720219fc55a30c7c303d02f05d', 'Incorrect vector sparse_V') Test.assertEquals(type(dense_V),pyspark.mllib.linalg.DenseVector,'Incorrect type for dense_V') Test.assertEquals(type(sparse_V),pyspark.mllib.linalg.SparseVector,'Incorrect type for sparse_V') # Create a labeled point with a positive label and a dense feature vector. pos = LabeledPoint(1.0, [1.0, 0.0, 3.0]) # Create a labeled point with a negative label and a sparse feature vector. neg = LabeledPoint(0.0, sparse_V) # You can now easily access the label and features of the vector: print 'The label of the first labeled point is', pos.label print 'The features of the second labeled point are', neg.features <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: 1. Objectives Step2: Numpy arrays can be defined directly using methods such as np.arange(), np.ones(), np.zeros(), as well as random number generators. Alternatively, you can easily generate them from Python lists (or lists of lists) containing elements of numeric type. Step3: Some other useful Numpy methods are Step4: 2.2. Products and powers of numpy arrays and matrices Step5: 2.3. Numpy methods that can be carried out along different dimensions Step6: Other numpy methods for which you can specify the axis along which a certain operation should be carried out are Step7: 2.5. Slicing Step8: 2.6. Matrix inversion Step9: 2.7. Exercises Step10: 2.7.1. Non-linear transformations Step11: If you did not do that, repeat the previous exercise, this time using the map() method together with the function log_transform() Step12: Repeat the previous exercise once again using a lambda function Step13: 2.7.2. Polynomial transformations Step14: 2.7.3. Model evaluation Step15: 3. MLlib Data types Step16: DenseVectors can be created from lists or from numpy arrays Step17: 3.2. Labeled point
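The MLlib part of the description above introduces SparseVector and LabeledPoint separately. As a small illustration of how the two fit together, the sketch below converts each row of a NumPy feature matrix plus a label vector into a LabeledPoint with sparse features; the helper name to_labeled_points is my own and this is not one of the original exercises.

import numpy as np
from pyspark.mllib.linalg import SparseVector
from pyspark.mllib.regression import LabeledPoint

def to_labeled_points(X, y):
    # One LabeledPoint per row, storing only the non-zero features.
    n_features = X.shape[1]
    points = []
    for features, label in zip(X, y):
        idx = np.nonzero(features)[0]
        sv = SparseVector(n_features, idx, features[idx])
        points.append(LabeledPoint(float(label), sv))
    return points

# Usage sketch:
# X = np.eye(5); y = np.arange(5)
# print(to_labeled_points(X, y)[0])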
3,280
<ASSISTANT_TASK:> Python Code: True True = 13 True and False True or False False and False or True True or False and False # Importa dalla libreria solo i tre operatori logici from operator import and_, or_, not_ not_(or_(True, and_(False, True))) not (True or (False and True)) True == True 6*3 < 7*2 14*2 == 4*7 from operator import lt, le, gt, ge, eq eq(14*2, 4*7) if not 3 > 4: a = 3*2 else: a = 4*2 a print(a) def Test(x): return x > 5 and x < 10 Test(2) Test(6) def ThreeAnd(x,y,z): return x and y and z ThreeAnd(2>4, 1<2, 1==3) def Abs(x): if x >= 0: return x else: return -x Abs(-3) Abs(5) def F(a, b): if b > 0: return add else: return sub def G(a, b): return F(a,b)(a,b) G(-2,-3) def P(): return P() def Test(x, y): if x == 0: return 0 else: return y Test(0,P()) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: In any programming language, besides arithmetic expressions, it is also possible to evaluate logical expressions, using the logical operators and, or, and not Step2: QUESTION Step3: Alternatively, to declare a precedence among the operators you can use round parentheses Step4: Comparison operators Step5: Conditional expressions and predicates Step6: Exercises Step7: EXERCISE 1.5 Step8: and then evaluate the expression
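The description above covers logical operators, their precedence and predicates. The short sketch below is my own addition (not one of the original exercises); it makes two points from the notebook explicit: and binds more tightly than or, and both operators short-circuit, while arguments of an ordinary function call are always evaluated first, which is why the final Test(0, P()) cell above recurses forever.

# Precedence: 'and' binds more tightly than 'or'.
print(True or False and False)      # True, parsed as True or (False and False)
print((True or False) and False)    # False, parentheses override the precedence

# Short-circuiting: the right operand is skipped when the result is already known.
def noisy():
    print("evaluated!")
    return True

False and noisy()   # noisy() is never called
True or noisy()     # noisy() is never called here either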
3,281
<ASSISTANT_TASK:> Python Code: data = range(1, 6) pie = Pie(sizes=data) fig = Figure(marks=[pie], animation_duration=1000) # Add `animation_duration` (in milliseconds) to have smooth transitions display(fig) Nslices = 5 pie.sizes = np.random.rand(Nslices) pie.sort = True pie.selected_style = {"opacity": "1", "stroke": "white", "stroke-width": "2"} pie.unselected_style = {"opacity": "0.2"} pie.selected = [3] pie.selected = None pie.labels = ['{:.2f}'.format(d) for d in pie.sizes] fig pie.label_color = 'white' pie.font_size = '20px' pie.font_weight = 'normal' pie1 = Pie(sizes=np.random.rand(6), inner_radius=0.05) fig1 = Figure(marks=[pie1], animation_duration=1000) display(fig1) # As of now, the radius sizes are absolute, in pixels pie1.radius = 250 pie1.inner_radius = 80 # Angles are in radians, 0 being the top vertical pie1.start_angle = -90 pie1.end_angle = 90 pie1.y = 0.9 pie1.x = 0.4 pie1.radius = 320 pie1.stroke = 'brown' pie1.colors = ['orange', 'darkviolet'] pie1.opacities = [.1, 1] display(fig1) from bqplot import ColorScale, ColorAxis Nslices = 7 size_data = np.random.rand(Nslices) color_data = np.random.randn(Nslices) sc = ColorScale(scheme='Reds') # The ColorAxis gives a visual representation of its ColorScale ax = ColorAxis(scale=sc) pie2 = Pie(sizes=size_data, scales={'color': sc}, color=color_data) Figure(marks=[pie2], axes=[ax]) from datetime import datetime from bqplot.traits import convert_to_date from bqplot import DateScale, LinearScale, Axis avg_precipitation_days = [(d/30., 1-d/30.) for d in [2, 3, 4, 6, 12, 17, 23, 22, 15, 4, 1, 1]] temperatures = [9, 12, 16, 20, 22, 23, 22, 22, 22, 20, 15, 11] dates = [datetime(2010, k, 1) for k in range(1, 13)] sc_x = DateScale() sc_y = LinearScale() ax_x = Axis(scale=sc_x, label='month', tick_format='%B') ax_y = Axis(scale=sc_y, orientation='vertical', label='average temperature') pies = [Pie(sizes=precipit, x=date, y=temp, scales={"x": sc_x, "y": sc_y}, radius=30., stroke='navy', colors=['navy', 'navy'], opacities=[1, .1]) for precipit, date, temp in zip(avg_precipitation_days, dates, temperatures)] Figure(title='Kathmandu precipitation', marks=pies, axes=[ax_x, ax_y], padding_x=.05, padding_y=.1) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: As with all bqplot Marks, pie data can be dynamically modified Step2: Sort the pie slices by ascending size Step3: Setting different styles for selected slices Step4: For more on piechart interactions, see the Mark Interactions notebook Step5: Modify label styling Step6: Updating pie shape and style Step7: Change pie dimensions Step8: Moving the pie around Step9: Changing slice styles Step10: Representing an additional dimension using Color Step11: Positioning the Pie using custom scales
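Step1 above notes that, as with all bqplot marks, a pie's data can be modified dynamically through its traits. As a small sketch of that idea (my own addition, using only the Pie and Figure attributes already shown above), the helper below builds a labeled pie from a plain dict; reassigning its traits later updates the displayed chart with an animated transition.

from bqplot import Pie, Figure

def labeled_pie(values_by_name):
    # values_by_name, e.g. {'faces': 3.0, 'scrambled': 5.0}
    names = list(values_by_name.keys())
    sizes = [float(values_by_name[k]) for k in names]
    pie = Pie(sizes=sizes, labels=names)
    fig = Figure(marks=[pie], animation_duration=1000)
    return pie, fig

# Usage sketch:
# pie, fig = labeled_pie({'faces': 3, 'scrambled': 5})
# display(fig)
# pie.sizes = [1, 7]   # the already-displayed chart animates to the new values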
3,282
<ASSISTANT_TASK:> Python Code: import numpy as np a = np.arange(6) b = a print("a =\n",a) print("b =\n",b) b.shape = (2,3) # mudança no shape de b, print("\na shape =",a.shape) # altera o shape de a b[0,0] = -1 # mudança no conteúdo de b print("a =\n",a) # altera o conteudo de a print("\nid de a = ",id(a)) # id é um identificador único de objeto print("id de b = ",id(b)) # a e b possuem o mesmo id print('np.may_share_memory(a,b):',np.may_share_memory(a,b)) def cc(a): return a b = cc(a) print("id de a = ",id(a)) print("id de b = ",id(b)) print('np.may_share_memory(a,b):',np.may_share_memory(a,b)) a = np.arange(30) print("a =\n", a) print('a.shape:',a.shape) b = a.reshape( (5, 6)) print("b =\n", b) b[:, 0] = -1 print('b=\n',b) print("a =\n", a) c = a.reshape( (2, 3, 5) ) print("c =\n", c) print('c.base is a:',c.base is a) print('np.may_share_memory(a,c):',np.may_share_memory(a,c)) print('id(a),id(c):',id(a),id(c)) a = np.zeros( (5, 6)) print('a.shape:',a.shape) b = a[::2,::2] print('b.shape:',b.shape) b[:,:] = 1. print('b=\n', b) print('a=\n', a) print('b.base is a:',b.base is a) print('np.may_share_memory(a,b):',np.may_share_memory(a,b)) a = np.arange(25).reshape((5,5)) print('a=\n',a) b = a[:,0] print('b=',b) b[:] = np.arange(5) b[2] = 100 print('b=',b) print('a=\n',a) a = np.arange(25).reshape((5,5)) print('a=\n',a) b = np.arange(5) print('b=',b) print('a=\n',a) a = np.arange(24).reshape((4,6)) print('a:\n',a) at = a.T print('at:\n',at) print('at.shape',at.shape) print('np.may_share_memory(a,at):',np.may_share_memory(a,at)) a = np.arange(24).reshape((4,6)) print('a:\n',a) av = a.ravel() print('av.shape:',av.shape) print('av:\n',av) print('np.may_share_memory(a,av):',np.may_share_memory(a,av)) b = a.copy() c = np.array(a, copy=True) print("id de a = ",id(a)) print("id de b = ",id(b)) print("id de c = ",id(c)) print('np.may_share_memory(a,b):',np.may_share_memory(a,b)) print('np.may_share_memory(a,c):',np.may_share_memory(a,c)) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Note that even in the return value of a function, an explicit copy may not happen. See the example that follows Step2: Shallow copy Step3: Slice - slicing Step4: This other example is an attractive way to process a single column of a two-dimensional matrix, Step5: Transpose Step6: Ravel Step7: Deep copy
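The description above walks through shallow copies (views) versus deep copies. The sketch below (my own summary, not part of the original notebook) condenses the three common cases: basic slicing and reshape give views, fancy indexing gives a copy, and .copy() always gives an independent array; np.may_share_memory makes the difference visible.

import numpy as np

a = np.arange(12).reshape(3, 4)

view_slice = a[:, ::2]       # basic slicing -> view, shares memory with a
fancy_copy = a[:, [0, 2]]    # fancy indexing -> copy, own memory
deep_copy = a.copy()         # explicit deep copy

print(np.may_share_memory(a, view_slice))   # True
print(np.may_share_memory(a, fancy_copy))   # False
print(np.may_share_memory(a, deep_copy))    # False

# Writing through the view changes a; writing to the copies does not.
view_slice[0, 0] = -1
print(a[0, 0])   # -1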
3,283
<ASSISTANT_TASK:> Python Code: DON'T MODIFY ANYTHING IN THIS CELL import helper import problem_unittests as tests source_path = 'data/small_vocab_en' target_path = 'data/small_vocab_fr' source_text = helper.load_data(source_path) target_text = helper.load_data(target_path) view_sentence_range = (0, 10) DON'T MODIFY ANYTHING IN THIS CELL import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()}))) sentences = source_text.split('\n') word_counts = [len(sentence.split()) for sentence in sentences] print('Number of sentences: {}'.format(len(sentences))) print('Average number of words in a sentence: {}'.format(np.average(word_counts))) print() print('English sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) print() print('French sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int): Convert source and target text to proper word ids :param source_text: String that contains all the source text. :param target_text: String that contains all the target text. :param source_vocab_to_int: Dictionary to go from the source words to an id :param target_vocab_to_int: Dictionary to go from the target words to an id :return: A tuple of lists (source_id_text, target_id_text) source_text_ids = [[source_vocab_to_int.get(word, source_vocab_to_int['<UNK>']) for word in line.split()] for line in source_text.split('\n')] target_text_ids = [[target_vocab_to_int.get(word) for word in line.split()] + [target_vocab_to_int['<EOS>']] for line in target_text.split('\n')] return source_text_ids, target_text_ids DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_text_to_ids(text_to_ids) DON'T MODIFY ANYTHING IN THIS CELL helper.preprocess_and_save_data(source_path, target_path, text_to_ids) DON'T MODIFY ANYTHING IN THIS CELL import numpy as np import helper import problem_unittests as tests (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() DON'T MODIFY ANYTHING IN THIS CELL from distutils.version import LooseVersion import warnings import tensorflow as tf from tensorflow.python.layers.core import Dense # Check TensorFlow Version assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer' print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found. Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) def model_inputs(): Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences. 
:return: Tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length) input = tf.placeholder(tf.int32, [None, None], name='input') targets = tf.placeholder(tf.int32, [None, None], name='targets') learning_rate = tf.placeholder(tf.float32, name='learning_rate') keep_prob = tf.placeholder(tf.float32, name='keep_prob') target_sequence_length = tf.placeholder(tf.int32, [None], name='target_sequence_length') max_target_len = tf.reduce_max(input_tensor=target_sequence_length, name='max_target_len') source_sequence_length = tf.placeholder(tf.int32, [None], name='source_sequence_length') return input, targets, learning_rate, keep_prob, target_sequence_length, max_target_len, source_sequence_length DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_model_inputs(model_inputs) def process_decoder_input(target_data, target_vocab_to_int, batch_size): Preprocess target data for encoding :param target_data: Target Placehoder :param target_vocab_to_int: Dictionary to go from the target words to an id :param batch_size: Batch Size :return: Preprocessed target data ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1]) dec_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1) return dec_input DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_process_encoding_input(process_decoder_input) from imp import reload reload(tests) def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size): Create encoding layer :param rnn_inputs: Inputs for the RNN :param rnn_size: RNN Size :param num_layers: Number of layers :param keep_prob: Dropout keep probability :param source_sequence_length: a list of the lengths of each sequence in the batch :param source_vocab_size: vocabulary size of source data :param encoding_embedding_size: embedding size of source data :return: tuple (RNN output, RNN state) # Encoder embedding enc_embed_input = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size) # RNN cell def make_cell(rnn_size): enc_cell = tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2)) enc_cell = tf.contrib.rnn.DropoutWrapper(enc_cell, output_keep_prob=keep_prob) return enc_cell # RNN cell layer enc_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)]) enc_output, enc_state = tf.nn.dynamic_rnn(enc_cell, enc_embed_input, sequence_length=source_sequence_length, dtype=tf.float32) return enc_output, enc_state DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_encoding_layer(encoding_layer) def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_summary_length, output_layer, keep_prob): Create a decoding layer for training :param encoder_state: Encoder State :param dec_cell: Decoder RNN Cell :param dec_embed_input: Decoder embedded input :param target_sequence_length: The lengths of each sequence in the target batch :param max_summary_length: The length of the longest sequence in the batch :param output_layer: Function to apply the output layer :param keep_prob: Dropout keep probability :return: BasicDecoderOutput containing training logits and sample_id # Helper for the training process. Used by BasicDecoder to read inputs. 
training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=dec_embed_input, sequence_length=target_sequence_length, time_major=False) # Basic decoder training_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, training_helper, encoder_state, output_layer) # Perform dynamic decoding using the decoder training_decoder_output = tf.contrib.seq2seq.dynamic_decode(training_decoder, impute_finished=True, maximum_iterations=max_summary_length)[0] return training_decoder_output DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_decoding_layer_train(decoding_layer_train) def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob): Create a decoding layer for inference :param encoder_state: Encoder state :param dec_cell: Decoder RNN Cell :param dec_embeddings: Decoder embeddings :param start_of_sequence_id: GO ID :param end_of_sequence_id: EOS Id :param max_target_sequence_length: Maximum length of target sequences :param vocab_size: Size of decoder/target vocabulary :param decoding_scope: TenorFlow Variable Scope for decoding :param output_layer: Function to apply the output layer :param batch_size: Batch size :param keep_prob: Dropout keep probability :return: BasicDecoderOutput containing inference logits and sample_id # param keep_prob is unused # Helper for the inference process. start_tokens = tf.tile(tf.constant([start_of_sequence_id], dtype=tf.int32), [batch_size], name='start_tokens') inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(embedding=dec_embeddings, start_tokens=start_tokens, end_token=end_of_sequence_id) # Basic decoder inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, inference_helper, encoder_state, output_layer) # Perform dynamic decoding using the decoder inference_decoder_output = tf.contrib.seq2seq.dynamic_decode(inference_decoder, impute_finished=True, maximum_iterations=max_target_sequence_length)[0] # inference_decoder_output = tf.nn.dropout(inference_decoder_output, keep_prob) return inference_decoder_output DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_decoding_layer_infer(decoding_layer_infer) def decoding_layer(dec_input, encoder_state, target_sequence_length, max_target_sequence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, decoding_embedding_size): Create decoding layer :param dec_input: Decoder input :param encoder_state: Encoder state :param target_sequence_length: The lengths of each sequence in the target batch :param max_target_sequence_length: Maximum length of target sequences :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :param target_vocab_size: Size of target vocabulary :param batch_size: The size of the batch :param keep_prob: Dropout keep probability :param decoding_embedding_size: Decoding embedding size :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput) # 1. Decoder Embedding dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size])) dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input) # 2. 
Construct the decoder cell def make_cell(rnn_size): dec_cell = tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2)) dec_cell = tf.contrib.rnn.DropoutWrapper(dec_cell, output_keep_prob=keep_prob) return dec_cell dec_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)]) # 3. Dense layer to translate the decoder's output at each time output_layer = Dense(target_vocab_size, kernel_initializer = tf.truncated_normal_initializer(mean = 0.0, stddev=0.1)) # 4. Set up a training decoder and an inference decoder # Training Decoder with tf.variable_scope("decode"): # def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, # target_sequence_length, max_summary_length, # output_layer, keep_prob): training_decoder_output = decoding_layer_train( encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) with tf.variable_scope("decode", reuse=True): # def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, # end_of_sequence_id, max_target_sequence_length, # vocab_size, output_layer, batch_size, keep_prob): inference_decoder_output = decoding_layer_infer( encoder_state, dec_cell, dec_embeddings, target_vocab_to_int['<GO>'], target_vocab_to_int['<EOS>'], max_target_sequence_length, target_vocab_size, output_layer, batch_size, keep_prob) return training_decoder_output, inference_decoder_output DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_decoding_layer(decoding_layer) def seq2seq_model(input_data, target_data, keep_prob, batch_size, source_sequence_length, target_sequence_length, max_target_sentence_length, source_vocab_size, target_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int): Build the Sequence-to-Sequence part of the neural network :param input_data: Input placeholder :param target_data: Target placeholder :param keep_prob: Dropout keep probability placeholder :param batch_size: Batch Size :param source_sequence_length: Sequence Lengths of source sequences in the batch :param target_sequence_length: Sequence Lengths of target sequences in the batch :param source_vocab_size: Source vocabulary size :param target_vocab_size: Target vocabulary size :param enc_embedding_size: Decoder embedding size :param dec_embedding_size: Encoder embedding size :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput) # def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, # source_sequence_length, source_vocab_size, # encoding_embedding_size): # return enc_output, enc_state _, enc_state = encoding_layer(input_data, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, enc_embedding_size) # def process_decoder_input(target_data, target_vocab_to_int, batch_size): dec_input = process_decoder_input(target_data, target_vocab_to_int, batch_size) # def decoding_layer(dec_input, encoder_state, # target_sequence_length, max_target_sequence_length, # rnn_size, # num_layers, target_vocab_to_int, target_vocab_size, # batch_size, keep_prob, decoding_embedding_size): training_decoder_output, inference_decoder_output = decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, 
dec_embedding_size) return training_decoder_output, inference_decoder_output DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_seq2seq_model(seq2seq_model) # # Number of Epochs # epochs = 12 # # Batch Size # batch_size = 1500 # # RNN Size # rnn_size = 60 # # Number of Layers # num_layers = 1 # # Embedding Size # encoding_embedding_size = 300 # decoding_embedding_size = 300 # # Learning Rate # learning_rate = 0.001 # # Dropout Keep Probability # keep_probability = 0.8 # display_step = 10 ## out === 0.7858 # # Number of Epochs # epochs = 15 # # Batch Size # batch_size = 1500 # # RNN Size # rnn_size = 65 # # Number of Layers # num_layers = 1 # # Embedding Size # encoding_embedding_size = 300 # decoding_embedding_size = 300 # # Learning Rate # learning_rate = 0.001 # # Dropout Keep Probability # keep_probability = 0.85 # display_step = 10 ## out === 0.83 # # Number of Epochs # epochs = 20 # # Batch Size # batch_size = 2000 # # RNN Size # rnn_size = 70 # # Number of Layers # num_layers = 1 # # Embedding Size # encoding_embedding_size = 512 # decoding_embedding_size = 512 # # Learning Rate # learning_rate = 0.001 # # Dropout Keep Probability # keep_probability = 0.9 # display_step = 20 ##out == 0.87 # # Number of Epochs # epochs = 10 # # Batch Size # batch_size = 1000 # # RNN Size # rnn_size = 50 # # Number of Layers # num_layers = 1 # # Embedding Size # encoding_embedding_size = 512 # decoding_embedding_size = 512 # # Learning Rate # learning_rate = 0.01 # # Dropout Keep Probability # keep_probability = 0.85 # display_step = 20 ##out 0.71 # Number of Epochs epochs = 15 # Batch Size batch_size = 1500 # RNN Size rnn_size = 100 # Number of Layers num_layers = 1 # Embedding Size encoding_embedding_size = 512 decoding_embedding_size = 512 # Learning Rate learning_rate = 0.01 # Dropout Keep Probability keep_probability = 0.95 display_step = 20 DON'T MODIFY ANYTHING IN THIS CELL save_path = 'checkpoints/dev' (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() max_target_sentence_length = max([len(sentence) for sentence in source_int_text]) train_graph = tf.Graph() with train_graph.as_default(): input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs() #sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length') input_shape = tf.shape(input_data) train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, source_sequence_length, target_sequence_length, max_target_sequence_length, len(source_vocab_to_int), len(target_vocab_to_int), encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int) training_logits = tf.identity(train_logits.rnn_output, name='logits') inference_logits = tf.identity(inference_logits.sample_id, name='predictions') masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks') with tf.name_scope("optimization"): # Loss function cost = tf.contrib.seq2seq.sequence_loss( training_logits, targets, masks) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients) DON'T MODIFY ANYTHING IN THIS CELL def pad_sentence_batch(sentence_batch, pad_int): Pad 
sentences with <PAD> so that each sentence of a batch has the same length max_sentence = max([len(sentence) for sentence in sentence_batch]) return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch] def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int): Batch targets, sources, and the lengths of their sentences together for batch_i in range(0, len(sources)//batch_size): start_i = batch_i * batch_size # Slice the right amount for the batch sources_batch = sources[start_i:start_i + batch_size] targets_batch = targets[start_i:start_i + batch_size] # Pad pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int)) pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int)) # Need the lengths for the _lengths parameters pad_targets_lengths = [] for target in pad_targets_batch: pad_targets_lengths.append(len(target)) pad_source_lengths = [] for source in pad_sources_batch: pad_source_lengths.append(len(source)) yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths DON'T MODIFY ANYTHING IN THIS CELL def get_accuracy(target, logits): Calculate accuracy max_seq = max(target.shape[1], logits.shape[1]) if max_seq - target.shape[1]: target = np.pad( target, [(0,0),(0,max_seq - target.shape[1])], 'constant') if max_seq - logits.shape[1]: logits = np.pad( logits, [(0,0),(0,max_seq - logits.shape[1])], 'constant') return np.mean(np.equal(target, logits)) # Split data to training and validation sets train_source = source_int_text[batch_size:] train_target = target_int_text[batch_size:] valid_source = source_int_text[:batch_size] valid_target = target_int_text[:batch_size] (valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source, valid_target, batch_size, source_vocab_to_int['<PAD>'], target_vocab_to_int['<PAD>'])) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(epochs): for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate( get_batches(train_source, train_target, batch_size, source_vocab_to_int['<PAD>'], target_vocab_to_int['<PAD>'])): _, loss = sess.run( [train_op, cost], {input_data: source_batch, targets: target_batch, lr: learning_rate, target_sequence_length: targets_lengths, source_sequence_length: sources_lengths, keep_prob: keep_probability}) if batch_i % display_step == 0 and batch_i > 0: batch_train_logits = sess.run( inference_logits, {input_data: source_batch, source_sequence_length: sources_lengths, target_sequence_length: targets_lengths, keep_prob: 1.0}) batch_valid_logits = sess.run( inference_logits, {input_data: valid_sources_batch, source_sequence_length: valid_sources_lengths, target_sequence_length: valid_targets_lengths, keep_prob: 1.0}) train_acc = get_accuracy(target_batch, batch_train_logits) valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits) print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}' .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss)) # Save Model saver = tf.train.Saver() saver.save(sess, save_path) print('Model Trained and Saved') DON'T MODIFY ANYTHING IN THIS CELL # Save parameters for checkpoint helper.save_params(save_path) DON'T MODIFY ANYTHING IN THIS CELL import tensorflow as tf import numpy as np import helper import problem_unittests as tests _, (source_vocab_to_int, 
target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess() load_path = helper.load_params() def sentence_to_seq(sentence, vocab_to_int): Convert a sentence to a sequence of ids :param sentence: String :param vocab_to_int: Dictionary to go from the words to an id :return: List of word ids converted = [vocab_to_int.get(word, vocab_to_int.get('<UNK>')) for word in sentence.lower().split()] return converted DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_sentence_to_seq(sentence_to_seq) translate_sentence = 'he saw a old yellow truck .' DON'T MODIFY ANYTHING IN THIS CELL translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load saved model loader = tf.train.import_meta_graph(load_path + '.meta') loader.restore(sess, load_path) input_data = loaded_graph.get_tensor_by_name('input:0') logits = loaded_graph.get_tensor_by_name('predictions:0') target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0') source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0') keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size, target_sequence_length: [len(translate_sentence)*2]*batch_size, source_sequence_length: [len(translate_sentence)]*batch_size, keep_prob: 1.0})[0] print('Input') print(' Word Ids: {}'.format([i for i in translate_sentence])) print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence])) print('\nPrediction') print(' Word Ids: {}'.format([i for i in translate_logits])) print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits]))) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Language Translation Step3: Explore the Data Step6: Implement Preprocessing Function Step8: Preprocess all the data and save it Step10: Check Point Step12: Check the Version of TensorFlow and Access to GPU Step15: Build the Neural Network Step18: Process Decoder Input Step21: Encoding Step24: Decoding - Training Step27: Decoding - Inference Step30: Build the Decoding Layer Step33: Build the Neural Network Step34: Neural Network Training Step36: Build the Graph Step40: Batch and pad the source and target sequences Step43: Train Step45: Save Parameters Step47: Checkpoint Step50: Sentence to Sequence Step52: Translate
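The 'Process Decoder Input' step in the description above is easiest to see on a toy batch: the last token of every target sequence is dropped and the <GO> id is prepended. The plain-NumPy sketch below mirrors what process_decoder_input does with tf.strided_slice and tf.concat in the code above; the function name and the toy ids are made up for illustration.

import numpy as np

def process_decoder_input_np(target_batch, go_id):
    # Drop the last column and prepend the <GO> id to every sequence.
    ending = target_batch[:, :-1]
    go_col = np.full((target_batch.shape[0], 1), go_id, dtype=target_batch.dtype)
    return np.concatenate([go_col, ending], axis=1)

# Usage sketch (pretend 1 = <GO> and 3 = <EOS>):
batch = np.array([[11, 12, 3],
                  [21, 22, 3]])
print(process_decoder_input_np(batch, go_id=1))
# [[ 1 11 12]
#  [ 1 21 22]]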
3,284
<ASSISTANT_TASK:> Python Code: #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ! pip uninstall -y tensorflow ! pip install -q tf-nightly ! pip install -q tensorflow-model-optimization import tensorflow as tf import numpy as np import tensorflow_model_optimization as tfmot import tempfile input_shape = [20] x_train = np.random.randn(1, 20).astype(np.float32) y_train = tf.keras.utils.to_categorical(np.random.randn(1), num_classes=20) def setup_model(): model = tf.keras.Sequential([ tf.keras.layers.Dense(20, input_shape=input_shape), tf.keras.layers.Flatten() ]) return model def setup_pretrained_weights(): model= setup_model() model.compile( loss=tf.keras.losses.categorical_crossentropy, optimizer='adam', metrics=['accuracy'] ) model.fit(x_train, y_train) _, pretrained_weights = tempfile.mkstemp('.tf') model.save_weights(pretrained_weights) return pretrained_weights def setup_pretrained_model(): model = setup_model() pretrained_weights = setup_pretrained_weights() model.load_weights(pretrained_weights) return model setup_model() pretrained_weights = setup_pretrained_weights() base_model = setup_model() base_model.load_weights(pretrained_weights) # optional but recommended for model accuracy quant_aware_model = tfmot.quantization.keras.quantize_model(base_model) quant_aware_model.summary() # Create a base model base_model = setup_model() base_model.load_weights(pretrained_weights) # optional but recommended for model accuracy # Helper function uses `quantize_annotate_layer` to annotate that only the # Dense layers should be quantized. def apply_quantization_to_dense(layer): if isinstance(layer, tf.keras.layers.Dense): return tfmot.quantization.keras.quantize_annotate_layer(layer) return layer # Use `tf.keras.models.clone_model` to apply `apply_quantization_to_dense` # to the layers of the model. annotated_model = tf.keras.models.clone_model( base_model, clone_function=apply_quantization_to_dense, ) # Now that the Dense layers are annotated, # `quantize_apply` actually makes the model quantization aware. quant_aware_model = tfmot.quantization.keras.quantize_apply(annotated_model) quant_aware_model.summary() print(base_model.layers[0].name) # Use `quantize_annotate_layer` to annotate that the `Dense` layer # should be quantized. i = tf.keras.Input(shape=(20,)) x = tfmot.quantization.keras.quantize_annotate_layer(tf.keras.layers.Dense(10))(i) o = tf.keras.layers.Flatten()(x) annotated_model = tf.keras.Model(inputs=i, outputs=o) # Use `quantize_apply` to actually make the model quantization aware. quant_aware_model = tfmot.quantization.keras.quantize_apply(annotated_model) # For deployment purposes, the tool adds `QuantizeLayer` after `InputLayer` so that the # quantized model can take in float inputs instead of only uint8. quant_aware_model.summary() # Use `quantize_annotate_layer` to annotate that the `Dense` layer # should be quantized. 
annotated_model = tf.keras.Sequential([ tfmot.quantization.keras.quantize_annotate_layer(tf.keras.layers.Dense(20, input_shape=input_shape)), tf.keras.layers.Flatten() ]) # Use `quantize_apply` to actually make the model quantization aware. quant_aware_model = tfmot.quantization.keras.quantize_apply(annotated_model) quant_aware_model.summary() # Define the model. base_model = setup_model() base_model.load_weights(pretrained_weights) # optional but recommended for model accuracy quant_aware_model = tfmot.quantization.keras.quantize_model(base_model) # Save or checkpoint the model. _, keras_model_file = tempfile.mkstemp('.h5') quant_aware_model.save(keras_model_file) # `quantize_scope` is needed for deserializing HDF5 models. with tfmot.quantization.keras.quantize_scope(): loaded_model = tf.keras.models.load_model(keras_model_file) loaded_model.summary() base_model = setup_pretrained_model() quant_aware_model = tfmot.quantization.keras.quantize_model(base_model) # Typically you train the model here. converter = tf.lite.TFLiteConverter.from_keras_model(quant_aware_model) converter.optimizations = [tf.lite.Optimize.DEFAULT] quantized_tflite_model = converter.convert() LastValueQuantizer = tfmot.quantization.keras.quantizers.LastValueQuantizer MovingAverageQuantizer = tfmot.quantization.keras.quantizers.MovingAverageQuantizer class DefaultDenseQuantizeConfig(tfmot.quantization.keras.QuantizeConfig): # Configure how to quantize weights. def get_weights_and_quantizers(self, layer): return [(layer.kernel, LastValueQuantizer(num_bits=8, symmetric=True, narrow_range=False, per_axis=False))] # Configure how to quantize activations. def get_activations_and_quantizers(self, layer): return [(layer.activation, MovingAverageQuantizer(num_bits=8, symmetric=False, narrow_range=False, per_axis=False))] def set_quantize_weights(self, layer, quantize_weights): # Add this line for each item returned in `get_weights_and_quantizers` # , in the same order layer.kernel = quantize_weights[0] def set_quantize_activations(self, layer, quantize_activations): # Add this line for each item returned in `get_activations_and_quantizers` # , in the same order. layer.activation = quantize_activations[0] # Configure how to quantize outputs (may be equivalent to activations). def get_output_quantizers(self, layer): return [] def get_config(self): return {} quantize_annotate_layer = tfmot.quantization.keras.quantize_annotate_layer quantize_annotate_model = tfmot.quantization.keras.quantize_annotate_model quantize_scope = tfmot.quantization.keras.quantize_scope class CustomLayer(tf.keras.layers.Dense): pass model = quantize_annotate_model(tf.keras.Sequential([ quantize_annotate_layer(CustomLayer(20, input_shape=(20,)), DefaultDenseQuantizeConfig()), tf.keras.layers.Flatten() ])) # `quantize_apply` requires mentioning `DefaultDenseQuantizeConfig` with `quantize_scope` # as well as the custom Keras layer. with quantize_scope( {'DefaultDenseQuantizeConfig': DefaultDenseQuantizeConfig, 'CustomLayer': CustomLayer}): # Use `quantize_apply` to actually make the model quantization aware. quant_aware_model = tfmot.quantization.keras.quantize_apply(model) quant_aware_model.summary() quantize_annotate_layer = tfmot.quantization.keras.quantize_annotate_layer quantize_annotate_model = tfmot.quantization.keras.quantize_annotate_model quantize_scope = tfmot.quantization.keras.quantize_scope class ModifiedDenseQuantizeConfig(DefaultDenseQuantizeConfig): # Configure weights to quantize with 4-bit instead of 8-bits. 
def get_weights_and_quantizers(self, layer): return [(layer.kernel, LastValueQuantizer(num_bits=4, symmetric=True, narrow_range=False, per_axis=False))] model = quantize_annotate_model(tf.keras.Sequential([ # Pass in modified `QuantizeConfig` to modify this Dense layer. quantize_annotate_layer(tf.keras.layers.Dense(20, input_shape=(20,)), ModifiedDenseQuantizeConfig()), tf.keras.layers.Flatten() ])) # `quantize_apply` requires mentioning `ModifiedDenseQuantizeConfig` with `quantize_scope`: with quantize_scope( {'ModifiedDenseQuantizeConfig': ModifiedDenseQuantizeConfig}): # Use `quantize_apply` to actually make the model quantization aware. quant_aware_model = tfmot.quantization.keras.quantize_apply(model) quant_aware_model.summary() quantize_annotate_layer = tfmot.quantization.keras.quantize_annotate_layer quantize_annotate_model = tfmot.quantization.keras.quantize_annotate_model quantize_scope = tfmot.quantization.keras.quantize_scope class ModifiedDenseQuantizeConfig(DefaultDenseQuantizeConfig): def get_activations_and_quantizers(self, layer): # Skip quantizing activations. return [] def set_quantize_activations(self, layer, quantize_activations): # Empty since `get_activaations_and_quantizers` returns # an empty list. return model = quantize_annotate_model(tf.keras.Sequential([ # Pass in modified `QuantizeConfig` to modify this Dense layer. quantize_annotate_layer(tf.keras.layers.Dense(20, input_shape=(20,)), ModifiedDenseQuantizeConfig()), tf.keras.layers.Flatten() ])) # `quantize_apply` requires mentioning `ModifiedDenseQuantizeConfig` with `quantize_scope`: with quantize_scope( {'ModifiedDenseQuantizeConfig': ModifiedDenseQuantizeConfig}): # Use `quantize_apply` to actually make the model quantization aware. quant_aware_model = tfmot.quantization.keras.quantize_apply(model) quant_aware_model.summary() quantize_annotate_layer = tfmot.quantization.keras.quantize_annotate_layer quantize_annotate_model = tfmot.quantization.keras.quantize_annotate_model quantize_scope = tfmot.quantization.keras.quantize_scope class FixedRangeQuantizer(tfmot.quantization.keras.quantizers.Quantizer): Quantizer which forces outputs to be between -1 and 1. def build(self, tensor_shape, name, layer): # Not needed. No new TensorFlow variables needed. return {} def __call__(self, inputs, training, weights, **kwargs): return tf.keras.backend.clip(inputs, -1.0, 1.0) def get_config(self): # Not needed. No __init__ parameters to serialize. return {} class ModifiedDenseQuantizeConfig(DefaultDenseQuantizeConfig): # Configure weights to quantize with 4-bit instead of 8-bits. def get_weights_and_quantizers(self, layer): # Use custom algorithm defined in `FixedRangeQuantizer` instead of default Quantizer. return [(layer.kernel, FixedRangeQuantizer())] model = quantize_annotate_model(tf.keras.Sequential([ # Pass in modified `QuantizeConfig` to modify this `Dense` layer. quantize_annotate_layer(tf.keras.layers.Dense(20, input_shape=(20,)), ModifiedDenseQuantizeConfig()), tf.keras.layers.Flatten() ])) # `quantize_apply` requires mentioning `ModifiedDenseQuantizeConfig` with `quantize_scope`: with quantize_scope( {'ModifiedDenseQuantizeConfig': ModifiedDenseQuantizeConfig}): # Use `quantize_apply` to actually make the model quantization aware. quant_aware_model = tfmot.quantization.keras.quantize_apply(model) quant_aware_model.summary() <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Comprehensive guide to quantization-aware training Step2: Define a quantization-aware model Step3: Quantize only some of the layers Step4: This example uses the layer type to decide what to quantize, but the easiest way to quantize a specific layer is to set its name property and look for that name in the clone_function. Step5: More readable, but potentially lower model accuracy Step6: Sequential example Step7: Checkpointing and deserialization Step8: Create and deploy the quantized model Step9: Experiment with quantization Step10: Quantize a custom Keras layer Step11: Modify the quantization parameters Step12: Applying the configuration is the same as in the "Experiment with quantization" use case. Step13: Modify which parts of the layer to quantize Step14: Applying the configuration is the same as in the "Experiment with quantization" use case. Step16: Use a custom quantization algorithm Step17: Applying the configuration is the same as in the "Experiment with quantization" use case.
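Step 4 above describes quantizing a specific layer by matching its name in the clone_function, whereas the accompanying code only matches on layer type (isinstance checks). A minimal sketch of the name-based variant follows; the only assumption beyond the code above is that we target the first layer of base_model, whose name the accompanying cell prints.

# Minimal sketch (assumption: target the first layer of base_model by name).
target_name = base_model.layers[0].name

def apply_quantization_by_name(layer):
    # Annotate only the layer whose name matches the chosen target.
    if layer.name == target_name:
        return tfmot.quantization.keras.quantize_annotate_layer(layer)
    return layer

annotated_by_name = tf.keras.models.clone_model(
    base_model,
    clone_function=apply_quantization_by_name,
)

# `quantize_apply` then makes only that layer quantization aware.
quant_by_name_model = tfmot.quantization.keras.quantize_apply(annotated_by_name)
quant_by_name_model.summary()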
3,285
<ASSISTANT_TASK:> Python Code: import matplotlib.pyplot as plt from IPython.display import Image %matplotlib inline # image courtesy of Raschka, Sebastian. Python machine learning. Birmingham, UK: Packt Publishing, 2015. Print. Image(filename='learning-curve.png', width=600) from sklearn import datasets import numpy as np iris = datasets.load_iris() X = iris.data y = iris.target from sklearn.cross_validation import train_test_split X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.6, random_state=0) import matplotlib.pyplot as plt from sklearn.learning_curve import learning_curve def plot_learning_curve(model, X_train, y_train): # code adapted into function from ch6 of Raschka, Sebastian. Python machine learning. Birmingham, UK: Packt Publishing, 2015. Print. train_sizes, train_scores, test_scores =\ learning_curve(estimator=model, X=X_train, y=y_train, train_sizes=np.linspace(0.1, 1.0, 10), cv=10, n_jobs=1) train_mean = np.mean(train_scores, axis=1) train_std = np.std(train_scores, axis=1) test_mean = np.mean(test_scores, axis=1) test_std = np.std(test_scores, axis=1) plt.plot(train_sizes, train_mean, color='blue', marker='o', markersize=5, label='training accuracy') plt.fill_between(train_sizes, train_mean + train_std, train_mean - train_std, alpha=0.15, color='blue') plt.plot(train_sizes, test_mean, color='green', linestyle='--', marker='s', markersize=5, label='cross-validation accuracy') plt.fill_between(train_sizes, test_mean + test_std, test_mean - test_std, alpha=0.15, color='green') plt.grid() plt.xlabel('Number of training samples') plt.ylabel('Accuracy') plt.legend(loc='lower right') plt.tight_layout() plt.show() from sklearn.preprocessing import StandardScaler from sklearn.linear_model import LogisticRegression from sklearn.pipeline import Pipeline pipe_lr = Pipeline([('scl', StandardScaler()), ('clf', LogisticRegression(random_state=0))]) plot_learning_curve(pipe_lr, X_train, y_train) # Your code goes here # Your code goes here rng = np.random.RandomState(0) n_samples_1 = 1000 n_samples_2 = 100 X_unbalanced = np.r_[1.5 * rng.randn(n_samples_1, 2), 0.5 * rng.randn(n_samples_2, 2) + [2, 2]] y_unbalanced = [0] * (n_samples_1) + [1] * (n_samples_2) from sklearn import svm # fit the model and get the separating hyperplane model = svm.SVC(kernel='linear', C=1.0) model.fit(X_unbalanced, y_unbalanced) w = model.coef_[0] a = -w[0] / w[1] xx = np.linspace(-5, 5) yy = a * xx - model.intercept_[0] / w[1] # plot separating hyperplanes and samples h0 = plt.plot(xx, yy, 'k-', label='no weights') plt.scatter(X_unbalanced[:, 0], X_unbalanced[:, 1], c=y_unbalanced, cmap=plt.cm.Paired) plt.axis('tight') plt.show() # Your code goes here <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Plotting our own learning curves Step2: Notice how we plot the standard deviation too; in addition to seeing whether the training and test accuracy converge, we can see how much variation exists across the k-folds of the training runs at each sample size. This variation can in itself help us determine whether or not our model suffers from high variance. Step3: Exercise Step4: Tuning an SVM classifier for an unbalanced dataset Step5: Notice how the separating hyperplane fails to capture a high percentage of the positive examples. Assuming we wish to capture more of the positive examples, we can use the class_weight parameter of svm.SVC to add more emphasis.
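A minimal sketch of the class_weight idea from step 5, reusing X_unbalanced, y_unbalanced, xx and yy from the accompanying code; the weight of 10 for the minority class is an illustrative assumption, not a value from the original notebook.

from sklearn import svm

# Refit the linear SVM, up-weighting the minority class (label 1).
# Any weight > 1 pushes the separating hyperplane toward the majority class.
weighted_model = svm.SVC(kernel='linear', C=1.0, class_weight={1: 10})
weighted_model.fit(X_unbalanced, y_unbalanced)

# Separating hyperplane of the weighted model.
ww = weighted_model.coef_[0]
wa = -ww[0] / ww[1]
wyy = wa * xx - weighted_model.intercept_[0] / ww[1]

# Plot both hyperplanes over the samples for comparison.
plt.plot(xx, yy, 'k-', label='no weights')
plt.plot(xx, wyy, 'k--', label='with class weights')
plt.scatter(X_unbalanced[:, 0], X_unbalanced[:, 1], c=y_unbalanced, cmap=plt.cm.Paired)
plt.legend()
plt.axis('tight')
plt.show()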
3,286
<ASSISTANT_TASK:> Python Code: %matplotlib inline from __future__ import print_function import numpy as np from scipy import stats import pandas as pd import matplotlib.pyplot as plt import statsmodels.api as sm from statsmodels.graphics.api import qqplot print(sm.datasets.sunspots.NOTE) dta = sm.datasets.sunspots.load_pandas().data dta.index = pd.Index(sm.tsa.datetools.dates_from_range('1700', '2008')) del dta["YEAR"] dta.plot(figsize=(12,8)); fig = plt.figure(figsize=(12,8)) ax1 = fig.add_subplot(211) fig = sm.graphics.tsa.plot_acf(dta.values.squeeze(), lags=40, ax=ax1) ax2 = fig.add_subplot(212) fig = sm.graphics.tsa.plot_pacf(dta, lags=40, ax=ax2) arma_mod20 = sm.tsa.ARMA(dta, (2,0)).fit(disp=False) print(arma_mod20.params) arma_mod30 = sm.tsa.ARMA(dta, (3,0)).fit(disp=False) print(arma_mod20.aic, arma_mod20.bic, arma_mod20.hqic) print(arma_mod30.params) print(arma_mod30.aic, arma_mod30.bic, arma_mod30.hqic) sm.stats.durbin_watson(arma_mod30.resid.values) fig = plt.figure(figsize=(12,8)) ax = fig.add_subplot(111) ax = arma_mod30.resid.plot(ax=ax); resid = arma_mod30.resid stats.normaltest(resid) fig = plt.figure(figsize=(12,8)) ax = fig.add_subplot(111) fig = qqplot(resid, line='q', ax=ax, fit=True) fig = plt.figure(figsize=(12,8)) ax1 = fig.add_subplot(211) fig = sm.graphics.tsa.plot_acf(resid.values.squeeze(), lags=40, ax=ax1) ax2 = fig.add_subplot(212) fig = sm.graphics.tsa.plot_pacf(resid, lags=40, ax=ax2) r,q,p = sm.tsa.acf(resid.values.squeeze(), qstat=True) data = np.c_[range(1,41), r[1:], q, p] table = pd.DataFrame(data, columns=['lag', "AC", "Q", "Prob(>Q)"]) print(table.set_index('lag')) predict_sunspots = arma_mod30.predict('1990', '2012', dynamic=True) print(predict_sunspots) fig, ax = plt.subplots(figsize=(12, 8)) ax = dta.ix['1950':].plot(ax=ax) fig = arma_mod30.plot_predict('1990', '2012', dynamic=True, ax=ax, plot_insample=False) def mean_forecast_err(y, yhat): return y.sub(yhat).mean() mean_forecast_err(dta.SUNACTIVITY, predict_sunspots) from statsmodels.tsa.arima_process import arma_generate_sample, ArmaProcess np.random.seed(1234) # include zero-th lag arparams = np.array([1, .75, -.65, -.55, .9]) maparams = np.array([1, .65]) arma_t = ArmaProcess(arparams, maparams) arma_t.isinvertible arma_t.isstationary fig = plt.figure(figsize=(12,8)) ax = fig.add_subplot(111) ax.plot(arma_t.generate_sample(nsample=50)); arparams = np.array([1, .35, -.15, .55, .1]) maparams = np.array([1, .65]) arma_t = ArmaProcess(arparams, maparams) arma_t.isstationary arma_rvs = arma_t.generate_sample(nsample=500, burnin=250, scale=2.5) fig = plt.figure(figsize=(12,8)) ax1 = fig.add_subplot(211) fig = sm.graphics.tsa.plot_acf(arma_rvs, lags=40, ax=ax1) ax2 = fig.add_subplot(212) fig = sm.graphics.tsa.plot_pacf(arma_rvs, lags=40, ax=ax2) arma11 = sm.tsa.ARMA(arma_rvs, (1,1)).fit(disp=False) resid = arma11.resid r,q,p = sm.tsa.acf(resid, qstat=True) data = np.c_[range(1,41), r[1:], q, p] table = pd.DataFrame(data, columns=['lag', "AC", "Q", "Prob(>Q)"]) print(table.set_index('lag')) arma41 = sm.tsa.ARMA(arma_rvs, (4,1)).fit(disp=False) resid = arma41.resid r,q,p = sm.tsa.acf(resid, qstat=True) data = np.c_[range(1,41), r[1:], q, p] table = pd.DataFrame(data, columns=['lag', "AC", "Q", "Prob(>Q)"]) print(table.set_index('lag')) macrodta = sm.datasets.macrodata.load_pandas().data macrodta.index = pd.Index(sm.tsa.datetools.dates_from_range('1959Q1', '2009Q3')) cpi = macrodta["cpi"] fig = plt.figure(figsize=(12,8)) ax = fig.add_subplot(111) ax = cpi.plot(ax=ax); ax.legend(); 
print(sm.tsa.adfuller(cpi)[1]) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Sunspots Data Step2: Does our model obey the theory? Step3: This indicates a lack of fit. Step4: Exercise Step5: Let's make sure this model is estimable. Step6: What does this mean? Step7: For mixed ARMA processes the autocorrelation function is a mixture of exponentials and damped sine waves after (q-p) lags. Step8: Exercise Step9: Hint Step10: The p-value of the unit-root test resoundingly rejects the null of no unit root.
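As a hedged illustration of the unit-root remark in step 10 (this follow-up is an assumption added for illustration, not part of the original notebook): if the CPI levels carry a unit root, a natural next step is to difference the series and re-run the ADF test.

# Illustrative sketch: difference CPI and re-test for a unit root.
cpi_diff = cpi.diff().dropna()
print(sm.tsa.adfuller(cpi_diff)[1])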
3,287
<ASSISTANT_TASK:> Python Code: from __future__ import print_function import os import tfx_utils def _make_default_sqlite_uri(pipeline_name): return os.path.join(os.environ['HOME'], 'airflow/tfx/metadata', pipeline_name, 'metadata.db') def get_metadata_store(pipeline_name): return tfx_utils.TFXReadonlyMetadataStore.from_sqlite_db(_make_default_sqlite_uri(pipeline_name)) pipeline_name = 'taxi' # or taxi_solution pipeline_db_path = _make_default_sqlite_uri(pipeline_name) print('Pipeline DB:\n{}'.format(pipeline_db_path)) store = get_metadata_store(pipeline_name) store.get_artifacts_of_type_df(tfx_utils.TFXArtifactTypes.MODEL) store.display_tfma_analysis(<insert model ID here>, slicing_column='trip_start_hour') # Try different IDs here. Click stop in the plot when changing IDs. %matplotlib notebook store.plot_artifact_lineage(<insert model ID here>) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Now print out the model artifacts Step2: Now analyze the model performance Step3: Now plot the artifact lineage
3,288
<ASSISTANT_TASK:> Python Code: NAME = "Michelle Appel" NAME2 = "Verna Dankers" NAME3 = "Yves van Montfort" EMAIL = "michelle.appel@student.uva.nl" EMAIL2 = "verna.dankers@student.uva.nl" EMAIL3 = "yves.vanmontfort@student.uva.nl" %pylab inline plt.rcParams["figure.figsize"] = [20,10] def true_mean_function(x): return np.cos(2*pi*(x+1)) def add_noise(y, sigma): return y + sigma*np.random.randn(len(y)) def generate_t(x, sigma): return add_noise(true_mean_function(x), sigma) sigma = 0.2 beta = 1.0 / pow(sigma, 2) N_test = 100 x_test = np.linspace(-1, 1, N_test) mu_test = np.zeros(N_test) y_test = true_mean_function(x_test) t_test = add_noise(y_test, sigma) plt.plot( x_test, y_test, 'b-', lw=2) plt.plot( x_test, t_test, 'go') plt.show() def k_n_m(xn, xm, thetas): theta0, theta1, theta2, theta3 = thetas # Unpack thetas if(xn == xm): k = theta0 + theta2 + theta3*xn*xm else: k = theta0 * np.exp(-(theta1/2)*(xn-xm)**2) + theta2 + theta3*xn*xm return k def computeK(x1, x2, thetas): K = np.zeros(shape=(len(x1), len(x2))) # Create empty array for xn, row in zip(x1, range(len(x1))): # Iterate over x1 for xm, column in zip(x2, range(len(x2))): # Iterate over x2 K[row, column] = k_n_m(xn, xm, thetas) # Add kernel to matrix return K x1 = [0, 0, 1] x2 = [0, 0, 1] thetas = [1, 2, 3, 1] K = computeK(x1, x2, thetas) ### Test your function x1 = [0, 1, 2] x2 = [1, 2, 3, 4] thetas = [1, 2, 3, 4] K = computeK(x1, x2, thetas) assert K.shape == (len(x1), len(x2)), "the shape of K is incorrect" import matplotlib.pyplot as plt # The thetas thetas0 = [1, 4, 0, 0] thetas1 = [9, 4, 0, 0] thetas2 = [1, 64, 0, 0] thetas3 = [1, 0.25, 0, 0] thetas4 = [1, 4, 10, 0] thetas5 = [1, 4, 0, 5] f, ((ax1, ax2, ax3), (ax4, ax5, ax6)) = plt.subplots(2, 3) # Subplot setup all_thetas = [thetas0, thetas1, thetas2, thetas3, thetas4, thetas5] # List of all thetas all_plots = [ax1, ax2, ax3, ax4, ax5, ax6] # List of all plots n = 5 # Number of samples per subplot for subplot_, theta_ in zip(all_plots, all_thetas): # Iterate over all plots and thetas K = computeK(x_test, x_test, theta_) # Compute K # Fix numerical error on eigenvalues 0 that are slightly negative (<e^15) min_eig = np.min(np.real(np.linalg.eigvals(K))) if min_eig < 0: K -= 10*min_eig * np.eye(*K.shape) mean = np.zeros(shape=len(K)) # Generate Means random = numpy.random.multivariate_normal(mean, K) # Draw n random samples samples = [numpy.random.multivariate_normal(mean, K) for i in range(n)] # Calculate expected y and variance expected_y = numpy.mean(np.array(samples), axis=0) uncertainties = numpy.sqrt(K.diagonal()) for sample in samples: subplot_.plot(sample) x = np.arange(0, 100) # 100 Steps # Plot uncertainty subplot_.fill_between( x, expected_y - 2 * uncertainties, expected_y + 2 * uncertainties, alpha=0.3, color='pink' ) subplot_.plot(expected_y, 'g--') # Plot ground truth # subplot_.legend(['Sampled y', 'Expected y']) # Add legend subplot_.set_title(theta_) # Set title plt.show() def computeC(x1, x2, theta, beta): K = computeK(x1, x2, theta) return K + np.diag(np.array([1/beta for x in x1])) def gp_predictive_distribution(x_train, t_train, x_test, theta, beta, C=None): # Calculate or reuse C if C is None: C = computeC(x_train, x_train, theta, beta) # Calculate mean and variance c = computeC(x_test, x_test, theta, beta) K = computeK(x_train, x_test, theta) KC = np.matmul(np.linalg.inv(C), K) mean_test = np.asarray(np.matrix(t_train) * KC) var_test = c - np.matmul(KC.T, K) return mean_test.squeeze(), var_test.squeeze(), C ### Test your function N = 2 train_x = 
np.linspace(-1, 1, N) train_t = 2*train_x test_N = 3 test_x = np.linspace(-1, 1, test_N) theta = [1, 2, 3, 4] beta = 25 test_mean, test_var, C = gp_predictive_distribution(train_x, train_t, test_x, theta, beta, C=None) assert test_mean.shape == (test_N,), "the shape of mean is incorrect" assert test_var.shape == (test_N, test_N), "the shape of var is incorrect" assert C.shape == (N, N), "the shape of C is incorrect" C_in = np.array([[0.804, -0.098168436], [-0.098168436, 0.804]]) _, _, C_out = gp_predictive_distribution(train_x, train_t, test_x, theta, beta, C=C_in) assert np.allclose(C_in, C_out), "C is not reused!" import math def gp_log_likelihood(x_train, t_train, theta, beta, C=None, invC=None): if C is None: C = computeC(x_train, x_train, theta, beta) if invC is None: invC = np.linalg.inv(C) t_train = np.matrix(t_train) # Data likelihood as represented in Bishop page 311 lp = -0.5 * np.log(np.linalg.det(C)) - 0.5 * t_train * \ invC * t_train.T - len(x_train) / 2 * np.log(2*np.pi) lp = np.asscalar(lp) return lp, C, invC ### Test your function N = 2 train_x = np.linspace(-1, 1, N) train_t = 2 * train_x theta = [1, 2, 3, 4] beta = 25 lp, C, invC = gp_log_likelihood(train_x, train_t, theta, beta, C=None, invC=None) assert lp < 0, "the log-likelihood should smaller than 0" assert C.shape == (N, N), "the shape of var is incorrect" assert invC.shape == (N, N), "the shape of C is incorrect" C_in = np.array([[0.804, -0.098168436], [-0.098168436, 0.804]]) _, C_out, _ = gp_log_likelihood(train_x, train_t, theta, beta, C=C_in, invC=None) assert np.allclose(C_in, C_out), "C is not reused!" invC_in = np.array([[1.26260453, 0.15416407], [0.15416407, 1.26260453]]) _, _, invC_out = gp_log_likelihood(train_x, train_t, theta, beta, C=None, invC=invC_in) assert np.allclose(invC_in, invC_out), "invC is not reused!" def gp_plot( x_test, y_test, mean_test, var_test, x_train, t_train, theta, beta ): # x_test: # y_test: the true function at x_test # mean_test: predictive mean at x_test # var_test: predictive covariance at x_test # t_train: the training values # theta: the kernel parameters # beta: the precision (known) # the reason for the manipulation is to allow plots separating model and data stddevs. 
std_total = np.sqrt(np.diag(var_test)) # includes all uncertainty, model and target noise std_model = np.sqrt(std_total**2 - 1.0/beta) # remove data noise to get model uncertainty in stddev std_combo = std_model + np.sqrt(1.0/beta) # add stddev (note: not the same as full) plt.plot(x_test, y_test, 'b', lw=3) plt.plot(x_test, mean_test, 'k--', lw=2) plt.fill_between(x_test, mean_test+2*std_combo,mean_test-2*std_combo, color='k', alpha=0.25) plt.fill_between(x_test, mean_test+2*std_model,mean_test-2*std_model, color='r', alpha=0.25) plt.plot(x_train, t_train, 'ro', ms=10) # Number of data points n = 2 def plot_conditioned_on_training(n): # Use the periodic data generator to create 2 training points x_train = np.random.uniform(low=-1.0, high=1.0, size=n) t_train = generate_t(x_train, sigma) # 100 data points for testing x_test = np.linspace(-1, 1, 100) y_test = true_mean_function(x_test) # Iterate over all plots and thetas for i, theta in enumerate(all_thetas): plt.subplot(2, 3, i+1) mean, var, C = gp_predictive_distribution(x_train, t_train, x_test, theta, beta) lp, C, invC = gp_log_likelihood(x_train, t_train, theta, beta, C) # Put theta info and log likelihood in title plt.title("thetas : {}, lp : {}".format(theta, lp)) gp_plot( x_test, y_test, mean, var, x_train, t_train, theta, beta) plt.show() plot_conditioned_on_training(n) # Number of data points n = 10 plot_conditioned_on_training(n) # YOUR CODE HERE # raise NotImplementedError() np.random.seed(1) plt.rcParams["figure.figsize"] = [10,10] # Cov should be diagonal (independency) and have the same values (identical), i.e. a*I. def create_X(mean, sig, N): return np.random.multivariate_normal(mean, sig * np.identity(2), N) m1 = [1, 1]; m2 = [3, 3] s1 = 1/2; s2 = 1/2 N1 = 20; N2 = 30 X1 = create_X(m1, s1, N1) X2 = create_X(m2, s2, N2) plt.figure() plt.axis('equal') plt.scatter(X1[:, 0], X1[:, 1], c='b', marker='o') plt.scatter(X2[:, 0], X2[:, 1], c='r', marker='o') plt.show() def create_X_and_t(X1, X2): # YOUR CODE HERE # raise NotImplementedError() X1_len = X1.shape[0] X2_len = X2.shape[0] X = np.vstack((X1, X2)) t = np.hstack((-np.ones(X1_len), np.ones(X2_len))) # Shuffle data? 
indices = np.arange(X1_len + X2_len) np.random.shuffle(indices) return X[indices], t[indices] ### Test your function dim = 2 N1_test = 2 N2_test = 3 X1_test = np.arange(4).reshape((N1_test, dim)) X2_test = np.arange(6).reshape((N2_test, dim)) X_test, t_test = create_X_and_t(X1_test, X2_test) assert X_test.shape == (N1_test + N2_test, dim), "the shape of X is incorrect" assert t_test.shape == (N1_test + N2_test,), "the shape of t is incorrect" def computeK(X): # YOUR CODE HERE # raise NotImplementedError() K = np.dot(X, X.T).astype('float') return K dim = 2 N_test = 3 X_test = np.arange(6).reshape((N_test, dim)) K_test = computeK(X_test) assert K_test.shape == (N_test, N_test) import cvxopt def compute_multipliers(X, t): # YOUR CODE HERE # raise NotImplementedError() K = computeK(np.dot(np.diag(t), X)) q = cvxopt.matrix(-np.ones_like(t, dtype='float')) G = cvxopt.matrix(np.diag(-np.ones_like(t, dtype='float'))) A = cvxopt.matrix(t).T h = cvxopt.matrix(np.zeros_like(t, dtype='float')) b = cvxopt.matrix(0.0) P = cvxopt.matrix(K) sol = cvxopt.solvers.qp(P, q, G, h, A, b) a = np.array(sol['x']) return a ### Test your function dim = 2 N_test = 3 X_test = np.arange(6).reshape((N_test, dim)) t_test = np.array([-1., 1., 1.]) a_test = compute_multipliers(X_test, t_test) assert a_test.shape == (N_test, 1) # YOUR CODE HERE # raise NotImplementedError() np.random.seed(420) X, t = create_X_and_t(X1, X2) a_opt = compute_multipliers(X, t) sv_ind = np.nonzero(np.around(a_opt[:, 0])) X_sv = X[sv_ind] t_sv = t[sv_ind] a_sv = a_opt[sv_ind] plt.figure() plt.axis('equal') plt.scatter(X1[:, 0], X1[:, 1], c='b', marker='o') plt.scatter(X2[:, 0], X2[:, 1], c='r', marker='o') plt.scatter(X_sv[:, 0], X_sv[:, 1], s=200, facecolors='none', edgecolors='lime', linewidth='3') plt.show() # YOUR CODE HERE # raise NotImplementedError() w_opt = np.squeeze(np.dot(a_opt.T, np.dot(np.diag(t), X))) K_sv = computeK(X_sv) N_sv = size(sv_ind) atk_sv = np.dot(a_sv.T * t_sv, K_sv) b = np.sum(t_sv - atk_sv)/N_sv x_lim = np.array([1, 4]) y_lim = (-w_opt[0] * x_lim - b) / w_opt[1] plt.figure() plt.axis('equal') plt.scatter(X1[:, 0], X1[:, 1], c='b', marker='o') plt.scatter(X2[:, 0], X2[:, 1], c='r', marker='o') plt.scatter(X_sv[:, 0], X_sv[:, 1], s=200, facecolors='none', edgecolors='lime', linewidth='3') plt.plot(x_lim, y_lim, c='black') plt.show() <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Lab 3 Step2: Part 1 Step3: 1. Sampling from the Gaussian process prior (30 points) Step4: 1.2 computeK(X1, X2, thetas) (10 points) Step5: 1.3 Plot function samples (15 points) Step6: 2. Predictive distribution (35 points) Step7: 2.2 gp_log_likelihood(...) (10 points) Step8: 2.3 Plotting (10 points) Step9: 2.4 More plotting (5 points) Step10: Part 2 Step11: b) (10 points) In the next step we will combine the two datasets X_1, X_2 and generate a vector t containing the labels. Write a function create_X_and_t(X1, X2); it should return the combined data set X and the corresponding target vector t. Step12: 2.2 Finding the support vectors (15 points) Step13: Next, we will rewrite the dual representation so that we can make use of computationally efficient vector-matrix multiplication. The objective becomes Step14: 2.3 Plot support vectors (5 points) Step15: 2.4 Plot the decision boundary (10 points)
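The objective referred to in step 13 appears to have been truncated from this description; presumably it is the standard matrix form of the SVM dual, which the compute_multipliers function in the accompanying code hands to cvxopt:

$$\tilde{L}(\mathbf{a}) = \sum_{n=1}^{N} a_n - \tfrac{1}{2}\,\mathbf{a}^\top\!\left(\mathbf{t}\mathbf{t}^\top \circ \mathbf{K}\right)\mathbf{a}, \qquad \text{subject to } a_n \ge 0 \text{ and } \mathbf{t}^\top\mathbf{a} = 0,$$

where $\circ$ denotes the element-wise product and $K_{nm} = \mathbf{x}_n^\top \mathbf{x}_m$. Since cvxopt minimizes, the code supplies $P = \mathbf{t}\mathbf{t}^\top \circ \mathbf{K}$ and $q = -\mathbf{1}$, i.e. the negative of this objective.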
3,289
<ASSISTANT_TASK:> Python Code: %pylab notebook from __future__ import print_function import datacube import xarray as xr from datacube.helpers import ga_pq_fuser from datacube.storage import masking from datacube.storage.masking import mask_to_dict from matplotlib import pyplot as plt dc = datacube.Datacube(app='combining data from multiple sensors') #### DEFINE SPATIOTEMPORAL RANGE AND BANDS OF INTEREST #Use this to manually define an upper left/lower right coords #Define temporal range start_of_epoch = '1998-01-01' end_of_epoch = '2016-12-31' #Define wavelengths/bands of interest, remove this kwarg to retrieve all bands bands_of_interest = [#'blue', #'green', 'red', #'nir', #'swir1', #'swir2' ] #Define sensors of interest sensors = ['ls8', 'ls7', 'ls5'] query = {'time': (start_of_epoch, end_of_epoch)} lat_max = -17.42 lat_min = -17.45 lon_max = 140.90522 lon_min = 140.8785 query['x'] = (lon_min, lon_max) query['y'] = (lat_max, lat_min) query['crs'] = 'EPSG:4326' print(query) #Define which pixel quality artefacts you want removed from the results mask_components = {'cloud_acca':'no_cloud', 'cloud_shadow_acca' :'no_cloud_shadow', 'cloud_shadow_fmask' : 'no_cloud_shadow', 'cloud_fmask' :'no_cloud', 'blue_saturated' : False, 'green_saturated' : False, 'red_saturated' : False, 'nir_saturated' : False, 'swir1_saturated' : False, 'swir2_saturated' : False, 'contiguous':True} #Retrieve the NBAR and PQ data for sensor n sensor_clean = {} for sensor in sensors: #Load the NBAR and corresponding PQ sensor_nbar = dc.load(product= sensor+'_nbar_albers', group_by='solar_day', measurements = bands_of_interest, **query) sensor_pq = dc.load(product= sensor+'_pq_albers', group_by='solar_day', fuse_func=ga_pq_fuser, **query) #grab the projection info before masking/sorting crs = sensor_nbar.crs crswkt = sensor_nbar.crs.wkt affine = sensor_nbar.affine #Apply the PQ masks to the NBAR cloud_free = masking.make_mask(sensor_pq, **mask_components) good_data = cloud_free.pixelquality.loc[start_of_epoch:end_of_epoch] sensor_nbar = sensor_nbar.where(good_data) sensor_clean[sensor] = sensor_nbar sensor_clean['ls5'] #change nbar_clean to nbar_sorted nbar_clean = xr.concat(sensor_clean.values(), dim='time') time_sorted = nbar_clean.time.argsort() nbar_clean = nbar_clean.isel(time=time_sorted) nbar_clean.attrs['crs'] = crs nbar_clean.attrs['affine'] = affine nbar_clean red_ls5 = sensor_clean['ls5'].red.isel(x=[100],y=[100]).dropna('time', how = 'any') red_ls7 = sensor_clean['ls7'].red.isel(x=[100],y=[100]).dropna('time', how = 'any') red_ls8 = sensor_clean['ls8'].red.isel(x=[100],y=[100]).dropna('time', how = 'any') #plot a time series for each sensor fig = plt.figure(figsize=(8,5)) red_ls5.plot() red_ls7.plot() red_ls8.plot() #plot multi sensor time series red_multi_sensor = nbar_clean.red.isel(x=[100],y=[100]).dropna('time', how = 'any') fig = plt.figure(figsize=(8,5)) red_multi_sensor.plot() <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: retrieve the NBAR and PQ for the spatiotemporal range of interest Step2: Plotting an image, view the transect and select a location to retrieve a time series
3,290
<ASSISTANT_TASK:> Python Code: from pyannote.core import SlidingWindowFeature, SlidingWindow # one 4-dimensional feature vector extracted every 100ms from a 200ms window frame = SlidingWindow(start=0.0, step=0.100, duration=0.200) # random for illustration purposes data = np.random.randn(100, 4) features = SlidingWindowFeature(data, frame) help(features.crop) from pyannote.core import Segment features.crop(Segment(2, 3)) help(SlidingWindowFeature) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: SlidingWindowFeature instances are used to manage feature vectors extracted on a sliding window (e.g. MFCC in audio processing). Step2: Cropping Step3: Need help?
3,291
<ASSISTANT_TASK:> Python Code: import os import matplotlib as mpl import matplotlib.pyplot as plt import numpy as np import pandas as pd import tensorflow as tf from google.cloud import bigquery from tensorflow.keras.callbacks import ModelCheckpoint, TensorBoard from tensorflow.keras.layers import ( GRU, LSTM, RNN, Bidirectional, Conv1D, Dense, MaxPool1D, Reshape, ) from tensorflow.keras.models import Sequential from tensorflow.keras.optimizers import Adam from tensorflow.keras.utils import to_categorical # To plot pretty figures %matplotlib inline mpl.rc("axes", labelsize=14) mpl.rc("xtick", labelsize=12) mpl.rc("ytick", labelsize=12) # For reproducible results. from numpy.random import seed seed(1) tf.random.set_seed(2) PROJECT = !(gcloud config get-value core/project) PROJECT = PROJECT[0] %env PROJECT = {PROJECT} %env BUCKET = {PROJECT} %env REGION = "us-central1" %%time bq = bigquery.Client(project=PROJECT) bq_query = #standardSQL SELECT symbol, Date, direction, close_values_prior_260 FROM `stock_market.eps_percent_change_sp500` LIMIT 100 df_stock_raw = bq.query(bq_query).to_dataframe() df_stock_raw.head() def clean_data(input_df): Cleans data to prepare for training. Args: input_df: Pandas dataframe. Returns: Pandas dataframe. df = input_df.copy() # TF doesn't accept datetimes in DataFrame. df["Date"] = pd.to_datetime(df["Date"], errors="coerce") df["Date"] = df["Date"].dt.strftime("%Y-%m-%d") # TF requires numeric label. df["direction_numeric"] = df["direction"].apply( lambda x: {"DOWN": 0, "STAY": 1, "UP": 2}[x] ) return df df_stock = clean_data(df_stock_raw) df_stock.head() STOCK_HISTORY_COLUMN = "close_values_prior_260" COL_NAMES = ["day_" + str(day) for day in range(0, 260)] LABEL = "direction_numeric" def _scale_features(df): z-scale feature columns of Pandas dataframe. Args: features: Pandas dataframe. Returns: Pandas dataframe with each column standardized according to the values in that column. avg = df.mean() std = df.std() return (df - avg) / std def create_features(df, label_name): Create modeling features and label from Pandas dataframe. Args: df: Pandas dataframe. label_name: str, the column name of the label. Returns: Pandas dataframe # Expand 1 column containing a list of close prices to 260 columns. time_series_features = df[STOCK_HISTORY_COLUMN].apply(pd.Series) # Rename columns. time_series_features.columns = COL_NAMES time_series_features = _scale_features(time_series_features) # Concat time series features with static features and label. label_column = df[LABEL] return pd.concat([time_series_features, label_column], axis=1) df_features = create_features(df_stock, LABEL) df_features.head() ix_to_plot = [0, 1, 9, 5] fig, ax = plt.subplots(1, 1, figsize=(15, 8)) for ix in ix_to_plot: label = df_features["direction_numeric"].iloc[ix] example = df_features[COL_NAMES].iloc[ix] ax = example.plot(label=label, ax=ax) ax.set_ylabel("scaled price") ax.set_xlabel("prior days") ax.legend() def _create_split(phase): Create string to produce train/valid/test splits for a SQL query. Args: phase: str, either TRAIN, VALID, or TEST. Returns: String. floor, ceiling = "2002-11-01", "2010-07-01" if phase == "VALID": floor, ceiling = "2010-07-01", "2011-09-01" elif phase == "TEST": floor, ceiling = "2011-09-01", "2012-11-30" return WHERE Date >= '{}' AND Date < '{}' .format( floor, ceiling ) def create_query(phase): Create SQL query to create train/valid/test splits on subsample. Args: phase: str, either TRAIN, VALID, or TEST. sample_size: str, amount of data to take for subsample. 
Returns: String. basequery = #standardSQL SELECT symbol, Date, direction, close_values_prior_260 FROM `stock_market.eps_percent_change_sp500` return basequery + _create_split(phase) bq = bigquery.Client(project=PROJECT) for phase in ["TRAIN", "VALID", "TEST"]: # 1. Create query string query_string = create_query(phase) # 2. Load results into DataFrame df = bq.query(query_string).to_dataframe() # 3. Clean, preprocess dataframe df = clean_data(df) df = create_features(df, label_name="direction_numeric") # 3. Write DataFrame to CSV if not os.path.exists("../data"): os.mkdir("../data") df.to_csv( f"../data/stock-{phase.lower()}.csv", index_label=False, index=False, ) print( "Wrote {} lines to {}".format( len(df), f"../data/stock-{phase.lower()}.csv" ) ) ls -la ../data N_TIME_STEPS = 260 N_LABELS = 3 Xtrain = pd.read_csv("../data/stock-train.csv") Xvalid = pd.read_csv("../data/stock-valid.csv") ytrain = Xtrain.pop(LABEL) yvalid = Xvalid.pop(LABEL) ytrain_categorical = to_categorical(ytrain.values) yvalid_categorical = to_categorical(yvalid.values) def plot_curves(train_data, val_data, label="Accuracy"): Plot training and validation metrics on single axis. Args: train_data: list, metrics obtrained from training data. val_data: list, metrics obtained from validation data. label: str, title and label for plot. Returns: Matplotlib plot. plt.plot( np.arange(len(train_data)) + 0.5, train_data, "b.-", label="Training " + label, ) plt.plot( np.arange(len(val_data)) + 1, val_data, "r.-", label="Validation " + label, ) plt.gca().xaxis.set_major_locator(mpl.ticker.MaxNLocator(integer=True)) plt.legend(fontsize=14) plt.xlabel("Epochs") plt.ylabel(label) plt.grid(True) sum(yvalid == ytrain.value_counts().idxmax()) / yvalid.shape[0] # TODO 1a model = Sequential() model.add( Dense( units=N_LABELS, activation="softmax", kernel_regularizer=tf.keras.regularizers.l1(l=0.1), ) ) model.compile( optimizer=Adam(lr=0.001), loss="categorical_crossentropy", metrics=["accuracy"], ) history = model.fit( x=Xtrain.values, y=ytrain_categorical, batch_size=Xtrain.shape[0], validation_data=(Xvalid.values, yvalid_categorical), epochs=30, verbose=0, ) plot_curves(history.history["loss"], history.history["val_loss"], label="Loss") plot_curves( history.history["accuracy"], history.history["val_accuracy"], label="Accuracy", ) np.mean(history.history["val_accuracy"][-5:]) # TODO 1b dnn_hidden_units = [16, 8] model = Sequential() for layer in dnn_hidden_units: model.add(Dense(units=layer, activation="relu")) model.add( Dense( units=N_LABELS, activation="softmax", kernel_regularizer=tf.keras.regularizers.l1(l=0.1), ) ) model.compile( optimizer=Adam(lr=0.001), loss="categorical_crossentropy", metrics=["accuracy"], ) history = model.fit( x=Xtrain.values, y=ytrain_categorical, batch_size=Xtrain.shape[0], validation_data=(Xvalid.values, yvalid_categorical), epochs=10, verbose=0, ) plot_curves(history.history["loss"], history.history["val_loss"], label="Loss") plot_curves( history.history["accuracy"], history.history["val_accuracy"], label="Accuracy", ) np.mean(history.history["val_accuracy"][-5:]) # TODO 1c model = Sequential() # Convolutional layer model.add(Reshape(target_shape=[N_TIME_STEPS, 1])) model.add( Conv1D( filters=5, kernel_size=5, strides=2, padding="valid", input_shape=[None, 1], ) ) model.add(MaxPool1D(pool_size=2, strides=None, padding="valid")) # Flatten the result and pass through DNN. 
model.add(tf.keras.layers.Flatten()) model.add(Dense(units=N_TIME_STEPS // 4, activation="relu")) model.add( Dense( units=N_LABELS, activation="softmax", kernel_regularizer=tf.keras.regularizers.l1(l=0.1), ) ) model.compile( optimizer=Adam(lr=0.01), loss="categorical_crossentropy", metrics=["accuracy"], ) history = model.fit( x=Xtrain.values, y=ytrain_categorical, batch_size=Xtrain.shape[0], validation_data=(Xvalid.values, yvalid_categorical), epochs=10, verbose=0, ) plot_curves(history.history["loss"], history.history["val_loss"], label="Loss") plot_curves( history.history["accuracy"], history.history["val_accuracy"], label="Accuracy", ) np.mean(history.history["val_accuracy"][-5:]) # TODO 2a model = Sequential() # Reshape inputs to pass through RNN layer. model.add(Reshape(target_shape=[N_TIME_STEPS, 1])) model.add(LSTM(N_TIME_STEPS // 8, activation="relu", return_sequences=False)) model.add( Dense( units=N_LABELS, activation="softmax", kernel_regularizer=tf.keras.regularizers.l1(l=0.1), ) ) # Create the model. model.compile( optimizer=Adam(lr=0.001), loss="categorical_crossentropy", metrics=["accuracy"], ) history = model.fit( x=Xtrain.values, y=ytrain_categorical, batch_size=Xtrain.shape[0], validation_data=(Xvalid.values, yvalid_categorical), epochs=40, verbose=0, ) plot_curves(history.history["loss"], history.history["val_loss"], label="Loss") plot_curves( history.history["accuracy"], history.history["val_accuracy"], label="Accuracy", ) np.mean(history.history["val_accuracy"][-5:]) # TODO 2b rnn_hidden_units = [N_TIME_STEPS // 16, N_TIME_STEPS // 32] model = Sequential() # Reshape inputs to pass through RNN layer. model.add(Reshape(target_shape=[N_TIME_STEPS, 1])) for layer in rnn_hidden_units[:-1]: model.add(GRU(units=layer, activation="relu", return_sequences=True)) model.add(GRU(units=rnn_hidden_units[-1], return_sequences=False)) model.add( Dense( units=N_LABELS, activation="softmax", kernel_regularizer=tf.keras.regularizers.l1(l=0.1), ) ) model.compile( optimizer=Adam(lr=0.001), loss="categorical_crossentropy", metrics=["accuracy"], ) history = model.fit( x=Xtrain.values, y=ytrain_categorical, batch_size=Xtrain.shape[0], validation_data=(Xvalid.values, yvalid_categorical), epochs=50, verbose=0, ) plot_curves(history.history["loss"], history.history["val_loss"], label="Loss") plot_curves( history.history["accuracy"], history.history["val_accuracy"], label="Accuracy", ) np.mean(history.history["val_accuracy"][-5:]) # TODO 3a model = Sequential() # Reshape inputs for convolutional layer model.add(Reshape(target_shape=[N_TIME_STEPS, 1])) model.add( Conv1D( filters=20, kernel_size=4, strides=2, padding="valid", input_shape=[None, 1], ) ) model.add(MaxPool1D(pool_size=2, strides=None, padding="valid")) model.add( LSTM( units=N_TIME_STEPS // 2, return_sequences=False, kernel_regularizer=tf.keras.regularizers.l1(l=0.1), ) ) model.add(Dense(units=N_LABELS, activation="softmax")) model.compile( optimizer=Adam(lr=0.001), loss="categorical_crossentropy", metrics=["accuracy"], ) history = model.fit( x=Xtrain.values, y=ytrain_categorical, batch_size=Xtrain.shape[0], validation_data=(Xvalid.values, yvalid_categorical), epochs=30, verbose=0, ) plot_curves(history.history["loss"], history.history["val_loss"], label="Loss") plot_curves( history.history["accuracy"], history.history["val_accuracy"], label="Accuracy", ) np.mean(history.history["val_accuracy"][-5:]) # TODO 3b rnn_hidden_units = [N_TIME_STEPS // 32, N_TIME_STEPS // 64] model = Sequential() # Reshape inputs and pass through RNN layer. 
model.add(Reshape(target_shape=[N_TIME_STEPS, 1])) for layer in rnn_hidden_units: model.add(LSTM(layer, return_sequences=True)) # Apply 1d convolution to RNN outputs. model.add(Conv1D(filters=5, kernel_size=3, strides=2, padding="valid")) model.add(MaxPool1D(pool_size=4, strides=None, padding="valid")) # Flatten the convolution output and pass through DNN. model.add(tf.keras.layers.Flatten()) model.add( Dense( units=N_TIME_STEPS // 32, activation="relu", kernel_regularizer=tf.keras.regularizers.l1(l=0.1), ) ) model.add( Dense( units=N_LABELS, activation="softmax", kernel_regularizer=tf.keras.regularizers.l1(l=0.1), ) ) model.compile( optimizer=Adam(lr=0.001), loss="categorical_crossentropy", metrics=["accuracy"], ) history = model.fit( x=Xtrain.values, y=ytrain_categorical, batch_size=Xtrain.shape[0], validation_data=(Xvalid.values, yvalid_categorical), epochs=80, verbose=0, ) plot_curves(history.history["loss"], history.history["val_loss"], label="Loss") plot_curves( history.history["accuracy"], history.history["val_accuracy"], label="Accuracy", ) np.mean(history.history["val_accuracy"][-5:]) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step2: Explore time series data Step4: The function clean_data below does three things Step7: Read data and preprocessing Step8: Let's plot a few examples and see that the preprocessing steps were implemented correctly. Step13: Make train-eval-test split Step14: Modeling Step16: To monitor training progress and compare evaluation metrics for different models, we'll use the function below to plot metrics captured from the training job such as training and validation loss or accuracy. Step17: Baseline Step18: Ok. So just naively guessing the most common outcome UP will give about 29.5% accuracy on the validation set. Step19: The accuracy seems to level out pretty quickly. To report the accuracy, we'll average the accuracy on the validation set across the last few epochs of training. Step20: Deep Neural Network Step21: Convolutional Neural Network Step22: Recurrent Neural Network Step23: Multi-layer RNN Step24: Combining CNN and RNN architecture Step25: We can also try building a hybrid model which uses a 1-dimensional CNN to create features from the outputs of an RNN.
3,292
<ASSISTANT_TASK:> Python Code: %matplotlib inline import matplotlib.pyplot as plt import numpy as np import pandas as pd import glob import os import scipy as sp from scipy import stats from tools.plt import color2d #from the 'srcole/tools' repo from matplotlib import cm # Load cities info df_cities = pd.read_csv('/gh/data2/yelp/city_pop.csv', index_col=0) df_cities.head() # Load restaurants df_restaurants = pd.read_csv('/gh/data2/yelp/food_by_city/df_restaurants.csv', index_col=0) df_restaurants.head() # Load categories by restaurant df_categories = pd.read_csv('/gh/data2/yelp/food_by_city/df_categories.csv', index_col=0) df_categories.head() # Manually concatenate categories with at least 500 counts # Find categories D and V such that category 'D' should be counted as vategory 'V' category_subsets = {'delis': 'sandwiches', 'sushi': 'japanese', 'icecream': 'desserts', 'cafes': 'coffee', 'sportsbars': 'bars', 'hotdog': 'hotdogs', 'wine_bars': 'bars', 'pubs': 'bars', 'cocktailbars': 'bars', 'beerbar': 'bars', 'tacos': 'mexican', 'gastropubs': 'bars', 'ramen': 'japanese', 'chocolate': 'desserts', 'dimsum': 'chinese', 'cantonese': 'chinese', 'szechuan': 'chinese', 'coffeeroasteries': 'coffee', 'hookah_bars': 'bars', 'irish_pubs': 'bars'} for k in category_subsets.keys(): df_categories[category_subsets[k]] = np.logical_or(df_categories[k], df_categories[category_subsets[k]]) # Remove some categories # R category_remove = ['hotdog', 'cafes'] for k in category_remove: df_categories.drop(k, axis=1, inplace=True) # Top categories N = 20 category_counts = df_categories.sum().sort_values(ascending=False) top_N_categories = list(category_counts.head(N).keys()) top_N_categories_counts = category_counts.head(N).values category_counts.head(N) # Bar chart plt.figure(figsize=(12,5)) plt.bar(np.arange(N), top_N_categories_counts / len(df_restaurants), color='k', ecolor='.5') plt.xticks(np.arange(N), top_N_categories) plt.ylabel('Fraction of restaurants', size=20) plt.xlabel('Restaurant category', size=20) plt.xticks(size=15, rotation='vertical') plt.yticks(size=15); gb = df_restaurants.groupby('name') df_chains = gb.mean()[['rating', 'review_count', 'cost']] df_chains['count'] = gb.size() df_chains.sort_values('count', ascending=False, inplace=True) df_chains.head(10) # Only consider restaurants with at least 50 locations min_count = 50 df_temp = df_chains[df_chains['count'] >= min_count] plt.figure(figsize=(8,12)) plt_num = 1 for i, k1 in enumerate(df_temp.keys()): for j, k2 in enumerate(df_temp.keys()[i+1:]): if k1 in ['review_count', 'count']: if k2 in ['review_count', 'count']: plot_f = plt.loglog else: plot_f = plt.semilogx else: if k2 in ['review_count', 'count']: plot_f = plt.semilogy else: plot_f = plt.plot plt.subplot(3, 2, plt_num) plot_f(df_temp[k1], df_temp[k2], 'k.') plt.xlabel(k1) plt.ylabel(k2) plt_num += 1 r, p = stats.spearmanr(df_temp[k1], df_temp[k2]) plt.title(r) plt.tight_layout() from bokeh.io import output_notebook from bokeh.layouts import row, widgetbox from bokeh.models import CustomJS, Slider, Legend, HoverTool from bokeh.plotting import figure, output_file, show, ColumnDataSource output_notebook() # Slider variables min_N_franchises = 100 # Determine dataframe sources df_chains2 = df_chains[df_chains['count'] > 10].reset_index() df_temp = df_chains2[df_chains2['count'] >= min_N_franchises] # Create data source for plotting and Slider callback source1 = ColumnDataSource(df_temp, id='source1') source2 = ColumnDataSource(df_chains2, id='source2') hover = HoverTool(tooltips=[ 
("Name", "@name"), ("Avg Stars", "@rating"), ("# locations", "@count")]) # Make initial figure of net income vs years of saving plot = figure(plot_width=400, plot_height=400, x_axis_label='Number of locations', y_axis_label='Average rating', x_axis_type="log", tools=[hover]) plot.scatter('count', 'rating', source=source1, line_width=3, line_alpha=0.6, line_color='black') # Declare how to update plot on slider change callback = CustomJS(args=dict(s1=source1, s2=source2), code= var d1 = s1.get("data"); var d2 = s2.get("data"); var N = N.value; d1["count"] = []; d1["rating"] = []; for(i=0;i <=d2["count"].length; i++){ if (d2["count"][i] >= N) { d1["count"].push(d2["count"][i]); d1["rating"].push(d2["rating"][i]); d1["name"].push(d2["name"][i]); } } s1.change.emit(); ) N_slider = Slider(start=10, end=1000, value=min_N_franchises, step=10, title="minimum number of franchises", callback=callback) callback.args["N"] = N_slider # Define layout of plot and sliders layout = row(plot, widgetbox(N_slider)) # Output and show output_file("/gh/srcole.github.io/assets/misc/yelp_bokeh.html", title="Yelp WIP") show(layout) N_bins_per_factor10 = 8 bins_by_key = {'rating': np.arange(0.75, 5.75, .5), 'review_count': np.logspace(1, 5, num=N_bins_per_factor10*4+1), 'cost': np.arange(.5, 5, 1)} log_by_key = {'rating': False, 'review_count': True, 'cost': False} plt.figure(figsize=(12, 4)) for i, k in enumerate(bins_by_key.keys()): weights = np.ones_like(df_restaurants[k].values)/float(len(df_restaurants[k].values)) plt.subplot(1, 3, i+1) plt.hist(df_restaurants[k].values, bins_by_key[k], log=log_by_key[k], color='k', edgecolor='.5', weights=weights) if k == 'review_count': plt.semilogx(1,1) plt.xlim((10, 40000)) elif i == 0: plt.ylabel('Probability') plt.xlabel(k) plt.tight_layout() # Prepare histogram analysis gb_cost = df_restaurants.groupby('cost').groups gb_rating = df_restaurants.groupby('rating').groups # Remove 0 from gb_rating gb_rating.pop(0.0) N_bins_cost = len(gb_cost.keys()) N_bins_count = len(bins_by_key['review_count']) - 1 N_bins_rate = len(bins_by_key['rating']) - 1 # Hist: review count and rating as fn of cost hist_count_by_cost = np.zeros((N_bins_cost, N_bins_count)) hist_rate_by_cost = np.zeros((N_bins_cost, N_bins_rate)) points_count_by_cost = np.zeros((N_bins_cost, 3)) points_rate_by_cost = np.zeros((N_bins_cost, 3)) for i, k in enumerate(gb_cost.keys()): # Make histogram of review count as fn of cost x = df_restaurants.loc[gb_cost[k]]['review_count'].values hist_temp, _ = np.histogram(x, bins=bins_by_key['review_count']) # Make each cost sum to 1 hist_count_by_cost[i] = hist_temp / np.sum(hist_temp) # Compute percentiles points_count_by_cost[i,0] = np.mean(x) points_count_by_cost[i,1] = np.std(x) points_count_by_cost[i,2] = np.min([np.std(x), 5-np.mean(x)]) # Repeat for rating x = df_restaurants.loc[gb_cost[k]]['rating'].values hist_temp, _ = np.histogram(x, bins=bins_by_key['rating']) hist_rate_by_cost[i] = hist_temp / np.sum(hist_temp) points_rate_by_cost[i,0] = np.mean(x) points_rate_by_cost[i,1] = np.std(x) points_rate_by_cost[i,2] = np.min([np.std(x), 5-np.mean(x)]) # Make histograms of review count as fn of rating hist_count_by_rate = np.zeros((N_bins_rate, N_bins_count)) points_count_by_rate = np.zeros((N_bins_rate, 3)) for i, k in enumerate(gb_rating.keys()): # Make histogram of review count as fn of cost x = df_restaurants.loc[gb_rating[k]]['review_count'].values hist_temp, _ = np.histogram(x, bins=bins_by_key['review_count']) # Make each cost sum to 1 hist_count_by_rate[i] = 
hist_temp / np.sum(hist_temp) points_count_by_rate[i,0] = np.mean(x) points_count_by_rate[i,1] = np.std(x) points_count_by_rate[i,2] = np.min([np.std(x), 5-np.mean(x)]) # Make a 2d colorplot plt.figure(figsize=(10,4)) color2d(hist_rate_by_cost, cmap=cm.viridis, clim=[0,.4], cticks = np.arange(0,.41,.05), color_label='Probability', plot_xlabel='Rating', plot_ylabel='Cost ($)', plot_xticks_locs=range(N_bins_rate), plot_xticks_labels=gb_rating.keys(), plot_yticks_locs=range(N_bins_cost), plot_yticks_labels=gb_cost.keys(), interpolation='none', fontsize_minor=14, fontsize_major=19) # On top, plot the mean and st. dev. # plt.errorbar(points_rate_by_cost[:,0] / , np.arange(N_bins_cost), fmt='.', color='w', ms=10, # xerr=points_rate_by_cost[:,1:].T, ecolor='w', alpha=.5) # Make a 2d colorplot xbins_label = np.arange(0,N_bins_per_factor10*2+1, N_bins_per_factor10) plt.figure(figsize=(10,4)) color2d(hist_count_by_cost, cmap=cm.viridis, clim=[0,.2], cticks = np.arange(0,.21,.05), color_label='Probability', plot_xlabel='Number of reviews', plot_ylabel='Cost ($)', plot_xticks_locs=xbins_label, plot_xticks_labels=bins_by_key['review_count'][xbins_label].astype(int), plot_yticks_locs=range(N_bins_cost), plot_yticks_labels=gb_cost.keys(), interpolation='none', fontsize_minor=14, fontsize_major=19) plt.xlim((-.5,N_bins_per_factor10*2 + .5)) # Make a 2d colorplot xbins_label = np.arange(0,N_bins_per_factor10*2+1, N_bins_per_factor10) plt.figure(figsize=(10,6)) color2d(hist_count_by_rate, cmap=cm.viridis, clim=[0,.4], cticks = np.arange(0,.41,.1), color_label='Probability', plot_xlabel='Number of reviews', plot_ylabel='Rating', plot_xticks_locs=xbins_label, plot_xticks_labels=bins_by_key['review_count'][xbins_label].astype(int), plot_yticks_locs=range(N_bins_rate), plot_yticks_labels=gb_rating.keys(), interpolation='none', fontsize_minor=14, fontsize_major=19) plt.xlim((-.5,N_bins_per_factor10*2 + .5)) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Load dataframes Step2: 1. What are most popular categories? Step3: 2. What are the most common restaurant chains? Step4: 2a. Correlations in chain properties Step6: 2b. Number of franchises vs rating (bokeh) Step7: 3. Distributions of ratings, review counts, and costs Step8: 3b. Correlations (histograms)
3,293
<ASSISTANT_TASK:> Python Code: import pymongo as pm client = pm.MongoClient() client.drop_database("tutorial") import bson.son as son # start a client client = pm.MongoClient() # connect to a database db = client.tutorial # get a collection coll = db.test_collection example_0 = {} example_1 = {"name": "Michael", "age": 32, "grades": [71, 85, 90, 34]} example_2 = \ {"first name": "Michael", "last name": "Mathioudakis", "age": 32, "grades": { "ModernDB": 69, "Data Mining": 71, "Machine Learning": 95 }, "graduated": True, "previous schools": ["NTUA", "UofT"] } import datetime example_3 = {"name": "Modern Database Systems", "start": datetime.datetime(2016, 1, 12), "end": datetime.datetime(2016, 3, 26), "tags": ["rdbms", "mongodb", "spark"]} coll.insert_one(example_0) coll.find() for doc in coll.find(): print(doc) coll.insert_one(example_1) for doc in coll.find(): print(doc) print() coll.insert_many([example_2, example_3]) for doc in coll.find(): print(doc) print() query_result = coll.find({"name": "Michael"}) for doc in query_result: print(doc) query_result = coll.find({"name": "Michael"}, {"_id": 0}) for doc in query_result: print(doc) query_result = coll.find({"name": "Michael"}, {"_id": 0, "grades": 1}) for doc in query_result: print(doc) %%bash mongoimport --db tutorial --collection restaurants --drop --file primer-dataset.json restaurants = db.restaurants # our new collection # how many restaurants? restaurants.count() # retrieve a cursor over all documents in the collection cursor = restaurants.find() # define printing function def print_my_docs(cursor, num): for i in range(num): # print only up to num next documents from cursor try: print(next(cursor)) print() except: break # let's print a few documents print_my_docs(cursor, 3) next(cursor) # get one more document # top-level field cursor = restaurants.find({"borough": "Manhattan"}) print_my_docs(cursor, 2) # nested field (in embedded document) cursor = restaurants.find({"address.zipcode": "10075"}) print_my_docs(cursor, 2) # query by field in array cursor = restaurants.find({"grades.grade": "B"}) # print one document from the query result next(cursor)['grades'] # exact array match cursor = restaurants.find({"address.coord": [-73.98513559999999, 40.7676919]}) print_my_docs(cursor, 10) cursor = restaurants.find({"grades.score": {"$gt": 30}}) cursor = restaurants.find({"grades.score": {"$lt": 10}}) next(cursor)["grades"] # logical AND cursor = restaurants.find({"cuisine": "Italian", "address.zipcode": "10075"}) next(cursor) # logical OR cursor = restaurants.find({"$or": [{"cuisine": "Italian"}, {"address.zipcode": "10075"}]}) print_my_docs(cursor, 3) # logical AND, differently cursor = restaurants.find({"$and": [{"cuisine": "Italian"}, {"address.zipcode": "10075"}]}) next(cursor) cursor = restaurants.find() # to sort, specify list of sorting criteria, # each criterion given as a tuple # (field_name, sort_order) # here we have only one sorted_cursor = cursor.sort([("borough", pm.ASCENDING)]) print_my_docs(cursor, 2) another_sorted_cursor = restaurants.find().sort([("borough", pm.ASCENDING), ("address.zipcode", pm.DESCENDING)]) print_my_docs(another_sorted_cursor, 3) # Group Documents by a Field and Calculate Count cursor = restaurants.aggregate( [ {"$group": {"_id": "$borough", "count": {"$sum": 1}}} ] ) print_my_docs(cursor, 10) # Filter and Group Documents cursor = restaurants.aggregate( [ {"$match": {"borough": "Queens", "cuisine": "Brazilian"}}, {"$group": {"_id": "$address.zipcode", "count": {"$sum": 1}}} ] ) print_my_docs(cursor, 10) 
# Filter and Group and then Filter Again documents cursor = restaurants.aggregate( [ {"$match": {"borough": "Manhattan", "cuisine": "American"}}, {"$group": {"_id": "$address.zipcode", "count": {"$sum": 1}}}, {"$match": {"count": {"$gt": 1}}} ] ) print_my_docs(cursor, 10) # Filter and Group and then Filter Again and then Sort Documents cursor = restaurants.aggregate( [ {"$match": {"borough": "Manhattan", "cuisine": "American"}}, {"$group": {"_id": "$address.zipcode", "count": {"$sum": 1}}}, {"$match": {"count": {"$gt": 1}}}, {"$sort": {"count": -1, "_id": -1}} ] ) print_my_docs(cursor, 10) # Same but sort by multiple fields # Filter and Group and then Filter Again and then Sort Documents cursor = restaurants.aggregate( [ {"$match": {"borough": "Manhattan", "cuisine": "American"}}, {"$group": {"_id": "$address.zipcode", "count": {"$sum": 1}}}, {"$match": {"count": {"$gt": 1}}}, {"$sort": son.SON([("count", -1), ("_id", 1)])} # order matters!! ] ) print_my_docs(cursor, 10) # what will this do? cursor = restaurants.aggregate( [ {"$group": {"_id": None, "count": {"$sum": 1}} } ] ) print_my_docs(cursor, 10) # projection # what will this do? cursor = restaurants.aggregate( [ {"$group": {"_id": "$address.zipcode", "count": {"$sum": 1}}}, {"$project": {"_id": 0, "count": 1}} ] ) print_my_docs(cursor, 10) # what will this do? cursor = restaurants.aggregate( [ {"$group": {"_id": {"cuisine": "$cuisine"}, "count": {"$sum": 1}}}, {"$sort": {"count": -1}} ] ) print_my_docs(cursor, 5) # what will this do? cursor = restaurants.aggregate( [ {"$group": {"_id": {"zip": "$address.zipcode"}, "count": {"$sum": 1}}}, {"$sort": {"count": -1}} ] ) print_my_docs(cursor, 5) # what will this do? cursor = restaurants.aggregate( [ {"$group": {"_id": {"cuisine": "$cuisine", "zip": "$address.zipcode"}, "count": {"$sum": 1}}}, {"$sort": {"count": -1}} ] ) print_my_docs(cursor, 5) # what will this do? cursor = restaurants.aggregate( [ {"$group": {"_id": {"cuisine": "$cuisine", "zip": "$address.zipcode"}, "count": {"$sum": 1}}}, {"$sort": {"count": -1}}, {"$limit": 10} # See comment under "In-class questions" ] ) for doc in cursor: print(doc["_id"]["cuisine"], doc["_id"]["zip"], doc["count"]) restaurants.aggregate( [ {"$match": {"borough": "Manhattan"}}, {"$out": "manhattan"} ] ) cursor = restaurants.aggregate( [ {"$group": {"_id": None, "count": {"$sum": 1}} } ] ) cursor = restaurants.aggregate( [ {"$group": {"_id": {"borough": "$borough", "cuisine": "$cuisine"}, "count": {"$sum": 1}}} ] ) cursor = restaurants.aggregate( [ {"$group": {"_id": {"borough": "$borough", "cuisine": "$cuisine"}, "count": {"$sum": 1}}}, {"$match": {"count": {"$gt": 3}}} ] ) cursor = restaurants.aggregate( [ {"$match": {"borough": "Manhattan"}}, {"$group": {"_id": {"zipcode": "$address.zipcode", "cuisine": "$cuisine"}, "count": {"$sum": 1}}}, {"$match": {"count": {"$gt": 3}}} ] ) print_my_docs(cursor, 5) cursor = restaurants.aggregate( [ {"$match": {"borough": "Manhattan"}}, {"$group": {"_id": {"zipcode": "$address.zipcode", "cuisine": "$cuisine"}, "count": {"$sum": 1}}}, {"$match": {"count": {"$gt": 3}}}, {"$sort": {"count": 1}} ] ) cursor = restaurants.aggregate( [ {"$match": {"borough": "Manhattan"}}, {"$group": {"_id": {"zipcode": "$address.zipcode", "cuisine": "$cuisine"}, "count": {"$sum": 1}}}, {"$match": {"count": {"$gt": 3}}}, {"$sort": {"count": 1}} ], allowDiskUse = True # this can be useful when data does not fit in memory, e.g., to perform external sorting ) # note that the argument is a list of tuples # [(<field>: <type>), ...] 
# here, we specify only one such tuple for one field
restaurants.create_index([("borough", pm.ASCENDING)])

# compound index (more than one indexed fields)
restaurants.create_index([
    ("cuisine", pm.ASCENDING),
    ("address.zipcode", pm.DESCENDING)
])

restaurants.drop_index('borough_1') # drop this index
restaurants.drop_index('cuisine_1_address.zipcode_-1') # drop that index
restaurants.drop_indexes() # drop all indexes!!

restaurants.find_one()

restaurants.create_index([("address.coord", 1)])
restaurants.create_index([("grades.score", 1)])
restaurants.create_index([("grades.grade", 1), ("grades.score", 1)])
restaurants.create_index([("address.coord", 1), ("grades.score", 1)]) # NOPE!

restaurants.drop_indexes() # we drop all indexes first -- use this with care!
restaurants.create_index([("borough", pm.ASCENDING)]) # build an index on field "borough", in ascending order
my_cursor = restaurants.find({"borough": "Brooklyn"}) # submit query to find restaurants from specific borough
my_cursor.explain()["queryPlanner"]["winningPlan"] # ask mongodb to explain execution plan

restaurants.drop_indexes() # we drop all indexes first -- use this with care!
my_cursor = restaurants.find({"borough": "Brooklyn"}) # submit query to find restaurants from specific borough
my_cursor.explain()["queryPlanner"]["winningPlan"] # ask mongodb to explain execution plan

for a in restaurants.find({"borough": "Manhattan"}).limit(7):
    for b in restaurants.find({"borough": "Bronx"}).limit(5):
        if a["cuisine"] == b["cuisine"]:
            print(a["cuisine"], a["address"]["zipcode"], b["address"]["zipcode"])

# create first collection
orders_docs = [{ "_id" : 1, "item" : "abc", "price" : 12, "quantity" : 2 },
               { "_id" : 2, "item" : "jkl", "price" : 20, "quantity" : 1 },
               { "_id" : 3 }]
orders = db.orders
orders.drop()
orders.insert_many(orders_docs)

# create second collection
inventory_docs = [ { "_id" : 1, "item" : "abc", "description": "product 1", "instock" : 120 },
                   { "_id" : 2, "item" : "def", "description": "product 2", "instock" : 80 },
                   { "_id" : 3, "item" : "ijk", "description": "product 3", "instock" : 60 },
                   { "_id" : 4, "item" : "jkl", "description": "product 4", "instock" : 70 },
                   { "_id" : 5, "item": None, "description": "Incomplete" },
                   { "_id" : 6 } ]
inventory = db.inventory
inventory.drop()
inventory.insert_many(inventory_docs)

result = orders.aggregate([ # "orders" is the outer collection
    { "$lookup":
        {
            "from": "inventory",     # the inner collection
            "localField": "item",    # the join field of the outer collection
            "foreignField": "item",  # the join field of the inner collection
            "as": "inventory_docs"   # name of field with array of joined inner docs
        }
    }
])
print_my_docs(result, 10)

# using the $not operator
# "find restaurants that contain no grades that are not equal to A"
cursor = restaurants.find({"grades.grade": {"$exists": True},
                           "grades": {"$not": {"$elemMatch": {"grade": {"$ne": "A"}}}}})
print_my_docs(cursor, 3)

# simple example of a collection
mycoll = db.mycoll
mycoll.drop()

# insert five documents
mycoll.insert_one({"grades": [7, 7]})
mycoll.insert_one({"grades": [7, 3]})
mycoll.insert_one({"grades": [3, 3]})
mycoll.insert_one({"grades": []})
mycoll.insert_one({})

# find documents that have no "grades" elements that are not equal to 7
mycursor = mycoll.find({"grades": {"$not": {"$elemMatch": {"$ne": 7}}}})
print_my_docs(mycursor, 10)

# using aggregation
mycursor = restaurants.aggregate(
    [
        # unwind the grades array
        {"$unwind": "$grades"}, # now each document contains one "grades" value
        # group by document "_id" and count:
        #   (i) the total number of documents in each group as `count`
        #       -- this is the same as the number of elements in the original array
        #   (ii) the number of documents that satisfy the condition (grade = "A") as `num_satisfied`
        {"$group": {"_id": "$_id",
                    "count": {"$sum": 1},
                    "num_satisfied": {"$sum": {"$cond": [{"$eq": ["$grades.grade", "A"]}, 1, 0]}}}},
        # create a field (named `same_count`) that is 1 if (count = num_satisfied) and 0 otherwise
        {"$project": {"_id": 1,
                      "same_count": {"$cond": [{"$eq": ["$count", "$num_satisfied"]}, 1, 0]}}},
        # keep only the document ids for which (same_count = 1)
        {"$match": {"same_count": 1}}
    ]
)
print_my_docs(mycursor, 5)
<END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: "Hello World!" Step2: Documents follow the JSON format and MongoDB stores them in a binary version of it (BSON). Step3: Note that we can also use native Python objects, like the datetime object below, to specify values. Step4: Inserting and finding documents Step5: If we call the collection's function find(), we get back a cursor. Step6: We can use the cursor to iterate over all documents in the collection. Step7: Notice that the empty document we inserted is not really empty, but associated with an "_id" key, added by MongoDB. Step8: Notice how MongoDB added an "_id" for the new document, as well. Step9: Notice how the document we insert do not follow a schema? Step10: Projecting fields Step11: What if we're interested in keeping only some of the rest of the fields -- let's say, only "grades"? Step12: Loading a larger dataset Step13: Alternatively, you can import the dataset by running the same command on a terminal. Step14: Querying the Dataset Step15: Specify equality conditions Step16: Specify Range Conditions Step17: Multiple Conditions Step18: Sorting Step19: Aggregation Step20: Limiting the number of results Step21: Storing the result as a collection Step22: SQL to Aggregation Step23: SQL query Step24: SQL query Step25: SQL Query Step26: SQL Query Step27: Using secondary memory (disk) Step28: Indexing Step29: The index is created only if it does not already exist. Step30: Deleting indexes Step31: Multi-key index Step32: The following will not work! Step33: Retrieving the execution plan Step34: As we see in this example, MongoDB makes use of an index (as indicated by keyword "IXSCAN") -- and particularly the index ('borough_1') we constructed to execute the query. Step35: In that case, MongoDB simply performs a scan over the collection (as indicated by keyword "COLLSCAN"). Step36: Joins with \$lookup Step37: Questions from tutorial sessions Step38: Note on the semantics of the \$not operator Step39: The result of the following query contains documents that do not contain the "grades" field. Step40: We can remove such documents from the result as a post-processing step. (Exercise
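Note on the "SQL to Aggregation" steps above: the SQL statements themselves are not shown, so the pairing below is a representative sketch only (it assumes the same `restaurants` collection and field names used throughout this tutorial, not the exact queries from the original notebook):

# SQL (illustrative):
#   SELECT borough, cuisine, COUNT(*) AS count
#   FROM restaurants
#   GROUP BY borough, cuisine
#   HAVING COUNT(*) > 3;
#
# Equivalent aggregation pipeline:
cursor = restaurants.aggregate([
    {"$group": {"_id": {"borough": "$borough", "cuisine": "$cuisine"},  # GROUP BY borough, cuisine
                "count": {"$sum": 1}}},                                 # COUNT(*)
    {"$match": {"count": {"$gt": 3}}}                                   # HAVING COUNT(*) > 3
])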
3,294
<ASSISTANT_TASK:> Python Code: %matplotlib inline import numpy as np import matplotlib.pyplot as plt from compecon import BasisChebyshev, NLP, nodeunif from compecon.demos import demo alpha= 1.0; eta= 1.5; D = lambda p: p** (-eta) n= 25; a= 0.1; b= 3.0 S= BasisChebyshev(n, a, b, labels= ['price'], l=['supply']) p= S.nodes S.y= np.ones_like(p) def resid(c): S.c= c # update interpolation coefficients q= S(p) # compute quantity supplied at price nodes return p- q* (p** (eta+ 1)/ eta)- alpha* np.sqrt(q)- q** 2 cournot = NLP(resid) S.c = cournot.broyden(S.c, tol=1e-12) nFirms= 5; pplot = nodeunif(501, a, b) demo.figure('Cournot Effective Firm Supply Function', 'Quantity', 'Price', [0, nFirms], [a, b]) plt.plot(nFirms* S(pplot), pplot, D(pplot), pplot) plt.legend(('Supply','Demand')) plt.show(); p= pplot demo.figure('Residual Function for Cournot Problem', 'Quantity', 'Residual') plt.hlines(0, a, b, 'k', '--', lw= 2) plt.plot(pplot, resid(S.c)) plt.plot(S.nodes,np.zeros_like(S.nodes),'r*'); plt.show(); m= np.array([1, 3, 5, 10, 15, 20]) demo.figure('Supply and Demand Functions', 'Quantity', 'Price', [0, 13]) plt.plot(np.outer(S(pplot), m), pplot) plt.plot(D(pplot), pplot, linewidth= 2, color='black') plt.legend(['m= 1', 'm= 3', 'm= 5', 'm= 10', 'm= 15', 'm= 20', 'demand']); plt.show(); pp= (b+ a)/ 2 dp= (b- a)/ 2 m = np.arange(1, 26) for i in range(50): dp/= 2 pp= pp- np.sign(S(pp)* m- D(pp))* dp demo.figure('Cournot Equilibrium Price as Function of Industry Size', 'Number of Firms', 'Price') plt.bar(m, pp); plt.show(); <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: and set the $\alpha$ and $\eta$ parameters Step2: For convenience, we define a lambda function to represent the demand. Note Step3: We will approximate the solution for prices in the $p\in [a, b]$ interval, using 25 collocation nodes. The compecon library provides the BasisChebyshev class to make computations with Chebyshev bases Step4: Let's assume that our first guess is $S(p)=1$. To that end, we set the value of S to one in each of the nodes Step5: It is important to highlight that in this problem the unknowns are the $c_k$ coefficients from the Chebyshev basis; however, an object of BasisChebyshev class automatically adjusts those coefficients so they are consistent with the values we set for the function at the nodes (here indicated by the .y property). Step6: Note that the resid function takes a single argument (the coefficients for the Chebyshev basis). All other parameters (Q, p, eta, alpha must be declared in the main script, where Python will find their values. Step7: After 20 iterations, Broyden's method converges to the desired solution. We can visualize this in Figure 3, which shows the value of the function on 501 different points within the approximation interval. Notice that the residual plot crosses the horizontal axis 25 times; this occurs precisely at the collocation nodes (represented by red dots). This figure also shows the precision of the approximation Step8: Figure 3 Step9: Figure 4 Step10: In Figure 4 notice how the equilibrium price and quantity change as the number of firms increases.
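For reference, the residual driven to zero by Broyden's method can be written out directly from the `resid` function above (this is a transcription of the code, with $S(p)$ the Chebyshev approximation of the firm's supply and its coefficients $c_k$ the unknowns):

$$r(p_i) = p_i - S(p_i)\,\frac{p_i^{\eta+1}}{\eta} - \alpha\sqrt{S(p_i)} - S(p_i)^2 = 0, \qquad i = 1,\dots,25,$$

one equation per collocation node, which is why the residual plot crosses zero exactly at the 25 nodes.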
3,295
<ASSISTANT_TASK:> Python Code:
# Import libraries for numerical computation and data-frame manipulation
import numpy as np
import pandas as pd

# Import the library that provides access to resources via URL.
# import urllib             # for Python 2
import urllib.request       # for Python 3

# Import the library for drawing figures and graphs.
import matplotlib.pyplot as plt
%matplotlib inline

# Library for performing linear regression
from sklearn import linear_model

# Specify the resource on the web
url = 'https://raw.githubusercontent.com/chemo-wakate/tutorial-6th/master/beginner/data/winequality-red.txt'

# Download the resource from the specified URL and give it a name.
# urllib.urlretrieve(url, 'winequality-red.csv')        # for Python 2
urllib.request.urlretrieve(url, 'winequality-red.txt')  # for Python 3

# Load the data
df1 = pd.read_csv('winequality-red.txt', sep='\t', index_col=0)
df1.head() # show the first 5 rows

clf = linear_model.LinearRegression()

X = df1.loc[:, ['pH']].as_matrix() # explanatory variable = pH
pd.DataFrame(X).T # inspect the contents; transposed because a long vertical table is hard to read

Y = df1['fixed acidity'].as_matrix() # objective variable = fixed acidity
pd.DataFrame(Y).T # inspect the contents; transposed because a long vertical table is hard to read

clf.fit(X, Y) # fit the prediction model

# Regression coefficient
clf.coef_

# Intercept
clf.intercept_

# Coefficient of determination
clf.score(X, Y)

# Scatter plot
plt.scatter(X, Y)

# Regression line
plt.title('Linear regression')
plt.plot(X, clf.predict(X))
plt.xlabel('pH')
plt.ylabel('fixed acidity')
plt.grid()
plt.show()

clf = linear_model.LinearRegression()

# Explanatory variables: everything except "quality" (the quality score)
df1_except_quality = df1.drop('quality', axis=1)
X = df1_except_quality.as_matrix()

# Objective variable: "quality" (the quality score)
Y = df1['quality'].as_matrix()

# Fit the prediction model
clf.fit(X, Y)

# Intercept (error term)
print(clf.intercept_)

# Partial regression coefficients
pd.DataFrame({'Name':df1_except_quality.columns, 'Coefficients':clf.coef_}).sort_values('Coefficients', ascending=False)

clf.score(X, Y)

clf = linear_model.LinearRegression()

# Normalize each column of the data frame
df1s = df1.apply(lambda x: (x - np.mean(x)) / (np.max(x) - np.min(x)))

# Explanatory variables: everything except "quality" (the quality score)
df1s_except_quality = df1s.drop("quality", axis=1)
X = df1s_except_quality.as_matrix()

# Objective variable: "quality" (the quality score)
Y = df1s['quality'].as_matrix()

# Fit the prediction model
clf.fit(X, Y)

# Partial regression coefficients
pd.DataFrame({'Name':df1s_except_quality.columns, 'Coefficients':np.abs(clf.coef_)}).sort_values('Coefficients', ascending=False)

# Exercise 4.1
<END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: <h3 STYLE="background Step2: <h3 STYLE="background Step3: The results above show that the relationship between the explanatory variable and the objective variable can be expressed by the regression equation below. Step4: Plotting the result on two-dimensional coordinates gives the figure below; the blue line represents the regression line. Step5: <h3 STYLE="background Step6: From the results above, we can see that the wine quality score can be expressed by a regression equation of the form below. Step7: To check how strongly each variable influences the objective variable, normalize (standardize) each variable so that mean = 0 and standard deviation = 1, then run the multiple regression again; the magnitudes of the partial regression coefficients can then be compared. Step8: Looking at the normalized partial regression coefficients, alcohol (alcohol content) shows the largest value, indicating that it has a strong influence on quality.
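The "regression equation below" referred to in Steps 3 and 6 is not shown above; generically (the actual coefficient values come from `clf.intercept_` and `clf.coef_` and are not reproduced here) the two fits take the form

$$\text{fixed acidity} \approx a \cdot \text{pH} + b \qquad\text{and}\qquad \text{quality} \approx b_0 + \sum_i b_i x_i,$$

where the $x_i$ are the explanatory columns remaining after dropping `quality`.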
3,296
<ASSISTANT_TASK:> Python Code: # keras.datasets.imdb is broken in TensorFlow 1.13 and 1.14 due to numpy 1.16.3 !pip install numpy==1.16.2 # All the imports! import tensorflow as tf import numpy as np from tensorflow.keras.preprocessing import sequence from numpy import array # Supress deprecation warnings import logging logging.getLogger('tensorflow').disabled = True # Fetch "IMDB Movie Review" data, constraining our reviews to # the 10000 most commonly used words vocab_size = 10000 (x_train, y_train), (x_test, y_test) = tf.keras.datasets.imdb.load_data(num_words=vocab_size) # Map for readable classnames class_names = ["Negative", "Positive"] # Show the currently installed version of TensorFlow print("TensorFlow version: ",tf.version.VERSION) # Get the word index from the dataset word_index = tf.keras.datasets.imdb.get_word_index() # Ensure that "special" words are mapped into human readable terms word_index = {k:(v+3) for k,v in word_index.items()} word_index["<PAD>"] = 0 word_index["<START>"] = 1 word_index["<UNKNOWN>"] = 2 word_index["<UNUSED>"] = 3 # Perform reverse word lookup and make it callable # TODO reverse_word_index = dict([(value, key) for (key, value) in word_index.items()]) def decode_review(text): return ' '.join([reverse_word_index.get(i, '?') for i in text]) # Concatenate test and training datasets allreviews = np.concatenate((x_train, x_test), axis=0) # Review lengths across test and training whole datasets print("Maximum review length: {}".format(len(max((allreviews), key=len)))) print("Minimum review length: {}".format(len(min((allreviews), key=len)))) result = [len(x) for x in allreviews] print("Mean review length: {}".format(np.mean(result))) # Print a review and it's class as stored in the dataset. Replace the number # to select a different review. print("") print("Machine readable Review") print(" Review Text: " + str(x_train[60])) print(" Review Sentiment: " + str(y_train[60])) # Print a review and it's class in human readable format. Replace the number # to select a different review. print("") print("Human Readable Review") print(" Review Text: " + decode_review(x_train[60])) print(" Review Sentiment: " + class_names[y_train[60]]) # The length of reviews review_length = 500 # Padding / truncated our reviews x_train = sequence.pad_sequences(x_train, maxlen = review_length) x_test = sequence.pad_sequences(x_test, maxlen = review_length) # Check the size of our datasets. Review data for both test and training should # contain 25000 reviews of 500 integers. Class data should contain 25000 values, # one for each review. Class values are 0 or 1, indicating a negative # or positive review. print("Shape Training Review Data: " + str(x_train.shape)) print("Shape Training Class Data: " + str(y_train.shape)) print("Shape Test Review Data: " + str(x_test.shape)) print("Shape Test Class Data: " + str(y_test.shape)) # Note padding is added to start of review, not the end print("") print("Human Readable Review Text (post padding): " + decode_review(x_train[60])) # We begin by defining an empty stack. We'll use this for building our # network, later by layer. model = tf.keras.models.Sequential() # The Embedding Layer provides a spatial mapping (or Word Embedding) of all the # individual words in our training set. Words close to one another share context # and or meaning. This spatial mapping is learning during the training process. 
model.add( tf.keras.layers.Embedding( input_dim = vocab_size, # The size of our vocabulary output_dim = 32, # Dimensions to which each words shall be mapped input_length = review_length # Length of input sequences ) ) # Dropout layers fight overfitting and forces the model to learn multiple # representations of the same data by randomly disabling neurons in the # learning phase. # TODO model.add( tf.keras.layers.Dropout( rate=0.25 # Randomly disable 25% of neurons ) ) # We are using a fast version of LSTM which is optimised for GPUs. This layer # looks at the sequence of words in the review, along with their word embeddings # and uses both of these to determine the sentiment of a given review. # TODO model.add( tf.keras.layers.LSTM( units=32 # 32 LSTM units in this layer ) ) # Add a second dropout layer with the same aim as the first. # TODO model.add( tf.keras.layers.Dropout( rate=0.25 # Randomly disable 25% of neurons ) ) # All LSTM units are connected to a single node in the dense layer. A sigmoid # activation function determines the output from this node - a value # between 0 and 1. Closer to 0 indicates a negative review. Closer to 1 # indicates a positive review. model.add( tf.keras.layers.Dense( units=1, # Single unit activation='sigmoid' # Sigmoid activation function (output from 0 to 1) ) ) # Compile the model model.compile( loss=tf.keras.losses.binary_crossentropy, # loss function optimizer=tf.keras.optimizers.Adam(), # optimiser function metrics=['accuracy']) # reporting metric # Display a summary of the models structure model.summary() tf.keras.utils.plot_model(model, to_file='model.png', show_shapes=True, show_layer_names=False) # Train the LSTM on the training data history = model.fit( # Training data : features (review) and classes (positive or negative) x_train, y_train, # Number of samples to work through before updating the # internal model parameters via back propagation. The # higher the batch, the more memory you need. batch_size=256, # An epoch is an iteration over the entire training data. epochs=3, # The model will set apart his fraction of the training # data, will not train on it, and will evaluate the loss # and any model metrics on this data at the end of # each epoch. 
    validation_split=0.2,

    verbose=1
)

# Get Model Predictions for test data
from sklearn.metrics import classification_report

# The model has a single sigmoid output, so threshold the predicted probability at 0.5
# (taking an argmax over a single output column would always return 0).
predicted_classes = (model.predict(x_test) > 0.5).astype(int).flatten()

print(classification_report(y_test, predicted_classes, target_names=class_names))

predicted_classes_reshaped = np.reshape(predicted_classes, 25000)

incorrect = np.nonzero(predicted_classes_reshaped!=y_test)[0]

# We select the first 20 incorrectly classified reviews
for j, incorrect in enumerate(incorrect[0:20]):

    predicted = class_names[predicted_classes_reshaped[incorrect]]
    actual = class_names[y_test[incorrect]]
    human_readable_review = decode_review(x_test[incorrect])

    print("Incorrectly classified Test Review ["+ str(j+1) +"]")
    print("Test Review #" + str(incorrect) + ": Predicted ["+ predicted + "] Actual ["+ actual + "]")
    print("Test Review Text: " + human_readable_review.replace("<PAD> ", ""))
    print("")

# Write your own review
review = "this was a terrible film with too much sex and violence i walked out halfway through"
#review = "this is the best film i have ever seen it is great and fantastic and i loved it"
#review = "this was an awful film that i will never see again"

# Encode review (replace word with integers)
tmp = []
for word in review.split(" "):
    tmp.append(word_index[word])

# Ensure review is 500 words long (by padding or truncating)
tmp_padded = sequence.pad_sequences([tmp], maxlen=review_length)

# Run your processed review against the trained model
rawprediction = model.predict(array([tmp_padded][0]))[0][0]
prediction = int(round(rawprediction))

# Test the model and print the result
print("Review: " + review)
print("Raw Prediction: " + str(rawprediction))
print("Predicted Class: " + class_names[prediction])
<END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Note Step2: Create map for converting IMDB dataset to readable reviews Step3: Data Insight Step4: Pre-processing Data Step5: Create and build LSTM Recurrent Neural Network Step6: Visualise the Model Step7: Train the LSTM Step8: Evaluate model with test data and view results Step9: View some incorrect predictions Step10: Run your own text against the trained model
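The pre-processing step pads or truncates every review to 500 tokens, and the code notes that padding is added to the start of a review rather than the end. A minimal, data-independent check of that default `pad_sequences` behaviour (pre-padding and pre-truncating):

from tensorflow.keras.preprocessing import sequence

toy = [[1, 2, 3], [4, 5, 6, 7, 8, 9]]
print(sequence.pad_sequences(toy, maxlen=5))
# [[0 0 1 2 3]    <- zeros prepended (padding='pre' is the default)
#  [5 6 7 8 9]]   <- long sequences are cut from the front (truncating='pre' is the default)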
3,297
<ASSISTANT_TASK:> Python Code: import pandas as pd # For monitoring duration of pandas processes from tqdm import tqdm, tqdm_pandas # To avoid RuntimeError: Set changed size during iteration tqdm.monitor_interval = 0 # Register `pandas.progress_apply` and `pandas.Series.map_apply` with `tqdm` # (can use `tqdm_gui`, `tqdm_notebook`, optional kwargs, etc.) tqdm.pandas(desc="Progress:") # Now you can use `progress_apply` instead of `apply` # and `progress_map` instead of `map` # can also groupby: # df.groupby(0).progress_apply(lambda x: x**2) # df0 = pd.read_pickle('../data/interim/004_synonyms_grouped_1k.p') df0 = pd.read_pickle('../data/interim/002_keyed_nouns.p') df0.head() dictionary_df00 = pd.read_pickle('../data/interim/003_dictionary.p') len(dictionary_df00) dictionary_df00.head() dictionary_df00.loc[dictionary_df00['frequency'] > 5].describe() dictionary_df00['word'].loc[dictionary_df00['frequency'] > 4].count() gt4_dictionary_df01 = dictionary_df00.loc[dictionary_df00['frequency'] > 4] dictionary_df00['frequency'].loc[dictionary_df00['frequency'] > 4].describe() # Use threshold for first quantile final_dic = gt4_dictionary_df01.loc[dictionary_df00['frequency'] < 8] len(final_dic) final_dic_df01 = final_dic.assign(normalised = final_dic['frequency'].progress_apply(lambda frequency:frequency/486)) final_dic_df01.head() df0.head() df1 = pd.DataFrame(df0.uniqueKey.str.split('##',1).tolist(),columns = ['userId','asin']) df1.head() df_reviewText = pd.DataFrame(df0['reviewText']) df_reviewText.head() df_new = pd.concat([df1, df_reviewText], axis=1) df_new.head() df_new_01 = df_new.assign(wordCountBefore = df_new['reviewText'].progress_apply(lambda review:len(review))) df_new_01.head() final_dic_df01['word'] = final_dic_df01['word'].progress_apply(lambda word: word.replace(" ","")) final_dic_df01 = final_dic_df01.reset_index() final_dic_df01.head() filtered_dict = final_dic_df01['word'].to_dict() inv_filtered_dict = {v: k for k, v in filtered_dict.items()} inv_filtered_dict def filter_words(review): new_review = [] for word in review: word = word.strip() if word in inv_filtered_dict: new_review.append(word) return new_review df_new_02 = df_new_01.assign(filteredText = df_new_01['reviewText'].progress_apply(lambda review:filter_words(review))) df_new_03 = df_new_02.assign(wordCountAfter = df_new_02['filteredText'].progress_apply(lambda review:len(review))) df_new_03[0:20] remaining = 1 - df_new_03['wordCountAfter'].sum() / df_new_03['wordCountBefore'].sum() print("Average noun reduction achieved:" + str(remaining*100) + "%") df_books_bigReviews = pd.DataFrame(df_new_03[['asin','filteredText']].groupby(['asin'])['filteredText'].progress_apply(list)) df_books_bigReviews = df_books_bigReviews.reset_index() df_books_bigReviews = df_books_bigReviews.assign(transactions = df_books_bigReviews['filteredText'].progress_apply(lambda reviews_lis:len(reviews_lis))) df_books_bigReviews.head() from apyori import apriori # Support # Support is an indication of how frequently the itemset appears in the dataset. # Confidence # Confidence is an indication of how often the rule has been found to be true. # Lift # The ratio of the observed support to that expected if X and Y were independent. 
def apply_arm(transactions): return list(apriori(transactions, min_support = 1/len(transactions), min_confidence = 1, min_lift = len(transactions), max_length = 4)) books_with_arm = df_books_bigReviews.assign(arm = df_books_bigReviews['filteredText'].progress_apply(lambda list_of_reviews:apply_arm(list_of_reviews))) books_with_arm.head() def get_important_nouns(arms): imp_nns = [] if "items" in pd.DataFrame(arms).keys(): results = list(pd.DataFrame(arms)['items']) for result in results: if len(list(result)) > 4: imp_nns = imp_nns + list(list(result)) if(len(imp_nns)==0): for result in results: if len(list(result)) > 3: imp_nns = imp_nns + list(list(result)) return list(set(imp_nns)) return list(set(imp_nns)) imp_nns_df = books_with_arm.assign(imp_nns = books_with_arm['arm'] .progress_apply(lambda arms:get_important_nouns(arms))) imp_nns_df.head() imp_nns_df = imp_nns_df[['asin','imp_nns']] imp_nns_df.head() imp_nns_df.to_pickle("../data/interim/005_important_nouns.p") imp_nns_df = imp_nns_df.assign(num_of_imp_nouns = imp_nns_df['imp_nns'].progress_apply(lambda imp_nouns:len(imp_nouns))) imp_nns_df.head() import plotly import plotly.plotly as py from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot import cufflinks as cf print(cf.__version__) # Configure cufflings cf.set_config_file(offline=False, world_readable=True, theme='pearl') # Filter out synonyms again booksWithNoImportantNouns = imp_nns_df.loc[imp_nns_df['num_of_imp_nouns'] == 0] len(booksWithNoImportantNouns) booksWithNoImportantNouns = imp_nns_df.loc[imp_nns_df['num_of_imp_nouns'] != 0] len(booksWithNoImportantNouns) booksWithNoImportantNouns[0:20] booksWithNoImportantNouns['num_of_imp_nouns'].iplot(kind='histogram', bins=100, xTitle='Number of Important Nouns', yTitle='Number of Books') booksWithNoImportantNouns.describe() <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: The idea Step2: Begin noun filtering Step3: Association Rules Mining Filtering Step4: Some more stats
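For reference, the thresholds passed to `apriori` above (`min_support`, `min_confidence`, `min_lift`) correspond to the standard association-rule metrics; for a rule $X \Rightarrow Y$ mined from $N$ transactions:

$$\mathrm{supp}(X \Rightarrow Y) = \frac{|\{t : X \cup Y \subseteq t\}|}{N}, \qquad \mathrm{conf}(X \Rightarrow Y) = \frac{\mathrm{supp}(X \cup Y)}{\mathrm{supp}(X)}, \qquad \mathrm{lift}(X \Rightarrow Y) = \frac{\mathrm{conf}(X \Rightarrow Y)}{\mathrm{supp}(Y)},$$

which makes explicit how the per-book thresholds in `apply_arm` scale with the number of reviews (transactions) available for each book.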
3,298
<ASSISTANT_TASK:> Python Code: # code for loading the format for the notebook import os # path : store the current path to convert back to it later path = os.getcwd() os.chdir(os.path.join('..', '..', 'notebook_format')) from formats import load_style load_style(plot_style=False) os.chdir(path) # 1. magic for inline plot # 2. magic to print version # 3. magic so that the notebook will reload external python modules # 4. magic to enable retina (high resolution) plots # https://gist.github.com/minrk/3301035 %matplotlib inline %load_ext watermark %load_ext autoreload %autoreload 2 %config InlineBackend.figure_format='retina' import time import fasttext import numpy as np import pandas as pd import matplotlib.pyplot as plt # prevent scientific notations pd.set_option('display.float_format', lambda x: '%.3f' % x) %watermark -a 'Ethen' -d -t -v -p numpy,pandas,matplotlib,fasttext,scipy # download the data and un-tar it under the 'data' folder # -P or --directory-prefix specifies which directory to download the data to !wget https://dl.fbaipublicfiles.com/fasttext/data/cooking.stackexchange.tar.gz -P data # -C specifies the target directory to extract an archive to !tar xvzf data/cooking.stackexchange.tar.gz -C data !head -n 3 data/cooking.stackexchange.txt # train/test split from fasttext_module.split import train_test_split_file from fasttext_module.utils import prepend_file_name data_dir = 'data' test_size = 0.2 input_path = os.path.join(data_dir, 'cooking.stackexchange.txt') input_path_train = prepend_file_name(input_path, 'train') input_path_test = prepend_file_name(input_path, 'test') random_state = 1234 encoding = 'utf-8' train_test_split_file(input_path, input_path_train, input_path_test, test_size, random_state, encoding) print('train path: ', input_path_train) print('test path: ', input_path_test) # train the fasttext model fasttext_params = { 'input': input_path_train, 'lr': 0.1, 'lrUpdateRate': 1000, 'thread': 8, 'epoch': 15, 'wordNgrams': 1, 'dim': 80, 'loss': 'ova' } model = fasttext.train_supervised(**fasttext_params) print('vocab size: ', len(model.words)) print('label size: ', len(model.labels)) print('example vocab: ', model.words[:5]) print('example label: ', model.labels[:5]) model_checkpoint = os.path.join('model', 'model.fasttext') model.save_model(model_checkpoint) # model.get_input_matrix().shape print('output matrix shape: ', model.get_output_matrix().shape) model.get_output_matrix() from scipy.cluster.vq import kmeans2, vq def compute_code_books(vectors, sub_size=2, n_cluster=128, n_iter=20, minit='points', seed=123): n_rows, n_cols = vectors.shape n_sub_cols = n_cols // sub_size np.random.seed(seed) code_books = np.zeros((sub_size, n_cluster, n_sub_cols), dtype=np.float32) for subspace in range(sub_size): sub_vectors = vectors[:, subspace * n_sub_cols:(subspace + 1) * n_sub_cols] centroid, label = kmeans2(sub_vectors, n_cluster, n_iter, minit=minit) code_books[subspace] = centroid return code_books sub_size = 2 # m n_cluster = 64 # k # learning the cluster centroids / code books for our output matrix/embedding code_books = compute_code_books(model.get_output_matrix(), sub_size, n_cluster) print('code book size: ', code_books.shape) def encode(vectors, code_books): n_rows, n_cols = vectors.shape sub_size = code_books.shape[0] n_sub_cols = n_cols // sub_size codes = np.zeros((n_rows, sub_size), dtype=np.int32) for subspace in range(sub_size): sub_vectors = vectors[:, subspace * n_sub_cols:(subspace + 1) * n_sub_cols] code, dist = vq(sub_vectors, code_books[subspace]) 
codes[:, subspace] = code return codes # our original embedding now becomes the cluster centroid for each subspace vector_codes = encode(model.get_output_matrix(), code_books) print('encoded vector codes size: ', vector_codes.shape) vector_codes (vector_codes.nbytes + code_books.nbytes) / model.get_output_matrix().nbytes # we'll get one of the labels to find its nearest neighbors label_id = 0 print(model.labels[label_id]) query = model.get_output_matrix()[label_id] query.shape # printing out the shape of the code book to hopefully make it easier code_books.shape def query_dist_table(query, code_books): sub_size, n_cluster, n_sub_cols = code_books.shape dist_table = np.zeros((sub_size, n_cluster)) for subspace in range(sub_size): sub_query = query[subspace * n_sub_cols:(subspace + 1) * n_sub_cols] diff = code_books[subspace] - sub_query.reshape(1, -1) diff = np.sum(diff ** 2, axis=1) dist_table[subspace, :] = diff return dist_table dist_table = query_dist_table(query, code_books) print(dist_table.shape) dist_table[:, :5] # lookup the distance dists = np.sum(dist_table[range(sub_size), vector_codes], axis=1) dists[:5] # the numpy indexing trick is equivalent to the following loop approach n_rows = vector_codes.shape[0] dists = np.zeros(n_rows).astype(np.float32) for n in range(n_rows): for m in range(sub_size): dists[n] += dist_table[m][vector_codes[n][m]] dists[:5] # find the nearest neighbors and "translate" it to the original labels k = 5 nearest = np.argsort(dists)[:k] [model.labels[label] for label in nearest] from typing import Dict def score(input_path_train: str, input_path_test: str, model: fasttext.FastText._FastText, k: int, round_digits: int=3) -> Dict[str, float]: file_path_dict = { 'train': input_path_train, 'test': input_path_test } result = {} for group, file_path in file_path_dict.items(): num_records, precision_at_k, recall_at_k = model.test(file_path, k) f1_at_k = 2 * (precision_at_k * recall_at_k) / (precision_at_k + recall_at_k) metric = { f'{group}_precision@{k}': round(precision_at_k, round_digits), f'{group}_recall@{k}': round(recall_at_k, round_digits), f'{group}_f1@{k}': round(f1_at_k, round_digits) } result.update(metric) return result k = 1 result = score(input_path_train, input_path_test, model, k) result def compute_file_size(file_path: str) -> str: Calculate the file size and format it into a human readable string. References ---------- https://stackoverflow.com/questions/2104080/how-can-i-check-file-size-in-python file_size = compute_raw_file_size(file_path) file_size_str = convert_bytes(file_size) return file_size_str def compute_raw_file_size(file_path: str) -> int: Calculate the file size in bytes. file_info = os.stat(file_path) return file_info.st_size def convert_bytes(num: int) -> str: Convert bytes into more human readable MB, GB, etc. for unit in ['bytes', 'KB', 'MB', 'GB', 'TB']: if num < 1024.0: return "%3.1f %s" % (num, unit) num /= 1024.0 compute_file_size(model_checkpoint) dsubs = [-1, 2, 4, 8] results = [] for dsub in dsubs: # ensure we are always loading from the original model, # i.e. 
do not over-ride the model_checkpoint variable fasttext_model = fasttext.load_model(model_checkpoint) if dsub > 0: dir_name = os.path.dirname(model_checkpoint) model_path = os.path.join(dir_name, f'model_quantized_dsub{dsub}.fasttext') # qnorm, normalized the vector and quantize it fasttext_model.quantize(dsub=dsub, qnorm=True) fasttext_model.save_model(model_path) else: model_path = model_checkpoint result = score(input_path_train, input_path_test, fasttext_model, k) result['dsub'] = dsub result['file_size'] = compute_raw_file_size(model_path) results.append(result) df_results = pd.DataFrame.from_dict(results) df_results # change default style figure and font size plt.rcParams['figure.figsize'] = 15, 6 plt.rcParams['font.size'] = 12 fig, (ax1, ax2) = plt.subplots(1, 2) fig.suptitle('Fasttext Quantization Experiments') ax1.plot(df_results['dsub'], df_results['file_size']) ax1.set_title('dsub versus file size') ax1.set_xlabel('dsub') ax1.set_ylabel('file size (bytes)') ax2.plot(df_results['dsub'], df_results['test_precision@1']) ax2.set_title('dsub versus test precision@1') ax2.set_xlabel('dsub') ax2.set_ylabel('precision@1') plt.show() <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Product Quantization for Model Compression Step2: Product Quantization from Scratch Step3: Encode Step4: We can calculate the potential size/memory savings if we were to go from storing the original vectors to storing the encoded codes and the code book. Step5: Instead of directly compressing the original embedding, we can also learn the codebooks and compress all new incoming embeddings on the fly. Step6: To do so, we'll be computing the distance between each subspace of the query and the cluster centroids of that subspace, giving us a $m \times k$ distance table, where each entry denotes the squared Euclidean distance between the $m_{th}$ subvector of the query and the $k_{th}$ code/cluster centroid for that $m_{th}$ subvector. Step7: Then, assuming the original vectors have already been encoded in advance, we can look up the distances for each cluster centroid and add them up. Step11: The approach illustrated here is a naive one, as it still involves calculating the distances to all the vectors, which can be inefficient for large $n$ (number of data points). We won't be discussing how to speed up the nearest neighbor search process for product quantization, as this documentation is more focused on the compression aspect. Step12: For this part of the experiment, we'll tweak the parameter dsub, the dimension of each subvector. Remember that this is one of the main parameters that control the tradeoff between the compression ratio and the amount of distortion (deviation from the original vector). Step13: We can visualize the table results. Our main observation is that setting dsub to 2 seems to give the most memory reduction while preserving most of the model's performance.
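The lookup in Step 7 is the usual asymmetric distance computation for product quantization: with the query split into $m$ sub-vectors and each stored vector $x$ represented only by its per-subspace centroid indices $k_j(x)$, the squared distance is approximated as

$$d(q, x)^2 \;\approx\; \sum_{j=1}^{m} \big\lVert q_j - c_{j,\,k_j(x)} \big\rVert^2,$$

where $c_{j,k}$ is the $k$-th centroid of subspace $j$ — exactly the per-subspace squared distances precomputed in `dist_table` and then summed over subspaces for every encoded vector.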
3,299
<ASSISTANT_TASK:> Python Code: # Installing sotware prerequisites via the python package index: !pip install -U numpy matplotlib sklearn pysptools wget #Import packages # Ensure that this code works on both python 2 and python 3 from __future__ import division, print_function, absolute_import, unicode_literals # basic numeric computation: import numpy as np # The package used for creating and manipulating HDF5 files: import h5py # Plotting and visualization: import matplotlib.pyplot as plt from mpl_toolkits.axes_grid1 import make_axes_locatable # for downloading files: import wget import os # multivariate analysis: from sklearn.decomposition import NMF from pysptools import eea import pysptools.abundance_maps as amp from pysptools.eea import nfindr # finally import pycroscopy: import pycroscopy as px # configure the notebook to place plots after code cells within the notebook: %matplotlib inline # download the data file from Github: url = 'https://raw.githubusercontent.com/pycroscopy/pycroscopy/master/data/NanoIR.txt' data_file_path = 'temp.txt' _ = wget.download(url, data_file_path) #data_file_path = px.io.uiGetFile(filter='Anasys NanoIR text export (*.txt)') # Load the data from file to memory data_mat = np.loadtxt(data_file_path, delimiter ='\t', skiprows =1 ) print('Data currently of shape:', data_mat.shape) # Only every fifth column is of interest (position) data_mat = data_mat[:, 1::5] # The data is structured as [wavelength, position] # nans cannot be handled in most of these decompositions. So set them to be zero. data_mat[np.isnan(data_mat)]=0 # Finally, taking the transpose of the matrix to match [position, wavelength] data_mat = data_mat.T num_pos = data_mat.shape[0] spec_pts = data_mat.shape[1] print('Data currently of shape:', data_mat.shape) x_label = 'Spectral dimension' y_label = 'Intensity (a.u.)' folder_path, file_name = os.path.split(data_file_path) file_name = file_name[:-4] + '_' h5_path = os.path.join(folder_path, file_name + '.h5') # Use NumpyTranslator to convert the data to h5 tran = px.io.NumpyTranslator() h5_path = tran.translate(h5_path, data_mat, num_pos, 1, scan_height=spec_pts, scan_width=1, qty_name='Intensity', data_unit='a.u', spec_name=x_label, spatial_unit='a.u.', data_type='NanoIR') h5_file = h5py.File(h5_path, mode='r+') # See if a tree has been created within the hdf5 file: px.hdf_utils.print_tree(h5_file) h5_main = h5_file['Measurement_000/Channel_000/Raw_Data'] h5_spec_vals = px.hdf_utils.getAuxData(h5_main,'Spectroscopic_Values')[0] h5_pos_vals = px.hdf_utils.getAuxData(h5_main,'Position_Values')[0] x_label = px.hdf_utils.get_formatted_labels(h5_spec_vals)[0] y_label = px.hdf_utils.get_formatted_labels(h5_pos_vals)[0] descriptor = px.hdf_utils.get_data_descriptor(h5_main) fig, axis = plt.subplots(figsize=(8,5)) px.plot_utils.plot_map(axis, h5_main, cmap='inferno') axis.set_title('Raw data - ' + descriptor) axis.set_xlabel(x_label) axis.set_ylabel(y_label) vec = h5_spec_vals[0] cur_x_ticks = axis.get_xticks() for ind in range(1,len(cur_x_ticks)-1): cur_x_ticks[ind] = h5_spec_vals[0, ind] axis.set_xticklabels([str(val) for val in cur_x_ticks]); h5_svd_grp = px.processing.doSVD(h5_main) U = h5_svd_grp['U'] S = h5_svd_grp['S'] V = h5_svd_grp['V'] # Visualize the variance / statistical importance of each component: px.plot_utils.plotScree(S, title='Note the exponential drop of variance with number of components') # Visualize the eigenvectors: px.plot_utils.plot_loops(np.arange(spec_pts), V, x_label=x_label, y_label=y_label, plots_on_side=3, 
subtitles='Component', title='SVD Eigenvectors', evenly_spaced=False); # Visualize the abundance maps: px.plot_utils.plot_loops(np.arange(num_pos), np.transpose(U), plots_on_side=3, subtitles='Component', title='SVD Abundances', evenly_spaced=False); num_comps = 4 estimators = px.Cluster(h5_main, 'KMeans', num_comps=num_comps) h5_kmeans_grp = estimators.do_cluster(h5_main) h5_kmeans_labels = h5_kmeans_grp['Labels'] h5_kmeans_mean_resp = h5_kmeans_grp['Mean_Response'] fig, axes = plt.subplots(ncols=2,figsize=(18,8)) for clust_ind, end_member in enumerate(h5_kmeans_mean_resp): axes[0].plot(end_member+(500*clust_ind), label = 'Cluster #' + str(clust_ind)) axes[0].legend(bbox_to_anchor = [1.05,1.0], fontsize=12) axes[0].set_title('K-Means Cluster Centers', fontsize=14) axes[0].set_xlabel(x_label, fontsize=14) axes[0].set_ylabel(y_label, fontsize=14) axes[1].plot(h5_kmeans_labels) axes[1].set_title('KMeans Labels', fontsize=14) axes[1].set_xlabel('Position', fontsize=14) axes[1].set_ylabel('Label'); num_comps = 4 # Make sure the data is non-negative: data_mat[h5_main[()] < 0] = 0 model = NMF(n_components=num_comps, init='random', random_state=0) model.fit(data_mat) fig, axis = plt.subplots() for comp_ind, end_member in enumerate(model.components_): axis.plot(end_member + comp_ind * 50, label = 'NMF Component #' + str(comp_ind)) axis.set_xlabel(x_label, fontsize=12) axis.set_ylabel(y_label, fontsize=12) axis.set_title('NMF Components', fontsize=14) axis.legend(bbox_to_anchor=[1.0,1.0], fontsize=12); num_comps = 4 nfindr_results = eea.nfindr.NFINDR(data_mat, num_comps) #Find endmembers end_members = nfindr_results[0] fig, axis = plt.subplots() for comp_ind, end_member in enumerate(end_members): axis.plot(end_member + comp_ind * 1000, label = 'NFINDR Component #' + str(comp_ind)) axis.set_title('NFINDR Endmembers', fontsize=14) axis.set_xlabel(x_label, fontsize=12) axis.set_ylabel(y_label, fontsize=12) axis.legend(bbox_to_anchor=[1.0,1.0], fontsize=12) # fully constrained least squares model: fcls = amp.FCLS() # Find abundances: amap = fcls.map(data_mat[np.newaxis, :, :], end_members) # Reshaping amap to match those of conventional endmembers amap = np.squeeze(amap).T fig2, axis2 = plt.subplots() for comp_ind, abundance in enumerate(amap): axis2.plot(abundance, label = 'NFIND R Component #' + str(comp_ind) ) axis2.set_title('Abundances', fontsize=14) axis2.set_xlabel(x_label, fontsize=12) axis2.set_ylabel('Abundance (a. u.)', fontsize=12) axis2.legend(bbox_to_anchor=[1.0,1.0], fontsize=12); # Delete the temporarily downloaded file os.remove(data_file_path) # Close and delete the h5_file h5_file.close() os.remove(h5_path) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: The Data Step2: Convert to H5 Step3: Extracting the data and parameters Step4: Visualize the Amplitude Data Step5: 1. Singular Value Decomposition (SVD) Step6: 2. KMeans Clustering Step7: 3. Non-negative Matrix Factorization (NMF) Step8: 4. NFINDR
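As a generic summary (not additional output from this notebook): the four analyses are all low-rank views of the same positions-by-spectra matrix $X$. SVD factors it exactly as

$$X = U S V^\top$$

(abundance maps from the columns of $U$, spectral eigenvectors from the rows of $V$), while NMF seeks an approximate non-negative factorization

$$X \approx W H, \qquad W \ge 0,\; H \ge 0,$$

and k-means and N-FINDR instead summarize the rows of $X$ by a small set of representative spectra (cluster means or endmembers).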