Test your content loss. You should see errors less than 0.001.
def content_loss_test(correct):
    content_image = 'styles/tubingen.jpg'
    image_size = 192
    content_layer = 3
    content_weight = 6e-2

    c_feats, content_img_var = features_from_img(content_image, image_size)
    bad_img = Variable(torch.zeros(*content_img_var.data.size()))
    feats = extract_features(bad_img, cnn)

    student_output = content_loss(content_weight, c_feats[content_layer],
                                  feats[content_layer]).data.numpy()
    error = rel_error(correct, student_output)
    print('Maximum error is {:.3f}'.format(error))

content_loss_test(answers['cl_out'])
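The test above assumes a `content_loss(weight, current_features, original_features)` function implemented earlier in the assignment. As a reference point, here is a minimal sketch of the standard content loss (a weighted sum of squared feature differences), written with NumPy arrays standing in for the notebook's PyTorch Variables:

```python
import numpy as np

def content_loss(content_weight, content_current, content_original):
    # Weighted squared Euclidean distance between the two feature maps.
    return content_weight * np.sum((content_current - content_original) ** 2)
```

In the actual assignment the same expression is written with PyTorch tensor operations so that gradients can flow back to the image.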
CS231n/assignment3/StyleTransfer-PyTorch.ipynb
UltronAI/Deep-Learning
mit
Style loss

Now we can tackle the style loss. For a given layer $\ell$, the style loss is defined as follows:

First, compute the Gram matrix $G$, which represents the correlations between the responses of each filter, where $F$ is as above. The Gram matrix is an approximation to the covariance matrix -- we want the activation statistics of our generated image to match the activation statistics of our style image, and matching the (approximate) covariance is one way to do that. There are a variety of ways you could do this, but the Gram matrix is nice because it's easy to compute and in practice shows good results.

Given a feature map $F^\ell$ of shape $(1, C_\ell, M_\ell)$, the Gram matrix has shape $(1, C_\ell, C_\ell)$ and its elements are given by:

$$G_{ij}^\ell = \sum_k F^{\ell}_{ik} F^{\ell}_{jk}$$

Assuming $G^\ell$ is the Gram matrix from the feature map of the current image, $A^\ell$ is the Gram matrix from the feature map of the source style image, and $w_\ell$ is a scalar weight term, then the style loss for the layer $\ell$ is simply the weighted Euclidean distance between the two Gram matrices:

$$L_s^\ell = w_\ell \sum_{i, j} \left(G^\ell_{ij} - A^\ell_{ij}\right)^2$$

In practice we usually compute the style loss at a set of layers $\mathcal{L}$ rather than just a single layer $\ell$; then the total style loss is the sum of style losses at each layer:

$$L_s = \sum_{\ell \in \mathcal{L}} L_s^\ell$$

Begin by implementing the Gram matrix computation below:
def gram_matrix(features, normalize=True):
    """
    Compute the Gram matrix from features.

    Inputs:
    - features: PyTorch Variable of shape (N, C, H, W) giving features for
      a batch of N images.
    - normalize: optional, whether to normalize the Gram matrix.
      If True, divide the Gram matrix by the number of neurons (H * W * C).

    Returns:
    - gram: PyTorch Variable of shape (N, C, C) giving the
      (optionally normalized) Gram matrices for the N input images.
    """
    pass
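A minimal sketch of the idea, using NumPy arrays in place of the notebook's PyTorch Variables: flatten each feature map to $(N, C, H \cdot W)$ and take the inner products of the channel responses. The PyTorch solution uses the same reshape-and-batched-matmul pattern.

```python
import numpy as np

def gram_matrix(features, normalize=True):
    # features: array of shape (N, C, H, W).
    N, C, H, W = features.shape
    F = features.reshape(N, C, H * W)
    # Batched matrix product F @ F^T gives an (N, C, C) Gram matrix.
    gram = np.matmul(F, F.transpose(0, 2, 1))
    if normalize:
        # Divide by the number of neurons, as the docstring specifies.
        gram = gram / (H * W * C)
    return gram
```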
Test your Gram matrix code. You should see errors less than 0.001.
def gram_matrix_test(correct):
    style_image = 'styles/starry_night.jpg'
    style_size = 192
    feats, _ = features_from_img(style_image, style_size)
    student_output = gram_matrix(feats[5].clone()).data.numpy()
    error = rel_error(correct, student_output)
    print('Maximum error is {:.3f}'.format(error))

gram_matrix_test(answers['gm_out'])
Next, implement the style loss:
# Now put it together in the style_loss function...
def style_loss(feats, style_layers, style_targets, style_weights):
    """
    Computes the style loss at a set of layers.

    Inputs:
    - feats: list of the features at every layer of the current image, as
      produced by the extract_features function.
    - style_layers: List of layer indices into feats giving the layers to
      include in the style loss.
    - style_targets: List of the same length as style_layers, where
      style_targets[i] is a PyTorch Variable giving the Gram matrix of the
      source style image computed at layer style_layers[i].
    - style_weights: List of the same length as style_layers, where
      style_weights[i] is a scalar giving the weight for the style loss at
      layer style_layers[i].

    Returns:
    - style_loss: A PyTorch Variable holding a scalar giving the style loss.
    """
    # Hint: you can do this with one for loop over the style layers, and it
    # should not be very much code (~5 lines). You will need to use your
    # gram_matrix function.
    pass
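As a sketch of the loop the hint describes, here is the style loss with NumPy arrays standing in for PyTorch Variables (a small `gram_matrix` is included so the snippet is self-contained; the real solution uses your PyTorch `gram_matrix` from above):

```python
import numpy as np

def gram_matrix(features, normalize=True):
    N, C, H, W = features.shape
    F = features.reshape(N, C, H * W)
    G = np.matmul(F, F.transpose(0, 2, 1))
    return G / (H * W * C) if normalize else G

def style_loss(feats, style_layers, style_targets, style_weights):
    # Accumulate the weighted squared distance between the Gram matrix of
    # the current image and the precomputed target Gram matrix, per layer.
    loss = 0.0
    for i, layer in enumerate(style_layers):
        G = gram_matrix(feats[layer])
        loss += style_weights[i] * np.sum((G - style_targets[i]) ** 2)
    return loss
```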
Test your style loss implementation. The error should be less than 0.001.
def style_loss_test(correct):
    content_image = 'styles/tubingen.jpg'
    style_image = 'styles/starry_night.jpg'
    image_size = 192
    style_size = 192
    style_layers = [1, 4, 6, 7]
    style_weights = [300000, 1000, 15, 3]

    c_feats, _ = features_from_img(content_image, image_size)
    feats, _ = features_from_img(style_image, style_size)
    style_targets = []
    for idx in style_layers:
        style_targets.append(gram_matrix(feats[idx].clone()))

    student_output = style_loss(c_feats, style_layers, style_targets,
                                style_weights).data.numpy()
    error = rel_error(correct, student_output)
    print('Error is {:.3f}'.format(error))

style_loss_test(answers['sl_out'])
Total-variation regularization

It turns out that it's helpful to also encourage smoothness in the image. We can do this by adding another term to our loss that penalizes wiggles, or "total variation", in the pixel values. You can compute the "total variation" as the sum of the squares of differences in the pixel values for all pairs of pixels that are next to each other (horizontally or vertically). Here we sum the total-variation regularization over each of the 3 input channels (RGB), and weight the total summed loss by the total variation weight, $w_t$:

$L_{tv} = w_t \times \sum_{c=1}^3\sum_{i=1}^{H-1} \sum_{j=1}^{W-1} \left( (x_{i,j+1,c} - x_{i,j,c})^2 + (x_{i+1,j,c} - x_{i,j,c})^2 \right)$

In the next cell, fill in the definition for the TV loss term. To receive full credit, your implementation should not have any loops.
def tv_loss(img, tv_weight):
    """
    Compute total variation loss.

    Inputs:
    - img: PyTorch Variable of shape (1, 3, H, W) holding an input image.
    - tv_weight: Scalar giving the weight w_t to use for the TV loss.

    Returns:
    - loss: PyTorch Variable holding a scalar giving the total variation loss
      for img weighted by tv_weight.
    """
    # Your implementation should be vectorized and not require any loops!
    pass
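The loop-free trick is to subtract shifted slices of the image from itself. A sketch with NumPy in place of PyTorch Variables (the same slicing works on tensors):

```python
import numpy as np

def tv_loss(img, tv_weight):
    # img: array of shape (1, 3, H, W).
    # Squared differences between horizontally adjacent pixels (along W)...
    h_var = np.sum((img[:, :, :, 1:] - img[:, :, :, :-1]) ** 2)
    # ...and between vertically adjacent pixels (along H).
    v_var = np.sum((img[:, :, 1:, :] - img[:, :, :-1, :]) ** 2)
    return tv_weight * (h_var + v_var)
```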
Test your TV loss implementation. Error should be less than 0.001.
def tv_loss_test(correct):
    content_image = 'styles/tubingen.jpg'
    image_size = 192
    tv_weight = 2e-2
    content_img = preprocess(PIL.Image.open(content_image), size=image_size)
    content_img_var = Variable(content_img.type(dtype))
    student_output = tv_loss(content_img_var, tv_weight).data.numpy()
    error = rel_error(correct, student_output)
    print('Error is {:.3f}'.format(error))

tv_loss_test(answers['tv_out'])
Now we're ready to string it all together (you shouldn't have to modify this function):
def style_transfer(content_image, style_image, image_size, style_size, content_layer,
                   content_weight, style_layers, style_weights, tv_weight,
                   init_random=False):
    """
    Run style transfer!

    Inputs:
    - content_image: filename of content image
    - style_image: filename of style image
    - image_size: size of smallest image dimension (used for content loss
      and generated image)
    - style_size: size of smallest style image dimension
    - content_layer: layer to use for content loss
    - content_weight: weighting on content loss
    - style_layers: list of layers to use for style loss
    - style_weights: list of weights to use for each layer in style_layers
    - tv_weight: weight of total variation regularization term
    - init_random: initialize the starting image to uniform random noise
    """
    # Extract features for the content image
    content_img = preprocess(PIL.Image.open(content_image), size=image_size)
    content_img_var = Variable(content_img.type(dtype))
    feats = extract_features(content_img_var, cnn)
    content_target = feats[content_layer].clone()

    # Extract features for the style image
    style_img = preprocess(PIL.Image.open(style_image), size=style_size)
    style_img_var = Variable(style_img.type(dtype))
    feats = extract_features(style_img_var, cnn)
    style_targets = []
    for idx in style_layers:
        style_targets.append(gram_matrix(feats[idx].clone()))

    # Initialize output image to content image or noise
    if init_random:
        img = torch.Tensor(content_img.size()).uniform_(0, 1)
    else:
        img = content_img.clone().type(dtype)

    # We do want the gradient computed on our image!
    img_var = Variable(img, requires_grad=True)

    # Set up optimization hyperparameters
    initial_lr = 3.0
    decayed_lr = 0.1
    decay_lr_at = 180

    # Note that we are optimizing the pixel values of the image by passing
    # in the img_var Torch variable, whose requires_grad flag is set to True
    optimizer = torch.optim.Adam([img_var], lr=initial_lr)

    f, axarr = plt.subplots(1, 2)
    axarr[0].axis('off')
    axarr[1].axis('off')
    axarr[0].set_title('Content Source Img.')
    axarr[1].set_title('Style Source Img.')
    axarr[0].imshow(deprocess(content_img.cpu()))
    axarr[1].imshow(deprocess(style_img.cpu()))
    plt.show()
    plt.figure()

    for t in range(200):
        if t < 190:
            img.clamp_(-1.5, 1.5)
        optimizer.zero_grad()
        feats = extract_features(img_var, cnn)

        # Compute loss
        c_loss = content_loss(content_weight, feats[content_layer], content_target)
        s_loss = style_loss(feats, style_layers, style_targets, style_weights)
        t_loss = tv_loss(img_var, tv_weight)
        loss = c_loss + s_loss + t_loss
        loss.backward()

        # Perform gradient descent on our image values
        if t == decay_lr_at:
            optimizer = torch.optim.Adam([img_var], lr=decayed_lr)
        optimizer.step()

        if t % 100 == 0:
            print('Iteration {}'.format(t))
            plt.axis('off')
            plt.imshow(deprocess(img.cpu()))
            plt.show()

    print('Iteration {}'.format(t))
    plt.axis('off')
    plt.imshow(deprocess(img.cpu()))
    plt.show()
Generate some pretty pictures! Try out style_transfer on the three different parameter sets below. Make sure to run all three cells. Feel free to add your own, but make sure to include the results of style transfer on the third parameter set (starry night) in your submitted notebook.

- content_image is the filename of the content image.
- style_image is the filename of the style image.
- image_size is the size of the smallest content image dimension (used for content loss and the generated image).
- style_size is the size of the smallest style image dimension.
- content_layer specifies which layer to use for content loss.
- content_weight gives the weighting on content loss in the overall loss function. Increasing the value of this parameter will make the final image look more realistic (closer to the original content).
- style_layers specifies a list of which layers to use for style loss.
- style_weights specifies a list of weights to use for each layer in style_layers (each of which will contribute a term to the overall style loss). We generally use higher weights for the earlier style layers because they describe more local/smaller-scale features, which are more important to texture than features over larger receptive fields. In general, increasing these weights will make the resulting image look less like the original content and more distorted towards the appearance of the style image.
- tv_weight specifies the weighting of total-variation regularization in the overall loss function. Increasing this value makes the resulting image look smoother and less jagged, at the cost of lower fidelity to style and content.

Below the next three cells of code (in which you shouldn't change the hyperparameters), feel free to copy and paste the parameters and play around with them to see how the resulting image changes.
# Composition VII + Tubingen
params1 = {
    'content_image': 'styles/tubingen.jpg',
    'style_image': 'styles/composition_vii.jpg',
    'image_size': 192,
    'style_size': 512,
    'content_layer': 3,
    'content_weight': 5e-2,
    'style_layers': (1, 4, 6, 7),
    'style_weights': (20000, 500, 12, 1),
    'tv_weight': 5e-2
}
style_transfer(**params1)

# Scream + Tubingen
params2 = {
    'content_image': 'styles/tubingen.jpg',
    'style_image': 'styles/the_scream.jpg',
    'image_size': 192,
    'style_size': 224,
    'content_layer': 3,
    'content_weight': 3e-2,
    'style_layers': [1, 4, 6, 7],
    'style_weights': [200000, 800, 12, 1],
    'tv_weight': 2e-2
}
style_transfer(**params2)

# Starry Night + Tubingen
params3 = {
    'content_image': 'styles/tubingen.jpg',
    'style_image': 'styles/starry_night.jpg',
    'image_size': 192,
    'style_size': 192,
    'content_layer': 3,
    'content_weight': 6e-2,
    'style_layers': [1, 4, 6, 7],
    'style_weights': [300000, 1000, 15, 3],
    'tv_weight': 2e-2
}
style_transfer(**params3)
Feature Inversion The code you've written can do another cool thing. In an attempt to understand the types of features that convolutional networks learn to recognize, a recent paper [1] attempts to reconstruct an image from its feature representation. We can easily implement this idea using image gradients from the pretrained network, which is exactly what we did above (but with two different feature representations). Now, if you set the style weights to all be 0 and initialize the starting image to random noise instead of the content source image, you'll reconstruct an image from the feature representation of the content source image. You're starting with total noise, but you should end up with something that looks quite a bit like your original image. (Similarly, you could do "texture synthesis" from scratch if you set the content weight to 0 and initialize the starting image to random noise, but we won't ask you to do that here.) [1] Aravindh Mahendran, Andrea Vedaldi, "Understanding Deep Image Representations by Inverting them", CVPR 2015
# Feature Inversion -- Starry Night + Tubingen
params_inv = {
    'content_image': 'styles/tubingen.jpg',
    'style_image': 'styles/starry_night.jpg',
    'image_size': 192,
    'style_size': 192,
    'content_layer': 3,
    'content_weight': 6e-2,
    'style_layers': [1, 4, 6, 7],
    'style_weights': [0, 0, 0, 0],  # discard any contribution from style to the loss
    'tv_weight': 2e-2,
    'init_random': True  # initialize our image to random noise
}
style_transfer(**params_inv)
Ride Report Method

Here, we use the match method from the OSRM API, with the code modified to return only the endpoints of segments. This allows us to aggregate over OSM segments, since the node IDs are uniquely associated with a lat/lon pair given sufficient precision in the returned coordinates. The API documentation recommends against sending every single reading to the match method, but I'm sending them all regardless because it's easier to code. Down-sampling the ride might actually help smooth some of the rides (or perhaps not, if we accidentally sample a jagged part). Currently, I am unsure how to mark up OSM data with bumpiness information, as the raw OSM file contains data that look like this:

<way id="23642309" version="25" timestamp="2013-12-26T23:03:24Z" changeset="19653154" uid="28775" user="StellanL">
  <nd ref="258965973"/>
  <nd ref="258023463"/>
  <nd ref="736948618"/>
  <nd ref="258023391"/>
  <nd ref="736948622"/>
  <nd ref="930330659"/>
  <nd ref="736861978"/>
  <nd ref="930330542"/>
  <nd ref="930330544"/>
  <nd ref="929808660"/>
  <nd ref="736934948"/>
  <nd ref="930330644"/>
  <nd ref="736871567"/>
  <nd ref="619628331"/>
  <nd ref="740363293"/>
  <nd ref="931468900"/>
  <tag k="name" v="North Wabash Avenue"/>
  <tag k="highway" v="tertiary"/>
  <tag k="loc_ref" v="44 E"/>
</way>

My tentative idea is to match up the lat/lons with OSM ids using Imposm, then find the nd refs in the original data and add a property containing the bumpiness information.
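The tentative markup idea above can be sketched with the standard library's XML parser: given bumpiness keyed by node ID, attach a new `<tag>` child to each way whose `<nd>` refs we have data for. The element names mirror the raw OSM XML shown; the tiny document and the `node_bumpiness` values here are made-up placeholders, not real measurements.

```python
import xml.etree.ElementTree as ET

osm_xml = """<osm>
  <way id="23642309">
    <nd ref="258965973"/>
    <nd ref="258023463"/>
    <tag k="highway" v="tertiary"/>
  </way>
</osm>"""

# Hypothetical per-node bumpiness scores (node id -> score).
node_bumpiness = {'258965973': 0.25, '258023463': 0.75}

root = ET.fromstring(osm_xml)
for way in root.iter('way'):
    refs = [nd.get('ref') for nd in way.findall('nd')]
    scores = [node_bumpiness[r] for r in refs if r in node_bumpiness]
    if scores:
        # Average over the way's nodes and record it as an extra OSM-style tag.
        ET.SubElement(way, 'tag', k='bumpiness', v=str(sum(scores) / len(scores)))
```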
rides, readings = data_munging.read_raw_data()
readings = data_munging.clean_readings(readings)
readings = data_munging.add_proj_to_readings(readings, data_munging.NAD83)
src/Snapping_Readings_OSRM.ipynb
zscore/pavement_analysis
mit
If using a Dockerized OSRM instance, you can get the IP address by linking up to the Docker container running OSRM and pinging it. Usually, though, the URL here will be correct, since it is the default.
digital_ocean_url = 'http://162.243.23.60/osrm-chi-vanilla/'
local_docker_url = 'http://172.17.0.2:5000/'
url = local_docker_url
nearest_request = url + 'nearest?loc={0},{1}'
match_request = url + 'match?loc={0},{1}&t={2}&loc={3},{4}&t={5}'

def readings_to_match_str(readings):
    data_str = '&loc={0},{1}&t={2}'
    output_str = ''
    elapsed_time = 0
    for i, reading in readings.iterrows():
        elapsed_time += 1
        new_str = data_str.format(str(reading['start_lat']),
                                  str(reading['start_lon']),
                                  str(elapsed_time))
        output_str += new_str
    return url + 'match?' + output_str[1:]
This is a small example of how everything should work for troubleshooting and other purposes.
test_request = readings_to_match_str(readings.loc[readings['ride_id'] == 128, :])
print(test_request)
matched_ride = requests.get(test_request).json()
snapped_points = pd.DataFrame(matched_ride['matchings'][0]['matched_points'],
                              columns=['lat', 'lon'])
ax = snapped_points.plot(x='lon', y='lat', kind='scatter')
readings.loc[readings['ride_id'] == 128, :].plot(x='start_lon', y='start_lat',
                                                 kind='scatter', ax=ax)
fig = plt.gcf()
fig.set_size_inches(18.5, 10.5)
plt.show()

a_reading = readings.loc[0, :]
test_match_request = match_request.format(a_reading['start_lat'], a_reading['start_lon'], 0,
                                          a_reading['end_lat'], a_reading['end_lon'], 1)
# This does not work because OSRM does not accept floats as times.
# test_map_request = map_request.format(*tuple(a_reading[['start_lat', 'start_lon', 'start_time',
#                                                         'end_lat', 'end_lon', 'end_time']]))
test_nearest_request = nearest_request.format(a_reading['start_lat'], a_reading['start_lon'])
osrm_response = requests.get(test_match_request).json()
osrm_response['matchings'][0]['matched_points']
osrm_response = requests.get(test_nearest_request).json()
osrm_response['mapped_coordinate']

readings['snapped_lat'] = 0
readings['snapped_lon'] = 0
chi_readings = data_munging.filter_readings_to_chicago(readings)
chi_rides = list(set(chi_readings.ride_id))

# This is a small list of rides that I think are bad based upon their graphs.
# I currently do not have an automatic way to update this.
bad_rides = [128, 129, 5.0, 7.0, 131, 133, 34, 169]
good_chi_rides = [i for i in chi_rides if i not in bad_rides]
for ride_id in chi_rides:
    if ride_id in bad_rides:
        print(ride_id)
        try:
            print('num readings: ' + str(sum(readings['ride_id'] == ride_id)))
        except:
            print('we had some issues here.')

all_snapped_points = []
readings['snapped_lat'] = np.NaN
readings['snapped_lon'] = np.NaN
for ride_id in chi_rides:
    if pd.notnull(ride_id):
        ax = readings.loc[readings['ride_id'] == ride_id, :].plot(x='start_lon', y='start_lat')
        try:
            matched_ride = requests.get(
                readings_to_match_str(readings.loc[readings['ride_id'] == ride_id, :])).json()
            readings.loc[readings['ride_id'] == ride_id,
                         ['snapped_lat', 'snapped_lon']] = matched_ride['matchings'][0]['matched_points']
            readings.loc[readings['ride_id'] == ride_id, :].plot(x='snapped_lon', y='snapped_lat', ax=ax)
        except:
            print('could not snap')
        plt.title('Plotting Ride ' + str(ride_id))
        fig = plt.gcf()
        fig.set_size_inches(18.5, 10.5)
        plt.show()

ax = readings.loc[readings['ride_id'] == 2, :].plot(x='snapped_lon', y='snapped_lat', style='r-')
for ride_id in good_chi_rides:
    print(ride_id)
    try:
        # readings.loc[readings['ride_id'] == ride_id, :].plot(x='start_lon', y='start_lat', ax=ax)
        readings.loc[readings['ride_id'] == ride_id, :].plot(x='snapped_lon', y='snapped_lat',
                                                             ax=ax, style='b-')
    except:
        print('bad')
ax = readings.loc[readings['ride_id'] == 2, :].plot(x='snapped_lon', y='snapped_lat',
                                                    style='r-', ax=ax)
fig = plt.gcf()
fig.set_size_inches(36, 36)
plt.show()

# This code goes through a ride backwards in order to figure out what two endpoints
# the bicycle was going between.
readings['next_snapped_lat'] = np.NaN
readings['next_snapped_lon'] = np.NaN
for ride_id in chi_rides:
    next_lat_lon = (np.NaN, np.NaN)
    for index, row in reversed(list(readings.loc[readings['ride_id'] == ride_id, :].iterrows())):
        readings.loc[index, ['next_snapped_lat', 'next_snapped_lon']] = next_lat_lon
        if (row['snapped_lat'], row['snapped_lon']) != next_lat_lon:
            next_lat_lon = (row['snapped_lat'], row['snapped_lon'])

clean_chi_readings = readings.loc[[ride_id in chi_rides for ride_id in readings['ride_id']], :]
clean_chi_readings.to_csv(data_munging.data_dir + 'clean_chi_readings.csv')
clean_chi_readings = pd.read_csv(data_munging.data_dir + 'clean_chi_readings.csv')

road_bumpiness = collections.defaultdict(list)
for index, reading in clean_chi_readings.iterrows():
    if reading['gps_mph'] < 30 and reading['gps_mph'] > 3:
        osm_segment = [(reading['snapped_lat'], reading['snapped_lon']),
                       (reading['next_snapped_lat'], reading['next_snapped_lon'])]
        osm_segment = sorted(osm_segment)
        # Note: NaN != NaN, so this check never actually filters anything;
        # null segments are dropped later with pd.isnull.
        if all([lat_lon != (np.NaN, np.NaN) for lat_lon in osm_segment]):
            road_bumpiness[tuple(osm_segment)].append(reading['abs_mean_over_speed'])

# sorted_road_bumpiness = sorted(road_bumpiness.items(), key=lambda i: len(i[1]), reverse=True)
total_road_readings = dict((osm_segment, len(road_bumpiness[osm_segment]))
                           for osm_segment in road_bumpiness)
agg_road_bumpiness = dict((osm_segment, np.mean(road_bumpiness[osm_segment]))
                          for osm_segment in road_bumpiness)
agg_path = data_munging.data_dir + 'agg_road_bumpiness.txt'
This section functions as a shortcut: if you just want to load the aggregate bumpiness, you can read it back from disk instead of recalculating all of it.
with open(agg_path, 'w') as f:
    f.write(str(agg_road_bumpiness))

with open(agg_path, 'r') as f:
    agg_road_bumpiness = f.read()
# eval is convenient here, but unsafe for untrusted files; ast.literal_eval is safer.
agg_road_bumpiness = eval(agg_road_bumpiness)

def osm_segment_is_null(osm_segment):
    return (pd.isnull(osm_segment[0][0]) or pd.isnull(osm_segment[0][1]) or
            pd.isnull(osm_segment[1][0]) or pd.isnull(osm_segment[1][1]))

agg_road_bumpiness = dict((osm_segment, agg_road_bumpiness[osm_segment])
                          for osm_segment in agg_road_bumpiness
                          if not osm_segment_is_null(osm_segment))

# This is where we filter out all osm segments that are too long
def find_seg_dist(lat_lon):
    return data_munging.calc_dist(lat_lon[0][1], lat_lon[0][0], lat_lon[1][1], lat_lon[1][0])

seg_dist = dict()
for lat_lon in agg_road_bumpiness:
    seg_dist[lat_lon] = find_seg_dist(lat_lon)

with open('../dat/chi_agg_info.csv', 'w') as f:
    f.write('lat_lon_tuple|agg_road_bumpiness|total_road_readings|seg_dist\n')
    for lat_lon in agg_road_bumpiness:
        if seg_dist[lat_lon] < 200:
            f.write(str(lat_lon) + '|' + str(agg_road_bumpiness[lat_lon]) + '|' +
                    str(total_road_readings[lat_lon]) + '|' + str(seg_dist[lat_lon]) + '\n')

np.max(list(agg_road_bumpiness.values()))
plt.hist(list(agg_road_bumpiness.values()))

import matplotlib.colors as colors
plasma = cm = plt.get_cmap('plasma')
cNorm = colors.Normalize(vmin=0, vmax=1.0)
scalarMap = cmx.ScalarMappable(norm=cNorm, cmap=plasma)
for osm_segment, bumpiness in agg_road_bumpiness.items():
    plt.plot([osm_segment[0][1], osm_segment[1][1]],
             [osm_segment[0][0], osm_segment[1][0]],
             color=scalarMap.to_rgba(bumpiness))
fig = plt.gcf()
fig.set_size_inches(24, 48)
plt.show()

filtered_agg_bumpiness = dict((lat_lon, agg_road_bumpiness[lat_lon])
                              for lat_lon in agg_road_bumpiness
                              if find_seg_dist(lat_lon) < 200)
with open(data_dir + 'filtered_chi_road_bumpiness.txt', 'w') as f:
    f.write(str(filtered_agg_bumpiness))
Plot train and valid set NLL
tr = np.array(model.monitor.channels['valid_y_y_1_nll'].time_record) / 3600.
fig = plt.figure(figsize=(12, 8))
ax1 = fig.add_subplot(111)
ax1.plot(model.monitor.channels['valid_y_y_1_nll'].val_record)
ax1.plot(model.monitor.channels['train_y_y_1_nll'].val_record)
ax1.set_xlabel('Epochs')
ax1.legend(['Valid', 'Train'])
ax1.set_ylabel('NLL')
ax1.set_ylim(0., 5.)
ax1.grid(True)
ax2 = ax1.twiny()
ax2.set_xticks(np.arange(0, tr.shape[0], 20))
ax2.set_xticklabels(['{0:.2f}'.format(t) for t in tr[::20]])
ax2.set_xlabel('Hours')

plt.plot(model.monitor.channels['train_term_1_l1_penalty'].val_record)
plt.plot(model.monitor.channels['train_term_2_weight_decay'].val_record)

pv = get_weights_report(model=model)
img = pv.get_img()
img = img.resize((4 * img.size[0], 4 * img.size[1]))
img_data = io.BytesIO()
img.save(img_data, format='png')
display(Image(data=img_data.getvalue(), format='png'))

plt.plot(model.monitor.channels['learning_rate'].val_record)
notebooks/model_run_and_result_analyses/Revisiting alexnet based experiment with 64 inputs (large).ipynb
Neuroglycerin/neukrill-net-work
mit
Plot ratio of update norms to parameter norms across epochs for different layers
# The same three plots for every layer: the ratio of parameter norms to update
# norms, then the mean and max norms themselves. Conv layers h1-h5 report
# kernel norms; the fully connected h6 and softmax layers report column norms.
layer_channels = [
    ('mean_update_h%d_W_kernel_norm_mean' % i, 'valid_h%d_kernel_norms' % i)
    for i in range(1, 6)
] + [
    ('mean_update_h6_W_col_norm_mean', 'valid_h6_col_norms'),
    ('mean_update_softmax_W_col_norm_mean', 'valid_y_y_1_col_norms'),
]
for up_channel, norm_channel in layer_channels:
    W_up_norms = np.array([float(v) for v in model.monitor.channels[up_channel].val_record])
    W_norms = np.array([float(v) for v in model.monitor.channels[norm_channel + '_mean'].val_record])
    plt.plot(W_norms / W_up_norms)
    plt.show()
    plt.plot(model.monitor.channels[norm_channel + '_mean'].val_record)
    plt.plot(model.monitor.channels[norm_channel + '_max'].val_record)
    plt.show()
<img src="image/Mean Variance - Image.png" style="height: 75%;width: 75%; position: relative; right: 5%"> Problem 1 The first problem involves normalizing the features for your training and test data. Implement Min-Max scaling in the normalize() function to a range of a=0.1 and b=0.9. After scaling, the values of the pixels in the input data should range from 0.1 to 0.9. Since the raw notMNIST image data is in grayscale, the current values range from a min of 0 to a max of 255. Min-Max Scaling: $ X'=a+{\frac {\left(X-X_{\min }\right)\left(b-a\right)}{X_{\max }-X_{\min }}} $ If you're having trouble solving problem 1, you can view the solution here.
# Problem 1 - Implement Min-Max scaling for grayscale image data
def normalize_grayscale(image_data):
    """
    Normalize the image data with Min-Max scaling to a range of [0.1, 0.9]
    :param image_data: The image data to be normalized
    :return: Normalized image data
    """
    # TODO: Implement Min-Max scaling for grayscale image data
    pass


### DON'T MODIFY ANYTHING BELOW ###
# Test Cases
np.testing.assert_array_almost_equal(
    normalize_grayscale(np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 255])),
    [0.1, 0.103137254902, 0.106274509804, 0.109411764706, 0.112549019608, 0.11568627451,
     0.118823529412, 0.121960784314, 0.125098039216, 0.128235294118, 0.13137254902, 0.9],
    decimal=3)
np.testing.assert_array_almost_equal(
    normalize_grayscale(np.array([0, 1, 10, 20, 30, 40, 233, 244, 254, 255])),
    [0.1, 0.103137254902, 0.13137254902, 0.162745098039, 0.194117647059, 0.225490196078,
     0.830980392157, 0.865490196078, 0.896862745098, 0.9])

if not is_features_normal:
    train_features = normalize_grayscale(train_features)
    test_features = normalize_grayscale(test_features)
    is_features_normal = True
print('Tests Passed!')

if not is_labels_encod:
    # Turn labels into numbers and apply One-Hot Encoding
    encoder = LabelBinarizer()
    encoder.fit(train_labels)
    train_labels = encoder.transform(train_labels)
    test_labels = encoder.transform(test_labels)

    # Change to float32, so it can be multiplied against the features in
    # TensorFlow, which are float32
    train_labels = train_labels.astype(np.float32)
    test_labels = test_labels.astype(np.float32)
    is_labels_encod = True
print('Labels One-Hot Encoded')

assert is_features_normal, 'You skipped the step to normalize the features'
assert is_labels_encod, 'You skipped the step to One-Hot Encode the labels'

# Get randomized datasets for training and validation
train_features, valid_features, train_labels, valid_labels = train_test_split(
    train_features, train_labels, test_size=0.05, random_state=832289)
print('Training features and labels randomized and split.')

# Save the data for easy access
pickle_file = 'notMNIST.pickle'
if not os.path.isfile(pickle_file):
    print('Saving data to pickle file...')
    try:
        with open('notMNIST.pickle', 'wb') as pfile:
            pickle.dump(
                {
                    'train_dataset': train_features,
                    'train_labels': train_labels,
                    'valid_dataset': valid_features,
                    'valid_labels': valid_labels,
                    'test_dataset': test_features,
                    'test_labels': test_labels,
                },
                pfile, pickle.HIGHEST_PROTOCOL)
    except Exception as e:
        print('Unable to save data to', pickle_file, ':', e)
        raise
print('Data cached in pickle file.')
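A minimal sketch of the Min-Max scaling Problem 1 asks for, assuming 8-bit grayscale input (min 0, max 255) as the notebook states, with $a = 0.1$ and $b = 0.9$:

```python
import numpy as np

def normalize_grayscale(image_data):
    # X' = a + (X - X_min)(b - a) / (X_max - X_min), with the known
    # grayscale range [0, 255] rather than the per-array min/max.
    a, b = 0.1, 0.9
    x_min, x_max = 0, 255
    return a + (image_data - x_min) * (b - a) / (x_max - x_min)
```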
deeplearning/intro-to-tensorflow/intro_to_tensorflow.ipynb
syednasar/datascience
mit
Problem 2 Now it's time to build a simple neural network using TensorFlow. Here, your network will be just an input layer and an output layer. <img src="image/network_diagram.png" style="height: 40%;width: 40%; position: relative; right: 10%"> For the input here the images have been flattened into a vector of $28 \times 28 = 784$ features. Then, we're trying to predict the image digit so there are 10 output units, one for each label. Of course, feel free to add hidden layers if you want, but this notebook is built to guide you through a single layer network. For the neural network to train on your data, you need the following <a href="https://www.tensorflow.org/resources/dims_types.html#data-types">float32</a> tensors: - features - Placeholder tensor for feature data (train_features/valid_features/test_features) - labels - Placeholder tensor for label data (train_labels/valid_labels/test_labels) - weights - Variable Tensor with random numbers from a truncated normal distribution. - See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#truncated_normal">tf.truncated_normal() documentation</a> for help. - biases - Variable Tensor with all zeros. - See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#zeros"> tf.zeros() documentation</a> for help. If you're having trouble solving problem 2, review "TensorFlow Linear Function" section of the class. If that doesn't help, the solution for this problem is available here.
# All the pixels in the image (28 * 28 = 784)
features_count = 784
# All the labels
labels_count = 10

# TODO: Set the features and labels tensors
# features =
# labels =

# TODO: Set the weights and biases tensors
# weights =
# biases =

### DON'T MODIFY ANYTHING BELOW ###

# Test Cases
from tensorflow.python.ops.variables import Variable

assert features._op.name.startswith('Placeholder'), 'features must be a placeholder'
assert labels._op.name.startswith('Placeholder'), 'labels must be a placeholder'
assert isinstance(weights, Variable), 'weights must be a TensorFlow variable'
assert isinstance(biases, Variable), 'biases must be a TensorFlow variable'

assert features._shape == None or (
    features._shape.dims[0].value is None and
    features._shape.dims[1].value in [None, 784]), 'The shape of features is incorrect'
assert labels._shape == None or (
    labels._shape.dims[0].value is None and
    labels._shape.dims[1].value in [None, 10]), 'The shape of labels is incorrect'
assert weights._variable._shape == (784, 10), 'The shape of weights is incorrect'
assert biases._variable._shape == (10), 'The shape of biases is incorrect'

assert features._dtype == tf.float32, 'features must be type float32'
assert labels._dtype == tf.float32, 'labels must be type float32'

# Feed dicts for training, validation, and test session
train_feed_dict = {features: train_features, labels: train_labels}
valid_feed_dict = {features: valid_features, labels: valid_labels}
test_feed_dict = {features: test_features, labels: test_labels}

# Linear Function WX + b
logits = tf.matmul(features, weights) + biases

prediction = tf.nn.softmax(logits)

# Cross entropy
cross_entropy = -tf.reduce_sum(labels * tf.log(prediction), reduction_indices=1)

# Training loss
loss = tf.reduce_mean(cross_entropy)

# Create an operation that initializes all variables
init = tf.global_variables_initializer()

# Test Cases
with tf.Session() as session:
    session.run(init)
    session.run(loss, feed_dict=train_feed_dict)
    session.run(loss, feed_dict=valid_feed_dict)
    session.run(loss, feed_dict=test_feed_dict)
    biases_data = session.run(biases)

assert not np.count_nonzero(biases_data), 'biases must be zeros'

print('Tests Passed!')

# Determine if the predictions are correct
is_correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(labels, 1))
# Calculate the accuracy of the predictions
accuracy = tf.reduce_mean(tf.cast(is_correct_prediction, tf.float32))

print('Accuracy function created.')
<img src="image/Learn Rate Tune - Image.png" style="height: 70%;width: 70%">

Problem 3

Below are 2 parameter configurations for training the neural network. In each configuration, one of the parameters has multiple options. For each configuration, choose the option that gives the best accuracy.

Parameter configurations:

Configuration 1
* Epochs: 1
* Learning Rate:
  * 0.8
  * 0.5
  * 0.1
  * 0.05
  * 0.01

Configuration 2
* Epochs:
  * 1
  * 2
  * 3
  * 4
  * 5
* Learning Rate: 0.2

The code will print out a Loss and Accuracy graph, so you can see how well the neural network performed.

If you're having trouble solving problem 3, you can view the solution here.
# Change if you have memory restrictions
batch_size = 128

# TODO: Find the best parameters for each configuration
# epochs =
# learning_rate =

### DON'T MODIFY ANYTHING BELOW ###

# Gradient Descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)

# The accuracy measured against the validation set
validation_accuracy = 0.0

# Measurements used for graphing loss and accuracy
log_batch_step = 50
batches = []
loss_batch = []
train_acc_batch = []
valid_acc_batch = []

with tf.Session() as session:
    session.run(init)
    batch_count = int(math.ceil(len(train_features)/batch_size))

    for epoch_i in range(epochs):

        # Progress bar
        batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')

        # The training cycle
        for batch_i in batches_pbar:
            # Get a batch of training features and labels
            batch_start = batch_i*batch_size
            batch_features = train_features[batch_start:batch_start + batch_size]
            batch_labels = train_labels[batch_start:batch_start + batch_size]

            # Run optimizer and get loss
            _, l = session.run(
                [optimizer, loss],
                feed_dict={features: batch_features, labels: batch_labels})

            # Log every 50 batches
            if not batch_i % log_batch_step:
                # Calculate Training and Validation accuracy
                training_accuracy = session.run(accuracy, feed_dict=train_feed_dict)
                validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)

                # Log batches
                previous_batch = batches[-1] if batches else 0
                batches.append(log_batch_step + previous_batch)
                loss_batch.append(l)
                train_acc_batch.append(training_accuracy)
                valid_acc_batch.append(validation_accuracy)

    # Check accuracy against Validation data
    validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)

loss_plot = plt.subplot(211)
loss_plot.set_title('Loss')
loss_plot.plot(batches, loss_batch, 'g')
loss_plot.set_xlim([batches[0], batches[-1]])
acc_plot = plt.subplot(212)
acc_plot.set_title('Accuracy')
acc_plot.plot(batches, train_acc_batch, 'r', label='Training Accuracy')
acc_plot.plot(batches, valid_acc_batch, 'x', label='Validation Accuracy')
acc_plot.set_ylim([0, 1.0])
acc_plot.set_xlim([batches[0], batches[-1]])
acc_plot.legend(loc=4)
plt.tight_layout()
plt.show()

print('Validation accuracy at {}'.format(validation_accuracy))
As we can see from the output of the command above, by the straight wording of the question, there are exactly 1,355 mentions of Jo, 683 of Meg, 645 of Amy, and 459 of Beth in Little Women. If we were to assume that diminutive or nickname forms might count as well, we might add mentions of "Megs", "Bethy", and "Meggy" to these counts, for example. For the purposes of this solution, however, I assume that these are not required, because the text might need to be consulted directly by someone familiar with the novel to determine which nicknames are valid, which seems beyond the scope of the assignment. Part B - Juliet and Romeo in Romeo and Juliet How many times do each of the characters Juliet and Romeo have speaking lines in Romeo and Juliet? Keep in mind that this is the text of a play.
!wget https://raw.githubusercontent.com/gwsb-istm-6212-fall-2016/syllabus-and-schedule/master/projects/project-01/romeo.txt
projects/project-01/solution/problem-01-solution.ipynb
gwsb-istm-6212-fall-2016/syllabus-and-schedule
cc0-1.0
First we must recall -- as the problem highlights -- that this text is that of a play. Because of this, we cannot simply count mentions of "Romeo," as we might accidentally inflate the count with mentions of his name by other characters in their speaking lines. Instead, we must first look for a pattern that indicates Romeo's speaking lines specifically.
!cat romeo.txt | grep "Rom" | head -25
In this brief sample, we can see title lines and metadata that include mention of Romeo, and both stage directions ("Enter Romeo") and spoken lines that include his name. What stands out, though, is that lines spoken by Romeo appear to be delineated by "Rom.", so we can search for this specific pattern. Let's verify that the same should hold true for mentions of Juliet.
!cat romeo.txt | grep "Jul" | head -25
We see that the pattern seems to hold for both. I will assume that matches of the exact characters "Rom." and "Jul." indicate the start of a speaking line for one or the other characters, and will explicitly count only those lines.
!cat romeo.txt | grep -w "Rom\." \
    | grep -oE '\w{{2,}}\.' \
    | grep "Rom" \
    | sort | uniq -c | sort -rn
!cat romeo.txt | grep -w "Jul\." \
    | grep -oE '\w{{2,}}\.' \
    | grep "Jul" \
    | sort | uniq -c | sort -rn
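As a cross-check on the grep pipelines above, the same speaker-tag counting can be sketched in Python. The sample lines below are illustrative, not the full play text:

```python
import re

def count_speaking_lines(text, tag):
    # A speaker tag like "Rom." marks the start of a speaking line;
    # anchor at the line start and escape the trailing period.
    pattern = re.compile(r'^\s*' + re.escape(tag), re.MULTILINE)
    return len(pattern.findall(text))

sample = (
    "  Rom. But soft! What light through yonder window breaks?\n"
    "  Jul. O Romeo, Romeo! wherefore art thou Romeo?\n"
    "  Rom. Shall I hear more, or shall I speak at this?\n"
)
print(count_speaking_lines(sample, 'Rom.'))  # 2
print(count_speaking_lines(sample, 'Jul.'))  # 1
```

Note that the mentions of "Romeo" inside Juliet's line are not counted, which is exactly the inflation the shell pipeline guards against.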
The two pipelines above indicate that Romeo has 163 speaking lines, while Juliet has only 117. To match the specific case with a trailing ., the first grep in each pipeline uses the -w option for whole-word matching and the escape sequence \. for the literal trailing period. The second grep's pattern, '\w{{2,}}\.', likewise requires the literal period at the end of each match.

Problem 2 - Capital Bikeshare

Part A - Station counts

Which 10 Capital Bikeshare stations were the most popular departing stations in Q1 2016?
!wget https://raw.githubusercontent.com/gwsb-istm-6212-fall-2016/syllabus-and-schedule/master/projects/project-01/2016q1.csv.zip
!unzip 2016q1.csv.zip
!head -5 2016q1.csv | csvlook
!csvcut -n 2016q1.csv
!csvcut -c5 2016q1.csv | tail -n +2 | csvsort | uniq -c | sort -rn | head -10
As we can see in the above results, the top ten starting stations in this time period were led by Columbus Circle / Union Station with over 13,000 rides, followed by Dupont Circle and the Lincoln Memorial and the rest as listed. In the pipeline above, tail -n +2 ensures we skip the header line before the sort process begins. Which 10 were the most popular destination stations in Q1 2016?
!csvcut -c7 2016q1.csv | tail -n +2 | csvsort | uniq -c | sort -rn | head -10
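Both of these top-N tallies amount to a group-and-count over a single CSV column. A plain-Python sketch with the standard csv module makes the logic explicit; the sample data and column index here are made up for illustration:

```python
import csv
import io
from collections import Counter

def top_stations(csv_text, column, n):
    # Count values in one column, skipping the header row (like tail -n +2)
    reader = csv.reader(io.StringIO(csv_text))
    next(reader)  # skip header
    counts = Counter(row[column] for row in reader)
    return counts.most_common(n)

sample = (
    "Duration,Start station,End station\n"
    "300,Union Station,Dupont Circle\n"
    "420,Union Station,Lincoln Memorial\n"
    "180,Dupont Circle,Union Station\n"
)
print(top_stations(sample, 1, 2))  # [('Union Station', 2), ('Dupont Circle', 1)]
```

Counter.most_common does the work of `sort | uniq -c | sort -rn | head` in one call.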
The above results show us very similar numbers for destination stations during the same time period, with the first four stations unchanged and led again by Union Station with over 13,000 rides. Thomas Circle appears to be a more prominent start station than end station, as does Eastern Market, which does not even make the top ten destination stations.

Part B - bike counts

For the most popular departure station, which 10 bikes were used most in trips departing from there?

In this part, we will use csvgrep to select only the required station - Union Station, in both cases.
!csvgrep -c5 -m "Columbus Circle / Union Station" 2016q1.csv | head
We can further limit the columns used to cut down on the data flowing through the pipe.
!csvcut -c5,8 2016q1.csv | csvgrep -c1 -m "Columbus Circle / Union Station" | head
!csvcut -c5,8 2016q1.csv \
    | csvgrep -c1 -m "Columbus Circle / Union Station" \
    | csvcut -c2 \
    | tail -n +2 \
    | sort | uniq -c | sort -rn | head -12
Above are the most commonly used bikes in trips departing from Union Station, led by bike number W22227. As we might expect it appears that the distribution seems rather uniform. Note that because several bikes had exactly 15 trips starting from Union Station, the list includes the top twelve bikes, rather than the top ten. Which 10 bikes were used most in trips ending at the most popular destination station?
!csvcut -c7,8 2016q1.csv \
    | csvgrep -c1 -m "Columbus Circle / Union Station" \
    | csvcut -c2 \
    | tail -n +2 \
    | sort | uniq -c | sort -rn | head -15
Above are the most commonly used bikes in trips arriving at Union Station, led by bike number W00485. It is interesting to note that bike W22227, the top departing bike, is in second place, but bike W00485, the top arriving bike, does not appear in the top ten departing bikes. In any case, these also seem at first glance to be uniformly distributed. Again, the list is expanded, this time to fifteen bikes, to account for the tie at exactly fifteen trips.

Problem 3 - Filters

Part A - split and lowercase filters

Write a Python filter that replaces grep -oE '\w{2,}' to split lines of text into one word per line, and write an additional Python filter to replace tr '[:upper:]' '[:lower:]' to transform text into lower case. With your two new filters, repeat the original pipeline, and substitute your new filters as appropriate. You should obtain the same results.
!wget https://raw.githubusercontent.com/gwsb-istm-6212-fall-2016/syllabus-and-schedule/master/projects/project-01/simplefilter.py
!cp simplefilter.py split.py
The file split.py is modified from the template to split lines of text into one word per line. To demonstrate this, we can compare the original pipeline with a new pipeline with split.py substituting for the first grep command.
!cat women.txt \
    | grep -oE '\w{{1,}}' \
    | tr '[:upper:]' '[:lower:]' \
    | sort \
    | uniq -c \
    | sort -rn \
    | head -10
We can ignore the broken pipe and related errors as the output appears to be correct. Next, we repeat the pipeline with split.py substituted:
!chmod +x split.py
Examining the filter script below, the key line, #14, removes trailing newlines, splits tokens by the space (' '), and removes words that are not entirely alphabetical.
!grep -n '' split.py
!cat women.txt \
    | ./split.py \
    | tr '[:upper:]' '[:lower:]' \
    | sort \
    | uniq -c \
    | sort -rn \
    | head -10
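Since the notebook only shows split.py via grep -n, here is a sketch of what the key line's logic amounts to; this is an approximation of the script's behavior, not its exact source:

```python
def naive_split(line):
    # strip the trailing newline, split on single spaces,
    # and keep only tokens that are purely alphabetic
    return [w for w in line.rstrip('\n').split(' ') if w.isalpha()]

# "Women," fails isalpha() because of the comma, so it is silently dropped
print(naive_split('Little Women, by Louisa May Alcott\n'))
```

This is exactly the weakness the notebook goes on to diagnose: any token carrying punctuation disappears from the counts entirely.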
Almost the exact words listed appear in nearly the same order, but with lower counts for each. We can examine the output of each command to see if there are obvious differences:
!cat women.txt | grep -oE '\w{{2,}}' | head -25
!cat women.txt | ./split.py | head -25
We can see straight away on the first few lines that there is a difference. Let's look at the text itself:
!head -3 women.txt
Three obvious issues jump out. First, the initial "The" is elided; it is not clear why. Next, "Women" is removed, perhaps due to the trailing comma, which will cause the token to fail the isalpha() test. Also, "Alcott" is removed, perhaps having to do with its position at the end of the line. We can update the filter to use Python's regular expression model and a similar expression, \w{1,} to find all matches more intelligently. Here the regular expression is prepared in line 13 and used in line 18.
!grep -n '' split.py
!cat women.txt | ./split.py | head -25
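The regex-based rewrite described above can be sketched like this; the pattern mirrors grep -oE '\w{1,}', though the actual script's source is only shown via grep -n:

```python
import re

TOKEN_RE = re.compile(r'\w{1,}')  # one or more word characters, like grep -oE '\w{1,}'

def regex_split(line):
    # findall pulls every token, regardless of surrounding punctuation
    # or position in the line, so "Women," and line-final words survive
    return TOKEN_RE.findall(line)

print(regex_split('Little Women, by Louisa May Alcott\n'))
```

Unlike the naive space-split, "Women" and "Alcott" are now both recovered.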
This looks much better. We can try the full pipeline again:
!cat women.txt \
    | ./split.py \
    | tr '[:upper:]' '[:lower:]' \
    | sort \
    | uniq -c \
    | sort -rn \
    | head -10
This looks to be an exact match.
!cp simplefilter.py lowercase.py
The filter lowercase.py is modified from the template to lowercase incoming lines of text.
!chmod +x lowercase.py
!grep -n '' lowercase.py
Note that the only line aside from the comments that changes in the above script is line #12, which adds the lower() to the print statement.
!head women.txt | ./lowercase.py
This looks correct, so we'll first attempt to replace the original pipeline's use of tr with lowercase.py:
!cat women.txt \
    | grep -oE '\w{{1,}}' \
    | ./lowercase.py \
    | sort \
    | uniq -c \
    | sort -rn \
    | head -10
Looks good so far, we are seeing the exact same counts. To address the problem's challenge, we finally replace both filters at once.
!cat women.txt \
    | ./split.py \
    | ./lowercase.py \
    | sort \
    | uniq -c \
    | sort -rn \
    | head -10
This completes Problem 3 - Part A. Part B - stop words Write a Python filter that removes at least ten common words of English text, commonly known as "stop words". Sources of English stop word lists are readily available online, or you may generate your own list from the text. We begin by acquiring a common list of English stop words, gathered from the site http://www.textfixer.com/resources/common-english-words.txt as linked from the Wikipedia page on stop words.
!wget http://www.textfixer.com/resources/common-english-words.txt
!head common-english-words.txt
Next we copy the template filter script as before, renaming it appropriately.
!cp simplefilter.py stopwords.py
!chmod +x stopwords.py
!grep -n '' stopwords.py
The key changes in stopwords.py from the template are line #13, which imports the list of stopwords, and line #20, which checks whether an incoming word is in the stopword list. Note also that in line #19 the removal of a trailing newline occurs before checking for stopwords. The assumption that incoming text will already be split into one word per line and lowercased is stated explicitly in the first comment, lines #6-7.
!head women.txt | ./split.py | ./lowercase.py | ./stopwords.py
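The stopword check described above can be sketched as follows. The word list here is a tiny sample for illustration; the notebook downloads a fuller list from textfixer.com:

```python
# sample stopword set; the real script loads its list from common-english-words.txt
STOPWORDS = {'the', 'and', 'to', 'a', 'of', 'in', 'i', 'it', 'you', 'is'}

def remove_stopwords(words):
    # assumes input is already split and lowercased, one word per item,
    # matching the split.py | lowercase.py stages upstream
    return [w for w in words if w not in STOPWORDS]

print(remove_stopwords(['the', 'little', 'women', 'and', 'jo']))
```

A set membership test keeps the filter O(1) per word, which matters when streaming an entire novel through the pipeline.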
This appears to be correct. Let's put it all together:
!cat women.txt \
    | ./split.py \
    | ./lowercase.py \
    | ./stopwords.py \
    | sort \
    | uniq -c \
    | sort -rn \
    | head -25
This would seem to be correct - we see the names we looked for earlier appearing near the top of the list, and common stop words are indeed removed - however the list starts with odd "words": "t", "s", "m", and "ll". Is it possible that these are occurrences of contractions? We can check a few different ways. First, let's see if our split.py is causing the problem:
!cat women.txt \
    | grep -oE '\w{{1,}}' \
    | ./lowercase.py \
    | ./stopwords.py \
    | sort \
    | uniq -c \
    | sort -rn \
    | head -25
No, the results are exactly the same. Instead, we'll need to look for occurrences of "t" and "s" by themselves. The --context option to grep can help us here, showing the surrounding lines so we have text to search for in the source.
!cat women.txt \
    | ./split.py \
    | ./lowercase.py \
    | grep --context=2 -oE '^t$' \
    | head -20
!grep -i "we haven't got" women.txt
Aha, it does appear that the occurrences of a bare "t" are from contractions. Let's repeat with "s", which might occur in possessives.
!cat women.txt \
    | ./split.py \
    | ./lowercase.py \
    | grep --context=2 -oE '^s$' \
    | head -20
!grep -i "amy's valley" women.txt
There we have it - the counts from above were correct, and we could eliminate "t" and "s" from consideration with a grep -v, and we can further assume that the "ll" and "m" occurrences are also from contractions, so we'll remove them as well.
# the pattern is anchored and grouped so only these exact tokens are removed
!cat women.txt \
    | ./split.py \
    | ./lowercase.py \
    | ./stopwords.py \
    | grep -v -E '^(s|t|m|ll)$' \
    | sort \
    | uniq -c \
    | sort -rn \
    | head -25
Here we have a final count. It is interesting to note that these counts of character names (Jo, Meg, etc.) are slightly different from before, perhaps due to punctuation handling, but it seems beyond the scope of the question to answer it precisely. Extra credit - parallel stop words Use GNU parallel to count the 25 most common words across all the 109 texts in the zip file provided, with stop words removed.
!wget https://raw.githubusercontent.com/gwsb-istm-6212-fall-2016/syllabus-and-schedule/master/projects/project-01/texts.zip
!unzip -l texts.zip | head -5
!mkdir all-texts
!unzip -d all-texts texts.zip
!time ls all-texts/*.txt \
    | parallel --eta -j+0 "grep -oE '\w{1,}' {} | tr '[:upper:]' '[:lower:]' | grep -v -E '^(s|t|m|l|ll|d)$' | ./stopwords.py >> all-words.txt"
In the above line, I've limited the word size to one character, removed common contractions, and piped the overall result through the new stopwords.py Python filter.
!wc -l all-words.txt
!time sort all-words.txt | uniq -c | sort -rn | head -25
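GNU parallel fans the per-file work out across cores and merges at the end. The same map-then-merge shape can be sketched in Python with an executor pool; this is an illustration of the pattern, not the notebook's actual filter chain:

```python
import re
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def count_words(text):
    # tokenize and lowercase, mirroring grep -oE '\w{1,}' | tr '[:upper:]' '[:lower:]'
    return Counter(w.lower() for w in re.findall(r'\w+', text))

# stand-ins for the 109 files; in practice you would read each file's contents
texts = ["To be or not to be", "Be not afraid of greatness"]

# each "file" is counted independently, then the partial Counters are merged
with ThreadPoolExecutor() as pool:
    total = sum(pool.map(count_words, texts), Counter())

print(total.most_common(1))  # [('be', 3)]
```

Because Counter addition is associative, the merge order doesn't matter - the same property that lets parallel append partial results to all-words.txt in any order before the final sort.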
Authenticate with the docker registry first:

```bash
gcloud auth configure-docker
```

If using TPUs please also authorize Cloud TPU to access your project as described here.

Set up your output bucket
BUCKET = "gs://"  # your bucket here
assert re.search(r'gs://.+', BUCKET), 'A GCS bucket is required to store your results.'
courses/fast-and-lean-data-science/fairing_train.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Build a base image to work with fairing
!cat Dockerfile
!docker build . -t {base_image}
!docker push {base_image}
Start an AI Platform job
additional_files = ''  # If your code requires additional files, you can specify them here (or include everything in the current folder with glob.glob('./**', recursive=True))

# If your code does not require any dependencies or config changes, you can directly start from an official Tensorflow docker image
# fairing.config.set_builder('docker', registry=DOCKER_REGISTRY, base_image='gcr.io/deeplearning-platform-release/tf-gpu.1-13')

# base image
fairing.config.set_builder('docker', registry=DOCKER_REGISTRY, base_image=base_image)

# AI Platform job hardware config
fairing.config.set_deployer('gcp', job_config={'trainingInput': {'scaleTier': 'CUSTOM', 'masterType': 'standard_p100'}})

# input and output notebooks
fairing.config.set_preprocessor('full_notebook',
                                notebook_file="05K_MNIST_TF20Keras_Tensorboard_playground.ipynb",
                                input_files=additional_files,
                                output_file=os.path.join(BUCKET, 'fairing-output', 'mnist-001.ipynb'))

# GPU settings for single K80, single p100 respectively
# job_config={'trainingInput': {'scaleTier': 'BASIC_GPU'}}
# job_config={'trainingInput': {'scaleTier': 'CUSTOM', 'masterType': 'standard_p100'}}

# These job_config settings for TPUv2
# job_config={'trainingInput': {'scaleTier': 'CUSTOM', 'masterType': 'n1-standard-8', 'workerType': 'cloud_tpu', 'workerCount': 1,
#             'workerConfig': {'accelerator_config': {'type': 'TPU_V2', 'count': 8}}}}

# On AI Platform, TPUv3 support is alpha and available to whitelisted customers only

fairing.config.run()
Parameters
BATCH_SIZE = 32 #@param {type:"integer"}
BUCKET = 'gs://' #@param {type:"string"}

assert re.search(r'gs://.+', BUCKET), 'You need a GCS bucket for your Tensorboard logs. Head to http://console.cloud.google.com/storage and create one.'

training_images_file = 'gs://mnist-public/train-images-idx3-ubyte'
training_labels_file = 'gs://mnist-public/train-labels-idx1-ubyte'
validation_images_file = 'gs://mnist-public/t10k-images-idx3-ubyte'
validation_labels_file = 'gs://mnist-public/t10k-labels-idx1-ubyte'

#@title visualization utilities [RUN ME]
"""
This cell contains helper functions used for visualization and downloads only.
You can skip reading it. There is very little useful Keras/Tensorflow code here.
"""

# Matplotlib config
plt.rc('image', cmap='gray_r')
plt.rc('grid', linewidth=0)
plt.rc('xtick', top=False, bottom=False, labelsize='large')
plt.rc('ytick', left=False, right=False, labelsize='large')
plt.rc('axes', facecolor='F8F8F8', titlesize="large", edgecolor='white')
plt.rc('text', color='a8151a')
plt.rc('figure', facecolor='F0F0F0')

# Matplotlib fonts
MATPLOTLIB_FONT_DIR = os.path.join(os.path.dirname(plt.__file__), "mpl-data/fonts/ttf")

# pull a batch from the datasets. This code is not very nice, it gets much better in eager mode (TODO)
def dataset_to_numpy_util(training_dataset, validation_dataset, N):
    # get one batch from each: 10000 validation digits, N training digits
    unbatched_train_ds = training_dataset.apply(tf.data.experimental.unbatch())
    v_images, v_labels = validation_dataset.make_one_shot_iterator().get_next()
    t_images, t_labels = unbatched_train_ds.batch(N).make_one_shot_iterator().get_next()
    # Run once, get one batch. Session.run returns numpy results
    with tf.Session() as ses:
        (validation_digits, validation_labels,
         training_digits, training_labels) = ses.run([v_images, v_labels, t_images, t_labels])
    # these were one-hot encoded in the dataset
    validation_labels = np.argmax(validation_labels, axis=1)
    training_labels = np.argmax(training_labels, axis=1)
    return (training_digits, training_labels,
            validation_digits, validation_labels)

# create digits from local fonts for testing
def create_digits_from_local_fonts(n):
    font_labels = []
    img = PIL.Image.new('LA', (28*n, 28), color=(0, 255))  # format 'LA': black in channel 0, alpha in channel 1
    font1 = PIL.ImageFont.truetype(os.path.join(MATPLOTLIB_FONT_DIR, 'DejaVuSansMono-Oblique.ttf'), 25)
    font2 = PIL.ImageFont.truetype(os.path.join(MATPLOTLIB_FONT_DIR, 'STIXGeneral.ttf'), 25)
    d = PIL.ImageDraw.Draw(img)
    for i in range(n):
        font_labels.append(i % 10)
        d.text((7+i*28, 0 if i < 10 else -4), str(i % 10), fill=(255, 255), font=font1 if i < 10 else font2)
    font_digits = np.array(img.getdata(), np.float32)[:, 0] / 255.0  # black in channel 0, alpha in channel 1 (discarded)
    font_digits = np.reshape(np.stack(np.split(np.reshape(font_digits, [28, 28*n]), n, axis=1), axis=0), [n, 28*28])
    return font_digits, font_labels

# utility to display a row of digits with their predictions
def display_digits(digits, predictions, labels, title, n):
    plt.figure(figsize=(13, 3))
    digits = np.reshape(digits, [n, 28, 28])
    digits = np.swapaxes(digits, 0, 1)
    digits = np.reshape(digits, [28, 28*n])
    plt.yticks([])
    plt.xticks([28*x+14 for x in range(n)], predictions)
    for i, t in enumerate(plt.gca().xaxis.get_ticklabels()):
        if predictions[i] != labels[i]:
            t.set_color('red')  # bad predictions in red
    plt.imshow(digits)
    plt.grid(None)
    plt.title(title)

# utility to display multiple rows of digits, sorted by unrecognized/recognized status
def display_top_unrecognized(digits, predictions, labels, n, lines):
    idx = np.argsort(predictions == labels)  # sort order: unrecognized first
    for i in range(lines):
        display_digits(digits[idx][i*n:(i+1)*n], predictions[idx][i*n:(i+1)*n], labels[idx][i*n:(i+1)*n],
                       "{} sample validation digits out of {} with bad predictions in red and sorted first".format(n*lines, len(digits)) if i == 0 else "", n)

# utility to display training and validation curves
def display_training_curves(training, validation, title, subplot):
    if subplot % 10 == 1:  # set up the subplots on the first call
        plt.subplots(figsize=(10, 10), facecolor='#F0F0F0')
        plt.tight_layout()
    ax = plt.subplot(subplot)
    ax.grid(linewidth=1, color='white')
    ax.plot(training)
    ax.plot(validation)
    ax.set_title('model ' + title)
    ax.set_ylabel(title)
    ax.set_xlabel('epoch')
    ax.legend(['train', 'valid.'])
courses/fast-and-lean-data-science/06_MNIST_Estimator_to_TPUEstimator.ipynb
turbomanage/training-data-analyst
apache-2.0
Colab-only auth for this notebook and the TPU
IS_COLAB_BACKEND = 'COLAB_GPU' in os.environ  # this is always set on Colab, the value is 0 or 1 depending on GPU presence
if IS_COLAB_BACKEND:
    from google.colab import auth
    # Authenticates the backend and also the TPU using your credentials so that they can access your private GCS buckets
    auth.authenticate_user()
TPU detection
# TPU REFACTORING: detect the TPU
try:  # TPU detection
    # Picks up a connected TPU on Google's Colab, ML Engine, Kubernetes and Deep Learning VMs accessed through the 'ctpu up' utility
    tpu = tf.contrib.cluster_resolver.TPUClusterResolver()
    # If auto-detection does not work, you can pass the name of the TPU explicitly
    # (tip: on a VM created with "ctpu up" the TPU has the same name as the VM)
    # tpu = tf.contrib.cluster_resolver.TPUClusterResolver('MY_TPU_NAME')
    print('Running on TPU ', tpu.cluster_spec().as_dict()['worker'])
    USE_TPU = True
except ValueError:
    tpu = None
    print("Running on GPU or CPU")
    USE_TPU = False
tf.data.Dataset: parse files and prepare training and validation datasets Please read the best practices for building input pipelines with tf.data.Dataset
def read_label(tf_bytestring):
    label = tf.decode_raw(tf_bytestring, tf.uint8)
    label = tf.reshape(label, [])
    label = tf.one_hot(label, 10)
    return label

def read_image(tf_bytestring):
    image = tf.decode_raw(tf_bytestring, tf.uint8)
    image = tf.cast(image, tf.float32)/256.0
    image = tf.reshape(image, [28*28])
    return image

def load_dataset(image_file, label_file):
    imagedataset = tf.data.FixedLengthRecordDataset(image_file, 28*28, header_bytes=16)
    imagedataset = imagedataset.map(read_image, num_parallel_calls=16)
    labelsdataset = tf.data.FixedLengthRecordDataset(label_file, 1, header_bytes=8)
    labelsdataset = labelsdataset.map(read_label, num_parallel_calls=16)
    dataset = tf.data.Dataset.zip((imagedataset, labelsdataset))
    return dataset

def get_training_dataset(image_file, label_file, batch_size):
    dataset = load_dataset(image_file, label_file)
    dataset = dataset.cache()  # this small dataset can be entirely cached in RAM, for TPU this is important to get good performance from such a small dataset
    dataset = dataset.shuffle(5000, reshuffle_each_iteration=True)
    dataset = dataset.repeat()  # Mandatory for TPU for now
    dataset = dataset.batch(batch_size, drop_remainder=True)  # drop_remainder is important on TPU, batch size must be fixed
    dataset = dataset.prefetch(-1)  # prefetch next batch while training (-1: autotune prefetch buffer size)
    return dataset

# TPU REFACTORING: training and eval batch sizes must be the same: passing batch_size parameter here too
# def get_validation_dataset(image_file, label_file):
def get_validation_dataset(image_file, label_file, batch_size):
    dataset = load_dataset(image_file, label_file)
    dataset = dataset.cache()  # this small dataset can be entirely cached in RAM, for TPU this is important to get good performance from such a small dataset
    # TPU REFACTORING: training and eval batch sizes must be the same: passing batch_size parameter here too
    # dataset = dataset.batch(10000, drop_remainder=True)  # 10000 items in eval dataset, all in one batch
    dataset = dataset.batch(batch_size, drop_remainder=True)
    dataset = dataset.repeat()  # Mandatory for TPU for now
    return dataset

# instantiate the datasets
training_dataset = get_training_dataset(training_images_file, training_labels_file, BATCH_SIZE)
validation_dataset = get_validation_dataset(validation_images_file, validation_labels_file, 10000)

# For TPU, we will need a function that returns the dataset
# TPU REFACTORING: input_fn's must have a params argument through which TPUEstimator passes params['batch_size']
# training_input_fn = lambda: get_training_dataset(training_images_file, training_labels_file, BATCH_SIZE)
# validation_input_fn = lambda: get_validation_dataset(validation_images_file, validation_labels_file)
training_input_fn = lambda params: get_training_dataset(training_images_file, training_labels_file, params['batch_size'])
validation_input_fn = lambda params: get_validation_dataset(validation_images_file, validation_labels_file, params['batch_size'])
courses/fast-and-lean-data-science/06_MNIST_Estimator_to_TPUEstimator.ipynb
turbomanage/training-data-analyst
apache-2.0
Let's have a look at the data
N = 24 (training_digits, training_labels, validation_digits, validation_labels) = dataset_to_numpy_util(training_dataset, validation_dataset, N) display_digits(training_digits, training_labels, training_labels, "training digits and their labels", N) display_digits(validation_digits[:N], validation_labels[:N], validation_labels[:N], "validation digits and their labels", N) font_digits, font_labels = create_digits_from_local_fonts(N)
courses/fast-and-lean-data-science/06_MNIST_Estimator_to_TPUEstimator.ipynb
turbomanage/training-data-analyst
apache-2.0
Estimator model If you are not sure what cross-entropy, dropout, softmax or batch-normalization mean, head here for a crash-course: Tensorflow and deep learning without a PhD
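If those terms are new, the two that matter most below are softmax (which turns logits into probabilities) and cross-entropy (the loss that tf.losses.softmax_cross_entropy computes for you). Their core math fits in a few NumPy lines — a sketch of the idea, not TensorFlow's implementation:

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max())  # subtract max for numerical stability
    return e / e.sum()

def cross_entropy(one_hot_label, probs):
    # only the log-probability of the true class contributes
    return -np.sum(one_hot_label * np.log(probs))

logits = np.array([2.0, 1.0, 0.1])
probs = softmax(logits)
print(probs.sum())  # 1.0: softmax outputs a probability distribution
```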
# This model trains to 99.4% sometimes 99.5% accuracy in 10 epochs
# TPU REFACTORING: model_fn must have a params argument. TPUEstimator passes batch_size and use_tpu into it
# def model_fn(features, labels, mode):
def model_fn(features, labels, mode, params):
    is_training = (mode == tf.estimator.ModeKeys.TRAIN)
    x = features
    y = tf.reshape(x, [-1, 28, 28, 1])

    y = tf.layers.Conv2D(filters=6, kernel_size=3, padding='same', use_bias=False)(y)  # no bias necessary before batch norm
    y = tf.layers.BatchNormalization(scale=False, center=True)(y, training=is_training)  # no batch norm scaling necessary before "relu"
    y = tf.nn.relu(y)  # activation after batch norm
    y = tf.layers.Conv2D(filters=12, kernel_size=6, padding='same', use_bias=False, strides=2)(y)
    y = tf.layers.BatchNormalization(scale=False, center=True)(y, training=is_training)
    y = tf.nn.relu(y)
    y = tf.layers.Conv2D(filters=24, kernel_size=6, padding='same', use_bias=False, strides=2)(y)
    y = tf.layers.BatchNormalization(scale=False, center=True)(y, training=is_training)
    y = tf.nn.relu(y)
    y = tf.layers.Flatten()(y)
    y = tf.layers.Dense(200, use_bias=False)(y)
    y = tf.layers.BatchNormalization(scale=False, center=True)(y, training=is_training)
    y = tf.nn.relu(y)
    y = tf.layers.Dropout(0.5)(y, training=is_training)
    logits = tf.layers.Dense(10)(y)
    predictions = tf.nn.softmax(logits)
    classes = tf.math.argmax(predictions, axis=-1)

    if (mode != tf.estimator.ModeKeys.PREDICT):
        loss = tf.losses.softmax_cross_entropy(labels, logits)
        step = tf.train.get_or_create_global_step()
        # TPU REFACTORING: step is now increased once per GLOBAL_BATCH_SIZE = 8*BATCH_SIZE. Must adjust learning rate schedule accordingly
        # lr = 0.0001 + tf.train.exponential_decay(0.01, step, 2000, 1/math.e)
        lr = 0.0001 + tf.train.exponential_decay(0.01, step, 2000//8, 1/math.e)
        # TPU REFACTORING: custom Tensorboard summaries do not work. Only default Estimator summaries will appear in Tensorboard.
        # tf.summary.scalar("learn_rate", lr)
        optimizer = tf.train.AdamOptimizer(lr)
        # TPU REFACTORING: wrap the optimizer in a CrossShardOptimizer: this implements the multi-core training logic
        if params['use_tpu']:
            optimizer = tf.contrib.tpu.CrossShardOptimizer(optimizer)
        # little wrinkle: batch norm uses running averages which need updating after each batch. create_train_op does it, optimizer.minimize does not.
        train_op = tf.contrib.training.create_train_op(loss, optimizer)
        # train_op = optimizer.minimize(loss, tf.train.get_or_create_global_step())
        # TPU REFACTORING: a metric_fn is needed for TPU
        # metrics = {'accuracy': tf.metrics.accuracy(classes, tf.math.argmax(labels, axis=-1))}
        metric_fn = lambda classes, labels: {'accuracy': tf.metrics.accuracy(classes, tf.math.argmax(labels, axis=-1))}
        tpu_metrics = (metric_fn, [classes, labels])  # pair of metric_fn and its list of arguments, there can be multiple pairs in a list
        # metric_fn will run on CPU, not TPU: more operations are allowed
    else:
        loss = train_op = metrics = tpu_metrics = None  # None of these can be computed in prediction mode because labels are not available

    # TPU REFACTORING: EstimatorSpec => TPUEstimatorSpec
    # return tf.estimator.EstimatorSpec(
    return tf.contrib.tpu.TPUEstimatorSpec(
        mode=mode,
        predictions={"predictions": predictions, "classes": classes},  # name these fields as you like
        loss=loss,
        train_op=train_op,
        # TPU REFACTORING: a metric_fn is needed for TPU, passed into the eval_metrics field instead of eval_metric_ops
        # eval_metric_ops=metrics
        eval_metrics=tpu_metrics
    )

# Called once when the model is saved. This function produces a Tensorflow
# graph of operations that will be prepended to your model graph. When
# your model is deployed as a REST API, the API receives data in JSON format,
# parses it into Tensors, then sends the tensors to the input graph generated by
# this function. The graph can transform the data so it can be sent into your
# model input_fn. You can do anything you want here as long as you do it with
# tf.* functions that produce a graph of operations.
def serving_input_fn():
    # placeholder for the data received by the API (already parsed, no JSON decoding necessary,
    # but the JSON must contain one or multiple 'image' key(s) with 28x28 greyscale images as content.)
    inputs = {"serving_input": tf.placeholder(tf.float32, [None, 28, 28])}  # the shape of this dict should match the shape of your JSON
    features = inputs['serving_input']  # no transformation needed
    # features are the features needed by your model_fn
    # Return a ServingInputReceiver if your features are a dictionary of Tensors, TensorServingInputReceiver if they are a straight Tensor
    return tf.estimator.export.TensorServingInputReceiver(features, inputs)
courses/fast-and-lean-data-science/06_MNIST_Estimator_to_TPUEstimator.ipynb
turbomanage/training-data-analyst
apache-2.0
Train and validate the model, this time on TPU
EPOCHS = 10
# TPU_REFACTORING: to use all 8 cores, increase the batch size by 8
GLOBAL_BATCH_SIZE = BATCH_SIZE * 8
# TPU_REFACTORING: TPUEstimator increments the step once per GLOBAL_BATCH_SIZE: must adjust epoch length accordingly
# steps_per_epoch = 60000 // BATCH_SIZE  # 60,000 images in training dataset
steps_per_epoch = 60000 // GLOBAL_BATCH_SIZE  # 60,000 images in training dataset
MODEL_EXPORT_NAME = "mnist"  # name for exporting saved model
# TPU_REFACTORING: the TPU will run multiple steps of training before reporting back
TPU_ITERATIONS_PER_LOOP = steps_per_epoch  # report back after each epoch

tf_logging.set_verbosity(tf_logging.INFO)
now = datetime.datetime.now()
MODEL_DIR = BUCKET+"/mnistjobs/job" + "-{}-{:02d}-{:02d}-{:02d}:{:02d}:{:02d}".format(now.year, now.month, now.day, now.hour, now.minute, now.second)

# TPU REFACTORING: the RunConfig has changed
# training_config = tf.estimator.RunConfig(model_dir=MODEL_DIR, save_summary_steps=10, save_checkpoints_steps=steps_per_epoch, log_step_count_steps=steps_per_epoch/4)
training_config = tf.contrib.tpu.RunConfig(
    cluster=tpu,
    model_dir=MODEL_DIR,
    tpu_config=tf.contrib.tpu.TPUConfig(TPU_ITERATIONS_PER_LOOP))

# TPU_REFACTORING: exporters do not work yet. Must call export_savedmodel manually after training
# export_latest = tf.estimator.LatestExporter(MODEL_EXPORT_NAME, serving_input_receiver_fn=serving_input_fn)

# TPU_REFACTORING: Estimator => TPUEstimator
# estimator = tf.estimator.Estimator(model_fn=model_fn, config=training_config)
estimator = tf.contrib.tpu.TPUEstimator(
    model_fn=model_fn,
    model_dir=MODEL_DIR,
    # TPU_REFACTORING: training and eval batch size must be the same for now
    train_batch_size=GLOBAL_BATCH_SIZE,
    eval_batch_size=10000,   # 10000 digits in eval dataset
    predict_batch_size=10000,  # prediction on the entire eval dataset in the demo below
    config=training_config,
    use_tpu=USE_TPU,
    # TPU REFACTORING: setting the kind of model export we want
    export_to_tpu=False)  # we want an exported model for CPU/GPU inference because that is what is supported on ML Engine

# TPU REFACTORING: train_and_evaluate does not work on TPU yet, TrainSpec not needed
# train_spec = tf.estimator.TrainSpec(training_input_fn, max_steps=EPOCHS*steps_per_epoch)
# TPU REFACTORING: train_and_evaluate does not work on TPU yet, EvalSpec not needed
# eval_spec = tf.estimator.EvalSpec(validation_input_fn, steps=1, exporters=export_latest, throttle_secs=0)  # no eval throttling: evaluates after each checkpoint

# TPU REFACTORING: train_and_evaluate does not work on TPU yet, must train then eval manually
# tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
estimator.train(training_input_fn, steps=steps_per_epoch*EPOCHS)
estimator.evaluate(input_fn=validation_input_fn, steps=1)

# TPU REFACTORING: exporters do not work yet. Must call export_savedmodel manually after training
estimator.export_savedmodel(os.path.join(MODEL_DIR, MODEL_EXPORT_NAME), serving_input_fn)

tf_logging.set_verbosity(tf_logging.WARN)
courses/fast-and-lean-data-science/06_MNIST_Estimator_to_TPUEstimator.ipynb
turbomanage/training-data-analyst
apache-2.0
Visualize predictions
# recognize digits from local fonts
# TPU REFACTORING: TPUEstimator.predict requires a 'params' in its input_fn so that it can pass params['batch_size']
# predictions = estimator.predict(lambda: tf.data.Dataset.from_tensor_slices(font_digits).batch(N),
predictions = estimator.predict(lambda params: tf.data.Dataset.from_tensor_slices(font_digits).batch(N),
                                yield_single_examples=False)  # the returned value is a generator that will yield one batch of predictions per next() call
predicted_font_classes = next(predictions)['classes']
display_digits(font_digits, predicted_font_classes, font_labels, "predictions from local fonts (bad predictions in red)", N)

# recognize validation digits
predictions = estimator.predict(validation_input_fn,
                                yield_single_examples=False)  # the returned value is a generator that will yield one batch of predictions per next() call
predicted_labels = next(predictions)['classes']
display_top_unrecognized(validation_digits, predicted_labels, validation_labels, N, 7)
courses/fast-and-lean-data-science/06_MNIST_Estimator_to_TPUEstimator.ipynb
turbomanage/training-data-analyst
apache-2.0
Deploy the trained model to ML Engine Push your trained model to production on ML Engine for a serverless, autoscaled, REST API experience. You will need a GCS bucket and a GCP project for this. Models deployed on ML Engine autoscale to zero if not used. There will be no ML Engine charges after you are done testing. Google Cloud Storage incurs charges. Empty the bucket after deployment if you want to avoid these. Once the model is deployed, the bucket is not useful anymore. Configuration
PROJECT = "" #@param {type:"string"} NEW_MODEL = True #@param {type:"boolean"} MODEL_NAME = "estimator_mnist_tpu" #@param {type:"string"} MODEL_VERSION = "v0" #@param {type:"string"} assert PROJECT, 'For this part, you need a GCP project. Head to http://console.cloud.google.com/ and create one.' #TPU REFACTORING: TPUEstimator does not create the 'export' subfolder #export_path = os.path.join(MODEL_DIR, 'export', MODEL_EXPORT_NAME) export_path = os.path.join(MODEL_DIR, MODEL_EXPORT_NAME) last_export = sorted(tf.gfile.ListDirectory(export_path))[-1] export_path = os.path.join(export_path, last_export) print('Saved model directory found: ', export_path)
courses/fast-and-lean-data-science/06_MNIST_Estimator_to_TPUEstimator.ipynb
turbomanage/training-data-analyst
apache-2.0
Deploy the model This uses the command-line interface. You can do the same thing through the ML Engine UI at https://console.cloud.google.com/mlengine/models
# Create the model if NEW_MODEL: !gcloud ml-engine models create {MODEL_NAME} --project={PROJECT} --regions=us-central1 # Create a version of this model (you can add --async at the end of the line to make this call non blocking) # Additional config flags are available: https://cloud.google.com/ml-engine/reference/rest/v1/projects.models.versions # You can also deploy a model that is stored locally by providing a --staging-bucket=... parameter !echo "Deployment takes a couple of minutes. You can watch your deployment here: https://console.cloud.google.com/mlengine/models/{MODEL_NAME}" !gcloud ml-engine versions create {MODEL_VERSION} --model={MODEL_NAME} --origin={export_path} --project={PROJECT} --runtime-version=1.10
courses/fast-and-lean-data-science/06_MNIST_Estimator_to_TPUEstimator.ipynb
turbomanage/training-data-analyst
apache-2.0
Test the deployed model Your model is now available as a REST API. Let us try to call it. The cells below use the "gcloud ml-engine" command line tool but any tool that can send a JSON payload to a REST endpoint will work.
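The payload format matters here: ML Engine online prediction expects newline-delimited JSON, one object per line, keyed to match your serving_input_fn. A minimal standalone sketch of writing and reading such a file (the file name and fake image data are illustrative):

```python
import json

digits = [[[0.0] * 28] * 28, [[1.0] * 28] * 28]  # two fake 28x28 "images"

# write: one JSON object per line, keyed "serving_input" like the serving_input_fn
with open("digits_demo.json", "w") as f:
    for digit in digits:
        f.write(json.dumps({"serving_input": digit}) + "\n")

# read back: each line parses independently as its own JSON object
with open("digits_demo.json") as f:
    rows = [json.loads(line) for line in f]
print(len(rows))  # 2
```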
# prepare digits to send to online prediction endpoint digits = np.concatenate((font_digits, validation_digits[:100-N])) labels = np.concatenate((font_labels, validation_labels[:100-N])) with open("digits.json", "w") as f: for digit in digits: # the format for ML Engine online predictions is: one JSON object per line data = json.dumps({"serving_input": digit.tolist()}) # "serving_input" because that is what you defined in your serving_input_fn: {"serving_input": tf.placeholder(tf.float32, [None, 28, 28])} f.write(data+'\n') # Request online predictions from deployed model (REST API) using the "gcloud ml-engine" command line. predictions = !gcloud ml-engine predict --model={MODEL_NAME} --json-instances digits.json --project={PROJECT} --version {MODEL_VERSION} predictions = np.array([int(p.split('[')[0]) for p in predictions[1:]]) # first line is the name of the input layer: drop it, parse the rest display_top_unrecognized(digits, predictions, labels, N, 100//N)
courses/fast-and-lean-data-science/06_MNIST_Estimator_to_TPUEstimator.ipynb
turbomanage/training-data-analyst
apache-2.0
Let's generate a mesh in PHOEBE
b = phoebe.default_binary() b.add_dataset('mesh', times=[0], columns=['teffs', 'vws']) b.run_compute() verts = b.get_value(qualifier='uvw_elements', component='primary', context='model') print(verts.shape) # [polygon, vertex, dimension] teffs = b.get_value(qualifier='teffs', component='primary', context='model') print(teffs.shape) # [polygon] vzs = b.get_value(qualifier='vws', component='primary', context='model') print(vzs.shape) # [polygon] xs = verts[:, :, 0] ys = verts[:, :, 1] zs = verts[:, :, 2] print(xs.shape, ys.shape, zs.shape) # [polygon, vertex]
docs/tutorials/mesh.ipynb
kecnry/autofig
gpl-3.0
Meshes can be drawn by calling the mesh (instead of plot) method of a figure. Most syntax and features are identical between the two, with the following exceptions:
* NO 'c' or 's' dimensions
* ADDITION of 'fc' (facecolor) and 'ec' (edgecolor) dimensions
* linestyle applies to the edges
* NO highlight
* uncover DEFAULTS to True
* trail DEFAULTS to 0
* NO marker
* NO linebreak

If 'z' is passed, the polygons will automatically be sorted in the order of positive z. It is therefore suggested to pass 'z' for any 3D meshes even if plotting in 2D. The edgecolor will default to 'black' and the facecolor to 'none' if not provided:
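The z-sorting mentioned above is essentially a painter's algorithm: polygons are drawn back-to-front so nearer ones cover farther ones. A NumPy sketch of the idea, sorting fake triangles by their mean z (array shapes follow the [polygon, vertex] convention printed earlier; this is an illustration, not autofig's internal code):

```python
import numpy as np

# three fake triangles: [polygon, vertex] z-coordinates
zs = np.array([[ 0.5,  0.6,  0.4],
               [-1.0, -0.9, -1.1],
               [ 0.0,  0.1, -0.1]])

# draw order: back (most negative mean z) to front (most positive)
order = np.argsort(zs.mean(axis=1))
print(order)  # [1 2 0]
```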
autofig.reset() autofig.mesh(x=xs, y=ys, z=zs, xlabel='x', xunit='solRad', ylabel='y', yunit='solRad') mplfig = autofig.draw()
docs/tutorials/mesh.ipynb
kecnry/autofig
gpl-3.0
As was the case for dimensions in plot, 'fc' (facecolor) and 'ec' (edgecolor) accept the following suffixes:
* label
* unit
* map
* lim
autofig.reset() autofig.mesh(x=xs, y=ys, z=zs, xlabel='x', xunit='solRad', ylabel='y', yunit='solRad', fc=teffs, fcmap='afmhot', fclabel='teff', fcunit='K') mplfig = autofig.draw()
docs/tutorials/mesh.ipynb
kecnry/autofig
gpl-3.0
The edges can be turned off by passing ec='none'. Also see how fclim='symmetric' will force the white in the 'bwr' colormap to correspond to vz=0.
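A symmetric color limit simply centers the colormap on zero by taking the largest absolute value in the data as the bound on both sides — roughly what fclim='symmetric' does with 'bwr' so that white corresponds to vz=0 (a sketch of the idea, not autofig's code):

```python
import numpy as np

def symmetric_lim(values):
    # bound both ends of the colormap by the largest |value|
    bound = float(np.abs(values).max())
    return (-bound, bound)

vzs = np.array([-3.0, -1.0, 0.5, 2.0])
print(symmetric_lim(vzs))  # (-3.0, 3.0): zero maps to the middle of the colormap
```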
autofig.reset() autofig.mesh(x=xs, y=ys, z=zs, xlabel='x', xunit='solRad', ylabel='y', yunit='solRad', fc=-vzs, fcmap='bwr', fclim='symmetric', fclabel='rv', fcunit='solRad/d', ec='none') mplfig = autofig.draw()
docs/tutorials/mesh.ipynb
kecnry/autofig
gpl-3.0
The facecolor default to 'none' allows you to see "through" the mesh:
autofig.reset() autofig.mesh(x=xs, y=ys, z=zs, xlabel='x', xunit='solRad', ylabel='y', yunit='solRad', ec=-vzs, ecmap='bwr', eclim='symmetric', eclabel='rv', ecunit='solRad/d') mplfig = autofig.draw()
docs/tutorials/mesh.ipynb
kecnry/autofig
gpl-3.0
In order to not see through the mesh, set the facecolor to 'white':
autofig.reset() autofig.mesh(x=xs, y=ys, z=zs, xlabel='x', xunit='solRad', ylabel='y', yunit='solRad', ec=-vzs, ecmap='bwr', eclim='symmetric', eclabel='rv', ecunit='solRad/d', fc='white') mplfig = autofig.draw()
docs/tutorials/mesh.ipynb
kecnry/autofig
gpl-3.0
We can of course provide different arrays and colormaps for the edge and face:
autofig.reset() autofig.mesh(x=xs, y=ys, z=zs, xlabel='x', xunit='solRad', ylabel='y', yunit='solRad', fc=teffs, fcmap='afmhot', fclabel='teff', fcunit='K', ec=-vzs, ecmap='bwr', eclim='symmetric', eclabel='rv', ecunit='solRad/d') mplfig = autofig.draw()
docs/tutorials/mesh.ipynb
kecnry/autofig
gpl-3.0
Animate and Limits
times = np.linspace(0,1,21) b = phoebe.default_binary() b.add_dataset('mesh', times=times, columns='vws') b.run_compute()
docs/tutorials/mesh.ipynb
kecnry/autofig
gpl-3.0
Rather than add an extra dimension, we can make a separate call to mesh for each time and pass the time to the 'i' dimension as a float.
autofig.reset() for t in times: for c in ['primary', 'secondary']: verts = b.get_value(time=t, component=c, qualifier='uvw_elements', context='model') vzs = b.get_value(time=t, component=c, qualifier='vws', context='model') xs = verts[:, :, 0] ys = verts[:, :, 1] zs = verts[:, :, 2] autofig.mesh(x=xs, y=ys, z=zs, i=t, xlabel='x', xunit='solRad', ylabel='y', yunit='solRad', fc=-vzs, fcmap='bwr', fclim='symmetric', fclabel='rv', fcunit='solRad/d', ec='none', consider_for_limits=c=='primary') mplfig = autofig.draw() autofig.gcf().axes[0].pad_aspect=False # pad_aspect=True (default) causes issues with fixed limits... sigh anim = autofig.animate(i=times, save='mesh_1.gif', save_kwargs={'writer': 'imagemagick'})
docs/tutorials/mesh.ipynb
kecnry/autofig
gpl-3.0
autofig.gcf().axes[0].x.lim = None anim = autofig.animate(i=times, save='mesh_2.gif', save_kwargs={'writer': 'imagemagick'})
docs/tutorials/mesh.ipynb
kecnry/autofig
gpl-3.0
autofig.gcf().axes[0].x.lim = 4 anim = autofig.animate(i=times, save='mesh_3.gif', save_kwargs={'writer': 'imagemagick'})
docs/tutorials/mesh.ipynb
kecnry/autofig
gpl-3.0
1) How does gradient checking work? Backpropagation computes the gradients $\frac{\partial J}{\partial \theta}$, where $\theta$ denotes the parameters of the model. $J$ is computed using forward propagation and your loss function. Because forward propagation is relatively easy to implement, you're confident you got that right, and so you're almost 100% sure that you're computing the cost $J$ correctly. Thus, you can use your code for computing $J$ to verify the code for computing $\frac{\partial J}{\partial \theta}$.

Let's look back at the definition of a derivative (or gradient): $$ \frac{\partial J}{\partial \theta} = \lim_{\varepsilon \to 0} \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2 \varepsilon} \tag{1}$$ If you're not familiar with the "$\displaystyle \lim_{\varepsilon \to 0}$" notation, it's just a way of saying "when $\varepsilon$ is really really small."

We know the following:
- $\frac{\partial J}{\partial \theta}$ is what you want to make sure you're computing correctly.
- You can compute $J(\theta + \varepsilon)$ and $J(\theta - \varepsilon)$ (in the case that $\theta$ is a real number), since you're confident your implementation for $J$ is correct.

Let's use equation (1) and a small value for $\varepsilon$ to convince your CEO that your code for computing $\frac{\partial J}{\partial \theta}$ is correct!

2) 1-dimensional gradient checking Consider a 1D linear function $J(\theta) = \theta x$. The model contains only a single real-valued parameter $\theta$, and takes $x$ as input. You will implement code to compute $J(.)$ and its derivative $\frac{\partial J}{\partial \theta}$. You will then use gradient checking to make sure your derivative computation for $J$ is correct. <img src="images/1Dgrad_kiank.png" style="width:600px;height:250px;"> <caption><center> <u> Figure 1 </u>: 1D linear model<br> </center></caption> The diagram above shows the key computation steps: First start with $x$, then evaluate the function $J(x)$ ("forward propagation"). Then compute the derivative $\frac{\partial J}{\partial \theta}$ ("backward propagation"). Exercise: implement "forward propagation" and "backward propagation" for this simple function. I.e., compute both $J(.)$ ("forward propagation") and its derivative with respect to $\theta$ ("backward propagation"), in two separate functions.
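As a quick numeric sanity check of equation (1) before writing the graded functions: for $J(\theta) = \theta x$ with $x = 2$, the centered difference should come out very close to the analytic derivative $\frac{\partial J}{\partial \theta} = x$. A standalone sketch:

```python
x, theta, eps = 2.0, 4.0, 1e-7

J = lambda theta: theta * x  # forward propagation: J(theta) = theta * x

# centered-difference approximation from equation (1)
gradapprox = (J(theta + eps) - J(theta - eps)) / (2 * eps)
print(gradapprox)  # very close to 2.0, the analytic derivative x
```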
# GRADED FUNCTION: forward_propagation def forward_propagation(x, theta): """ Implement the linear forward propagation (compute J) presented in Figure 1 (J(theta) = theta * x) Arguments: x -- a real-valued input theta -- our parameter, a real number as well Returns: J -- the value of function J, computed using the formula J(theta) = theta * x """ ### START CODE HERE ### (approx. 1 line) J = x*theta ### END CODE HERE ### return J x, theta = 2, 4 J = forward_propagation(x, theta) print ("J = " + str(J))
deeplearning.ai/C2.ImproveDeepNN/week1-hw/Gradient Checking/Gradient+Checking+v1.ipynb
jinzishuai/learn2deeplearn
gpl-3.0
Expected Output:
<table>
<tr>
<td> ** J ** </td>
<td> 8</td>
</tr>
</table>

Exercise: Now, implement the backward propagation step (derivative computation) of Figure 1. That is, compute the derivative of $J(\theta) = \theta x$ with respect to $\theta$. To save you from doing the calculus, you should get $dtheta = \frac { \partial J }{ \partial \theta} = x$.
# GRADED FUNCTION: backward_propagation def backward_propagation(x, theta): """ Computes the derivative of J with respect to theta (see Figure 1). Arguments: x -- a real-valued input theta -- our parameter, a real number as well Returns: dtheta -- the gradient of the cost with respect to theta """ ### START CODE HERE ### (approx. 1 line) dtheta = x ### END CODE HERE ### return dtheta x, theta = 2, 4 dtheta = backward_propagation(x, theta) print ("dtheta = " + str(dtheta))
deeplearning.ai/C2.ImproveDeepNN/week1-hw/Gradient Checking/Gradient+Checking+v1.ipynb
jinzishuai/learn2deeplearn
gpl-3.0
Expected Output:
<table>
<tr>
<td> ** dtheta ** </td>
<td> 2 </td>
</tr>
</table>

Exercise: To show that the backward_propagation() function is correctly computing the gradient $\frac{\partial J}{\partial \theta}$, let's implement gradient checking.

Instructions:
- First compute "gradapprox" using the formula above (1) and a small value of $\varepsilon$. Here are the Steps to follow:
    1. $\theta^{+} = \theta + \varepsilon$
    2. $\theta^{-} = \theta - \varepsilon$
    3. $J^{+} = J(\theta^{+})$
    4. $J^{-} = J(\theta^{-})$
    5. $gradapprox = \frac{J^{+} - J^{-}}{2 \varepsilon}$
- Then compute the gradient using backward propagation, and store the result in a variable "grad"
- Finally, compute the relative difference between "gradapprox" and "grad" using the following formula: $$ difference = \frac {\mid\mid grad - gradapprox \mid\mid_2}{\mid\mid grad \mid\mid_2 + \mid\mid gradapprox \mid\mid_2} \tag{2}$$ You will need 3 Steps to compute this formula:
    - 1'. compute the numerator using np.linalg.norm(...)
    - 2'. compute the denominator. You will need to call np.linalg.norm(...) twice.
    - 3'. divide them.
- If this difference is small (say less than $10^{-7}$), you can be quite confident that you have computed your gradient correctly. Otherwise, there may be a mistake in the gradient computation.
# GRADED FUNCTION: gradient_check def gradient_check(x, theta, epsilon = 1e-7): """ Implement the backward propagation presented in Figure 1. Arguments: x -- a real-valued input theta -- our parameter, a real number as well epsilon -- tiny shift to the input to compute approximated gradient with formula(1) Returns: difference -- difference (2) between the approximated gradient and the backward propagation gradient """ # Compute gradapprox using left side of formula (1). epsilon is small enough, you don't need to worry about the limit. ### START CODE HERE ### (approx. 5 lines) thetaplus = theta+epsilon # Step 1 thetaminus = theta-epsilon # Step 2 J_plus = forward_propagation(x, thetaplus) # Step 3 J_minus = forward_propagation(x, thetaminus) # Step 4 gradapprox = (J_plus-J_minus)/(2*epsilon) # Step 5 ### END CODE HERE ### # Check if gradapprox is close enough to the output of backward_propagation() ### START CODE HERE ### (approx. 1 line) grad = backward_propagation(x, theta) ### END CODE HERE ### ### START CODE HERE ### (approx. 1 line) numerator = np.linalg.norm(grad-gradapprox) # Step 1' denominator = np.linalg.norm(grad)+np.linalg.norm(gradapprox) # Step 2' difference = numerator/denominator # Step 3' ### END CODE HERE ### if difference < 1e-7: print ("The gradient is correct!") else: print ("The gradient is wrong!") return difference x, theta = 2, 4 difference = gradient_check(x, theta) print("difference = " + str(difference))
deeplearning.ai/C2.ImproveDeepNN/week1-hw/Gradient Checking/Gradient+Checking+v1.ipynb
jinzishuai/learn2deeplearn
gpl-3.0
Expected Output: The gradient is correct! <table> <tr> <td> ** difference ** </td> <td> 2.9193358103083e-10 </td> </tr> </table> Congrats, the difference is smaller than the $10^{-7}$ threshold. So you can have high confidence that you've correctly computed the gradient in backward_propagation(). Now, in the more general case, your cost function $J$ has more than a single 1D input. When you are training a neural network, $\theta$ actually consists of multiple matrices $W^{[l]}$ and biases $b^{[l]}$! It is important to know how to do a gradient check with higher-dimensional inputs. Let's do it! 3) N-dimensional gradient checking The following figure describes the forward and backward propagation of your fraud detection model. <img src="images/NDgrad_kiank.png" style="width:600px;height:400px;"> <caption><center> <u> Figure 2 </u>: deep neural network<br>LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID</center></caption> Let's look at your implementations for forward propagation and backward propagation.
def forward_propagation_n(X, Y, parameters): """ Implements the forward propagation (and computes the cost) presented in Figure 3. Arguments: X -- training set for m examples Y -- labels for m examples parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3": W1 -- weight matrix of shape (5, 4) b1 -- bias vector of shape (5, 1) W2 -- weight matrix of shape (3, 5) b2 -- bias vector of shape (3, 1) W3 -- weight matrix of shape (1, 3) b3 -- bias vector of shape (1, 1) Returns: cost -- the cost function (logistic cost for one example) """ # retrieve parameters m = X.shape[1] W1 = parameters["W1"] b1 = parameters["b1"] W2 = parameters["W2"] b2 = parameters["b2"] W3 = parameters["W3"] b3 = parameters["b3"] # LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID Z1 = np.dot(W1, X) + b1 A1 = relu(Z1) Z2 = np.dot(W2, A1) + b2 A2 = relu(Z2) Z3 = np.dot(W3, A2) + b3 A3 = sigmoid(Z3) # Cost logprobs = np.multiply(-np.log(A3),Y) + np.multiply(-np.log(1 - A3), 1 - Y) cost = 1./m * np.sum(logprobs) cache = (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) return cost, cache
deeplearning.ai/C2.ImproveDeepNN/week1-hw/Gradient Checking/Gradient+Checking+v1.ipynb
jinzishuai/learn2deeplearn
gpl-3.0
Now, run backward propagation.
def backward_propagation_n(X, Y, cache): """ Implement the backward propagation presented in figure 2. Arguments: X -- input datapoint, of shape (input size, 1) Y -- true "label" cache -- cache output from forward_propagation_n() Returns: gradients -- A dictionary with the gradients of the cost with respect to each parameter, activation and pre-activation variables. """ m = X.shape[1] (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache dZ3 = A3 - Y dW3 = 1./m * np.dot(dZ3, A2.T) db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True) dA2 = np.dot(W3.T, dZ3) dZ2 = np.multiply(dA2, np.int64(A2 > 0)) dW2 = 1./m * np.dot(dZ2, A1.T) * 2 db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True) dA1 = np.dot(W2.T, dZ2) dZ1 = np.multiply(dA1, np.int64(A1 > 0)) dW1 = 1./m * np.dot(dZ1, X.T) db1 = 4./m * np.sum(dZ1, axis=1, keepdims = True) gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3, "dA2": dA2, "dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1, "dZ1": dZ1, "dW1": dW1, "db1": db1} return gradients
deeplearning.ai/C2.ImproveDeepNN/week1-hw/Gradient Checking/Gradient+Checking+v1.ipynb
jinzishuai/learn2deeplearn
gpl-3.0
You obtained some results on the fraud detection test set but you are not 100% sure of your model. Nobody's perfect! Let's implement gradient checking to verify if your gradients are correct.

How does gradient checking work? As in 1) and 2), you want to compare "gradapprox" to the gradient computed by backpropagation. The formula is still: $$ \frac{\partial J}{\partial \theta} = \lim_{\varepsilon \to 0} \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2 \varepsilon} \tag{1}$$

However, $\theta$ is not a scalar anymore. It is a dictionary called "parameters". We implemented a function "dictionary_to_vector()" for you. It converts the "parameters" dictionary into a vector called "values", obtained by reshaping all parameters (W1, b1, W2, b2, W3, b3) into vectors and concatenating them. The inverse function is "vector_to_dictionary" which outputs back the "parameters" dictionary.

<img src="images/dictionary_to_vector.png" style="width:600px;height:400px;"> <caption><center> <u> Figure 2 </u>: dictionary_to_vector() and vector_to_dictionary()<br> You will need these functions in gradient_check_n()</center></caption>

We have also converted the "gradients" dictionary into a vector "grad" using gradients_to_vector(). You don't need to worry about that.

Exercise: Implement gradient_check_n().

Instructions: Here is pseudo-code that will help you implement the gradient check.

For each i in num_parameters:
- To compute J_plus[i]:
    1. Set $\theta^{+}$ to np.copy(parameters_values)
    2. Set $\theta^{+}_i$ to $\theta^{+}_i + \varepsilon$
    3. Calculate $J^{+}_i$ using forward_propagation_n(x, y, vector_to_dictionary($\theta^{+}$)).
- To compute J_minus[i]: do the same thing with $\theta^{-}$
- Compute $gradapprox[i] = \frac{J^{+}_i - J^{-}_i}{2 \varepsilon}$

Thus, you get a vector gradapprox, where gradapprox[i] is an approximation of the gradient with respect to parameter_values[i]. You can now compare this gradapprox vector to the gradients vector from backpropagation. Just like for the 1D case (Steps 1', 2', 3'), compute: $$ difference = \frac {\| grad - gradapprox \|_2}{\| grad \|_2 + \| gradapprox \|_2 } \tag{3}$$
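The dictionary_to_vector()/vector_to_dictionary() round trip described above can be sketched for a toy two-parameter dictionary (the parameter names and shapes here are illustrative, not the fraud model's, and this is not the helper's actual code):

```python
import numpy as np

params = {"W1": np.arange(6.0).reshape(2, 3), "b1": np.zeros((2, 1))}
keys = ["W1", "b1"]  # fixed order so flatten and unflatten are inverses

def to_vector(params):
    # reshape each parameter to a column and stack them
    return np.concatenate([params[k].reshape(-1, 1) for k in keys])

def to_dictionary(vec):
    # slice the column vector back into the original shapes
    out, i = {}, 0
    for k in keys:
        n = params[k].size
        out[k] = vec[i:i + n].reshape(params[k].shape)
        i += n
    return out

vec = to_vector(params)
print(vec.shape)  # (8, 1): 6 entries of W1 followed by 2 entries of b1
restored = to_dictionary(vec)
print(np.allclose(restored["W1"], params["W1"]))  # True
```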
# GRADED FUNCTION: gradient_check_n

def gradient_check_n(parameters, gradients, X, Y, epsilon = 1e-7):
    """
    Checks if backward_propagation_n computes correctly the gradient of the cost output by forward_propagation_n

    Arguments:
    parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3"
    gradients -- output of backward_propagation_n, contains gradients of the cost with respect to the parameters.
    X -- input datapoint, of shape (input size, 1)
    Y -- true "label"
    epsilon -- tiny shift to the input to compute approximated gradient with formula(1)

    Returns:
    difference -- difference (2) between the approximated gradient and the backward propagation gradient
    """

    # Set-up variables
    parameters_values, _ = dictionary_to_vector(parameters)
    grad = gradients_to_vector(gradients)
    num_parameters = parameters_values.shape[0]
    J_plus = np.zeros((num_parameters, 1))
    J_minus = np.zeros((num_parameters, 1))
    gradapprox = np.zeros((num_parameters, 1))

    # Compute gradapprox
    for i in range(num_parameters):
        # Compute J_plus[i]. Inputs: "parameters_values, epsilon". Output = "J_plus[i]".
        # "_" is used because the function outputs two values but we only care about the first one
        ### START CODE HERE ### (approx. 3 lines)
        thetaplus = np.copy(parameters_values)                                         # Step 1
        thetaplus[i][0] = thetaplus[i][0] + epsilon                                    # Step 2
        J_plus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(thetaplus))    # Step 3
        ### END CODE HERE ###

        # Compute J_minus[i]. Inputs: "parameters_values, epsilon". Output = "J_minus[i]".
        ### START CODE HERE ### (approx. 3 lines)
        thetaminus = np.copy(parameters_values)                                        # Step 1
        thetaminus[i][0] = thetaminus[i][0] - epsilon                                  # Step 2
        J_minus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(thetaminus))  # Step 3
        ### END CODE HERE ###

        # Compute gradapprox[i]
        ### START CODE HERE ### (approx. 1 line)
        gradapprox[i] = (J_plus[i] - J_minus[i]) / (2*epsilon)
        ### END CODE HERE ###

    # Compare gradapprox to backward propagation gradients by computing difference.
    ### START CODE HERE ### (approx. 1 line)
    numerator = np.linalg.norm(grad - gradapprox, ord=2)                               # Step 1'
    denominator = np.linalg.norm(grad, ord=2) + np.linalg.norm(gradapprox, ord=2)      # Step 2'
    difference = numerator / denominator                                               # Step 3'
    ### END CODE HERE ###

    if difference > 2e-7:
        print ("\033[93m" + "There is a mistake in the backward propagation! difference = " + str(difference) + "\033[0m")
    else:
        print ("\033[92m" + "Your backward propagation works perfectly fine! difference = " + str(difference) + "\033[0m")

    return difference

X, Y, parameters = gradient_check_n_test_case()

cost, cache = forward_propagation_n(X, Y, parameters)
gradients = backward_propagation_n(X, Y, cache)
difference = gradient_check_n(parameters, gradients, X, Y)
deeplearning.ai/C2.ImproveDeepNN/week1-hw/Gradient Checking/Gradient+Checking+v1.ipynb
jinzishuai/learn2deeplearn
gpl-3.0
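As an aside (not part of the graded exercise), the relative-difference formula from the gradient-checking section can be sanity-checked numerically on small hand-made vectors; this minimal sketch only assumes NumPy:

```python
import numpy as np

def relative_difference(grad, gradapprox):
    # Euclidean distance between the two gradient vectors,
    # normalized by the sum of their norms (Steps 1', 2', 3').
    numerator = np.linalg.norm(grad - gradapprox)
    denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox)
    return numerator / denominator

# Identical vectors -> difference is exactly 0.
print(relative_difference(np.array([1.0, 2.0]), np.array([1.0, 2.0])))  # 0.0

# A tiny perturbation yields a correspondingly tiny relative difference.
print(relative_difference(np.array([1.0, 2.0]), np.array([1.0, 2.0 + 1e-7])))
```

The normalization by the sum of norms is what makes the threshold (e.g. `2e-7`) meaningful regardless of the scale of the gradients.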
Recommending movies: ranking This tutorial is a slightly adapted version of the basic ranking tutorial from TensorFlow Recommenders documentation. Imports Let's first get our imports out of the way.
!pip install -q tensorflow-recommenders
!pip install -q --upgrade tensorflow-datasets

import os
import pprint
import tempfile

from typing import Dict, Text

import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds

import tensorflow_recommenders as tfrs
tfrs-flutter/step5/backend/ranking/ranking.ipynb
flutter/codelabs
bsd-3-clause
Preparing the dataset We're continuing to use the MovieLens dataset. This time, we're also going to keep the ratings: these are the objectives we are trying to predict.
ratings = tfds.load("movielens/100k-ratings", split="train")

ratings = ratings.map(lambda x: {
    "movie_title": x["movie_title"],
    "user_id": x["user_id"],
    "user_rating": x["user_rating"]
})
tfrs-flutter/step5/backend/ranking/ranking.ipynb
flutter/codelabs
bsd-3-clause
We'll split the data by putting 80% of the ratings in the train set, and 20% in the test set.
tf.random.set_seed(42)
shuffled = ratings.shuffle(100_000, seed=42, reshuffle_each_iteration=False)

train = shuffled.take(80_000)
test = shuffled.skip(80_000).take(20_000)
tfrs-flutter/step5/backend/ranking/ranking.ipynb
flutter/codelabs
bsd-3-clause
Preparing the vocabularies Next, we figure out the unique user ids and movie titles present in the data so that we can create the user and movie embedding tables.
movie_titles = ratings.batch(1_000_000).map(lambda x: x["movie_title"])
user_ids = ratings.batch(1_000_000).map(lambda x: x["user_id"])

unique_movie_titles = np.unique(np.concatenate(list(movie_titles)))
unique_user_ids = np.unique(np.concatenate(list(user_ids)))
tfrs-flutter/step5/backend/ranking/ranking.ipynb
flutter/codelabs
bsd-3-clause
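The `StringLookup` layers in the next section will turn these vocabularies into integer indices, with index 0 reserved for out-of-vocabulary values, which is why the `Embedding` layers below allocate `len(vocabulary) + 1` rows. A minimal pure-NumPy sketch of that mapping, assuming a sorted vocabulary as produced by `np.unique` (the `lookup` helper and the toy values are for illustration only):

```python
import numpy as np

def lookup(vocabulary, values, oov_index=0):
    # Map each string to 1 + its position in the sorted vocabulary;
    # unknown strings fall back to the out-of-vocabulary index 0.
    positions = np.searchsorted(vocabulary, values)
    ids = positions + 1
    in_vocab = (positions < len(vocabulary)) & \
        (vocabulary[np.minimum(positions, len(vocabulary) - 1)] == values)
    return np.where(in_vocab, ids, oov_index)

vocab = np.unique(np.array(["42", "7", "42", "13"]))  # sorted lexicographically: '13', '42', '7'
print(lookup(vocab, np.array(["42", "7", "99"])))     # [2 3 0] -- '99' is out-of-vocabulary
```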
Implementing a model Architecture Ranking models do not face the same efficiency constraints as retrieval models do, and so we have a little bit more freedom in our choice of architectures. We can implement our ranking model as follows:
class RankingModel(tf.keras.Model):

  def __init__(self):
    super().__init__()
    embedding_dimension = 32

    # Compute embeddings for users.
    self.user_embeddings = tf.keras.Sequential([
      tf.keras.layers.StringLookup(
        vocabulary=unique_user_ids, mask_token=None),
      tf.keras.layers.Embedding(len(unique_user_ids) + 1, embedding_dimension)
    ])

    # Compute embeddings for movies.
    self.movie_embeddings = tf.keras.Sequential([
      tf.keras.layers.StringLookup(
        vocabulary=unique_movie_titles, mask_token=None),
      tf.keras.layers.Embedding(len(unique_movie_titles) + 1, embedding_dimension)
    ])

    # Compute predictions.
    self.ratings = tf.keras.Sequential([
      # Learn multiple dense layers.
      tf.keras.layers.Dense(256, activation="relu"),
      tf.keras.layers.Dense(64, activation="relu"),
      # Make rating predictions in the final layer.
      tf.keras.layers.Dense(1)
    ])

  def call(self, inputs):
    user_id, movie_title = inputs

    user_embedding = self.user_embeddings(user_id)
    movie_embedding = self.movie_embeddings(movie_title)

    return self.ratings(tf.concat([user_embedding, movie_embedding], axis=1))
tfrs-flutter/step5/backend/ranking/ranking.ipynb
flutter/codelabs
bsd-3-clause
Loss and metrics We'll make use of the Ranking task object: a convenience wrapper that bundles together the loss function and metric computation. We'll use it together with the MeanSquaredError Keras loss in order to predict the ratings.
task = tfrs.tasks.Ranking(
  loss=tf.keras.losses.MeanSquaredError(),
  metrics=[tf.keras.metrics.RootMeanSquaredError()]
)
tfrs-flutter/step5/backend/ranking/ranking.ipynb
flutter/codelabs
bsd-3-clause
The full model We can now put it all together into a model.
class MovielensModel(tfrs.models.Model):

  def __init__(self):
    super().__init__()
    self.ranking_model: tf.keras.Model = RankingModel()
    self.task: tf.keras.layers.Layer = tfrs.tasks.Ranking(
      loss=tf.keras.losses.MeanSquaredError(),
      metrics=[tf.keras.metrics.RootMeanSquaredError()]
    )

  def call(self, features: Dict[str, tf.Tensor]) -> tf.Tensor:
    return self.ranking_model(
        (features["user_id"], features["movie_title"]))

  def compute_loss(self, features: Dict[Text, tf.Tensor], training=False) -> tf.Tensor:
    labels = features.pop("user_rating")

    rating_predictions = self(features)

    # The task computes the loss and the metrics.
    return self.task(labels=labels, predictions=rating_predictions)
tfrs-flutter/step5/backend/ranking/ranking.ipynb
flutter/codelabs
bsd-3-clause
Fitting and evaluating After defining the model, we can use standard Keras fitting and evaluation routines to fit and evaluate the model. Let's first instantiate the model.
model = MovielensModel()
model.compile(optimizer=tf.keras.optimizers.Adagrad(learning_rate=0.1))
tfrs-flutter/step5/backend/ranking/ranking.ipynb
flutter/codelabs
bsd-3-clause
Then shuffle, batch, and cache the training and evaluation data.
cached_train = train.shuffle(100_000).batch(8192).cache()
cached_test = test.batch(4096).cache()
tfrs-flutter/step5/backend/ranking/ranking.ipynb
flutter/codelabs
bsd-3-clause
Then train the model:
model.fit(cached_train, epochs=3)
tfrs-flutter/step5/backend/ranking/ranking.ipynb
flutter/codelabs
bsd-3-clause
As the model trains, the loss is falling and the RMSE metric is improving. Finally, we can evaluate our model on the test set:
model.evaluate(cached_test, return_dict=True)
tfrs-flutter/step5/backend/ranking/ranking.ipynb
flutter/codelabs
bsd-3-clause
The lower the RMSE metric, the more accurate our model is at predicting ratings. Exporting for serving The model can be easily exported for serving:
tf.saved_model.save(model, "exported-ranking/123")
tfrs-flutter/step5/backend/ranking/ranking.ipynb
flutter/codelabs
bsd-3-clause
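`RootMeanSquaredError` is simply the square root of the mean squared prediction error, in the same units as the ratings themselves. A tiny hand-computed sketch (with made-up ratings, not model output) shows what the reported number means:

```python
import numpy as np

def rmse(labels, predictions):
    # Root of the mean of squared errors -- the quantity reported by evaluate().
    return np.sqrt(np.mean((np.asarray(labels) - np.asarray(predictions)) ** 2))

# Hypothetical true ratings vs. model predictions.
print(rmse([4.0, 3.0, 5.0], [3.5, 3.0, 4.0]))  # ~0.6455
```

An RMSE of 0.65 on a 1-to-5 rating scale means predictions are off by roughly two-thirds of a star on average.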
We will deploy the model with TensorFlow Serving soon.
# Zip the SavedModel folder for easier download
!zip -r exported-ranking.zip exported-ranking/
tfrs-flutter/step5/backend/ranking/ranking.ipynb
flutter/codelabs
bsd-3-clause
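For reference, once the SavedModel is served, TensorFlow Serving's REST predict API accepts a JSON body with an `instances` list; the sketch below builds such a request. The host, port, and model name `ranking` are assumptions for illustration and must match your actual deployment:

```python
import json

# Hypothetical serving endpoint -- adjust host/port/model name to your deployment.
url = "http://localhost:8501/v1/models/ranking:predict"

# One instance per (user, movie) pair we want a predicted rating for.
body = json.dumps({
    "instances": [
        {"user_id": "42", "movie_title": "Speed (1994)"},
    ]
})
print(body)
```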
We open the database and print the keys of the individual tables.
hdf = pd.HDFStore('../../data/raw/TestMessungen_NEU.hdf')
print(hdf.keys())
notebooks/pawel_ueb2/mustererkennung_in_funkmessdaten_PCA.ipynb
hhain/sdap17
mit
Task 2: Inspecting a single dataframe We load the frame x1_t1_trx_1_4 and look at its dimensions.
df_x1_t1_trx_1_4 = hdf.get('/x1/t1/trx_1_4')
print("Rows:", df_x1_t1_trx_1_4.shape[0])
print("Columns:", df_x1_t1_trx_1_4.shape[1])
notebooks/pawel_ueb2/mustererkennung_in_funkmessdaten_PCA.ipynb
hhain/sdap17
mit
Next, we examine the attribute composition for two receiver-sender groups as examples.
# First inspection of columns from df_x1_t1_trx_1_4
df_x1_t1_trx_1_4.head(5)
notebooks/pawel_ueb2/mustererkennung_in_funkmessdaten_PCA.ipynb
hhain/sdap17
mit
For analyzing the frames, we define a few helper functions.
import re  # needed for re.search below

# Little function to retrieve sender-receiver tuples from df columns
def extract_snd_rcv(df):
    regex = r"trx_[1-4]_[1-4]"
    # creates a set containing the different pairs
    snd_rcv = {x[4:7] for x in df.columns if re.search(regex, x)}
    return [(x[0], x[-1]) for x in snd_rcv]

# Sums the number of columns for each sender-receiver tuple
def get_column_counts(snd_rcv, df):
    col_counts = {}
    for snd, rcv in snd_rcv:
        col_counts['Columns for pair {} {}:'.format(snd, rcv)] = len(
            [i for i, word in enumerate(list(df.columns))
             if word.startswith('trx_{}_{}'.format(snd, rcv))])
    return col_counts

# Analyze the column composition of a given measurement.
def analyse_columns(df):
    df_snd_rcv = extract_snd_rcv(df)
    cc = get_column_counts(df_snd_rcv, df)
    for x in cc:
        print(x, cc[x])
    print("Sum of pair related columns: %i" % sum(cc.values()))
    print()
    print("Other columns are:")
    for att in [col for col in df.columns if 'ifft' not in col and 'ts' not in col]:
        print(att)

# Analyze the values of the target column.
def analyze_target(df):
    print(df['target'].unique())
    print("# Unique values in target: %i" % len(df['target'].unique()))
notebooks/pawel_ueb2/mustererkennung_in_funkmessdaten_PCA.ipynb
hhain/sdap17
mit
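To see what the pair-extraction helper does without the actual measurement data, here is its core logic applied to a toy column list (the column names below are made up, following the `trx_<sender>_<receiver>_...` naming scheme):

```python
import re

# Hypothetical column names in the trx_<sender>_<receiver>_... scheme.
columns = ["trx_1_4_ifft_0", "trx_1_4_ifft_1", "trx_2_3_ifft_0", "target"]

regex = r"trx_[1-4]_[1-4]"
# Characters 4..6 of a matching name are the "<sender>_<receiver>" part.
snd_rcv = {c[4:7] for c in columns if re.search(regex, c)}
pairs = sorted((x[0], x[-1]) for x in snd_rcv)
print(pairs)  # [('1', '4'), ('2', '3')]
```

Using a set deduplicates the many per-pair measurement columns down to one entry per sender-receiver pair.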
Now determine the column composition of df_x1_t1_trx_1_4.
analyse_columns(df_x1_t1_trx_1_4)
notebooks/pawel_ueb2/mustererkennung_in_funkmessdaten_PCA.ipynb
hhain/sdap17
mit