Example output: { "deployedModel": { "id": "7407594554280378368" } }
# The unique ID for the deployed model
deployed_model_id = result.deployed_model.id
print(deployed_model_id)
notebooks/community/migration/UJ9 Custom Training Prebuilt Container XGBoost.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
projects.locations.endpoints.predict Prepare file for online prediction
INSTANCES = [[1.4, 1.3, 5.1, 2.8], [1.5, 1.2, 4.7, 2.4]]
notebooks/community/migration/UJ9 Custom Training Prebuilt Container XGBoost.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Request
prediction_request = {"endpoint": endpoint_id, "instances": INSTANCES}
print(json.dumps(prediction_request, indent=2))
notebooks/community/migration/UJ9 Custom Training Prebuilt Container XGBoost.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output: { "endpoint": "projects/116273516712/locations/us-central1/endpoints/1733903448723685376", "instances": [ [ 1.4, 1.3, 5.1, 2.8 ], [ 1.5, 1.2, 4.7, 2.4 ] ] } Call
request = clients["prediction"].predict(endpoint=endpoint_id, instances=INSTANCES)
notebooks/community/migration/UJ9 Custom Training Prebuilt Container XGBoost.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output: { "predictions": [ 2.045193195343018, 1.961864471435547 ], "deployedModelId": "7407594554280378368" } projects.locations.endpoints.undeployModel Call
request = clients["endpoint"].undeploy_model(
    endpoint=endpoint_id,
    deployed_model_id=deployed_model_id,
    traffic_split={}
)
notebooks/community/migration/UJ9 Custom Training Prebuilt Container XGBoost.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
We want to generalize this result to several masses connected by several springs. The spring constant as a second derivative of potential The force is related to the potential energy by $$ F = -\frac{d}{dx}V(x).$$ This equation comes directly from the definition that work is force times distance. Integrating this, we find the ...
import numpy as np

a = np.array([[1, -1], [-1, 1]])
freq, vectors = np.linalg.eig(a)  # eigenvalues are the squared mode frequencies, in units of k/m
vectors = vectors.transpose()
HarmonicOscillator.ipynb
shumway/srt_bootcamp
mit
The squared frequencies of the two modes of vibration are (in units of $k/m$)
freq
HarmonicOscillator.ipynb
shumway/srt_bootcamp
mit
The first mode is a vibrational mode where the masses vibrate against each other (moving in opposite directions). This can be seen from the eigenvector.
vectors[0]
HarmonicOscillator.ipynb
shumway/srt_bootcamp
mit
The second mode is a translation mode with zero frequency—both masses move in the same direction.
vectors[1]
HarmonicOscillator.ipynb
shumway/srt_bootcamp
mit
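As a cross-check on the two results above, the mode frequencies follow from the eigenvalues of the dynamical matrix: $\omega = \sqrt{\text{eigenvalue}}$ in units of $\sqrt{k/m}$. A standalone sketch (using `eigh`, which suits symmetric matrices):

```python
import numpy as np

# dynamical matrix for two equal masses joined by one spring (k = m = 1)
a = np.array([[1.0, -1.0], [-1.0, 1.0]])

# eigh is tailored to symmetric matrices; eigenvalues come back in ascending order
evals, evecs = np.linalg.eigh(a)

# eigenvalues are squared frequencies; clip tiny negative roundoff before the sqrt
omega = np.sqrt(np.clip(evals, 0.0, None))[::-1]
print(omega)  # vibrational mode near sqrt(2) ~ 1.414, translation mode near 0
```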
We can interactively illustrate the vibrational mode.
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.html import widgets

def make_plot(t):
    fig, ax = plt.subplots()
    x, y = np.array([-1, 1]), np.array([0, 0])
    plt.plot(x, y, 'k.')
    plt.plot(x + 0.3 * vectors[0] * t, y, 'bo')
    plt.xlim(-1.5, 1.5)
    plt.ylim(-1.5, 1.5)

widgets.interact(make_pl...
HarmonicOscillator.ipynb
shumway/srt_bootcamp
mit
Finding the dynamical matrix with numerical derivatives We start from a function $V(x)$. If we want to calculate a derivative, we just use the difference formula but don't take the delta too small. Using $\Delta x = 10^{-6}$ is safe. $$ F = -\frac{dV(x)}{dx} \approx -\frac{V(x+\Delta x) - V(x-\Delta x)}{2\Delta x} $$ N...
def V(x):
    return 0.5 * x**2

deltax = 1e-6

def F_approx(x):
    # F = -dV/dx via the centered difference formula
    return -( V(x + deltax) - V(x - deltax) ) / (2 * deltax)

[(x, F_approx(x)) for x in np.linspace(-2, 2, 9)]
HarmonicOscillator.ipynb
shumway/srt_bootcamp
mit
Next, we can find the second derivative by using the difference formula twice. We find the nice expression, $$ \frac{d^2V}{dx^2} \approx \frac{V(x+\Delta x) - 2V(x) + V(x-\Delta x)}{(\Delta x)^2}. $$ This formula has the nice interpretation of comparing the value of $V(x)$ to the average of points on either side. If it...
def dV2dx2_approx(x):
    return ( V(x + deltax) - 2 * V(x) + V(x - deltax) ) / deltax**2

[(x, dV2dx2_approx(x)) for x in np.linspace(-2, 2, 9)]
HarmonicOscillator.ipynb
shumway/srt_bootcamp
mit
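As a quick sanity check on the second-difference formula, we can apply it to a function whose second derivative we know in closed form; $\cos(x)$ works nicely since $\frac{d^2}{dx^2}\cos x = -\cos x$. A standalone sketch (the step size $10^{-4}$ balances truncation error against roundoff):

```python
import numpy as np

def d2_approx(f, x, h=1e-4):
    # central second difference: compares f(x) to the average of its neighbours
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

x = 0.5
print(d2_approx(np.cos, x))  # close to -cos(0.5) ~ -0.8776
```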
Now we can use these derivative formulas to calculate the dynamical matrix for the two masses on one spring. We'll use $k=1$ and $m=1$ for simplicity.
def V2(x1, x2):
    return 0.5 * (x1 - x2)**2

x1, x2 = -1, 1
mat = np.array(
    [[(V2(x1+deltax, x2) - 2 * V2(x1, x2) + V2(x1-deltax, x2)) / deltax**2,
      (V2(x1+deltax, x2+deltax) - V2(x1-deltax, x2+deltax) - V2(x1+deltax, x2-deltax) + V2(x1-deltax, x2-deltax)) / (2*deltax)**2],
     [(V2(x1+deltax, x2+deltax) - V2(x1-de...
HarmonicOscillator.ipynb
shumway/srt_bootcamp
mit
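The same finite differences can be wrapped into a small helper that builds the full 2×2 dynamical matrix (the Hessian of $V$); for the single spring with $k=m=1$ it should reproduce $[[1,-1],[-1,1]]$. A standalone sketch:

```python
import numpy as np

def V2(x1, x2):
    return 0.5 * (x1 - x2)**2

def hess(f, x1, x2, h=1e-4):
    # diagonal entries: 1D second-difference in each coordinate
    d11 = (f(x1+h, x2) - 2*f(x1, x2) + f(x1-h, x2)) / h**2
    d22 = (f(x1, x2+h) - 2*f(x1, x2) + f(x1, x2-h)) / h**2
    # mixed partial: four-corner difference over (2h)^2
    d12 = (f(x1+h, x2+h) - f(x1-h, x2+h) - f(x1+h, x2-h) + f(x1-h, x2-h)) / (2*h)**2
    return np.array([[d11, d12], [d12, d22]])

print(hess(V2, -1.0, 1.0))  # ~ [[1, -1], [-1, 1]]
```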
I want to count the number of rentals per vehicle ID in reservations.csv, appending these values as a column in vehicles.csv, in order to compare vehicle properties to reservation numbers directly. I also expect that a key factor in customer decisions may be the price difference between the actual and recommended price...
# Count frequency of rentals and add column to VEHICLE_DATA
veh_id = VEHICLE_DATA.as_matrix(columns=['vehicle_id'])
res_id = RESERVATION_DATA.as_matrix(columns=['vehicle_id'])

# Use numpy here to ensure zero counts as a value
n_reservations = np.zeros(len(veh_id))
for i, id in enumerate(veh_id):
    n_reservations[i] =...
Data Science Example Rental Cars.ipynb
nigelmathes/Data_Science
gpl-3.0
Finding the most important factors driving the total number of reservations
from pandas.tools.plotting import scatter_matrix

scatter_matrix(VEHICLE_DATA[PLOT_COLUMNS], diagonal='kde')
plt.show()
Data Science Example Rental Cars.ipynb
nigelmathes/Data_Science
gpl-3.0
The plot above shows every measured parameter, along with the newly added price-difference and number-of-reservations parameters, plotted against one another. I used this to try to quickly determine, visually, what parameters might drive the number of reservations. This is probably at the limit, in terms of number of p...
plt.rcParams.update({'font.size': LABEL_SIZE})
plt.rc('figure', figsize=(10, 8))
VEHICLE_DATA.plot.hexbin(x='actual_price', y='recommended_price', C='num_reservations',
                         reduce_C_function=np.max, gridsize=25, figsize=(10, 8))
plt.ylabel("Recommended Price")
plt.xlabel("Actual Price")
plt.plot([34,90],[...
Data Science Example Rental Cars.ipynb
nigelmathes/Data_Science
gpl-3.0
In the above figure, I show the recommended price versus the actual price in a hex density plot, with the color intensity representing the number of reservations. The orange line represents a one-to-one correlation between the two parameters. One can see immediately that there is a high density of reservations correspo...
# Define feature columns to explore with machine learning
FEATURE_COLUMNS = ['technology', 'num_images', 'street_parked', 'description', 'diff_price', 'actual_price']

# Random forest regressor for continuous num_reservations
TARGET_COLUMN = ['num_reservations']
rf = RandomForestRegressor()
rf.fit(VEHICLE_DATA[FEATURE_CO...
Data Science Example Rental Cars.ipynb
nigelmathes/Data_Science
gpl-3.0
In the above code, I explore the data set using a Random Forest algorithm. It is a relatively quick and exceptionally versatile way to examine labelled data and derive relationships. In this case, I am using it to "score" the various parameters by how much they contribute to the number of reservations. I ask the Rando...
fig, axes = plt.subplots(nrows=1, ncols=2, sharey=True)
fig.set_figheight(6)
fig.set_figwidth(16)
MERGED_DATA[['actual_price']].plot.hist(bins=15, ax=axes[0])
axes[0].set_xlabel("Actual Price")
MERGED_DATA[['diff_price']].plot.hist(bins=15, ax=axes[1])
plt.xlabel("Price Difference")
plt.show()
Data Science Example Rental Cars.ipynb
nigelmathes/Data_Science
gpl-3.0
In the above figures, I examine exactly how the price factors in to the number of reservations. These histograms show the number of reservations on the y-axis as a function of either the actual price or the difference between the actual price and the recommended price. On the left, it appears that as price goes down, t...
MERGED_DATA[['description']].plot.hist(bins=15, figsize=(8, 6))
plt.show()
Data Science Example Rental Cars.ipynb
nigelmathes/Data_Science
gpl-3.0
The final important parameter is the length of the description. The above plot shows the frequency of reservations as a function of the description character length. I can quickly conclude that more reservations are made for cars with descriptions less than 50 characters. After this point, the length of the description...
plt.rc('figure', figsize=(8, 6))
plt.figure(1)
MERGED_DATA.loc[MERGED_DATA['reservation_type']==1]['technology'].plot.hist(alpha=0.5, title="Hourly", normed=True)
plt.xlabel("With or Without Technology")
plt.figure(2)
MERGED_DATA.loc[MERGED_DATA[...
Data Science Example Rental Cars.ipynb
nigelmathes/Data_Science
gpl-3.0
In the above plots, I show the normalized frequency of reservation for the three types of reservations. On the x-axis, a 0 represents the absence of the technology package, and a 1 represents a vehicle having the technology. It is visually obvious that a proportionately larger number of hourly reservations are made wit...
import scipy.stats

KSstatistic, pvalue = scipy.stats.ks_2samp(MERGED_DATA.loc[MERGED_DATA['reservation_type']==3]['technology'],
                                           MERGED_DATA.loc[MERGED_DATA['reservation_type']==2]['technology'])
print "KS probability that Weekly and Daily reservations are drawn from the sam...
Data Science Example Rental Cars.ipynb
nigelmathes/Data_Science
gpl-3.0
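For intuition about what `ks_2samp` computes: the two-sample KS statistic is simply the largest vertical gap between the two empirical CDFs. A minimal numpy-only sketch of the statistic (scipy also returns a p-value, which this omits):

```python
import numpy as np

def ks_2samp_stat(a, b):
    # evaluate both empirical CDFs on the pooled sample and take the max gap
    data = np.sort(np.concatenate([a, b]))
    cdf_a = np.searchsorted(np.sort(a), data, side='right') / len(a)
    cdf_b = np.searchsorted(np.sort(b), data, side='right') / len(b)
    return np.max(np.abs(cdf_a - cdf_b))

print(ks_2samp_stat(np.array([0.0, 0.0]), np.array([1.0, 1.0])))  # 1.0 (disjoint samples)
print(ks_2samp_stat(np.array([0.0, 1.0]), np.array([0.0, 1.0])))  # 0.0 (identical samples)
```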
Let's experiment with neural networks! Learning Objectives: gain familiarity with standard NN libraries (Keras and TensorFlow); one standard NN architecture: fully connected ('dense') networks; two standard tasks performed with NNs: binary classification and multi-class classification; and diagnostics of NNs: 1. History...
# import MNIST data
(x_train_temp, y_train_temp), (x_test_temp, y_test_temp) = mnist.load_data()
Sessions/Session07/Day4/LetsGetNetworking.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Reshape the data arrays: flatten for ingestion into a Dense layer. We're going to use a Dense neural network architecture, which takes 1D data structures rather than 2D images, so we need to 'flatten' each image into a 1D vector.
# Flatten the images
img_rows, img_cols = x_train[0].shape[0], x_train[0].shape[1]
img_size = img_rows * img_cols
x_train = x_train.reshape(x_train.shape[0], img_size)
x_test = x_test.reshape(x_test.shape[0], img_size)
print("New shape", x_train.shape)
Sessions/Session07/Day4/LetsGetNetworking.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
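The flattening step itself is just a numpy reshape; on fake data you can see the shape change without loading MNIST. A standalone sketch:

```python
import numpy as np

# two fake 28x28 "images"
x = np.arange(2 * 28 * 28).reshape(2, 28, 28)

# flatten each image into a 784-long vector; -1 lets numpy infer the size
flat = x.reshape(x.shape[0], -1)
print(flat.shape)  # (2, 784)
```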
Create a Neural Net and train it! Decide model format
model = Sequential()
Sessions/Session07/Day4/LetsGetNetworking.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
For later: check out the keras documentation to see what other kinds of model formats are available. What might they be useful for? Add layers to the model sequentially
model.add(Dense(units=32, activation='sigmoid', input_shape=(img_size,)))  # dense layer of 32 neurons
model.add(Dense(units=num_classes, activation='softmax'))  # dense layer of 'num_classes' neurons, the number of options for the classification
#model.add(BatchNormalization())
model.summary()  # look a...
Sessions/Session07/Day4/LetsGetNetworking.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Review the model summary, noting the major features --- the meaning of each column, the flow downward, the number of parameters at the bottom. What's the math that leads to the number of parameters in each layer? This is a little bit of fun math to play with. In the long run it can help understand how to debug model co...
model.compile(optimizer="sgd", loss='categorical_crossentropy', metrics=['accuracy'])
Sessions/Session07/Day4/LetsGetNetworking.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
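The parameter-count math for a Dense layer works out to (inputs + 1) × units: one weight per input per neuron, plus one bias per neuron. A quick check, assuming 784 inputs, 32 hidden units and 10 output classes as in this MNIST setup:

```python
def dense_params(n_in, n_units):
    # one weight per (input, unit) pair, plus one bias per unit
    return n_in * n_units + n_units

print(dense_params(784, 32))  # 25120 parameters in the hidden layer
print(dense_params(32, 10))   # 330 parameters in the output layer
```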
The optimizer is an element that I sometimes tune if I want more control over the training. Check the Keras docs for the options available for the optimizer. Fit (read: train) the model
# Training parameters
batch_size = 32          # number of images per batch
num_epochs = 5           # number of epochs
validation_split = 0.8   # fraction of the training set that is for validation only

history = model.fit(x_train, y_train,
                    batch_size=batch_size,
                    epochs=num_epochs,
Sessions/Session07/Day4/LetsGetNetworking.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
"Exerciiiiiiisee the neurons! This Network is cleeeah!" (bonus points if you know the 90's movie, from which this is a near-quote.) How well does the training go if we don't normalize the data to be between 0 and 1.0? What is the effect of batch_size on how long it takes the network to achieve 80% accuracy? What happe...
loss_train, acc_train = model.evaluate(x_train, y_train, verbose=False)
loss_test, acc_test = model.evaluate(x_test, y_test, verbose=False)
print(f'Train acc/loss: {acc_train:.3}, {loss_train:.3}')
print(f'Test acc/loss: {acc_test:.3}, {loss_test:.3}')
Sessions/Session07/Day4/LetsGetNetworking.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Define Confusion Matrix
# Function: Convert from categorical back to numerical value
def convert_to_index(array_categorical):
    array_index = [np.argmax(array_temp) for array_temp in array_categorical]
    return array_index

def plot_confusion_matrix(cm, normalize=False, title='Confusion matr...
Sessions/Session07/Day4/LetsGetNetworking.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
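The conversion from one-hot (categorical) labels back to integer indices is a per-row argmax; a standalone illustration:

```python
import numpy as np

def convert_to_index(array_categorical):
    # index of the 'hot' entry in each one-hot row
    return [int(np.argmax(row)) for row in array_categorical]

onehot = np.array([[0, 0, 1],
                   [1, 0, 0]])
print(convert_to_index(onehot))  # [2, 0]
```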
Plot Confusion Matrix
# apply conversion function to data
y_test_ind = convert_to_index(y_test)
y_pred_test_ind = convert_to_index(y_pred_test)

# compute confusion matrix
cm_test = ConfusionMatrix(y_test_ind, y_pred_test_ind)
np.set_printoptions(precision=2)

# plot confusion matrix result
plt.figure()
plot_confusion_matrix(cm_test, title='...
Sessions/Session07/Day4/LetsGetNetworking.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Problems Problem 1: Binary data Problem 3a: Generate a binary data set from the MNIST data set. Problem 3b: Prepare the data set in the same way as the multi-class scenario, but appropriately for only two classes. Problem 3c: Re-train the neural network for the binary classification problem. Problem 3d: Perform all ...
!gsutil cp -r gs://lsst-dsfp2018/stronglenses ./ !ls .
Sessions/Session07/Day4/LetsGetNetworking.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Breaking Out Of A For Loop
for army in armies:
    print(army)
    if army == 'Blue Army':
        print('Blue Army Found! Stopping.')
        break
python/exiting_a_loop.ipynb
tpin3694/tpin3694.github.io
mit
Notice that the loop stopped after the conditional if statement was satisfied. Exiting When a Loop Completes A loop will exit when it completes, but with an else clause we can add an action at the conclusion of the loop if it wasn't exited earlier.
for army in armies:
    print(army)
    if army == 'Orange Army':
        break
else:
    print('Looped Through The Whole List, No Orange Army Found')
python/exiting_a_loop.ipynb
tpin3694/tpin3694.github.io
mit
<h2><font color="red">Challenge!</font></h2> Copy this sample code and use it to calculate the mass of the muons. Make a histogram of this quantity. <i>Hint!</i> Make sure you do this for all the muons! Each collision can produce differing numbers of muons, so take care when you code this up. Your histogram should lo...
from IPython.display import Image
Image(filename='images/muons_sketch.jpeg')

# Your code here
activities/activity00_cms_muons.ipynb
particle-physics-playground/playground
mit
Suppose we didn't know anything about special relativity and we tried calculating the mass from what we know about classical physics. $$KE = \frac{1}{2}mv^2 \qquad KE = \frac{p^2}{2m} \qquad m = \frac{p^2}{2KE}$$ Let's interpret the energy from the CMS data as the kinetic energy ($KE$). Use classical mechanics then to...
# Your code here
activities/activity00_cms_muons.ipynb
particle-physics-playground/playground
mit
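The classical estimate itself is one line of arithmetic; with purely illustrative, made-up values of $p$ and $KE$ in consistent units (not taken from the CMS data files):

```python
# hypothetical numbers, for illustration only: p in GeV/c, KE in GeV
p = 10.0
ke = 10.0

# classical m = p^2 / (2 KE)
m = p**2 / (2 * ke)
print(m)  # 5.0 (in GeV/c^2)
```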
Android Management API - Quickstart If you have not yet read the Android Management API Codelab we recommend that you do so before using this notebook. If you opened this notebook from the Codelab then follow the next instructions on the Codelab. In order to run this notebook, you need: An Android 6.0+ device. Setup ...
from apiclient.discovery import build
from google_auth_oauthlib.flow import InstalledAppFlow
from random import randint

# This is a public OAuth config, you can use it to run this guide but please use
# different credentials when building your own solution.
CLIENT_CONFIG = {
    'installed': {
        'client_id': '88...
notebooks/codelab_kiosk.ipynb
google/android-management-api-samples
apache-2.0
Select an enterprise An Enterprise resource binds an organization to your Android Management solution. Devices and Policies both belong to an enterprise. Typically, a single enterprise resource is associated with a single organization. However, you can create multiple enterprises for the same organization based on thei...
enterprise_name = 'enterprises/LC02de1hmx'
notebooks/codelab_kiosk.ipynb
google/android-management-api-samples
apache-2.0
Create a policy A Policy is a group of settings that determine the behavior of a managed device and the apps installed on it. Each Policy resource represents a unique group of device and app settings and can be applied to one or more devices. Once a device is linked to a policy, any updates to the policy are automatica...
import json

# Create a random policy name to avoid collision with other Codelabs
if 'policy_name' not in locals():
    policy_name = enterprise_name + '/policies/' + str(randint(1, 1000000000))

policy_json = '''
{
  "applications": [
    {
      "packageName": "com.google.samples.apps.iosched",
      "installType": "FOR...
notebooks/codelab_kiosk.ipynb
google/android-management-api-samples
apache-2.0
Simple k-means aggregation Initialize an aggregation class object with k-means as the method for eight typical days, without any integration of extreme periods. Alternative clusterMethod values are 'averaging', 'hierarchical' and 'k_medoids'.
aggregation = tsam.TimeSeriesAggregation(raw, noTypicalPeriods = 8, hoursPerPeriod = 24, clusterMethod = 'k_means')
examples/aggregation_method_showcase.ipynb
FZJ-IEK3-VSA/tsam
mit
Simple k-medoids aggregation of weeks Initialize a time series aggregation which integrates the day with the minimal temperature and the day with the maximal load as periods.
aggregation = tsam.TimeSeriesAggregation(raw, noTypicalPeriods = 8, hoursPerPeriod = 24*7, clusterMethod = 'k_medoids', )
examples/aggregation_method_showcase.ipynb
FZJ-IEK3-VSA/tsam
mit
Save the typical periods to a .csv file, with weeks ordered by GHI, for consistency in later testing.
typPeriods.reindex(typPeriods['GHI'].unstack().sum(axis=1).sort_values().index, level=0).to_csv(os.path.join('results','testperiods_kmedoids.csv'))
examples/aggregation_method_showcase.ipynb
FZJ-IEK3-VSA/tsam
mit
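The reindex trick above orders the level-0 period index by each period's total GHI; a toy pandas stand-in (hypothetical two-period data, not the tsam output) shows the mechanics:

```python
import pandas as pd

# toy stand-in for typPeriods: 2 typical periods x 3 hours, one 'GHI' column
idx = pd.MultiIndex.from_product([[0, 1], [0, 1, 2]], names=['period', 'hour'])
typ = pd.DataFrame({'GHI': [5, 5, 5, 1, 1, 1]}, index=idx)

# order periods by their summed GHI (ascending), as in the call above
order = typ['GHI'].unstack().sum(axis=1).sort_values().index
print(list(order))  # [1, 0]: period 1 has the smaller GHI total
```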
AWS (S3, Redshift, Kinesis) + Databricks Spark = Real-time Smart Meter Analytics Create S3 Bucket
s3 = boto3.client('s3')
s3.list_buckets()

def create_s3_bucket(bucketname):
    """Quick method to create bucket with exception handling"""
    s3 = boto3.resource('s3')
    exists = True
    bucket = s3.Bucket(bucketname)
    try:
        s3.meta.client.head_bucket(Bucket=bucketname)
    except botocore.exceptions.C...
SmartMeterResearch_Phase2.ipynb
dougkelly/SmartMeterResearch
apache-2.0
Copy Postgres to S3 via Postgres dump to CSV and s3cmd upload
# Note: Used s3cmd tools because awscli tools not working in conda env
# 14m rows or ~1.2 GB local unzipped; 10min write to CSV and another 10min to upload to S3
# !s3cmd put ~/Users/Doug/PecanStreet/electricity-03-06-2016.csv s3://pecanstreetresearch-2016/electricity-03-06-2016.csv
# 200k rows ~15 MB local unzippe...
SmartMeterResearch_Phase2.ipynb
dougkelly/SmartMeterResearch
apache-2.0
Amazon Redshift: Columnar Data Warehouse Quick data cleanup before ETL
# Quick geohashing before uploading to Redshift
weather_df = pd.read_csv('/Users/Doug/PecanStreet/weather_03-06-2016.csv')
weather_df.groupby(['latitude', 'longitude', 'city']).count()

# Label city by its coordinates (reconstructed: the original assignment was invalid syntax)
weather_df.loc[weather_df.latitude == 30.292432, 'city'] = 'Austin'
weather_df.city.uniq...
SmartMeterResearch_Phase2.ipynb
dougkelly/SmartMeterResearch
apache-2.0
create table electricity (
  dataid integer not null,
  localhour timestamp not null distkey sortkey,
  use decimal(30,26),
  air1 decimal(30,26),
  furnace1 decimal(30,26),
  car1 decimal(30,26)
);

create table weather (
  localhour timestamp not null distkey sortkey,
  latitude decimal(30,26),
  longitude decimal(30,26),
  temperature d...
# Complete
COPY electricity FROM 's3://pecanstreetresearch-2016/electricity/electricity-03-06-2016.csv'
CREDENTIALS 'aws_access_key_id=AWS_ACCESS_KEY_ID;aws_secret_access_key=AWS_SECRET_ACCESS_KEY'
CSV
IGNOREHEADER 1
dateformat 'auto';

# Complete
COPY weather FROM 's3://pecanstreetresearch-2016/weather/weather-03-06...
SmartMeterResearch_Phase2.ipynb
dougkelly/SmartMeterResearch
apache-2.0
Databricks Spark Analysis (see Databricks): Batch analytics on S3, Streaming using Amazon Kinesis Stream Create Amazon Kinesis Stream for writing streaming data to S3
kinesis = boto3.client('kinesis')
kinesis.create_stream(StreamName='PecanStreet', ShardCount=2)
kinesis.list_streams()

firehose = boto3.client('firehose')
# firehose.create_delivery_stream(DeliveryStreamName='pecanstreetfirehose',
#     S3DestinationConfiguration={'RoleARN': '', 'BucketARN': 'pecanstreetresearch-2016'})
...
SmartMeterResearch_Phase2.ipynb
dougkelly/SmartMeterResearch
apache-2.0
Are you running this notebook on an Amazon EC2 t2.micro instance? (If you are using your own machine, please skip this section) It has been reported that t2.micro instances do not provide sufficient power to complete the conversion in an acceptable amount of time. In the interest of time, please refrain from running get_nump...
arrays = np.load('module-4-assignment-numpy-arrays.npz')
feature_matrix_train, sentiment_train = arrays['feature_matrix_train'], arrays['sentiment_train']
feature_matrix_valid, sentiment_valid = arrays['feature_matrix_valid'], arrays['sentiment_valid']
course-3-classification/module-4-linear-classifier-regularization-assignment-blank.ipynb
kgrodzicki/machine-learning-specialization
mit
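`.npz` archives behave like dictionaries of arrays; a self-contained round trip (in memory via `BytesIO`, so no file is needed) shows the access pattern used above:

```python
import io
import numpy as np

# write a tiny archive to an in-memory buffer
buf = io.BytesIO()
np.savez(buf, a=np.arange(3), b=np.ones(2))
buf.seek(0)

# load it back and index by array name, exactly as with a file on disk
arrays = np.load(buf)
print(sorted(arrays.files))  # ['a', 'b']
print(arrays['a'].tolist())  # [0, 1, 2]
```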
Building on logistic regression with no L2 penalty assignment Let us now build on Module 3 assignment. Recall from lecture that the link function for logistic regression can be defined as: $$ P(y_i = +1 | \mathbf{x}_i,\mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))}, $$ where the feature vector $h(\mathb...
def predict_probability(feature_matrix, coefficients):
    '''
    Produces probabilistic estimate for P(y_i = +1 | x_i, w).
    Estimate ranges between 0 and 1.
    '''
    # Take dot product of feature_matrix and coefficients
    ## YOUR CODE HERE
    score = np.dot(feature_matrix, coefficients)

    # Compute P(y_i = +1 | x_i, ...
course-3-classification/module-4-linear-classifier-regularization-assignment-blank.ipynb
kgrodzicki/machine-learning-specialization
mit
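Completed, the link function is a sigmoid of the score $\mathbf{w}^T h(\mathbf{x}_i)$; with all-zero coefficients every probability is exactly 0.5. A standalone sketch with toy data:

```python
import numpy as np

def predict_probability(feature_matrix, coefficients):
    # score w^T h(x_i), then squash through the sigmoid link
    score = np.dot(feature_matrix, coefficients)
    return 1.0 / (1.0 + np.exp(-score))

X = np.array([[1.0, 0.0],
              [1.0, 2.0]])
w = np.zeros(2)
print(predict_probability(X, w))  # [0.5 0.5]
```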
Adding L2 penalty Let us now work on extending logistic regression with L2 regularization. As discussed in the lectures, the L2 regularization is particularly useful in preventing overfitting. In this assignment, we will explore L2 regularization in detail. Recall from lecture and the previous assignment that for logi...
def feature_derivative_with_L2(errors, feature, coefficient, l2_penalty, feature_is_constant):
    # Compute the dot product of errors and feature
    ## YOUR CODE HERE
    derivative = np.dot(errors, feature)

    # add L2 penalty term for any feature that isn't the intercept.
    if not feature_is_constant:
        ...
course-3-classification/module-4-linear-classifier-regularization-assignment-blank.ipynb
kgrodzicki/machine-learning-specialization
mit
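Completed, the derivative subtracts $2\lambda w_j$ for every non-intercept weight (sign convention per this assignment's gradient-ascent setup); a standalone sketch with toy inputs:

```python
import numpy as np

def feature_derivative_with_L2(errors, feature, coefficient, l2_penalty, feature_is_constant):
    derivative = np.dot(errors, feature)
    # the L2 penalty shrinks every weight except the intercept
    if not feature_is_constant:
        derivative -= 2.0 * l2_penalty * coefficient
    return derivative

print(feature_derivative_with_L2(np.array([1.0, 1.0]), np.array([1.0, 2.0]),
                                 0.5, 1.0, False))  # 3.0 - 2*1*0.5 = 2.0
```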
Quiz question: Does the term with L2 regularization increase or decrease $\ell\ell(\mathbf{w})$? The logistic regression function looks almost like the one in the last assignment, with a minor modification to account for the L2 penalty. Fill in the code below to complete this modification.
def logistic_regression_with_L2(feature_matrix, sentiment, initial_coefficients, step_size, l2_penalty, max_iter):
    coefficients = np.array(initial_coefficients)  # make sure it's a numpy array
    for itr in xrange(max_iter):
        # Predict P(y_i = +1|x_i,w) using your predict_probability() function
        ## YO...
course-3-classification/module-4-linear-classifier-regularization-assignment-blank.ipynb
kgrodzicki/machine-learning-specialization
mit
Using the coefficients trained with L2 penalty 0, find the 5 most positive words (with largest positive coefficients). Save them to positive_words. Similarly, find the 5 most negative words (with largest negative coefficients) and save them to negative_words. Quiz Question. Which of the following is not listed in eithe...
coefficients = list(coefficients_0_penalty[1:])  # exclude intercept
word_coefficient_tuples = [(word, coefficient) for word, coefficient in zip(important_words, coefficients)]
word_coefficient_tuples = sorted(word_coefficient_tuples, key=lambda x: x[1], reverse=True)

positive_words = word_coefficient_tuples[:5]
negative...
course-3-classification/module-4-linear-classifier-regularization-assignment-blank.ipynb
kgrodzicki/machine-learning-specialization
mit
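The word ranking is an ordinary sort of (word, coefficient) pairs by coefficient; a tiny standalone illustration with made-up words and coefficients:

```python
# hypothetical coefficients, for illustration only
word_coefficient_tuples = [('good', 1.2), ('bad', -1.5), ('ok', 0.1)]

# descending by coefficient for the most positive words
ranked = sorted(word_coefficient_tuples, key=lambda x: x[1], reverse=True)
positive_words = [w for w, c in ranked[:2]]

# ascending for the most negative words
negative_words = [w for w, c in sorted(word_coefficient_tuples, key=lambda x: x[1])[:2]]
print(positive_words)  # ['good', 'ok']
print(negative_words)  # ['bad', 'ok']
```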
Let us observe the effect of increasing L2 penalty on the 10 words just selected. We provide you with a utility function to plot the coefficient path.
positive_words = table.sort('coefficients [L2=0]', ascending=False)['word'][0:5]
positive_words
negative_words = table.sort('coefficients [L2=0]')['word'][0:5]
negative_words

import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = 10, 6

def make_coefficient_plot(table, positive_words, n...
course-3-classification/module-4-linear-classifier-regularization-assignment-blank.ipynb
kgrodzicki/machine-learning-specialization
mit
Stock Market Similarity Searches: Daily Returns We have provided a year of daily returns for 379 S&P 500 stocks. We have explicitly excluded stocks with incomplete or missing data. We have pre-loaded 350 stocks in the database, and have excluded 29 stocks for later use in similarity searches. Data source: <a href='www....
# load data
with open('data/returns_include.json') as f:
    stock_data_include = json.load(f)
with open('data/returns_exclude.json') as f:
    stock_data_exclude = json.load(f)

# keep track of which stocks are included/excluded from the database
stocks_include = list(stock_data_include.keys())
stocks_exclude ...
docs/stock_example_returns.ipynb
Mynti207/cs207project
mit
Database Initialization Let's start by initializing all the database components.
# 1. load the database server
# when running from the terminal:
# python go_server_persistent.py --ts_length 244 --db_name 'stock_prices'
# here we load the server as a subprocess for demonstration purposes
server = subprocess.Popen(['python', '../go_server_persistent.py', '--ts_length', str...
docs/stock_example_returns.ipynb
Mynti207/cs207project
mit
Let's pick one of our excluded stocks and carry out a vantage point similarity search.
# pick the stock
stock = np.random.choice(stocks_exclude)
print('Stock:', stock)

# run the vantage point similarity search
result = web_interface.vp_similarity_search(TimeSeries(range(num_days), stock_data_exclude[stock]), 1)
stock_match = list(result)[0]
stock_ts = web_interface.select(fields=['ts'], md={'pk': stock_...
docs/stock_example_returns.ipynb
Mynti207/cs207project
mit
iSAX Tree Search Let's pick another one of our excluded stocks and carry out an iSAX tree similarity search. Note that this is an approximate search technique, so it will not always be able to find a similar stock.
# pick the stock
stock = np.random.choice(stocks_exclude)
print('Stock:', stock)

# run the isax tree similarity search
result = web_interface.isax_similarity_search(TimeSeries(range(num_days), stock_data_exclude[stock]))

# could not find a match
if result == 'ERROR: NO_MATCH':
    print('Could not find a similar stoc...
docs/stock_example_returns.ipynb
Mynti207/cs207project
mit
Comparing Similarity Searches Now, let's pick one more random stock, carry out both types of similarity searches, and compare the results.
# pick the stock
stock = np.random.choice(stocks_exclude)
print('Stock:', stock)

# run the vantage point similarity search
result = web_interface.vp_similarity_search(TimeSeries(range(num_days), stock_data_exclude[stock]), 1)
match_vp = list(result)[0]
ts_vp = web_interface.select(fields=['ts'], md={'pk': match_vp})[m...
docs/stock_example_returns.ipynb
Mynti207/cs207project
mit
Now, we make the predictions based on population. We first import multiple models from the scikit-learn library.
from sklearn import ensemble, tree, neighbors

rfr = ensemble.RandomForestRegressor()
dtr = tree.DecisionTreeRegressor()
knr = neighbors.KNeighborsRegressor()
#nnn = neural_network.BernoulliRBM()
development_folder/ipc_microsim_sklearn.ipynb
UN-DESA-Modelling/Electricity_Consumption_Surveys
gpl-3.0
Evaluate the accuracy for each model based on cross-validation These predictions are for the Medium Variance Projections. The arrays below show the accuracy of each model using cross-validation.
from sklearn.cross_validation import cross_val_score

print 'Random Forests', cross_val_score(rfr, estimates, medVariance_projections)
print 'Decision Trees', cross_val_score(dtr, estimates, medVariance_projections)
print 'Nearest Neighbors', cross_val_score(knr, estimates, medVariance_projections)
from sklearn.cross_validation...
development_folder/ipc_microsim_sklearn.ipynb
UN-DESA-Modelling/Electricity_Consumption_Surveys
gpl-3.0
Using groupby(), plot the number of films that have been released each decade in the history of cinema.
titles['decade'] = titles.year // 10 * 10
titles.groupby('decade').size().plot(kind='bar')
Tutorial/Exercises-3.ipynb
RobbieNesmith/PandasTutorial
mit
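The decade bucketing relies on integer floor-division: `year // 10 * 10` rounds any year down to the start of its decade. A standalone illustration:

```python
# floor-divide by 10, then multiply back: 1994 -> 199 -> 1990
years = [1994, 2000, 2003, 1999]
decades = [y // 10 * 10 for y in years]
print(decades)  # [1990, 2000, 2000, 1990]
```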
Use groupby() to plot the number of "Hamlet" films made each decade.
titles[titles.title=='Hamlet'].groupby('decade').size().sort_index().plot(kind='bar')
Tutorial/Exercises-3.ipynb
RobbieNesmith/PandasTutorial
mit
How many leading (n=1) roles were available to actors, and how many to actresses, in each year of the 1950s?
cast[(cast.year//10==195)&(cast.n==1)].groupby(['year','type']).type.value_counts()
Tutorial/Exercises-3.ipynb
RobbieNesmith/PandasTutorial
mit
In the 1950s decade taken as a whole, how many total roles were available to actors, and how many to actresses, for each "n" number 1 through 5?
cast[(cast.year//10==195)&(cast.n<=5)].groupby(['n','type']).type.value_counts()
Tutorial/Exercises-3.ipynb
RobbieNesmith/PandasTutorial
mit
Use groupby() to determine how many roles are listed for each of the Pink Panther movies.
cast[cast.title.str.contains('Pink Panther')].groupby('year').size()
Tutorial/Exercises-3.ipynb
RobbieNesmith/PandasTutorial
mit
List, in order by year, each of the films in which Frank Oz has played more than 1 role.
c = cast[cast.name == 'Frank Oz']
g = c.groupby(['year', 'title']).size()
g[g > 1]
Tutorial/Exercises-3.ipynb
RobbieNesmith/PandasTutorial
mit
List each of the characters that Frank Oz has portrayed at least twice.
c = cast[cast.name == 'Frank Oz']
g = c.groupby('character').size()
g[g >= 2]
Tutorial/Exercises-3.ipynb
RobbieNesmith/PandasTutorial
mit
Link to What's in Scipy.constants: https://docs.scipy.org/doc/scipy/reference/constants.html Library Functions in Maths (and numpy)
x = 4**0.5
print(x)
x = np.sqrt(4)
print(x)
IntroductiontoPython/UserDefinedFunction.ipynb
karenlmasters/ComputationalPhysicsUnit
apache-2.0
User Defined Functions Here we'll practice writing our own functions. Functions start with `def name(input):` and must end with a statement to return the value calculated, e.g. `return x`. To run a function your code would look like this:

```python
import numpy as np

def name(input):
    # FUNCTION CODE HERE
    return x
```
...
def factorial(n):
    f = 1.0
    for k in range(1, n+1):
        f *= k
    return f

print("This programme calculates n!")
n = int(input("Enter n:"))
a = factorial(n)
print("n! = ", a)
IntroductiontoPython/UserDefinedFunction.ipynb
karenlmasters/ComputationalPhysicsUnit
apache-2.0
Finding distance to the origin in cylindrical co-ordinates:
from math import sqrt, cos, sin

def distance(r, theta, z):
    x = r*cos(theta)
    y = r*sin(theta)
    d = sqrt(x**2 + y**2 + z**2)
    return d

D = distance(2.0, 0.1, 1.5)
print(D)
IntroductiontoPython/UserDefinedFunction.ipynb
karenlmasters/ComputationalPhysicsUnit
apache-2.0
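A useful sanity check on the function above: since $x^2 + y^2 = r^2\cos^2\theta + r^2\sin^2\theta = r^2$, the angle drops out and the distance is simply $\sqrt{r^2 + z^2}$. A small sketch comparing the two routes (restating the same `distance` as above so the cell is self-contained):

```python
from math import sqrt, cos, sin, isclose

def distance(r, theta, z):
    # Convert to Cartesian, then use the 3D distance formula
    x = r*cos(theta)
    y = r*sin(theta)
    return sqrt(x**2 + y**2 + z**2)

def distance_direct(r, theta, z):
    # x**2 + y**2 == r**2, so theta cancels out of the formula
    return sqrt(r**2 + z**2)

print(distance(2.0, 0.1, 1.5))
print(distance_direct(2.0, 0.1, 1.5))
```

Both routes give the same answer (up to floating-point rounding), which is a quick check that the conversion was coded correctly.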
Another Example: Prime Factors and Prime Numbers Reminder: the prime factors of an integer n are the prime numbers that divide it exactly. They can be found by dividing n by every integer from 2 up to n and checking which remainders are zero. The remainder is calculated in Python using `n % k`
def factors(n):
    factorlist = []
    k = 2
    while k <= n:
        while n % k == 0:
            factorlist.append(k)
            n //= k
        k += 1
    return factorlist

result = factors(12)
print(result)
print(factors(17556))
print(factors(23))
IntroductiontoPython/UserDefinedFunction.ipynb
karenlmasters/ComputationalPhysicsUnit
apache-2.0
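A quick property check on a trial-division factoriser like the one above (restated here so the sketch is self-contained): multiplying the returned prime factors back together must reconstruct the original number.

```python
def factors(n):
    # Trial division: divide out each k from 2 upward while it divides n
    factorlist = []
    k = 2
    while k <= n:
        while n % k == 0:
            factorlist.append(k)
            n //= k
        k += 1
    return factorlist

# The product of the prime factors always reconstructs the original number
for n in (12, 17556, 23):
    product = 1
    for p in factors(n):
        product *= p
    print(n, factors(n), product == n)
```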
Functions like this are useful for things like the example below, where you want to make the same calculation many times. This finds all the prime numbers (divisible only by 1 and themselves) from 2 to 100.
for n in range(2, 100):
    if len(factors(n)) == 1:
        print(n)
IntroductiontoPython/UserDefinedFunction.ipynb
karenlmasters/ComputationalPhysicsUnit
apache-2.0
Generalization of Taylor FD operators In the last lesson, we learned how to derive a high order FD approximation for the second derivative using Taylor series expansion. In the next step we derive a general equation to compute FD operators, where I use a detailed derivation based on "Derivative Approximation by Finite ...
# import SymPy libraries
from sympy import symbols, differentiate_finite, Function

# Define symbols
x, h = symbols('x h')
f = Function('f')
04_FD_stability_dispersion/lecture_notebooks/4_general_fd_taylor_operators.ipynb
daniel-koehn/Theory-of-seismic-waves-II
gpl-3.0
☝️ While this code is running, preview the code ahead.
# Read the data into a list of strings. def read_data(filename): """Extract the first file enclosed in a zip file as a list of words""" with zipfile.ZipFile(filename) as f: data = tf.compat.as_str(f.read(f.namelist()[0])).split() return data words = read_data(filename) print('Dataset size: {:,} wor...
source.ml/jupyterhub.ml/notebooks/zz_old/TensorFlow/Word2Vec/3_word2vec_activity.ipynb
shareactorIO/pipeline
apache-2.0
Notice: none of the words are capitalized and there is no punctuation. Step 2: Build the dictionary
vocabulary_size = 50000 def build_dataset(words): """ Replace rare words with UNK token which stands for "unknown". It is called a dustbin category, aka sweep the small count items into a single group. """ count = [['UNK', -1]] count.extend(collections.Counter(words).most_common(vocabulary_size - 1...
source.ml/jupyterhub.ml/notebooks/zz_old/TensorFlow/Word2Vec/3_word2vec_activity.ipynb
shareactorIO/pipeline
apache-2.0
Step 3: Function to generate a training batch for the skip-gram model.
data_index = 0 def generate_batch(batch_size, num_skips, skip_window): global data_index assert batch_size % num_skips == 0 assert num_skips <= 2 * skip_window batch = np.ndarray(shape=(batch_size), dtype=np.int32) labels = np.ndarray(shape=(batch_size, 1), dtype=np.int32) span = 2 * skip_windo...
source.ml/jupyterhub.ml/notebooks/zz_old/TensorFlow/Word2Vec/3_word2vec_activity.ipynb
shareactorIO/pipeline
apache-2.0
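Behind generate_batch is the skip-gram idea: each centre word predicts the words inside a window around it. A minimal sketch of that pairing (using a toy token list, not the real data pipeline):

```python
def skipgram_pairs(tokens, skip_window):
    # For each centre word, pair it with every other word within skip_window
    pairs = []
    for i, center in enumerate(tokens):
        lo = max(0, i - skip_window)
        hi = min(len(tokens), i + skip_window + 1)
        for j in range(lo, hi):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

pairs = skipgram_pairs(['the', 'quick', 'brown', 'fox'], skip_window=1)
print(pairs)
```

With a window of 1, each interior word yields two (centre, context) pairs and the edge words yield one each.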
Step 5: Begin training.
num_steps = 1 # 100001 with tf.Session(graph=graph) as session: # We must initialize all variables before we use them. init.run() print("Initialized") average_loss = 0 for step in range(num_steps): batch_inputs, batch_labels = generate_batch( batch_size, num_skips, skip_window)...
source.ml/jupyterhub.ml/notebooks/zz_old/TensorFlow/Word2Vec/3_word2vec_activity.ipynb
shareactorIO/pipeline
apache-2.0
What do you see? How would you describe the relationships? Let's render and save more samples.
def plot_with_labels(low_dim_embs, labels, filename='tsne.png'): assert low_dim_embs.shape[0] >= len(labels), "More labels than embeddings" plt.figure(figsize=(18, 18)) #in inches for i, label in enumerate(labels): x, y = low_dim_embs[i,:] plt.scatter(x, y) plt.annotate(label, ...
source.ml/jupyterhub.ml/notebooks/zz_old/TensorFlow/Word2Vec/3_word2vec_activity.ipynb
shareactorIO/pipeline
apache-2.0
Take a look at the saved embeddings, are they good? Are there any surprises? What could we do to improve the embeddings? If you are feeling adventurous, check out the complete TensorFlow version of word2vec Summary You need a lot of data to get good word2vec embeddings The training takes some time TensorFlow simplifi...
# %run word2vec_basic.py
source.ml/jupyterhub.ml/notebooks/zz_old/TensorFlow/Word2Vec/3_word2vec_activity.ipynb
shareactorIO/pipeline
apache-2.0
Preset based - using CoachInterface The basic method to run Coach directly from Python is through a CoachInterface object, which uses the same arguments as the command-line invocation but allows for more flexibility and additional control of the training/inference process. Let's start with some examples. Training a p...
coach = CoachInterface(preset='CartPole_ClippedPPO', # The optional custom_parameter enables overriding preset settings custom_parameter='heatup_steps=EnvironmentSteps(5);improve_steps=TrainingSteps(3)', # Other optional parameters enable easy access ...
tutorials/0. Quick Start Guide.ipynb
NervanaSystems/coach
apache-2.0
Running each training or inference iteration manually The graph manager (which was instantiated in the preset) can be accessed from the CoachInterface object. The graph manager simplifies the scheduling process by encapsulating the calls to each of the training phases. Sometimes, it can be beneficial to have a more fin...
from rl_coach.environments.gym_environment import GymEnvironment, GymVectorEnvironment from rl_coach.base_parameters import VisualizationParameters from rl_coach.core_types import EnvironmentSteps tf.reset_default_graph() coach = CoachInterface(preset='CartPole_ClippedPPO') # registering an iteration signal before st...
tutorials/0. Quick Start Guide.ipynb
NervanaSystems/coach
apache-2.0
Sometimes we may want to track the agent's decisions, log or maybe even modify them. We can access the agent itself through the CoachInterface as follows. Note that we also need an instance of the environment to do so. In this case we instantiate a GymEnvironment object with the CartPole GymVectorEnvironment:
# inference env_params = GymVectorEnvironment(level='CartPole-v0') env = GymEnvironment(**env_params.__dict__, visualization_parameters=VisualizationParameters()) response = env.reset_internal_state() for _ in range(10): action_info = coach.graph_manager.get_agent().choose_action(response.next_state) print("St...
tutorials/0. Quick Start Guide.ipynb
NervanaSystems/coach
apache-2.0
Non-preset - using GraphManager directly It is also possible to invoke Coach directly in the Python code without defining a preset (which is necessary for CoachInterface) by using the GraphManager object directly. Using Coach this way won't give you access to functionalities such as multi-threading, but it might be conve...
from rl_coach.agents.clipped_ppo_agent import ClippedPPOAgentParameters from rl_coach.environments.gym_environment import GymVectorEnvironment from rl_coach.graph_managers.basic_rl_graph_manager import BasicRLGraphManager from rl_coach.graph_managers.graph_manager import SimpleSchedule from rl_coach.architectures.embed...
tutorials/0. Quick Start Guide.ipynb
NervanaSystems/coach
apache-2.0
Advanced functionality - proprietary exploration policy, checkpoint evaluation Agent modules, such as exploration policy, memory and neural network topology can be replaced with proprietary ones. In this example we'll show how to replace the default exploration policy of the DQN agent with a different one that is defin...
from rl_coach.agents.dqn_agent import DQNAgentParameters from rl_coach.base_parameters import VisualizationParameters, TaskParameters from rl_coach.core_types import TrainingSteps, EnvironmentEpisodes, EnvironmentSteps from rl_coach.environments.gym_environment import GymVectorEnvironment from rl_coach.graph_managers.b...
tutorials/0. Quick Start Guide.ipynb
NervanaSystems/coach
apache-2.0
Next, we'll override the exploration policy with our own policy defined in Resources/exploration.py. We'll also define the checkpoint save directory and interval in seconds. Make sure the first cell at the top of this notebook is run before the following one, such that module_path and resources_path are added to sys p...
from exploration import MyExplorationParameters # Overriding the default DQN Agent exploration policy with my exploration policy agent_params.exploration = MyExplorationParameters() # Creating a graph manager to train a DQN agent to solve CartPole graph_manager = BasicRLGraphManager(agent_params=agent_params, env_par...
tutorials/0. Quick Start Guide.ipynb
NervanaSystems/coach
apache-2.0
Last, we'll load the latest checkpoint from the checkpoint directory, and evaluate it.
import tensorflow as tf import shutil # Clearing the previous graph before creating the new one to avoid name conflicts tf.reset_default_graph() # Updating the graph manager's task parameters to restore the latest stored checkpoint from the checkpoints directory task_parameters2 = TaskParameters() task_parameters2.ch...
tutorials/0. Quick Start Guide.ipynb
NervanaSystems/coach
apache-2.0
If you read the article, you will see that Matt used the first 35,000 lyrics of each artist. For the sake of simplicity, I am going to use the artist Jay Z as the subject of our analysis. So let's go and collec...
35000/(16 * 3)
HipHopDataExploration/HipHopDataExploration.ipynb
omoju/hiphopathy
gpl-3.0
I proceeded under the false assumption that each lyric was a sentence. Now I can see that each lyric must mean each word instead. This brings me to an essential quality of a good problem solver and Computer Scientist: the ability to embrace failure. More on that later. So let's go back and re-analyze the numbers. I need...
import nltk
HipHopDataExploration/HipHopDataExploration.ipynb
omoju/hiphopathy
gpl-3.0
The nltk library has a lot of functions associated with it. The one we are going to use in particular is called a Corpus reader. If you look up the definition of a corpus, you will see that it is just a collection of written text. The JayZ folder contains a corpora of Jay Z lyrics. The great thing about nltk is that it...
nltk.corpus.gutenberg.fileids()

from nltk.corpus import PlaintextCorpusReader
corpus_root = 'JayZ'
wordlist = PlaintextCorpusReader(corpus_root, '.*')
HipHopDataExploration/HipHopDataExploration.ipynb
omoju/hiphopathy
gpl-3.0
Ah, there is Shakespeare's Macbeth, Hamlet, as well as Julius Caesar. There is also the King James version of the Bible as well as Jane Austen's Emma. Let's get on to the business of making our JayZ corpus. Based on my perusal of the nltk book, I know that there is a plaintext corpus reading function named PlaintextCorp...
import re """ This function takes in an object of the type PlaintextCorpusReader, and system path. It returns an nltk corpus It requires the regular expression package re to work """ #In here is where I should get rid of the rap stopwords. def create_corpus(wordlist, some_corpus): #process the files so I know what...
HipHopDataExploration/HipHopDataExploration.ipynb
omoju/hiphopathy
gpl-3.0
Once the function is defined, call it by typing the following: `the_corpus = create_corpus(wordlist, [])`. Now the_corpus contains all the lyrics. I wrote the create_corpus function in a way that shows what lyrics were read in. You should have a similar output to the one below. The series of words that make up Jay Z's...
the_corpus = create_corpus(wordlist, [])
len(the_corpus)
HipHopDataExploration/HipHopDataExploration.ipynb
omoju/hiphopathy
gpl-3.0
We have finally gotten our Jay Z corpus! The data that we collected has 119,302 words. You can see for yourself by running len (the Python function for finding the length of a list) on the_corpus. This is great, because we are interested in the first 35,000 words of the corpus, in order to recreate the data sci...
the_corpus[:10]
HipHopDataExploration/HipHopDataExploration.ipynb
omoju/hiphopathy
gpl-3.0
Here's a little secret: much of NLP (and data science, for that matter) boils down to counting things. If you've got a bunch of data that needs analyzin' but you don't know where to start, counting things is usually a good place to begin. Sure, you'll need to figure out exactly what you want to count, how to count it, ...
Albums = wordlist.fileids()
Albums[:14]
[fileid for fileid in Albums[:14]]
the_corpus[34990:35000]
HipHopDataExploration/HipHopDataExploration.ipynb
omoju/hiphopathy
gpl-3.0
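As a minimal sketch of "counting things" (toy data, not the actual corpus), collections.Counter does most of the work:

```python
from collections import Counter

# Count case-folded word frequencies in a toy lyric fragment
toy_corpus = ['die', 'And', 'even', 'if', 'Jehovah', 'witness', 'And', 'if']
counts = Counter(w.lower() for w in toy_corpus)
print(counts.most_common(3))
```

Lowercasing before counting means 'And' and 'and' land in the same bucket, a small preview of the normalization discussed later in this notebook.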
We can now go ahead and figure out the number of unique words used in Jay Z's first 35,000 lyrics. An astute observer will notice that we have not done any data cleaning. For example, take a look inside a slice of the corpus, the last 10 words the_corpus[34990:35000], ['die', 'And', 'even', 'if', 'Jehovah', 'witness', ...
def lexical_diversity(my_text_data):
    word_count = len(my_text_data)
    vocab_size = len(set(my_text_data))
    diversity_score = word_count / vocab_size
    return diversity_score
HipHopDataExploration/HipHopDataExploration.ipynb
omoju/hiphopathy
gpl-3.0
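Note the direction of the score as defined above: it is words per distinct word, so a higher value means more repetition, not a richer vocabulary. A quick sketch on toy lists (restating the same function so the cell stands alone):

```python
def lexical_diversity(my_text_data):
    # Average number of times each distinct word is used:
    # 1.0 means every word is unique; larger values mean more repetition
    word_count = len(my_text_data)
    vocab_size = len(set(my_text_data))
    return word_count / vocab_size

print(lexical_diversity(['a', 'b', 'c', 'd']))  # every word unique
print(lexical_diversity(['a', 'a', 'a', 'b']))  # heavy repetition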
If we call our function on the Jay Z sliced corpus, it should give us a score.
emma = nltk.corpus.gutenberg.words('austen-emma.txt')

print "The lexical diversity score for the first 35,000 words in the Jay Z corpus is ", lexical_diversity(the_corpus[:35000])
print "The lexical diversity score for the first 35,000 words in the Emma corpus is ", lexical_diversity(emma[:35000])
HipHopDataExploration/HipHopDataExploration.ipynb
omoju/hiphopathy
gpl-3.0
Remember we had created a wordlist of type PlaintextCorpusReader. This uses the data structure of a nested list. PlaintextCorpusReader type is made up of a list of fileids, which are made up of a list of paragraphs, which are in turn made up of a list of sentences, which are in turn made up of a list of words. words(...
[fileid[5:] for fileid in Albums[:14]]
HipHopDataExploration/HipHopDataExploration.ipynb
omoju/hiphopathy
gpl-3.0
We are ready to start mining the data. So let's do a simple analysis on the occurrence of the concept "basketball" in JayZ's lyrics as represented by a list of 40 terms that are common when we talk about basketball.
basketball_bag_of_words = ['bounce','crossover','technical', 'shooting','double','jump','goal','backdoor','chest','ball', 'team','block','throw','offensive','point','airball','pick', 'assist','shot','layup','break','dribble','roll','cut','forward', 'move','zone','three-pointer','free','post','fast','blocking','back...
HipHopDataExploration/HipHopDataExploration.ipynb
omoju/hiphopathy
gpl-3.0
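The matching rule used in the next cell — a word counts as a hit when its lowercase form starts with a target term — can be sketched in plain Python (hypothetical lyric fragment for illustration):

```python
basketball_terms = ['bounce', 'dribble', 'roll', 'shot']

# Hypothetical lyric fragment, purely for illustration
lyrics = ['We', 'rolling', 'deep', 'every', 'shot', 'counts', 'Rolls', 'on']

# Mirror the matching rule used below: lowercase word startswith a target term
hits = {}
for w in lyrics:
    for target in basketball_terms:
        if w.lower().startswith(target):
            hits[target] = hits.get(target, 0) + 1
print(hits)
```

Note how 'rolling' and 'Rolls' both register as hits for 'roll' — the prefix match does a crude form of stemming on its own, which is why the later stemming discussion matters.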
Let's reduce our investigation of this concept to just the American Gangster album, which is the first 14 songs in the corpus. We do that by using the command Albums[:14]. Remember that Albums is just a list data type, so we can slice it to its first 14 indexes. The following code converts the words in the basketball ...
cfd = nltk.ConditionalFreqDist(
    (target, fileid[5:])
    for fileid in Albums[:14]
    for w in wordlist.words(fileid)
    for target in basketball_bag_of_words
    if w.lower().startswith(target))

# have inline graphs
#get_ipython().magic(u'matplotlib inline')
%pylab inline

cfd....
HipHopDataExploration/HipHopDataExploration.ipynb
omoju/hiphopathy
gpl-3.0
From the plot we see that the basketball term "roll" seems to be used extensively in the song Party Life. Let's take a closer look at this phenomenon, and determine if "roll" was used in the "basketball" sense of the term. To do this, we need to see the context in which it was used. What we really need is a concordance...
AmericanGangster_wordlist = PlaintextCorpusReader(corpus_root, 'JayZ_American Gangster_.*')
AmericanGangster_corpus = create_corpus(AmericanGangster_wordlist, [])
HipHopDataExploration/HipHopDataExploration.ipynb
omoju/hiphopathy
gpl-3.0
Building a concordance, gets us to the area of elementary information retrieval (IR)<a href="#fn1" id="ref1">1</a>, think, <i> basic search engine</i>. So why do we even need to “normalize” terms? We want to match <b>U.S.A.</b> and <b>USA</b>. Also when we enter <b>roll</b>, we would like to match <b>Roll</b>, and <b>r...
porter = nltk.PorterStemmer()
stemmer = nltk.PorterStemmer()
stemmed_tokens = [stemmer.stem(t) for t in AmericanGangster_corpus]

for token in sorted(set(stemmed_tokens))[860:870]:
    print token + ' [' + str(stemmed_tokens.count(token)) + ']'
HipHopDataExploration/HipHopDataExploration.ipynb
omoju/hiphopathy
gpl-3.0
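To see why stemming helps the matching at all, here is a deliberately naive suffix-stripping sketch — a crude stand-in for the Porter algorithm used above, not a replacement for it:

```python
def naive_stem(word, suffixes=('ing', 'ed', 's')):
    # Lowercase, then strip the first matching suffix, keeping at least
    # three characters of stem -- far cruder than Porter, but same idea
    w = word.lower()
    for suf in suffixes:
        if w.endswith(suf) and len(w) > len(suf) + 2:
            return w[:-len(suf)]
    return w

for word in ['Roll', 'rolling', 'rolls', 'rolled']:
    print(word, '->', naive_stem(word))
```

All four surface forms collapse to the single stem 'roll', which is exactly what lets a concordance search for 'roll' surface 'Roll', 'rolling', and 'rolled' as well.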
Now we can go ahead and create a concordance to test if "roll" is used in the basketball (pick and roll) sense or not.
AmericanGangster_lyrics = IndexedText(porter, AmericanGangster_corpus)
AmericanGangster_lyrics.concordance('roll')

print AmericanGangster_wordlist.raw(fileids='JayZ_American Gangster_Party Life.txt')
HipHopDataExploration/HipHopDataExploration.ipynb
omoju/hiphopathy
gpl-3.0