Example: normalizing features
from tensorflow.keras.layers import Normalization

# Example image data, with values in the [0, 255] range
training_data = np.random.randint(0, 256, size=(64, 200, 200, 3)).astype("float32")

normalizer = Normalization(axis=-1)
normalizer.adapt(training_data)

normalized_data = normalizer(training_data)
print("var: %.4f" % np.var(normalized_data))
print("mean: %.4f" % np.mean(normalized_data))
guides/ipynb/intro_to_keras_for_engineers.ipynb
keras-team/keras-io
apache-2.0
Example: rescaling & center-cropping images

Both the Rescaling layer and the CenterCrop layer are stateless, so it isn't necessary to call adapt() in this case.
from tensorflow.keras.layers import CenterCrop
from tensorflow.keras.layers import Rescaling

# Example image data, with values in the [0, 255] range
training_data = np.random.randint(0, 256, size=(64, 200, 200, 3)).astype("float32")

cropper = CenterCrop(height=150, width=150)
scaler = Rescaling(scale=1.0 / 255)

output_data = scaler(cropper(training_data))
print("shape:", output_data.shape)
print("min:", np.min(output_data))
print("max:", np.max(output_data))
guides/ipynb/intro_to_keras_for_engineers.ipynb
keras-team/keras-io
apache-2.0
Building models with the Keras Functional API

A "layer" is a simple input-output transformation (such as the scaling & center-cropping transformations above). For instance, here's a linear projection layer that maps its inputs to a 16-dimensional feature space:

```python
dense = keras.layers.Dense(units=16)
```

A "model" is a directed acyclic graph of layers. You can think of a model as a "bigger layer" that encompasses multiple sublayers and that can be trained via exposure to data.

The most common and most powerful way to build Keras models is the Functional API. To build models with the Functional API, you start by specifying the shape (and optionally the dtype) of your inputs. If any dimension of your input can vary, you can specify it as None. For instance, an input for a 200x200 RGB image would have shape (200, 200, 3), but an input for RGB images of any size would have shape (None, None, 3).
# Let's say we expect our inputs to be RGB images of arbitrary size
inputs = keras.Input(shape=(None, None, 3))
guides/ipynb/intro_to_keras_for_engineers.ipynb
keras-team/keras-io
apache-2.0
After defining your input(s), you can chain layer transformations on top of your inputs, until your final output:
from tensorflow.keras import layers

# Center-crop images to 150x150
x = CenterCrop(height=150, width=150)(inputs)
# Rescale images to [0, 1]
x = Rescaling(scale=1.0 / 255)(x)

# Apply some convolution and pooling layers
x = layers.Conv2D(filters=32, kernel_size=(3, 3), activation="relu")(x)
x = layers.MaxPooling2D(pool_size=(3, 3))(x)
x = layers.Conv2D(filters=32, kernel_size=(3, 3), activation="relu")(x)
x = layers.MaxPooling2D(pool_size=(3, 3))(x)
x = layers.Conv2D(filters=32, kernel_size=(3, 3), activation="relu")(x)

# Apply global average pooling to get flat feature vectors
x = layers.GlobalAveragePooling2D()(x)

# Add a dense classifier on top
num_classes = 10
outputs = layers.Dense(num_classes, activation="softmax")(x)
guides/ipynb/intro_to_keras_for_engineers.ipynb
keras-team/keras-io
apache-2.0
Once you have defined the directed acyclic graph of layers that turns your input(s) into your outputs, instantiate a Model object:
model = keras.Model(inputs=inputs, outputs=outputs)
guides/ipynb/intro_to_keras_for_engineers.ipynb
keras-team/keras-io
apache-2.0
This model behaves basically like a bigger layer. You can call it on batches of data, like this:
data = np.random.randint(0, 256, size=(64, 200, 200, 3)).astype("float32")
processed_data = model(data)
print(processed_data.shape)
guides/ipynb/intro_to_keras_for_engineers.ipynb
keras-team/keras-io
apache-2.0
You can print a summary of how your data gets transformed at each stage of the model. This is useful for debugging. Note that the output shape displayed for each layer includes the batch size. Here the batch size is None, which indicates our model can process batches of any size.
model.summary()
guides/ipynb/intro_to_keras_for_engineers.ipynb
keras-team/keras-io
apache-2.0
The Functional API also makes it easy to build models that have multiple inputs (for instance, an image and its metadata) or multiple outputs (for instance, predicting the class of the image and the likelihood that a user will click on it). For a deeper dive into what you can do, see our guide to the Functional API.

Training models with fit()

At this point, you know:

- How to prepare your data (e.g. as a NumPy array or a tf.data.Dataset object)
- How to build a model that will process your data

The next step is to train your model on your data. The Model class features a built-in training loop, the fit() method. It accepts Dataset objects, Python generators that yield batches of data, or NumPy arrays.

Before you can call fit(), you need to specify an optimizer and a loss function (we assume you are already familiar with these concepts). This is the compile() step:

```python
model.compile(optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),
              loss=keras.losses.CategoricalCrossentropy())
```

Loss and optimizer can be specified via their string identifiers (in this case their default constructor argument values are used):

```python
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')
```

Once your model is compiled, you can start "fitting" the model to the data. Here's what fitting a model looks like with NumPy data:

```python
model.fit(numpy_array_of_samples, numpy_array_of_labels,
          batch_size=32, epochs=10)
```

Besides the data, you have to specify two key parameters: the batch_size and the number of epochs (iterations over the data). Here our data will get sliced into batches of 32 samples, and the model will iterate 10 times over the data during training.

Here's what fitting a model looks like with a dataset:

```python
model.fit(dataset_of_samples_and_labels, epochs=10)
```

Since the data yielded by a dataset is expected to be already batched, you don't need to specify the batch size here.

Let's look at it in practice with a toy example model that learns to classify MNIST digits:
# Get the data as Numpy arrays
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()

# Build a simple model
inputs = keras.Input(shape=(28, 28))
x = layers.Rescaling(1.0 / 255)(inputs)
x = layers.Flatten()(x)
x = layers.Dense(128, activation="relu")(x)
x = layers.Dense(128, activation="relu")(x)
outputs = layers.Dense(10, activation="softmax")(x)
model = keras.Model(inputs, outputs)
model.summary()

# Compile the model
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Train the model for 1 epoch from Numpy data
batch_size = 64
print("Fit on NumPy data")
history = model.fit(x_train, y_train, batch_size=batch_size, epochs=1)

# Train the model for 1 epoch using a dataset
dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(batch_size)
print("Fit on Dataset")
history = model.fit(dataset, epochs=1)
guides/ipynb/intro_to_keras_for_engineers.ipynb
keras-team/keras-io
apache-2.0
The fit() call returns a "history" object which records what happened over the course of training. The history.history dict contains per-epoch timeseries of metrics values (here we have only one metric, the loss, and one epoch, so we only get a single scalar):
print(history.history)
guides/ipynb/intro_to_keras_for_engineers.ipynb
keras-team/keras-io
apache-2.0
For a detailed overview of how to use fit(), see the guide to training & evaluation with the built-in Keras methods.

Keeping track of performance metrics

As you're training a model, you want to keep track of metrics such as classification accuracy, precision, recall, AUC, etc. Besides, you want to monitor these metrics not only on the training data, but also on a validation set.

Monitoring metrics

You can pass a list of metric objects to compile(), like this:
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=[keras.metrics.SparseCategoricalAccuracy(name="acc")],
)
history = model.fit(dataset, epochs=1)
guides/ipynb/intro_to_keras_for_engineers.ipynb
keras-team/keras-io
apache-2.0
Passing validation data to fit()

You can pass validation data to fit() to monitor your validation loss & validation metrics. Validation metrics get reported at the end of each epoch.
val_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(batch_size)
history = model.fit(dataset, epochs=1, validation_data=val_dataset)
guides/ipynb/intro_to_keras_for_engineers.ipynb
keras-team/keras-io
apache-2.0
Using callbacks for checkpointing (and more)

If training goes on for more than a few minutes, it's important to save your model at regular intervals during training. You can then use your saved models to restart training in case your training process crashes (this is important for multi-worker distributed training, since with many workers at least one of them is bound to fail at some point).

An important feature of Keras is callbacks, configured in fit(). Callbacks are objects that get called by the model at different points during training, in particular:

- At the beginning and end of each batch
- At the beginning and end of each epoch

Callbacks are a way to make model training entirely scriptable.

You can use callbacks to periodically save your model. Here's a simple example: a ModelCheckpoint callback configured to save the model at the end of every epoch. The filename will include the current epoch.

```python
callbacks = [
    keras.callbacks.ModelCheckpoint(
        filepath='path/to/my/model_{epoch}',
        save_freq='epoch')
]
model.fit(dataset, epochs=2, callbacks=callbacks)
```

You can also use callbacks to do things like periodically changing the learning rate of your optimizer, streaming metrics to a Slack bot, sending yourself an email notification when training is complete, etc.

For a detailed overview of what callbacks are available and how to write your own, see the callbacks API documentation and the guide to writing custom callbacks.

Monitoring training progress with TensorBoard

Staring at the Keras progress bar isn't the most ergonomic way to monitor how your loss and metrics are evolving over time. There's a better solution: TensorBoard, a web application that can display real-time graphs of your metrics (and more).

To use TensorBoard with fit(), simply pass a keras.callbacks.TensorBoard callback specifying the directory where to store TensorBoard logs:

```python
callbacks = [
    keras.callbacks.TensorBoard(log_dir='./logs')
]
model.fit(dataset, epochs=2, callbacks=callbacks)
```

You can then launch a TensorBoard instance that you can open in your browser to monitor the logs getting written to this location:

tensorboard --logdir=./logs

What's more, you can launch an in-line TensorBoard tab when training models in Jupyter / Colab notebooks. Here's more information.

After fit(): evaluating test performance & generating predictions on new data

Once you have a trained model, you can evaluate its loss and metrics on new data via evaluate():
loss, acc = model.evaluate(val_dataset)  # returns loss and metrics
print("loss: %.2f" % loss)
print("acc: %.2f" % acc)
guides/ipynb/intro_to_keras_for_engineers.ipynb
keras-team/keras-io
apache-2.0
You can also generate NumPy arrays of predictions (the activations of the output layer(s) in the model) via predict():
predictions = model.predict(val_dataset)
print(predictions.shape)
guides/ipynb/intro_to_keras_for_engineers.ipynb
keras-team/keras-io
apache-2.0
Using fit() with a custom training step

By default, fit() is configured for supervised learning. If you need a different kind of training loop (for instance, a GAN training loop), you can provide your own implementation of the Model.train_step() method. This is the method that is repeatedly called during fit(). Metrics, callbacks, etc. will work as usual.

Here's a simple example that reimplements what fit() normally does:

```python
class CustomModel(keras.Model):
    def train_step(self, data):
        # Unpack the data. Its structure depends on your model and
        # on what you pass to `fit()`.
        x, y = data
        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)  # Forward pass
            # Compute the loss value
            # (the loss function is configured in `compile()`)
            loss = self.compiled_loss(y, y_pred, regularization_losses=self.losses)
        # Compute gradients
        trainable_vars = self.trainable_variables
        gradients = tape.gradient(loss, trainable_vars)
        # Update weights
        self.optimizer.apply_gradients(zip(gradients, trainable_vars))
        # Update metrics (includes the metric that tracks the loss)
        self.compiled_metrics.update_state(y, y_pred)
        # Return a dict mapping metric names to current value
        return {m.name: m.result() for m in self.metrics}

# Construct and compile an instance of CustomModel
inputs = keras.Input(shape=(32,))
outputs = keras.layers.Dense(1)(inputs)
model = CustomModel(inputs, outputs)
model.compile(optimizer='adam', loss='mse', metrics=[...])

# Just use `fit` as usual
model.fit(dataset, epochs=3, callbacks=...)
```

For a detailed overview of how you can customize the built-in training & evaluation loops, see the guide: "Customizing what happens in fit()".

Debugging your model with eager execution

If you write custom training steps or custom layers, you will need to debug them. The debugging experience is an integral part of a framework: with Keras, the debugging workflow is designed with the user in mind.

By default, your Keras models are compiled to highly-optimized computation graphs that deliver fast execution times. That means that the Python code you write (e.g. in a custom train_step) is not the code you are actually executing. This introduces a layer of indirection that can make debugging hard.

Debugging is best done step by step. You want to be able to sprinkle your code with print() statements to see what your data looks like after every operation, and you want to be able to use pdb. You can achieve this by running your model eagerly. With eager execution, the Python code you write is the code that gets executed.

Simply pass run_eagerly=True to compile():

```python
model.compile(optimizer='adam', loss='mse', run_eagerly=True)
```

Of course, the downside is that it makes your model significantly slower. Make sure to switch it back off to get the benefits of compiled computation graphs once you are done debugging!

In general, you will use run_eagerly=True every time you need to debug what's happening inside your fit() call.

Speeding up training with multiple GPUs

Keras has built-in industry-strength support for multi-GPU training and distributed multi-worker training, via the tf.distribute API.

If you have multiple GPUs on your machine, you can train your model on all of them by:

- Creating a tf.distribute.MirroredStrategy object
- Building & compiling your model inside the strategy's scope
- Calling fit() and evaluate() on a dataset as usual

```python
# Create a MirroredStrategy.
strategy = tf.distribute.MirroredStrategy()

# Open a strategy scope.
with strategy.scope():
    # Everything that creates variables should be under the strategy scope.
    # In general this is only model construction & compile().
    model = Model(...)
    model.compile(...)

# Train the model on all available devices.
train_dataset, val_dataset, test_dataset = get_dataset()
model.fit(train_dataset, epochs=2, validation_data=val_dataset)

# Test the model on all available devices.
model.evaluate(test_dataset)
```

For a detailed introduction to multi-GPU & distributed training, see this guide.

Doing preprocessing synchronously on-device vs. asynchronously on host CPU

You've learned about preprocessing, and you've seen an example where we put image preprocessing layers (CenterCrop and Rescaling) directly inside our model.

Having preprocessing happen as part of the model during training is great if you want to do on-device preprocessing, for instance, GPU-accelerated feature normalization or image augmentation. But there are kinds of preprocessing that are not suited to this setup: in particular, text preprocessing with the TextVectorization layer. Due to its sequential nature and due to the fact that it can only run on CPU, it's often a good idea to do asynchronous preprocessing.

With asynchronous preprocessing, your preprocessing operations will run on CPU, and the preprocessed samples will be buffered into a queue while your GPU is busy with the previous batch of data. The next batch of preprocessed samples will then be fetched from the queue to the GPU memory right before the GPU becomes available again (prefetching). This ensures that preprocessing will not be blocking and that your GPU can run at full utilization.

To do asynchronous preprocessing, simply use dataset.map to inject a preprocessing operation into your data pipeline:
# Example training data, of dtype `string`.
samples = np.array([["This is the 1st sample."], ["And here's the 2nd sample."]])
labels = [[0], [1]]

# Prepare a TextVectorization layer.
vectorizer = TextVectorization(output_mode="int")
vectorizer.adapt(samples)

# Asynchronous preprocessing: the text vectorization is part of the tf.data pipeline.
# First, create a dataset
dataset = tf.data.Dataset.from_tensor_slices((samples, labels)).batch(2)
# Apply text vectorization to the samples
dataset = dataset.map(lambda x, y: (vectorizer(x), y))
# Prefetch with a buffer size of 2 batches
dataset = dataset.prefetch(2)

# Our model should expect sequences of integers as inputs
inputs = keras.Input(shape=(None,), dtype="int64")
x = layers.Embedding(input_dim=10, output_dim=32)(inputs)
outputs = layers.Dense(1)(x)
model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse", run_eagerly=True)
model.fit(dataset)
guides/ipynb/intro_to_keras_for_engineers.ipynb
keras-team/keras-io
apache-2.0
Compare this to doing text vectorization as part of the model:
# Our dataset will yield samples that are strings
dataset = tf.data.Dataset.from_tensor_slices((samples, labels)).batch(2)

# Our model should expect strings as inputs
inputs = keras.Input(shape=(1,), dtype="string")
x = vectorizer(inputs)
x = layers.Embedding(input_dim=10, output_dim=32)(x)
outputs = layers.Dense(1)(x)
model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse", run_eagerly=True)
model.fit(dataset)
guides/ipynb/intro_to_keras_for_engineers.ipynb
keras-team/keras-io
apache-2.0
Quiz Question. How many reviews contain the word perfect?
len(products[products['contains_perfect']==1])

def get_numpy_data(dataframe, features, label):
    dataframe['constant'] = 1
    features = ['constant'] + features
    features_frame = dataframe[features]
    features_matrix = features_frame.as_matrix()
    label_sarray = dataframe[label]
    label_array = label_sarray.as_matrix().reshape((len(label_sarray), 1))
    return (features_matrix, label_array)

feature_matrix, sentiment = get_numpy_data(products, important_words, 'sentiment')
print(feature_matrix.shape)
print(sentiment.shape)
ml-classification/week-2/Untitled.ipynb
isendel/machine-learning
apache-2.0
Quiz Question: How many features are there in the feature_matrix?
feature_matrix.shape

def predict_probability(feature_matrix, coefficients):
    score = feature_matrix.dot(coefficients)
    predictions = np.apply_along_axis(lambda x: 1/(1+math.exp(-x)), 1, score)
    return predictions.reshape((max(predictions.shape), 1))

w = np.ones((194,1))
predict_probability(feature_matrix, w)
ml-classification/week-2/Untitled.ipynb
isendel/machine-learning
apache-2.0
Compute derivative of log likelihood with respect to a single coefficient
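For reference, the update implemented by feature_derivative and logistic_regression below is the standard gradient of the logistic-regression log likelihood with respect to a single coefficient $w_j$:

$$\frac{\partial \ell(\mathbf{w})}{\partial w_j} = \sum_{i=1}^{N} h_j(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 \mid \mathbf{x}_i, \mathbf{w})\right)$$

where $h_j(\mathbf{x}_i)$ is the j-th feature of example $i$; in the code, the "errors" array is exactly the indicator minus the predicted probability.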
def feature_derivative(errors, feature):
    derivative = errors.transpose().dot(feature)
    return derivative

def compute_log_likelihood(feature_matrix, sentiment, coefficients):
    indicator = (sentiment==+1)
    scores = feature_matrix.dot(coefficients)
    lp = np.sum((indicator-1)*scores - np.log(1 + np.exp(-scores)))
    return lp

def logistic_regression(feature_matrix, sentiment, initial_coefficients, step_size, max_iter):
    coefficients = np.array(initial_coefficients)
    for itr in range(max_iter):
        predictions = predict_probability(feature_matrix, coefficients)
        indicator = sentiment==+1
        errors = indicator - predictions
        for j in range(len(coefficients)):
            derivative = feature_derivative(errors, feature_matrix[:,j].transpose())
            coefficients[j] += step_size*derivative
        # Checking whether log likelihood is increasing
        if itr <= 15 or (itr <= 100 and itr % 10 == 0) or (itr <= 1000 and itr % 100 == 0) or (itr <= 10000 and itr % 1000 == 0) or itr % 10000 == 0:
            lp = compute_log_likelihood(feature_matrix, sentiment, coefficients)
            print('iteration %*d: log likelihood of observed labels = %.8f' % (int(np.ceil(np.log10(max_iter))), itr, lp))
    return coefficients

initial_coefficients = np.zeros((feature_matrix.shape[1], 1))
step_size = 1e-7
#max_iter = 301
max_iter = 301

coefficients = logistic_regression(feature_matrix, sentiment, initial_coefficients, step_size, max_iter)

scores = feature_matrix.dot(coefficients)
predictions = np.apply_along_axis(lambda s: 1 if s >= 0 else -1, axis=1, arr=scores)
predictions
len([x for x in predictions if x == 1])
len([x for x in predictions if x == -1])
products['predictions'] = predictions
ml-classification/week-2/Untitled.ipynb
isendel/machine-learning
apache-2.0
Quiz question: What is the accuracy of the model on predictions made above? (round to 2 digits of accuracy)
accuracy = len(products[products['sentiment']==products['predictions']])/len(products)
print('Accuracy: %s' % accuracy)
ml-classification/week-2/Untitled.ipynb
isendel/machine-learning
apache-2.0
Which words contribute most to positive & negative sentiments
coefficients = list(coefficients[1:])  # exclude intercept
word_coefficient_tuples = [(word, coefficient) for word, coefficient in zip(important_words, coefficients)]
word_coefficient_tuples = sorted(word_coefficient_tuples, key=lambda x: x[1], reverse=True)

word_coefficient_tuples[:10]
sorted(word_coefficient_tuples, key=lambda x: x[1], reverse=False)[:10]
ml-classification/week-2/Untitled.ipynb
isendel/machine-learning
apache-2.0
aggregate_source - NIST XPS DB

Example: We want to collect all records from the NIST XPS Database and analyze the binding energies. This database has almost 30,000 records, so we have to use aggregate_sources().
# First, let's aggregate all the nist_xps_db data.
all_entries = mdf.aggregate_sources("nist_xps_db")
print(len(all_entries))

# Now, let's parse out the energy_uncertainty_ev and print the results for analysis.
uncertainties = {}
for record in all_entries:
    if record["mdf"]["resource_type"] == "record":
        unc = record.get("nist_xps_db_v1", {}).get("energy_uncertainty_ev", 0)
        if not uncertainties.get(unc):
            uncertainties[unc] = 1
        else:
            uncertainties[unc] += 1
print(json.dumps(uncertainties, sort_keys=True, indent=4, separators=(',', ': ')))
docs/examples/Example_Aggregations.ipynb
materials-data-facility/forge
apache-2.0
aggregate - Multiple Datasets

Example: We want to analyze how often elements are studied with Gallium (Ga), and what the most frequent elemental pairing is. There are more than 10,000 records containing Gallium data.
# First, let's aggregate everything that has "Ga" in the list of elements.
all_results = mdf.aggregate("material.elements:Ga")
print(len(all_results))

# Now, let's parse out the other elements in each record and keep a running tally to print out.
elements = {}
for record in all_results:
    if record["mdf"]["resource_type"] == "record":
        elems = record["material"]["elements"]
        for elem in elems:
            if elem in elements.keys():
                elements[elem] += 1
            else:
                elements[elem] = 1
print(json.dumps(elements, sort_keys=True, indent=4, separators=(',', ': ')))
docs/examples/Example_Aggregations.ipynb
materials-data-facility/forge
apache-2.0
Day 5: Introduction to Linear Regression

Objective

In this challenge, we practice using linear regression techniques. Check out the Resources tab to learn more!

Task

You are given the Math aptitude test (x) scores for a set of students, as well as their respective scores for a Statistics course (y). The students enrolled in Statistics immediately after taking the math aptitude test. The scores (x, y) for each student are:

(95, 85), (85, 95), (80, 70), (70, 65), (60, 70)

If a student scored an 80 on the Math aptitude test, what score would we expect her to achieve in Statistics? Determine the equation of the best-fit line using the least squares method, and then compute the value of y when x = 80.
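For reference, the least-squares fit computed below follows the standard formulas for the slope and intercept:

$$b = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sum_i (x_i - \bar{x})^2}, \qquad a = \bar{y} - b\,\bar{x}, \qquad \hat{y} = a + b\,x$$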
# Python: import libraries
import numpy as np
from scipy import stats

# Scores given in the task: (Math aptitude, Statistics)
arr_data = [(95, 85), (85, 95), (80, 70), (70, 65), (60, 70)]
arr_x = [i[0] for i in arr_data]
arr_y = [i[1] for i in arr_data]

# Inspect the full regression result
stats.linregress(arr_x, arr_y)

# Least-squares fit: slope m, intercept c
m, c, r_val, p_val, err = stats.linregress(arr_x, arr_y)

# y = m*x + c, evaluated at x = 80
m*80 + c
HackerRank/Intro_to_Statistics/Day_05.ipynb
KartikKannapur/Programming_Challenges
mit
Load Images from Disk

If the data is too large to put in memory all at once, we can load it batch by batch into memory from disk with tf.data.Dataset. The image_dataset_from_directory function used below can help you build such a tf.data.Dataset for image data. First, we download the data and extract the files.
dataset_url = "https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz"  # noqa: E501
local_file_path = tf.keras.utils.get_file(
    origin=dataset_url, fname="image_data", extract=True
)
# The file is extracted in the same directory as the downloaded file.
local_dir_path = os.path.dirname(local_file_path)
# After checking manually, we know the extracted data is in 'flower_photos'.
data_dir = os.path.join(local_dir_path, "flower_photos")
print(data_dir)
docs/ipynb/load.ipynb
keras-team/autokeras
apache-2.0
The directory should look like this. Each folder contains the images in the same class.

```
flowers_photos/
  daisy/
  dandelion/
  roses/
  sunflowers/
  tulips/
```

We can split the data into training and testing as we load them.
batch_size = 32
img_height = 180
img_width = 180

train_data = ak.image_dataset_from_directory(
    data_dir,
    # Use 20% data as testing data.
    validation_split=0.2,
    subset="training",
    # Set seed to ensure the same split when loading testing data.
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size,
)

test_data = ak.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset="validation",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size,
)
docs/ipynb/load.ipynb
keras-team/autokeras
apache-2.0
Then we just do one quick demo of AutoKeras to make sure the dataset works.
clf = ak.ImageClassifier(overwrite=True, max_trials=1)
clf.fit(train_data, epochs=1)
print(clf.evaluate(test_data))
docs/ipynb/load.ipynb
keras-team/autokeras
apache-2.0
Load Texts from Disk

You can also load text datasets in the same way.
dataset_url = "http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz"
local_file_path = tf.keras.utils.get_file(
    fname="text_data",
    origin=dataset_url,
    extract=True,
)
# The file is extracted in the same directory as the downloaded file.
local_dir_path = os.path.dirname(local_file_path)
# After checking manually, we know the extracted data is in 'aclImdb'.
data_dir = os.path.join(local_dir_path, "aclImdb")
# Remove the unused data folder.
shutil.rmtree(os.path.join(data_dir, "train/unsup"))
docs/ipynb/load.ipynb
keras-team/autokeras
apache-2.0
For this dataset, the data is already split into train and test. We just load them separately.
print(data_dir)
train_data = ak.text_dataset_from_directory(
    os.path.join(data_dir, "train"), batch_size=batch_size
)
test_data = ak.text_dataset_from_directory(
    os.path.join(data_dir, "test"), shuffle=False, batch_size=batch_size
)

clf = ak.TextClassifier(overwrite=True, max_trials=1)
clf.fit(train_data, epochs=2)
print(clf.evaluate(test_data))
docs/ipynb/load.ipynb
keras-team/autokeras
apache-2.0
Load Data with Python Generators

If you want to use generators, you can refer to the following code.
N_BATCHES = 30
BATCH_SIZE = 100
N_FEATURES = 10

def get_data_generator(n_batches, batch_size, n_features):
    """Get a generator returning n_batches random data.

    The shape of the data is (batch_size, n_features).
    """

    def data_generator():
        for _ in range(n_batches * batch_size):
            x = np.random.randn(n_features)
            y = x.sum(axis=0) / n_features > 0.5
            yield x, y

    return data_generator

dataset = tf.data.Dataset.from_generator(
    get_data_generator(N_BATCHES, BATCH_SIZE, N_FEATURES),
    output_types=(tf.float32, tf.float32),
    output_shapes=((N_FEATURES,), tuple()),
).batch(BATCH_SIZE)

clf = ak.StructuredDataClassifier(overwrite=True, max_trials=1, seed=5)
clf.fit(x=dataset, validation_data=dataset, batch_size=BATCH_SIZE)
print(clf.evaluate(dataset))
docs/ipynb/load.ipynb
keras-team/autokeras
apache-2.0
Create a requests.Session for holding our oauth token
import requests

s = requests.session()
s.headers['Authorization'] = 'token ' + gh_token
examples/auth_state/gist-nb.ipynb
jupyter/oauthenticator
bsd-3-clause
Verify that we have the scopes we expect:
r = s.get('https://api.github.com/user')
r.raise_for_status()
r.headers['X-OAuth-Scopes']
examples/auth_state/gist-nb.ipynb
jupyter/oauthenticator
bsd-3-clause
Now we can make a gist!
import json

r = s.post('https://api.github.com/gists',
    data=json.dumps({
        'files': {
            'test.md': {
                'content': '# JupyterHub gist\n\nThis file was created from JupyterHub.',
            },
        },
        'description': 'test uploading a gist from JupyterHub',
    }),
)
r.raise_for_status()
print("Created gist: %s" % r.json()['html_url'])
examples/auth_state/gist-nb.ipynb
jupyter/oauthenticator
bsd-3-clause
TFP Probabilistic Layers: Regression <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/probability/examples/Probabilistic_Layers_Regression"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Probabilistic_Layers_Regression.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Probabilistic_Layers_Regression.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/probability/tensorflow_probability/examples/jupyter_notebooks/Probabilistic_Layers_Regression.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> In this example we show how to fit regression models using TFP's "probabilistic layers." Dependencies & Prerequisites
#@title Import { display-mode: "form" }

from pprint import pprint
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns

import tensorflow.compat.v2 as tf
tf.enable_v2_behavior()

import tensorflow_probability as tfp

sns.reset_defaults()
#sns.set_style('whitegrid')
#sns.set_context('talk')
sns.set_context(context='talk', font_scale=0.7)

%matplotlib inline

tfd = tfp.distributions
site/en-snapshot/probability/examples/Probabilistic_Layers_Regression.ipynb
tensorflow/docs-l10n
apache-2.0
Note: if for some reason you cannot access a GPU, this colab will still work. (Training will just take longer.) Motivation Wouldn't it be great if we could use TFP to specify a probabilistic model then simply minimize the negative log-likelihood, i.e.,
negloglik = lambda y, rv_y: -rv_y.log_prob(y)
site/en-snapshot/probability/examples/Probabilistic_Layers_Regression.ipynb
tensorflow/docs-l10n
apache-2.0
Well, not only is it possible, but this colab shows how! (In the context of linear regression problems.)
#@title Synthesize dataset.
w0 = 0.125
b0 = 5.
x_range = [-20, 60]

def load_dataset(n=150, n_tst=150):
    np.random.seed(43)
    def s(x):
        g = (x - x_range[0]) / (x_range[1] - x_range[0])
        return 3 * (0.25 + g**2.)
    x = (x_range[1] - x_range[0]) * np.random.rand(n) + x_range[0]
    eps = np.random.randn(n) * s(x)
    y = (w0 * x * (1. + np.sin(x)) + b0) + eps
    x = x[..., np.newaxis]
    x_tst = np.linspace(*x_range, num=n_tst).astype(np.float32)
    x_tst = x_tst[..., np.newaxis]
    return y, x, x_tst

y, x, x_tst = load_dataset()
site/en-snapshot/probability/examples/Probabilistic_Layers_Regression.ipynb
tensorflow/docs-l10n
apache-2.0
Case 1: No Uncertainty
# Build model.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(1),
    tfp.layers.DistributionLambda(lambda t: tfd.Normal(loc=t, scale=1)),
])

# Do inference.
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=negloglik)
model.fit(x, y, epochs=1000, verbose=False);

# Profit.
[print(np.squeeze(w.numpy())) for w in model.weights];
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)

#@title Figure 1: No uncertainty.
w = np.squeeze(model.layers[-2].kernel.numpy())
b = np.squeeze(model.layers[-2].bias.numpy())

plt.figure(figsize=[6, 1.5])  # inches
#plt.figure(figsize=[8, 5])  # inches

plt.plot(x, y, 'b.', label='observed');
plt.plot(x_tst, yhat.mean(), 'r', label='mean', linewidth=4);

plt.ylim(-0., 17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));

ax = plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)

plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))

plt.savefig('/tmp/fig1.png', bbox_inches='tight', dpi=300)
site/en-snapshot/probability/examples/Probabilistic_Layers_Regression.ipynb
tensorflow/docs-l10n
apache-2.0
Case 2: Aleatoric Uncertainty
# Build model. model = tf.keras.Sequential([ tf.keras.layers.Dense(1 + 1), tfp.layers.DistributionLambda( lambda t: tfd.Normal(loc=t[..., :1], scale=1e-3 + tf.math.softplus(0.05 * t[...,1:]))), ]) # Do inference. model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=negloglik) model.fit(x, y, epochs=1000, verbose=False); # Profit. [print(np.squeeze(w.numpy())) for w in model.weights]; yhat = model(x_tst) assert isinstance(yhat, tfd.Distribution) #@title Figure 2: Aleatoric Uncertainty plt.figure(figsize=[6, 1.5]) # inches plt.plot(x, y, 'b.', label='observed'); m = yhat.mean() s = yhat.stddev() plt.plot(x_tst, m, 'r', linewidth=4, label='mean'); plt.plot(x_tst, m + 2 * s, 'g', linewidth=2, label=r'mean + 2 stddev'); plt.plot(x_tst, m - 2 * s, 'g', linewidth=2, label=r'mean - 2 stddev'); plt.ylim(-0.,17); plt.yticks(np.linspace(0, 15, 4)[1:]); plt.xticks(np.linspace(*x_range, num=9)); ax=plt.gca(); ax.xaxis.set_ticks_position('bottom') ax.yaxis.set_ticks_position('left') ax.spines['left'].set_position(('data', 0)) ax.spines['top'].set_visible(False) ax.spines['right'].set_visible(False) #ax.spines['left'].set_smart_bounds(True) #ax.spines['bottom'].set_smart_bounds(True) plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5)) plt.savefig('/tmp/fig2.png', bbox_inches='tight', dpi=300)
site/en-snapshot/probability/examples/Probabilistic_Layers_Regression.ipynb
tensorflow/docs-l10n
apache-2.0
Case 3: Epistemic Uncertainty
# Specify the surrogate posterior over `keras.layers.Dense` `kernel` and `bias`. def posterior_mean_field(kernel_size, bias_size=0, dtype=None): n = kernel_size + bias_size c = np.log(np.expm1(1.)) return tf.keras.Sequential([ tfp.layers.VariableLayer(2 * n, dtype=dtype), tfp.layers.DistributionLambda(lambda t: tfd.Independent( tfd.Normal(loc=t[..., :n], scale=1e-5 + tf.nn.softplus(c + t[..., n:])), reinterpreted_batch_ndims=1)), ]) # Specify the prior over `keras.layers.Dense` `kernel` and `bias`. def prior_trainable(kernel_size, bias_size=0, dtype=None): n = kernel_size + bias_size return tf.keras.Sequential([ tfp.layers.VariableLayer(n, dtype=dtype), tfp.layers.DistributionLambda(lambda t: tfd.Independent( tfd.Normal(loc=t, scale=1), reinterpreted_batch_ndims=1)), ]) # Build model. model = tf.keras.Sequential([ tfp.layers.DenseVariational(1, posterior_mean_field, prior_trainable, kl_weight=1/x.shape[0]), tfp.layers.DistributionLambda(lambda t: tfd.Normal(loc=t, scale=1)), ]) # Do inference. model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=negloglik) model.fit(x, y, epochs=1000, verbose=False); # Profit. [print(np.squeeze(w.numpy())) for w in model.weights]; yhat = model(x_tst) assert isinstance(yhat, tfd.Distribution) #@title Figure 3: Epistemic Uncertainty plt.figure(figsize=[6, 1.5]) # inches plt.clf(); plt.plot(x, y, 'b.', label='observed'); yhats = [model(x_tst) for _ in range(100)] avgm = np.zeros_like(x_tst[..., 0]) for i, yhat in enumerate(yhats): m = np.squeeze(yhat.mean()) s = np.squeeze(yhat.stddev()) if i < 25: plt.plot(x_tst, m, 'r', label='ensemble means' if i == 0 else None, linewidth=0.5) avgm += m plt.plot(x_tst, avgm/len(yhats), 'r', label='overall mean', linewidth=4) plt.ylim(-0.,17); plt.yticks(np.linspace(0, 15, 4)[1:]); plt.xticks(np.linspace(*x_range, num=9)); ax=plt.gca(); ax.xaxis.set_ticks_position('bottom') ax.yaxis.set_ticks_position('left') ax.spines['left'].set_position(('data', 0)) ax.spines['top'].set_visible(False) ax.spines['right'].set_visible(False) #ax.spines['left'].set_smart_bounds(True) #ax.spines['bottom'].set_smart_bounds(True) plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5)) plt.savefig('/tmp/fig3.png', bbox_inches='tight', dpi=300)
site/en-snapshot/probability/examples/Probabilistic_Layers_Regression.ipynb
tensorflow/docs-l10n
apache-2.0
Case 4: Aleatoric & Epistemic Uncertainty
# Build model. model = tf.keras.Sequential([ tfp.layers.DenseVariational(1 + 1, posterior_mean_field, prior_trainable, kl_weight=1/x.shape[0]), tfp.layers.DistributionLambda( lambda t: tfd.Normal(loc=t[..., :1], scale=1e-3 + tf.math.softplus(0.01 * t[...,1:]))), ]) # Do inference. model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=negloglik) model.fit(x, y, epochs=1000, verbose=False); # Profit. [print(np.squeeze(w.numpy())) for w in model.weights]; yhat = model(x_tst) assert isinstance(yhat, tfd.Distribution) #@title Figure 4: Both Aleatoric & Epistemic Uncertainty plt.figure(figsize=[6, 1.5]) # inches plt.plot(x, y, 'b.', label='observed'); yhats = [model(x_tst) for _ in range(100)] avgm = np.zeros_like(x_tst[..., 0]) for i, yhat in enumerate(yhats): m = np.squeeze(yhat.mean()) s = np.squeeze(yhat.stddev()) if i < 15: plt.plot(x_tst, m, 'r', label='ensemble means' if i == 0 else None, linewidth=1.) plt.plot(x_tst, m + 2 * s, 'g', linewidth=0.5, label='ensemble means + 2 ensemble stdev' if i == 0 else None); plt.plot(x_tst, m - 2 * s, 'g', linewidth=0.5, label='ensemble means - 2 ensemble stdev' if i == 0 else None); avgm += m plt.plot(x_tst, avgm/len(yhats), 'r', label='overall mean', linewidth=4) plt.ylim(-0.,17); plt.yticks(np.linspace(0, 15, 4)[1:]); plt.xticks(np.linspace(*x_range, num=9)); ax=plt.gca(); ax.xaxis.set_ticks_position('bottom') ax.yaxis.set_ticks_position('left') ax.spines['left'].set_position(('data', 0)) ax.spines['top'].set_visible(False) ax.spines['right'].set_visible(False) #ax.spines['left'].set_smart_bounds(True) #ax.spines['bottom'].set_smart_bounds(True) plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5)) plt.savefig('/tmp/fig4.png', bbox_inches='tight', dpi=300)
site/en-snapshot/probability/examples/Probabilistic_Layers_Regression.ipynb
tensorflow/docs-l10n
apache-2.0
Case 5: Functional Uncertainty
#@title Custom PSD Kernel class RBFKernelFn(tf.keras.layers.Layer): def __init__(self, **kwargs): super(RBFKernelFn, self).__init__(**kwargs) dtype = kwargs.get('dtype', None) self._amplitude = self.add_variable( initializer=tf.constant_initializer(0), dtype=dtype, name='amplitude') self._length_scale = self.add_variable( initializer=tf.constant_initializer(0), dtype=dtype, name='length_scale') def call(self, x): # Never called -- this is just a layer so it can hold variables # in a way Keras understands. return x @property def kernel(self): return tfp.math.psd_kernels.ExponentiatedQuadratic( amplitude=tf.nn.softplus(0.1 * self._amplitude), length_scale=tf.nn.softplus(5. * self._length_scale) ) # For numeric stability, set the default floating-point dtype to float64 tf.keras.backend.set_floatx('float64') # Build model. num_inducing_points = 40 model = tf.keras.Sequential([ tf.keras.layers.InputLayer(input_shape=[1]), tf.keras.layers.Dense(1, kernel_initializer='ones', use_bias=False), tfp.layers.VariationalGaussianProcess( num_inducing_points=num_inducing_points, kernel_provider=RBFKernelFn(), event_shape=[1], inducing_index_points_initializer=tf.constant_initializer( np.linspace(*x_range, num=num_inducing_points, dtype=x.dtype)[..., np.newaxis]), unconstrained_observation_noise_variance_initializer=( tf.constant_initializer(np.array(0.54).astype(x.dtype))), ), ]) # Do inference. batch_size = 32 loss = lambda y, rv_y: rv_y.variational_loss( y, kl_weight=np.array(batch_size, x.dtype) / x.shape[0]) model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=loss) model.fit(x, y, batch_size=batch_size, epochs=1000, verbose=False) # Profit. yhat = model(x_tst) assert isinstance(yhat, tfd.Distribution) #@title Figure 5: Functional Uncertainty y, x, _ = load_dataset() plt.figure(figsize=[6, 1.5]) # inches plt.plot(x, y, 'b.', label='observed'); num_samples = 7 for i in range(num_samples): sample_ = yhat.sample().numpy() plt.plot(x_tst, sample_[..., 0].T, 'r', linewidth=0.9, label='ensemble means' if i == 0 else None); plt.ylim(-0.,17); plt.yticks(np.linspace(0, 15, 4)[1:]); plt.xticks(np.linspace(*x_range, num=9)); ax=plt.gca(); ax.xaxis.set_ticks_position('bottom') ax.yaxis.set_ticks_position('left') ax.spines['left'].set_position(('data', 0)) ax.spines['top'].set_visible(False) ax.spines['right'].set_visible(False) #ax.spines['left'].set_smart_bounds(True) #ax.spines['bottom'].set_smart_bounds(True) plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5)) plt.savefig('/tmp/fig5.png', bbox_inches='tight', dpi=300)
site/en-snapshot/probability/examples/Probabilistic_Layers_Regression.ipynb
tensorflow/docs-l10n
apache-2.0
Decoding sensor space data (MVPA)

Decoding, a.k.a. MVPA or supervised machine learning applied to MEG data in sensor space. Here the classifier is applied to every time point.
import numpy as np
import matplotlib.pyplot as plt

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

import mne
from mne.datasets import sample
from mne.decoding import (SlidingEstimator, GeneralizingEstimator,
                          cross_val_multiscore, LinearModel, get_coef)

data_path = sample.data_path()

plt.close('all')

# sphinx_gallery_thumbnail_number = 4
0.15/_downloads/plot_sensors_decoding.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Temporal decoding

We'll use logistic regression as the machine learning model for a binary classification.
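The cell below assumes an epochs object created earlier in the original tutorial; that loading cell is not included in this extract. As a rough sketch of the usual setup on the MNE sample dataset (the file name, event IDs, and epoching parameters here are assumptions, not necessarily the tutorial's exact values):

```python
# Hedged sketch of the assumed preceding cell: load and epoch the sample data.
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'  # assumed file
raw = mne.io.read_raw_fif(raw_fname, preload=True)
events = mne.find_events(raw, stim_channel='STI 014')
event_id = {'Auditory/Left': 1, 'Visual/Left': 3}  # assumed: left auditory vs. left visual
epochs = mne.Epochs(raw, events, event_id, tmin=-0.2, tmax=0.5,
                    proj=True, baseline=(None, 0), preload=True, decim=3)
epochs.pick_types(meg=True, exclude='bads')
```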
# We will train the classifier on all left visual vs auditory trials on MEG

X = epochs.get_data()  # MEG signals: n_epochs, n_channels, n_times
y = epochs.events[:, 2]  # target: Audio left or right

clf = make_pipeline(StandardScaler(), LogisticRegression())

time_decod = SlidingEstimator(clf, n_jobs=1, scoring='roc_auc')

scores = cross_val_multiscore(time_decod, X, y, cv=5, n_jobs=1)

# Mean scores across cross-validation splits
scores = np.mean(scores, axis=0)

# Plot
fig, ax = plt.subplots()
ax.plot(epochs.times, scores, label='score')
ax.axhline(.5, color='k', linestyle='--', label='chance')
ax.set_xlabel('Times')
ax.set_ylabel('AUC')  # Area Under the Curve
ax.legend()
ax.axvline(.0, color='k', linestyle='-')
ax.set_title('Sensor space decoding')
plt.show()

# You can retrieve the spatial filters and spatial patterns if you explicitly
# use a LinearModel
clf = make_pipeline(StandardScaler(), LinearModel(LogisticRegression()))
time_decod = SlidingEstimator(clf, n_jobs=1, scoring='roc_auc')
time_decod.fit(X, y)

coef = get_coef(time_decod, 'patterns_', inverse_transform=True)
evoked = mne.EvokedArray(coef, epochs.info, tmin=epochs.times[0])
evoked.plot_joint(times=np.arange(0., .500, .100), title='patterns')
0.15/_downloads/plot_sensors_decoding.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
The results of these mass univariate analyses can be visualised by plotting :class:mne.Evoked objects as images (via :class:mne.Evoked.plot_image) and masking points for significance. Here, we group channels by Regions of Interest to facilitate localising effects on the head.
# We need an evoked object to plot the image to be masked
evoked = mne.combine_evoked([long_words.average(), -short_words.average()],
                            weights='equal')  # calculate difference wave
time_unit = dict(time_unit="s")
evoked.plot_joint(title="Long vs. short words", ts_args=time_unit,
                  topomap_args=time_unit)  # show difference wave

# Create ROIs by checking channel labels
selections = make_1020_channel_selections(evoked.info, midline="12z")

# Visualize the results
fig, axes = plt.subplots(nrows=3, figsize=(8, 8))
axes = {sel: ax for sel, ax in zip(selections, axes.ravel())}
evoked.plot_image(axes=axes, group_by=selections, colorbar=False, show=False,
                  mask=significant_points, show_names="all", titles=None,
                  **time_unit)
plt.colorbar(axes["Left"].images[-1], ax=list(axes.values()), shrink=.3,
             label="µV")

plt.show()
0.20/_downloads/2784a8d5822ed9797c0330f973573c10/plot_stats_cluster_erp.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Graphical excellence and integrity

Find a data-focused visualization on one of the following websites that is a positive example of the principles that Tufte describes in The Visual Display of Quantitative Information.

- Vox
- Upshot
- 538
- BuzzFeed

Upload the image for the visualization to this directory and display the image inline in this notebook.
# Add your filename and uncomment the following line:
Image(filename='Quartz_death_penalty_chart.0.png')
assignments/assignment04/TheoryAndPracticeEx01.ipynb
enbanuel/phys202-2015-work
mit
The saver op will enable saving and restoring:
saver = tf.train.Saver()
ch02_basics/Concept06_saving_variables.ipynb
BinRoot/TensorFlow-Book
mit
Loop through the data and update the spike variable when there is a significant increase:
for i in range(1, len(raw_data)):
    if raw_data[i] - raw_data[i-1] > 5:
        spikes_val = spikes.eval()
        spikes_val[i] = True
        updater = tf.assign(spikes, spikes_val)
        updater.eval()
ch02_basics/Concept06_saving_variables.ipynb
BinRoot/TensorFlow-Book
mit
Now, save your variable to disk!
save_path = saver.save(sess, "./spikes.ckpt")
print("spikes data saved in file: %s" % save_path)
ch02_basics/Concept06_saving_variables.ipynb
BinRoot/TensorFlow-Book
mit
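The notebook only demonstrates saving. As a complementary sketch (not part of the original cell), the same checkpoint could later be restored into a new session, assuming a spikes variable of matching shape and a tf.train.Saver have been defined:

```python
# Hedged sketch: restore the saved variable in a fresh session.
with tf.Session() as sess:
    saver.restore(sess, "./spikes.ckpt")
    print(spikes.eval())
```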
Adieu:
sess.close()
ch02_basics/Concept06_saving_variables.ipynb
BinRoot/TensorFlow-Book
mit
StockTwits Data Collection

First we will write a function to query the StockTwits API to get up to 30 tweets at a time for a given ticker symbol. The API allows getting only tweets older than some tweet ID, which we need for repeatedly querying the server to get many recent tweets.
# returns python object representation of JSON in response
def get_response(symbol, older_than, retries=5):
    url = 'https://api.stocktwits.com/api/2/streams/symbol/%s.json?max=%d' % (symbol, older_than-1)
    for _ in range(retries):
        response = requests.get(url)
        if response.status_code == 200:
            return json.loads(response.content)
        elif response.status_code == 429:
            print response.content
            return None
        time.sleep(1.0)
    # couldn't get response
    return None
stocktwits_analysis.ipynb
tdrussell/stocktwits_analysis
mit
Now we can write a function to build or extend a dataset of tweets for a given symbol. This works by remembering the oldest ID of tweets we have gotten so far, and using that as an option in the API query to get older tweets. By doing this we can iteratively build up a list of recent tweets for a given symbol ordered from most recent to least. The data is stored in JSON form, which is the same format the API returns to us.
# extends the current dataset for a given symbol with more tweets
def get_older_tweets(symbol, num_queries):
    path = './data/%s.json' % symbol
    if os.path.exists(path):
        # extending an existing json file
        with open(path, 'r') as f:
            data = json.load(f)
        if len(data) > 0:
            older_than = data[-1]['id']
        else:
            older_than = 1000000000000
    else:
        # creating a new json file
        data = []
        older_than = 1000000000000  # any huge number

    for i in range(num_queries):
        content = get_response(symbol, older_than)
        if content == None:
            print 'Error, an API query timed out'
            break
        data.extend(content['messages'])
        older_than = data[-1]['id']
        stdout.write('\rSuccessfully made query %d' % (i+1))
        stdout.flush()
        # sleep to make sure we don't get throttled
        time.sleep(0.5)

    # write the new data to the JSON file
    with open(path, 'w') as f:
        json.dump(data, f)
    print
    print 'Done'
stocktwits_analysis.ipynb
tdrussell/stocktwits_analysis
mit
Now we fetch data for several ticker symbols. Note that to get all the data, you will have to rerun this cell once an hour multiple times because of API rate limiting. The JSON files will be distributed with this notebook so this cell is only here to show how we originally got the data.
# get some data
# apparently a client can only make 200 requests an hour, so we can't get all the data at once

# make data directory if needed
if not os.path.exists('./data'):
    os.mkdir('./data')

symbols = ['AAPL', 'NVDA', 'TSLA', 'AMD', 'JNUG', 'JDST', 'LABU', 'QCOM', 'INTC', 'DGAZ']
tweets_per_symbol = 3000

for symbol in symbols:
    path = './data/%s.json' % symbol
    if os.path.exists(path):
        with open(path, 'r') as f:
            num_tweets = len(json.load(f))
    else:
        num_tweets = 0
    num_queries = (tweets_per_symbol - num_tweets - 1)/30 + 1
    if num_queries > 0:
        print 'Getting tweets for symbol %s' % symbol
        get_older_tweets(symbol, num_queries)
stocktwits_analysis.ipynb
tdrussell/stocktwits_analysis
mit
The next cell is mainly just for debugging purposes. There is no need to run it.
# check that we're doing the querying and appending correctly without getting duplicates
# and that message IDs are in descending order
symbol = 'NVDA'
with open('./data/%s.json' % symbol, 'r') as f:
    data = json.load(f)

S = set()
old_id = 1000000000000
for message in data:
    message_id = message['id']
    assert message_id not in S
    assert message_id < old_id
    old_id = message_id
    S.add(message_id)
print 'Passed'
stocktwits_analysis.ipynb
tdrussell/stocktwits_analysis
mit
Stock Market Data Comparison

Next, we'll extract stock market data for the symbols we're interested in. For the purpose of our experiment, we'll use Yahoo Finance's daily stock data. The API takes in a start date, end date, and stock symbol.
# `datetime` and `get_data_yahoo` are assumed to be imported in an earlier cell
# (get_data_yahoo typically comes from pandas-datareader).
enddate = datetime.now()
startdate = datetime(2015, 1, 1)

stock_data = get_data_yahoo('AAPL', startdate, enddate)
stock_data['Volume'].plot(legend=True, figsize=(10,4));
stock_data.head()
stock_data['Adj Close'].plot(legend=True, figsize=(10,4));
stocktwits_analysis.ipynb
tdrussell/stocktwits_analysis
mit
As you can see, we can quickly and easily pull both volume and closing prices for the dates of interest. This data was useful in exploring the possibility of predicting market performance.

Data Visualization & Exploration

Next, we parse the JSON data we've collected into a Pandas DataFrame to more easily work with it.
# Function takes in a JSON and returns a Pandas DataFrame for easier operation.
def stocktwits_json_to_df(data, verbose=False):
    #data = json.loads(results)
    columns = ['id','created_at','username','name','user_id','body','basic_sentiment','reshare_count']
    db = pd.DataFrame(index=range(len(data)), columns=columns)
    for i, message in enumerate(data):
        db.loc[i,'id'] = message['id']
        db.loc[i,'created_at'] = message['created_at']
        db.loc[i,'username'] = message['user']['username']
        db.loc[i,'name'] = message['user']['name']
        db.loc[i,'user_id'] = message['user']['id']
        db.loc[i,'body'] = message['body']
        # We'll classify bullish as +1 and bearish as -1 to make it ready for classification training
        try:
            if (message['entities']['sentiment']['basic'] == 'Bullish'):
                db.loc[i,'basic_sentiment'] = 1
            elif (message['entities']['sentiment']['basic'] == 'Bearish'):
                db.loc[i,'basic_sentiment'] = -1
            else:
                db.loc[i,'basic_sentiment'] = 0
        except:
            db.loc[i,'basic_sentiment'] = 0
        db.loc[i,'reshare_count'] = message['reshares']['reshared_count']
        for j, symbol in enumerate(message['symbols']):
            db.loc[i,'symbol'+str(j)] = symbol['symbol']
        if verbose:
            #print message
            print db.loc[i,:]
    db['created_at'] = pd.to_datetime(db['created_at'])
    return db
stocktwits_analysis.ipynb
tdrussell/stocktwits_analysis
mit
We're going to use \$TSLA to visualize data since we have data going back the furthest. We'll now combine these two data sources, so we can generate useful metrics for understanding how StockTwits relates to the stock market over time.
# Load tweets for visualizing data
filename = 'TSLA.json'
path = './tsla_data/%s' % filename
with open(path, 'r') as f:
    data = json.load(f)

db = stocktwits_json_to_df(data)
print '%d examples extracted ' % db.shape[0]

enddate = db['created_at'].max()
startdate = db['created_at'].min()
print startdate, enddate

stock_data = get_data_yahoo('TSLA', startdate, enddate)
stocktwits_analysis.ipynb
tdrussell/stocktwits_analysis
mit
We will now combine our datasets. In the process, we also generate statistics related to the total number of bullish/bearish tweets. This is accomplished by grouping tweets by day. We pay attention to the totals and their ratios to each other.

- Mentions: total number of mentions, with or without bullish/bearish labels
- Total Bullish/Bearish: number of tweets with the bullish/bearish label on the given date
- Total Predictions: the sum of bullish and bearish tweets on the given date
- Bull Ratio: the ratio of total bullish tweets to total predictions
- Bear Ratio: the ratio of total bearish tweets to total predictions
# Counts mentions and bullish/bearish ratio of stock tweets collected
def tweet_metrics(stock_data, stock_tweets):
    stock_data['mentions'] = np.zeros(stock_data.shape[0])
    stock_data['total_bullish'] = np.zeros(stock_data.shape[0])
    stock_data['total_bearish'] = np.zeros(stock_data.shape[0])
    stock_data['total_predictions'] = np.zeros(stock_data.shape[0])
    stock_data['bull_ratio'] = np.zeros(stock_data.shape[0])
    stock_data['bear_ratio'] = np.zeros(stock_data.shape[0])
    for i, d in enumerate(stock_data.index):
        tweets_on_d = stock_tweets[stock_tweets['created_at'].dt.date==d.date()]
        stock_data.loc[d,'mentions'] = tweets_on_d.shape[0]
        stock_data.loc[d,'total_bullish'] = tweets_on_d[tweets_on_d['basic_sentiment']==1].shape[0]
        stock_data.loc[d,'total_bearish'] = tweets_on_d[tweets_on_d['basic_sentiment']==-1].shape[0]
        stock_data.loc[d,'total_predictions'] = stock_data.loc[d,'total_bearish'] + stock_data.loc[d,'total_bullish']
        stock_data.loc[d,'bull_ratio'] = stock_data.loc[d,'total_bullish']/float(stock_data.loc[d,'total_predictions'])
        stock_data.loc[d,'bear_ratio'] = stock_data.loc[d,'total_bearish']/float(stock_data.loc[d,'total_predictions'])
    return stock_data
stocktwits_analysis.ipynb
tdrussell/stocktwits_analysis
mit
Now we can visualize the results of our analysis.
stock_metrics = tweet_metrics(stock_data, db)
print stock_metrics[['mentions','total_bullish','total_bearish','bull_ratio']]
stocktwits_analysis.ipynb
tdrussell/stocktwits_analysis
mit
Note that Yahoo's Finance data is "delayed" (i.e. it won't show the current day unless the market has closed). Next, we'll compare our metrics to gain an understanding of StockTwits's correlation to the stock market. Our first comparison is between the total number of mentions and the trading volume. In the two graphs below you will see a clear correlation between the number of mentions and the trading volume. However, what we don't see is any predictive trend in the data.
stock_metrics[['mentions']].plot(legend=True, figsize=(10,4));
stock_metrics[['Volume']].plot(legend=True, figsize=(10,4));
stocktwits_analysis.ipynb
tdrussell/stocktwits_analysis
mit
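To put a number on the relationship described above (this small check is an addition, not part of the original notebook; it only uses the stock_metrics DataFrame already built), we could compute the Pearson correlation between mentions and volume directly:

```python
# Quantify the mentions/volume relationship with a Pearson correlation.
corr = stock_metrics['mentions'].corr(stock_metrics['Volume'])
print 'Correlation between mentions and trading volume: %.3f' % corr
```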
Finally, we'll compare the total closing price to the bullish/bearish predictions of StockTwits. Here, we see the strong correlation between the market and StockTwits begin to break down. There seems to be an abundance of optimism: the majority of labelled tweets are "bullish". Additionally, not all peaks and valleys appear to be forecasted by the market. At this time, our sample size is too small to say anything with certainty.
stock_metrics[['total_bullish','total_bearish','total_predictions']].plot(legend=True, figsize=(10,4));
stock_metrics[['bull_ratio','bear_ratio']].plot(legend=True, figsize=(10,4));
stock_metrics[['Adj Close']].plot(legend=True, figsize=(10,4));
stocktwits_analysis.ipynb
tdrussell/stocktwits_analysis
mit
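To probe the "not predictive" observation a bit further, one rough supplementary check is to correlate today's bull ratio with the next day's price change. This sketch assumes the column names produced by tweet_metrics() above and, given the small sample, is illustrative only:

```python
import numpy as np
import pandas as pd

# Next-day percentage change of the adjusted close, aligned with today's bull ratio
next_day_return = stock_metrics['Adj Close'].pct_change().shift(-1)
aligned = pd.concat([stock_metrics['bull_ratio'], next_day_return], axis=1).dropna()
corr = np.corrcoef(aligned.iloc[:, 0], aligned.iloc[:, 1])[0, 1]
print('Correlation between bull ratio and next-day return: %.3f' % corr)
```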
Next, we will explore the connections between symbols mentioned in the tweets. The function below counts co-occurrences of symbols within individual StockTwits messages.
def countcomentions(df): def getsymbolset(df): symbols = [] for i, row in df.iterrows(): for symbol in row: if (pd.notnull(symbol)): symbols.append(symbol) return set(symbols) def getallsymbols(df): columns = df.columns symbolcolumns = [] for col in columns: if col.startswith('symbol'): symbolcolumns.append(col) return df[symbolcolumns] def count(df, stock_symbol): cnt = Counter() for i, row in df.iterrows(): for sym in row: if (sym!=stock_symbol) & pd.notnull(sym): cnt[sym] += 1 return cnt df = getallsymbols(df) symbolset = getsymbolset(df) print len(symbolset), "total symbols found." co = np.zeros((len(symbolset), len(symbolset))) co = pd.DataFrame(co, index=symbolset, columns=symbolset) for i, row in df.iterrows(): for stock_symbol in row: for sym in row: if (sym!=stock_symbol) & pd.notnull(stock_symbol) & pd.notnull(sym): co.loc[stock_symbol,sym]+=1 return co #return pd.DataFrame(co) co = countcomentions(db)
stocktwits_analysis.ipynb
tdrussell/stocktwits_analysis
mit
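A note on complexity: the nested loops above touch every (row, symbol, symbol) combination, which grows quickly with the number of symbols. A leaner sketch of the same counting idea, assuming the same wide "symbol*" columns, accumulates unordered pairs with itertools.combinations and a Counter; the dense symmetric matrix can be rebuilt from the pair counts if needed.

```python
import itertools
from collections import Counter
import pandas as pd

def count_comentions_sparse(df):
    """Count co-occurring symbol pairs per tweet as a sparse Counter."""
    symbol_cols = [c for c in df.columns if c.startswith('symbol')]
    pair_counts = Counter()
    for _, row in df[symbol_cols].iterrows():
        symbols = sorted(set(s for s in row if pd.notnull(s)))
        for a, b in itertools.combinations(symbols, 2):
            pair_counts[(a, b)] += 1
    return pair_counts  # {(sym_a, sym_b): number of tweets mentioning both}
```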
We see a clear power-law distribution when viewing the histogram of co-mentions related to Tesla Motors. The vast majority of symbols are co-mentioned only a few times.
plt.figure(figsize=(10,4))
sns.distplot(co.loc['TSLA',:], kde=False)
stocktwits_analysis.ipynb
tdrussell/stocktwits_analysis
mit
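If we want to look at the power-law claim a little more closely, a supplementary log-log view of the same co-mention counts is one option (a roughly straight decay is consistent with, though not proof of, a heavy tail):

```python
import numpy as np
import matplotlib.pyplot as plt

# Histogram of TSLA co-mention counts on log-log axes
counts = co.loc['TSLA', :]
counts = counts[counts > 0]
plt.figure(figsize=(10, 4))
plt.hist(counts, bins=np.logspace(0, np.log10(counts.max()), 20))
plt.xscale('log')
plt.yscale('log')
plt.xlabel('Number of co-mentions with TSLA')
plt.ylabel('Number of symbols')
plt.show()
```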
The histogram above tells us that some stocks have a disproportionate relationship to TSLA. The twenty most commonly co-mentioned symbols are given below. Unsurprisingly, SolarCity Corporation (SCTY) was listed most commonly. SolarCity was recently in the news due to its decision to merge with Tesla Motors following a long and well-publicized lead-up. Investors and stock traders clearly believed the merger was important to Tesla's stock price, as \$SCTY has by far the greatest number of co-mentions with \$TSLA. Additionally, other top co-mentions include large tech companies, as well as Ford and General Motors.
co.loc['TSLA',co.loc['TSLA',:]>0].sort_values(ascending=False)[:20]
stocktwits_analysis.ipynb
tdrussell/stocktwits_analysis
mit
Very few others even come close to the density of TSLA and SCTY. Those that do are often in the same segment as Tesla, or are big-name tech stocks: Apple, Amazon, Netflix, Facebook, Google, and Alibaba. Below is a heatmap of the co-mentions matrix. It is 788 x 788, with the color scale clipped to the range 0 to 5 because the vast majority of stocks are only mentioned a few times. We see a very clear cross of axes representing TSLA. A slightly fuzzier intersection below and to the right represents SCTY. You can guess the rest from the list above.
plt.figure(figsize=(45,10)) sns.heatmap(co, xticklabels=False, vmin=0, vmax=5, yticklabels=False, square=True);
stocktwits_analysis.ipynb
tdrussell/stocktwits_analysis
mit
Having successfully mined StockTwits, we explored some of the relationships found in the data. However, we did not find sufficient evidence that stock performance can be predicted from tweets. For this reason, we shift our focus: StockTwits provides a remarkably large set of labeled data for training, so we explore sentiment prediction using the 'Bullish' and 'Bearish' labels. More on this is covered in the section below. Sentiment Prediction Some of the tweets have bullish or bearish labels, indicating whether the poster thinks the mentioned stock will go up or down in price, respectively. We will extract only those tweets which have such sentiment labels, and convert the labels into either 0 for bearish or 1 for bullish.
def get_tweets_and_labels(data): # filter out messages without a bullish/bearish tag data = filter(lambda m: m['entities']['sentiment'] != None, data) # get tweets tweets = map(lambda m: m['body'], data) # get labels def create_label(message): sentiment = message['entities']['sentiment']['basic'] if sentiment == 'Bearish': return 0 elif sentiment == 'Bullish': return 1 else: raise Exception('Got unexpected sentiment') labels = map(create_label, data) return tweets, labels # get all tweets and labels available tweets = [] labels = [] all_tweets = [] for filename in os.listdir('./data'): path = './data/%s' % filename with open(path, 'r') as f: data = json.load(f) all_tweets.extend(map(lambda m: m['body'], data)) t, l = get_tweets_and_labels(data) tweets.extend(t) labels.extend(l) assert len(tweets) == len(labels) print '%d labeled examples extracted ' % len(tweets)
stocktwits_analysis.ipynb
tdrussell/stocktwits_analysis
mit
The next two cells define functions to create a TF-IDF vectorizer for the tweets and to train a linear SVM classifier to predict bearish or bullish sentiment.
def tfidf_vectorizer(tweets, all_tweets=None):
    vectorizer = TfidfVectorizer()
    if all_tweets is not None:
        # use all tweets, including unlabeled, to learn vocab and tfidf weights
        vectorizer.fit(all_tweets)
    else:
        vectorizer.fit(tweets)
    return vectorizer

def train_svm(X, y):
    model = svm.LinearSVC(penalty='l2', loss='hinge', C=1.0)
    #model = svm.SVC(C=1.0, kernel='rbf')
    model.fit(X, y)
    return model
stocktwits_analysis.ipynb
tdrussell/stocktwits_analysis
mit
We first create the TF-IDF feature matrix for all of our labeled data. Then we randomly permute it and split 10% off into a held-out test set. We also print out the percentage of labeled tweets that are bullish, because the two classes are likely not balanced. We want to know how well a classifier that only predicts the most common class would do.
vectorizer = tfidf_vectorizer(tweets, all_tweets) X = vectorizer.transform(tweets) words = vectorizer.get_feature_names() y = np.array(labels) print X.shape print y.shape N = X.shape[0] num_train = int(math.floor(N*0.9)) P = np.random.permutation(N) X_tr = X[P[:num_train]] y_tr = y[P[:num_train]] X_te = X[P[num_train:]] y_te = y[P[num_train:]] print 'Training set size is %d' % num_train print 'Percent bullish = %f%%' % (100*y.mean())
stocktwits_analysis.ipynb
tdrussell/stocktwits_analysis
mit
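For reference, the accuracy of a trivial classifier that always predicts the most common class is simply the larger of the two class frequencies, so a quick supplementary baseline is:

```python
# Majority-class baseline accuracy (the number any real model should beat)
baseline = max(y.mean(), 1 - y.mean())
print('Majority-class baseline accuracy = %f' % baseline)
```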
Now it is simple to train the SVM and print out the accuracy for both the training and testing data.
model = train_svm(X_tr, y_tr) print 'Training set accuracy = %f' % model.score(X_tr, y_tr) print 'Test set accuracy = %f' % model.score(X_te, y_te)
stocktwits_analysis.ipynb
tdrussell/stocktwits_analysis
mit
We can see that the classifier does several percentage points better than just guessing the most common class. Now that we have a trained SVM, we can use the weights to print out the words most indicative of bearish or bullish sentiment. Because we used a linear SVM, each weight coefficient corresponds to a column in the TF-IDF matrix, which itself corresponds to a word. We get the indices of the weight coefficients with the highest and lowest values, and use them to print the most bullish and bearish words.
weights = np.squeeze(model.coef_) sorted_weight_indices = np.argsort(weights) num_words = 30 bearish_indices = sorted_weight_indices[:num_words] bullish_indices = sorted_weight_indices[-num_words:][::-1] words = np.array(words) print 'Bearish words:' for w in words[bearish_indices]: print w print print 'Bullish words:' for w in words[bullish_indices]: print w
stocktwits_analysis.ipynb
tdrussell/stocktwits_analysis
mit
The results are actually pretty interesting. I'll give a bit of explanation for some of the terms for people who are not familiar with the stock market. If you expect the price of a stock to fall, you can try to make money off it by shorting it or buying a type of option called puts. If the price is falling, you could say it is tanking, crashing, or on a downtrend. The classifier picked up on all of these terms, and many more, as correctly indicating bearish sentiment. Likewise, if you expect the price to rise, you can make money by buying the stock and going 'long' on it, or by buying a type of option known as calls. And obviously terms like 'higher' or 'buy' clearly indicated bullishness. The classifier picked up on all these terms and others that are similar as correctly being indicative of bullish sentiment. Finally, let's use a trained classifier to predict sentiment on a held-out dataset of Tesla Motors tweets, and plot it along with the actual sentiment. We use logistic regression instead of SVM for this experiment because we find it gives better results.
model = linear_model.LogisticRegression(penalty='l2', C=10.0, class_weight='balanced') #model = svm.LinearSVC(penalty='l2', loss='hinge', C=1.0, class_weight='balanced') model.fit(X, y) with open('./tsla_data/TSLA.json', 'r') as f: data = json.load(f)[::-1] def extract_body(m): return m['body'] def extract_date(m): return m['created_at'] def extract_sentiment(m): if m['entities']['sentiment'] != None: sentiment = m['entities']['sentiment']['basic'] if sentiment == 'Bearish': return 0 else: return 1 else: return np.nan d = {'body': map(extract_body, data), 'date': pd.to_datetime(map(extract_date, data)), 'sentiment': map(extract_sentiment, data)} df = pd.DataFrame(data=d) # use classifier to predict sentiment for unlabeled examples features = vectorizer.transform(df['body']) predictions = model.predict(features) predicted_sentiment = [] for i, sentiment in enumerate(df['sentiment']): if np.isnan(sentiment): predicted_sentiment.append(predictions[i]) else: predicted_sentiment.append(sentiment) df['predicted_sentiment'] = pd.Series(predictions) print df.dtypes print df.head() grouped_df = df.groupby(pd.Grouper(key='date', freq='1D')).aggregate(np.mean) print grouped_df.head() plt.plot(grouped_df['sentiment'], label='Actual sentiment') plt.plot(grouped_df['predicted_sentiment'], label='Predicted sentiment') plt.legend(loc='lower right') coef = np.corrcoef(grouped_df['sentiment'], grouped_df['predicted_sentiment'])[0,1] print print 'Correlation coefficient = %f' % coef
stocktwits_analysis.ipynb
tdrussell/stocktwits_analysis
mit
Implement Preprocessing Function Text to Word Ids As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the &lt;EOS&gt; word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end. You can get the &lt;EOS&gt; word id by doing: python target_vocab_to_int['&lt;EOS&gt;'] You can get other word ids using source_vocab_to_int and target_vocab_to_int.
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int): """ Convert source and target text to proper word ids :param source_text: String that contains all the source text. :param target_text: String that contains all the target text. :param source_vocab_to_int: Dictionary to go from the source words to an id :param target_vocab_to_int: Dictionary to go from the target words to an id :return: A tuple of lists (source_id_text, target_id_text) """ def get_sentences(text): return text.split('\n') def get_ids(words, vocab_to_int): return [vocab_to_int[word.strip()] for word in words.split()] def add_eos(sentences, vocab_to_int): eos_id = vocab_to_int['<EOS>'] return [sentence + [eos_id] for sentence in sentences] source_id_text = [get_ids(sentence, source_vocab_to_int) for sentence in get_sentences(source_text)] target_id_text = [get_ids(sentence, target_vocab_to_int) for sentence in get_sentences(target_text)] target_id_text = add_eos(target_id_text, target_vocab_to_int) return source_id_text, target_id_text """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_text_to_ids(text_to_ids)
4_dlnd_language_translation/dlnd_language_translation.ipynb
NagyAttila/Udacity_DLND_Assigments
gpl-3.0
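As a quick sanity check, here is a toy example with made-up vocabularies (not the project's real data); the target side should end with the &lt;EOS&gt; id while the source side should not:

```python
# Hypothetical toy vocabularies, for illustration only
source_vocab = {'new': 0, 'jersey': 1, 'is': 2, 'cold': 3}
target_vocab = {'<EOS>': 0, 'new': 1, 'jersey': 2, 'est': 3, 'froid': 4}

src_ids, tgt_ids = text_to_ids('new jersey is cold', 'new jersey est froid',
                               source_vocab, target_vocab)
print(src_ids)  # [[0, 1, 2, 3]]
print(tgt_ids)  # [[1, 2, 3, 4, 0]]  <- ends with the <EOS> id
```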
Build the Neural Network You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below: - model_inputs - process_decoding_input - encoding_layer - decoding_layer_train - decoding_layer_infer - decoding_layer - seq2seq_model Input Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders: Input text placeholder named "input" using the TF Placeholder name parameter with rank 2. Targets placeholder with rank 2. Learning rate placeholder with rank 0. Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0. Return the placeholders in the following tuple: (Input, Targets, Learning Rate, Keep Probability)
def model_inputs(): """ Create TF Placeholders for input, targets, and learning rate. :return: Tuple (input, targets, learning rate, keep probability) """ input_ = tf.placeholder(shape=(None, None), dtype=tf.int32, name='input') targets = tf.placeholder(shape=(None, None), dtype=tf.int32, name='targets') learning_rate = tf.placeholder(dtype=tf.float32, name='lr') keep_prob = tf.placeholder(dtype=tf.float32, name='keep_prob') return input_, targets, learning_rate, keep_prob """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_model_inputs(model_inputs)
4_dlnd_language_translation/dlnd_language_translation.ipynb
NagyAttila/Udacity_DLND_Assigments
gpl-3.0
Process Decoding Input Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the beginning of each batch.
def process_decoding_input(target_data, target_vocab_to_int, batch_size):
    """
    Preprocess target data for decoding
    :param target_data: Target Placeholder
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :param batch_size: Batch Size
    :return: Preprocessed target data
    """
    go_id = target_vocab_to_int.get('<GO>')
    gos = tf.fill(dims=(batch_size, 1), value=go_id)
    return tf.concat((gos, target_data[:, :-1]), axis=1)

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_process_decoding_input(process_decoding_input)
4_dlnd_language_translation/dlnd_language_translation.ipynb
NagyAttila/Udacity_DLND_Assigments
gpl-3.0
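Conceptually, process_decoding_input drops the last id of every target row and prepends the &lt;GO&gt; id. A small NumPy illustration with made-up ids (mirroring the TensorFlow ops above, not replacing them):

```python
import numpy as np

# Made-up batch of target word ids; suppose the <GO> id is 7 and <EOS> is 1
target_batch = np.array([[11, 12, 13, 1],
                         [21, 22, 23, 1]])
go_id = 7

# Drop the last id of each row and prepend <GO>
decoder_input = np.concatenate(
    [np.full((target_batch.shape[0], 1), go_id), target_batch[:, :-1]], axis=1)
print(decoder_input)
# [[ 7 11 12 13]
#  [ 7 21 22 23]]
```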
Encoding Implement encoding_layer() to create an Encoder RNN layer using tf.nn.dynamic_rnn().
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob): """ Create encoding layer :param rnn_inputs: Inputs for the RNN :param rnn_size: RNN Size :param num_layers: Number of layers :param keep_prob: Dropout keep probability :return: RNN state """ lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size) lstm_layers = tf.contrib.rnn.MultiRNNCell([lstm] * num_layers) lstm_layers = tf.contrib.rnn.DropoutWrapper(lstm_layers, keep_prob) _, state = tf.nn.dynamic_rnn(lstm_layers, rnn_inputs, dtype=tf.float32) return state """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_encoding_layer(encoding_layer)
4_dlnd_language_translation/dlnd_language_translation.ipynb
NagyAttila/Udacity_DLND_Assigments
gpl-3.0
Decoding - Training Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob):
    """
    Create a decoding layer for training
    :param encoder_state: Encoder State
    :param dec_cell: Decoder RNN Cell
    :param dec_embed_input: Decoder embedded input
    :param sequence_length: Sequence Length
    :param decoding_scope: TensorFlow Variable Scope for decoding
    :param output_fn: Function to apply the output layer
    :param keep_prob: Dropout keep probability
    :return: Train Logits
    """
    dec_cell = tf.contrib.rnn.DropoutWrapper(dec_cell, keep_prob)
    decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state)
    outputs, final_state, final_context_state = \
        tf.contrib.seq2seq.dynamic_rnn_decoder(
            cell=dec_cell,
            decoder_fn=decoder_fn,
            inputs=dec_embed_input,
            sequence_length=sequence_length,
            scope=decoding_scope)
    return output_fn(outputs)

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_train(decoding_layer_train)
4_dlnd_language_translation/dlnd_language_translation.ipynb
NagyAttila/Udacity_DLND_Assigments
gpl-3.0
Decoding - Inference Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().
from tensorflow.contrib.seq2seq import simple_decoder_fn_inference, \ dynamic_rnn_decoder def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob): """ Create a decoding layer for inference :param encoder_state: Encoder state :param dec_cell: Decoder RNN Cell :param dec_embeddings: Decoder embeddings :param start_of_sequence_id: GO ID :param end_of_sequence_id: EOS Id :param maximum_length: Maximum length of :param vocab_size: Size of vocabulary :param decoding_scope: TensorFlow Variable Scope for decoding :param output_fn: Function to apply the output layer :param keep_prob: Dropout keep probability :return: Inference Logits """ decoder_fn = simple_decoder_fn_inference( output_fn=output_fn, encoder_state=encoder_state, embeddings=dec_embeddings, start_of_sequence_id=start_of_sequence_id, end_of_sequence_id=end_of_sequence_id, maximum_length=maximum_length, num_decoder_symbols=vocab_size) outputs, final_state, final_context_state = dynamic_rnn_decoder( cell=dec_cell, decoder_fn=decoder_fn, scope=decoding_scope) return outputs """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_infer(decoding_layer_infer)
4_dlnd_language_translation/dlnd_language_translation.ipynb
NagyAttila/Udacity_DLND_Assigments
gpl-3.0
Build the Decoding Layer Implement decoding_layer() to create a Decoder RNN layer. Create the RNN cell for decoding using rnn_size and num_layers. Create the output function using a lambda to transform its input, logits, to class logits. Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits. Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits. Note: You'll need to use tf.variable_scope to share variables between training and inference.
from tensorflow.contrib.rnn import BasicLSTMCell, MultiRNNCell from tensorflow.contrib.layers import linear def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob): """ Create decoding layer :param dec_embed_input: Decoder embedded input :param dec_embeddings: Decoder embeddings :param encoder_state: The encoded state :param vocab_size: Size of vocabulary :param sequence_length: Sequence Length :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :param keep_prob: Dropout keep probability :return: Tuple of (Training Logits, Inference Logits) """ lstm = BasicLSTMCell( rnn_size ) dec_cell = MultiRNNCell( [lstm] * num_layers ) output_fn = lambda x: linear(x, vocab_size) with tf.variable_scope("decoder") as decoding_scope: train_logits = decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) start_of_sequence_id = target_vocab_to_int['<GO>'] end_of_sequence_id = target_vocab_to_int['<EOS>'] maximum_length = sequence_length+1 with tf.variable_scope(decoding_scope): decoding_scope.reuse_variables() infer_logits = decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) return train_logits, infer_logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer(decoding_layer)
4_dlnd_language_translation/dlnd_language_translation.ipynb
NagyAttila/Udacity_DLND_Assigments
gpl-3.0
Build the Neural Network Apply the functions you implemented above to: Apply embedding to the input data for the encoder. Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob). Process target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function. Apply embedding to the target data for the decoder. Decode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob).
def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length,
                  source_vocab_size, target_vocab_size, enc_embedding_size,
                  dec_embedding_size, rnn_size, num_layers, target_vocab_to_int):
    """
    Build the Sequence-to-Sequence part of the neural network
    :param input_data: Input placeholder
    :param target_data: Target placeholder
    :param keep_prob: Dropout keep probability placeholder
    :param batch_size: Batch Size
    :param sequence_length: Sequence Length
    :param source_vocab_size: Source vocabulary size
    :param target_vocab_size: Target vocabulary size
    :param enc_embedding_size: Encoder embedding size
    :param dec_embedding_size: Decoder embedding size
    :param rnn_size: RNN Size
    :param num_layers: Number of layers
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :return: Tuple of (Training Logits, Inference Logits)
    """
    # 1: Embed input data
    embedded_input = tf.contrib.layers.embed_sequence(
        input_data, source_vocab_size, enc_embedding_size)

    # 2: Encode input
    encoder_state = encoding_layer(embedded_input, rnn_size, num_layers, keep_prob)

    # 3: Process target data
    preprocessed_target = process_decoding_input(target_data, target_vocab_to_int, batch_size)

    # 4: Embed target data
    dec_embeddings = tf.Variable(tf.truncated_normal(shape=(target_vocab_size, dec_embedding_size), dtype=tf.float32))
    dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, preprocessed_target)

    # 5: Decode encoded input
    train_logits, infer_logits = decoding_layer(dec_embed_input, dec_embeddings, encoder_state,
                                                target_vocab_size, sequence_length, rnn_size,
                                                num_layers, target_vocab_to_int, keep_prob)

    return train_logits, infer_logits

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_seq2seq_model(seq2seq_model)
4_dlnd_language_translation/dlnd_language_translation.ipynb
NagyAttila/Udacity_DLND_Assigments
gpl-3.0
Neural Network Training Hyperparameters Tune the following parameters: Set epochs to the number of epochs. Set batch_size to the batch size. Set rnn_size to the size of the RNNs. Set num_layers to the number of layers. Set encoding_embedding_size to the size of the embedding for the encoder. Set decoding_embedding_size to the size of the embedding for the decoder. Set learning_rate to the learning rate. Set keep_probability to the Dropout keep probability
# Number of Epochs epochs = 10 # Batch Size batch_size = 256 # RNN Size rnn_size = 256 # Number of Layers num_layers = 2 # Embedding Size encoding_embedding_size = 200 decoding_embedding_size = 200 # Learning Rate learning_rate = 0.001 # Dropout Keep Probability keep_probability = 0.5
4_dlnd_language_translation/dlnd_language_translation.ipynb
NagyAttila/Udacity_DLND_Assigments
gpl-3.0
Build the Graph Build the graph using the neural network you implemented.
""" DON'T MODIFY ANYTHING IN THIS CELL """ save_path = 'checkpoints/dev' (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() max_target_sentence_length = max([len(sentence) for sentence in source_int_text]) train_graph = tf.Graph() with train_graph.as_default(): input_data, targets, lr, keep_prob = model_inputs() sequence_length = tf.placeholder_with_default(max_target_sentence_length, shape=None, name='sequence_length') input_shape = tf.shape(input_data) train_logits, inference_logits = seq2seq_model( tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int), encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int) tf.identity(inference_logits, 'logits') with tf.name_scope("optimization"): # Loss function cost = tf.contrib.seq2seq.sequence_loss( train_logits, targets, tf.ones([input_shape[0], sequence_length])) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients)
4_dlnd_language_translation/dlnd_language_translation.ipynb
NagyAttila/Udacity_DLND_Assigments
gpl-3.0
Train Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
%pdb off """ DON'T MODIFY ANYTHING IN THIS CELL """ import time def get_accuracy(target, logits): """ Calculate accuracy """ max_seq = max(target.shape[1], logits.shape[1]) if max_seq - target.shape[1]: target = np.pad( target_batch, [(0,0),(0,max_seq - target_batch.shape[1]),(0,0)], 'constant') if max_seq - batch_train_logits.shape[1]: logits = np.pad( logits, [(0,0),(0,max_seq - logits.shape[1]), (0,0)], 'constant') return np.mean(np.equal(target, np.argmax(logits, 2))) train_source = source_int_text[batch_size:] train_target = target_int_text[batch_size:] valid_source = helper.pad_sentence_batch(source_int_text[:batch_size]) valid_target = helper.pad_sentence_batch(target_int_text[:batch_size]) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(epochs): for batch_i, (source_batch, target_batch) in enumerate( helper.batch_data(train_source, train_target, batch_size)): start_time = time.time() _, loss = sess.run( [train_op, cost], {input_data: source_batch, targets: target_batch, lr: learning_rate, sequence_length: target_batch.shape[1], keep_prob: keep_probability} ) batch_train_logits = sess.run( inference_logits, {input_data: source_batch, keep_prob: 1.0}) batch_valid_logits = sess.run( inference_logits, {input_data: valid_source, keep_prob: 1.0}) train_acc = get_accuracy(target_batch, batch_train_logits) valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits) end_time = time.time() print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}' .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss)) # Save Model saver = tf.train.Saver() saver.save(sess, save_path) print('Model Trained and Saved')
4_dlnd_language_translation/dlnd_language_translation.ipynb
NagyAttila/Udacity_DLND_Assigments
gpl-3.0
Sentence to Sequence To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences. Convert the sentence to lowercase Convert words into ids using vocab_to_int Convert words not in the vocabulary to the &lt;UNK&gt; word id.
def sentence_to_seq(sentence, vocab_to_int): """ Convert a sentence to a sequence of ids :param sentence: String :param vocab_to_int: Dictionary to go from the words to an id :return: List of word ids """ sentences = sentence.lower() words = sentences.split() unk_id = vocab_to_int['<UNK>'] return [vocab_to_int.get(word, unk_id) for word in words] """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_sentence_to_seq(sentence_to_seq)
4_dlnd_language_translation/dlnd_language_translation.ipynb
NagyAttila/Udacity_DLND_Assigments
gpl-3.0
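A tiny illustration with a made-up vocabulary (only to show the &lt;UNK&gt; fallback; the real vocab_to_int comes from the preprocessed data):

```python
toy_vocab = {'<UNK>': 0, 'he': 1, 'saw': 2, 'a': 3, 'truck': 4}
print(sentence_to_seq('He saw a shiny truck', toy_vocab))
# [1, 2, 3, 0, 4]  <- 'shiny' is not in the vocabulary, so it maps to <UNK>
```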
NOTE on notation * _x, _y, _z, ...: NumPy 0-d or 1-d arrays * _X, _Y, _Z, ...: NumPy 2-d or higher dimensional arrays * x, y, z, ...: 0-d or 1-d tensors * X, Y, Z, ...: 2-d or higher dimensional tensors Scan Q1. Compute the cumulative sum of X along the second axis.
_X = np.array([[1,2,3], [4,5,6]]) X = tf.convert_to_tensor(_X) out = tf.cumsum(X, axis=1) print(out.eval()) _out = np.cumsum(_X, axis=1) assert np.array_equal(out.eval(), _out) # tf.cumsum == np.cumsum
programming/Python/tensorflow/exercises/Math_Part3_Solutions.ipynb
diegocavalca/Studies
cc0-1.0
Q2. Compute the cumulative product of X along the second axis.
_X = np.array([[1,2,3], [4,5,6]]) X = tf.convert_to_tensor(_X) out = tf.cumprod(X, axis=1) print(out.eval()) _out = np.cumprod(_X, axis=1) assert np.array_equal(out.eval(), _out) # tf.cumprod == np.cumprod
programming/Python/tensorflow/exercises/Math_Part3_Solutions.ipynb
diegocavalca/Studies
cc0-1.0
Segmentation Q3. Compute the sum along the first two elements and the last two elements of X separately.
_X = np.array( [[1,2,3,4], [-1,-2,-3,-4], [-10,-20,-30,-40], [10,20,30,40]]) X = tf.convert_to_tensor(_X) out = tf.segment_sum(X, [0, 0, 1, 1]) print(out.eval())
programming/Python/tensorflow/exercises/Math_Part3_Solutions.ipynb
diegocavalca/Studies
cc0-1.0
Q4. Compute the product along the first two elements and the last two elements of X separately.
_X = np.array( [[1,2,3,4], [1,1/2,1/3,1/4], [1,2,3,4], [-1,-1,-1,-1]]) X = tf.convert_to_tensor(_X) out = tf.segment_prod(X, [0, 0, 1, 1]) print(out.eval())
programming/Python/tensorflow/exercises/Math_Part3_Solutions.ipynb
diegocavalca/Studies
cc0-1.0
Q5. Compute the minimum along the first two elements and the last two elements of X separately.
_X = np.array( [[1,4,5,7], [2,3,6,8], [1,2,3,4], [-1,-2,-3,-4]]) X = tf.convert_to_tensor(_X) out = tf.segment_min(X, [0, 0, 1, 1]) print(out.eval())
programming/Python/tensorflow/exercises/Math_Part3_Solutions.ipynb
diegocavalca/Studies
cc0-1.0
Q6. Compute the maximum along the first two elements and the last two elements of X separately.
_X = np.array( [[1,4,5,7], [2,3,6,8], [1,2,3,4], [-1,-2,-3,-4]]) X = tf.convert_to_tensor(_X) out = tf.segment_max(X, [0, 0, 1, 1]) print(out.eval())
programming/Python/tensorflow/exercises/Math_Part3_Solutions.ipynb
diegocavalca/Studies
cc0-1.0
Q7. Compute the mean along the first two elements and the last two elements of X separately.
_X = np.array( [[1,2,3,4], [5,6,7,8], [-1,-2,-3,-4], [-5,-6,-7,-8]]) X = tf.convert_to_tensor(_X) out = tf.segment_mean(X, [0, 0, 1, 1]) print(out.eval())
programming/Python/tensorflow/exercises/Math_Part3_Solutions.ipynb
diegocavalca/Studies
cc0-1.0
Q8. Compute the sum along the second and fourth and the first and third elements of X separately, in that order.
_X = np.array( [[1,2,3,4], [-1,-2,-3,-4], [-10,-20,-30,-40], [10,20,30,40]]) X = tf.convert_to_tensor(_X) out = tf.unsorted_segment_sum(X, [1, 0, 1, 0], 2) print(out.eval())
programming/Python/tensorflow/exercises/Math_Part3_Solutions.ipynb
diegocavalca/Studies
cc0-1.0
Sequence Comparison and Indexing Q9. Get the indices of maximum and minimum values of X along the second axis.
_X = np.random.permutation(10).reshape((2, 5)) print("_X =", _X) X = tf.convert_to_tensor(_X) out1 = tf.argmax(X, axis=1) out2 = tf.argmin(X, axis=1) print(out1.eval()) print(out2.eval()) _out1 = np.argmax(_X, axis=1) _out2 = np.argmin(_X, axis=1) assert np.allclose(out1.eval(), _out1) assert np.allclose(out2.eval(), _out2) # tf.argmax == np.argmax # tf.argmin == np.argmin
programming/Python/tensorflow/exercises/Math_Part3_Solutions.ipynb
diegocavalca/Studies
cc0-1.0
Q10. Find the unique elements of x that are not present in y.
_x = np.array([0, 1, 2, 5, 0]) _y = np.array([0, 1, 4]) x = tf.convert_to_tensor(_x) y = tf.convert_to_tensor(_y) out = tf.setdiff1d(x, y)[0] print(out.eval()) _out = np.setdiff1d(_x, _y) assert np.array_equal(out.eval(), _out) # Note that tf.setdiff1d returns a tuple of (out, idx), # whereas np.setdiff1d returns out only.
programming/Python/tensorflow/exercises/Math_Part3_Solutions.ipynb
diegocavalca/Studies
cc0-1.0
Q11. Return the elements of X, if X < 4, otherwise X*10.
_X = np.arange(1, 10).reshape(3, 3) X = tf.convert_to_tensor(_X) out = tf.where(X < 4, X, X*10) print(out.eval()) _out = np.where(_X < 4, _X, _X*10) assert np.array_equal(out.eval(), _out) # tf.where == np.where
programming/Python/tensorflow/exercises/Math_Part3_Solutions.ipynb
diegocavalca/Studies
cc0-1.0
Q12. Get unique elements and their indices from x.
_x = np.array([1, 2, 6, 4, 2, 3, 2]) x = tf.convert_to_tensor(_x) out, indices = tf.unique(x) print(out.eval()) print(indices.eval()) _out, _indices = np.unique(_x, return_inverse=True) print("sorted unique elements =", _out) print("indices =", _indices) # Note that tf.unique keeps the original order, whereas # np.unique sorts the unique members.
programming/Python/tensorflow/exercises/Math_Part3_Solutions.ipynb
diegocavalca/Studies
cc0-1.0
Q13. Compute the edit distance between hypothesis and truth.
# Check the documentation on tf.SparseTensor if you are not # comfortable with sparse tensor. hypothesis = tf.SparseTensor( [[0, 0],[0, 1],[0, 2],[0, 4]], ["a", "b", "c", "a"], (1, 5)) # Note that this is equivalent to the dense tensor. # [["a", "b", "c", 0, "a"]] truth = tf.SparseTensor( [[0, 0],[0, 2],[0, 4]], ["a", "c", "b"], (1, 6)) # This is equivalent to the dense tensor. # [["a", 0, "c", 0, "b", 0]] out1 = tf.edit_distance(hypothesis, truth, normalize=False) out2 = tf.edit_distance(hypothesis, truth, normalize=True) print(out1.eval()) # 2 <- one deletion ("b") and one substitution ("a" to "b") print(out2.eval()) # 0.6666 <- 2 / 6
programming/Python/tensorflow/exercises/Math_Part3_Solutions.ipynb
diegocavalca/Studies
cc0-1.0
Import the data pmdarima contains an embedded datasets submodule that allows us to try out models on common datasets. We can load the MSFT stock data from pmdarima 1.3.0+:
from pmdarima.datasets.stocks import load_msft df = load_msft() df.head()
examples/stock_market_example.ipynb
alkaline-ml/pmdarima
mit
Split the data As in the blog post, we'll use 80% of the samples as training data. Note that a time series' train/test split is different from that of a dataset without temporality; order must be preserved if we hope to discover any notable trends.
train_len = int(df.shape[0] * 0.8) train_data, test_data = df[:train_len], df[train_len:] y_train = train_data['Open'].values y_test = test_data['Open'].values print(f"{train_len} train samples") print(f"{df.shape[0] - train_len} test samples")
examples/stock_market_example.ipynb
alkaline-ml/pmdarima
mit
Pre-modeling analysis TDS fixed p at 5 based on some lag plot analysis:
from pandas.plotting import lag_plot fig, axes = plt.subplots(3, 2, figsize=(12, 16)) plt.title('MSFT Autocorrelation plot') # The axis coordinates for the plots ax_idcs = [ (0, 0), (0, 1), (1, 0), (1, 1), (2, 0), (2, 1) ] for lag, ax_coords in enumerate(ax_idcs, 1): ax_row, ax_col = ax_coords axis = axes[ax_row][ax_col] lag_plot(df['Open'], lag=lag, ax=axis) axis.set_title(f"Lag={lag}") plt.show()
examples/stock_market_example.ipynb
alkaline-ml/pmdarima
mit
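Lag plots are one way to eyeball autocorrelation; an equivalent and arguably more standard check is to plot the ACF and PACF of the series. The sketch below uses statsmodels (a pmdarima dependency) and is supplementary to the original analysis:

```python
import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf

# Significant spikes in the PACF suggest candidate auto-regressive (p) terms,
# complementing the lag plots above
fig, axes = plt.subplots(1, 2, figsize=(12, 4))
plot_acf(df['Open'], lags=20, ax=axes[0])
plot_pacf(df['Open'], lags=20, ax=axes[1])
plt.show()
```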
All lags look fairly linear, so it's a good indicator that an auto-regressive model is a good choice. Therefore, we'll allow auto_arima to select the lag term for us, up to 6. Estimating the differencing term We can estimate the differencing term with several statistical tests:
from pmdarima.arima import ndiffs kpss_diffs = ndiffs(y_train, alpha=0.05, test='kpss', max_d=6) adf_diffs = ndiffs(y_train, alpha=0.05, test='adf', max_d=6) n_diffs = max(adf_diffs, kpss_diffs) print(f"Estimated differencing term: {n_diffs}")
examples/stock_market_example.ipynb
alkaline-ml/pmdarima
mit
Use auto_arima to fit a model on the data.
auto = pm.auto_arima(y_train, d=n_diffs, seasonal=False, stepwise=True, suppress_warnings=True, error_action="ignore", max_p=6, max_order=None, trace=True) print(auto.order) from sklearn.metrics import mean_squared_error from pmdarima.metrics import smape model = auto def forecast_one_step(): fc, conf_int = model.predict(n_periods=1, return_conf_int=True) return ( fc.tolist()[0], np.asarray(conf_int).tolist()[0]) forecasts = [] confidence_intervals = [] for new_ob in y_test: fc, conf = forecast_one_step() forecasts.append(fc) confidence_intervals.append(conf) # Updates the existing model with a small number of MLE steps model.update(new_ob) print(f"Mean squared error: {mean_squared_error(y_test, forecasts)}") print(f"SMAPE: {smape(y_test, forecasts)}") fig, axes = plt.subplots(2, 1, figsize=(12, 12)) # --------------------- Actual vs. Predicted -------------------------- axes[0].plot(y_train, color='blue', label='Training Data') axes[0].plot(test_data.index, forecasts, color='green', marker='o', label='Predicted Price') axes[0].plot(test_data.index, y_test, color='red', label='Actual Price') axes[0].set_title('Microsoft Prices Prediction') axes[0].set_xlabel('Dates') axes[0].set_ylabel('Prices') axes[0].set_xticks(np.arange(0, 7982, 1300).tolist(), df['Date'][0:7982:1300].tolist()) axes[0].legend() # ------------------ Predicted with confidence intervals ---------------- axes[1].plot(y_train, color='blue', label='Training Data') axes[1].plot(test_data.index, forecasts, color='green', label='Predicted Price') axes[1].set_title('Prices Predictions & Confidence Intervals') axes[1].set_xlabel('Dates') axes[1].set_ylabel('Prices') conf_int = np.asarray(confidence_intervals) axes[1].fill_between(test_data.index, conf_int[:, 0], conf_int[:, 1], alpha=0.9, color='orange', label="Confidence Intervals") axes[1].set_xticks(np.arange(0, 7982, 1300).tolist(), df['Date'][0:7982:1300].tolist()) axes[1].legend() df["Date"]
examples/stock_market_example.ipynb
alkaline-ml/pmdarima
mit
Date picker
widgets.DatePicker( description='Pick a Date' )
docs/source/examples/Widget List.ipynb
cornhundred/ipywidgets
bsd-3-clause