.on specifies when something should be executed; in our case, when the project has 20 trajectories. This is not yet the case, so this event will not do anything unless we simulate more trajectories. .do specifies the function to be called. The concept is borrowed from event-based languages, as often used in JavaScript. You can build quite complex execution patterns with this. An event, for example, also knows when it is finished, and this can be used as another trigger.
def hello():
    print 'DONE!!!'
    return []  # todo: allow for None here

finished = Event().on(ev.on_done).do(hello)

scheduler.add_event(ev)
scheduler.add_event(finished)
examples/rp/3_example_adaptive.ipynb
markovmodel/adaptivemd
lgpl-2.1
All events and tasks run in parallel, or at least are submitted and queued for execution in parallel. RP takes care of the actual execution.
print '# of files', len(project.files)
examples/rp/3_example_adaptive.ipynb
markovmodel/adaptivemd
lgpl-2.1
So for now, let's run more trajectories and schedule the computation of models at regular intervals.
ev1 = Event().on(project.on_ntraj(range(30, 70, 4))).do(task_generator)
ev2 = Event().on(project.on_ntraj(38)).do(
    lambda: modeller.execute(list(project.trajectories))
).repeat().until(ev1.on_done)

scheduler.add_event(ev1)
scheduler.add_event(ev2)

len(project.trajectories)
len(project.models)
examples/rp/3_example_adaptive.ipynb
markovmodel/adaptivemd
lgpl-2.1
.repeat means to redo the same task when the last one is finished (it will just append an infinite list of conditions to keep on running). .until specifies a termination condition. The event will not be executed once this condition is met. This makes most sense if you use .repeat, or if the trigger condition and the stopping condition should be independent. You might say, for example: run 100 times unless you have a good enough model.
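To make that last idea concrete, here is a hedged sketch of such a stopping rule. The helper model_is_good_enough and its convergence check are invented for illustration, and it assumes .until also accepts a plain callable (check the adaptivemd API before relying on this):

```python
# Hypothetical sketch only: stop remodelling once the model looks "good enough".
# `model_is_good_enough`, `project.models.last` and the 'converged' field are
# invented for illustration and are not guaranteed by the adaptivemd API.
def model_is_good_enough():
    return len(project.models) > 0 and project.models.last.data.get('converged', False)

ev3 = Event().on(project.on_ntraj(100)).do(
    lambda: modeller.execute(list(project.trajectories))
).repeat().until(model_is_good_enough)

scheduler.add_event(ev3)
```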
print project.files
examples/rp/3_example_adaptive.ipynb
markovmodel/adaptivemd
lgpl-2.1
Strategies (aka the brain) The brain is just a collection of events. This makes it reusable and easy to extend.
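As a minimal sketch of that idea (not the actual adaptivemd strategy interface, only an illustration built from the cells above), a reusable brain could be a plain function that builds and registers a set of events:

```python
# Illustrative sketch: a reusable "brain" as a function that returns events.
# The real adaptivemd strategy API may differ from this.
def simple_brain(project, scheduler, task_generator, modeller,
                 traj_target=70, step=4):
    ev_traj = Event().on(project.on_ntraj(range(30, traj_target, step))).do(task_generator)
    ev_model = Event().on(project.on_ntraj(38)).do(
        lambda: modeller.execute(list(project.trajectories))
    ).repeat().until(ev_traj.on_done)
    for ev in (ev_traj, ev_model):
        scheduler.add_event(ev)
    return [ev_traj, ev_model]
```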
project.close()
examples/rp/3_example_adaptive.ipynb
markovmodel/adaptivemd
lgpl-2.1
First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
with open('jinpingmei.txt', 'r') as f:
    text = f.read()

vocab = sorted(set(text))
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
encoded = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
intro-to-rnns/Anna_KaRNNa.ipynb
oscarmore2/deep-learning-study
mit
Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
text[:100]
intro-to-rnns/Anna_KaRNNa.ipynb
oscarmore2/deep-learning-study
mit
And we can see the characters encoded as integers.
encoded[:100]
intro-to-rnns/Anna_KaRNNa.ipynb
oscarmore2/deep-learning-study
mit
Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
len(vocab)
intro-to-rnns/Anna_KaRNNa.ipynb
oscarmore2/deep-learning-study
mit
Making training mini-batches Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this: <img src="assets/sequence_batching@1x.png" width=500px> <br>

We have our text encoded as integers as one long array in encoded. Let's create a function that will give us an iterator for our batches. I like using generator functions to do this. Then we can pass encoded into this function and get our batch generator.

The first thing we need to do is discard some of the text so we only have completely full batches. Each batch contains $N \times M$ characters, where $N$ is the batch size (the number of sequences) and $M$ is the number of steps. Then, to get the number of batches we can make from some array arr, you divide the length of arr by the number of characters per batch. Once you know the number of batches and the batch size, you can get the total number of characters to keep.

After that, we need to split arr into $N$ sequences. You can do this using arr.reshape(size) where size is a tuple containing the dimension sizes of the reshaped array. We know we want $N$ sequences (n_seqs below), so let's make that the size of the first dimension. For the second dimension, you can use -1 as a placeholder in the size; it'll fill up the array with the appropriate data for you. After this, you should have an array that is $N \times (M * K)$ where $K$ is the number of batches.

Now that we have this array, we can iterate through it to get our batches. The idea is that each batch is an $N \times M$ window on the array. For each subsequent batch, the window moves over by n_steps. We also want to create both the input and target arrays. Remember that the targets are the inputs shifted over by one character. You'll usually see the first input character used as the last target character, so something like this:

```python
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
```

where x is the input batch and y is the target batch.

The way I like to do this window is to use range to take steps of size n_steps from $0$ to arr.shape[1], the total number of steps in each sequence. That way, the integers you get from range always point to the start of a batch, and each window is n_steps wide.
def get_batches(arr, n_seqs, n_steps):
    '''Create a generator that returns batches of size
       n_seqs x n_steps from arr.

       Arguments
       ---------
       arr: Array you want to make batches from
       n_seqs: Batch size, the number of sequences per batch
       n_steps: Number of sequence steps per batch
    '''
    # Get the number of characters per batch and number of batches we can make
    characters_per_batch = n_seqs * n_steps
    n_batches = len(arr)//characters_per_batch

    # Keep only enough characters to make full batches
    arr = arr[:n_batches * characters_per_batch]

    # Reshape into n_seqs rows
    arr = arr.reshape((n_seqs, -1))

    for n in range(0, arr.shape[1], n_steps):
        # The features
        x = arr[:, n:n+n_steps]
        # The targets, shifted by one
        y = np.zeros_like(x)
        y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
        yield x, y
intro-to-rnns/Anna_KaRNNa.ipynb
oscarmore2/deep-learning-study
mit
Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
batches = get_batches(encoded, 10, 50)
x, y = next(batches)

print('x\n', x[:10, :10])
print('\ny\n', y[:10, :10])
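As a quick sanity check before comparing with the expected output below, the targets should just be the inputs shifted by one step, with the first input column wrapped around to the end:

```python
# Verify the shift produced by get_batches: y is x shifted left by one step,
# with the first input column wrapped around to the last target column.
assert np.array_equal(y[:, :-1], x[:, 1:])
assert np.array_equal(y[:, -1], x[:, 0])
```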
intro-to-rnns/Anna_KaRNNa.ipynb
oscarmore2/deep-learning-study
mit
If you implemented get_batches correctly, the above output should look something like

```
x
 [[55 63 69 22 6 76 45 5 16 35]
 [ 5 69 1 5 12 52 6 5 56 52]
 [48 29 12 61 35 35 8 64 76 78]
 [12 5 24 39 45 29 12 56 5 63]
 [ 5 29 6 5 29 78 28 5 78 29]
 [ 5 13 6 5 36 69 78 35 52 12]
 [63 76 12 5 18 52 1 76 5 58]
 [34 5 73 39 6 5 12 52 36 5]
 [ 6 5 29 78 12 79 6 61 5 59]
 [ 5 78 69 29 24 5 6 52 5 63]]

y
 [[63 69 22 6 76 45 5 16 35 35]
 [69 1 5 12 52 6 5 56 52 29]
 [29 12 61 35 35 8 64 76 78 28]
 [ 5 24 39 45 29 12 56 5 63 29]
 [29 6 5 29 78 28 5 78 29 45]
 [13 6 5 36 69 78 35 52 12 43]
 [76 12 5 18 52 1 76 5 58 52]
 [ 5 73 39 6 5 12 52 36 5 78]
 [ 5 29 78 12 79 6 61 5 59 63]
 [78 69 29 24 5 6 52 5 63 76]]
```

although the exact numbers will be different. Check to make sure the data is shifted over one step for y.

Building the model Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network. <img src="assets/charRNN.png" width=500px>

Inputs First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob.
def build_inputs(batch_size, num_steps):
    ''' Define placeholders for inputs, targets, and dropout

        Arguments
        ---------
        batch_size: Batch size, number of sequences per batch
        num_steps: Number of sequence steps in a batch
    '''
    # Declare placeholders we'll feed into the graph
    inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
    targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')

    # Keep probability placeholder for drop out layers
    keep_prob = tf.placeholder(tf.float32, name='keep_prob')

    return inputs, targets, keep_prob
intro-to-rnns/Anna_KaRNNa.ipynb
oscarmore2/deep-learning-study
mit
LSTM Cell Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer.

We first create a basic LSTM cell with

```python
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
```

where num_units is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with

```python
tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
```

You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. Previously with TensorFlow 1.0, you could do this:

```python
tf.contrib.rnn.MultiRNNCell([cell]*num_layers)
```

This might look a little weird if you know Python well because this will create a list of the same cell object. However, TensorFlow 1.0 will create different weight matrices for all cell objects. But, starting with TensorFlow 1.1 you actually need to create new cell objects in the list. To get it to work in TensorFlow 1.1, it should look like

```python
def build_cell(num_units, keep_prob):
    lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
    drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
    return drop

tf.contrib.rnn.MultiRNNCell([build_cell(num_units, keep_prob) for _ in range(num_layers)])
```

Even though this is actually multiple LSTM cells stacked on each other, you can treat the multiple layers as one cell.

We also need to create an initial cell state of all zeros. This can be done like so:

```python
initial_state = cell.zero_state(batch_size, tf.float32)
```

Below, we implement the build_lstm function to create these LSTM cells and the initial state.
def build_lstm(lstm_size, num_layers, batch_size, keep_prob):
    ''' Build LSTM cell.

        Arguments
        ---------
        keep_prob: Scalar tensor (tf.placeholder) for the dropout keep probability
        lstm_size: Size of the hidden layers in the LSTM cells
        num_layers: Number of LSTM layers
        batch_size: Batch size
    '''
    ### Build the LSTM Cell
    def build_cell(lstm_size, keep_prob):
        # Use a basic LSTM cell
        lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)

        # Add dropout to the cell
        drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
        return drop

    # Stack up multiple LSTM layers, for deep learning
    cell = tf.contrib.rnn.MultiRNNCell([build_cell(lstm_size, keep_prob) for _ in range(num_layers)])
    initial_state = cell.zero_state(batch_size, tf.float32)

    return cell, initial_state
intro-to-rnns/Anna_KaRNNa.ipynb
oscarmore2/deep-learning-study
mit
RNN Output Here we'll create the output layer. We need to connect the output of the RNN cells to a fully connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character.

If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$.

We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells. Once we have the outputs reshaped, we can do the matrix multiplication with the weights.

We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will by default. To avoid this, we wrap the variables in a variable scope so we can give them unique names.
def build_output(lstm_output, in_size, out_size):
    ''' Build a softmax layer, return the softmax output and logits.

        Arguments
        ---------
        x: Input tensor
        in_size: Size of the input tensor, for example, size of the LSTM cells
        out_size: Size of this softmax layer
    '''
    # Reshape output so it's a bunch of rows, one row for each step for each sequence.
    # That is, the shape should be batch_size*num_steps rows by lstm_size columns
    seq_output = tf.concat(lstm_output, axis=1)
    x = tf.reshape(seq_output, [-1, in_size])

    # Connect the RNN outputs to a softmax layer
    with tf.variable_scope('softmax'):
        softmax_w = tf.Variable(tf.truncated_normal((in_size, out_size), stddev=0.1))
        softmax_b = tf.Variable(tf.zeros(out_size))

    # Since output is a bunch of rows of RNN cell outputs, logits will be a bunch
    # of rows of logit outputs, one for each step and sequence
    logits = tf.matmul(x, softmax_w) + softmax_b

    # Use softmax to get the probabilities for predicted characters
    out = tf.nn.softmax(logits, name='predictions')

    return out, logits
intro-to-rnns/Anna_KaRNNa.ipynb
oscarmore2/deep-learning-study
mit
Training loss Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, since we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(MN) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(MN) \times C$. Then we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and take the mean to get the loss.
def build_loss(logits, targets, lstm_size, num_classes):
    ''' Calculate the loss from the logits and the targets.

        Arguments
        ---------
        logits: Logits from final fully connected layer
        targets: Targets for supervised learning
        lstm_size: Number of LSTM hidden units
        num_classes: Number of classes in targets
    '''
    # One-hot encode targets and reshape to match logits, one row per batch_size per step
    y_one_hot = tf.one_hot(targets, num_classes)
    y_reshaped = tf.reshape(y_one_hot, logits.get_shape())

    # Softmax cross entropy loss
    loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped)
    loss = tf.reduce_mean(loss)

    return loss
intro-to-rnns/Anna_KaRNNa.ipynb
oscarmore2/deep-learning-study
mit
Optimizer Here we build the optimizer. Normal RNNs have issues with exploding and vanishing gradients. LSTMs fix the vanishing problem, but the gradients can still grow without bound. To fix this, we can clip the gradients above some threshold. That is, if a gradient is larger than that threshold, we set it to the threshold. This ensures the gradients never grow overly large. Then we use an AdamOptimizer for the learning step.
def build_optimizer(loss, learning_rate, grad_clip):
    ''' Build optimizer for training, using gradient clipping.

        Arguments:
        loss: Network loss
        learning_rate: Learning rate for optimizer
    '''
    # Optimizer for training, using gradient clipping to control exploding gradients
    tvars = tf.trainable_variables()
    grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvars), grad_clip)
    train_op = tf.train.AdamOptimizer(learning_rate)
    optimizer = train_op.apply_gradients(zip(grads, tvars))

    return optimizer
intro-to-rnns/Anna_KaRNNa.ipynb
oscarmore2/deep-learning-study
mit
Build the network Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN.
class CharRNN:

    def __init__(self, num_classes, batch_size=64, num_steps=50,
                 lstm_size=128, num_layers=2, learning_rate=0.001,
                 grad_clip=5, sampling=False):

        # When we're using this network for sampling later, we'll be passing in
        # one character at a time, so providing an option for that
        if sampling == True:
            batch_size, num_steps = 1, 1
        else:
            batch_size, num_steps = batch_size, num_steps

        tf.reset_default_graph()

        # Build the input placeholder tensors
        self.inputs, self.targets, self.keep_prob = build_inputs(batch_size, num_steps)

        # Build the LSTM cell
        cell, self.initial_state = build_lstm(lstm_size, num_layers, batch_size, self.keep_prob)

        ### Run the data through the RNN layers
        # First, one-hot encode the input tokens
        x_one_hot = tf.one_hot(self.inputs, num_classes)

        # Run each sequence step through the RNN and collect the outputs
        outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=self.initial_state)
        self.final_state = state

        # Get softmax predictions and logits
        self.prediction, self.logits = build_output(outputs, lstm_size, num_classes)

        # Loss and optimizer (with gradient clipping)
        self.loss = build_loss(self.logits, self.targets, lstm_size, num_classes)
        self.optimizer = build_optimizer(self.loss, learning_rate, grad_clip)
intro-to-rnns/Anna_KaRNNa.ipynb
oscarmore2/deep-learning-study
mit
Hyperparameters Here I'm defining the hyperparameters for the network.

batch_size - Number of sequences running through the network in one pass.
num_steps - Number of characters in the sequence the network is trained on. Larger is typically better; the network will learn more long-range dependencies, but it takes longer to train. 100 is typically a good number here.
lstm_size - The number of units in the hidden layers.
num_layers - Number of hidden LSTM layers to use.
learning_rate - Learning rate for training.
keep_prob - The dropout keep probability when training. If your network is overfitting, try decreasing this.

Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from.

Tips and Tricks

Monitoring Validation Loss vs. Training Loss If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data, by default every 1000 iterations). In particular:

If your training loss is much lower than validation loss then this means the network might be overfitting. Solutions to this are to decrease your network size, or to increase dropout. For example you could try a dropout of 0.5 and so on.
If your training/validation losses are about equal then your model is underfitting. Increase the size of your model (either the number of layers or the raw number of neurons per layer).

Approximate number of parameters The two most important parameters that control the model are lstm_size and num_layers. I would advise that you always use a num_layers of either 2 or 3. The lstm_size can be adjusted based on how much data you have. The two important quantities to keep track of here are:

The number of parameters in your model. This is printed when you start training.
The size of your dataset. A 1MB file is approximately 1 million characters.

These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:

I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make lstm_size larger.
I have a 10MB dataset and I'm running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.

Best models strategy The winning strategy to obtaining very good models (if you have the compute time) is to always err on making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0 and 1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.

It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance. By the way, the sizes of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set, otherwise the validation performance will be noisy and not very informative.
batch_size = 128         # Sequences per batch
num_steps = 100          # Number of sequence steps per batch
lstm_size = 512          # Size of hidden layers in LSTMs
num_layers = 2           # Number of LSTM layers
learning_rate = 0.0003   # Learning rate
keep_prob = 0.5          # Dropout keep probability
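Following Karpathy's advice above about comparing parameter count to dataset size, here is a rough, hedged estimate of the LSTM parameter count for these settings. It assumes the standard 4-gate LSTM parameterization and ignores the softmax layer, so treat it as an order-of-magnitude figure only:

```python
# Rough estimate: each LSTM layer has 4 gates, each with a weight matrix of
# shape (input_size + lstm_size, lstm_size) plus a bias vector.
def approx_lstm_params(lstm_size, num_layers, num_classes):
    total = 0
    input_size = num_classes           # first layer sees one-hot characters
    for _ in range(num_layers):
        total += 4 * ((input_size + lstm_size) * lstm_size + lstm_size)
        input_size = lstm_size         # deeper layers see the previous layer's output
    return total

approx_lstm_params(lstm_size, num_layers, len(vocab))
```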
intro-to-rnns/Anna_KaRNNa.ipynb
oscarmore2/deep-learning-study
mit
Time for training This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint. Here I'm saving checkpoints with the format i{iteration number}_l{# hidden layer units}.ckpt
epochs = 50
# Save every N iterations
save_every_n = 200

model = CharRNN(len(vocab), batch_size=batch_size, num_steps=num_steps,
                lstm_size=lstm_size, num_layers=num_layers,
                learning_rate=learning_rate)

saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    # Use the line below to load a checkpoint and resume training
    #saver.restore(sess, 'checkpoints/______.ckpt')
    counter = 0
    for e in range(epochs):
        # Train network
        new_state = sess.run(model.initial_state)
        loss = 0
        for x, y in get_batches(encoded, batch_size, num_steps):
            counter += 1
            start = time.time()
            feed = {model.inputs: x,
                    model.targets: y,
                    model.keep_prob: keep_prob,
                    model.initial_state: new_state}
            batch_loss, new_state, _ = sess.run([model.loss,
                                                 model.final_state,
                                                 model.optimizer],
                                                feed_dict=feed)

            end = time.time()
            print('Epoch: {}/{}... '.format(e+1, epochs),
                  'Training Step: {}... '.format(counter),
                  'Training loss: {:.4f}... '.format(batch_loss),
                  '{:.4f} sec/batch'.format((end-start)))

            if (counter % save_every_n == 0):
                saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))

    saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
intro-to-rnns/Anna_KaRNNa.ipynb
oscarmore2/deep-learning-study
mit
Saved checkpoints Read up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables
tf.train.get_checkpoint_state('checkpoints')
intro-to-rnns/Anna_KaRNNa.ipynb
oscarmore2/deep-learning-study
mit
Sampling Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one to predict the next one, and we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.

The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
def pick_top_n(preds, vocab_size, top_n=5):
    p = np.squeeze(preds)
    p[np.argsort(p)[:-top_n]] = 0
    p = p / np.sum(p)
    c = np.random.choice(vocab_size, 1, p=p)[0]
    return c

def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
    samples = [c for c in prime]
    model = CharRNN(len(vocab), lstm_size=lstm_size, sampling=True)
    saver = tf.train.Saver()
    with tf.Session() as sess:
        saver.restore(sess, checkpoint)
        new_state = sess.run(model.initial_state)
        for c in prime:
            x = np.zeros((1, 1))
            x[0,0] = vocab_to_int[c]
            feed = {model.inputs: x,
                    model.keep_prob: 1.,
                    model.initial_state: new_state}
            preds, new_state = sess.run([model.prediction, model.final_state],
                                        feed_dict=feed)

        c = pick_top_n(preds, len(vocab))
        samples.append(int_to_vocab[c])

        for i in range(n_samples):
            x[0,0] = c
            feed = {model.inputs: x,
                    model.keep_prob: 1.,
                    model.initial_state: new_state}
            preds, new_state = sess.run([model.prediction, model.final_state],
                                        feed_dict=feed)

            c = pick_top_n(preds, len(vocab))
            samples.append(int_to_vocab[c])

    return ''.join(samples)
intro-to-rnns/Anna_KaRNNa.ipynb
oscarmore2/deep-learning-study
mit
Here, pass in the path to a checkpoint and sample from the network.
tf.train.latest_checkpoint('checkpoints')

checkpoint = tf.train.latest_checkpoint('checkpoints')
samp = sample(checkpoint, 7000, lstm_size, len(vocab), prime="浪")
print(samp)

checkpoint = 'checkpoints/i200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)

checkpoint = 'checkpoints/i600_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)

checkpoint = 'checkpoints/i1200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
intro-to-rnns/Anna_KaRNNa.ipynb
oscarmore2/deep-learning-study
mit
Add py files
sc.addPyFile('pyFiles/my_module.py')
SparkFiles.get('my_module.py')
notebooks/04-miscellaneous/add-python-files-to-spark-cluster.ipynb
MingChen0919/learning-apache-spark
mit
Use my_module.py We can import my_module as a python module
from my_module import *

addPyFiles_is_successfull()
sum_two_variables(4,5)
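For reference, a hypothetical my_module.py that would satisfy the two calls above could look like this; the actual file in the repository may of course differ:

```python
# Hypothetical contents of pyFiles/my_module.py (illustration only).
def addPyFiles_is_successfull():
    return 'my_module.py was shipped to the executors and imported successfully'

def sum_two_variables(a, b):
    return a + b
```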
notebooks/04-miscellaneous/add-python-files-to-spark-cluster.ipynb
MingChen0919/learning-apache-spark
mit
Single files
!h5ls fxe_control_example.h5

from karabo_data import H5File
f = H5File('fxe_control_example.h5')

f.control_sources
f.instrument_sources
docs/Demo.ipynb
European-XFEL/h5tools-py
bsd-3-clause
Get data by train
for tid, data in f.trains():
    print("Processing train", tid)
    print("beam iyPos:", data['SA1_XTD2_XGM/DOOCS/MAIN']['beamPosition.iyPos.value'])
    break

tid, data = f.train_from_id(10005)
data['FXE_XAD_GEC/CAM/CAMERA:daqOutput']['data.image.dims']
docs/Demo.ipynb
European-XFEL/h5tools-py
bsd-3-clause
These are just a few of the ways to access data. The attributes and methods described below for run directories also work with individual files. We expect that it will normally make sense to access a run directory as a single object, rather than working with the files separately. Run directories An experimental run is recorded as a collection of files in a directory. Another dummy example:
!ls fxe_example_run/

from karabo_data import RunDirectory
run = RunDirectory('fxe_example_run/')

run.files[:3]  # The objects for the individual files (see above)
docs/Demo.ipynb
European-XFEL/h5tools-py
bsd-3-clause
What devices were recording in this run? Control devices are slow data, recording once per train. Instrument devices includes detector data, but also some other data sources such as cameras. They can have more than one reading per train.
run.control_sources
run.instrument_sources
docs/Demo.ipynb
European-XFEL/h5tools-py
bsd-3-clause
Which trains are in this run?
print(run.train_ids[:10])
docs/Demo.ipynb
European-XFEL/h5tools-py
bsd-3-clause
See the available keys for a given source:
run.keys_for_source('SPB_XTD9_XGM/DOOCS/MAIN:output')
docs/Demo.ipynb
European-XFEL/h5tools-py
bsd-3-clause
This collects data from across files, including detector data:
for tid, data in run.trains():
    print("Processing train", tid)
    print("Detector data module 0 shape:",
          data['FXE_DET_LPD1M-1/DET/0CH0:xtdf']['image.data'].shape)
    break  # Stop after the first train to keep the demo short
docs/Demo.ipynb
European-XFEL/h5tools-py
bsd-3-clause
Train IDs are meant to be globally unique (although there were some glitches with this in the past). A train index is only meaningful within this run.
tid, data = run.train_from_id(10005)
tid, data = run.train_from_index(5)
docs/Demo.ipynb
European-XFEL/h5tools-py
bsd-3-clause
Series data to pandas Data which holds a single number per train (or per pulse) can be extracted as series (individual columns) and dataframes (tables) for pandas, a widely-used tool for data manipulation. karabo_data chains sequence files, which contain successive data from the same source. In this example, trains 10000–10399 are in one sequence file (...DA01-S00000.h5), and 10400–10479 are in another (...DA01-S00001.h5). They are concatenated into one series:
ixPos = run.get_series('SA1_XTD2_XGM/DOOCS/MAIN', 'beamPosition.ixPos.value')
ixPos.tail(10)
docs/Demo.ipynb
European-XFEL/h5tools-py
bsd-3-clause
To extract a dataframe, you can select interesting data fields with glob syntax, as often used for selecting files on Unix platforms.

[abc]: one character, a/b/c
?: any one character
*: any sequence of characters
run.get_dataframe(fields=[("*_XGM/*", "*.i[xy]Pos")])
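If it helps to see how these patterns behave, Python's fnmatch module implements the same shell-style globbing and can be used to test them outside karabo_data (illustration only; the exact matching rules used by get_dataframe may differ in detail):

```python
from fnmatch import fnmatchcase

# Device-name pattern and key-name pattern from the call above.
print(fnmatchcase('SA1_XTD2_XGM/DOOCS/MAIN', '*_XGM/*'))   # True
print(fnmatchcase('beamPosition.ixPos', '*.i[xy]Pos'))      # True
print(fnmatchcase('beamPosition.izPos', '*.i[xy]Pos'))      # False
```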
docs/Demo.ipynb
European-XFEL/h5tools-py
bsd-3-clause
Labelled arrays Data with extra dimensions can be handled as xarray labelled arrays. These are a wrapper around Numpy arrays with indexes which can be used to align them and select data.
xtd2_intensity = run.get_array('SA1_XTD2_XGM/DOOCS/MAIN:output', 'data.intensityTD',
                               extra_dims=['pulseID'])
xtd2_intensity
docs/Demo.ipynb
European-XFEL/h5tools-py
bsd-3-clause
Here's a brief example of using xarray to align the data and select by train ID. See the examples in the xarray docs for more on what it can do. In this example data, all the data sources have the same range of train IDs, so aligning them doesn't change anything. In real data, devices may miss some trains that other devices did record.
import xarray as xr

xtd9_intensity = run.get_array('SPB_XTD9_XGM/DOOCS/MAIN:output', 'data.intensityTD',
                               extra_dims=['pulseID'])

# Align two arrays, keep only trains which they both have data for:
xtd2_intensity, xtd9_intensity = xr.align(xtd2_intensity, xtd9_intensity, join='inner')

# Select data for a single train by train ID:
xtd2_intensity.sel(trainId=10004)

# Select data from a range of train IDs.
# This includes the end value, unlike normal Python indexing
xtd2_intensity.loc[10004:10006]
docs/Demo.ipynb
European-XFEL/h5tools-py
bsd-3-clause
You can also specify a region of interest from an array to load only part of the data:
from karabo_data import by_index

# Select the first 5 trains in this run:
sel = run.select_trains(by_index[:5])

# Get the whole of this array:
arr = sel.get_array('FXE_XAD_GEC/CAM/CAMERA:daqOutput', 'data.image.pixels')
print("Whole array shape:", arr.shape)

# Get a region of interest
arr2 = sel.get_array('FXE_XAD_GEC/CAM/CAMERA:daqOutput', 'data.image.pixels',
                     roi=by_index[100:200, :512])
print("ROI array shape:", arr2.shape)
docs/Demo.ipynb
European-XFEL/h5tools-py
bsd-3-clause
General information karabo_data provides a few ways to get general information about what's in data files. First, from Python code:
run.info()
run.detector_info('FXE_DET_LPD1M-1/DET/0CH0:xtdf')
docs/Demo.ipynb
European-XFEL/h5tools-py
bsd-3-clause
The lsxfel command provides similar information at the command line:
!lsxfel fxe_example_run/RAW-R0450-LPD00-S00000.h5
!lsxfel fxe_example_run/RAW-R0450-DA01-S00000.h5
!lsxfel fxe_example_run
docs/Demo.ipynb
European-XFEL/h5tools-py
bsd-3-clause
<table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/hub/tutorials/tf2_object_detection"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/tf2_object_detection.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/hub/blob/master/examples/colab/tf2_object_detection.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/hub/examples/colab/tf2_object_detection.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> <td> <a href="https://tfhub.dev/tensorflow/collections/object_detection/1"><img src="https://www.tensorflow.org/images/hub_logo_32px.png" />See TF Hub models</a> </td> </table>

TensorFlow Hub Object Detection Colab Welcome to the TensorFlow Hub Object Detection Colab! This notebook will take you through the steps of running an "out-of-the-box" object detection model on images.

More models This collection contains TF2 object detection models that have been trained on the COCO 2017 dataset. Here you can find all object detection models that are currently hosted on tfhub.dev.

Imports and Setup Let's start with the base imports.
# This Colab requires TF 2.5.
!pip install -U "tensorflow>=2.5"

import os
import pathlib

import matplotlib
import matplotlib.pyplot as plt

import io
import scipy.misc
import numpy as np
from six import BytesIO
from PIL import Image, ImageDraw, ImageFont
from six.moves.urllib.request import urlopen

import tensorflow as tf
import tensorflow_hub as hub

tf.get_logger().setLevel('ERROR')
site/en-snapshot/hub/tutorials/tf2_object_detection.ipynb
tensorflow/docs-l10n
apache-2.0
Utilities Run the following cell to create some utils that will be needed later:

Helper method to load an image
Map of Model Name to TF Hub handle
List of tuples with Human Keypoints for the COCO 2017 dataset. This is needed for models with keypoints.
# @title Run this!! def load_image_into_numpy_array(path): """Load an image from file into a numpy array. Puts image into numpy array to feed into tensorflow graph. Note that by convention we put it into a numpy array with shape (height, width, channels), where channels=3 for RGB. Args: path: the file path to the image Returns: uint8 numpy array with shape (img_height, img_width, 3) """ image = None if(path.startswith('http')): response = urlopen(path) image_data = response.read() image_data = BytesIO(image_data) image = Image.open(image_data) else: image_data = tf.io.gfile.GFile(path, 'rb').read() image = Image.open(BytesIO(image_data)) (im_width, im_height) = image.size return np.array(image.getdata()).reshape( (1, im_height, im_width, 3)).astype(np.uint8) ALL_MODELS = { 'CenterNet HourGlass104 512x512' : 'https://tfhub.dev/tensorflow/centernet/hourglass_512x512/1', 'CenterNet HourGlass104 Keypoints 512x512' : 'https://tfhub.dev/tensorflow/centernet/hourglass_512x512_kpts/1', 'CenterNet HourGlass104 1024x1024' : 'https://tfhub.dev/tensorflow/centernet/hourglass_1024x1024/1', 'CenterNet HourGlass104 Keypoints 1024x1024' : 'https://tfhub.dev/tensorflow/centernet/hourglass_1024x1024_kpts/1', 'CenterNet Resnet50 V1 FPN 512x512' : 'https://tfhub.dev/tensorflow/centernet/resnet50v1_fpn_512x512/1', 'CenterNet Resnet50 V1 FPN Keypoints 512x512' : 'https://tfhub.dev/tensorflow/centernet/resnet50v1_fpn_512x512_kpts/1', 'CenterNet Resnet101 V1 FPN 512x512' : 'https://tfhub.dev/tensorflow/centernet/resnet101v1_fpn_512x512/1', 'CenterNet Resnet50 V2 512x512' : 'https://tfhub.dev/tensorflow/centernet/resnet50v2_512x512/1', 'CenterNet Resnet50 V2 Keypoints 512x512' : 'https://tfhub.dev/tensorflow/centernet/resnet50v2_512x512_kpts/1', 'EfficientDet D0 512x512' : 'https://tfhub.dev/tensorflow/efficientdet/d0/1', 'EfficientDet D1 640x640' : 'https://tfhub.dev/tensorflow/efficientdet/d1/1', 'EfficientDet D2 768x768' : 'https://tfhub.dev/tensorflow/efficientdet/d2/1', 'EfficientDet D3 896x896' : 'https://tfhub.dev/tensorflow/efficientdet/d3/1', 'EfficientDet D4 1024x1024' : 'https://tfhub.dev/tensorflow/efficientdet/d4/1', 'EfficientDet D5 1280x1280' : 'https://tfhub.dev/tensorflow/efficientdet/d5/1', 'EfficientDet D6 1280x1280' : 'https://tfhub.dev/tensorflow/efficientdet/d6/1', 'EfficientDet D7 1536x1536' : 'https://tfhub.dev/tensorflow/efficientdet/d7/1', 'SSD MobileNet v2 320x320' : 'https://tfhub.dev/tensorflow/ssd_mobilenet_v2/2', 'SSD MobileNet V1 FPN 640x640' : 'https://tfhub.dev/tensorflow/ssd_mobilenet_v1/fpn_640x640/1', 'SSD MobileNet V2 FPNLite 320x320' : 'https://tfhub.dev/tensorflow/ssd_mobilenet_v2/fpnlite_320x320/1', 'SSD MobileNet V2 FPNLite 640x640' : 'https://tfhub.dev/tensorflow/ssd_mobilenet_v2/fpnlite_640x640/1', 'SSD ResNet50 V1 FPN 640x640 (RetinaNet50)' : 'https://tfhub.dev/tensorflow/retinanet/resnet50_v1_fpn_640x640/1', 'SSD ResNet50 V1 FPN 1024x1024 (RetinaNet50)' : 'https://tfhub.dev/tensorflow/retinanet/resnet50_v1_fpn_1024x1024/1', 'SSD ResNet101 V1 FPN 640x640 (RetinaNet101)' : 'https://tfhub.dev/tensorflow/retinanet/resnet101_v1_fpn_640x640/1', 'SSD ResNet101 V1 FPN 1024x1024 (RetinaNet101)' : 'https://tfhub.dev/tensorflow/retinanet/resnet101_v1_fpn_1024x1024/1', 'SSD ResNet152 V1 FPN 640x640 (RetinaNet152)' : 'https://tfhub.dev/tensorflow/retinanet/resnet152_v1_fpn_640x640/1', 'SSD ResNet152 V1 FPN 1024x1024 (RetinaNet152)' : 'https://tfhub.dev/tensorflow/retinanet/resnet152_v1_fpn_1024x1024/1', 'Faster R-CNN ResNet50 V1 640x640' : 
'https://tfhub.dev/tensorflow/faster_rcnn/resnet50_v1_640x640/1', 'Faster R-CNN ResNet50 V1 1024x1024' : 'https://tfhub.dev/tensorflow/faster_rcnn/resnet50_v1_1024x1024/1', 'Faster R-CNN ResNet50 V1 800x1333' : 'https://tfhub.dev/tensorflow/faster_rcnn/resnet50_v1_800x1333/1', 'Faster R-CNN ResNet101 V1 640x640' : 'https://tfhub.dev/tensorflow/faster_rcnn/resnet101_v1_640x640/1', 'Faster R-CNN ResNet101 V1 1024x1024' : 'https://tfhub.dev/tensorflow/faster_rcnn/resnet101_v1_1024x1024/1', 'Faster R-CNN ResNet101 V1 800x1333' : 'https://tfhub.dev/tensorflow/faster_rcnn/resnet101_v1_800x1333/1', 'Faster R-CNN ResNet152 V1 640x640' : 'https://tfhub.dev/tensorflow/faster_rcnn/resnet152_v1_640x640/1', 'Faster R-CNN ResNet152 V1 1024x1024' : 'https://tfhub.dev/tensorflow/faster_rcnn/resnet152_v1_1024x1024/1', 'Faster R-CNN ResNet152 V1 800x1333' : 'https://tfhub.dev/tensorflow/faster_rcnn/resnet152_v1_800x1333/1', 'Faster R-CNN Inception ResNet V2 640x640' : 'https://tfhub.dev/tensorflow/faster_rcnn/inception_resnet_v2_640x640/1', 'Faster R-CNN Inception ResNet V2 1024x1024' : 'https://tfhub.dev/tensorflow/faster_rcnn/inception_resnet_v2_1024x1024/1', 'Mask R-CNN Inception ResNet V2 1024x1024' : 'https://tfhub.dev/tensorflow/mask_rcnn/inception_resnet_v2_1024x1024/1' } IMAGES_FOR_TEST = { 'Beach' : 'models/research/object_detection/test_images/image2.jpg', 'Dogs' : 'models/research/object_detection/test_images/image1.jpg', # By Heiko Gorski, Source: https://commons.wikimedia.org/wiki/File:Naxos_Taverna.jpg 'Naxos Taverna' : 'https://upload.wikimedia.org/wikipedia/commons/6/60/Naxos_Taverna.jpg', # Source: https://commons.wikimedia.org/wiki/File:The_Coleoptera_of_the_British_islands_(Plate_125)_(8592917784).jpg 'Beatles' : 'https://upload.wikimedia.org/wikipedia/commons/1/1b/The_Coleoptera_of_the_British_islands_%28Plate_125%29_%288592917784%29.jpg', # By Américo Toledano, Source: https://commons.wikimedia.org/wiki/File:Biblioteca_Maim%C3%B3nides,_Campus_Universitario_de_Rabanales_007.jpg 'Phones' : 'https://upload.wikimedia.org/wikipedia/commons/thumb/0/0d/Biblioteca_Maim%C3%B3nides%2C_Campus_Universitario_de_Rabanales_007.jpg/1024px-Biblioteca_Maim%C3%B3nides%2C_Campus_Universitario_de_Rabanales_007.jpg', # Source: https://commons.wikimedia.org/wiki/File:The_smaller_British_birds_(8053836633).jpg 'Birds' : 'https://upload.wikimedia.org/wikipedia/commons/0/09/The_smaller_British_birds_%288053836633%29.jpg', } COCO17_HUMAN_POSE_KEYPOINTS = [(0, 1), (0, 2), (1, 3), (2, 4), (0, 5), (0, 6), (5, 7), (7, 9), (6, 8), (8, 10), (5, 6), (5, 11), (6, 12), (11, 12), (11, 13), (13, 15), (12, 14), (14, 16)]
site/en-snapshot/hub/tutorials/tf2_object_detection.ipynb
tensorflow/docs-l10n
apache-2.0
Visualization tools To visualize the images with the proper detected boxes, keypoints and segmentation, we will use the TensorFlow Object Detection API. To install it we will clone the repo.
# Clone the tensorflow models repository !git clone --depth 1 https://github.com/tensorflow/models
site/en-snapshot/hub/tutorials/tf2_object_detection.ipynb
tensorflow/docs-l10n
apache-2.0
Installing the Object Detection API
%%bash
sudo apt install -y protobuf-compiler
cd models/research/
protoc object_detection/protos/*.proto --python_out=.
cp object_detection/packages/tf2/setup.py .
python -m pip install .
site/en-snapshot/hub/tutorials/tf2_object_detection.ipynb
tensorflow/docs-l10n
apache-2.0
Now we can import the dependencies we will need later
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as viz_utils
from object_detection.utils import ops as utils_ops

%matplotlib inline
site/en-snapshot/hub/tutorials/tf2_object_detection.ipynb
tensorflow/docs-l10n
apache-2.0
Load label map data (for plotting). Label maps map index numbers to category names, so that when our convolutional network predicts 5, we know that this corresponds to airplane. Here we use internal utility functions, but anything that returns a dictionary mapping integers to appropriate string labels would be fine. For simplicity, we will load the label map from the repository from which we cloned the Object Detection API code.
PATH_TO_LABELS = './models/research/object_detection/data/mscoco_label_map.pbtxt'
category_index = label_map_util.create_category_index_from_labelmap(PATH_TO_LABELS,
                                                                    use_display_name=True)
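As noted above, anything that maps class indices to display names would also work. Purely for illustration, a hand-built stand-in covering two COCO classes could look like this (the real category_index built above from mscoco_label_map.pbtxt is what the visualization code uses):

```python
# Illustration only: a minimal category_index-style mapping for two COCO classes.
manual_category_index = {
    1: {'id': 1, 'name': 'person'},
    5: {'id': 5, 'name': 'airplane'},
}
manual_category_index[5]['name']   # 'airplane', matching the example in the text above
```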
site/en-snapshot/hub/tutorials/tf2_object_detection.ipynb
tensorflow/docs-l10n
apache-2.0
Build a detection model and load pre-trained model weights Here we will choose which Object Detection model we will use. Select the architecture and it will be loaded automatically. If you want to change the model to try other architectures later, just change the next cell and execute the following ones. Tip: if you want to read more details about the selected model, you can follow the link (model handle) and read additional documentation on TF Hub. After you select a model, we will print the handle to make it easier.
#@title Model Selection { display-mode: "form", run: "auto" }
model_display_name = 'CenterNet HourGlass104 Keypoints 512x512' # @param ['CenterNet HourGlass104 512x512','CenterNet HourGlass104 Keypoints 512x512','CenterNet HourGlass104 1024x1024','CenterNet HourGlass104 Keypoints 1024x1024','CenterNet Resnet50 V1 FPN 512x512','CenterNet Resnet50 V1 FPN Keypoints 512x512','CenterNet Resnet101 V1 FPN 512x512','CenterNet Resnet50 V2 512x512','CenterNet Resnet50 V2 Keypoints 512x512','EfficientDet D0 512x512','EfficientDet D1 640x640','EfficientDet D2 768x768','EfficientDet D3 896x896','EfficientDet D4 1024x1024','EfficientDet D5 1280x1280','EfficientDet D6 1280x1280','EfficientDet D7 1536x1536','SSD MobileNet v2 320x320','SSD MobileNet V1 FPN 640x640','SSD MobileNet V2 FPNLite 320x320','SSD MobileNet V2 FPNLite 640x640','SSD ResNet50 V1 FPN 640x640 (RetinaNet50)','SSD ResNet50 V1 FPN 1024x1024 (RetinaNet50)','SSD ResNet101 V1 FPN 640x640 (RetinaNet101)','SSD ResNet101 V1 FPN 1024x1024 (RetinaNet101)','SSD ResNet152 V1 FPN 640x640 (RetinaNet152)','SSD ResNet152 V1 FPN 1024x1024 (RetinaNet152)','Faster R-CNN ResNet50 V1 640x640','Faster R-CNN ResNet50 V1 1024x1024','Faster R-CNN ResNet50 V1 800x1333','Faster R-CNN ResNet101 V1 640x640','Faster R-CNN ResNet101 V1 1024x1024','Faster R-CNN ResNet101 V1 800x1333','Faster R-CNN ResNet152 V1 640x640','Faster R-CNN ResNet152 V1 1024x1024','Faster R-CNN ResNet152 V1 800x1333','Faster R-CNN Inception ResNet V2 640x640','Faster R-CNN Inception ResNet V2 1024x1024','Mask R-CNN Inception ResNet V2 1024x1024']

model_handle = ALL_MODELS[model_display_name]

print('Selected model:'+ model_display_name)
print('Model Handle at TensorFlow Hub: {}'.format(model_handle))
site/en-snapshot/hub/tutorials/tf2_object_detection.ipynb
tensorflow/docs-l10n
apache-2.0
Loading the selected model from TensorFlow Hub Here we just need the model handle that was selected and use the Tensorflow Hub library to load it to memory.
print('loading model...')
hub_model = hub.load(model_handle)
print('model loaded!')
site/en-snapshot/hub/tutorials/tf2_object_detection.ipynb
tensorflow/docs-l10n
apache-2.0
Loading an image Let's try the model on a simple image. To help with this, we provide a list of test images. Here are some simple things to try out if you are curious:

* Try running inference on your own images; just upload them to Colab and load them the same way it's done in the cell below.
* Modify some of the input images and see if detection still works. Some simple things to try out here include flipping the image horizontally, or converting to grayscale (note that we still expect the input image to have 3 channels).

Be careful: when using images with an alpha channel, the model expects 3-channel images, and the alpha channel will count as a fourth.
#@title Image Selection (don't forget to execute the cell!) { display-mode: "form"}
selected_image = 'Beach' # @param ['Beach', 'Dogs', 'Naxos Taverna', 'Beatles', 'Phones', 'Birds']
flip_image_horizontally = False #@param {type:"boolean"}
convert_image_to_grayscale = False #@param {type:"boolean"}

image_path = IMAGES_FOR_TEST[selected_image]
image_np = load_image_into_numpy_array(image_path)

# Flip horizontally
if(flip_image_horizontally):
  image_np[0] = np.fliplr(image_np[0]).copy()

# Convert image to grayscale
if(convert_image_to_grayscale):
  image_np[0] = np.tile(
    np.mean(image_np[0], 2, keepdims=True), (1, 1, 3)).astype(np.uint8)

plt.figure(figsize=(24,32))
plt.imshow(image_np[0])
plt.show()
site/en-snapshot/hub/tutorials/tf2_object_detection.ipynb
tensorflow/docs-l10n
apache-2.0
Doing the inference To do the inference we just need to call our TF Hub loaded model. Things you can try:

* Print out result['detection_boxes'] and try to match the box locations to the boxes in the image. Notice that coordinates are given in normalized form (i.e., in the interval [0, 1]).
* Inspect other output keys present in the result. Full documentation can be found on the model's documentation page (point your browser to the model handle printed earlier).
# running inference
results = hub_model(image_np)

# different object detection models have additional results
# all of them are explained in the documentation
result = {key:value.numpy() for key,value in results.items()}
print(result.keys())
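If you try the first suggestion above and want to relate detection_boxes to pixel positions, remember that each box is [ymin, xmin, ymax, xmax] in normalized coordinates; here is a small hedged sketch for converting the first (typically highest-scoring) box:

```python
# Convert the first detection box from normalized [ymin, xmin, ymax, xmax]
# coordinates to pixel coordinates of the loaded image.
im_height, im_width = image_np.shape[1], image_np.shape[2]
ymin, xmin, ymax, xmax = result['detection_boxes'][0][0]
print('first box in pixels:',
      int(ymin * im_height), int(xmin * im_width),
      int(ymax * im_height), int(xmax * im_width))
```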
site/en-snapshot/hub/tutorials/tf2_object_detection.ipynb
tensorflow/docs-l10n
apache-2.0
Visualizing the results Here is where we will need the TensorFlow Object Detection API to show the boxes from the inference step (and the keypoints when available). The full documentation of this method can be seen here. Here you can, for example, set min_score_thresh to other values (between 0 and 1) to allow more detections in or to filter out more detections.
label_id_offset = 0
image_np_with_detections = image_np.copy()

# Use keypoints if available in detections
keypoints, keypoint_scores = None, None
if 'detection_keypoints' in result:
  keypoints = result['detection_keypoints'][0]
  keypoint_scores = result['detection_keypoint_scores'][0]

viz_utils.visualize_boxes_and_labels_on_image_array(
      image_np_with_detections[0],
      result['detection_boxes'][0],
      (result['detection_classes'][0] + label_id_offset).astype(int),
      result['detection_scores'][0],
      category_index,
      use_normalized_coordinates=True,
      max_boxes_to_draw=200,
      min_score_thresh=.30,
      agnostic_mode=False,
      keypoints=keypoints,
      keypoint_scores=keypoint_scores,
      keypoint_edges=COCO17_HUMAN_POSE_KEYPOINTS)

plt.figure(figsize=(24,32))
plt.imshow(image_np_with_detections[0])
plt.show()
site/en-snapshot/hub/tutorials/tf2_object_detection.ipynb
tensorflow/docs-l10n
apache-2.0
[Optional] Among the available object detection models there's Mask R-CNN, and the output of this model allows instance segmentation. To visualize it we will use the same method we did before, but adding an additional parameter: instance_masks=output_dict.get('detection_masks_reframed', None)
# Handle models with masks:
image_np_with_mask = image_np.copy()

if 'detection_masks' in result:
  # we need to convert np.arrays to tensors
  detection_masks = tf.convert_to_tensor(result['detection_masks'][0])
  detection_boxes = tf.convert_to_tensor(result['detection_boxes'][0])

  # Reframe the bbox mask to the image size.
  detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(
      detection_masks, detection_boxes,
      image_np.shape[1], image_np.shape[2])
  detection_masks_reframed = tf.cast(detection_masks_reframed > 0.5, tf.uint8)
  result['detection_masks_reframed'] = detection_masks_reframed.numpy()

viz_utils.visualize_boxes_and_labels_on_image_array(
      image_np_with_mask[0],
      result['detection_boxes'][0],
      (result['detection_classes'][0] + label_id_offset).astype(int),
      result['detection_scores'][0],
      category_index,
      use_normalized_coordinates=True,
      max_boxes_to_draw=200,
      min_score_thresh=.30,
      agnostic_mode=False,
      instance_masks=result.get('detection_masks_reframed', None),
      line_thickness=8)

plt.figure(figsize=(24,32))
plt.imshow(image_np_with_mask[0])
plt.show()
site/en-snapshot/hub/tutorials/tf2_object_detection.ipynb
tensorflow/docs-l10n
apache-2.0
Permutation
from numpy.random import permutation

def permutate(X):
    new_array = permutation(X)
    # calculate difference between first group and second
    return sum(new_array[0:2435]) - sum(new_array[2435::])

difference = []
for i in range(0,100000):
    difference.append(permutate(data.call))

# Confidence interval 95%,
# our result is very much outside the confidence interval of the difference between the two groups
print(np.percentile(difference, [2.5, 97.5]))

sns.distplot(difference)  # permuted data, normally distributed (CLT applies on this)

# Margin of error with Z-table
# Critical value is 1.96 in the Z-statistic for 0.95 (more than 30 samples and not skewed, hence normally distributed)
print(1.96 * np.std(difference))  # hence our value is outside the margin of error

# margin of error is the difference between the border of the confidence interval and the mean,
# which is 0 in this case, hence the margin of error calculated that way is 38.
np.percentile(difference, [2.5, 97.5])[1] - np.mean(difference)

diff = sum(data[data.race=='w'].call) - sum(data[data.race=='b'].call)
times = sum(difference > diff) + sum(difference < -diff)  # times the difference is bigger than the found difference
print(times)
print(times / 100000)  # p-value, hence clearly significant

# All measurements lead to the conclusion that it's very unlikely that our value would come from the permuted distribution.
# Therefore race is concluded to have an effect on callback.
exercises/statistics project 2/sliderule_dsi_inferential_statistics_exercise_2.ipynb
RoebideBruijn/datascience-intensive-course
mit
T-test
nw = sum(data.race == 'w')
nb = sum(data.race == 'b')
kw = sum(data[data.race=='w'].call)
kb = sum(data[data.race=='b'].call)

pw = kw/nw
pb = kb/nb
pw - pb  # difference in means

# You should actually use the Z-test, since it's normally distributed and over 30 samples,
# but T-test gives similar results.
from scipy.stats import ttest_ind
ttest_ind(data[data.race=='w'].call, data[data.race=='b'].call)  # p-value clearly significant

# 95% confidence interval
print((pw - pb) - 1.96 * np.sqrt(((pw*(1-pw))/nw) + ((pb*(1-pb))/nb)))  # lower limit
print((pw - pb) + 1.96 * np.sqrt(((pw*(1-pw))/nw) + ((pb*(1-pb))/nb)))  # upper limit
# No difference: 0, lies outside the confidence interval, hence race seems to have an effect

# margin of error
(pw - pb) + 1.96 * np.sqrt(((pw*(1-pw))/nw) + ((pb*(1-pb))/nb)) - (pw - pb)

# Our mean is 0.03, hence outside the margin of error, if the true mean would be 0.
# Also from this calculation it's clear that it's very likely that race has an effect on callback.
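Since the comments above note that a z-test would be the more appropriate choice here, this is a hedged sketch of the explicit two-proportion z-test using statsmodels (assuming statsmodels is available; its p-value should be very close to the t-test result):

```python
# Two-proportion z-test on the callback counts; should agree closely with the t-test above.
from statsmodels.stats.proportion import proportions_ztest

z_stat, p_value = proportions_ztest([kw, kb], [nw, nb])
print(z_stat, p_value)
```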
exercises/statistics project 2/sliderule_dsi_inferential_statistics_exercise_2.ipynb
RoebideBruijn/datascience-intensive-course
mit
Running You'll need a celery worker instance and a flower instance running. In one terminal window run

celery worker --loglevel INFO -A proj -E --autoscale 10,3

and in another terminal run

celery flower -A proj

Tasks API The tasks API is async, meaning calls will return immediately and you'll need to poll on task status.
# Done once for the whole docs
import requests, json

api_root = 'http://localhost:5555/api'
task_api = '{}/task'.format(api_root)
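The examples below call tasks named tasks.add and tasks.sub from the proj application. The project module itself is not shown in these docs; a hypothetical minimal version compatible with those task names might look like this:

```python
# Hypothetical tasks.py for the proj application, matching the task names used below.
from celery import Celery

app = Celery('proj', broker='amqp://', backend='rpc://')

@app.task
def add(x, y):
    return x + y

@app.task
def sub(x, y):
    return x - y
```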
docs/api.ipynb
alexmojaki/flower
bsd-3-clause
async-apply
args = {'args': [1, 2]}
url = '{}/async-apply/tasks.add'.format(task_api)
print(url)

resp = requests.post(url, data=json.dumps(args))
reply = resp.json()
reply
docs/api.ipynb
alexmojaki/flower
bsd-3-clause
We can see that we created a new task and it's pending. Note that the API is async, meaning it won't wait until the task finishes.

apply To create a task and wait for its result, you can use the 'apply' API.
args = {'args': [1, 2]}
url = '{}/apply/tasks.add'.format(task_api)
print(url)

resp = requests.post(url, data=json.dumps(args))
reply = resp.json()
reply
docs/api.ipynb
alexmojaki/flower
bsd-3-clause
result Gets the task result. This is async and will return immediately even if the task didn't finish (with state 'PENDING').
url = '{}/result/{}'.format(task_api, reply['task-id'])
print(url)

resp = requests.get(url)
resp.json()
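Because of that, a simple polling loop is the usual pattern; this hedged sketch just re-queries the result endpoint until the reported state is no longer PENDING:

```python
import time

# Poll the result endpoint until the task leaves the PENDING state.
state = 'PENDING'
while state == 'PENDING':
    resp = requests.get('{}/result/{}'.format(task_api, reply['task-id']))
    state = resp.json().get('state', 'PENDING')
    time.sleep(0.5)

resp.json()
```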
docs/api.ipynb
alexmojaki/flower
bsd-3-clause
revoke Revoke a running task.
# Run a task
args = {'args': [1, 2]}
resp = requests.post('{}/async-apply/tasks.sub'.format(task_api), data=json.dumps(args))
reply = resp.json()

# Now revoke it
url = '{}/revoke/{}'.format(task_api, reply['task-id'])
print(url)

resp = requests.post(url, data='terminate=True')
resp.json()
docs/api.ipynb
alexmojaki/flower
bsd-3-clause
rate-limit Update rate limit for a task.
worker = 'miki-manjaro'  # You'll need to get the worker name from the worker API (see below)
url = '{}/rate-limit/{}'.format(task_api, worker)
print(url)

resp = requests.post(url, params={'taskname': 'tasks.add', 'ratelimit': '10'})
resp.json()
docs/api.ipynb
alexmojaki/flower
bsd-3-clause
timeout Set timeout (both hard and soft) for a task.
url = '{}/timeout/{}'.format(task_api, worker)
print(url)

resp = requests.post(url, params={'taskname': 'tasks.add', 'hard': '3.14', 'soft': '3'})  # You can omit soft or hard
resp.json()
docs/api.ipynb
alexmojaki/flower
bsd-3-clause
Worker API
# Once for the documentation
worker_api = '{}/worker'.format(api_root)
docs/api.ipynb
alexmojaki/flower
bsd-3-clause
workers List workers.
url = '{}/workers'.format(api_root)  # Only one not under /worker
print(url)

resp = requests.get(url)
workers = resp.json()
workers
docs/api.ipynb
alexmojaki/flower
bsd-3-clause
pool/shutdown Shutdown a worker.
worker = list(workers)[0]  # first worker name; list() keeps this working on both Python 2 and 3
url = '{}/shutdown/{}'.format(worker_api, worker)
print(url)

resp = requests.post(url)
resp.json()
docs/api.ipynb
alexmojaki/flower
bsd-3-clause
pool/restart Restart a worker pool (you need to have CELERYD_POOL_RESTARTS enabled in your configuration).
pool_api = '{}/pool'.format(worker_api)
url = '{}/restart/{}'.format(pool_api, worker)
print(url)

resp = requests.post(url)
resp.json()
docs/api.ipynb
alexmojaki/flower
bsd-3-clause
pool/grow Grow the worker pool.
url = '{}/grow/{}'.format(pool_api, worker)
print(url)

resp = requests.post(url, params={'n': '10'})
resp.json()
docs/api.ipynb
alexmojaki/flower
bsd-3-clause
pool/shrink Shrink worker pool.
url = '{}/shrink/{}'.format(pool_api, worker)
print(url)

resp = requests.post(url, params={'n': '3'})
resp.json()
docs/api.ipynb
alexmojaki/flower
bsd-3-clause
pool/autoscale Autoscale a pool.
url = '{}/autoscale/{}'.format(pool_api, worker)
print(url)

resp = requests.post(url, params={'min': '3', 'max': '10'})
resp.json()
docs/api.ipynb
alexmojaki/flower
bsd-3-clause
queue/add-consumer Add a consumer to a queue.
queue_api = '{}/queue'.format(worker_api)
url = '{}/add-consumer/{}'.format(queue_api, worker)
print(url)

resp = requests.post(url, params={'queue': 'jokes'})
resp.json()
docs/api.ipynb
alexmojaki/flower
bsd-3-clause
queue/cancel-consumer
Cancel a consumer on a queue.
url = '{}/cancel-consumer/{}'.format(queue_api, worker)
print(url)
resp = requests.post(url, params={'queue': 'jokes'})
resp.json()
docs/api.ipynb
alexmojaki/flower
bsd-3-clause
Queue API
We assume that we have two queues: the default one, 'celery', and 'all'.
url = '{}/queues/length'.format(api_root)
print(url)
resp = requests.get(url)
resp.json()
docs/api.ipynb
alexmojaki/flower
bsd-3-clause
We open up a NANOGrav par/tim file combination with libstempo, and plot the residuals.
psr = T.tempopulsar(parfile=T.data + 'B1953+29_NANOGrav_dfg+12.par',
                    timfile=T.data + 'B1953+29_NANOGrav_dfg+12.tim')

LP.plotres(psr)
demo/libstempo-toasim-demo.ipynb
vallis/libstempo
mit
We now remove the computed residuals from the TOAs, obtaining (in effect) a perfect realization of the deterministic timing model. The pulsar parameters will have changed somewhat, so make_ideal calls fit() on the pulsar object.
LT.make_ideal(psr)

LP.plotres(psr)
demo/libstempo-toasim-demo.ipynb
vallis/libstempo
mit
We could add a single line of noise at $10^{6.5}$ Hz with an amplitude of 10 us (the add_line call is left commented out below). Here we put back radiometer noise, with rms amplitude equal to 1x the nominal TOA errors. All the noise-generating commands take an optional argument seed that will reseed the numpy pseudorandom-number generator, so you are able to reproduce the same instance of noise. However, if you issue several noise-generating commands in sequence, you should use different seeds.
#LT.add_line(psr,f=10**6.5,A=1e-5)
LT.add_efac(psr,efac=1.0,seed=1234)

LP.plotres(psr)
demo/libstempo-toasim-demo.ipynb
vallis/libstempo
mit
We could also add EQUAD quadrature noise (with add_equad) or its coarse-grained version (with add_jitter), but instead we prefer some red noise of "GW-like" amplitude $10^{-12}$ and spectral slope $\gamma = -3$.
LT.add_rednoise(psr,1e-12,3)

LP.plotres(psr)
demo/libstempo-toasim-demo.ipynb
vallis/libstempo
mit
Or, we may add a GW background as simulated by the tempo2 GWbkgrd plugin (see the docstring below).
LT.add_gwb(psr,flow=1e-8,gwAmp=5e-12)
LP.plotres(psr)

help(LT.add_gwb)

LT.createGWB([psr],Amp=5e-15,gam=13./3.)
LP.plotres(psr)
demo/libstempo-toasim-demo.ipynb
vallis/libstempo
mit
Refitting will remove some of the power.
psr.fit()

LP.plotres(psr)
demo/libstempo-toasim-demo.ipynb
vallis/libstempo
mit
All done! We can save the resulting par and tim files, and analyze them with your favorite pipeline.
psr.savepar('B1953+29-simulate.par')
psr.savetim('B1953+29-simulate.tim')
demo/libstempo-toasim-demo.ipynb
vallis/libstempo
mit
Note that currently the tim file that is output by tempo2 has a spurious "MODE 1" line that tempo2 does not like upon reloading. To erase it, you can do
T.purgetim('B1953+29-simulate.tim')
demo/libstempo-toasim-demo.ipynb
vallis/libstempo
mit
And if we reload the files we get back the same thing...
psr2 = T.tempopulsar(parfile='B1953+29-simulate.par', timfile='B1953+29-simulate.tim')

LP.plotres(psr2)
demo/libstempo-toasim-demo.ipynb
vallis/libstempo
mit
It's also possible to obtain a perfect realization of the timing model described in a par file without a tim file, by specifying a new set of observation times (in MJD) and errors (in us). The observation frequency, observatory, and flags can also be specified (see the docstring below).
psr = LT.fakepulsar(parfile=T.data+'B1953+29_NANOGrav_dfg+12.par',
                    obstimes=N.arange(53000,54800,30) + N.random.randn(60),  # observe every 30+-1 days
                    toaerr=0.1)

LT.add_efac(psr,efac=1.0,seed=1234)

LP.plotres(psr)

help(LT.fakepulsar)
demo/libstempo-toasim-demo.ipynb
vallis/libstempo
mit
Rather than generating fake TOAs you might want to calculate a pulsar's phase at a particular set of times. Using the tempopulsar object you can input an arbitrary set of observation times and use the residuals to get the pulsar's relative phase. For example:
# create a set of times (in MJD)
obstimes = N.arange(53000, 54800, 10, dtype=N.float128)
toaerr = 1e-3        # set the (probably arbitrary) errors in the times (us)
observatory = "ao"   # the observatory
obsfreq = 1440.0     # the observation frequency (MHz)

psr = T.tempopulsar(
    parfile="B1953+29-simulate.par",
    toas=obstimes,
    toaerrs=toaerr,
    observatory=observatory,
    obsfreq=obsfreq,
    dofit=False,
)

# get the phases in cycles (mod 1) referenced to the initial observation time
phases = psr.phaseresiduals(removemean=False)
demo/libstempo-toasim-demo.ipynb
vallis/libstempo
mit
The observation times can be input as an array of astropy Time objects. The TOA error values, observatory values, and observation frequencies can also be arrays of the same length as the array of observation times. If you want to extract phases referenced to a particular epoch, observatory and frequency, you can use the refphs argument to the residuals (or phaseresiduals) method of the tempopulsar object. For example, to reference the phase to an epoch of 52973 at the solar system barycentre you could use:
phaseref = psr.phaseresiduals(removemean="refphs", epoch=52973.0, site="@")
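A minimal sketch of the array-style inputs described above (assuming astropy is installed; the values simply mirror the previous cells and are only illustrative):

from astropy.time import Time

# observation times as astropy Time objects (MJD)
obstimes = Time(N.arange(53000, 54800, 10, dtype=float), format='mjd')

# per-TOA errors (us), observatories, and frequencies (MHz) as same-length arrays
toaerrs = N.full(len(obstimes), 1e-3)
observatories = N.array(["ao"] * len(obstimes))
obsfreqs = N.full(len(obstimes), 1440.0)

psr = T.tempopulsar(
    parfile="B1953+29-simulate.par",
    toas=obstimes,
    toaerrs=toaerrs,
    observatory=observatories,
    obsfreq=obsfreqs,
    dofit=False,
)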
demo/libstempo-toasim-demo.ipynb
vallis/libstempo
mit
Convolution: Naive forward pass
The core of a convolutional network is the convolution operation. In the file cs231n/layers.py, implement the forward pass for the convolution layer in the function conv_forward_naive. You don't have to worry too much about efficiency at this point; just write the code in whatever way you find most clear. You can test your implementation by running the following:
x_shape = (2, 3, 4, 4)
w_shape = (3, 3, 4, 4)
x = np.linspace(-0.1, 0.5, num=np.prod(x_shape)).reshape(x_shape)
w = np.linspace(-0.2, 0.3, num=np.prod(w_shape)).reshape(w_shape)
b = np.linspace(-0.1, 0.2, num=3)

conv_param = {'stride': 2, 'pad': 1}
out, _ = conv_forward_naive(x, w, b, conv_param)
correct_out = np.array([[[[-0.08759809, -0.10987781],
                          [-0.18387192, -0.2109216 ]],
                         [[ 0.21027089,  0.21661097],
                          [ 0.22847626,  0.23004637]],
                         [[ 0.50813986,  0.54309974],
                          [ 0.64082444,  0.67101435]]],
                        [[[-0.98053589, -1.03143541],
                          [-1.19128892, -1.24695841]],
                         [[ 0.69108355,  0.66880383],
                          [ 0.59480972,  0.56776003]],
                         [[ 2.36270298,  2.36904306],
                          [ 2.38090835,  2.38247847]]]])

# Compare your output to ours; difference should be around 2e-8
print('Testing conv_forward_naive')
print('difference: ', rel_error(out, correct_out))
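For reference, one possible naive implementation is sketched below. It is not necessarily identical to the reference solution you write in cs231n/layers.py, but it illustrates the loop over output positions that the test above exercises:

def conv_forward_naive_sketch(x, w, b, conv_param):
    # x: (N, C, H, W), w: (F, C, HH, WW), b: (F,)
    stride, pad = conv_param['stride'], conv_param['pad']
    N, C, H, W = x.shape
    F, _, HH, WW = w.shape
    H_out = 1 + (H + 2 * pad - HH) // stride
    W_out = 1 + (W + 2 * pad - WW) // stride
    x_pad = np.pad(x, ((0, 0), (0, 0), (pad, pad), (pad, pad)), mode='constant')
    out = np.zeros((N, F, H_out, W_out))
    for n in range(N):                  # loop over images
        for f in range(F):              # loop over filters
            for i in range(H_out):      # loop over output rows
                for j in range(W_out):  # loop over output columns
                    window = x_pad[n, :, i*stride:i*stride+HH, j*stride:j*stride+WW]
                    out[n, f, i, j] = np.sum(window * w[f]) + b[f]
    cache = (x, w, b, conv_param)
    return out, cache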
CS231n/assignment2/ConvolutionalNetworks.ipynb
ALEXKIRNAS/DataScience
mit
Aside: Image processing via convolutions
As a fun way to both check your implementation and gain a better understanding of the type of operation that convolutional layers can perform, we will set up an input containing two images and manually set up filters that perform common image processing operations (grayscale conversion and edge detection). The convolution forward pass will apply these operations to each of the input images. We can then visualize the results as a sanity check.
from scipy.misc import imread, imresize

kitten, puppy = imread('kitten.jpg'), imread('puppy.jpg')
# kitten is wide, and puppy is already square
d = kitten.shape[1] - kitten.shape[0]
kitten_cropped = kitten[:, d//2:-d//2, :]

img_size = 200   # Make this smaller if it runs too slow
x = np.zeros((2, 3, img_size, img_size))
x[0, :, :, :] = imresize(puppy, (img_size, img_size)).transpose((2, 0, 1))
x[1, :, :, :] = imresize(kitten_cropped, (img_size, img_size)).transpose((2, 0, 1))

# Set up a convolutional weights holding 2 filters, each 3x3
w = np.zeros((2, 3, 3, 3))

# The first filter converts the image to grayscale.
# Set up the red, green, and blue channels of the filter.
w[0, 0, :, :] = [[0, 0, 0], [0, 0.3, 0], [0, 0, 0]]
w[0, 1, :, :] = [[0, 0, 0], [0, 0.6, 0], [0, 0, 0]]
w[0, 2, :, :] = [[0, 0, 0], [0, 0.1, 0], [0, 0, 0]]

# Second filter detects horizontal edges in the blue channel.
w[1, 2, :, :] = [[1, 2, 1], [0, 0, 0], [-1, -2, -1]]

# Vector of biases. We don't need any bias for the grayscale
# filter, but for the edge detection filter we want to add 128
# to each output so that nothing is negative.
b = np.array([0, 128])

# Compute the result of convolving each input in x with each filter in w,
# offsetting by b, and storing the results in out.
out, _ = conv_forward_naive(x, w, b, {'stride': 1, 'pad': 1})

def imshow_noax(img, normalize=True):
    """ Tiny helper to show images as uint8 and remove axis labels """
    if normalize:
        img_max, img_min = np.max(img), np.min(img)
        img = 255.0 * (img - img_min) / (img_max - img_min)
    plt.imshow(img.astype('uint8'))
    plt.gca().axis('off')

# Show the original images and the results of the conv operation
plt.subplot(2, 3, 1)
imshow_noax(puppy, normalize=False)
plt.title('Original image')
plt.subplot(2, 3, 2)
imshow_noax(out[0, 0])
plt.title('Grayscale')
plt.subplot(2, 3, 3)
imshow_noax(out[0, 1])
plt.title('Edges')
plt.subplot(2, 3, 4)
imshow_noax(kitten_cropped, normalize=False)
plt.subplot(2, 3, 5)
imshow_noax(out[1, 0])
plt.subplot(2, 3, 6)
imshow_noax(out[1, 1])
plt.show()
CS231n/assignment2/ConvolutionalNetworks.ipynb
ALEXKIRNAS/DataScience
mit
Convolution: Naive backward pass
Implement the backward pass for the convolution operation in the function conv_backward_naive in the file cs231n/layers.py. Again, you don't need to worry too much about computational efficiency. When you are done, run the following to check your backward pass with a numeric gradient check.
np.random.seed(231)
x = np.random.randn(4, 3, 5, 5)
w = np.random.randn(2, 3, 3, 3)
b = np.random.randn(2,)
dout = np.random.randn(4, 2, 5, 5)
conv_param = {'stride': 1, 'pad': 1}

dx_num = eval_numerical_gradient_array(lambda x: conv_forward_naive(x, w, b, conv_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_forward_naive(x, w, b, conv_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_forward_naive(x, w, b, conv_param)[0], b, dout)

out, cache = conv_forward_naive(x, w, b, conv_param)
dx, dw, db = conv_backward_naive(dout, cache)

# Your errors should be around 1e-8
print('Testing conv_backward_naive function')
print('dx error: ', rel_error(dx, dx_num))
print('dw error: ', rel_error(dw, dw_num))
print('db error: ', rel_error(db, db_num))
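A corresponding backward-pass sketch (assuming the forward pass cached (x, w, b, conv_param), as in the forward sketch above) accumulates gradients window by window:

def conv_backward_naive_sketch(dout, cache):
    x, w, b, conv_param = cache
    stride, pad = conv_param['stride'], conv_param['pad']
    N, C, H, W = x.shape
    F, _, HH, WW = w.shape
    _, _, H_out, W_out = dout.shape
    x_pad = np.pad(x, ((0, 0), (0, 0), (pad, pad), (pad, pad)), mode='constant')
    dx_pad = np.zeros_like(x_pad)
    dw = np.zeros_like(w)
    db = dout.sum(axis=(0, 2, 3))  # bias gradient: sum over everything except the filter axis
    for n in range(N):
        for f in range(F):
            for i in range(H_out):
                for j in range(W_out):
                    window = x_pad[n, :, i*stride:i*stride+HH, j*stride:j*stride+WW]
                    dw[f] += window * dout[n, f, i, j]
                    dx_pad[n, :, i*stride:i*stride+HH, j*stride:j*stride+WW] += w[f] * dout[n, f, i, j]
    dx = dx_pad[:, :, pad:pad + H, pad:pad + W]  # strip the padding again
    return dx, dw, db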
CS231n/assignment2/ConvolutionalNetworks.ipynb
ALEXKIRNAS/DataScience
mit
Max pooling: Naive forward
Implement the forward pass for the max-pooling operation in the function max_pool_forward_naive in the file cs231n/layers.py. Again, don't worry too much about computational efficiency. Check your implementation by running the following:
x_shape = (2, 3, 4, 4)
x = np.linspace(-0.3, 0.4, num=np.prod(x_shape)).reshape(x_shape)
pool_param = {'pool_width': 2, 'pool_height': 2, 'stride': 2}

out, _ = max_pool_forward_naive(x, pool_param)

correct_out = np.array([[[[-0.26315789, -0.24842105],
                          [-0.20421053, -0.18947368]],
                         [[-0.14526316, -0.13052632],
                          [-0.08631579, -0.07157895]],
                         [[-0.02736842, -0.01263158],
                          [ 0.03157895,  0.04631579]]],
                        [[[ 0.09052632,  0.10526316],
                          [ 0.14947368,  0.16421053]],
                         [[ 0.20842105,  0.22315789],
                          [ 0.26736842,  0.28210526]],
                         [[ 0.32631579,  0.34105263],
                          [ 0.38526316,  0.4       ]]]])

# Compare your output with ours. Difference should be around 1e-8.
print('Testing max_pool_forward_naive function:')
print('difference: ', rel_error(out, correct_out))
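One possible naive sketch of the forward pass (again, not necessarily the reference solution) simply takes the maximum over each pooling window:

def max_pool_forward_naive_sketch(x, pool_param):
    N, C, H, W = x.shape
    ph, pw = pool_param['pool_height'], pool_param['pool_width']
    stride = pool_param['stride']
    H_out = 1 + (H - ph) // stride
    W_out = 1 + (W - pw) // stride
    out = np.zeros((N, C, H_out, W_out))
    for i in range(H_out):
        for j in range(W_out):
            window = x[:, :, i*stride:i*stride+ph, j*stride:j*stride+pw]
            out[:, :, i, j] = window.max(axis=(2, 3))  # max over the spatial window
    cache = (x, pool_param)
    return out, cache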
CS231n/assignment2/ConvolutionalNetworks.ipynb
ALEXKIRNAS/DataScience
mit
Max pooling: Naive backward
Implement the backward pass for the max-pooling operation in the function max_pool_backward_naive in the file cs231n/layers.py. You don't need to worry about computational efficiency. Check your implementation with numeric gradient checking by running the following:
np.random.seed(231)
x = np.random.randn(3, 2, 8, 8)
dout = np.random.randn(3, 2, 4, 4)
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}

dx_num = eval_numerical_gradient_array(lambda x: max_pool_forward_naive(x, pool_param)[0], x, dout)

out, cache = max_pool_forward_naive(x, pool_param)
dx = max_pool_backward_naive(dout, cache)

# Your error should be around 1e-12
print('Testing max_pool_backward_naive function:')
print('dx error: ', rel_error(dx, dx_num))
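A matching backward-pass sketch (assuming the forward pass cached (x, pool_param), as above) routes each upstream gradient to the location of the window maximum:

def max_pool_backward_naive_sketch(dout, cache):
    x, pool_param = cache
    N, C, H, W = x.shape
    ph, pw = pool_param['pool_height'], pool_param['pool_width']
    stride = pool_param['stride']
    _, _, H_out, W_out = dout.shape
    dx = np.zeros_like(x)
    for n in range(N):
        for c in range(C):
            for i in range(H_out):
                for j in range(W_out):
                    window = x[n, c, i*stride:i*stride+ph, j*stride:j*stride+pw]
                    mask = (window == window.max())  # 1 at the argmax, 0 elsewhere
                    dx[n, c, i*stride:i*stride+ph, j*stride:j*stride+pw] += mask * dout[n, c, i, j]
    return dx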
CS231n/assignment2/ConvolutionalNetworks.ipynb
ALEXKIRNAS/DataScience
mit
Fast layers
Making convolution and pooling layers fast can be challenging. To spare you the pain, we've provided fast implementations of the forward and backward passes for convolution and pooling layers in the file cs231n/fast_layers.py. The fast convolution implementation depends on a Cython extension; to compile it you need to run the following from the cs231n directory:

python setup.py build_ext --inplace

The API for the fast versions of the convolution and pooling layers is exactly the same as the naive versions that you implemented above: the forward pass receives data, weights, and parameters and produces outputs and a cache object; the backward pass receives upstream derivatives and the cache object and produces gradients with respect to the data and weights. NOTE: The fast implementation for pooling will only perform optimally if the pooling regions are non-overlapping and tile the input. If these conditions are not met then the fast pooling implementation will not be much faster than the naive implementation. You can compare the performance of the naive and fast versions of these layers by running the following:
from cs231n.fast_layers import conv_forward_fast, conv_backward_fast
from time import time

np.random.seed(231)
x = np.random.randn(100, 3, 31, 31)
w = np.random.randn(25, 3, 3, 3)
b = np.random.randn(25,)
dout = np.random.randn(100, 25, 16, 16)
conv_param = {'stride': 2, 'pad': 1}

t0 = time()
out_naive, cache_naive = conv_forward_naive(x, w, b, conv_param)
t1 = time()
out_fast, cache_fast = conv_forward_fast(x, w, b, conv_param)
t2 = time()

print('Testing conv_forward_fast:')
print('Naive: %fs' % (t1 - t0))
print('Fast: %fs' % (t2 - t1))
print('Speedup: %fx' % ((t1 - t0) / (t2 - t1)))
print('Difference: ', rel_error(out_naive, out_fast))

t0 = time()
dx_naive, dw_naive, db_naive = conv_backward_naive(dout, cache_naive)
t1 = time()
dx_fast, dw_fast, db_fast = conv_backward_fast(dout, cache_fast)
t2 = time()

print('\nTesting conv_backward_fast:')
print('Naive: %fs' % (t1 - t0))
print('Fast: %fs' % (t2 - t1))
print('Speedup: %fx' % ((t1 - t0) / (t2 - t1)))
print('dx difference: ', rel_error(dx_naive, dx_fast))
print('dw difference: ', rel_error(dw_naive, dw_fast))
print('db difference: ', rel_error(db_naive, db_fast))

from cs231n.fast_layers import max_pool_forward_fast, max_pool_backward_fast

np.random.seed(231)
x = np.random.randn(100, 3, 32, 32)
dout = np.random.randn(100, 3, 16, 16)
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}

t0 = time()
out_naive, cache_naive = max_pool_forward_naive(x, pool_param)
t1 = time()
out_fast, cache_fast = max_pool_forward_fast(x, pool_param)
t2 = time()

print('Testing pool_forward_fast:')
print('Naive: %fs' % (t1 - t0))
print('fast: %fs' % (t2 - t1))
print('speedup: %fx' % ((t1 - t0) / (t2 - t1)))
print('difference: ', rel_error(out_naive, out_fast))

t0 = time()
dx_naive = max_pool_backward_naive(dout, cache_naive)
t1 = time()
dx_fast = max_pool_backward_fast(dout, cache_fast)
t2 = time()

print('\nTesting pool_backward_fast:')
print('Naive: %fs' % (t1 - t0))
print('speedup: %fx' % ((t1 - t0) / (t2 - t1)))
print('dx difference: ', rel_error(dx_naive, dx_fast))
CS231n/assignment2/ConvolutionalNetworks.ipynb
ALEXKIRNAS/DataScience
mit
Convolutional "sandwich" layers
Previously we introduced the concept of "sandwich" layers that combine multiple operations into commonly used patterns. In the file cs231n/layer_utils.py you will find sandwich layers that implement a few commonly used patterns for convolutional networks.
from cs231n.layer_utils import conv_relu_pool_forward, conv_relu_pool_backward

np.random.seed(231)
x = np.random.randn(2, 3, 16, 16)
w = np.random.randn(3, 3, 3, 3)
b = np.random.randn(3,)
dout = np.random.randn(2, 3, 8, 8)
conv_param = {'stride': 1, 'pad': 1}
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}

out, cache = conv_relu_pool_forward(x, w, b, conv_param, pool_param)
dx, dw, db = conv_relu_pool_backward(dout, cache)

dx_num = eval_numerical_gradient_array(lambda x: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], b, dout)

print('Testing conv_relu_pool')
print('dx error: ', rel_error(dx_num, dx))
print('dw error: ', rel_error(dw_num, dw))
print('db error: ', rel_error(db_num, db))

from cs231n.layer_utils import conv_relu_forward, conv_relu_backward

np.random.seed(231)
x = np.random.randn(2, 3, 8, 8)
w = np.random.randn(3, 3, 3, 3)
b = np.random.randn(3,)
dout = np.random.randn(2, 3, 8, 8)
conv_param = {'stride': 1, 'pad': 1}

out, cache = conv_relu_forward(x, w, b, conv_param)
dx, dw, db = conv_relu_backward(dout, cache)

dx_num = eval_numerical_gradient_array(lambda x: conv_relu_forward(x, w, b, conv_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_relu_forward(x, w, b, conv_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_relu_forward(x, w, b, conv_param)[0], b, dout)

print('Testing conv_relu:')
print('dx error: ', rel_error(dx_num, dx))
print('dw error: ', rel_error(dw_num, dw))
print('db error: ', rel_error(db_num, db))
CS231n/assignment2/ConvolutionalNetworks.ipynb
ALEXKIRNAS/DataScience
mit
Three-layer ConvNet
Now that you have implemented all the necessary layers, we can put them together into a simple convolutional network. Open the file cs231n/classifiers/cnn.py and complete the implementation of the ThreeLayerConvNet class. Run the following cells to help you debug:
Sanity check loss
After you build a new network, one of the first things you should do is sanity check the loss. When we use the softmax loss, we expect the loss for random weights (and no regularization) to be about log(C) for C classes. When we add regularization this should go up.
model = ThreeLayerConvNet()

N = 50
X = np.random.randn(N, 3, 32, 32)
y = np.random.randint(10, size=N)

loss, grads = model.loss(X, y)
print('Initial loss (no regularization): ', loss)

model.reg = 0.5
loss, grads = model.loss(X, y)
print('Initial loss (with regularization): ', loss)
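As a quick worked reference for the sanity check above: with C = 10 classes and random weights, the softmax loss should start near -log(1/C).

# expected initial softmax loss for 10 classes
print('Expected initial loss ~', np.log(10))  # about 2.3026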
CS231n/assignment2/ConvolutionalNetworks.ipynb
ALEXKIRNAS/DataScience
mit
Gradient check
After the loss looks reasonable, use numeric gradient checking to make sure that your backward pass is correct. When you use numeric gradient checking you should use a small amount of artificial data and a small number of neurons at each layer. Note: correct implementations may still have relative errors up to 1e-2.
num_inputs = 2
input_dim = (3, 16, 16)
reg = 0.0
num_classes = 10

np.random.seed(231)
X = np.random.randn(num_inputs, *input_dim)
y = np.random.randint(num_classes, size=num_inputs)

model = ThreeLayerConvNet(num_filters=3, filter_size=3,
                          input_dim=input_dim, hidden_dim=7,
                          dtype=np.float64)
loss, grads = model.loss(X, y)
for param_name in sorted(grads):
    f = lambda _: model.loss(X, y)[0]
    param_grad_num = eval_numerical_gradient(f, model.params[param_name], verbose=False, h=1e-6)
    e = rel_error(param_grad_num, grads[param_name])
    print('%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name])))
CS231n/assignment2/ConvolutionalNetworks.ipynb
ALEXKIRNAS/DataScience
mit
Overfit small data
A nice trick is to train your model with just a few training samples. You should be able to overfit small datasets, which will result in very high training accuracy and comparatively low validation accuracy.
np.random.seed(231)

num_train = 100
small_data = {
  'X_train': data['X_train'][:num_train],
  'y_train': data['y_train'][:num_train],
  'X_val': data['X_val'],
  'y_val': data['y_val'],
}

model = ThreeLayerConvNet(weight_scale=1e-2)

solver = Solver(model, small_data,
                num_epochs=15, batch_size=50,
                update_rule='adam',
                optim_config={
                  'learning_rate': 1e-3,
                },
                verbose=True, print_every=1)
solver.train()
CS231n/assignment2/ConvolutionalNetworks.ipynb
ALEXKIRNAS/DataScience
mit
Plotting the loss, training accuracy, and validation accuracy should show clear overfitting:
plt.subplot(2, 1, 1)
plt.plot(solver.loss_history, 'o')
plt.xlabel('iteration')
plt.ylabel('loss')

plt.subplot(2, 1, 2)
plt.plot(solver.train_acc_history, '-o')
plt.plot(solver.val_acc_history, '-o')
plt.legend(['train', 'val'], loc='upper left')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.show()
CS231n/assignment2/ConvolutionalNetworks.ipynb
ALEXKIRNAS/DataScience
mit
Train the net
By training the three-layer convolutional network for one epoch, you should achieve greater than 40% accuracy on the training set:
model = ThreeLayerConvNet(weight_scale=0.001, hidden_dim=500, reg=0.001)

solver = Solver(model, data,
                num_epochs=1, batch_size=50,
                update_rule='adam',
                optim_config={
                  'learning_rate': 1e-3,
                },
                verbose=True, print_every=20)
solver.train()
CS231n/assignment2/ConvolutionalNetworks.ipynb
ALEXKIRNAS/DataScience
mit
Visualize Filters
You can visualize the first-layer convolutional filters from the trained network by running the following:
from cs231n.vis_utils import visualize_grid

grid = visualize_grid(model.params['W1'].transpose(0, 2, 3, 1))
plt.imshow(grid.astype('uint8'))
plt.axis('off')
plt.gcf().set_size_inches(5, 5)
plt.show()
CS231n/assignment2/ConvolutionalNetworks.ipynb
ALEXKIRNAS/DataScience
mit
Spatial Batch Normalization
We already saw that batch normalization is a very useful technique for training deep fully-connected networks. Batch normalization can also be used for convolutional networks, but we need to tweak it a bit; the modification will be called "spatial batch normalization." Normally batch-normalization accepts inputs of shape (N, D) and produces outputs of shape (N, D), where we normalize across the minibatch dimension N. For data coming from convolutional layers, batch normalization needs to accept inputs of shape (N, C, H, W) and produce outputs of shape (N, C, H, W) where the N dimension gives the minibatch size and the (H, W) dimensions give the spatial size of the feature map. If the feature map was produced using convolutions, then we expect the statistics of each feature channel to be relatively consistent both between different images and different locations within the same image. Therefore spatial batch normalization computes a mean and variance for each of the C feature channels by computing statistics over both the minibatch dimension N and the spatial dimensions H and W.
Spatial batch normalization: forward
In the file cs231n/layers.py, implement the forward pass for spatial batch normalization in the function spatial_batchnorm_forward. Check your implementation by running the following:
np.random.seed(231)

# Check the training-time forward pass by checking means and variances
# of features both before and after spatial batch normalization
N, C, H, W = 2, 3, 4, 5
x = 4 * np.random.randn(N, C, H, W) + 10

print('Before spatial batch normalization:')
print('  Shape: ', x.shape)
print('  Means: ', x.mean(axis=(0, 2, 3)))
print('  Stds: ', x.std(axis=(0, 2, 3)))

# Means should be close to zero and stds close to one
gamma, beta = np.ones(C), np.zeros(C)
bn_param = {'mode': 'train'}
out, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)
print('After spatial batch normalization:')
print('  Shape: ', out.shape)
print('  Means: ', out.mean(axis=(0, 2, 3)))
print('  Stds: ', out.std(axis=(0, 2, 3)))

# Means should be close to beta and stds close to gamma
gamma, beta = np.asarray([3, 4, 5]), np.asarray([6, 7, 8])
out, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)
print('After spatial batch normalization (nontrivial gamma, beta):')
print('  Shape: ', out.shape)
print('  Means: ', out.mean(axis=(0, 2, 3)))
print('  Stds: ', out.std(axis=(0, 2, 3)))

np.random.seed(231)

# Check the test-time forward pass by running the training-time
# forward pass many times to warm up the running averages, and then
# checking the means and variances of activations after a test-time
# forward pass.
N, C, H, W = 10, 4, 11, 12
bn_param = {'mode': 'train'}
gamma = np.ones(C)
beta = np.zeros(C)
for t in range(50):
    x = 2.3 * np.random.randn(N, C, H, W) + 13
    spatial_batchnorm_forward(x, gamma, beta, bn_param)

bn_param['mode'] = 'test'
x = 2.3 * np.random.randn(N, C, H, W) + 13
a_norm, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)

# Means should be close to zero and stds close to one, but will be
# noisier than training-time forward passes.
print('After spatial batch normalization (test-time):')
print('  means: ', a_norm.mean(axis=(0, 2, 3)))
print('  stds: ', a_norm.std(axis=(0, 2, 3)))
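One common way to implement the forward pass (a sketch, assuming the vanilla batchnorm_forward from the earlier part of the assignment is in scope) is to fold the spatial dimensions into the batch dimension and reuse it:

def spatial_batchnorm_forward_sketch(x, gamma, beta, bn_param):
    # (N, C, H, W) -> (N*H*W, C): every spatial location becomes its own "sample"
    N, C, H, W = x.shape
    x_flat = x.transpose(0, 2, 3, 1).reshape(-1, C)
    out_flat, cache = batchnorm_forward(x_flat, gamma, beta, bn_param)  # assumed helper from the FC part
    out = out_flat.reshape(N, H, W, C).transpose(0, 3, 1, 2)
    return out, cache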
CS231n/assignment2/ConvolutionalNetworks.ipynb
ALEXKIRNAS/DataScience
mit
Spatial batch normalization: backward
In the file cs231n/layers.py, implement the backward pass for spatial batch normalization in the function spatial_batchnorm_backward. Run the following to check your implementation using a numeric gradient check:
np.random.seed(231)
N, C, H, W = 2, 3, 4, 5
x = 5 * np.random.randn(N, C, H, W) + 12
gamma = np.random.randn(C)
beta = np.random.randn(C)
dout = np.random.randn(N, C, H, W)

bn_param = {'mode': 'train'}
fx = lambda x: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]
fg = lambda a: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]
fb = lambda b: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]

dx_num = eval_numerical_gradient_array(fx, x, dout)
da_num = eval_numerical_gradient_array(fg, gamma, dout)
db_num = eval_numerical_gradient_array(fb, beta, dout)

_, cache = spatial_batchnorm_forward(x, gamma, beta, bn_param)
dx, dgamma, dbeta = spatial_batchnorm_backward(dout, cache)
print('dx error: ', rel_error(dx_num, dx))
print('dgamma error: ', rel_error(da_num, dgamma))
print('dbeta error: ', rel_error(db_num, dbeta))
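The backward pass can reuse the vanilla batchnorm_backward in the same way (again a sketch, assuming that helper from the fully-connected part of the assignment is in scope):

def spatial_batchnorm_backward_sketch(dout, cache):
    # fold the spatial dimensions into the batch dimension, as in the forward sketch
    N, C, H, W = dout.shape
    dout_flat = dout.transpose(0, 2, 3, 1).reshape(-1, C)
    dx_flat, dgamma, dbeta = batchnorm_backward(dout_flat, cache)  # assumed helper from the FC part
    dx = dx_flat.reshape(N, H, W, C).transpose(0, 3, 1, 2)
    return dx, dgamma, dbeta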
CS231n/assignment2/ConvolutionalNetworks.ipynb
ALEXKIRNAS/DataScience
mit
Viewing a RAW FITS File
Assume you have a file: fits_data/raw_fits/single_ccd.fits ...containing an unmodified FITS full frame image. To get started, open this file and extract a httm.data_structures.raw_converter.SingleCCDRawConverter object. This is done by calling httm.fits_utilities.raw_fits.raw_converter_from_fits.
import httm
from httm.fits_utilities.raw_fits import raw_converter_from_fits

raw_data = raw_converter_from_fits('fits_data/raw_fits/single_ccd.fits')
test/notebooks/tutorial.ipynb
TESScience/httm
gpl-3.0
Each raw image contains the data for a single CCD. It contains 4 slices if it was taken by the instrument, and either 1 or 4 if it was created synthetically. Below, we visualize the first slice of the image.
matplotlib.pyplot.imshow(raw_data.slices[0].pixels)
matplotlib.pyplot.gca().invert_yaxis()
test/notebooks/tutorial.ipynb
TESScience/httm
gpl-3.0
Viewing an Electron Flux FITS Image
from httm.fits_utilities.electron_flux_fits import electron_flux_converter_from_fits

electron_flux_data = electron_flux_converter_from_fits('fits_data/electron_flux_fits/small_simulated_data.fits')

matplotlib.pyplot.imshow(electron_flux_data.slices[0].pixels)
matplotlib.pyplot.gca().invert_yaxis()
test/notebooks/tutorial.ipynb
TESScience/httm
gpl-3.0