Looking at the plot, comment on the following points (by modifying the text of this section): the minimum value of the reflectance and the wavelength at which it occurs. Does this wavelength have anything to do with the one selected in Task 2, i.e. with $\lambda_0$? Explain the relation. The maximum value of the reflectance and the wavelength at which it occurs. Next we turn to the reflectance values across the whole visible range, i.e. the full curve. Comparing those values with the reflectance of the bare lens computed in Task 1, discuss whether the antireflection coating is effective. Task 4. Characterisation of the antireflection coating as a function of the angle of incidence. We now study how our coating behaves for angles of incidence other than zero. To do so we compute the reflectance of the coated lens for an angle of incidence $\theta_i$. Using this angle, the code in the next cell computes the incident and transmitted angles at the different interfaces and the reflectance of the system for the two polarisation components; since we consider unpolarised light, the reflectance is the average of the two. The plot shows the reflectance for the selected angle of incidence together with the normal-incidence case. Before running the next cell, the code cell of Task 3 (normal incidence and minimum coating thickness) must have been executed at least once during this working session.
# MODIFY THE PARAMETER, THEN RUN ##############################################################################################################
angulo_incidente = 50  # angle of incidence (in degrees) at the air-coating interface

# DO NOT EDIT BELOW THIS LINE #################################################################################################################
angulo_incidente = angulo_incidente*pi/180                    # convert the angle to radians
angulo_transmitido_1 = arcsin(sin(angulo_incidente)/nc)       # transmitted angle at the air-coating interface (rad)
angulo_incidente_2 = angulo_transmitido_1                     # incident angle at the coating-lens (or coating-air) interface
                                                              # equals the transmitted angle at the air-coating interface (rad)
angulo_transmitido_2 = arcsin(nc*sin(angulo_incidente_2)/nL)  # transmitted angle at the coating-lens interface (rad)

# Reflection and transmission coefficients for the two polarisation components
rAs = (1*cos(angulo_incidente)-nc*cos(angulo_transmitido_1))/(1*cos(angulo_incidente)+nc*cos(angulo_transmitido_1))        # reflection air --> coating
tAs = 2*1*cos(angulo_incidente)/(1*cos(angulo_incidente)+nc*cos(angulo_transmitido_1))                                     # transmission air --> coating
rBs = (nc*cos(angulo_incidente_2)-nL*cos(angulo_transmitido_2))/(nc*cos(angulo_incidente_2)+nL*cos(angulo_transmitido_2))  # reflection coating --> lens
tCs = 2*nc*cos(angulo_incidente_2)/(nc*cos(angulo_incidente_2)+1*cos(angulo_incidente))                                    # transmission coating --> air
rAp = (nc*cos(angulo_incidente)-1*cos(angulo_transmitido_1))/(nc*cos(angulo_incidente)+1*cos(angulo_transmitido_1))        # reflection air --> coating
tAp = 2*1*cos(angulo_incidente)/(nc*cos(angulo_incidente)+1*cos(angulo_transmitido_1))                                     # transmission air --> coating
rBp = (nL*cos(angulo_incidente_2)-nc*cos(angulo_transmitido_2))/(nL*cos(angulo_incidente_2)+nc*cos(angulo_transmitido_2))  # reflection coating --> lens
tCp = 2*nc*cos(angulo_incidente_2)/(1*cos(angulo_incidente_2)+nc*cos(angulo_incidente))                                    # transmission coating --> air

# Phase difference and reflectance
desfase1_angulo = (2*pi/longitud_de_onda)*2*nc*espesor1*cos(angulo_transmitido_1) + 0*pi  # geometric phase + phase due to the reflections
Reflectancia_tratamiento1_s = 100*( rAs**2 + (tAs*rBs*tCs)**2 + 2*sqrt((rAs**2)*(tAs*rBs*tCs)**2)*cos(desfase1_angulo) )  # s component (%)
Reflectancia_tratamiento1_p = 100*( rAp**2 + (tAp*rBp*tCp)**2 + 2*sqrt((rAp**2)*(tAp*rBp*tCp)**2)*cos(desfase1_angulo) )  # p component (%)
Reflectancia_tratamiento1_angulo = (Reflectancia_tratamiento1_s + Reflectancia_tratamiento1_p)/2  # unpolarised light: average of s and p

# Plot the reflectance as a function of wavelength
plot(longitud_de_onda, Reflectancia_tratamiento1, longitud_de_onda, Reflectancia_tratamiento1_angulo, lw=2)
xlabel('$\lambda$ (nm)', fontsize=16); ylabel('Reflectancia (%)', fontsize=16)  # axis labels
legend(('normal incidence', 'oblique incidence'))  # legend
TratamientoAntirreflejante/Tratamiento_Antirreflejante_Ejercicio.ipynb
ecabreragranado/OpticaFisicaII
gpl-3.0
Looking at the plot, comment on the following points (by modifying the text of this section): For an angle of 30 degrees, give the value of the reflectance at $\lambda_0$. For that angle of incidence, what is the minimum value of the reflectance and the wavelength at which it occurs? As the angle of incidence increases, describe in which direction the wavelength of minimum reflectance shifts. Determine from which angle of incidence the reflectance, in some region of the visible, reaches values close to the reflectance of the bare lens computed in Task 1. Task 5. Characterisation of the antireflection coating as a function of the coating thickness. So far we have characterised the coating for the smallest possible thickness of the monolayer. We now characterise it for other possible thicknesses, under the same conditions used to optimise the coating, i.e. normal incidence and the wavelength $\lambda_0$. Write here the next two coating thicknesses, modifying this text with their numerical values: espesor2 = nm, espesor3 = nm. Finally we study how our coating behaves with the two thicknesses just computed; the reflectance for the minimum thickness is also shown in the same plot. Before running the next cell, the code cell of Task 3 must have been executed at least once during this working session.
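The requested thicknesses follow from the antiphase condition inside the coating. As a hedged sketch (the values $\lambda_0 = 550$ nm and $n_c = 1.38$ are assumptions for illustration, not taken from this excerpt), the thicknesses giving minimum reflectance are odd multiples of a quarter of the wavelength inside the layer, $d_m = (2m+1)\,\lambda_0/(4 n_c)$:

```python
lambda_0 = 550.0  # design wavelength in nm (assumed value, for illustration only)
nc = 1.38         # coating refractive index (assumed value, e.g. MgF2)

# d_m = (2m + 1) * lambda_0 / (4 * nc) for m = 0, 1, 2
# -> espesor1 (minimum), espesor2, espesor3
espesores = [(2 * m + 1) * lambda_0 / (4 * nc) for m in range(3)]
print(espesores)  # roughly [99.6, 298.9, 498.2] nm
```

Each successive thickness adds half a wavelength of extra optical path, so the destructive-interference condition at $\lambda_0$ is preserved; only the behaviour away from $\lambda_0$ changes, which is what this task explores.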
# MODIFY THE TWO PARAMETERS, THEN RUN #################################################################
espesor2 = 99*3  # second smallest coating thickness (in nm)
espesor3 = 99*5  # third smallest coating thickness (in nm)

# DO NOT EDIT BELOW THIS LINE #########################################################################
# Phase difference and reflectance for the minimum thickness
desfase1 = (2*pi/longitud_de_onda)*2*nc*espesor1 + 0*pi  # geometric phase + phase due to the reflections
Reflectancia_tratamiento1 = 100*( rA**2 + (tA*rB*tC)**2 + 2*sqrt( (rA**2)*(tA*rB*tC)**2 )*cos(desfase1) )  # reflectance (%)

# Phase difference and reflectance for the second smallest thickness
desfase2 = (2*pi/longitud_de_onda)*2*nc*espesor2 + 0*pi  # geometric phase + phase due to the reflections
Reflectancia_tratamiento2 = 100*( rA**2 + (tA*rB*tC)**2 + 2*sqrt( (rA**2)*(tA*rB*tC)**2 )*cos(desfase2) )  # reflectance (%)

# Phase difference and reflectance for the third smallest thickness
desfase3 = (2*pi/longitud_de_onda)*2*nc*espesor3 + 0*pi  # geometric phase + phase due to the reflections
Reflectancia_tratamiento3 = 100*( rA**2 + (tA*rB*tC)**2 + 2*sqrt( (rA**2)*(tA*rB*tC)**2 )*cos(desfase3) )  # reflectance (%)

# Plot the reflectance as a function of wavelength
plot(longitud_de_onda, Reflectancia_tratamiento1,
     longitud_de_onda, Reflectancia_tratamiento2,
     longitud_de_onda, Reflectancia_tratamiento3, lw=2)
xlabel('$\lambda$ (nm)', fontsize=16); ylabel('Reflectancia (%)', fontsize=16)  # axis labels
legend(('espesor1', 'espesor2', 'espesor3'), loc='upper right', bbox_to_anchor=(0.5, 0.5))  # legend
So the maximum weight of a truck crossing the bridges can only be 1000 units (kg, lbs, etc.). Now what about this example, where we have two routes we can take? <img src="https://raw.githubusercontent.com/pbeens/ICS-Computer-Studies/master/Python/Class%20Demos/files/cscircles_two_roads.png" width="50%" height="50%"> In this case we want to take the <b>maximum</b> of the two minimums. Let's start by setting the values of all the weight limits:
a = 1000
b = 2340
c = 3246
d = 1400
e = 5000
Python/Class Demos/CS Circles 2 (Functions) - Bridges).ipynb
pbeens/ICS-Computer-Studies
mit
Then let's create two variables to represent the maximum weight for each of the two paths, which, don't forget, is the <b>minimum</b> of the weight limits along each path.
path_1_limit = min(a, b, c)
path_2_limit = min(d, e)
print('The 1st path limit is', path_1_limit, 'and the 2nd path limit is', path_2_limit)
The maximum weight limit would obviously be 1400 units, which is the <b>maximum</b> of our two values. Using the max() function we have:
print('The maximum weight that can be carried is', max(path_1_limit, path_2_limit), 'units.')
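The two steps can also be collapsed into a single expression; a small sketch reusing the weight limits from above:

```python
# Weight limits from the example above
a, b, c = 1000, 2340, 3246  # bridges along path 1
d, e = 1400, 5000           # bridges along path 2

# Each path is limited by its weakest bridge (min); the best route is the max of those
max_weight = max(min(a, b, c), min(d, e))
print('The maximum weight that can be carried is', max_weight, 'units.')  # 1400 units
```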
Policy Evaluation by Dynamic Programming. For the MDP represented above we define the state transition probability matrix $\mathcal{P}^a_{ss'}=p(S_{t+1}=s'\mid S_{t}=s, A_t=a)$. In this MDP we assume that when we choose to move to state $s_i$, $i\in\{1,2,3\}$, we always end up in that state, i.e. $\mathcal{P}^a_{ss'}=1$ when $s'$ is the state selected by action $a$. In this case $\mathcal{P}^{\pi}_{ss'}=\sum_{a}\mathcal{P}^a_{ss'}\pi(a\mid s) = \pi(a=s'\mid s)$, and the Bellman expectation equation (check pages 14 and 16 of the lecture slides) becomes: $$ V_{\pi}(s) = \sum_{a\in\mathcal{A}} \pi(a\mid s)\left( \mathcal{R}^a_s + \gamma \sum_{s'\in \mathcal{S}}\mathcal{P}^a_{ss'}V_{\pi}(s')\right) = \mathcal{R}^{\pi}_s+ \gamma \sum_{s'\in \mathcal{S}}\pi(a=s'\mid s)V_{\pi}(s') $$
import numpy as np

policy = np.array([[0.3, 0.2, 0.5],
                   [0.5, 0.4, 0.1],
                   [0.8, 0.1, 0.1]])
print("This represents the policy with 3 states and 3 actions, p(a=column | s=row):\n", np.matrix(policy))

# 'raw_rewards' contains the reward obtained after transitioning to each state;
# in our example it does not depend on the source state
raw_rewards = np.array([1.5, -1.833333333, 19.833333333])

# 'rewards' contains the expected value of the next reward for each state
rewards = np.matmul(policy, raw_rewards)
assert np.allclose(rewards, np.array([10., 2., 3.]))
gamma = 0.1
print('These are the expected rewards for each state:\n', rewards)

state_value_function = np.zeros(3)
print('Policy evaluation:')
for i in range(20):
    print('V_{}={}'.format(i, state_value_function))
    state_value_function = rewards + gamma * np.matmul(policy, state_value_function)
print('\nV={}'.format(state_value_function))
labs/notebooks/reinforcement_learning/exercise_1_3_solutions.ipynb
LxMLS/lxmls-toolkit
mit
Policy Evaluation by Linear Programming. The state-value function can also be solved for directly, by inverting the linear system (as shown on page 15 of the lecture slides): $$ V_{\pi}(s)=\left(I-\gamma\mathcal{P}^{\pi}\right)^{-1}\mathcal{R}^{\pi} $$
solution = np.matmul(np.linalg.inv(np.eye(3) - 0.1*policy), rewards)
print('Solution by inversion:\nV={}'.format(solution))
The result stays the same. Policy Evaluation by Monte Carlo Sampling. We can design yet another way of evaluating the value of a given policy $\pi$ (see lecture slides, page 20). The intuition is to incrementally estimate the expected return from sampled episodes, i.e. sequences of triplets $\{(s_i,a_i,r_i)\}_{i=1}^N$. The function gt computes the total discounted reward from a list of sequential rewards obtained by sampling the policy: $G_t=r_t+\gamma r_{t+1}+\gamma^2 r_{t+2}+\dots+\gamma^N r_{t+N}$. The value of a policy can also be computed as its empirical expected cumulative discounted return: $$ V_{\pi}(s) = \mathbb{E}_{\pi}\left[G_t\mid S_t=s\right] $$
import random
from collections import defaultdict

reward_counter = np.array([0., 0., 0.])
visit_counter = np.array([0., 0., 0.])
nIterations = 400

def gt(rewardlist, gamma=0.1):
    '''
    Calculate the total discounted reward

    >>> gt([10, 2, 3], gamma=0.1)
    10.23
    '''
    total_disc_return = 0
    for i, value in enumerate(rewardlist):
        total_disc_return += (gamma ** i) * value
    return total_disc_return

for _ in range(nIterations):
    start_state = random.randint(0, 2)
    next_state = start_state
    rewardlist = []
    occurence = defaultdict(list)
    for _ in range(250):  # draw samples from the policy recursively over a horizon of N=250
        rewardlist.append(rewards[next_state])
        occurence[next_state].append(len(rewardlist) - 1)
        action = np.random.choice(np.arange(0, 3), p=policy[next_state])
        next_state = action
    for state in occurence:
        for value in occurence[state]:
            # update the state value function E[G_t|s] = S(s)/N(s)
            rew = gt(rewardlist[value:])
            reward_counter[state] += rew  # S(s)
            visit_counter[state] += 1     # N(s)

print("MC policy evaluation V=", reward_counter / visit_counter)
As can be seen, the result is nearly the same as the state-value function calculated above. So far we have seen different ways of computing the value $V_{\pi}(s)$ of a known policy $\pi(a\mid s)$. Next, we wish to find the optimal policy $\pi^\ast(s)$ for the MDP in the example. Policy Optimization by Q-Learning. This code solves a very easy problem: using the rewards, it calculates the optimal action-value function (page 26 of the slides). It samples state-action pairs randomly, so that all state-action pairs are visited.
q_table = np.zeros((3, 3))  # state-action value function (Q-table)
gamma = 0.1
alpha = 1.0
eps = 0.1

def get_eps_greedy_action(state):
    if random.uniform(0, 1) < eps:
        return random.randint(0, 2)
    return np.argmax(q_table[state]).item()

for i in range(1001):
    state = random.randint(0, 2)
    action = get_eps_greedy_action(state)
    next_state = action
    reward = raw_rewards[next_state]
    next_q = max(q_table[next_state])  # state-action value evaluated at the next state
    # Q-table update
    q_table[state, action] = q_table[state, action] + alpha * (reward + gamma * next_q - q_table[state, action])
    if i % 200 == 0:
        print("Q_{}(s,a)=".format(i), q_table)
Value Iteration
import numpy as np

raw_rewards = np.array([1.5, -1.833333333, 19.833333333])
gamma = 0.1
state_value_function = np.zeros(3)
print('V_{} = {}'.format(0, state_value_function))
for i in range(1000):
    for s in range(3):
        Q_s = [raw_rewards[s_next] + gamma * state_value_function[s_next] for s_next in range(3)]
        state_value_function[s] = max(Q_s)
    if i % 100 == 99:
        print('V_{} = {}'.format(i + 1, state_value_function))
Preparing the dataset:
TRAIN = "train/"
TEST = "test/"

# Load "X" (the neural network's training and testing inputs)
def load_X(X_signals_paths):
    X_signals = []
    for signal_type_path in X_signals_paths:
        file = open(signal_type_path, 'rb')
        # Read dataset from disk, dealing with the text files' syntax
        X_signals.append(
            [np.array(serie, dtype=np.float32) for serie in [
                row.replace('  ', ' ').strip().split(' ') for row in file
            ]]
        )
        file.close()
    return np.transpose(np.array(X_signals), (1, 2, 0))

X_train_signals_paths = [
    DATASET_PATH + TRAIN + "Inertial Signals/" + signal + "train.txt" for signal in INPUT_SIGNAL_TYPES
]
X_test_signals_paths = [
    DATASET_PATH + TEST + "Inertial Signals/" + signal + "test.txt" for signal in INPUT_SIGNAL_TYPES
]
X_train = load_X(X_train_signals_paths)
X_test = load_X(X_test_signals_paths)

# Load "y" (the neural network's training and testing outputs)
def load_y(y_path):
    file = open(y_path, 'rb')
    # Read dataset from disk, dealing with the text file's syntax
    y_ = np.array(
        [elem for elem in [
            row.replace('  ', ' ').strip().split(' ') for row in file
        ]],
        dtype=np.int32
    )
    file.close()
    # Subtract 1 from each output class for friendly 0-based indexing
    return y_ - 1

y_train_path = DATASET_PATH + TRAIN + "y_train.txt"
y_test_path = DATASET_PATH + TEST + "y_test.txt"
y_train = load_y(y_train_path)
y_test = load_y(y_test_path)
LSTM.ipynb
KennyCandy/HAR
mit
Additional Parameters: Here are some core parameter definitions for the training. The whole neural network's structure could be summarised by enumerating these parameters, plus the fact that an LSTM is used.
# Input Data
training_data_count = len(X_train)  # 7352 training series (with 50% overlap between each serie)
test_data_count = len(X_test)       # 2947 testing series
n_steps = len(X_train[0])           # 128 timesteps per series
n_input = len(X_train[0][0])        # 9 input parameters per timestep

# LSTM Neural Network's internal structure
n_hidden = 32  # number of features in the hidden layer
n_classes = 6  # total number of output classes

# Training
learning_rate = 0.0025
lambda_loss_amount = 0.0015
training_iters = training_data_count * 300  # loop 300 times on the dataset
batch_size = 1500
display_iter = 30000  # to show test set accuracy during training

# Some debugging info
print "Some useful info to get an insight on dataset's shape and normalisation:"
print "(X shape, y shape, every X's mean, every X's standard deviation)"
print (X_test.shape, y_test.shape, np.mean(X_test), np.std(X_test))
print "The dataset is therefore properly normalised, as expected, but not yet one-hot encoded."
Utility functions for training:
def LSTM_RNN(_X, _weights, _biases):
    # Returns a TensorFlow LSTM (RNN) artificial neural network from the given parameters.
    # Two LSTM cells are stacked, which adds depth to the network.
    # Note: some code in this notebook is inspired by a slightly different
    # RNN architecture used on another dataset:
    # https://tensorhub.com/aymericdamien/tensorflow-rnn

    # (NOTE: this step could be greatly optimised by shaping the dataset once)
    # input shape: (batch_size, n_steps, n_input)
    _X = tf.transpose(_X, [1, 0, 2])  # permute n_steps and batch_size
    # Reshape to prepare input to hidden activation
    _X = tf.reshape(_X, [-1, n_input])  # new shape: (n_steps*batch_size, n_input)
    # Linear activation
    _X = tf.nn.relu(tf.matmul(_X, _weights['hidden']) + _biases['hidden'])
    # Split data because the rnn cell needs a list of inputs for the RNN inner loop
    _X = tf.split(0, n_steps, _X)  # new shape: n_steps * (batch_size, n_hidden)

    # Define two stacked LSTM cells (two recurrent layers deep) with tensorflow
    lstm_cell_1 = tf.nn.rnn_cell.BasicLSTMCell(n_hidden, forget_bias=1.0, state_is_tuple=True)
    lstm_cell_2 = tf.nn.rnn_cell.BasicLSTMCell(n_hidden, forget_bias=1.0, state_is_tuple=True)
    lstm_cells = tf.nn.rnn_cell.MultiRNNCell([lstm_cell_1, lstm_cell_2], state_is_tuple=True)
    # Get LSTM cell output
    outputs, states = tf.nn.rnn(lstm_cells, _X, dtype=tf.float32)

    # Get the last time step's output feature for a "many to one" style classifier,
    # as in the image describing RNNs at the top of this page
    lstm_last_output = outputs[-1]

    # Linear activation
    return tf.matmul(lstm_last_output, _weights['out']) + _biases['out']


def extract_batch_size(_train, step, batch_size):
    # Fetch a "batch_size" amount of data from "(X|y)_train".
    shape = list(_train.shape)
    shape[0] = batch_size
    batch_s = np.empty(shape)
    for i in range(batch_size):  # loop index
        index = ((step-1)*batch_size + i) % len(_train)
        batch_s[i] = _train[index]
    return batch_s


def one_hot(y_):
    # Encode output labels from number indexes,
    # e.g. [[5], [0], [3]] --> [[0, 0, 0, 0, 0, 1], [1, 0, 0, 0, 0, 0], [0, 0, 0, 1, 0, 0]]
    y_ = y_.reshape(len(y_))
    n_values = np.max(y_) + 1
    return np.eye(n_values)[np.array(y_, dtype=np.int32)]  # returns floats
Let's get serious and build the neural network:
# Graph input/output
x = tf.placeholder(tf.float32, [None, n_steps, n_input])
y = tf.placeholder(tf.float32, [None, n_classes])

# Graph weights
weights = {
    'hidden': tf.Variable(tf.random_normal([n_input, n_hidden])),  # hidden layer weights
    'out': tf.Variable(tf.random_normal([n_hidden, n_classes], mean=1.0))
}
biases = {
    'hidden': tf.Variable(tf.random_normal([n_hidden])),
    'out': tf.Variable(tf.random_normal([n_classes]))
}

pred = LSTM_RNN(x, weights, biases)

# Loss, optimizer and evaluation
l2 = lambda_loss_amount * sum(
    tf.nn.l2_loss(tf_var) for tf_var in tf.trainable_variables()
)  # L2 loss prevents this overkill neural network from overfitting the data
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(pred, y)) + l2  # softmax loss
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)  # Adam optimizer
correct_pred = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
Hooray, now train the neural network:
# To keep track of training's performance
test_losses = []
test_accuracies = []
train_losses = []
train_accuracies = []

# Launch the graph
sess = tf.InteractiveSession(config=tf.ConfigProto(log_device_placement=True))
init = tf.initialize_all_variables()
sess.run(init)

# Perform training steps with "batch_size" amount of example data at each loop
step = 1
while step * batch_size <= training_iters:
    batch_xs = extract_batch_size(X_train, step, batch_size)
    batch_ys = one_hot(extract_batch_size(y_train, step, batch_size))

    # Fit training using batch data
    _, loss, acc = sess.run(
        [optimizer, cost, accuracy],
        feed_dict={x: batch_xs, y: batch_ys}
    )
    train_losses.append(loss)
    train_accuracies.append(acc)

    # Evaluate network only at some steps for faster training:
    if (step*batch_size % display_iter == 0) or (step == 1) or (step * batch_size > training_iters):
        # To not spam the console, show training accuracy/loss only in this "if"
        print "Training iter #" + str(step*batch_size) + \
              ": Batch Loss = " + "{:.6f}".format(loss) + \
              ", Accuracy = {}".format(acc)

        # Evaluation on the test set (no learning made here - just evaluation for diagnosis)
        loss, acc = sess.run(
            [cost, accuracy],
            feed_dict={x: X_test, y: one_hot(y_test)}
        )
        test_losses.append(loss)
        test_accuracies.append(acc)
        print "PERFORMANCE ON TEST SET: " + \
              "Batch Loss = {}".format(loss) + \
              ", Accuracy = {}".format(acc)

    step += 1

print "Optimization Finished!"

# Accuracy for test data
one_hot_predictions, accuracy, final_loss = sess.run(
    [pred, accuracy, cost],
    feed_dict={x: X_test, y: one_hot(y_test)}
)
test_losses.append(final_loss)
test_accuracies.append(accuracy)
print "FINAL RESULT: " + \
      "Batch Loss = {}".format(final_loss) + \
      ", Accuracy = {}".format(accuracy)
Training is good, but having visual insight is even better. Okay, let's plot it simply in the notebook for now.
# (Inline plots: )
%matplotlib inline

font = {
    'family': 'Bitstream Vera Sans',
    'weight': 'bold',
    'size': 18
}
matplotlib.rc('font', **font)

width = 12
height = 12
plt.figure(figsize=(width, height))

indep_train_axis = np.array(range(batch_size, (len(train_losses)+1)*batch_size, batch_size))
plt.plot(indep_train_axis, np.array(train_losses), "b--", label="Train losses")
plt.plot(indep_train_axis, np.array(train_accuracies), "g--", label="Train accuracies")

indep_test_axis = np.array(range(batch_size, len(test_losses)*display_iter, display_iter)[:-1] + [training_iters])
plt.plot(indep_test_axis, np.array(test_losses), "b-", label="Test losses")
plt.plot(indep_test_axis, np.array(test_accuracies), "g-", label="Test accuracies")

plt.title("Training session's progress over iterations")
plt.legend(loc='upper right', shadow=True)
plt.ylabel('Training Progress (Loss or Accuracy values)')
plt.xlabel('Training iteration')

plt.show()
And finally, the multi-class confusion matrix and metrics!
# Results
predictions = one_hot_predictions.argmax(1)

print "Testing Accuracy: {}%".format(100*accuracy)
print ""
print "Precision: {}%".format(100*metrics.precision_score(y_test, predictions, average="weighted"))
print "Recall: {}%".format(100*metrics.recall_score(y_test, predictions, average="weighted"))
print "f1_score: {}%".format(100*metrics.f1_score(y_test, predictions, average="weighted"))
print ""
print "Confusion Matrix:"
confusion_matrix = metrics.confusion_matrix(y_test, predictions)
print confusion_matrix
normalised_confusion_matrix = np.array(confusion_matrix, dtype=np.float32)/np.sum(confusion_matrix)*100
print ""
print "Confusion matrix (normalised to % of total test data):"
print normalised_confusion_matrix
print ("Note: training and testing data is not equally distributed amongst classes, "
       "so it is normal that more than a 6th of the data is correctly classified in the last category.")

# Plot results:
width = 12
height = 12
plt.figure(figsize=(width, height))
plt.imshow(
    normalised_confusion_matrix,
    interpolation='nearest',
    cmap=plt.cm.rainbow
)
plt.title("Confusion matrix \n(normalised to % of total test data)")
plt.colorbar()
tick_marks = np.arange(n_classes)
plt.xticks(tick_marks, LABELS, rotation=90)
plt.yticks(tick_marks, LABELS)
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()

sess.close()
Conclusion. Outstandingly, the accuracy is 91%! This means that the neural network is almost always able to correctly identify the movement type! Remember, the phone is attached at the waist and each series to classify is just a 128-sample window from two internal sensors (i.e. 2.56 seconds at 50 Hz), so those predictions are extremely accurate. I especially did not expect such good results for distinguishing between "WALKING", "WALKING_UPSTAIRS" and "WALKING_DOWNSTAIRS" with a cellphone. Though, it is still possible to see a little cluster on the matrix between those 3 classes. This is great. It is also possible to see that it was hard to tell the difference between "SITTING" and "STANDING". Those are seemingly almost the same thing from the point of view of a device placed at the waist, according to how the dataset was gathered. I also tried my code without the gyroscope, using only the two 3D accelerometer features (and not changing the training hyperparameters), and got an accuracy of 87%. Improvements. In another repo of mine, the accuracy is pushed up to 94% using a special deep bidirectional architecture, and this architecture is tested on another dataset. If you want to learn more about deep learning, I have built a list of resources that I found useful here. References. The dataset can be found on the UCI Machine Learning Repository. Davide Anguita, Alessandro Ghio, Luca Oneto, Xavier Parra and Jorge L. Reyes-Ortiz. A Public Domain Dataset for Human Activity Recognition Using Smartphones. 21st European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN 2013), Bruges, Belgium, 24-26 April 2013. If you want to cite my work, you can point to the URL of the GitHub repository: https://github.com/guillaume-chevalier/LSTM-Human-Activity-Recognition Connect with me: https://ca.linkedin.com/in/chevalierg https://twitter.com/guillaume_che https://github.com/guillaume-chevalier/
# Let's convert this notebook to a README as the GitHub project's title page: !jupyter nbconvert --to markdown LSTM.ipynb !mv LSTM.md README.md
Load data. HJCFIT depends on the DCPROGS/DCPYPS modules for data input and for setting up the kinetic mechanism:
from dcpyps.samples import samples
from dcpyps import dataset, mechanism, dcplots

fname = "CH82.scn"  # binary SCN file containing simulated idealised single-channel open/shut intervals
tr = 1e-4      # temporal resolution to be imposed on the record
tc = 4e-3      # critical time interval to cut the record into bursts
conc = 100e-9  # agonist concentration
exploration/Example_MLL_Fit_AChR_1patch.ipynb
jenshnielsen/HJCFIT
gpl-3.0
Initialise Single-Channel Record from dcpyps. Note that SCRecord takes a list of file names; several SCN files from the same patch can be loaded.
# Initialise the SCRecord instance.
rec = dataset.SCRecord([fname], conc, tres=tr, tcrit=tc)
rec.printout()
Plot dwell-time histograms for inspection. In the single-channel analysis field it is common to plot these histograms with the x-axis in log scale and the y-axis in square-root scale. After such a transformation an exponential pdf has a bell-shaped form.
fig, ax = plt.subplots(1, 2, figsize=(12, 5))
dcplots.xlog_hist_data(ax[0], rec.opint, rec.tres, shut=False)
dcplots.xlog_hist_data(ax[1], rec.shint, rec.tres)
fig.tight_layout()
Load demo mechanism (C&H82 numerical example)
mec = samples.CH82()
mec.printout()

# PREPARE RATE CONSTANTS.
# Fixed rates
mec.Rates[7].fixed = True

# Constrained rates
mec.Rates[5].is_constrained = True
mec.Rates[5].constrain_func = mechanism.constrain_rate_multiple
mec.Rates[5].constrain_args = [4, 2]
mec.Rates[6].is_constrained = True
mec.Rates[6].constrain_func = mechanism.constrain_rate_multiple
mec.Rates[6].constrain_args = [8, 2]

# Rates constrained by microscopic reversibility
mec.set_mr(True, 9, 0)

# Update rates
mec.update_constrains()

# Propose initial guesses different from the recorded ones
#initial_guesses = [100, 3000, 10000, 100, 1000, 1000, 1e+7, 5e+7, 6e+7, 10]
initial_guesses = mec.unit_rates()
mec.set_rateconstants(initial_guesses)
mec.update_constrains()
mec.printout()

# Extract free parameters
theta = mec.theta()
print('\ntheta=', theta)
Prepare likelihood function
def dcprogslik(x, lik, m, c):
    m.theta_unsqueeze(np.exp(x))
    l = 0
    for i in range(len(c)):
        m.set_eff('c', c[i])
        l += lik[i](m.Q)
    return -l * math.log(10)

# Import the HJCFIT likelihood function
from dcprogs.likelihood import Log10Likelihood

# Get bursts from the record
bursts = rec.bursts.intervals()

# Initialise the likelihood function with the bursts, the number of open states,
# the temporal resolution and the critical time interval
likelihood = Log10Likelihood(bursts, mec.kA, tr, tc)
lik = dcprogslik(np.log(theta), [likelihood], mec, [conc])
print("\nInitial likelihood = {0:.6f}".format(-lik))
Run optimisation
from scipy.optimize import minimize

print("\nSciPy.minimize (Nelder-Mead) fitting started: " +
      "%4d/%02d/%02d %02d:%02d:%02d" % time.localtime()[0:6])
start = time.clock()
start_wall = time.time()
result = minimize(dcprogslik, np.log(theta),
                  args=([likelihood], mec, [conc]), method='Nelder-Mead')
t3 = time.clock() - start
t3_wall = time.time() - start_wall
print("\nSciPy.minimize (Nelder-Mead) fitting finished: " +
      "%4d/%02d/%02d %02d:%02d:%02d" % time.localtime()[0:6])
print('\nCPU time in SciPy.minimize (Nelder-Mead) =', t3)
print('Wall clock time in SciPy.minimize (Nelder-Mead) =', t3_wall)
print('\nResult ==========================================\n', result)
print("\nFinal likelihood = {0:.16f}".format(-result.fun))
mec.theta_unsqueeze(np.exp(result.x))
print("\nFinal rate constants:")
mec.printout()
Plot experimental histograms and predicted pdfs
from dcprogs.likelihood import QMatrix
from dcprogs.likelihood import missed_events_pdf, ideal_pdf, IdealG, eig

qmatrix = QMatrix(mec.Q, 2)
idealG = IdealG(qmatrix)
Note that to properly overlay the ideal and missed-events-corrected pdfs, the ideal pdf has to be scaled (the area under the pdf from $\tau_{res}$ needs to be renormalised to 1).
# Scale for ideal pdf
def scalefac(tres, matrix, phiA):
    eigs, M = eig(-matrix)
    N = inv(M)
    k = N.shape[0]
    A = np.zeros((k, k, k))
    for i in range(k):
        A[i] = np.dot(M[:, i].reshape(k, 1), N[i].reshape(1, k))
    w = np.zeros(k)
    for i in range(k):
        w[i] = np.dot(np.dot(np.dot(phiA, A[i]), (-matrix)), np.ones((k, 1)))
    return 1 / np.sum((w / eigs) * np.exp(-tres * eigs))

fig, ax = plt.subplots(1, 2, figsize=(12, 5))

# Plot apparent open period histogram
ipdf = ideal_pdf(qmatrix, shut=False)
iscale = scalefac(tr, qmatrix.aa, idealG.initial_occupancies)
epdf = missed_events_pdf(qmatrix, tr, nmax=2, shut=False)
dcplots.xlog_hist_HJC_fit(ax[0], rec.tres, rec.opint, epdf, ipdf, iscale, shut=False)

# Plot apparent shut period histogram
ipdf = ideal_pdf(qmatrix, shut=True)
iscale = scalefac(tr, qmatrix.ff, idealG.final_occupancies)
epdf = missed_events_pdf(qmatrix, tr, nmax=2, shut=True)
dcplots.xlog_hist_HJC_fit(ax[1], rec.tres, rec.shint, epdf, ipdf, iscale, tcrit=rec.tcrit)

fig.tight_layout()
exploration/Example_MLL_Fit_AChR_1patch.ipynb
jenshnielsen/HJCFIT
gpl-3.0
1. Read in the groundtrack data
lats,lons,date_times,prof_times,dem_elevation=get_geo(lidar_file)
notebooks/ground_track.ipynb
a301-teaching/a301_code
mit
2. use the modis corner lats and lons to clip the cloudsat lats and lons to the same region
from a301utils.modismeta_read import parseMeta

metadict = parseMeta(rad_file)
corner_keys = ['min_lon', 'max_lon', 'min_lat', 'max_lat']
min_lon, max_lon, min_lat, max_lat = [metadict[key] for key in corner_keys]
notebooks/ground_track.ipynb
a301-teaching/a301_code
mit
Find all the cloudsat points that are between the min/max by constructing a logical True/False vector. As with matlab, this vector can be used as an index to pick out the points at the indices where it evaluates to True. Also as in matlab, if a logical vector is passed to a numpy function like sum, the True values are cast to 1 and the False values are cast to 0, so summing a logical vector tells you the number of True values.
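As a minimal illustration of this logical-indexing idea (with made-up numbers, not the cloudsat data):

```python
import numpy as np

vals = np.array([1.0, 5.0, 2.5, 7.0, 3.0])
# logical vector: True where the value lies in the open interval (2, 6)
hit = np.logical_and(vals > 2, vals < 6)

# summing casts True -> 1 and False -> 0, so this counts the hits
print(np.sum(hit))    # -> 3
print(vals[hit])      # picks out only the in-range values
```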
lon_hit = np.logical_and(lons > min_lon, lons < max_lon)
lat_hit = np.logical_and(lats > min_lat, lats < max_lat)
in_box = np.logical_and(lon_hit, lat_hit)
print("ground track has {} points, we've selected {}".format(len(lon_hit), np.sum(in_box)))
box_lons, box_lats = lons[in_box], lats[in_box]
notebooks/ground_track.ipynb
a301-teaching/a301_code
mit
3. Reproject MYD021KM channel 1 to a lambert azimuthal projection If we are on OSX, we can run the a301utils.modis_to_h5 script to turn the h5 level 1b files into a pyresample-projected file for channel 1, by running python via the os.system command. If we are on windows, a301utils.modis_to_h5 needs to be run in the pyre environment in a separate shell.
from a301lib.modis_reproject import make_projectname

reproject_name = make_projectname(rad_file)
reproject_path = Path(reproject_name)
if reproject_path.exists():
    print('using reprojected h5 file {}'.format(reproject_name))
else:
    # need to create reproject.h5 for channel 1
    channels = '-c 1 4 3 31'
    template = 'python -m a301utils.modis_to_h5 {} {} {}'
    command = template.format(rad_file, geom_file, channels)
    if 'win' in sys.platform[:3]:
        print('platform is {}, need to run modis_to_h5.py in new environment'
              .format(sys.platform))
        print('open an msys terminal and run \n{}\n'.format(command))
    else:
        # osx, so pyresample is available
        print('running \n{}\n'.format(command))
        out = os.system(command)
the_size = reproject_path.stat().st_size
print('generated reproject file for 4 channels, size is {} bytes'.format(the_size))
notebooks/ground_track.ipynb
a301-teaching/a301_code
mit
Read in chan1; read in the basemap argument string and turn it into a dictionary of basemap arguments using json.loads
with h5py.File(reproject_name, 'r') as h5_file:
    basemap_args = json.loads(h5_file.attrs['basemap_args'])
    chan1 = h5_file['channels']['1'][...]
    geo_string = h5_file.attrs['geotiff_args']
    geotiff_args = json.loads(geo_string)

print('basemap_args: \n{}\n'.format(basemap_args))
print('geotiff_args: \n{}\n'.format(geotiff_args))

%matplotlib inline
from matplotlib import cm
from matplotlib.colors import Normalize

cmap = cm.autumn  # see http://wiki.scipy.org/Cookbook/Matplotlib/Show_colormaps
cmap.set_over('w')
cmap.set_under('b', alpha=0.2)
cmap.set_bad('0.75')  # 75% grey

plt.close('all')
fig, ax = plt.subplots(1, 1, figsize=(14, 14))
#
# set up the Basemap object
#
basemap_args['ax'] = ax
basemap_args['resolution'] = 'c'
bmap = Basemap(**basemap_args)
#
# transform the ground track lons/lats to x/y
#
cloudsatx, cloudsaty = bmap(box_lons, box_lats)
#
# plot as blue circles
#
bmap.plot(cloudsatx, cloudsaty, 'bo')
#
# now plot channel 1
#
num_meridians = 180
num_parallels = 90
col = bmap.imshow(chan1, origin='upper', cmap=cmap, vmin=0, vmax=0.4)
lon_sep, lat_sep = 5, 5
parallels = np.arange(-90, 90, lat_sep)
meridians = np.arange(0, 360, lon_sep)
bmap.drawparallels(parallels, labels=[1, 0, 0, 0], fontsize=10, latmax=90)
bmap.drawmeridians(meridians, labels=[0, 0, 0, 1], fontsize=10, latmax=90)
bmap.drawcoastlines()
colorbar = fig.colorbar(col, shrink=0.5, pad=0.05, extend='both')
colorbar.set_label('channel1 reflectivity', rotation=-90, verticalalignment='bottom')
_ = ax.set(title='vancouver')
notebooks/ground_track.ipynb
a301-teaching/a301_code
mit
write the groundtrack out for future use
groundtrack_name = reproject_name.replace('reproject', 'groundtrack')
print('writing groundtrack to {}'.format(groundtrack_name))
box_times = date_times[in_box]
#
# h5 files can't store dates, but they can store floating point
# seconds since 1970, which is called POSIX timestamp
#
timestamps = [item.timestamp() for item in box_times]
timestamps = np.array(timestamps)
with h5py.File(groundtrack_name, 'w') as groundfile:
    groundfile.attrs['cloudsat_filename'] = lidar_file
    groundfile.attrs['modis_filename'] = rad_file
    groundfile.attrs['reproject_filename'] = reproject_name
    dset = groundfile.create_dataset('cloudsat_lons', box_lons.shape, box_lons.dtype)
    dset[...] = box_lons[...]
    dset.attrs['long_name'] = 'cloudsat longitude'
    dset.attrs['units'] = 'degrees East'
    dset = groundfile.create_dataset('cloudsat_lats', box_lats.shape, box_lats.dtype)
    dset[...] = box_lats[...]
    dset.attrs['long_name'] = 'cloudsat latitude'
    dset.attrs['units'] = 'degrees North'
    dset = groundfile.create_dataset('cloudsat_times', timestamps.shape, timestamps.dtype)
    dset[...] = timestamps[...]
    dset.attrs['long_name'] = 'cloudsat UTC datetime timestamp'
    dset.attrs['units'] = 'seconds since Jan. 1, 1970'
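The POSIX-timestamp trick used above can be checked in isolation: a timezone-aware datetime round-trips exactly through a float (the notebook's date_times may or may not carry timezone info; using an aware UTC datetime here avoids any local-timezone ambiguity, and the specific date is just an example).

```python
from datetime import datetime, timezone

dt = datetime(2016, 6, 14, 12, 30, 0, tzinfo=timezone.utc)
stamp = dt.timestamp()  # float seconds since Jan. 1, 1970 (UTC)
recovered = datetime.fromtimestamp(stamp, tz=timezone.utc)

print(stamp)
print(recovered == dt)  # -> True: the round trip is exact
```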
notebooks/ground_track.ipynb
a301-teaching/a301_code
mit
Now we set some key variables for the simulation: $\theta$ is the contact angle in each phase and, without contact angle hysteresis, the two sum to 180. The fiber radius is 5 $\mu m$ for this particular material and this is used in the pore-scale capillary pressure models.
theta_w = 110
theta_a = 70
fiber_rad = 5e-6
examples/paper_recreations/Tranter et al. (2017)/Tranter et al. (2017) - Part A.ipynb
TomTranter/OpenPNM
mit
Experimental Data The experimental data we are matching is taken from the 2009 paper for uncompressed Toray 090D, which has had some treatment with PTFE to make it non-wetting to water. However, the material also seems to be non-wetting to air once filled with water: reducing the pressure after water invasion does not lead to spontaneous uptake of air.
data = np.array([[-1.95351934e+04, 0.00000000e+00], [-1.79098945e+04, 1.43308300e-03], [-1.63107500e+04, 1.19626000e-03], [-1.45700654e+04, 9.59437000e-04], [-1.30020859e+04, 7.22614000e-04], [-1.14239746e+04, 4.85791000e-04], [-9.90715234e+03, 2.48968000e-04], [-8.45271973e+03, 1.68205100e-03], [-7.01874170e+03, 1.44522800e-03], [-5.61586768e+03, 2.87831100e-03], [-4.27481055e+03, 4.44633600e-03], [-3.52959229e+03, 5.81363400e-03], [-2.89486523e+03, 5.51102700e-03], [-2.25253784e+03, 8.26249200e-03], [-1.59332751e+03, 9.32718400e-03], [-9.93971252e+02, 1.03918750e-02], [-3.52508118e+02, 1.31433410e-02], [ 2.55833755e+02, 1.90500850e-02], [ 8.10946533e+02, 1.12153247e-01], [ 1.44181152e+03, 1.44055799e-01], [ 2.02831689e+03, 1.58485811e-01], [ 2.56954688e+03, 1.68051842e-01], [ 3.22414917e+03, 1.83406543e-01], [ 3.81607397e+03, 2.00111675e-01], [ 4.35119043e+03, 2.20173487e-01], [ 4.93044141e+03, 2.50698356e-01], [ 5.44759180e+03, 2.70760168e-01], [ 5.97326611e+03, 3.02663131e-01], [ 6.49410010e+03, 3.83319515e-01], [ 7.05238232e+03, 5.06499276e-01], [ 7.54107031e+03, 6.63817501e-01], [ 8.08143408e+03, 7.67864788e-01], [ 8.54633203e+03, 8.26789866e-01], [ 9.03138965e+03, 8.62470191e-01], [ 9.53165723e+03, 8.84504516e-01], [ 1.00119375e+04, 9.01529123e-01], [ 1.19394492e+04, 9.32130571e-01], [ 1.37455771e+04, 9.43415425e-01], [ 1.54468594e+04, 9.54111932e-01], [ 1.71077578e+04, 9.59966386e-01], [ 1.87670996e+04, 9.66241521e-01], [ 2.02733223e+04, 9.70728677e-01], [ 2.17321895e+04, 9.75215832e-01], [ 2.30644336e+04, 9.79820651e-01], [ 2.44692598e+04, 9.81254145e-01], [ 2.56992520e+04, 9.88778094e-01], [ 2.69585078e+04, 9.93080716e-01], [ 2.81848105e+04, 9.92843893e-01], [ 2.93189434e+04, 9.99000955e-01], [ 3.04701816e+04, 1.00180134e+00], [ 2.94237266e+04, 1.00323442e+00], [ 2.82839531e+04, 1.00132769e+00], [ 2.70130059e+04, 1.00109128e+00], [ 2.57425723e+04, 1.00085404e+00], [ 2.43311738e+04, 1.00047148e+00], [ 2.29761172e+04, 1.00023466e+00], [ 2.15129902e+04, 
9.99997838e-01], [ 2.00926621e+04, 9.98091109e-01], [ 1.85019902e+04, 9.97854286e-01], [ 1.70299883e+04, 9.95947557e-01], [ 1.53611387e+04, 9.95710734e-01], [ 1.36047275e+04, 9.93804005e-01], [ 1.18231387e+04, 9.93567182e-01], [ 9.87990430e+03, 9.91660453e-01], [ 9.40066016e+03, 9.89671072e-01], [ 8.89503516e+03, 9.89368465e-01], [ 8.39770508e+03, 9.89065857e-01], [ 7.89161768e+03, 9.88763250e-01], [ 7.37182080e+03, 9.86790737e-01], [ 6.87028369e+03, 9.86488130e-01], [ 6.28498584e+03, 9.85882915e-01], [ 5.80695361e+03, 9.85580308e-01], [ 5.23104834e+03, 9.85277701e-01], [ 4.68521338e+03, 9.84975094e-01], [ 4.11333887e+03, 9.84672487e-01], [ 3.59290625e+03, 9.84369879e-01], [ 2.96803101e+03, 9.84067272e-01], [ 2.41424536e+03, 9.82094759e-01], [ 1.82232153e+03, 9.81792152e-01], [ 1.22446594e+03, 9.79819639e-01], [ 6.63709351e+02, 9.79517032e-01], [ 7.13815610e+01, 9.79214424e-01], [-5.23247498e+02, 9.75437063e-01], [-1.19633813e+03, 9.73464550e-01], [-1.81142188e+03, 9.66162844e-01], [-2.46475146e+03, 9.42637411e-01], [-3.08150562e+03, 8.98736764e-01], [-3.72976978e+03, 7.06808493e-01], [-4.36241846e+03, 3.18811069e-01], [-5.10291357e+03, 2.13867093e-01], [-5.77698242e+03, 1.76544863e-01], [-6.47121728e+03, 1.62546665e-01], [-7.23913574e+03, 1.49192478e-01], [-7.89862988e+03, 1.45550059e-01], [-8.60248633e+03, 1.43577546e-01], [-9.35398340e+03, 1.39800185e-01], [-1.00623330e+04, 1.37827671e-01], [-1.15617539e+04, 1.37590848e-01], [-1.31559434e+04, 1.37354025e-01], [-1.48024961e+04, 1.35430429e-01], [-1.63463340e+04, 1.33523700e-01], [-1.80782656e+04, 1.33286877e-01], [-1.98250000e+04, 1.31380148e-01], [-2.15848105e+04, 1.31143325e-01], [-2.34678457e+04, 1.29236596e-01]]) #NBVAL_IGNORE_OUTPUT plt.figure(); plt.plot(data[:, 0], data[:, 1], 'g--'); plt.xlabel('Capillary Pressure \n (P_water - P_air) [Pa]'); plt.ylabel('Saturation \n Porous Volume Fraction occupied by water');
examples/paper_recreations/Tranter et al. (2017)/Tranter et al. (2017) - Part A.ipynb
TomTranter/OpenPNM
mit
New Geometric Parameters The following code block cleans up the data a bit. The conduit_lengths are a new addition to OpenPNM that make it possible to apply different conductances along the length of the conduit for each section. Conduits in OpenPNM are considered to be comprised of a throat and the two half-pores on either side, and the length of each element is somewhat subjective for a converging-diverging profile such as a sphere pack or indeed fibrous media such as the GDL. We will effectively apply the conductance of the throat to the entire conduit length by setting the pore sections to be very small. For these highly porous materials the cross-sectional area of a throat is similar to that of the pore, so this is a reasonable assumption. It also helps to account for anisotropy of the material, as throats have direction vectors whereas pores do not. Boundary pores also need to be handled with care. These are placed on the faces of the domain and have zero volume, but need other properties for the conductance models to work. They are mainly used for defining the inlets and outlets of the percolation simulations and effective transport simulations. However, they are somewhat fictitious, so we do not want them contributing resistance to flow and therefore set their areas to be the highest in the network. The boundary pores are aligned with the planar faces of the domain, which is necessary for the effective transport property calculations, which consider the transport through an effective medium of defined size and shape.
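The effect of making the pore sections tiny can be seen with a simple series-resistance sketch (hypothetical numbers, not OpenPNM's actual conductance models): a conduit's conductance combines the throat and the two half-pores in series, so conductance elements with near-zero length contribute negligible resistance and the throat controls the conduit.

```python
def series_conductance(g_pore1, g_throat, g_pore2):
    """Harmonic (series) combination of the three conduit elements."""
    return 1.0 / (1.0 / g_pore1 + 1.0 / g_throat + 1.0 / g_pore2)

g_throat = 2.0
# conductance scales like area/length, so a near-zero-length pore
# section has an enormous conductance and adds almost no resistance
g_pore_huge = 1e12

g_conduit = series_conductance(g_pore_huge, g_throat, g_pore_huge)
print(g_conduit)  # ~2.0: the throat dominates the conduit
```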
net_health = pn.check_network_health()
if len(net_health['trim_pores']) > 0:
    op.topotools.trim(network=pn, pores=net_health['trim_pores'])
Ps = pn.pores()
Ts = pn.throats()
geom = op.geometry.GenericGeometry(network=pn, pores=Ps, throats=Ts, name='geometry')
geom['throat.conduit_lengths.pore1'] = 1e-12
geom['throat.conduit_lengths.pore2'] = 1e-12
geom['throat.conduit_lengths.throat'] = geom['throat.length'] - 2e-12
# Handle Boundary Pores - Zero Volume for saturation but not zero diam and area
# For flow calculations
pn['pore.diameter'][pn['pore.diameter'] == 0.0] = pn['pore.diameter'].max()
pn['pore.area'][pn['pore.area'] == 0.0] = pn['pore.area'].max()
examples/paper_recreations/Tranter et al. (2017)/Tranter et al. (2017) - Part A.ipynb
TomTranter/OpenPNM
mit
Phase Setup Now we set up the phases and apply the contact angles.
air = op.phases.Air(network=pn, name='air')
water = op.phases.Water(network=pn, name='water')
air['pore.contact_angle'] = theta_a
air["pore.surface_tension"] = water["pore.surface_tension"]
water['pore.contact_angle'] = theta_w
water["pore.temperature"] = 293.7
water.regenerate_models()
examples/paper_recreations/Tranter et al. (2017)/Tranter et al. (2017) - Part A.ipynb
TomTranter/OpenPNM
mit
Physics Setup Now we set up the physics for each phase. The default capillary pressure model from the Standard physics class is the Washburn model which applies to straight capillary tubes and we must override it here with the Purcell model. We add the model to both phases and also add a value for pore.entry_pressure making sure that it is less than any of the throat.entry_pressure values. This is done because the MixedInvasionPercolation model invades pores and throats separately and for now we just want to consider the pores to be invaded as soon as their connecting throats are.
phys_air = op.physics.Standard(network=pn, phase=air, geometry=geom, name='phys_air') phys_water = op.physics.Standard(network=pn, phase=water, geometry=geom, name='phys_water') throat_diam = 'throat.diameter' pore_diam = 'pore.indiameter' pmod = pm.capillary_pressure.purcell phys_water.add_model(propname='throat.entry_pressure', model=pmod, r_toroid=fiber_rad, diameter=throat_diam) phys_air.add_model(propname='throat.entry_pressure', model=pmod, r_toroid=fiber_rad, diameter=throat_diam) # Ignore the pore entry pressures phys_air['pore.entry_pressure'] = -999999 phys_water['pore.entry_pressure'] = -999999 print("Mean Water Throat Pc:",str(np.mean(phys_water["throat.entry_pressure"]))) print("Mean Air Throat Pc:",str(np.mean(phys_air["throat.entry_pressure"])))
examples/paper_recreations/Tranter et al. (2017)/Tranter et al. (2017) - Part A.ipynb
TomTranter/OpenPNM
mit
We apply the following late pore filling model:

$S_{wp} = S_{wp}^* \left(\frac{P_c^*}{P_c}\right)^{\eta}$

This is a heuristic model that adjusts the wetting-phase occupancy inside an individual pore after it has been invaded, and reproduces the gradual expansion of the phases into smaller sub-pore scale features such as cracks and fiber intersections.
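A quick numerical sketch of this model, using the same $S_{wp}^* = 0.25$ and $\eta = 2.5$ that this notebook passes to the physics object (the threshold pressure here is a made-up value for illustration):

```python
def late_filling(Pc, Pc_star, Swp_star=0.25, eta=2.5):
    """Residual wetting-phase saturation in an invaded pore:
    Swp = Swp* (Pc*/Pc)**eta for Pc > Pc*, else Swp*."""
    if Pc <= Pc_star:
        return Swp_star
    return Swp_star * (Pc_star / Pc) ** eta

Pc_star = 1000.0
# at the threshold pressure the pore retains Swp* of wetting phase...
print(late_filling(1000.0, Pc_star))  # -> 0.25
# ...and the residual shrinks as the pressure rises beyond it
print(late_filling(4000.0, Pc_star))  # -> 0.0078125
```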
lpf = 'pore.late_filling'
phys_water.add_model(propname='pore.pc_star',
                     model=op.models.misc.from_neighbor_throats,
                     throat_prop='throat.entry_pressure',
                     mode='min')
phys_water.add_model(propname=lpf,
                     model=pm.multiphase.late_filling,
                     pressure='pore.pressure',
                     Pc_star='pore.pc_star',
                     Swp_star=0.25,
                     eta=2.5)
examples/paper_recreations/Tranter et al. (2017)/Tranter et al. (2017) - Part A.ipynb
TomTranter/OpenPNM
mit
Finally we add the meniscus model for cooperative pore filling. The model mechanics are explained in greater detail in part c of this tutorial, but the process is shown in the animation below. The brown fibrous cage structure represents the fibers surrounding and defining a single pore in the network. The shrinking spheres represent the invading phase present at each throat. The cooperative pore filling sequence for a single pore in the network then goes as follows: as pressure increases, the phase is squeezed further into the pores and the curvature of each meniscus increases. If no menisci overlap inside the pore they are coloured blue, and when the meniscus spheres begin to intersect (inside the pore) they are coloured green. When a sphere's curvature reaches the maximum required to transition through its throat it is coloured red. Larger throats allow for smaller curvature and lower pressure. Not all spheres transition from blue to green before going red; these represent a burst before coalescence, regardless of phase occupancy. Meniscus interactions are assessed for every throat and all the neighboring throats of each pore as a pre-processing step to determine the coalescence pressure. Then, once the percolation algorithm is running, coalescence is triggered if the phase is present at the corresponding throat pairs and the coalescence pressure is lower than the burst pressure.
phys_air.add_model(propname='throat.meniscus', model=op.models.physics.meniscus.purcell, mode='men', r_toroid=fiber_rad, target_Pc=5000)
examples/paper_recreations/Tranter et al. (2017)/Tranter et al. (2017) - Part A.ipynb
TomTranter/OpenPNM
mit
Percolation Algorithms Now all the physics is defined we can setup and run two algorithms for water injection and withdrawal and compare to the experimental data. NOTE: THIS NEXT STEP MIGHT TAKE SEVERAL MINUTES.
#NBVAL_IGNORE_OUTPUT
inv_points = np.arange(-15000, 15100, 10)

IP_injection = op.algorithms.MixedInvasionPercolation(network=pn, name='injection')
IP_injection.setup(phase=water)
IP_injection.set_inlets(pores=pn.pores('bottom_boundary'))
IP_injection.settings['late_pore_filling'] = 'pore.late_filling'
IP_injection.run()
injection_data = IP_injection.get_intrusion_data(inv_points=inv_points)

IP_withdrawal = op.algorithms.MixedInvasionPercolationCoop(network=pn, name='withdrawal')
IP_withdrawal.setup(phase=air)
IP_withdrawal.set_inlets(pores=pn.pores('top_boundary'))
IP_withdrawal.setup(cooperative_pore_filling='throat.meniscus')
coop_points = np.arange(0, 1, 0.1)*inv_points.max()
IP_withdrawal.setup_coop_filling(inv_points=coop_points)
IP_withdrawal.run()
IP_withdrawal.set_outlets(pores=pn.pores(['bottom_boundary']))
IP_withdrawal.apply_trapping()
withdrawal_data = IP_withdrawal.get_intrusion_data(inv_points=inv_points)

plt.figure()
plt.plot(injection_data.Pcap, injection_data.S_tot, 'r*-')
plt.plot(-withdrawal_data.Pcap, 1-withdrawal_data.S_tot, 'b*-')
plt.plot(data[:, 0], data[:, 1], 'g--')
plt.xlabel('Capillary Pressure \n (P_water - P_air) [Pa]')
plt.ylabel('Saturation \n Porous Volume Fraction occupied by water')
plt.show()
examples/paper_recreations/Tranter et al. (2017)/Tranter et al. (2017) - Part A.ipynb
TomTranter/OpenPNM
mit
Let's take a look at the data plotted in the above cell:
print(f"Injection - capillary pressure (Pa):\n {injection_data.Pcap}")
print(f"Injection - Saturation:\n {injection_data.S_tot}")
print(f"Withdrawal - capillary pressure (Pa):\n {-withdrawal_data.Pcap}")
print(f"Withdrawal - Saturation:\n {1-withdrawal_data.S_tot}")
examples/paper_recreations/Tranter et al. (2017)/Tranter et al. (2017) - Part A.ipynb
TomTranter/OpenPNM
mit
Saving the output OpenPNM manages simulation projects with the Workspace manager class, which is a singleton instantiated when OpenPNM is first imported. We can print it to take a look at the contents.
#NBVAL_IGNORE_OUTPUT
print(ws)
examples/paper_recreations/Tranter et al. (2017)/Tranter et al. (2017) - Part A.ipynb
TomTranter/OpenPNM
mit
The project is saved for part b of this tutorial
ws.save_project(prj, '../../fixtures/hysteresis_paper_project')
examples/paper_recreations/Tranter et al. (2017)/Tranter et al. (2017) - Part A.ipynb
TomTranter/OpenPNM
mit
Create a RnR Cluster
# Either re-use an existing solr cluster id by overriding the below, or leave as None to create a new cluster
cluster_id = None
# If you choose to leave it as None, it'll use these details to request a new cluster
cluster_name = 'Test Cluster'
cluster_size = '2'
bluemix_wrapper = RetrieveAndRankProxy(solr_cluster_id=cluster_id,
                                       cluster_name=cluster_name,
                                       cluster_size=cluster_size,
                                       config=config)
examples/1.0 - Create RnR Cluster & Train Ranker.ipynb
rchaks/retrieve-and-rank-tuning
apache-2.0
Create a Solr collection Here we create a Solr document collection in the previously created cluster and upload the InsuranceLibV2 documents (i.e. answers) to the collection.
collection_id = 'TestCollection'
config_id = 'TestConfig'
zipped_solr_config = path.join(insurance_lib_data_dir, 'config.zip')
bluemix_wrapper.setup_cluster_and_collection(collection_id=collection_id,
                                             config_id=config_id,
                                             config_zip=zipped_solr_config)
examples/1.0 - Create RnR Cluster & Train Ranker.ipynb
rchaks/retrieve-and-rank-tuning
apache-2.0
Upload documents The InsuranceLibV2 data had to be pre-processed and formatted into the Solr format for adding documents. TODO: show the scripts for how to do this conversion to solr format from the raw data provided at https://github.com/shuzi/insuranceQA.
documents = path.join(insurance_lib_data_dir, 'document_corpus.solr.xml')
print('Uploading from: %s' % documents)
bluemix_wrapper.upload_documents_to_collection(collection_id=collection_id,
                                               corpus_file=documents,
                                               content_type='application/xml')
print('Uploaded %d documents to the collection' %
      bluemix_wrapper.get_num_docs_in_collection(collection_id=collection_id))
examples/1.0 - Create RnR Cluster & Train Ranker.ipynb
rchaks/retrieve-and-rank-tuning
apache-2.0
Train a Ranker Since we already have the annotated queries with the document ids that are relevant in this case, we can use that to train a ranker. TODO: show the scripts for how to do this conversion to the relevance file format from the raw data provided at https://github.com/shuzi/insuranceQA. Generate a feature file The ranker trains on top of features derived from the questions and the answers, so we need to use the service to generate such a feature file first. During feature file generation we need to decide on the num_rows parameter. We will go into this in more detail in a separate example; for now, we set it to 50.
collection_id = 'TestCollection'
cluster_id = 'sc40bbecbd_362a_4388_b61b_e3a90578d3b3'
temporary_output_dir = mkdtemp()
feature_file = path.join(temporary_output_dir, 'ranker_feature_file.csv')
print('Saving file to: %s' % feature_file)
num_rows = 50
with smart_file_open(path.join(insurance_lib_data_dir, 'validation_gt_relevance_file.csv')) as infile:
    query_stream = RankerRelevanceFileQueryStream(infile)
    with smart_file_open(feature_file, mode='w') as outfile:
        stats = generate_rnr_features(collection_id=collection_id,
                                      cluster_id=cluster_id,
                                      num_rows=num_rows,
                                      in_query_stream=query_stream,
                                      outfile=outfile,
                                      config=config)
print(json.dumps(stats, sort_keys=True, indent=4))
examples/1.0 - Create RnR Cluster & Train Ranker.ipynb
rchaks/retrieve-and-rank-tuning
apache-2.0
Call Train with the Feature File WARNING: Each RnR account allows 8 rankers to be active at any given time. Since I experiment a lot, I have a convenience flag to delete rankers in case the quota is full. You obviously want to switch this flag off if you have rankers you don't want deleted.
ranker_api_wrapper = RankerProxy(config=config)
ranker_name = 'TestRanker'
ranker_id = ranker_api_wrapper.train_ranker(train_file_location=feature_file,
                                            train_file_has_answer_id=True,
                                            is_enabled_make_space=True,
                                            ranker_name=ranker_name)
ranker_api_wrapper.wait_for_training_to_complete(ranker_id=ranker_id)

# Delete local feature file since ranker training is done
from shutil import rmtree
rmtree(temporary_output_dir)
examples/1.0 - Create RnR Cluster & Train Ranker.ipynb
rchaks/retrieve-and-rank-tuning
apache-2.0
Query the cluster with questions
query_string = 'can i add my brother to my health insurance '

def print_results(response, num_to_print=3):
    results = json.loads(response)['response']['docs']
    for i, doc in enumerate(results[0:num_to_print]):
        print('Result {}:\n\tid: {}\n\tbody:{}...'.format(i+1, doc['id'],
                                                          " ".join(doc['body'])[0:100]))

bluemix_wrapper = RetrieveAndRankProxy(solr_cluster_id="sc40bbecbd_362a_4388_b61b_e3a90578d3b3",
                                       config=config)
print('Querying with: {}'.format(query_string))

# without the ranker
pysolr_client = bluemix_wrapper.get_pysolr_client(collection_id=collection_id)
response = pysolr_client._send_request("GET", path="/fcselect?q=%s&wt=json&rows=3" % query_string)
print("\nWithout Ranker")
print_results(response)

# with ranker
pysolr_client = bluemix_wrapper.get_pysolr_client(collection_id=collection_id)
response = pysolr_client._send_request("GET",
                                       path="/fcselect?q=%s&wt=json&rows=%d&ranker_id=%s"
                                            % (query_string, num_rows, ranker_id))
print("\nWith Ranker")
print_results(response)
examples/1.0 - Create RnR Cluster & Train Ranker.ipynb
rchaks/retrieve-and-rank-tuning
apache-2.0
Get the Data
fertility_df = pd.read_csv('data/fertility.csv', index_col='Country')
life_expectancy_df = pd.read_csv('data/life_expectancy.csv', index_col='Country')
population_df = pd.read_csv('data/population.csv', index_col='Country')
regions_df = pd.read_csv('data/regions.csv', index_col='Country')
old/slider_example/Bubble plot.ipynb
birdsarah/bokeh-miscellany
gpl-2.0
Make the column names ints not strings for handling
columns = list(fertility_df.columns)
years = list(range(int(columns[0]), int(columns[-1])))
rename_dict = dict(zip(columns, years))

fertility_df = fertility_df.rename(columns=rename_dict)
life_expectancy_df = life_expectancy_df.rename(columns=rename_dict)
population_df = population_df.rename(columns=rename_dict)
regions_df = regions_df.rename(columns=rename_dict)
old/slider_example/Bubble plot.ipynb
birdsarah/bokeh-miscellany
gpl-2.0
Turn population into bubble sizes. Use min_size and factor to tweak.
scale_factor = 200
population_df_size = np.sqrt(population_df/np.pi)/scale_factor
min_size = 3
population_df_size = population_df_size.where(population_df_size >= min_size).fillna(min_size)
old/slider_example/Bubble plot.ipynb
birdsarah/bokeh-miscellany
gpl-2.0
Use pandas categories and categorize & color the regions
regions_df.Group = regions_df.Group.astype('category')
regions = list(regions_df.Group.cat.categories)

def get_color(r):
    return Spectral6[regions.index(r.Group)]

regions_df['region_color'] = regions_df.apply(get_color, axis=1)
# zip is lazy in python 3, so materialise it to see the region/color pairing
list(zip(regions, Spectral6))
old/slider_example/Bubble plot.ipynb
birdsarah/bokeh-miscellany
gpl-2.0
Build the plot Setting up the data The plot animates with the slider showing the data over time from 1964 to 2013. We can think of each year as a separate static plot, and when the slider moves, we use the Callback to change the data source that is driving the plot. We could use bokeh-server to drive this change, but as the data is not too big we can also pass all the datasets to the javascript at once and switch between them on the client side. This means that we need to build one data source for each year that we have data for and are going to switch between using the slider. We build them and add them to a dictionary sources that holds them under a key that is the name of the year prefixed with a _.
sources = {}
region_color = regions_df['region_color']
region_color.name = 'region_color'
for year in years:
    fertility = fertility_df[year]
    fertility.name = 'fertility'
    life = life_expectancy_df[year]
    life.name = 'life'
    population = population_df_size[year]
    population.name = 'population'
    new_df = pd.concat([fertility, life, population, region_color], axis=1)
    sources['_' + str(year)] = ColumnDataSource(new_df)
old/slider_example/Bubble plot.ipynb
birdsarah/bokeh-miscellany
gpl-2.0
Add the slider and callback Last, but not least, we add the slider widget and the JS callback code which changes the data of the renderer_source (powering the bubbles / circles) and the data of the text_source (powering background text). After we've set() the data we need to trigger() a change. slider, renderer_source, text_source are all available because we add them as args to Callback. It is the combination of sources = %s % (js_source_array) in the JS and Callback(args=sources...) that provides the ability to look-up, by year, the JS version of our python-made ColumnDataSource.
# Add the slider
code = """
    var year = slider.get('value'),
        sources = %s,
        new_source_data = sources[year].get('data');
    renderer_source.set('data', new_source_data);
    renderer_source.trigger('change');
    text_source.set('data', {'year': [String(year)]});
    text_source.trigger('change');
""" % js_source_array

callback = Callback(args=sources, code=code)
slider = Slider(start=years[0], end=years[-1], value=1, step=1, title="Year", callback=callback)
callback.args["slider"] = slider
callback.args["renderer_source"] = renderer_source
callback.args["text_source"] = text_source
old/slider_example/Bubble plot.ipynb
birdsarah/bokeh-miscellany
gpl-2.0
Embed in a template and render Last but not least, we use vplot to stick together the chart and the slider, and we embed that in a template we write, using the script, div output from components. We display it in IPython and save it as an html file.
# Stick the plot and the slider together
layout = vplot(plot, hplot(slider))

with open('gapminder_template_simple.html', 'r') as f:
    template = Template(f.read())

script, div = components(layout)
html = template.render(
    title="Bokeh - Gapminder demo",
    plot_script=script,
    plot_div=div,
)

with open('gapminder_simple.html', 'w') as f:
    f.write(html)

display(HTML(html))
old/slider_example/Bubble plot.ipynb
birdsarah/bokeh-miscellany
gpl-2.0
Good. Here's the code that is being run, inside the "XrayData" class:

```python
def evaluate_log_prior(self):
    # Uniform in all parameters...
    return 0.0  # HACK

def evaluate_log_likelihood(self):
    self.make_mean_image()
    # Return un-normalized Poisson sampling distribution:
    # log (\mu^N e^{-\mu} / N!) = N log \mu - \mu + constant
    return np.sum(self.im * np.log(self.mu) - self.mu)

def evaluate_unnormalised_log_posterior(self, x0, y0, S0, rc, beta, b):
    self.set_pars(x0, y0, S0, rc, beta, b)
    return self.evaluate_log_likelihood() + self.evaluate_log_prior()
```

It's worth staring at, and thinking about, this code for a few minutes. Recall from the PGM discussion above that we have

${\rm Pr}(\theta\,|\,\{N_k\},H) = \frac{1}{Z} \prod_k \; {\rm Pr}(N_k\,|\,\mu_k(\theta),{\rm ex}_k,{\rm pb}_k,H) \; {\rm Pr}(\theta\,|\,H)$

where $Z = {\rm Pr}(\{N_k\}\,|\,H)$.

The product over the (assumed) independent pixel values' Poisson sampling distribution terms becomes a sum in the log likelihood. If the prior PDF for all parameters is uniform, then the log prior (and the prior) is just a constant (whose actual value is unimportant). In other problems we will need to be more careful than this!

Now let's try evaluating the 2D posterior PDF for cluster position, conditioned on reasonable values of the cluster and background flux, cluster size and beta:
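The likelihood line can be sanity-checked on a tiny fake image (made-up counts and mean images, not the real cluster data): the unnormalised Poisson log-likelihood is just $\sum_k (N_k \log\mu_k - \mu_k)$, and a mean image that matches the counts should score higher than one that does not.

```python
import numpy as np

def unnorm_poisson_loglike(counts, mu):
    # log Pr({N_k}|mu) = sum_k [N_k log mu_k - mu_k] + constant
    return np.sum(counts * np.log(mu) - mu)

counts = np.array([[2.0, 0.0], [1.0, 3.0]])
mu_a = np.array([[2.0, 0.5], [1.0, 3.0]])   # mean image close to the data
mu_b = np.array([[0.1, 5.0], [4.0, 0.2]])   # mean image far from the data

# the better-matching mean image gets the higher log-likelihood
print(unnorm_poisson_loglike(counts, mu_a) > unnorm_poisson_loglike(counts, mu_b))  # -> True
```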
npix = 15
# Initial guess at the interesting range of cluster position parameters:
xmin, xmax = 310, 350
ymin, ymax = 310, 350
# Refinement, found by fiddling around a bit:
# xmin,xmax = 327.7,328.3
# ymin,ymax = 346.4,347.0
x0grid = np.linspace(xmin, xmax, npix)
y0grid = np.linspace(ymin, ymax, npix)
logprob = np.zeros([npix, npix])
for i, x0 in enumerate(x0grid):
    for j, y0 in enumerate(y0grid):
        logprob[j, i] = lets.evaluate_unnormalised_log_posterior(x0, y0, S0, rc, beta, b)
    print("Done column", i)
print(logprob[0:5, 0])
examples/XrayImage/Inference.ipynb
wmorning/StatisticalMethods
gpl-2.0
Implement Preprocessing Functions The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below: - Lookup Table - Tokenize Punctuation Lookup Table To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries: - Dictionary to go from the words to an id, we'll call vocab_to_int - Dictionary to go from the id to word, we'll call int_to_vocab Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
import numpy as np
import problem_unittests as tests

def create_lookup_tables(text):
    """
    Create lookup tables for vocabulary
    :param text: The text of tv scripts split into words
    :return: A tuple of dicts (vocab_to_int, int_to_vocab)
    """
    words = set(text)
    # name the dicts after the tuple the project expects: (vocab_to_int, int_to_vocab)
    vocab_to_int = {w: i for i, w in enumerate(words)}
    int_to_vocab = {i: w for i, w in enumerate(words)}
    return vocab_to_int, int_to_vocab


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
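A quick standalone check of the lookup tables (restating the function so the snippet runs on its own): the two dicts should be exact inverses, so encoding and decoding round-trips the text.

```python
def create_lookup_tables(text):
    words = set(text)
    vocab_to_int = {w: i for i, w in enumerate(words)}
    int_to_vocab = {i: w for i, w in enumerate(words)}
    return vocab_to_int, int_to_vocab

text = "the cat sat on the mat".split()
vocab_to_int, int_to_vocab = create_lookup_tables(text)

assert len(vocab_to_int) == len(set(text))  # one id per unique word
assert all(int_to_vocab[i] == w for w, i in vocab_to_int.items())  # inverse maps

encoded = [vocab_to_int[w] for w in text]
decoded = [int_to_vocab[i] for i in encoded]
```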
py3/project-3/dlnd_tv_script_generation.ipynb
jjonte/udacity-deeplearning-nd
unlicense
Tokenize Punctuation We'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks makes it hard for the neural network to distinguish between the word "bye" and "bye!". Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token: - Period ( . ) - Comma ( , ) - Quotation Mark ( " ) - Semicolon ( ; ) - Exclamation mark ( ! ) - Question mark ( ? ) - Left Parentheses ( ( ) - Right Parentheses ( ) ) - Dash ( -- ) - Return ( \n ) This dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
def token_lookup(): """ Generate a dict to turn punctuation into a token. :return: Tokenize dictionary where the key is the punctuation and the value is the token """ return { '.':'||PERIOD||', ',':'||COMMA||', '"':'||QUOTATION_MARK||', ';':'||SEMICOLON||', '!':'||EXCLAMATION_MARK||', '?':'||QUESTION_MARK||', '(':'||LEFT_PARENTHESES||', ')':'||RIGHT_PARENTHESES||', '--':'||DASH||', '\n':'||NEWLINE||' } """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_tokenize(token_lookup)
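To see the tokenizer in use, here is a minimal sketch of the downstream preprocessing step (the surrounding-space replacement mirrors what the project does internally; the sample sentence is made up).

```python
def token_lookup():
    return {'.': '||PERIOD||', ',': '||COMMA||', '"': '||QUOTATION_MARK||',
            ';': '||SEMICOLON||', '!': '||EXCLAMATION_MARK||', '?': '||QUESTION_MARK||',
            '(': '||LEFT_PARENTHESES||', ')': '||RIGHT_PARENTHESES||',
            '--': '||DASH||', '\n': '||NEWLINE||'}

text = 'Bye! See you -- soon.'
# surround each symbol's token with spaces so split() treats it as its own word
for symbol, token in token_lookup().items():
    text = text.replace(symbol, ' {} '.format(token))
words = text.split()
print(words)
```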
py3/project-3/dlnd_tv_script_generation.ipynb
jjonte/udacity-deeplearning-nd
unlicense
Input Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders: - Input text placeholder named "input" using the TF Placeholder name parameter. - Targets placeholder - Learning Rate placeholder Return the placeholders in the following tuple (Input, Targets, LearningRate)
def get_inputs(): """ Create TF Placeholders for input, targets, and learning rate. :return: Tuple (input, targets, learning rate) """ input = tf.placeholder(tf.int32, shape=(None, None), name="input") targets = tf.placeholder(tf.int32, shape=(None, None), name="targets") learning_rate = tf.placeholder(tf.float32, name="learning_rate") return input, targets, learning_rate """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_get_inputs(get_inputs)
py3/project-3/dlnd_tv_script_generation.ipynb
jjonte/udacity-deeplearning-nd
unlicense
Build RNN Cell and Initialize Stack one or more BasicLSTMCells in a MultiRNNCell. - The RNN size should be set using rnn_size - Initialize Cell State using the MultiRNNCell's zero_state() function - Apply the name "initial_state" to the initial state using tf.identity() Return the cell and initial state in the following tuple (Cell, InitialState)
def get_init_cell(batch_size, rnn_size):
    """
    Create an RNN Cell and initialize it.
    :param batch_size: Size of batches
    :param rnn_size: Size of RNNs
    :return: Tuple (cell, initialize state)
    """
    lstm_layer_count = 2
    # Build a separate BasicLSTMCell per layer: reusing a single cell object
    # ([lstm] * n) would make every layer share the same weights
    cell = tf.contrib.rnn.MultiRNNCell(
        [tf.contrib.rnn.BasicLSTMCell(rnn_size) for _ in range(lstm_layer_count)])

    init_state = cell.zero_state(batch_size, tf.float32)
    init_state = tf.identity(init_state, name='initial_state')

    return cell, init_state


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_init_cell(get_init_cell)
py3/project-3/dlnd_tv_script_generation.ipynb
jjonte/udacity-deeplearning-nd
unlicense
Word Embedding Apply embedding to input_data using TensorFlow. Return the embedded sequence.
def get_embed(input_data, vocab_size, embed_dim): """ Create embedding for <input_data>. :param input_data: TF placeholder for text input. :param vocab_size: Number of words in vocabulary. :param embed_dim: Number of embedding dimensions :return: Embedded input. """ random = tf.Variable(tf.random_normal((vocab_size, embed_dim))) return tf.nn.embedding_lookup(random, input_data) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_get_embed(get_embed)
py3/project-3/dlnd_tv_script_generation.ipynb
jjonte/udacity-deeplearning-nd
unlicense
Build RNN You created an RNN Cell in the get_init_cell() function. Time to use the cell to create an RNN. - Build the RNN using tf.nn.dynamic_rnn() - Apply the name "final_state" to the final state using tf.identity() Return the outputs and final state in the following tuple (Outputs, FinalState)
def build_rnn(cell, inputs): """ Create a RNN using a RNN Cell :param cell: RNN Cell :param inputs: Input text data :return: Tuple (Outputs, Final State) """ output, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32) return output, tf.identity(final_state, name='final_state') """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_build_rnn(build_rnn)
py3/project-3/dlnd_tv_script_generation.ipynb
jjonte/udacity-deeplearning-nd
unlicense
Build the Neural Network Apply the functions you implemented above to: - Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function. - Build RNN using cell and your build_rnn(cell, inputs) function. - Apply a fully connected layer with a linear activation and vocab_size as the number of outputs. Return the logits and final state in the following tuple (Logits, FinalState)
def build_nn(cell, rnn_size, input_data, vocab_size): """ Build part of the neural network :param cell: RNN cell :param rnn_size: Size of rnns :param input_data: Input data :param vocab_size: Vocabulary size :return: Tuple (Logits, FinalState) """ embed = get_embed(input_data, vocab_size, rnn_size) output, final_state = build_rnn(cell, embed) logits = tf.contrib.layers.fully_connected(output, vocab_size, activation_fn=None) return logits, final_state """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_build_nn(build_nn)
py3/project-3/dlnd_tv_script_generation.ipynb
jjonte/udacity-deeplearning-nd
unlicense
Batches Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements: - The first element is a single batch of input with the shape [batch size, sequence length] - The second element is a single batch of targets with the shape [batch size, sequence length] If you can't fill the last batch with enough data, drop the last batch. For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3) would return a Numpy array of the following: ``` [ # First Batch [ # Batch of Input [[ 1 2 3], [ 7 8 9]], # Batch of targets [[ 2 3 4], [ 8 9 10]] ], # Second Batch [ # Batch of Input [[ 4 5 6], [10 11 12]], # Batch of targets [[ 5 6 7], [11 12 13]] ] ] ```
def get_batches(int_text, batch_size, seq_length): """ Return batches of input and target :param int_text: Text with the words replaced by their ids :param batch_size: The size of batch :param seq_length: The length of sequence :return: Batches as a Numpy array """ batch_count = int(len(int_text) / (batch_size * seq_length)) x_data = np.array(int_text[: batch_count * batch_size * seq_length]) x_batches = np.split(x_data.reshape(batch_size, -1), batch_count, 1) y_data = np.array(int_text[1: batch_count * batch_size * seq_length + 1]) y_batches = np.split(y_data.reshape(batch_size, -1), batch_count, 1) return np.array(list(zip(x_batches, y_batches))) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_get_batches(get_batches)
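The slicing in get_batches can be checked against the worked example from the prompt; here is a standalone restatement (pure numpy, no test harness) run on [1..15] with batch size 2 and sequence length 3.

```python
import numpy as np

def get_batches(int_text, batch_size, seq_length):
    n_batches = len(int_text) // (batch_size * seq_length)
    # inputs are the first n*b*s ids; targets are the same ids shifted by one
    xs = np.array(int_text[: n_batches * batch_size * seq_length])
    ys = np.array(int_text[1: n_batches * batch_size * seq_length + 1])
    x_batches = np.split(xs.reshape(batch_size, -1), n_batches, axis=1)
    y_batches = np.split(ys.reshape(batch_size, -1), n_batches, axis=1)
    return np.array(list(zip(x_batches, y_batches)))

batches = get_batches(list(range(1, 16)), 2, 3)
print(batches)
```

The shape (number of batches, 2, batch size, sequence length) and the values match the docstring example: the 15th id is dropped because it cannot fill a batch.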
py3/project-3/dlnd_tv_script_generation.ipynb
jjonte/udacity-deeplearning-nd
unlicense
Neural Network Training Hyperparameters Tune the following parameters: Set num_epochs to the number of epochs. Set batch_size to the batch size. Set rnn_size to the size of the RNNs. Set seq_length to the length of sequence. Set learning_rate to the learning rate. Set show_every_n_batches to the number of batches the neural network should print progress.
# Number of Epochs num_epochs = 180 # Batch Size batch_size = 100 # RNN Size rnn_size = 256 # Sequence Length seq_length = 16 # Learning Rate learning_rate = 0.001 # Show stats for every n number of batches show_every_n_batches = 10 """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ save_dir = './save'
py3/project-3/dlnd_tv_script_generation.ipynb
jjonte/udacity-deeplearning-nd
unlicense
Implement Generate Functions Get Tensors Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names: - "input:0" - "initial_state:0" - "final_state:0" - "probs:0" Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
def get_tensors(loaded_graph): """ Get input, initial state, final state, and probabilities tensor from <loaded_graph> :param loaded_graph: TensorFlow graph loaded from file :return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor) """ input = loaded_graph.get_tensor_by_name('input:0') initial_state = loaded_graph.get_tensor_by_name('initial_state:0') final_state = loaded_graph.get_tensor_by_name('final_state:0') probabilities = loaded_graph.get_tensor_by_name('probs:0') return input, initial_state, final_state, probabilities """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_get_tensors(get_tensors)
py3/project-3/dlnd_tv_script_generation.ipynb
jjonte/udacity-deeplearning-nd
unlicense
Choose Word Implement the pick_word() function to select the next word using probabilities.
def pick_word(probabilities, int_to_vocab):
    """
    Pick the next word in the generated text
    :param probabilities: Probabilities of the next word
    :param int_to_vocab: Dictionary of word ids as the keys and words as the values
    :return: String of the predicted word
    """
    # Sample uniformly from the 10 most probable words: this adds variety
    # to the generated text while keeping very unlikely words out
    probabilities = [(i, p) for i, p in enumerate(probabilities)]
    probabilities.sort(key=lambda x: x[1], reverse=True)
    choice = np.random.choice([i[0] for i in probabilities[:10]])
    return int_to_vocab[choice]


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_pick_word(pick_word)
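An alternative to sampling uniformly from the 10 most likely words is to sample directly from the full output distribution with np.random.choice's p argument. A sketch with a made-up three-word vocabulary:

```python
import numpy as np

def pick_word(probabilities, int_to_vocab):
    # sample an id with probability proportional to the network's output
    idx = np.random.choice(len(probabilities), p=probabilities)
    return int_to_vocab[idx]

int_to_vocab = {0: 'homer', 1: 'marge', 2: 'bart'}
probs = np.array([0.1, 0.2, 0.7])

np.random.seed(0)
samples = [pick_word(probs, int_to_vocab) for _ in range(1000)]
frac = samples.count('bart') / 1000  # should be close to 0.7
print(frac)
```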
py3/project-3/dlnd_tv_script_generation.ipynb
jjonte/udacity-deeplearning-nd
unlicense
Table 1 - VOTable with all source properties
tbl1 = ascii.read("http://iopscience.iop.org/0004-637X/758/1/31/suppdata/apj443828t1_mrt.txt") tbl1.columns tbl1[0:5] len(tbl1)
notebooks/Luhman2012.ipynb
BrownDwarf/ApJdataFrames
mit
Cross match with SIMBAD
from astroquery.simbad import Simbad
import astropy.coordinates as coord
import astropy.units as u

customSimbad = Simbad()
customSimbad.add_votable_fields('otype', 'sptype')

query_list = tbl1["Name"].data.data

result = customSimbad.query_objects(query_list, verbose=True)
result[0:3]

print("There were {} sources queried, and {} sources found.".format(len(query_list), len(result)))
if len(query_list) == len(result):
    print("Hooray! Everything matched")
else:
    print("Which ones were not found?")

def add_input_column_to_simbad_result(self, input_list, verbose=False):
    """
    Adds 'INPUT' column to the result of a Simbad query

    Parameters
    ----------
    object_names : sequence of strs
        names of objects from most recent query
    verbose : boolean, optional
        When `True`, verbose output is printed

    Returns
    -------
    table : `~astropy.table.Table`
        Query results table
    """
    error_string = self.last_parsed_result.error_raw
    fails = []
    for error in error_string.split("\n"):
        start_loc = error.rfind(":")+2
        fail = error[start_loc:]
        fails.append(fail)

    successes = [s for s in input_list if s not in fails]
    if verbose:
        out_message = "There were {} successful Simbad matches and {} failures."
        print(out_message.format(len(successes), len(fails)))

    self.last_parsed_result.table["INPUT"] = successes

    return self.last_parsed_result.table

result_fix = add_input_column_to_simbad_result(customSimbad, query_list, verbose=True)

tbl1_pd = tbl1.to_pandas()
result_pd = result_fix.to_pandas()
tbl1_plusSimbad = pd.merge(tbl1_pd, result_pd, how="left", left_on="Name", right_on="INPUT")
notebooks/Luhman2012.ipynb
BrownDwarf/ApJdataFrames
mit
Save the data table locally.
tbl1_plusSimbad.head() ! mkdir ../data/Luhman2012/ tbl1_plusSimbad.to_csv("../data/Luhman2012/tbl1_plusSimbad.csv", index=False)
notebooks/Luhman2012.ipynb
BrownDwarf/ApJdataFrames
mit
Load the HRPC Mailing List Now let's load the email data for analysis.
wg = "hrpc" urls = [wg] archives = [Archive(url,mbox=True) for url in urls] activities = [arx.get_activity(resolved=False) for arx in archives] activity = activities[0]
examples/name-and-gender/Working Group Emails and Drafts-hrpc.ipynb
datactive/bigbang
mit
Load IETF Draft Data Next, we will use the ietfdata tracker to look at the frequency of drafts for this working group.
from ietfdata.datatracker import * from ietfdata.datatracker_ext import * import pandas as pd dt = DataTracker() g = dt.group_from_acronym("hrpc") drafts = [draft for draft in dt.documents(group=g, doctype=dt.document_type_from_slug("draft"))] draft_df = pd.DataFrame.from_records([ {'time' : draft.time, 'title' : draft.title, 'id' : draft.id} for draft in drafts] )
examples/name-and-gender/Working Group Emails and Drafts-hrpc.ipynb
datactive/bigbang
mit
We will want to use the date of the drafts; the full timestamps have a finer resolution than we need.
draft_df['date'] = draft_df['time'].dt.date
examples/name-and-gender/Working Group Emails and Drafts-hrpc.ipynb
datactive/bigbang
mit
Plotting Some preprocessing is necessary to get the drafts data ready for plotting.
from matplotlib import cm viridis = cm.get_cmap('viridis') drafts_per_day = draft_df.groupby('date').count()['title']
examples/name-and-gender/Working Group Emails and Drafts-hrpc.ipynb
datactive/bigbang
mit
For each of the mailing lists we are looking at, plot the rolling average (over window) of number of emails sent per day. Then plot a vertical line with the height of the drafts count and colored by the gender tendency.
window = 100 plt.figure(figsize=(12, 6)) for i, gender in enumerate(gender_activity.columns): colors = [viridis(0), viridis(.5), viridis(.99)] ta = gender_activity[gender] rmta = ta.rolling(window).mean() rmtadna = rmta.dropna() plt.plot_date(np.array(rmtadna.index), np.array(rmtadna.values), color = colors[i], linestyle = '-', marker = None, label='%s email activity - %s' % (wg, gender), xdate=True) vax = plt.vlines(drafts_per_day.index, 0, drafts_per_day, colors = 'r', # draft_gt_per_day, cmap = 'viridis', label=f'{wg} drafts ({drafts_per_day.sum()} total)') plt.legend() plt.title(f"{wg} working group emails and drafts") #plt.colorbar(vax, label = "more womanly <-- Gender Tendency --> more manly") #plt.savefig("activites-marked.png") #plt.show()
examples/name-and-gender/Working Group Emails and Drafts-hrpc.ipynb
datactive/bigbang
mit
Is gender diversity correlated with draft output?
from scipy.stats import pearsonr import pandas as pd def calculate_pvalues(df): df = df.dropna()._get_numeric_data() dfcols = pd.DataFrame(columns=df.columns) pvalues = dfcols.transpose().join(dfcols, how='outer') for r in df.columns: for c in df.columns: pvalues[r][c] = round(pearsonr(df[r], df[c])[1], 4) return pvalues drafts_per_ordinal_day = pd.Series({x[0].toordinal(): x[1] for x in drafts_per_day.items()}) drafts_per_ordinal_day ta.rolling(window).mean() garm = np.log1p(gender_activity.rolling(window).mean())
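For reference, the Pearson coefficient that pearsonr reports can be computed directly from its definition; a numpy-only sketch on toy data:

```python
import numpy as np

def pearson_r(x, y):
    # r = cov(x, y) / (std(x) * std(y)), written out explicitly
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return (xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum())

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
print(pearson_r(x, 2 * x + 1))   # perfectly linearly related: r = 1
print(pearson_r(x, -x))          # perfectly anti-correlated: r = -1
```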
examples/name-and-gender/Working Group Emails and Drafts-hrpc.ipynb
datactive/bigbang
mit
Measuring diversity As a rough measure of gender diversity, we sum the mailing list activity of women and those of unidentified gender, and divide by the activity of men.
garm['diversity'] = (garm['unknown'] + garm['women']) / garm['men'] garm['drafts'] = drafts_per_ordinal_day garm['drafts'] = garm['drafts'].fillna(0) garm.corr(method='pearson') calculate_pvalues(garm)
examples/name-and-gender/Working Group Emails and Drafts-hrpc.ipynb
datactive/bigbang
mit
Sparse 2d interpolation In this example the values of a scalar field $f(x,y)$ are known at a very limited set of points in a square domain: The square domain covers the region $x\in[-5,5]$ and $y\in[-5,5]$. The values of $f(x,y)$ are zero on the boundary of the square at integer spaced points. The value of $f$ is known at a single interior point: $f(0,0)=1.0$. The function $f$ is not known at any other points. Create arrays x, y, f: x should be a 1d array of the x coordinates on the boundary and the 1 interior point. y should be a 1d array of the y coordinates on the boundary and the 1 interior point. f should be a 1d array of the values of f at the corresponding x and y coordinates. You might find that np.hstack is helpful.
x1=np.arange(-5,6) y1=5*np.ones(11) f1=np.zeros(11) x2=np.arange(-5,6) y2=-5*np.ones(11) f2=np.zeros(11) y3=np.arange(-4,5) x3=5*np.ones(9) f3=np.zeros(9) y4=np.arange(-4,5) x4=-5*np.ones(9) f4=np.zeros(9) x5=np.array([0]) y5=np.array([0]) f5=np.array([1]) x=np.hstack((x1,x2,x3,x4,x5)) y=np.hstack((y1,y2,y3,y4,y5)) f=np.hstack((f1,f2,f3,f4,f5)) print (x) print (y) print (f)
assignments/assignment08/InterpolationEx02.ipynb
ajhenrikson/phys202-2015-work
mit
Use meshgrid and griddata to interpolate the function $f(x,y)$ on the entire square domain: xnew and ynew should be 1d arrays with 100 points between $[-5,5]$. Xnew and Ynew should be 2d versions of xnew and ynew created by meshgrid. Fnew should be a 2d array with the interpolated values of $f(x,y)$ at the points (Xnew,Ynew). Use cubic spline interpolation.
xnew = np.linspace(-5, 5, 100)   # was linspace(-5.0, 6.0, 100): the domain ends at 5
ynew = np.linspace(-5, 5, 100)
Xnew, Ynew = np.meshgrid(xnew, ynew)
Fnew = griddata((x, y), f, (Xnew, Ynew), method='cubic', fill_value=0.0)

assert xnew.shape==(100,)
assert ynew.shape==(100,)
assert Xnew.shape==(100,100)
assert Ynew.shape==(100,100)
assert Fnew.shape==(100,100)
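griddata's behaviour is easy to sanity-check on a toy problem where the answer is known exactly: linear interpolation of the affine function f(x,y) = x + y from the unit square's corners must reproduce it everywhere inside the square.

```python
import numpy as np
from scipy.interpolate import griddata

# f(x, y) = x + y, known only at the unit square's four corners
pts = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=float)
vals = pts.sum(axis=1)

# Linear interpolation is exact for an affine function, whatever the triangulation
center = griddata(pts, vals, np.array([[0.5, 0.5]]), method='linear')[0]
print(center)
```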
assignments/assignment08/InterpolationEx02.ipynb
ajhenrikson/phys202-2015-work
mit
Plot the values of the interpolated scalar field using a contour plot. Customize your plot to make it effective and beautiful.
plt.figure(figsize=(10,8))
# pass the coordinate grids so the axes show x and y values rather than array indices
plt.contourf(Xnew, Ynew, Fnew, cmap='cubehelix_r')
plt.colorbar(label='$f(x,y)$')
plt.xlabel('x')
plt.ylabel('y')
plt.title('2D Interpolation');

assert True # leave this to grade the plot
assignments/assignment08/InterpolationEx02.ipynb
ajhenrikson/phys202-2015-work
mit
Numpy's ndarray One of the reasons that makes numpy a great tool for computations on arrays is its ndarray class. This class lets us declare arrays with a number of convenient methods and attributes that make our life easier when programming complex algorithms on large arrays.
#Now let's see what one of its instances looks like:
a = np.ndarray(4)
b = np.ndarray([3,4])
#Note: np.ndarray allocates memory without initializing it,
#so the printed values are whatever happened to be in memory
print(type(b))
print('a: ', a)
print('b: ', b)
day2/numpy-intro.ipynb
timothydmorton/usrp-sciprog
mit
There is a wide range of numpy functions that allow to declare ndarrays filled with your favourite flavours: https://docs.scipy.org/doc/numpy/reference/routines.array-creation.html
# zeros z = np.zeros(5) print(type(z)) print(z) # ones o = np.ones((4,2)) print(type(o)) print(o) # ordered integers oi = np.arange(10) #Only one-dimensional print(type(oi)) print(oi)
day2/numpy-intro.ipynb
timothydmorton/usrp-sciprog
mit
Operations on ndarrays Arithmetic operations on ndarrays are possible using Python's arithmetic operators. It is important to notice that these operations are performed term by term on arrays of the same size and dimensions. It is also possible to perform operations between ndarrays and numbers, in which case the same operation is applied to all the elements of the array. This is more generally true for operations on arrays where one array lacks one or several dimensions.
#An array of ordered integers
x = np.arange(5)
#An array of random values drawn uniformly between 0 and 1
y = np.random.rand(5)

print('x: ', x)
print('y: ', y)
print('addition: ', x + y)
print('multiplication: ', x * y)
print('power: ', x ** y)

#Operation with numbers
print('subtraction: ', x - 3)
print('fraction: ', x / 2)
print('power: ', x ** 0.5)

#Beware incompatible shapes: (play with the dimensions of y)
#a (5,) array and a (6,) array cannot be broadcast together,
#so the next line raises a ValueError
y = np.ones((6))
print('addition: ', x + y)
print('multiplication: ', x * y)
print('power: ', x ** y)
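The cell above ends in a ValueError because a (5,) array and a (6,) array cannot be broadcast together. A compact way to see, and contain, that failure:

```python
import numpy as np

x = np.arange(5)

# Same shape: elementwise operation
print(x + np.ones(5))

# A scalar broadcasts to every element
print(x * 2)

# Mismatched 1-d lengths cannot broadcast: numpy raises ValueError
try:
    x + np.ones(6)
except ValueError as err:
    print('broadcast failed:', err)
```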
day2/numpy-intro.ipynb
timothydmorton/usrp-sciprog
mit
ndarrays and numpy also have methods or functions to perform matrix operations:
#Let's just declare some new arrays x = (np.random.rand(4,5)*10).astype(int) # note, astype is a method that allows to change the type of all the elements in the ndarray y = np.ones((5))+1 # Note: here, show addition of non-matching shapes #np.ones((5,3,4))+np.random.randn(4) #transpose print('the array x: \n', x) print('its transpose: \n', x.T) #Matrix multiplication (play with the dimensions of y to see how this impact the results) z1 = np.dot(x,y) z2 = x.dot(y) print(z1) print(z2)
day2/numpy-intro.ipynb
timothydmorton/usrp-sciprog
mit
array shapes It is possible to access the shape and size (there is a difference!) of an array, and even to alter its shape in various different ways.
print('Shape of x: ',x.shape) # From ndarray attributes print('Shape of y: ',np.shape(y)) # From numpy function print('Size of x: ', x.size) # From ndarray attributes print('Size of y: ', np.size(y)) # From numpy function
day2/numpy-intro.ipynb
timothydmorton/usrp-sciprog
mit
Now this is how we can change an array's shape:
print('the original array: \n', x) print('change of shape: \n', x.reshape((10,2)))#reshape 4x5 into 10x2 print('change of shape and number of dimensions: \n', x.reshape((5,2,2)))#reshape 4x5 into 5x2x2 print('the size has to be conserved: \n', x.reshape((10,2)).size) #flattenning an array: xflat = x.flatten() print('flattened array: \n {} \n with shape {}'.format(xflat, xflat.shape))
day2/numpy-intro.ipynb
timothydmorton/usrp-sciprog
mit
Indexing with numpy For the most part, indexing in numpy works exactly as we saw in python. We are going to use this section to introduce a couple of features for indexing (some native to python) that can significantly improve your coding skills. Notably, numpy introduces a particularly useful object: np.newaxis.
#conventional indexing
print(x)
print('first line of x: {}'.format(x[0,:]))
print('second column of x: {}'.format(x[:,1]))
print('last element of x: {}'.format(x[-1,-1]))

#selection
print('One element in 3 between the second and 13th element: ', xflat[1:14:3])
#This selection writes as array[begin:end:step]
#Equivalent to:
print('One element in 3 between the second and 13th element: ', xflat[slice(1,14,3)])
#Both notations are strictly equivalent, but slice allows declaring slices that can be reused on different arrays:
sl1 = slice(1,3,1)
sl2 = slice(0,-1,2)
print('sliced array: ', x[sl1, sl2])

# Inverting the order in an array
print(xflat)
print(xflat[::-1])

#conditional indexing
print('all numbers greater than 3: ', x[x>3])
bool_array = (x == 8)
print('bool array is an array of booleans that can be used as indices: \n',bool_array)
print('all numbers equal to 8: ', x[bool_array])

#Ellipsis: select all across all missing dimensions
x_multi = np.arange(32).reshape(2,2,4,2)
print(x_multi)
print(x_multi[0,...,1])
print(x_multi[0,:,:,1])
day2/numpy-intro.ipynb
timothydmorton/usrp-sciprog
mit
ndarray methods for simple operations on array elements Here I list a small number of ndarray methods that are very convenient and often used in astronomy and image processing. It is always a good thing to have them in mind to simplify your code. Of course, we only take a look at a few of them, but there is plenty more where that came from.
a = np.linspace(1,6,3) # 3 values evenly spaced between 1 and 6
b = np.arange(16).reshape(4,4)
c = np.random.randn(3,4)*10 # random draws from a normal distribution with standard deviation 10
print(f'Here are 3 new arrays, a:\n {a}, \nb:\n {b}\nand c:\n {c}')

#Sum the elements of an array
print('Sum over all of the array\'s elements: ', a.sum())
print('Sum along the lines: ', b.sum(axis = 1))
print('Sum along the columns: ', b.sum(axis = 0))
#The axis option will be available for most numpy functions/methods

#Compute the mean and standard deviation:
print('mean of an array: ', b.mean())
print('std of an array: ', c.std())

#min and max of an array and their positions
print('the minimum value of array b is {} and it is at position {}'.format(b.min(), b.argmin()))
print('the maximum value of array c is {} and it is at position {}'.format(c.max(), c.argmax()))

#sort an array's elements along one axis or return the indexes of the sorted array's elements:
print('c', c)
argc = c.argsort()
print('The indexes that sort c and a sorted version of c: \n \n {}\nand \n {} \n'.format(argc, c.sort()))
day2/numpy-intro.ipynb
timothydmorton/usrp-sciprog
mit
Oops, not what we were expecting: c.sort() sorts in place and returns None, so c was replaced by its sorted version. This is an in-place computation.
print(c)

#Your turn now: give me ALL the elements of c sorted (not just along one axis).

#Your answer....

#Then, sort the array in decreasing order

#Your answer....
day2/numpy-intro.ipynb
timothydmorton/usrp-sciprog
mit
Now, we are going to see an important feature in numpy. While one can live without knowing this trick, one cannot be a good python coder without using it. I am talking about the mighty: Newaxis!! Newaxis adds a dimension to an array. This lets us expand arrays cheaply, which leads to faster operations on large arrays.
import numpy as np #A couple of arrays first: x_arr = np.arange(10) y_arr = np.arange(10) print(x_arr.shape) x = x_arr[np.newaxis,:] print(x.shape) print(x_arr) print(x) print(x+x_arr) #Now let's index these with newaxes: print('Newaxis indexed array \n {} and its shape \n {}'.format(x_arr[:,np.newaxis],x_arr[:,np.newaxis].shape)) print('None leads to the same result: array \n {} and shape \n {}'.format(y_arr[None,:],y_arr[None,:].shape)) #Sum of elements print('sum of the arrays:', (x_arr + y_arr)) #Sum of elements with newaxes print('sum of the arrays: \n', (x_arr[None, :] + y_arr[:, None])) #This is because we have been summing these arrays: print(' ',x_arr[None, :]) print(y_arr[:, None])
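A common practical use of newaxis is building whole pairwise tables in one vectorized step; for example, the distance matrix between a handful of made-up 1-d points:

```python
import numpy as np

# a column (3, 1) minus a row (1, 3) broadcasts to a full (3, 3) table
points = np.array([0.0, 1.5, 4.0])
dist = np.abs(points[:, np.newaxis] - points[np.newaxis, :])
print(dist)
```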
day2/numpy-intro.ipynb
timothydmorton/usrp-sciprog
mit
A quick intro to matplotlib When writing complex algorithms, it is important to be able to check that calculations are done properly, but also to be able to display results in a clear manner. When dimensionality and size are small, it is still possible to rely on printing, but more generally and for better clarity, drawing graphs will come in handy.
import matplotlib.pyplot as plt %matplotlib inline x = np.linspace(0,5,100) #Plotting a curve plt.plot(np.exp(x)) plt.show() #The same curve with the right x-axis in red dashed line plt.plot(x, np.exp(x), '--r') plt.show() #The same curve with the right x-axis and only the points in the data as dots plt.plot(x[::4], np.exp(x[::4]), 'or') plt.show()
day2/numpy-intro.ipynb
timothydmorton/usrp-sciprog
mit
Data The data are measurements of the atmospheric CO2 concentration made at Mauna Loa, Hawaii (Keeling & Whorf 2004). Data can be found at http://scrippsco2.ucsd.edu/data/atmospheric_co2/primary_mlo_co2_record. We use the [statsmodels version](http://statsmodels.sourceforge.net/devel/datasets/generated/co2.html).
import numpy as np import matplotlib.pyplot as plt from statsmodels.datasets import co2 data = co2.load_pandas().data t = 2000 + (np.array(data.index.to_julian_date()) - 2451545.0) / 365.25 y = np.array(data.co2) m = np.isfinite(t) & np.isfinite(y) & (t < 1996) t, y = t[m][::4], y[m][::4] plt.plot(t, y, ".k") plt.xlim(t.min(), t.max()) plt.xlabel("year") _ = plt.ylabel("CO$_2$ in ppm") plt.savefig("gp-mauna-loa-data.pdf")
deprecated/gp_mauna_loa.ipynb
probml/pyprobml
mit
Kernel In this figure, you can see that there is a periodic (or quasi-periodic) signal with a year-long period superimposed on a long term trend. We will follow R&W and model these effects non-parametrically using a complicated covariance function. The covariance function that we'll use is:

$$k(r) = k_1(r) + k_2(r) + k_3(r) + k_4(r)$$

where

$$
\begin{eqnarray}
k_1(r) &=& \theta_0^2 \, \exp \left(-\frac{r^2}{2\,\theta_1^2} \right) \\
k_2(r) &=& \theta_2^2 \, \exp \left(-\frac{r^2}{2\,\theta_3^2} -\theta_5\,\sin^2\left( \frac{\pi\,r}{\theta_4}\right) \right) \\
k_3(r) &=& \theta_6^2 \, \left [ 1 + \frac{r^2}{2\,\theta_7^2\,\theta_8} \right ]^{-\theta_8} \\
k_4(r) &=& \theta_{9}^2 \, \exp \left(-\frac{r^2}{2\,\theta_{10}^2} \right) + \theta_{11}^2\,\delta_{ij}
\end{eqnarray}
$$

We can implement this kernel in tinygp as follows (we'll use the R&W results as the hyperparameters for now):
import jax import jax.numpy as jnp from tinygp import kernels, transforms, GaussianProcess def build_gp(theta, X): mean = theta[-1] # We want most of out parameters to be positive so we take the `exp` here # Note that we're using `jnp` instead of `np` theta = jnp.exp(theta[:-1]) # Construct the kernel by multiplying and adding `Kernel` objects k1 = theta[0] ** 2 * kernels.ExpSquared(theta[1]) k2 = theta[2] ** 2 * kernels.ExpSquared(theta[3]) * kernels.ExpSineSquared(period=theta[4], gamma=theta[5]) k3 = theta[6] ** 2 * kernels.RationalQuadratic(alpha=theta[7], scale=theta[8]) k4 = theta[9] ** 2 * kernels.ExpSquared(theta[10]) kernel = k1 + k2 + k3 + k4 return GaussianProcess(kernel, X, diag=theta[11] ** 2, mean=mean) def neg_log_likelihood(theta, X, y): gp = build_gp(theta, X) return -gp.condition(y)
deprecated/gp_mauna_loa.ipynb
probml/pyprobml
mit
Normalizing text
import string

def norm_words(words):
    # Python 3: str.translate takes a table built with str.maketrans
    # (the two-argument translate(None, chars) form was Python 2 only)
    return words.lower().translate(str.maketrans('', '', string.punctuation))

jeopardy["clean_question"] = jeopardy["Question"].apply(norm_words)
jeopardy["clean_answer"] = jeopardy["Answer"].apply(norm_words)

jeopardy.head()
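Note that translate(None, string.punctuation) is a Python 2 idiom; under Python 3 the same normalization needs a translation table from str.maketrans. A standalone check of the lowercase-and-strip-punctuation step:

```python
import string

def norm_words(text):
    # lowercase, then delete every punctuation character
    table = str.maketrans('', '', string.punctuation)
    return text.lower().translate(table)

print(norm_words("Hello, World!"))
```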
Probability and Statistics in Python/Guided Project - Winning Jeopardy.ipynb
foxan/dataquest
apache-2.0
Normalizing columns
def norm_value(value):
    try:
        # strip punctuation such as "$" and "," (e.g. "$1,000") before converting;
        # str.maketrans replaces the Python 2-only translate(None, chars) form
        value = int(value.translate(str.maketrans('', '', string.punctuation)))
    except Exception:
        value = 0
    return value

jeopardy["clean_value"] = jeopardy["Value"].apply(norm_value)
jeopardy["Air Date"] = pd.to_datetime(jeopardy["Air Date"])

print(jeopardy.dtypes)
jeopardy.head()
Probability and Statistics in Python/Guided Project - Winning Jeopardy.ipynb
foxan/dataquest
apache-2.0
Answers in questions
def ans_in_q(row):
    match_count = 0
    split_answer = row["clean_answer"].split(" ")
    split_question = row["clean_question"].split(" ")
    try:
        split_answer.remove("the")
    except ValueError:
        pass
    if len(split_answer) == 0:
        return 0
    else:
        for word in split_answer:
            if word in split_question:
                match_count += 1
        # true division (under Python 2 this was integer division)
        return match_count / len(split_answer)

jeopardy["answer_in_question"] = jeopardy.apply(ans_in_q, axis=1)

print(jeopardy["answer_in_question"].mean())

jeopardy[jeopardy["answer_in_question"] > 0].head()

jeopardy[(jeopardy["answer_in_question"] > 0) & (jeopardy["clean_question"].str.split().apply(len) > 6)].head()
Probability and Statistics in Python/Guided Project - Winning Jeopardy.ipynb
foxan/dataquest
apache-2.0
Only 0.6% of the answers appear in the question itself. Within that 0.6%, a sample of the questions shows that they are all multiple-choice questions, which suggests it is very unlikely that the answer will appear in the question itself. Recycled questions
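The per-row rate described above can be sanity-checked on plain strings with a small self-contained helper (hypothetical, mirroring the `ans_in_q` logic without a DataFrame):

```python
import string


def answer_in_question_rate(question, answer):
    # Mirror the notebook's ans_in_q logic: lowercase, strip punctuation,
    # drop "the" from the answer, then count answer words found in the question.
    table = str.maketrans("", "", string.punctuation)
    q_words = question.lower().translate(table).split()
    a_words = [w for w in answer.lower().translate(table).split() if w != "the"]
    if not a_words:
        return 0.0
    return sum(w in q_words for w in a_words) / len(a_words)


print(answer_in_question_rate("This river flows through Paris", "The Seine"))  # 0.0
print(answer_in_question_rate("Which river, the Seine, flows through Paris", "Seine"))  # 1.0
```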
jeopardy = jeopardy.sort_values(by="Air Date")
question_overlap = []
terms_used = set()

for index, row in jeopardy.iterrows():
    match_count = 0
    # Keep only words of six or more characters. Removing items from a list
    # while iterating over it skips elements, so filter with a comprehension.
    split_question = [w for w in row["clean_question"].split(" ") if len(w) >= 6]
    for word in split_question:
        if word in terms_used:
            match_count += 1
        terms_used.add(word)
    if len(split_question) > 0:
        match_count /= len(split_question)
    question_overlap.append(match_count)

jeopardy["question_overlap"] = question_overlap
print(jeopardy["question_overlap"].mean())
jeopardy.tail()
Probability and Statistics in Python/Guided Project - Winning Jeopardy.ipynb
foxan/dataquest
apache-2.0
Low value vs high value questions
def classify_value(row):
    # Questions worth more than $800 count as high value.
    if row["clean_value"] > 800:
        return 1
    return 0


jeopardy["high_value"] = jeopardy.apply(classify_value, axis=1)
jeopardy.head()
Probability and Statistics in Python/Guided Project - Winning Jeopardy.ipynb
foxan/dataquest
apache-2.0
The above is what the output should look like.
%%script 20170706_c_foo
void blank(char buf[LINES][COLUMNS], int row, int column)
{
    for ( ; row < LINES; row++) {
        for ( ; column < COLUMNS; column++)
            buf[row][column] = ' ';
        column = 0;
    }
}

%%script 20170706_c_foo
void blank_to_end_of_row(char buf[LINES][COLUMNS], int row, int column)
{
    for ( ; column < COLUMNS; column++)
        buf[row][column] = ' ';
}

void blank_row(char buf[LINES][COLUMNS], int row)
{
    blank_to_end_of_row(buf, row, 0);
}

void blank(char buf[LINES][COLUMNS], int row, int column)
{
    blank_to_end_of_row(buf, row++, column);
    for ( ; row < LINES; row++)
        blank_row(buf, row);
}

%%script 20170706_c_foo
void blank_to_end_of_row(char buf[LINES][COLUMNS], int row, int column)
{
    for ( ; column < COLUMNS; column++)
        buf[row][column] = ' ';
}

void blank(char buf[LINES][COLUMNS], int row, int column)
{
    blank_to_end_of_row(buf, row++, column);
    for ( ; row < LINES; row++)
        blank_to_end_of_row(buf, row, 0);
}
20170706-dojo-clear-to-end-of-table.ipynb
james-prior/cohpy
mit