In view of the graph, comment on (by modifying the text of this section) the following points: the minimum value of the reflectance and the wavelength at which it occurs. Does this wavelength have anything to do with the one selected in Task 2, i.e. with $\lambda_0$? Explain the relationship. The maximum value of the r...
# MODIFY THE PARAMETER. THEN RUN #######################################################################################################
angulo_incidente = 50  # Enter the angle of incidence (in degrees) at the air-monolayer interface
# DO NOT TOUCH BELOW HERE. ###################################################...
TratamientoAntirreflejante/Tratamiento_Antirreflejante_Ejercicio.ipynb
ecabreragranado/OpticaFisicaII
gpl-3.0
In view of the graph, comment on (by modifying the text of this section) the following points: For an angle of 30 degrees, give the value of the reflectance at $\lambda_0$. For that angle of incidence, what is the minimum value of the reflectance and the wavelength at which it occurs? As the angle of inc...
# MODIFY THE TWO PARAMETERS. THEN RUN ########################################################
espesor2 = 99*3  # Enter the value of the second smallest monolayer thickness (in nm)
espesor3 = 99*5  # Enter the value of the third smallest monolayer thickness (in nm)
# DO NOT TOUCH BELOW HERE. ########...
TratamientoAntirreflejante/Tratamiento_Antirreflejante_Ejercicio.ipynb
ecabreragranado/OpticaFisicaII
gpl-3.0
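The thicknesses asked for above are odd multiples of the smallest one; that pattern follows from the quarter-wave condition for a single antireflective layer, which gives minimum reflectance at $\lambda_0$ whenever the optical thickness is an odd number of quarter wavelengths. A minimal sketch, where the values of `n_layer` and `lambda0_nm` are illustrative assumptions (not necessarily those of the exercise):

```python
# Thicknesses d_m of a single antireflective layer giving minimum reflectance
# at a design wavelength lambda0: d_m = (2m + 1) * lambda0 / (4 * n_layer).
# n_layer = 1.38 (a typical MgF2-like index) and lambda0 = 550 nm are assumptions.
def antireflective_thicknesses(lambda0_nm, n_layer, m_max=3):
    """Return thicknesses d_m = (2m+1)*lambda0/(4*n) for m = 0..m_max, in nm."""
    return [(2 * m + 1) * lambda0_nm / (4 * n_layer) for m in range(m_max + 1)]

thicknesses = antireflective_thicknesses(lambda0_nm=550.0, n_layer=1.38)
print(thicknesses)  # smallest thickness first; successive minima are odd multiples of it
```

Note how the second and third smallest thicknesses are 3x and 5x the smallest one, matching the `99*3` and `99*5` pattern in the cell above.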
So the maximum weight of a truck crossing the bridges can only be 1000 units (kg, lbs, etc.). Now what about this example where we have two routes we can take? <img src="https://raw.githubusercontent.com/pbeens/ICS-Computer-Studies/master/Python/Class%20Demos/files/cscircles_two_roads.png" width="50%" height="50%"> In th...
a = 1000
b = 2340
c = 3246
d = 1400
e = 5000
Python/Class Demos/CS Circles 2 (Functions) - Bridges).ipynb
pbeens/ICS-Computer-Studies
mit
Then let's create two variables to represent the maximum weight for each of the two paths, which, don't forget, has to be the <b>minimum</b> of the limits of the bridges along that path.
path_1_limit = min(a, b, c)
path_2_limit = min(d, e)
print('The 1st path limit is', path_1_limit, 'and the 2nd path limit is', path_2_limit)
Python/Class Demos/CS Circles 2 (Functions) - Bridges).ipynb
pbeens/ICS-Computer-Studies
mit
The maximum weight limit would obviously be 1400 units, which is the <b>maximum</b> of our two values. Using the max() function we have:
print('The maximum weight that can be carried is', max(path_1_limit, path_2_limit), 'units.')
Python/Class Demos/CS Circles 2 (Functions) - Bridges).ipynb
pbeens/ICS-Computer-Studies
mit
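Putting the two steps above together, the whole calculation can be written as one small self-contained sketch: each path carries at most the weight of its weakest bridge (`min`), and the overall limit is the better of the two paths (`max`).

```python
# Bridge limits from the example above.
a, b, c = 1000, 2340, 3246  # path 1
d, e = 1400, 5000           # path 2

path_1_limit = min(a, b, c)              # weakest bridge on path 1
path_2_limit = min(d, e)                 # weakest bridge on path 2
overall_limit = max(path_1_limit, path_2_limit)  # best of the two routes
print(overall_limit)  # → 1400
```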
Policy Evaluation by Dynamic Programming For the MDP represented above we define the state transition probability matrix $\mathcal{P}^a_{ss'}=p(S_{t+1}=s'\mid S_{t}=s, A_t=a)$. In this MDP we assume that when we choose to move to state $s_i$, $i \in \{1,2,3\}$, we always end up in that state, meaning that $\mathcal{P}^a_{ss'}...
import numpy as np

policy = np.array([[0.3, 0.2, 0.5], [0.5, 0.4, 0.1], [0.8, 0.1, 0.1]])
print("This represents the policy with 3 states and 3 actions p(row=a|col=s):\n", np.matrix(policy))
# 'raw_rewards' variable contains rewards obtained after transition to each state
# In our example it doesn't depend on sourc...
labs/notebooks/reinforcement_learning/exercise_1_3_solutions.ipynb
LxMLS/lxmls-toolkit
mit
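The dynamic-programming evaluation described above repeatedly applies the Bellman expectation backup $V \leftarrow \mathcal{R}^{\pi} + \gamma \mathcal{P}^{\pi} V$ until it stops changing. A minimal sketch with illustrative numbers (the matrix and rewards below are assumptions, not the exercise's exact MDP; with deterministic transitions, $\mathcal{P}^{\pi}[s, s'] = \pi(a{=}s' \mid s)$):

```python
import numpy as np

# Illustrative state-transition matrix under the policy and expected
# per-state rewards (assumptions for demonstration only).
P_pi = np.array([[0.3, 0.5, 0.2],
                 [0.1, 0.6, 0.3],
                 [0.4, 0.4, 0.2]])
R_pi = np.array([1.0, -1.0, 2.0])
gamma = 0.1

# Iterative policy evaluation: repeat the Bellman expectation backup.
# With gamma < 1 this is a contraction, so it converges to the fixed point.
V = np.zeros(3)
for _ in range(100):
    V = R_pi + gamma * P_pi @ V

print(V)
```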
Policy Evaluation by Solving a Linear System The state-value function can also be obtained in closed form, since the Bellman expectation equation is linear in $V_{\pi}$ (as shown on page 15 of the lecture slides): $$ V_{\pi}(s)=\left(I-\gamma\mathcal{P}^{\pi}\right)^{-1}\mathcal{R}^{\pi} $$
state_value_function = np.matmul(np.linalg.inv(np.eye(3) - 0.1 * policy), rewards)
print('Solution by inversion:\nV={}'.format(state_value_function))
labs/notebooks/reinforcement_learning/exercise_1_3_solutions.ipynb
LxMLS/lxmls-toolkit
mit
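The closed-form solve above can be sketched self-containedly; `np.linalg.solve` is used instead of an explicit inverse because it is numerically more stable. The matrix and rewards are illustrative assumptions:

```python
import numpy as np

# Direct solution of the Bellman expectation equation:
#   V = R + gamma * P V   =>   V = (I - gamma * P)^(-1) R
P_pi = np.array([[0.3, 0.5, 0.2],
                 [0.1, 0.6, 0.3],
                 [0.4, 0.4, 0.2]])
R_pi = np.array([1.0, -1.0, 2.0])
gamma = 0.1

# solve() avoids forming the inverse explicitly.
V = np.linalg.solve(np.eye(3) - gamma * P_pi, R_pi)
print(V)
```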
The result stays the same. Policy Evaluation by Monte Carlo Sampling We can design yet another way of evaluating the value of a given policy $\pi$ (see lecture slides, page 20). The intuition is to incrementally estimate the expected return from sampled episodes, i.e. sequences of triplets $\{(s_i,a_i,r_{i})\}_{i=1}^N$. The function $\co...
import random
from collections import defaultdict

reward_counter = np.array([0., 0., 0.])
visit_counter = np.array([0., 0., 0.])
nIterations = 400

def gt(rewardlist, gamma=0.1):
    '''
    Function to calculate the total discounted reward
    >>> gt([10, 2, 3], gamma=0.1)
    10.23
    '''
    total_disc_return = 0
    ...
labs/notebooks/reinforcement_learning/exercise_1_3_solutions.ipynb
LxMLS/lxmls-toolkit
mit
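The discounted-return helper in the cell above can be written compactly; this one-liner matches the doctest `gt([10, 2, 3], gamma=0.1) == 10.23` (since $10 + 0.1 \cdot 2 + 0.01 \cdot 3 = 10.23$):

```python
def gt(rewardlist, gamma=0.1):
    """Total discounted return of one episode: sum_i gamma**i * r_i."""
    return sum(gamma ** i * r for i, r in enumerate(rewardlist))

print(gt([10, 2, 3], gamma=0.1))  # 10 + 0.1*2 + 0.01*3 = 10.23
```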
As can be seen, the result is nearly the same as the state-value function calculated above. So far we have seen different ways of computing the value $V_{\pi}(s)$ of a given, known policy $\pi(a\mid s)$. Next, we wish to find the optimal policy $\pi^\ast(s)$ for the MDP in the example. Policy Optimization by Q-Learning ...
q_table = np.zeros((3, 3))  # state-action value function (Q-table)
gamma = 0.1
alpha = 1.0
eps = 0.1

def get_eps_greedy_action(state):
    if random.uniform(0, 1) < eps:
        return random.randint(0, 2)
    return np.argmax(q_table[state]).item()

for i in range(1001):
    state = random.randint(0, 2)
    action = get...
labs/notebooks/reinforcement_learning/exercise_1_3_solutions.ipynb
LxMLS/lxmls-toolkit
mit
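A self-contained version of the epsilon-greedy Q-learning loop can be sketched as below. It mirrors the deterministic-transition assumption of the text (choosing action $a$ always lands in state $a$), but the rewards, seed, and hyperparameters are illustrative assumptions:

```python
import random
import numpy as np

# Minimal tabular Q-learning on an illustrative 3-state / 3-action MDP.
raw_rewards = np.array([1.0, -1.0, 2.0])  # reward received on entering each state (assumption)
q_table = np.zeros((3, 3))
gamma, alpha, eps = 0.1, 0.5, 0.1
rng = random.Random(0)  # seeded for reproducibility

def get_eps_greedy_action(state):
    # With probability eps explore uniformly, otherwise exploit the Q-table.
    if rng.uniform(0, 1) < eps:
        return rng.randint(0, 2)
    return int(np.argmax(q_table[state]))

for _ in range(5000):
    state = rng.randint(0, 2)
    action = get_eps_greedy_action(state)
    next_state = action                   # deterministic transition: a -> s' = a
    reward = raw_rewards[next_state]
    # Q-learning backup: move Q(s, a) toward r + gamma * max_a' Q(s', a')
    target = reward + gamma * np.max(q_table[next_state])
    q_table[state, action] += alpha * (target - q_table[state, action])

print(np.argmax(q_table, axis=1))  # greedy policy: best action in every state
```

Because the environment is deterministic, the Q-values converge exactly to $Q(s,a) = r(a) + \gamma \max_b Q(a,b)$, and the greedy policy always heads for the highest-reward state.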
Value Iteration
import numpy as np

raw_rewards = np.array([1.5, -1.833333333, 19.833333333])
gamma = 0.1
state_value_function = np.zeros(3)
print('V_{} = {}'.format(0, state_value_function))
for i in range(1000):
    for s in range(3):
        Q_s = [raw_rewards[s_next] + gamma * state_value_function[s_next] for s_nex...
labs/notebooks/reinforcement_learning/exercise_1_3_solutions.ipynb
LxMLS/lxmls-toolkit
mit
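A complete, runnable version of the value-iteration loop above can be sketched as follows, using the same `raw_rewards` and the deterministic-transition assumption (action $a$ always lands in state $a$); the convergence check is an addition of this sketch:

```python
import numpy as np

raw_rewards = np.array([1.5, -1.833333333, 19.833333333])
gamma = 0.1

V = np.zeros(3)
for _ in range(1000):
    V_new = np.empty(3)
    for s in range(3):
        # Q(s, a) = r(s') + gamma * V(s') with s' = a (deterministic move)
        Q_s = [raw_rewards[s_next] + gamma * V[s_next] for s_next in range(3)]
        V_new[s] = max(Q_s)
    if np.allclose(V_new, V):
        break
    V = V_new

print(V)  # here V* is the same in every state, since s' can be chosen freely
```

Since the best reachable state is always available, the fixed point satisfies $V^\ast = \max_{s'} [r(s') + \gamma V^\ast]$, i.e. $V^\ast = 19.8\overline{3}/0.9 \approx 22.04$ in every state.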
Preparing dataset:
TRAIN = "train/" TEST = "test/" # Load "X" (the neural network's training and testing inputs) def load_X(X_signals_paths): X_signals = [] for signal_type_path in X_signals_paths: file = open(signal_type_path, 'rb') # Read dataset from disk, dealing with text files' syntax X_signa...
LSTM.ipynb
KennyCandy/HAR
mit
Additional Parameters: Here are some core parameter definitions for the training. The whole neural network's structure can be summarised by enumerating these parameters, together with the fact that an LSTM is used.
# Input Data training_data_count = len(X_train) # 7352 training series (with 50% overlap between each serie) test_data_count = len(X_test) # 2947 testing series n_steps = len(X_train[0]) # 128 timesteps per series n_input = len(X_train[0][0]) # 9 input parameters per timestep # LSTM Neural Network's internal s...
LSTM.ipynb
KennyCandy/HAR
mit
Utility functions for training:
def LSTM_RNN(_X, _weights, _biases): # Function returns a tensorflow LSTM (RNN) artificial neural network from given parameters. # Moreover, two LSTM cells are stacked which adds deepness to the neural network. # Note, some code of this notebook is inspired from an slightly different # RNN architectu...
LSTM.ipynb
KennyCandy/HAR
mit
Let's get serious and build the neural network:
# Graph input/output x = tf.placeholder(tf.float32, [None, n_steps, n_input]) y = tf.placeholder(tf.float32, [None, n_classes]) # Graph weights weights = { 'hidden': tf.Variable(tf.random_normal([n_input, n_hidden])), # Hidden layer weights 'out': tf.Variable(tf.random_normal([n_hidden, n_classes], mean=1.0))...
LSTM.ipynb
KennyCandy/HAR
mit
Hooray, now train the neural network:
# To keep track of training's performance test_losses = [] test_accuracies = [] train_losses = [] train_accuracies = [] # Launch the graph sess = tf.InteractiveSession(config=tf.ConfigProto(log_device_placement=True)) init = tf.initialize_all_variables() sess.run(init) # Perform Training steps with "batch_size" amoun...
LSTM.ipynb
KennyCandy/HAR
mit
Training is good, but having visual insight is even better: Okay, let's plot this simply in the notebook for now.
# (Inline plots: ) %matplotlib inline font = { 'family' : 'Bitstream Vera Sans', 'weight' : 'bold', 'size' : 18 } matplotlib.rc('font', **font) width = 12 height = 12 plt.figure(figsize=(width, height)) indep_train_axis = np.array(range(batch_size, (len(train_losses)+1)*batch_size, batch_size)) plt.plo...
LSTM.ipynb
KennyCandy/HAR
mit
And finally, the multi-class confusion matrix and metrics!
# Results
predictions = one_hot_predictions.argmax(1)
print "Testing Accuracy: {}%".format(100*accuracy)
print ""
print "Precision: {}%".format(100*metrics.precision_score(y_test, predictions, average="weighted"))
print "Recall: {}%".format(100*metrics.recall_score(y_test, predictions, average="weighted"))
print "f1...
LSTM.ipynb
KennyCandy/HAR
mit
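The "weighted" precision and recall printed above are just per-class metrics averaged with class-support weights. A numpy-only sketch of that arithmetic, on a made-up 3-class example (the labels below are assumptions for demonstration):

```python
import numpy as np

# Toy labels, just to show the arithmetic behind "weighted" averaging.
y_true = np.array([0, 0, 1, 1, 2, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 2, 0])

n_classes = 3
cm = np.zeros((n_classes, n_classes), dtype=int)
for t, p in zip(y_true, y_pred):
    cm[t, p] += 1  # rows: true class, cols: predicted class

support = cm.sum(axis=1)                      # samples per true class
precision = np.diag(cm) / cm.sum(axis=0)      # per-class precision
recall = np.diag(cm) / cm.sum(axis=1)         # per-class recall

# "weighted" average: per-class metric weighted by class support
weighted_precision = np.sum(precision * support) / support.sum()
weighted_recall = np.sum(recall * support) / support.sum()
print(weighted_precision, weighted_recall)
```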
Conclusion Outstandingly, the accuracy is 91%! This means that the neural network is almost always able to correctly identify the movement type! Remember, the phone is attached at the waist and each series to classify is just a 128-sample window from two internal sensors (i.e. 2.56 seconds at 50 Hz), so those pr...
# Let's convert this notebook to a README as the GitHub project's title page:
!jupyter nbconvert --to markdown LSTM.ipynb
!mv LSTM.md README.md
LSTM.ipynb
KennyCandy/HAR
mit
Load data HJCFIT depends on the DCPROGS/DCPYPS modules for data input and for setting up the kinetic mechanism:
from dcpyps.samples import samples
from dcpyps import dataset, mechanism, dcplots

fname = "CH82.scn"  # binary SCN file containing simulated idealised single-channel open/shut intervals
tr = 1e-4  # temporal resolution to be imposed on the record
tc = 4e-3  # critical time interval to cut the record into bursts
conc = 10...
exploration/Example_MLL_Fit_AChR_1patch.ipynb
jenshnielsen/HJCFIT
gpl-3.0
Initialise Single-Channel Record from dcpyps. Note that SCRecord takes a list of file names; several SCN files from the same patch can be loaded.
# Initialise SCRecord instance.
rec = dataset.SCRecord([fname], conc, tres=tr, tcrit=tc)
rec.printout()
exploration/Example_MLL_Fit_AChR_1patch.ipynb
jenshnielsen/HJCFIT
gpl-3.0
Plot dwell-time histograms for inspection. In the single-channel analysis field it is common to plot these histograms with the x-axis in log scale and the y-axis in square-root scale. After this transformation an exponential pdf takes a bell-shaped form.
fig, ax = plt.subplots(1, 2, figsize=(12, 5))
dcplots.xlog_hist_data(ax[0], rec.opint, rec.tres, shut=False)
dcplots.xlog_hist_data(ax[1], rec.shint, rec.tres)
fig.tight_layout()
exploration/Example_MLL_Fit_AChR_1patch.ipynb
jenshnielsen/HJCFIT
gpl-3.0
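The bell shape mentioned above can be verified numerically: if $t \sim \mathrm{Exp}(\lambda)$, the density of $x = \log_{10} t$ is $f(x) = \ln(10)\,10^{x}\,\lambda\,e^{-\lambda 10^{x}}$, which peaks at $t = 1/\lambda$, the time constant. A small sketch with an illustrative rate constant (an assumption):

```python
import numpy as np

lam = 100.0  # illustrative rate constant (1/s), not from the data above
x = np.linspace(-5, 1, 20001)  # x = log10(t)

# Density of x = log10(t) when t is exponentially distributed with rate lam.
f = np.log(10) * 10**x * lam * np.exp(-lam * 10**x)

x_peak = x[np.argmax(f)]
print(10**x_peak)  # peak sits at t = 1/lam, i.e. the time constant
```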
Load demo mechanism (C&H82 numerical example)
mec = samples.CH82() mec.printout() # PREPARE RATE CONSTANTS. # Fixed rates mec.Rates[7].fixed = True # Constrained rates mec.Rates[5].is_constrained = True mec.Rates[5].constrain_func = mechanism.constrain_rate_multiple mec.Rates[5].constrain_args = [4, 2] mec.Rates[6].is_constrained = True mec.Rates[6].constrain_fun...
exploration/Example_MLL_Fit_AChR_1patch.ipynb
jenshnielsen/HJCFIT
gpl-3.0
Prepare likelihood function
def dcprogslik(x, lik, m, c): m.theta_unsqueeze(np.exp(x)) l = 0 for i in range(len(c)): m.set_eff('c', c[i]) l += lik[i](m.Q) return -l * math.log(10) # Import HJCFIT likelihood function from dcprogs.likelihood import Log10Likelihood # Get bursts from the record bursts = rec.bursts.in...
exploration/Example_MLL_Fit_AChR_1patch.ipynb
jenshnielsen/HJCFIT
gpl-3.0
Run optimisation
from scipy.optimize import minimize print ("\nScyPy.minimize (Nelder-Mead) Fitting started: " + "%4d/%02d/%02d %02d:%02d:%02d"%time.localtime()[0:6]) start = time.clock() start_wall = time.time() result = minimize(dcprogslik, np.log(theta), args=([likelihood], mec, [conc]), method='Nelder-Mead') t...
exploration/Example_MLL_Fit_AChR_1patch.ipynb
jenshnielsen/HJCFIT
gpl-3.0
Plot experimental histograms and predicted pdfs
from dcprogs.likelihood import QMatrix
from dcprogs.likelihood import missed_events_pdf, ideal_pdf, IdealG, eig

qmatrix = QMatrix(mec.Q, 2)
idealG = IdealG(qmatrix)
exploration/Example_MLL_Fit_AChR_1patch.ipynb
jenshnielsen/HJCFIT
gpl-3.0
Note that to properly overlay the ideal and missed-event-corrected pdfs, the ideal pdf has to be scaled (we need to renormalise to 1 the area under the pdf from $\tau_{res}$ onwards).
# Scale for ideal pdf def scalefac(tres, matrix, phiA): eigs, M = eig(-matrix) N = inv(M) k = N.shape[0] A = np.zeros((k, k, k)) for i in range(k): A[i] = np.dot(M[:, i].reshape(k, 1), N[i].reshape(1, k)) w = np.zeros(k) for i in range(k): w[i] = np.dot(np.dot(np.dot(phiA, A[...
exploration/Example_MLL_Fit_AChR_1patch.ipynb
jenshnielsen/HJCFIT
gpl-3.0
1. Read in the groundtrack data
lats, lons, date_times, prof_times, dem_elevation = get_geo(lidar_file)
notebooks/ground_track.ipynb
a301-teaching/a301_code
mit
2. Use the MODIS corner lats and lons to clip the CloudSat lats and lons to the same region
from a301utils.modismeta_read import parseMeta metadict=parseMeta(rad_file) corner_keys = ['min_lon','max_lon','min_lat','max_lat'] min_lon,max_lon,min_lat,max_lat=[metadict[key] for key in corner_keys]
notebooks/ground_track.ipynb
a301-teaching/a301_code
mit
Find all the CloudSat points that are between the min/max by constructing a logical True/False vector. As in MATLAB, this vector can be used as an index to pick out those points at the indices where it evaluates to True. Also as in MATLAB, if a logical vector is passed to a numpy function like sum, the True values ar...
lon_hit = np.logical_and(lons > min_lon, lons < max_lon)
lat_hit = np.logical_and(lats > min_lat, lats < max_lat)
in_box = np.logical_and(lon_hit, lat_hit)
print("ground track has {} points, we've selected {}".format(len(lon_hit), np.sum(in_box)))
box_lons, box_lats = lons[in_box], lats[in_box]
notebooks/ground_track.ipynb
a301-teaching/a301_code
mit
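A small self-contained version of the box clipping above illustrates both uses of a boolean vector: as a fancy index (keeping points where it is True) and as numbers (True counts as 1 when summed). The coordinates are made-up values:

```python
import numpy as np

# Made-up ground-track coordinates for illustration.
lons = np.array([-130.0, -125.0, -122.0, -118.0, -110.0])
lats = np.array([45.0, 48.0, 50.0, 52.0, 60.0])
min_lon, max_lon = -126.0, -115.0
min_lat, max_lat = 46.0, 55.0

lon_hit = np.logical_and(lons > min_lon, lons < max_lon)
lat_hit = np.logical_and(lats > min_lat, lats < max_lat)
in_box = np.logical_and(lon_hit, lat_hit)

print(np.sum(in_box))   # True counts as 1: number of selected points
print(lons[in_box])     # boolean mask used as an index
```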
3. Reproject MYD021KM channel 1 to a Lambert azimuthal projection If we are on OSX we can run the a301utils.modis_to_h5 script to turn the h5 level 1b files into a pyresample-projected file for channel 1, running python via the os.system command. If we are on Windows, a301utils.modis_to_h5 needs to be run in the pyr...
from a301lib.modis_reproject import make_projectname reproject_name=make_projectname(rad_file) reproject_path = Path(reproject_name) if reproject_path.exists(): print('using reprojected h5 file {}'.format(reproject_name)) else: #need to create reproject.h5 for channel 1 channels='-c 1 4 3 31' template=...
notebooks/ground_track.ipynb
a301-teaching/a301_code
mit
Read in chan1 and the basemap argument string, turning the string into a dictionary of basemap arguments using json.loads
with h5py.File(reproject_name, 'r') as h5_file:
    basemap_args = json.loads(h5_file.attrs['basemap_args'])
    chan1 = h5_file['channels']['1'][...]
    geo_string = h5_file.attrs['geotiff_args']
    geotiff_args = json.loads(geo_string)
print('basemap_args: \n{}\n'.format(basemap_args))
print('geotiff_args: \n{}\n'.form...
notebooks/ground_track.ipynb
a301-teaching/a301_code
mit
Write the groundtrack out for future use
groundtrack_name = reproject_name.replace('reproject', 'groundtrack')
print('writing groundtrack to {}'.format(groundtrack_name))
box_times = date_times[in_box]
#
# h5 files can't store dates, but they can store floating point
# seconds since 1970, which is called POSIX timestamp
#
timestamps = [item.timestamp() for item ...
notebooks/ground_track.ipynb
a301-teaching/a301_code
mit
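The POSIX-timestamp trick noted in the comments above round-trips a datetime losslessly (to microsecond precision) through a plain float, which h5 can store. A minimal sketch using only the standard library:

```python
from datetime import datetime, timezone

# A datetime can't go into an h5 file, but its float POSIX timestamp can.
dt = datetime(2016, 6, 1, 12, 30, 0, tzinfo=timezone.utc)
stamp = dt.timestamp()                             # float seconds since 1970-01-01 UTC
restored = datetime.fromtimestamp(stamp, tz=timezone.utc)
print(stamp, restored == dt)  # the round trip recovers the original datetime
```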
Now we set some key variables for the simulation: $\theta$ is the contact angle in each phase, and without contact hysteresis the two angles sum to 180. The fiber radius is 5 $\mu m$ for this particular material, and this is used in the pore-scale capillary pressure models.
theta_w = 110
theta_a = 70
fiber_rad = 5e-6
examples/paper_recreations/Tranter et al. (2017)/Tranter et al. (2017) - Part A.ipynb
TomTranter/OpenPNM
mit
Experimental Data The experimental data we are matching is taken from the 2009 paper for uncompressed Toray 090D, which has had some treatment with PTFE to make it non-wetting to water. However, the material also seems to be non-wetting to air once filled with water, as reducing the pressure once invaded with water does...
data = np.array([[-1.95351934e+04, 0.00000000e+00], [-1.79098945e+04, 1.43308300e-03], [-1.63107500e+04, 1.19626000e-03], [-1.45700654e+04, 9.59437000e-04], [-1.30020859e+04, 7.22614000e-04], [-1.14239746e+04, 4.85791000e-04], [-9.90715234e+03, 2.48968000e-04], [-8.45271973e+03,...
examples/paper_recreations/Tranter et al. (2017)/Tranter et al. (2017) - Part A.ipynb
TomTranter/OpenPNM
mit
New Geometric Parameters The following code block cleans up the data a bit. The conduit_lengths are a new addition to OpenPNM, making it possible to apply different conductances along the length of the conduit for each section. Conduits in OpenPNM are considered to comprise a throat and the two half-pores on either side, and...
net_health = pn.check_network_health() if len(net_health['trim_pores']) > 0: op.topotools.trim(network=pn, pores=net_health['trim_pores']) Ps = pn.pores() Ts = pn.throats() geom = op.geometry.GenericGeometry(network=pn, pores=Ps, throats=Ts, name='geometry') geom['throat.conduit_lengths.pore1'] = 1e-12 geom['throa...
examples/paper_recreations/Tranter et al. (2017)/Tranter et al. (2017) - Part A.ipynb
TomTranter/OpenPNM
mit
Phase Setup Now we set up the phases and apply the contact angles.
air = op.phases.Air(network=pn, name='air')
water = op.phases.Water(network=pn, name='water')
air['pore.contact_angle'] = theta_a
air["pore.surface_tension"] = water["pore.surface_tension"]
water['pore.contact_angle'] = theta_w
water["pore.temperature"] = 293.7
water.regenerate_models()
examples/paper_recreations/Tranter et al. (2017)/Tranter et al. (2017) - Part A.ipynb
TomTranter/OpenPNM
mit
Physics Setup Now we set up the physics for each phase. The default capillary pressure model from the Standard physics class is the Washburn model which applies to straight capillary tubes and we must override it here with the Purcell model. We add the model to both phases and also add a value for pore.entry_pressure m...
phys_air = op.physics.Standard(network=pn, phase=air, geometry=geom, name='phys_air') phys_water = op.physics.Standard(network=pn, phase=water, geometry=geom, name='phys_water') throat_diam = 'throat.diameter' pore_diam = 'pore.indiameter' pmod = pm.capillary_pressure.purcell phys_water.add_model(propname='throat.entr...
examples/paper_recreations/Tranter et al. (2017)/Tranter et al. (2017) - Part A.ipynb
TomTranter/OpenPNM
mit
We apply the following late pore filling model: $ S_{res} = S_{wp}^\ast\left(\frac{P_c^\ast}{P_c}\right)^{\eta}$ This is a heuristic model that adjusts the phase occupancy inside an individual pore after it has been invaded, and reproduces the gradual expansion of the phases into smaller sub-pore-scale features such as cracks and fiber i...
lpf = 'pore.late_filling' phys_water.add_model(propname='pore.pc_star', model=op.models.misc.from_neighbor_throats, throat_prop='throat.entry_pressure', mode='min') phys_water.add_model(propname=lpf, model=pm.multiphase.late_filling, ...
examples/paper_recreations/Tranter et al. (2017)/Tranter et al. (2017) - Part A.ipynb
TomTranter/OpenPNM
mit
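The heuristic relation above can be evaluated numerically on its own; a minimal sketch, where the parameter values ($P_c^\ast$, $S_{wp}^\ast$, $\eta$) are illustrative assumptions rather than fitted values, and the clip to $S_{wp}^\ast$ (so the filling fraction never exceeds its value at $P_c = P_c^\ast$) is an assumption of this sketch:

```python
import numpy as np

def late_pore_filling(Pc, Pc_star, Swp_star, eta):
    """S = Swp_star * (Pc_star / Pc)**eta, capped at Swp_star (assumed behavior)."""
    Pc = np.asarray(Pc, dtype=float)
    S = Swp_star * (Pc_star / Pc) ** eta
    return np.clip(S, 0.0, Swp_star)

Pc = np.array([5000.0, 10000.0, 20000.0])
print(late_pore_filling(Pc, Pc_star=5000.0, Swp_star=0.25, eta=2.5))
# At Pc = Pc_star the residual saturation equals Swp_star; it decays as Pc grows.
```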
Finally we add the meniscus model for cooperative pore filling. The model mechanics are explained in greater detail in part c of this tutorial but the process is shown in the animation below. The brown fibrous cage structure represents the fibers surrounding and defining a single pore in the network. The shrinking sphe...
phys_air.add_model(propname='throat.meniscus',
                   model=op.models.physics.meniscus.purcell,
                   mode='men',
                   r_toroid=fiber_rad,
                   target_Pc=5000)
examples/paper_recreations/Tranter et al. (2017)/Tranter et al. (2017) - Part A.ipynb
TomTranter/OpenPNM
mit
Percolation Algorithms Now that all the physics is defined, we can set up and run two algorithms, for water injection and withdrawal, and compare to the experimental data. NOTE: THIS NEXT STEP MIGHT TAKE SEVERAL MINUTES.
#NBVAL_IGNORE_OUTPUT inv_points = np.arange(-15000, 15100, 10) IP_injection = op.algorithms.MixedInvasionPercolation(network=pn, name='injection') IP_injection.setup(phase=water) IP_injection.set_inlets(pores=pn.pores('bottom_boundary')) IP_injection.settings['late_pore_filling'] = 'pore.late_filling' IP_injection.run...
examples/paper_recreations/Tranter et al. (2017)/Tranter et al. (2017) - Part A.ipynb
TomTranter/OpenPNM
mit
Let's take a look at the data plotted in the above cell:
print(f"Injection - capillary pressure (Pa):\n {injection_data.Pcap}")
print(f"Injection - Saturation:\n {injection_data.S_tot}")
print(f"Withdrawal - capillary pressure (Pa):\n {-withdrawal_data.Pcap}")
print(f"Withdrawal - Saturation:\n {1-withdrawal_data.S_tot}")
examples/paper_recreations/Tranter et al. (2017)/Tranter et al. (2017) - Part A.ipynb
TomTranter/OpenPNM
mit
Saving the output OpenPNM manages simulation projects with the Workspace manager class, which is a singleton instantiated when OpenPNM is first imported. We can print it to take a look at the contents
#NBVAL_IGNORE_OUTPUT
print(ws)
examples/paper_recreations/Tranter et al. (2017)/Tranter et al. (2017) - Part A.ipynb
TomTranter/OpenPNM
mit
The project is saved for part b of this tutorial
ws.save_project(prj, '../../fixtures/hysteresis_paper_project')
examples/paper_recreations/Tranter et al. (2017)/Tranter et al. (2017) - Part A.ipynb
TomTranter/OpenPNM
mit
Create a RnR Cluster
# Either re-use an existing Solr cluster id by overriding the value below, or leave it as None to create a new cluster
cluster_id = None

# If you choose to leave it as None, these details will be used to request a new cluster
cluster_name = 'Test Cluster'
cluster_size = '2'
bluemix_wrapper = RetrieveAndRankProxy(solr_cluster_id...
examples/1.0 - Create RnR Cluster & Train Ranker.ipynb
rchaks/retrieve-and-rank-tuning
apache-2.0
Create a Solr collection Here we create a Solr document collection in the previously created cluster and upload the InsuranceLibV2 documents (i.e. answers) to the collection.
collection_id = 'TestCollection'
config_id = 'TestConfig'
zipped_solr_config = path.join(insurance_lib_data_dir, 'config.zip')
bluemix_wrapper.setup_cluster_and_collection(collection_id=collection_id,
                                            config_id=config_id,
                                            config_zip=zipped_solr_config)
examples/1.0 - Create RnR Cluster & Train Ranker.ipynb
rchaks/retrieve-and-rank-tuning
apache-2.0
Upload documents The InsuranceLibV2 had to be pre-processed and formatted into the Solr format for adding documents. TODO: show the scripts for how to do this conversion to solr format from the raw data provided at https://github.com/shuzi/insuranceQA.
documents = path.join(insurance_lib_data_dir, 'document_corpus.solr.xml')
print('Uploading from: %s' % documents)
bluemix_wrapper.upload_documents_to_collection(collection_id=collection_id,
                                                corpus_file=documents,
                                                content_type='application/xml')
print('Uploaded %d documen...
examples/1.0 - Create RnR Cluster & Train Ranker.ipynb
rchaks/retrieve-and-rank-tuning
apache-2.0
Train a Ranker Since we already have the annotated queries with the document ids that are relevant in this case, we can use that to train a ranker. TODO: show the scripts for how to do this conversion to the relevance file format from the raw data provided at https://github.com/shuzi/insuranceQA. Generate a feature fil...
collection_id = 'TestCollection' cluster_id = 'sc40bbecbd_362a_4388_b61b_e3a90578d3b3' temporary_output_dir = mkdtemp() feature_file = path.join(temporary_output_dir, 'ranker_feature_file.csv') print('Saving file to: %s' % feature_file) num_rows = 50 with smart_file_open(path.join(insurance_lib_data_dir, 'validation_g...
examples/1.0 - Create RnR Cluster & Train Ranker.ipynb
rchaks/retrieve-and-rank-tuning
apache-2.0
Call Train with the Feature File WARNING: Each RnR account allows 8 rankers to be active at any given time. Since I experiment a lot, I have a convenience flag to delete rankers in case the quota is full. You obviously want to switch this flag off if you have rankers you don't want deleted.
ranker_api_wrapper = RankerProxy(config=config) ranker_name = 'TestRanker' ranker_id = ranker_api_wrapper.train_ranker(train_file_location=feature_file, train_file_has_answer_id=True, is_enabled_make_space=True, ranker_name=ranker_name) ranker_api_wrapper.wait_for_training_to...
examples/1.0 - Create RnR Cluster & Train Ranker.ipynb
rchaks/retrieve-and-rank-tuning
apache-2.0
Query the cluster with questions
query_string = 'can i add my brother to my health insurance ' def print_results(response, num_to_print=3): results = json.loads(response)['response']['docs'] for i, doc in enumerate(results[0:num_to_print]): print('Result {}:\n\tid: {}\n\tbody:{}...'.format(i+1,doc['id'], " ".join(doc['body'])[...
examples/1.0 - Create RnR Cluster & Train Ranker.ipynb
rchaks/retrieve-and-rank-tuning
apache-2.0
Get the Data
fertility_df = pd.read_csv('data/fertility.csv', index_col='Country')
life_expectancy_df = pd.read_csv('data/life_expectancy.csv', index_col='Country')
population_df = pd.read_csv('data/population.csv', index_col='Country')
regions_df = pd.read_csv('data/regions.csv', index_col='Country')
old/slider_example/Bubble plot.ipynb
birdsarah/bokeh-miscellany
gpl-2.0
Make the column names ints, not strings, for easier handling
columns = list(fertility_df.columns)
years = list(range(int(columns[0]), int(columns[-1]) + 1))  # +1 so the final year is included
rename_dict = dict(zip(columns, years))
fertility_df = fertility_df.rename(columns=rename_dict)
life_expectancy_df = life_expectancy_df.rename(columns=rename_dict)
population_df = population_df.rename(columns=rename_dict)
region...
old/slider_example/Bubble plot.ipynb
birdsarah/bokeh-miscellany
gpl-2.0
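The string-to-int rename can be sketched with plain Python on made-up columns; it also shows why the upper bound of the range needs `+ 1` (otherwise `zip` silently drops the last column from the map):

```python
# Made-up year columns, mimicking the '1964'...'2013' headers in the CSVs.
columns = [str(y) for y in range(1964, 2014)]
years = list(range(int(columns[0]), int(columns[-1]) + 1))  # inclusive upper bound
rename_dict = dict(zip(columns, years))

print(len(rename_dict), rename_dict['2013'])  # every column mapped, including the last
```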
Turn population into bubble sizes. Use min_size and scale_factor to tweak.
scale_factor = 200
population_df_size = np.sqrt(population_df / np.pi) / scale_factor
min_size = 3
population_df_size = population_df_size.where(population_df_size >= min_size).fillna(min_size)
old/slider_example/Bubble plot.ipynb
birdsarah/bokeh-miscellany
gpl-2.0
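The sizing rule above makes bubble *area* proportional to population (radius goes as the square root), then clips from below so tiny countries stay visible. A numpy-only sketch on made-up populations:

```python
import numpy as np

# Made-up populations: a small, a mid-size, and a very large country.
population = np.array([1e6, 5e7, 1.3e9])

scale_factor = 200
size = np.sqrt(population / np.pi) / scale_factor  # area-true radius, scaled down
min_size = 3
size = np.where(size >= min_size, size, min_size)  # floor so small bubbles stay visible
print(size)  # the smallest country hits the floor of 3
```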
Use pandas categories and categorize & color the regions
regions_df.Group = regions_df.Group.astype('category')
regions = list(regions_df.Group.cat.categories)

def get_color(r):
    return Spectral6[regions.index(r.Group)]

regions_df['region_color'] = regions_df.apply(get_color, axis=1)
list(zip(regions, Spectral6))
old/slider_example/Bubble plot.ipynb
birdsarah/bokeh-miscellany
gpl-2.0
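The category-to-color idea above boils down to a dictionary from sorted group names to palette entries. A pandas-free sketch (the `Spectral6` hex values below are a stand-in list, and the group names are made up):

```python
# Stand-in palette and made-up region groups (assumptions for illustration).
Spectral6 = ['#3288bd', '#99d594', '#e6f598', '#fee08b', '#fc8d59', '#d53e4f']
groups = ['Africa', 'America', 'Asia', 'Europe']  # sorted categories -> stable color order
color_map = dict(zip(groups, Spectral6))

rows = ['Asia', 'Africa', 'Asia', 'Europe']  # one group per data row
region_color = [color_map[g] for g in rows]
print(region_color)
```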
Build the plot Setting up the data The plot animates with the slider showing the data over time from 1964 to 2013. We can think of each year as a separate static plot, and when the slider moves, we use the Callback to change the data source that is driving the plot. We could use bokeh-server to drive this change, but a...
sources = {} region_color = regions_df['region_color'] region_color.name = 'region_color' for year in years: fertility = fertility_df[year] fertility.name = 'fertility' life = life_expectancy_df[year] life.name = 'life' population = population_df_size[year] population.name = 'population' ...
old/slider_example/Bubble plot.ipynb
birdsarah/bokeh-miscellany
gpl-2.0
Add the slider and callback Last, but not least, we add the slider widget and the JS callback code which changes the data of the renderer_source (powering the bubbles / circles) and the data of the text_source (powering background text). After we've set() the data we need to trigger() a change. slider, renderer_source,...
# Add the slider code = """ var year = slider.get('value'), sources = %s, new_source_data = sources[year].get('data'); renderer_source.set('data', new_source_data); renderer_source.trigger('change'); text_source.set('data', {'year': [String(year)]}); text_source.trigger('change'); ""...
old/slider_example/Bubble plot.ipynb
birdsarah/bokeh-miscellany
gpl-2.0
Embed in a template and render Last but not least, we use vplot to stick together the chart and the slider. And we embed that in a template we write, using the script, div output from components. We display it in IPython and save it as an html file.
# Stick the plot and the slider together layout = vplot(plot, hplot(slider)) with open('gapminder_template_simple.html', 'r') as f: template = Template(f.read()) script, div = components(layout) html = template.render( title="Bokeh - Gapminder demo", plot_script=script, plot_div=div, ) with open(...
old/slider_example/Bubble plot.ipynb
birdsarah/bokeh-miscellany
gpl-2.0
Good. Here's the code that is being run, inside the "XrayData" class:
```python
def evaluate_log_prior(self):
    # Uniform in all parameters...
    return 0.0  # HACK

def evaluate_log_likelihood(self):
    self.make_mean_image()
    # Return un-normalized Poisson sampling distribution:
    # log (\mu^N e^{...
npix = 15
# Initial guess at the interesting range of cluster position parameters:
xmin, xmax = 310, 350
ymin, ymax = 310, 350
# Refinement, found by fiddling around a bit:
# xmin,xmax = 327.7,328.3
# ymin,ymax = 346.4,347.0
x0grid = np.linspace(xmin, xmax, npix)
y0grid = np.linspace(ymin, ymax, npix)
logprob = np.zeros([np...
examples/XrayImage/Inference.ipynb
wmorning/StatisticalMethods
gpl-2.0
Implement Preprocessing Functions The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below: - Lookup Table - Tokenize Punctuation Lookup Table To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries: - Dict...
import numpy as np import problem_unittests as tests def create_lookup_tables(text): """ Create lookup tables for vocabulary :param text: The text of tv scripts split into words :return: A tuple of dicts (vocab_to_int, int_to_vocab) """ words = set(text) words_to_key = {w: i for i, w in en...
py3/project-3/dlnd_tv_script_generation.ipynb
jjonte/udacity-deeplearning-nd
unlicense
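The two lookup tables described above can be implemented compactly; sorting the vocabulary first is an assumption of this sketch that makes the ids deterministic:

```python
def create_lookup_tables(text):
    """text: list of words; returns (vocab_to_int, int_to_vocab)."""
    vocab = sorted(set(text))  # sorted => deterministic, reproducible ids
    vocab_to_int = {word: i for i, word in enumerate(vocab)}
    int_to_vocab = {i: word for word, i in vocab_to_int.items()}
    return vocab_to_int, int_to_vocab

v2i, i2v = create_lookup_tables(['to', 'be', 'or', 'not', 'to', 'be'])
print(v2i)                                   # each unique word gets one id
print([i2v[v2i[w]] for w in ['to', 'be']])   # ids round-trip back to the words
```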
Tokenize Punctuation We'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!". Implement the function token_lookup to return a dict that will be used to token...
def token_lookup():
    """
    Generate a dict to turn punctuation into a token.
    :return: Tokenize dictionary where the key is the punctuation and the value is the token
    """
    return {
        '.': '||PERIOD||',
        ',': '||COMMA||',
        '"': '||QUOTATION_MARK||',
        ';': '||SEMICOLON||',
        '!...
py3/project-3/dlnd_tv_script_generation.ipynb
jjonte/udacity-deeplearning-nd
unlicense
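Applying a punctuation-token dict of this kind before splitting on spaces makes "bye!" and "bye" yield the same word token. A small usage sketch (the dict below is a shortened stand-in, not the full table from the cell above):

```python
# Shortened stand-in for the punctuation-token dict.
token_dict = {'.': '||PERIOD||', ',': '||COMMA||', '!': '||EXCLAMATION_MARK||'}

text = 'bye! see you, bye.'
for punct, token in token_dict.items():
    # Pad with spaces so the token becomes its own word after splitting.
    text = text.replace(punct, ' {} '.format(token))
words = text.split()
print(words)  # 'bye' appears as the same plain word token both times
```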
Input Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders: - Input text placeholder named "input" using the TF Placeholder name parameter. - Targets placeholder - Learning Rate placeholder Return the placeholders in the following tuple (...
def get_inputs():
    """
    Create TF Placeholders for input, targets, and learning rate.
    :return: Tuple (input, targets, learning rate)
    """
    input = tf.placeholder(tf.int32, shape=(None, None), name="input")
    targets = tf.placeholder(tf.int32, shape=(None, None), name="targets")
    learning_rate = tf....
py3/project-3/dlnd_tv_script_generation.ipynb
jjonte/udacity-deeplearning-nd
unlicense
Build RNN Cell and Initialize Stack one or more BasicLSTMCells in a MultiRNNCell. - The RNN size should be set using rnn_size - Initialize the cell state using the MultiRNNCell's zero_state() function - Apply the name "initial_state" to the initial state using tf.identity() Return the cell and initial state in the follo...
def get_init_cell(batch_size, rnn_size): """ Create an RNN Cell and initialize it. :param batch_size: Size of batches :param rnn_size: Size of RNNs :return: Tuple (cell, initialize state) """ lstm_layer_count = 2 lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size) cell = tf.contrib.rn...
py3/project-3/dlnd_tv_script_generation.ipynb
jjonte/udacity-deeplearning-nd
unlicense
Word Embedding Apply embedding to input_data using TensorFlow. Return the embedded sequence.
def get_embed(input_data, vocab_size, embed_dim): """ Create embedding for <input_data>. :param input_data: TF placeholder for text input. :param vocab_size: Number of words in vocabulary. :param embed_dim: Number of embedding dimensions :return: Embedded input. """ random = tf.Variable(...
py3/project-3/dlnd_tv_script_generation.ipynb
jjonte/udacity-deeplearning-nd
unlicense
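Conceptually, an embedding lookup is just row indexing into the embedding matrix: each word id selects one row. A minimal numpy sketch of the idea behind tf.nn.embedding_lookup (the array names here are hypothetical):

```python
import numpy as np

vocab_size, embed_dim = 10, 4
# Stand-in for a trained embedding variable: one row per word in the vocabulary
embedding = np.random.uniform(-1, 1, size=(vocab_size, embed_dim))

# A batch of word ids with shape (batch_size, seq_length)
input_data = np.array([[1, 3, 5], [2, 0, 9]])

# The lookup is plain fancy indexing: each id picks out its embedding row
embedded = embedding[input_data]
print(embedded.shape)  # (2, 3, 4): (batch_size, seq_length, embed_dim)
```

The output gains one trailing dimension of size embed_dim relative to the input ids.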
Build RNN You created an RNN Cell in the get_init_cell() function. Time to use the cell to create an RNN. - Build the RNN using tf.nn.dynamic_rnn() - Apply the name "final_state" to the final state using tf.identity() Return the outputs and final state in the following tuple (Outputs, FinalState)
def build_rnn(cell, inputs): """ Create a RNN using a RNN Cell :param cell: RNN Cell :param inputs: Input text data :return: Tuple (Outputs, Final State) """ output, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32) return output, tf.identity(final_state, name='final_s...
py3/project-3/dlnd_tv_script_generation.ipynb
jjonte/udacity-deeplearning-nd
unlicense
Build the Neural Network Apply the functions you implemented above to: - Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function. - Build RNN using cell and your build_rnn(cell, inputs) function. - Apply a fully connected layer with a linear activation and vocab_size as the number...
def build_nn(cell, rnn_size, input_data, vocab_size): """ Build part of the neural network :param cell: RNN cell :param rnn_size: Size of rnns :param input_data: Input data :param vocab_size: Vocabulary size :return: Tuple (Logits, FinalState) """ embed = get_embed(input_data, vocab_...
py3/project-3/dlnd_tv_script_generation.ipynb
jjonte/udacity-deeplearning-nd
unlicense
Batches Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements: - The first element is a single batch of input with the shape [batch size, sequence length] - Th...
def get_batches(int_text, batch_size, seq_length): """ Return batches of input and target :param int_text: Text with the words replaced by their ids :param batch_size: The size of batch :param seq_length: The length of sequence :return: Batches as a Numpy array """ batch_count = int(len(...
py3/project-3/dlnd_tv_script_generation.ipynb
jjonte/udacity-deeplearning-nd
unlicense
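The reshape-and-split logic behind get_batches can be sketched in plain numpy. This is one common way to produce the required (number of batches, 2, batch size, sequence length) shape, not necessarily the implementation above; get_batches_np is a hypothetical name:

```python
import numpy as np

def get_batches_np(int_text, batch_size, seq_length):
    # Number of full batches that fit in the data
    n_batches = len(int_text) // (batch_size * seq_length)
    # Inputs, and targets shifted by one word
    xdata = np.array(int_text[: n_batches * batch_size * seq_length])
    ydata = np.array(int_text[1 : n_batches * batch_size * seq_length + 1])
    # Lay the text out as batch_size parallel streams, then cut into batches
    x = xdata.reshape(batch_size, -1)
    y = ydata.reshape(batch_size, -1)
    x_batches = np.split(x, n_batches, axis=1)
    y_batches = np.split(y, n_batches, axis=1)
    return np.array(list(zip(x_batches, y_batches)))

batches = get_batches_np(list(range(35)), batch_size=2, seq_length=3)
print(batches.shape)  # (number of batches, 2, batch size, sequence length)
```

With 35 words, batch size 2 and sequence length 3, only 5 full batches fit, so the last 5 words are dropped.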
Neural Network Training Hyperparameters Tune the following parameters: Set num_epochs to the number of epochs. Set batch_size to the batch size. Set rnn_size to the size of the RNNs. Set seq_length to the length of sequence. Set learning_rate to the learning rate. Set show_every_n_batches to the number of batches the ...
# Number of Epochs num_epochs = 180 # Batch Size batch_size = 100 # RNN Size rnn_size = 256 # Sequence Length seq_length = 16 # Learning Rate learning_rate = 0.001 # Show stats for every n number of batches show_every_n_batches = 10 """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ save_dir = './save'
py3/project-3/dlnd_tv_script_generation.ipynb
jjonte/udacity-deeplearning-nd
unlicense
Implement Generate Functions Get Tensors Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names: - "input:0" - "initial_state:0" - "final_state:0" - "probs:0" Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTen...
def get_tensors(loaded_graph): """ Get input, initial state, final state, and probabilities tensor from <loaded_graph> :param loaded_graph: TensorFlow graph loaded from file :return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor) """ input = loaded_graph.get_tensor_by_nam...
py3/project-3/dlnd_tv_script_generation.ipynb
jjonte/udacity-deeplearning-nd
unlicense
Choose Word Implement the pick_word() function to select the next word using probabilities.
def pick_word(probabilities, int_to_vocab): """ Pick the next word in the generated text :param probabilities: Probabilites of the next word :param int_to_vocab: Dictionary of word ids as the keys and words as the values :return: String of the predicted word """ probabilities = [(i,p) for i,...
py3/project-3/dlnd_tv_script_generation.ipynb
jjonte/udacity-deeplearning-nd
unlicense
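One standard way to pick the next word is to sample from the probability distribution rather than always taking the argmax, which keeps the generated script from repeating itself. A small numpy sketch (pick_word_np and the toy vocabulary are hypothetical):

```python
import numpy as np

int_to_vocab = {0: 'the', 1: 'cat', 2: 'sat'}  # toy vocabulary for illustration

def pick_word_np(probabilities, int_to_vocab):
    # Draw a word id according to the network's probability distribution
    idx = np.random.choice(len(probabilities), p=probabilities)
    return int_to_vocab[idx]

word = pick_word_np(np.array([0.1, 0.7, 0.2]), int_to_vocab)
print(word)
```

With a degenerate distribution the choice is deterministic, which makes the behavior easy to check.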
Table 1 - VOTable with all source properties
tbl1 = ascii.read("http://iopscience.iop.org/0004-637X/758/1/31/suppdata/apj443828t1_mrt.txt") tbl1.columns tbl1[0:5] len(tbl1)
notebooks/Luhman2012.ipynb
BrownDwarf/ApJdataFrames
mit
Cross match with SIMBAD
from astroquery.simbad import Simbad import astropy.coordinates as coord import astropy.units as u customSimbad = Simbad() customSimbad.add_votable_fields('otype', 'sptype') query_list = tbl1["Name"].data.data result = customSimbad.query_objects(query_list, verbose=True) result[0:3] print "There were {} sources que...
notebooks/Luhman2012.ipynb
BrownDwarf/ApJdataFrames
mit
Save the data table locally.
tbl1_plusSimbad.head() ! mkdir ../data/Luhman2012/ tbl1_plusSimbad.to_csv("../data/Luhman2012/tbl1_plusSimbad.csv", index=False)
notebooks/Luhman2012.ipynb
BrownDwarf/ApJdataFrames
mit
Load the HRPC Mailing List Now let's load the email data for analysis.
wg = "hrpc" urls = [wg] archives = [Archive(url,mbox=True) for url in urls] activities = [arx.get_activity(resolved=False) for arx in archives] activity = activities[0]
examples/name-and-gender/Working Group Emails and Drafts-hrpc.ipynb
datactive/bigbang
mit
Load IETF Draft Data Next, we will use the ietfdata tracker to look at the frequency of drafts for this working group.
from ietfdata.datatracker import * from ietfdata.datatracker_ext import * import pandas as pd dt = DataTracker() g = dt.group_from_acronym("hrpc") drafts = [draft for draft in dt.documents(group=g, doctype=dt.document_type_from_slug("draft"))] draft_df = pd.DataFrame.from_records([ {'time' : draft.time, '...
examples/name-and-gender/Working Group Emails and Drafts-hrpc.ipynb
datactive/bigbang
mit
We will want to use the date of the drafts; the full timestamp resolution is finer than we need.
draft_df['date'] = draft_df['time'].dt.date
examples/name-and-gender/Working Group Emails and Drafts-hrpc.ipynb
datactive/bigbang
mit
Plotting Some preprocessing is necessary to get the drafts data ready for plotting.
from matplotlib import cm viridis = cm.get_cmap('viridis') drafts_per_day = draft_df.groupby('date').count()['title']
examples/name-and-gender/Working Group Emails and Drafts-hrpc.ipynb
datactive/bigbang
mit
For each of the mailing lists we are looking at, plot the rolling average (over window) of the number of emails sent per day. Then plot vertical lines whose height is the drafts count, colored by the gender tendency.
window = 100 plt.figure(figsize=(12, 6)) for i, gender in enumerate(gender_activity.columns): colors = [viridis(0), viridis(.5), viridis(.99)] ta = gender_activity[gender] rmta = ta.rolling(window).mean() rmtadna = rmta.dropna() plt.plot_date(np.array(rmtadna.index), np.array(r...
examples/name-and-gender/Working Group Emails and Drafts-hrpc.ipynb
datactive/bigbang
mit
Is gender diversity correlated with draft output?
from scipy.stats import pearsonr import pandas as pd def calculate_pvalues(df): df = df.dropna()._get_numeric_data() dfcols = pd.DataFrame(columns=df.columns) pvalues = dfcols.transpose().join(dfcols, how='outer') for r in df.columns: for c in df.columns: pvalues[r][c] = round(pears...
examples/name-and-gender/Working Group Emails and Drafts-hrpc.ipynb
datactive/bigbang
mit
Measuring diversity As a rough measure of gender diversity, we sum the mailing list activity of women and those of unidentified gender, and divide by the activity of men.
garm['diversity'] = (garm['unknown'] + garm['women']) / garm['men'] garm['drafts'] = drafts_per_ordinal_day garm['drafts'] = garm['drafts'].fillna(0) garm.corr(method='pearson') calculate_pvalues(garm)
examples/name-and-gender/Working Group Emails and Drafts-hrpc.ipynb
datactive/bigbang
mit
Sparse 2d interpolation In this example the values of a scalar field $f(x,y)$ are known at a very limited set of points in a square domain: The square domain covers the region $x\in[-5,5]$ and $y\in[-5,5]$. The values of $f(x,y)$ are zero on the boundary of the square at integer spaced points. The value of $f$ is know...
x1=np.arange(-5,6) y1=5*np.ones(11) f1=np.zeros(11) x2=np.arange(-5,6) y2=-5*np.ones(11) f2=np.zeros(11) y3=np.arange(-4,5) x3=5*np.ones(9) f3=np.zeros(9) y4=np.arange(-4,5) x4=-5*np.ones(9) f4=np.zeros(9) x5=np.array([0]) y5=np.array([0]) f5=np.array([1]) x=np.hstack((x1,x2,x3,x4,x5)) y=np.hstack((y1,y2,y3,y4,y5)) f=n...
assignments/assignment08/InterpolationEx02.ipynb
ajhenrikson/phys202-2015-work
mit
Use meshgrid and griddata to interpolate the function $f(x,y)$ on the entire square domain: xnew and ynew should be 1d arrays with 100 points between $[-5,5]$. Xnew and Ynew should be 2d versions of xnew and ynew created by meshgrid. Fnew should be a 2d array with the interpolated values of $f(x,y)$ at the points (Xne...
xnew=np.linspace(-5,5,100) ynew=np.linspace(-5,5,100) Xnew,Ynew=np.meshgrid(xnew,ynew) Fnew=griddata((x,y),f,(Xnew,Ynew),method='cubic',fill_value=0.0) assert xnew.shape==(100,) assert ynew.shape==(100,) assert Xnew.shape==(100,100) assert Ynew.shape==(100,100) assert Fnew.shape==(100,100)
assignments/assignment08/InterpolationEx02.ipynb
ajhenrikson/phys202-2015-work
mit
Plot the values of the interpolated scalar field using a contour plot. Customize your plot to make it effective and beautiful.
plt.figure(figsize=(10,8)) plt.contourf(Xnew,Ynew,Fnew,cmap='cubehelix_r') plt.colorbar() plt.title('2D Interpolation'); assert True # leave this to grade the plot
assignments/assignment08/InterpolationEx02.ipynb
ajhenrikson/phys202-2015-work
mit
Numpy's ndarray One of the reasons numpy is a great tool for computations on arrays is its ndarray class. This class lets you declare arrays with a number of convenient methods and attributes that make our life easier when programming complex algorithms on large arrays.
#Now let's see what one of its instances looks like: a = np.ndarray(4) b = np.ndarray([3,4]) print(type(b)) print('a: ', a) print('b: ', b)
day2/numpy-intro.ipynb
timothydmorton/usrp-sciprog
mit
There is a wide range of numpy functions that allow you to declare ndarrays filled with your favourite flavours: https://docs.scipy.org/doc/numpy/reference/routines.array-creation.html
# zeros z = np.zeros(5) print(type(z)) print(z) # ones o = np.ones((4,2)) print(type(o)) print(o) # ordered integers oi = np.arange(10) #Only one-dimensional print(type(oi)) print(oi)
day2/numpy-intro.ipynb
timothydmorton/usrp-sciprog
mit
Operations on ndarrays Arithmetic operations on ndarrays use python's usual operator symbols. It is important to notice that these operations are performed element-wise on arrays of the same size and dimensions. It is also possible to mix ndarrays and plain numbers, in which case the same operation is perfor...
#An array of ones x = np.arange(5) #An array of random values drawn uniformly between 0 and 1 y = np.random.rand(5) print('x: ', x) print('y: ', y) print('addition: ', x + y) print('mutliplication: ', x * y) print('power: ', x ** y) #Operation with numbers print('subtraction: ', x - 3) print('fraction: ', x / 2) prin...
day2/numpy-intro.ipynb
timothydmorton/usrp-sciprog
mit
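The element-wise and scalar behaviour described above is a special case of numpy's broadcasting rules: shapes are compatible when, aligned from the right, each dimension either matches or is missing/1. A small sketch:

```python
import numpy as np

x = np.arange(5)
# Element-wise between same-shape arrays
assert np.array_equal(x + x, np.array([0, 2, 4, 6, 8]))
# A scalar is broadcast to every element
assert np.array_equal(x - 3, np.array([-3, -2, -1, 0, 1]))
# A (3, 5) array and a (5,) array also broadcast: x is applied to every row
m = np.ones((3, 5))
print((m + x).shape)  # (3, 5)
```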
ndarrays and numpy also have methods or functions to perform matrix operations:
#Let's just declare some new arrays x = (np.random.rand(4,5)*10).astype(int) # note, astype is a method that allows to change the type of all the elements in the ndarray y = np.ones((5))+1 # Note: here, show addition of non-matching shapes #np.ones((5,3,4))+np.random.randn(4) #transpose print('the array x: \n', x) pri...
day2/numpy-intro.ipynb
timothydmorton/usrp-sciprog
mit
array shapes It is possible to access the shape and size (there is a difference!) of an array, and even to alter its shape in various ways.
print('Shape of x: ',x.shape) # From ndarray attributes print('Shape of y: ',np.shape(y)) # From numpy function print('Size of x: ', x.size) # From ndarray attributes print('Size of y: ', np.size(y)) # From numpy function
day2/numpy-intro.ipynb
timothydmorton/usrp-sciprog
mit
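The difference between shape and size mentioned above, in one small example: shape is the tuple of per-dimension lengths, while size is the total number of elements (the product of the shape).

```python
import numpy as np

x = np.ones((4, 5))
print(x.shape)  # (4, 5): one length per dimension
print(x.size)   # 20: total element count
# size is always the product of the entries of shape
assert x.size == np.prod(x.shape)
```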
Now this is how we can change an array's shape:
print('the original array: \n', x) print('change of shape: \n', x.reshape((10,2)))#reshape 4x5 into 10x2 print('change of shape and number of dimensions: \n', x.reshape((5,2,2)))#reshape 4x5 into 5x2x2 print('the size has to be conserved: \n', x.reshape((10,2)).size) #flattenning an array: xflat = x.flatten() print('...
day2/numpy-intro.ipynb
timothydmorton/usrp-sciprog
mit
Indexing with numpy For the most part, indexing in numpy works exactly as we saw in python. We are going to use this section to introduce a couple of features for indexing (some native to python) that can significantly improve your coding skills. In particular, numpy introduces a particularly useful object: np.newaxi...
#conventional indexing print(x) print('first line of x: {}'.format(x[0,:])) print('second column of x: {}'.format(x[:,1])) print('last element of x: {}'.format(x[-1,-1])) #selection print('One element in 3 between the second and 13th element: ', xflat[1:14:3]) #This selection writes as array[begin:end:step] #Equivale...
day2/numpy-intro.ipynb
timothydmorton/usrp-sciprog
mit
ndarray method for simple operations on array elements Here I list a small number of ndarray methods that are very convenient and often used in astronomy and image processing. It is always a good thing to have them in mind to simplify your code. Of course, we only take a look at a few of them, but there is plenty more w...
a = np.linspace(1,6,3) # 3 values evenly spaced between 1 and 6 b = np.arange(16).reshape(4,4) c = np.random.randn(3,4)*10 # random draws from a normal distribution with standard deviation 10 print(f'Here are 3 new arrays, a:\n {a}, \nb:\n {b}\nand c:\n {c}') #Sum the elements of an array print('Sum over all of the ar...
day2/numpy-intro.ipynb
timothydmorton/usrp-sciprog
mit
Oops, that's not what we were expecting: c was replaced by its sorted version. This is an in-place computation.
print(c) #Your turn now: give me the ALL the elements of c sorted (not just along one axis). #Your answer.... #Then, sort the array in decreasing order #Your answer....
day2/numpy-intro.ipynb
timothydmorton/usrp-sciprog
mit
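The in-place behaviour above is worth contrasting with np.sort, which returns a sorted copy and leaves the original untouched. A small sketch of the copy-versus-in-place distinction:

```python
import numpy as np

c = np.array([3.0, 1.0, 2.0])
# np.sort returns a sorted copy; c is unchanged
s = np.sort(c)
assert np.array_equal(c, [3.0, 1.0, 2.0])
# the ndarray method sorts in place (and returns None)
c.sort()
assert np.array_equal(c, [1.0, 2.0, 3.0])
# decreasing order: sort, then reverse with a [::-1] slice
print(np.sort(c)[::-1])  # [3. 2. 1.]
```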
Now, we are going to see an important feature in numpy. While one can live without knowing this trick, one cannot be a good python coder without using it. I am talking about the mighty: Newaxis!! Newaxis allows you to add a dimension to an array. This lets you expand arrays cheaply, which leads to faster operations ...
import numpy as np #A couple of arrays first: x_arr = np.arange(10) y_arr = np.arange(10) print(x_arr.shape) x = x_arr[np.newaxis,:] print(x.shape) print(x_arr) print(x) print(x+x_arr) #Now let's index these with newaxes: print('Newaxis indexed array \n {} and its shape \n {}'.format(x_arr[:,np.newaxis],x_arr[:,np.new...
day2/numpy-intro.ipynb
timothydmorton/usrp-sciprog
mit
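A typical use of newaxis is turning a 1d array into a row or a column so that broadcasting builds a whole 2d table without any explicit loop. A small sketch:

```python
import numpy as np

x = np.arange(3)
print(x.shape)                 # (3,)
print(x[np.newaxis, :].shape)  # (1, 3): a row
print(x[:, np.newaxis].shape)  # (3, 1): a column
# Broadcasting a column against a row produces the full outer product
table = x[:, np.newaxis] * x[np.newaxis, :]
print(table.shape)  # (3, 3)
```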
A quick intro to matplotlib When writing complex algorithms, it is important to be able to check that calculations are done properly, but also to be able to display results in a clear manner. When dimensionality and size are small, it is still possible to rely on printing, but more generally and for better clarity, d...
import matplotlib.pyplot as plt %matplotlib inline x = np.linspace(0,5,100) #Plotting a curve plt.plot(np.exp(x)) plt.show() #The same curve with the right x-axis in red dashed line plt.plot(x, np.exp(x), '--r') plt.show() #The same curve with the right x-axis and only the points in the data as dots plt.plot(x[::4...
day2/numpy-intro.ipynb
timothydmorton/usrp-sciprog
mit
Data The data are measurements of the atmospheric CO2 concentration made at Mauna Loa, Hawaii (Keeling & Whorf 2004). Data can be found at http://scrippsco2.ucsd.edu/data/atmospheric_co2/primary_mlo_co2_record. We use the [statsmodels version](http://statsmodels.sourceforge.net/devel/datasets/generated/co2.html).
import numpy as np import matplotlib.pyplot as plt from statsmodels.datasets import co2 data = co2.load_pandas().data t = 2000 + (np.array(data.index.to_julian_date()) - 2451545.0) / 365.25 y = np.array(data.co2) m = np.isfinite(t) & np.isfinite(y) & (t < 1996) t, y = t[m][::4], y[m][::4] plt.plot(t, y, ".k") plt.xli...
deprecated/gp_mauna_loa.ipynb
probml/pyprobml
mit
Kernel In this figure, you can see that there is a periodic (or quasi-periodic) signal with a year-long period superimposed on a long-term trend. We will follow R&W and model these effects non-parametrically using a complicated covariance function. The covariance function that we’ll use is: $$k(r) = k_1(r) + k_2(r) + k_3...
import jax import jax.numpy as jnp from tinygp import kernels, transforms, GaussianProcess def build_gp(theta, X): mean = theta[-1] # We want most of out parameters to be positive so we take the `exp` here # Note that we're using `jnp` instead of `np` theta = jnp.exp(theta[:-1]) # Construct the...
deprecated/gp_mauna_loa.ipynb
probml/pyprobml
mit
Normalizing text
import string def norm_words(words): words = words.lower().translate(None, string.punctuation) return words jeopardy["clean_question"] = jeopardy["Question"].apply(norm_words) jeopardy["clean_answer"] = jeopardy["Answer"].apply(norm_words) jeopardy.head()
Probability and Statistics in Python/Guided Project - Winning Jeopardy.ipynb
foxan/dataquest
apache-2.0
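The cell above uses the Python 2 form of str.translate (passing None plus a deletion string), which raises a TypeError under Python 3. Under Python 3 the same normalization is written with str.maketrans; a sketch (norm_words_py3 is a hypothetical name):

```python
import string

def norm_words_py3(text):
    # Build a translation table that maps every punctuation character to None,
    # then lowercase and strip punctuation in one pass
    table = str.maketrans('', '', string.punctuation)
    return text.lower().translate(table)

print(norm_words_py3("What's the Answer?"))  # whats the answer
```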
Normalizing columns
def norm_value(value): try: value = int(value.translate(None, string.punctuation)) except: value = 0 return value jeopardy["clean_value"] = jeopardy["Value"].apply(norm_value) jeopardy["Air Date"] = pd.to_datetime(jeopardy["Air Date"]) print(jeopardy.dtypes) jeopardy.head()
Probability and Statistics in Python/Guided Project - Winning Jeopardy.ipynb
foxan/dataquest
apache-2.0
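The value normalization above relies on the same Python 2 translate API; a Python 3 equivalent again goes through str.maketrans (norm_value_py3 is a hypothetical name):

```python
import string

def norm_value_py3(value):
    # Strip punctuation like "$" and "," then parse as an int;
    # fall back to 0 for non-numeric values such as "None"
    try:
        return int(value.translate(str.maketrans('', '', string.punctuation)))
    except ValueError:
        return 0

print(norm_value_py3("$2,000"))  # 2000
print(norm_value_py3("None"))    # 0
```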
Answers in questions
def ans_in_q(row): match_count = 0 split_answer = row["clean_answer"].split(" ") split_question = row["clean_question"].split(" ") try: split_answer.remove("the") except: pass if len(split_answer) == 0: return 0 else: for word in split_answer: ...
Probability and Statistics in Python/Guided Project - Winning Jeopardy.ipynb
foxan/dataquest
apache-2.0
Only 0.6% of the answers appear in the question itself. Out of this 0.6%, a sample of the questions shows that they are all multiple-choice questions, which suggests that it is very unlikely that the answer will appear in the question itself. Recycled questions
jeopardy = jeopardy.sort_values(by="Air Date") question_overlap = [] terms_used = set() for index, row in jeopardy.iterrows(): match_count = 0 split_question = row["clean_question"].split(" ") split_question = [word for word in split_question if len(word) >= 6] for word in spl...
Probability and Statistics in Python/Guided Project - Winning Jeopardy.ipynb
foxan/dataquest
apache-2.0
Low value vs high value questions
def value(row): if row["clean_value"] > 800: value = 1 else: value = 0 return value jeopardy["high_value"] = jeopardy.apply(value, axis=1) jeopardy.head()
Probability and Statistics in Python/Guided Project - Winning Jeopardy.ipynb
foxan/dataquest
apache-2.0
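Splitting questions into high- and low-value groups sets up a chi-squared comparison of how often a term appears in each. The core arithmetic can be sketched with made-up counts (all numbers here are hypothetical, for illustration only):

```python
import numpy as np

# Hypothetical observed counts of a term in high- and low-value questions
observed = np.array([12.0, 8.0])
# Expected counts if the term were independent of question value
# (assumed overall split for illustration: 40% high, 60% low of 20 uses)
expected = np.array([20 * 0.4, 20 * 0.6])

# Chi-squared statistic: sum over categories of (observed - expected)^2 / expected
chi_squared = ((observed - expected) ** 2 / expected).sum()
print(chi_squared)
```

A larger statistic means the observed split deviates more from what independence would predict.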
The above is what the output should look like.
%%script 20170706_c_foo void blank(char buf[LINES][COLUMNS], int row, int column) { for ( ; row < LINES; row++) { for ( ; column < COLUMNS; column++) buf[row][column] = ' '; column = 0; } } %%script 20170706_c_foo void blank_to_end_of_row(char buf[LINES][COLUMNS], int row, int col...
20170706-dojo-clear-to-end-of-table.ipynb
james-prior/cohpy
mit