To see this in action, consider the vessel transit segments dataset (which we merged with the vessel information to yield segments_merged). Say we wanted to return the 3 longest segments travelled by each ship:
top3segments = segments_merged.groupby('mmsi').apply(top, column='seg_length', n=3)[['names', 'seg_length']]
top3segments.head(15)
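As a self-contained sketch of the same pattern (the `top` helper and the toy frame below are assumptions, since `segments_merged` itself is not shown here):

```python
import pandas as pd

# Hypothetical `top` helper matching the call above: return the
# n rows of df with the largest values in `column`.
def top(df, column, n=3):
    return df.sort_values(column, ascending=False).head(n)

# Toy stand-in for segments_merged.
segments = pd.DataFrame({
    'mmsi': [1, 1, 1, 1, 2, 2],
    'seg_length': [5.0, 9.0, 1.0, 7.0, 2.0, 4.0],
})

# apply passes each group's DataFrame as the first argument of top.
top2 = segments.groupby('mmsi').apply(top, column='seg_length', n=2)
print(top2)
```

The result keeps a two-level index: the group key first, then the original row label.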
Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb
Merinorus/adaisawesome
gpl-3.0
Notice that additional arguments for the applied function can be passed via apply after the function name. It assumes that the DataFrame is the first argument. Recall the microbiome data sets that we used previously for the concatenation example. Suppose that we wish to aggregate the data at a higher biological classif...
mb1.index[:3]
Using the string methods split and join we can create an index that just uses the first three classifications: domain, phylum and class.
class_index = mb1.index.map(lambda x: ' '.join(x.split(' ')[:3]))
mb_class = mb1.copy()
mb_class.index = class_index
However, since there are multiple taxonomic units with the same class, our index is no longer unique:
mb_class.head()
We can re-establish a unique index by summing all rows with the same class, using groupby:
mb_class.groupby(level=0).sum().head(10)
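A minimal illustration of this pattern with made-up taxa (not the actual microbiome data): rows sharing an index label are collapsed into one.

```python
import pandas as pd

# Toy frame with a deliberately non-unique index, mimicking mb_class
# after truncating the taxonomy to three levels (names invented).
mb = pd.DataFrame(
    {'count': [10, 5, 2]},
    index=['Bacteria Firmicutes Bacilli',
           'Bacteria Firmicutes Bacilli',
           'Bacteria Proteobacteria Gamma'])

# groupby(level=0) groups on the index itself; sum() collapses duplicates.
summed = mb.groupby(level=0).sum()
print(summed)
```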
Exercise 2 Load the dataset in titanic.xls. It contains data on all the passengers that travelled on the Titanic.
from IPython.core.display import HTML
HTML(filename='Data/titanic.html')
Women and children first? Describe each attribute, both with basic statistics and plots. State clearly your assumptions and discuss your findings. Use the groupby method to calculate the proportion of passengers that survived by sex. Calculate the same proportion, but by class and sex. Create age categories: children ...
titanic_df = pd.read_excel('Data/titanic.xls', 'titanic', index_col=None, header=0)
titanic_df
titanic_nameduplicate = titanic_df.duplicated(subset='name')
#titanic_nameduplicate
titanic_df.drop_duplicates(['name'])
gender_map = {'male': 0, 'female': 1}
titanic_df['sex'] = titanic_df.sex.map(gender_map)
titanic_df...
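On synthetic data (values invented for illustration; the real exercise uses titanic.xls), the survival proportions the exercise asks for can be computed like this:

```python
import pandas as pd

df = pd.DataFrame({
    'sex': ['female', 'female', 'male', 'male', 'male'],
    'pclass': [1, 3, 1, 3, 3],
    'survived': [1, 1, 1, 0, 0],
})

# The mean of a 0/1 indicator within each group is exactly the
# proportion of that group that survived.
by_sex = df.groupby('sex')['survived'].mean()

# The same proportion, broken down by class and sex.
by_class_sex = df.groupby(['pclass', 'sex'])['survived'].mean()
print(by_sex)
print(by_class_sex)
```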
The Heart of the Notebook: The Combinatorial Optimization class
class CombinatorialOptimisation(Disaggregator):
    """
    A Combinatorial Optimization algorithm based on the implementation by NILMTK.

    This class is built upon the main Disaggregator class already implemented by NILMTK.
    All the methods from Disaggregator are passed in here as well since we import the...
    """
phd-thesis/benchmarkings/am207-NILM-project-master/CO.ipynb
diegocavalca/Studies
cc0-1.0
Importing and Loading the REDD dataset
data_dir = r'\Users\Nick\Google Drive\PhD\Courses\Semester 2\AM207\Project'  # raw string avoids backslash-escape problems
we = DataSet(join(data_dir, 'REDD.h5'))
print('loaded ' + str(len(we.buildings)) + ' buildings')
We want to train the Combinatorial Optimization algorithm using the data for 5 buildings and then test it against the last building. To simplify our analysis, and to enable comparison with other methods (neural nets, FHMM, MLE, etc.), we will only try to disaggregate data associated with the fridge and the microwave. ...
for i in range(1, 7):  # xrange in the original Python 2 notebook
    print('Timeframe for building {} is {}'.format(i, we.buildings[i].elec.get_timeframe()))
Unfortunately, due to a bug in one of the main classes of the NILMTK package, the implementation of Combinatorial Optimization does not save the meters for the disaggregated data correctly unless the building we test on also exists in the training set. More on this issue can be found here https://github.com/n...
# Data file directory
data_dir = r'\Users\Nick\Google Drive\PhD\Courses\Semester 2\AM207\Project'

# Make the Data set
Data = DataSet(join(data_dir, 'REDD.h5'))

# Make copies of the Data Set so that local changes would not affect the global dataset
Data_for_5 = DataSet(join(data_dir, 'REDD.h5'))
Data_for_rest = DataSet...
Creating MeterGroups with the desired appliances from the desired buildings. Below we define a function that is able to create a metergroup that only includes meters for the appliances we are interested in, and that can also exclude buildings we don't want in the meter group. Also, if an appliance is requested but a...
def get_all_trainings(appliance, dataset, buildings_to_exclude):
    # Filtering by appliances:
    elecs = []
    for app in appliance:
        app_l = [app]
        print('Now loading data for ' + app + ' for all buildings in the data to create the metergroup')
        print()
        for building in dataset.build...
Now we set the appliances that we want as well as the buildings to exclude and we create the metergroup
applianceName = ['fridge', 'microwave']
buildings_to_exclude = [4, 5]

metergroup = get_all_trainings(applianceName, Data_for_rest, buildings_to_exclude)

print('Now printing the Meter Group...')
print()
print(metergroup)
As we can see, the MeterGroup was successfully created and contains all the appliances we requested (fridge and microwave) in all buildings where they exist, apart from the ones we excluded. Correcting the MeterGroup (necessary for the CO to work): now we need to perform the trick we mentioned previously. We need ...
def correct_meter(Data, building, appliance, oldmeter):
    # Unpack meters from the MeterGroup
    meters = oldmeter.all_meters()

    # Get the rest of the meters and append
    for a in appliance:
        meter_to_add = Data.buildings[building].elec[a]
        meters.append(meter_to_add)

    # Group again...
As we can see, the metergroup was updated successfully. Training: we now need to train on the MeterGroup we just created. First, let us load the class for the CO.
# Train
co = CombinatorialOptimisation()
Now let's train.
co.train(corr_metergroup)
Preparing the Testing Data. Now that training is done, all we have to do is prepare the data for building 5, on which we want to test, and call the disaggregation. The dataset is now the remaining part of building 5 that was not seen during training. After that, we only keep the main meter, which contains information...
Test_Data = DataSet(join(data_dir, 'REDD.h5'))
Test_Data.set_window(start=break_point)

# The building number on which we test
building_for_testing = 5

test = Test_Data.buildings[building_for_testing].elec
mains = test.mains()
Disaggregating the test data. The disaggregation begins now.
# Disaggregate
disag_filename = join(data_dir, 'COMBINATORIAL_OPTIMIZATION.h5')
mains = test.mains()

try:
    output = HDFDataStore(disag_filename, 'w')
    co.disaggregate(mains, output)
except ValueError:
    output.close()
    output = HDFDataStore(disag_filename, 'w')
    co.disaggregate(mains, output)

for meter...
OK, now we are all done. All that remains is to interpret the results and plot the scores. Post Processing & Results
# Opening the Dataset with the Disaggregated data
disag = DataSet(disag_filename)

# Getting electric appliances and meters
disag_elec = disag.buildings[building_for_testing].elec

# We also get the electric appliances and meters for the ground truth data to compare
elec = Test_Data.buildings[building_for_testing].elec...
Resampling to align meters. Before we can calculate and plot the metrics, we need to align the ground-truth meter with the disaggregated meters. Why so? If you look at the disaggregation method of the CO class above, you may see that by default the time sampling is changed from 3 s (the raw data) to 60 s. ...
def align_two_meters(master, slave, func='when_on'):
    """Returns a generator of 2-column pd.DataFrames.

    The first column is from `master`, the second from `slave`.
    Takes the sample rate and good_periods of `master` and applies to `slave`.

    Parameters
    ----------
    master, slave : ElecMeter or Mete...
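The core resampling step can be sketched with plain pandas (a simplified stand-in to show the idea, not the NILMTK implementation):

```python
import pandas as pd

# Ground truth sampled every 3 s; the disaggregated output is at 60 s.
idx3 = pd.date_range('2024-01-01', periods=40, freq='3s')
truth = pd.Series(1.0, index=idx3)

# Downsample the 3 s series to 60 s means so both share one time base
# and can be compared sample by sample.
truth60 = truth.resample('60s').mean()
print(truth60)
```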
Here we just plot the disaggregated data alongside the ground truth for the Fridge
disag_elec.select(instance=18).plot()
me.select(instance=18).plot()
Aligning meters, converting to numpy and computing metrics. In this part of the notebook, we call the function we previously defined to align the meters, and then we convert the meters to pandas and ultimately to numpy arrays. We check if any NaNs exist (which is possible after resampling; resampling errors m...
appliances_scores = {}

for m in me.meters:
    print('Processing {}...'.format(m.label()))
    ground_truth = m
    inst = m.instance()
    prediction = disag_elec.select(instance=inst)
    a = prediction.meters[0]
    b = a.power_series_all_data()
    pr_a, gt_a = align_two_meters(prediction.meters[0], g...
Results Now we just plot the scores for both the Fridge and the Microwave in order to be able to visualize what is going on. We do not comment on the results in this notebook since we do this in the report. There is a separate notebook where all these results are combined along with the corresponding results from the N...
x = np.arange(2)
y = np.array([appliances_scores[i]['F1-Score'] for i in Names])
y[np.isnan(y)] = 0.001
f = plt.figure(figsize=(18, 8))
plt.rc('font', size=20, **{'family': 'serif', 'serif': ['Computer Modern']})
plt.rc('text', usetex=True)
ax = f.add_axes([0.2, 0.2, 0.8, 0.8])
ax.bar(x, y, align='center')
ax.set_xticks(x) ...
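For reference, the scores being plotted reduce to the usual definitions over on/off appliance states; a minimal sketch on toy arrays (not the notebook's data):

```python
import numpy as np

truth = np.array([1, 1, 0, 0, 1, 0])  # ground-truth on/off states
pred  = np.array([1, 0, 0, 1, 1, 0])  # predicted on/off states

tp = np.sum((pred == 1) & (truth == 1))  # correctly predicted "on"
fp = np.sum((pred == 1) & (truth == 0))  # spurious "on"
fn = np.sum((pred == 0) & (truth == 1))  # missed "on"

precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
print(precision, recall, f1)
```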
Precision
x = np.arange(2)
y = np.array([appliances_scores[i]['Precision'] for i in Names])
y[np.isnan(y)] = 0.001
f = plt.figure(figsize=(18, 8))
plt.rc('font', size=20, **{'family': 'serif', 'serif': ['Computer Modern']})
plt.rc('text', usetex=True)
ax = f.add_axes([0.2, 0.2, 0.8, 0.8])
ax.bar(x, y, align='center')
ax.set_xticks(x)...
Recall
x = np.arange(2)
y = np.array([appliances_scores[i]['Recall'] for i in Names])
y[np.isnan(y)] = 0.001
f = plt.figure(figsize=(18, 8))
plt.rc('font', size=20, **{'family': 'serif', 'serif': ['Computer Modern']})
plt.rc('text', usetex=True)
ax = f.add_axes([0.2, 0.2, 0.8, 0.8])
ax.bar(x, y, align='center')
ax.set_xticks(x)
ax...
Accuracy
x = np.arange(2)
y = np.array([appliances_scores[i]['Accuracy'] for i in Names])
y[np.isnan(y)] = 0.001
f = plt.figure(figsize=(18, 8))
plt.rc('font', size=20, **{'family': 'serif', 'serif': ['Computer Modern']})
plt.rc('text', usetex=True)
ax = f.add_axes([0.2, 0.2, 0.8, 0.8])
ax.bar(x, y, align='center')
ax.set_xticks(x) ...
Create the grid We are going to build a uniform rectilinear grid with a node spacing of 100 km in the y-direction and 10 km in the x-direction on which we will solve the flexure equation. First we need to import RasterModelGrid.
from landlab import RasterModelGrid
notebooks/tutorials/flexure/flexure_1d.ipynb
amandersillinois/landlab
mit
Create a rectilinear grid with a spacing of 100 km between rows and 10 km between columns. The numbers of rows and columns are provided as a tuple of (n_rows, n_cols), in the same manner as similar numpy functions. The spacing is also a tuple, (dy, dx).
grid = RasterModelGrid((3, 800), xy_spacing=(100e3, 10e3))
grid.dy, grid.dx
Create the component Now we create the flexure component and tell it to use our newly created grid. First, though, we'll examine the Flexure component a bit.
from landlab.components import Flexure1D
The Flexure1D component, as with most landlab components, will require our grid to have some data that it will use. We can get the names of these data fields with the input_var_names attribute of the component class.
Flexure1D.input_var_names
We see that flexure uses just one data field: the change in lithospheric loading. Landlab component classes can provide additional information about each of these fields. For instance, to see the units for a field, use the var_units method.
Flexure1D.var_units('lithosphere__increment_of_overlying_pressure')
To print a more detailed description of a field, use var_help.
Flexure1D.var_help('lithosphere__increment_of_overlying_pressure')
What about the data that Flexure1D provides? Use the output_var_names attribute.
Flexure1D.output_var_names
Flexure1D.var_help('lithosphere_surface__increment_of_elevation')
Now that we understand the component a little more, create it using our grid.
grid.add_zeros("lithosphere__increment_of_overlying_pressure", at="node")
flex = Flexure1D(grid, method='flexure')
Add a point load First we'll add just a single point load to the grid. We need to call the update method of the component to calculate the resulting deflection (if we don't run update the deflections would still be all zeros). Use the load_at_node attribute of Flexure1D to set the loads. Notice that load_at_node has th...
flex.load_at_node[1, 200] = 1e6
flex.update()
plt.plot(flex.x_at_node[1, :400] / 1000., flex.dz_at_node[1, :400])
Before we make any changes, reset the deflections to zero.
flex.dz_at_node[:] = 0.
Now we will double the effective elastic thickness but keep the same point load. Notice that, as expected, the deflections are more spread out.
flex.eet *= 2.
flex.update()
plt.plot(flex.x_at_node[1, :400] / 1000., flex.dz_at_node[1, :400])
Add some loading. We will now add a distributed load. As we saw above, for this component the name of the attribute that holds the applied loads is load_at_node. For this example we create a load that increases linearly over the center portion of the grid until some maximum. This could be thought of as the water load ...
flex.load_at_node[1, :100] = 0.
flex.load_at_node[1, 100:300] = np.arange(200) * 1e6 / 200.
flex.load_at_node[1, 300:] = 1e6
plt.plot(flex.load_at_node[1, :400])
Update the component to solve for deflection Clear the current deflections, and run update to get the new deflections.
flex.dz_at_node[:] = 0.
flex.update()
plt.plot(flex.x_at_node[1, :400] / 1000., flex.dz_at_node[1, :400])
Init
import os

%%R
library(ggplot2)
library(dplyr)
library(tidyr)
library(phyloseq)
library(fitdistrplus)
library(sads)

%%R
dir.create(workDir, showWarnings=FALSE)
ipynb/bac_genome/fullCyc/trimDataset/dataset_info.ipynb
nick-youngblut/SIPSim
mit
Loading phyloseq list datasets
%%R
# bulk core samples
F = file.path(physeqDir, physeqBulkCore)
physeq.bulk = readRDS(F)
#physeq.bulk.m = physeq.bulk %>% sample_data
physeq.bulk %>% names

%%R
# SIP core samples
F = file.path(physeqDir, physeqSIP)
physeq.SIP = readRDS(F)
#physeq.SIP.m = physeq.SIP %>% sample_data
physeq.SIP %>% names
Infer the abundance distribution of each bulk soil community: distribution fit
%%R
physeq2otu.long = function(physeq){
    df.OTU = physeq %>%
        transform_sample_counts(function(x) x/sum(x)) %>%
        otu_table %>%
        as.matrix %>%
        as.data.frame
    df.OTU$OTU = rownames(df.OTU)
    df.OTU = df.OTU %>%
        gather('sample', 'abundance', 1:(ncol(df.OTU)-1))
    return(...
Relative abundance of most abundant taxa
%%R -w 800
df.OTU = do.call(rbind, df.OTU.l) %>%
    mutate(abundance = abundance * 100) %>%
    group_by(sample) %>%
    mutate(rank = row_number(desc(abundance))) %>%
    ungroup() %>%
    filter(rank < 10)

ggplot(df.OTU, aes(rank, abundance, color=sample, group=sample)) +
    geom_point() +
    geom_line() +
    la...
Making a community file for the simulations
%%R -w 800 -h 300
df.OTU = do.call(rbind, df.OTU.l) %>%
    mutate(abundance = abundance * 100) %>%
    group_by(sample) %>%
    mutate(rank = row_number(desc(abundance))) %>%
    group_by(rank) %>%
    summarize(mean_abundance = mean(abundance)) %>%
    ungroup() %>%
    mutate(library = 1,
           mean_abundance =...
Adding reference genome taxon names
ret = !SIPSim KDE_info -t /home/nick/notebook/SIPSim/dev/bac_genome1147/validation/ampFrags_kde.pkl
ret = ret[1:]
ret[:5]

%%R
F = '/home/nick/notebook/SIPSim/dev/fullCyc_trim//ampFrags_kde_amplified.txt'
ret = read.delim(F, sep='\t')
ret = ret$genomeID
ret %>% length %>% print
ret %>% head

%%R
ret %>% length %>% pri...
Writing file
%%R
F = file.path(workDir, 'fullCyc_12C-Con_trm_comm.txt')
write.table(df.OTU, F, sep='\t', quote=FALSE, row.names=FALSE)
cat('File written:', F, '\n')
Parsing the amp-Frag file to match the comm file
!tail -n +2 /home/nick/notebook/SIPSim/dev/fullCyc/fullCyc_12C-Con_trm_comm.txt | \
    cut -f 2 > /home/nick/notebook/SIPSim/dev/fullCyc/fullCyc_12C-Con_trm_comm_taxa.txt

outFile = os.path.splitext(ampFragFile)[0] + '_parsed.pkl'

!SIPSim KDE_parse \
    $ampFragFile \
    /home/nick/notebook/SIPSim/dev/fullCyc/fullCy...
Case 1: a = a + b. The sum is computed first, resulting in a new array, and the name a is then bound to that new array.
a = np.array(range(10000000))
b = np.array(range(9999999, -1, -1))

%%time
a = a + b
02_NB_IntroductionNumpy.ipynb
dianafprieto/SS_2017
mit
Case 2: a += b. The elements of b are added directly into the elements of a (in memory); no intermediate array is created. These operators implement so-called "in-place arithmetic" (e.g., +=, *=, /=, -=).
a = np.array(range(10000000))
b = np.array(range(9999999, -1, -1))

%%time
a += b
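The difference is observable, not just a performance detail: after `a = a + b` the name points at a fresh buffer, while `a += b` mutates the original one. A small check, independent of the timing cells above:

```python
import numpy as np

a = np.arange(5)
b = np.ones(5, dtype=int)

buf = a          # keep a second reference to a's buffer
a = a + b        # allocates a new array and rebinds the name
assert not np.shares_memory(a, buf)

a = np.arange(5)
buf = a
a += b           # writes into the existing buffer
assert np.shares_memory(a, buf)
print(buf)       # the buffer seen through buf changed too
```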
2. Vectorization
# Apply a function to a complete array instead of writing a loop that
# iterates over all elements of the array. This is called vectorization.
# The opposite of vectorization (for loops) is known as the scalar implementation.
def f(x):
    return x * np.exp(4)

print(f(a))
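Both implementations give the same numbers; the vectorized form simply replaces the Python-level loop with one array operation (toy array below):

```python
import numpy as np

def f(x):
    return x * np.exp(4)

xs = np.arange(5, dtype=float)
loop_result = np.array([f(v) for v in xs])  # scalar implementation
vec_result = f(xs)                          # vectorized implementation
print(vec_result)
```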
3. Slicing and reshape. Array slicing x[i:j:s] picks out the elements starting with index i and stepping s indices at a time up to, but not including, j.
x = np.array(range(100))
x[1:-1]    # all elements except the first and the last; unlike lists, x[1:-1] is not a copy of the data
x[0:-1:2]  # every second element up to, but not including, the last element
x[::4]     # every fourth element in the whole array
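The claims above, including the view-not-copy behaviour, can be checked directly:

```python
import numpy as np

x = np.arange(100)
assert x[1:-1].tolist() == list(range(1, 99))  # drops first and last
assert x[0:-1:2][:3].tolist() == [0, 2, 4]     # every second element, last excluded
assert x[::4][:3].tolist() == [0, 4, 8]        # every fourth element

# Unlike list slicing, an array slice is a view, not a copy:
view = x[1:-1]
view[0] = -1
print(x[1])  # the original array sees the change
```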
Array shape manipulation
a = np.linspace(-1, 1, 6)
print(a)
a.shape
a.size

# rows, columns
a.shape = (2, 3)
a = a.reshape(2, 3)  # alternative
a.shape
print(a)

# len(a) always returns the length of the first dimension of an array -> no. of rows
Exercise 1. Create a 10x10 2d array with 1 on the border and 0 inside
Z = np.ones((10, 10))
Z[1:-1, 1:-1] = 0
print(Z)
2. Create a structured array representing a position (x,y) and a color (r,g,b)
Z = np.zeros(10, [('position', [('x', float, 1),
                               ('y', float, 1)]),
                  ('color',    [('r', float, 1),
                                ('g', float, 1),
                                ('b', float, 1)])])
print(Z)
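Fields of a structured array are then addressed by name, and assignments broadcast across all records:

```python
import numpy as np

Z = np.zeros(10, [('position', [('x', float), ('y', float)]),
                  ('color',    [('r', float), ('g', float), ('b', float)])])

Z['position']['x'] = 1.0  # sets x for every record at once
Z['color']['r'] = 0.5
print(Z[0])
```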
3. Consider a large vector Z, compute Z to the power of 3 using 2 different methods
x = np.random.rand(int(5e7))  # rand needs an integer size
%timeit np.power(x, 3)
%timeit x*x*x
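Whichever method is faster on a given machine, the two must agree numerically; a quick check on a smaller vector:

```python
import numpy as np

x = np.random.rand(1000)
p1 = np.power(x, 3)  # ufunc call
p2 = x * x * x       # repeated multiplication
diff = abs(p1 - p2).max()
print(diff)
```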
Run the code below multiple times by repeatedly pressing Ctrl + Enter. After each run observe how the state has changed.
simulation.run(1)
simulation.show_beliefs()
TowDHistogramFilter/TowDHistogramFilter.ipynb
jingr1/SelfDrivingCar
mit
What do you think this call to run is doing? Look at the code in simulate.py to find out. Spend a few minutes looking at the run method and the methods it calls to get a sense for what's going on. What am I looking at? The red star shows the robot's true position. The blue circles indicate the strength of the robot's b...
# We will provide you with the function below to help you look
# at the raw numbers.

def show_rounded_beliefs(beliefs):
    for row in beliefs:
        for belief in row:
            print("{:0.3f}".format(belief), end="  ")
        print()

# The {:0.3f} notation is an example of "string
# formatting" in Pyt...
Part 2: Implement a 2D sense function. As you can see, the robot's beliefs aren't changing. No matter how many times we call the simulation's sense method, nothing happens. The beliefs remain uniform. Instructions Open localizer.py and complete the sense function. Run the code in the cell below to import the localizer...
from imp import reload
reload(localizer)

def test_sense():
    R = 'r'
    _ = 'g'
    simple_grid = [
        [_,_,_],
        [_,R,_],
        [_,_,_]
    ]
    p = 1.0 / 9
    initial_beliefs = [
        [p,p,p],
        [p,p,p],
        [p,p,p]
    ]
    observation = R
    expected_beliefs_after = [
        [1...
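A minimal sketch of the 2D sense update the exercise asks for (this is an assumed implementation, not the course's actual localizer.py): cells whose color matches the observation are weighted by p_hit, the rest by p_miss, and the grid is renormalized so the beliefs sum to 1.

```python
def sense_sketch(color, grid, beliefs, p_hit=3.0, p_miss=1.0):
    # Weight each belief by whether its cell matches the observation...
    new = [[b * (p_hit if cell == color else p_miss)
            for b, cell in zip(b_row, g_row)]
           for b_row, g_row in zip(beliefs, grid)]
    # ...then renormalize so the grid is a probability distribution again.
    total = sum(sum(row) for row in new)
    return [[b / total for b in row] for row in new]

grid = [['g', 'g'],
        ['g', 'r']]
p = 1.0 / 4
beliefs = sense_sketch('r', grid, [[p, p], [p, p]])
print(beliefs)
```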
Integration Testing. Before we call this "complete" we should perform an integration test: we've verified that the sense function works on its own, but does the localizer work overall? First, execute the code in the cell below to prepare the simulation environment.
from simulate import Simulation
import simulate as sim
import helpers

reload(localizer)
reload(sim)
reload(helpers)

R = 'r'
G = 'g'
grid = [
    [R,G,G,G,R,R,R],
    [G,G,R,G,R,G,R],
    [G,R,G,G,G,G,R],
    [R,R,G,R,G,G,G],
    [R,G,R,G,R,R,R],
    [G,R,R,R,G,R,G],
    [R,R,R,G,R,G,G],
]

# Use small value for blur. ...
Part 3: Identify and Reproduce a Bug Software has bugs. That's okay. A user of your robot called tech support with a complaint "So I was using your robot in a square room and everything was fine. Then I tried loading in a map for a rectangular room and it drove around for a couple seconds and then suddenly stopped wor...
from simulate import Simulation
import simulate as sim
import helpers

reload(localizer)
reload(sim)
reload(helpers)

R = 'r'
G = 'g'
grid = [
    [R,G,G,G,R,R,R],
    [G,G,R,G,R,G,R],
    [G,R,G,G,G,G,R],
    [R,R,G,R,G,G,G],
]
blur = 0.001
p_hit = 100.0
simulation = sim.Simulation(grid, blur, p_hit)

# remember, th...
Step 2: Read and Understand the error message If you triggered the bug, you should see an error message directly above this cell. The end of that message should say: IndexError: list index out of range And just above that you should see something like path/to/your/directory/localizer.pyc in move(dy, dx, beliefs, blurri...
# According to the user, sometimes the robot actually does run "for a while"
#
# - How can you change the code so the robot runs "for a while"?
# - How many times do you need to call simulation.run() to consistently
#   reproduce the bug?
#
# Modify the code below so that when the function is called
# it consistently rep...
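One hypothesis consistent with the error message (assumed here; the real cause lives in localizer.py) is a row index wrapped by the wrong dimension, which is harmless on square grids but fails on rectangular ones:

```python
# Assumed sketch of the suspected bug, not the actual move() from localizer.py:
# wrapping the row index by the number of COLUMNS works whenever the grid is
# square, but raises IndexError once (i + dy) % width reaches an invalid row.
def move_buggy(dy, dx, beliefs):
    height, width = len(beliefs), len(beliefs[0])
    new = [[0.0] * width for _ in range(height)]
    for i, row in enumerate(beliefs):
        for j, cell in enumerate(row):
            new_i = (i + dy) % width  # bug: should be % height
            new_j = (j + dx) % width
            new[new_i][new_j] = cell
    return new
```

On a 2x4 grid, calling `move_buggy(1, 0, ...)` maps row 1 to row 2, which does not exist.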
Step 4: Generate a Hypothesis. In order to make a guess about what's causing the problem, it will be helpful to use some Python debugging tools. The pdb module (Python debugger) will be helpful here! Setting up the debugger: open localizer.py and uncomment the line at the top that says import pdb. Just before the line of c...
test_robot_works_in_rectangle_world()
Using the debugger The debugger works by pausing program execution wherever you write pdb.set_trace() in your code. You also have access to any variables which are accessible from that point in your code. Try running your test again. This time, when the text entry box shows up, type new_i and hit enter. You will see t...
test_robot_works_in_rectangle_world()
B17001 Poverty Status by Sex by Age For the Poverty Status by Sex by Age we'll select the columns for male and female, below poverty, 65 and older. NOTE if you want to get seniors of a particular race, use table C17001a-g, condensed race iterations. The 'C' tables have fewer age ranges, but there is no 'C' table for a...
[e for e in b17001.columns if '65 to 74' in str(e) or '75 years' in str(e)]

# Now create a subset dataframe with just the columns we need.
b17001s = b17001[['geoid', 'B17001015', 'B17001016', 'B17001029', 'B17001030']]
b17001s.head()
examples/Pandas Reporter Example.ipynb
CivicKnowledge/metatab-py
bsd-3-clause
Senior poverty rates Creating the sums for the senior below poverty rates at the tract level is easy, but there is a serious problem with the results: the numbers are completely unstable. The minimum RSE is 22%, and the median is about 60%. These are useless results.
b17001_65mf = pr.CensusDataFrame()
b17001_65mf['geoid'] = b17001['geoid']
b17001_65mf['poverty_65'], b17001_65mf['poverty_65_m90'] = \
    b17001.sum_m('B17001015', 'B17001016', 'B17001029', 'B17001030')
b17001_65mf.add_rse('poverty_65')
b17001_65mf.poverty_65_rse.replace([np.inf, -np.inf], np.nan).dropna().describe()
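For context, the relative standard error follows from the ACS convention (assumed here) that the published 90% margin of error equals 1.645 standard errors:

```python
# RSE sketch: se = m90 / 1.645 (ACS 90% margin convention),
# rse = 100 * se / estimate, expressed as a percent.
def rse(estimate, m90):
    se = m90 / 1.645
    return 100.0 * se / estimate

print(round(rse(100.0, 82.25), 6))  # -> 50.0
```

An RSE of 60% means the standard error is more than half the estimate itself, which is why the text calls these results useless.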
Time windows
import tensorflow as tf
courses/udacity_intro_to_tensorflow_for_deep_learning/l08c04_time_windows.ipynb
tensorflow/examples
apache-2.0
Time Windows. First, we will train a model to forecast the next step given the previous 20 steps; therefore, we need to create a dataset of 20-step windows for training.
dataset = tf.data.Dataset.range(10)
for val in dataset:
    print(val.numpy())

dataset = tf.data.Dataset.range(10)
dataset = dataset.window(5, shift=1)
for window_dataset in dataset:
    for val in window_dataset:
        print(val.numpy(), end=" ")
    print()

dataset = tf.data.Dataset.range(10)
dataset = dataset.wi...
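The same windowing idea can be sketched without tf.data, using plain numpy (a stand-in to show what the pipeline produces, not the TF API itself):

```python
import numpy as np

# Slide a length-5 window with shift 1 over range(10), keep only
# complete windows, then split each into 4 inputs and 1 target.
series = np.arange(10)
windows = np.array([series[i:i + 5] for i in range(len(series) - 4)])
x_batch, y_batch = windows[:, :-1], windows[:, -1]
print(windows[0], x_batch[0], y_batch[0])
```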
I want to open each of the conf.py files and replace the name of the site with hythsc.lower. The dir /home/wcmckee/ccschol has all the school folders. Need to replace the Demo Name in conf.py with the folder name of the school. School names are missing characters - e.g. ardmore.
lisschol = os.listdir('/home/wcmckee/ccschol/')
findwat = ('LICENSE = """')

def replacetext(findtext, replacetext):
    for lisol in lisschol:
        filereaz = ('/home/wcmckee/ccschol/' + lisol + '/conf.py')  # lisol was originally an undefined name here
        f = open(filereaz, 'r')
        filedata = f.read()
        f.close()
        newdata = filedata.re...
posts/niktrans.ipynb
wcmckee/wcmckee.com
mit
Perform Nikola build of all the sites in ccschol folder
buildnik = input('Build school sites y/N ')

for lisol in lisschol:
    print(lisol)
    os.chdir('/home/wcmckee/ccschol/' + lisol)
    if 'y' in buildnik:
        os.system('nikola build')

makerst = open('/home/wcmckee/ccs')
for rs in rssch.keys():
    hythsc = (rs.replace(' ', '-'))
    hylow = hythsc.lower()
    ...
Let's build the construction table in order to bend one of the terephthalic acid ligands.
fragment = molecule.get_fragment([(12, 17), (55, 60)])
connection = np.array([[3, 99, 1, 12],
                       [17, 3, 99, 12],
                       [60, 3, 17, 12]])
connection = pd.DataFrame(connection[:, 1:], index=connection[:, 0], columns=['b', 'a', 'd'])
c_table = molecule.get_construction_table([(fragment, connection)])
molecule = molecule.loc[c_tab...
Tutorial/Gradients.ipynb
mcocdawc/chemcoord
lgpl-3.0
This gives the following movement:
zmolecule_symb = zmolecule.copy()
zmolecule_symb.safe_loc[3, 'angle'] += theta

cc.xyz_functions.view([zmolecule_symb.subs(theta, a).get_cartesian() for a in [-30, 0, 30]])
Gradient for Zmat to Cartesian. For the gradients it is very illustrative to compare $$ f(x + h) \approx f(x) + f'(x) h $$ where $f(x + h)$ will be zmolecule2 and $h$ will be dist_zmol. The boolean chain argument denotes whether the movement should be chained or not. Bond
dist_zmol1 = zmolecule.copy()

r = 3

dist_zmol1.unsafe_loc[:, ['bond', 'angle', 'dihedral']] = 0
dist_zmol1.unsafe_loc[3, 'bond'] = r

cc.xyz_functions.view([molecule,
                       molecule + zmolecule.get_grad_cartesian(chain=False)(dist_zmol1),
                       molecule + zmolecule.get_grad_cartesian...
Angle
angle = 30

dist_zmol2 = zmolecule.copy()
dist_zmol2.unsafe_loc[:, ['bond', 'angle', 'dihedral']] = 0
dist_zmol2.unsafe_loc[3, 'angle'] = angle

cc.xyz_functions.view([molecule,
                       molecule + zmolecule.get_grad_cartesian(chain=False)(dist_zmol2),
                       molecule + zmolecule.get_grad_...
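The quality of such a linearisation can be illustrated numerically with a scalar function (a generic numpy demo, unrelated to the chemcoord API):

```python
import numpy as np

# Compare f(x + h) with the linearisation f(x) + h f'(x) for f = sin at x = 1:
# the error shrinks roughly quadratically as h decreases.
x0 = 1.0
errors = []
for h in [0.5, 0.05, 0.005]:
    exact = np.sin(x0 + h)
    linear = np.sin(x0) + h * np.cos(x0)
    errors.append(abs(exact - linear))
print(errors)
```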
Note that the deviation between $f(x + h)$ and $f(x) + h f'(x)$ is not an error in the implementation but a visualisation of the small-angle approximation: the smaller the angle, the better the linearisation. Gradient for Cartesian to Zmat
x_dist = 2

dist_mol = molecule.copy()
dist_mol.loc[:, ['x', 'y', 'z']] = 0.
dist_mol.loc[13, 'x'] = x_dist

zmat_dist = molecule.get_grad_zmat(c_table)(dist_mol)
It is immediately obvious that only the ['bond', 'angle', 'dihedral'] values of those atoms change which are either moved themselves in cartesian space or use moved references.
zmat_dist[(zmat_dist.loc[:, ['bond', 'angle', 'dihedral']] != 0).any(axis=1)]
Tutorial/Gradients.ipynb
mcocdawc/chemcoord
lgpl-3.0
2D trajectory interpolation The file trajectory.npz contains 3 Numpy arrays that describe a 2d trajectory of a particle as a function of time: t which has discrete values of time t[i]. x which has values of the x position at those times: x[i] = x(t[i]). y which has values of the y position at those times: y[i] = y(t[i...
with np.load('trajectory.npz') as data: t = data['t'] x = data['x'] y = data['y'] print(x) assert isinstance(x, np.ndarray) and len(x)==40 assert isinstance(y, np.ndarray) and len(y)==40 assert isinstance(t, np.ndarray) and len(t)==40
assignments/assignment08/InterpolationEx01.ipynb
LimeeZ/phys292-2015-work
mit
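The np.load context-manager pattern above can be exercised end-to-end with a throwaway archive; the file location and array contents here are made up for illustration:

```python
import os
import tempfile
import numpy as np

# Round-trip a small synthetic trajectory through an .npz archive,
# mirroring the context-manager load pattern used above.
t = np.linspace(0, 1, 40)
path = os.path.join(tempfile.mkdtemp(), 'trajectory.npz')
np.savez(path, t=t, x=np.cos(t), y=np.sin(t))

with np.load(path) as data:
    print(sorted(data.files))  # ['t', 'x', 'y']
```

Using np.load as a context manager ensures the underlying zip file is closed once the arrays have been read.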
Use these arrays to create interpolated functions $x(t)$ and $y(t)$. Then use those functions to create the following arrays: newt which has 200 points between ${t_{min},t_{max}}$. newx which has the interpolated values of $x(t)$ at those times. newy which has the interpolated values of $y(t)$ at those times.
newt = np.linspace(t.min(), t.max(), 200) approx_x = interp1d(t, x, kind='cubic') newx = approx_x(newt) approx_y = interp1d(t, y, kind='cubic') newy = approx_y(newt) assert newt[0]==t.min() assert newt[-1]==t.max() assert len(newt)==200 assert le...
assignments/assignment08/InterpolationEx01.ipynb
LimeeZ/phys292-2015-work
mit
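A common pitfall here is the interp1d call signature: the independent variable (time) comes first, then the sampled values, and the result is a callable. A self-contained sketch with synthetic data, assuming SciPy is available:

```python
import numpy as np
from scipy.interpolate import interp1d

# Synthetic stand-in for the trajectory data: interp1d(t, x) takes
# time first, then the samples, and returns a callable interpolant.
t = np.linspace(0, 2 * np.pi, 40)
x = np.cos(t)

fx = interp1d(t, x, kind='cubic')
newt = np.linspace(t.min(), t.max(), 200)
newx = fx(newt)  # interpolated values on the finer grid
```

The cubic interpolant passes exactly through the original sample points, so fx(t) reproduces x.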
Make a parametric plot of ${x(t),y(t)}$ that shows the interpolated values and the original points: For the interpolated points, use a solid line. For the original points, use circles of a different color and no line. Customize your plot to make it effective and beautiful.
plt.plot(newx, newy, linestyle='-', label='interpolated') plt.plot(x, y, marker='o', linestyle='', color='red', label='original data') plt.legend() plt.xlabel('x(t)') plt.ylabel('y(t)') assert True # leave this to grade the trajectory plot
assignments/assignment08/InterpolationEx01.ipynb
LimeeZ/phys292-2015-work
mit
Pandas <img src="pandas_logo.png" width="50%" /> Pandas is a library that provides data analysis tools for the Python programming language. You can think of it as Excel on steroids, but in Python. To start off, I've used the meetup API to gather a bunch of data on members of the DataPhilly meetup group. First let's sta...
events_df = pd.read_pickle('events.pkl') events_df = events_df.sort_values(by='time') events_df
DataPhilly_Analysis.ipynb
mdbecker/daa_philly_2015
mit
You can access values in a DataFrame column like this:
events_df['yes_rsvp_count']
DataPhilly_Analysis.ipynb
mdbecker/daa_philly_2015
mit
You can access a row of a DataFrame using iloc:
events_df.iloc[4]
DataPhilly_Analysis.ipynb
mdbecker/daa_philly_2015
mit
We can view the first few rows using the head method:
events_df.head()
DataPhilly_Analysis.ipynb
mdbecker/daa_philly_2015
mit
And similarly the last few using tail:
events_df.tail(3)
DataPhilly_Analysis.ipynb
mdbecker/daa_philly_2015
mit
We can see that the yes_rsvp_count contains the number of people who RSVPed yes for each event. First let's look at some basic statistics:
yes_rsvp_count = events_df['yes_rsvp_count'] yes_rsvp_count.sum(), yes_rsvp_count.mean(), yes_rsvp_count.min(), yes_rsvp_count.max()
DataPhilly_Analysis.ipynb
mdbecker/daa_philly_2015
mit
When we access a single column of the DataFrame like this we get a Series object which is just a 1-dimensional version of a DataFrame.
type(yes_rsvp_count)
DataPhilly_Analysis.ipynb
mdbecker/daa_philly_2015
mit
We can use the built-in describe method to print out a lot of useful stats in a nice tabular format:
yes_rsvp_count.describe()
DataPhilly_Analysis.ipynb
mdbecker/daa_philly_2015
mit
Next I'd like to graph the number of RSVPs over time to see if there are any interesting trends. To do this let's first sum the waitlist_count and yes_rsvp_count columns and make a new column called total_RSVP_count.
events_df['total_RSVP_count'] = events_df['waitlist_count'] + events_df['yes_rsvp_count'] events_df['total_RSVP_count']
DataPhilly_Analysis.ipynb
mdbecker/daa_philly_2015
mit
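Column arithmetic like this is vectorized, aligning the two columns row by row. A minimal standalone sketch with made-up RSVP numbers:

```python
import pandas as pd

# Toy stand-in for events_df with made-up counts
df = pd.DataFrame({'yes_rsvp_count': [20, 35], 'waitlist_count': [5, 0]})
df['total_RSVP_count'] = df['yes_rsvp_count'] + df['waitlist_count']
print(df['total_RSVP_count'].tolist())  # [25, 35]
```

Assigning to a column name that does not yet exist creates the new column in place.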
We can plot these values using the plot method
events_df['total_RSVP_count'].plot()
DataPhilly_Analysis.ipynb
mdbecker/daa_philly_2015
mit
The plot method utilizes the matplotlib library behind the scenes to draw the plot. This is interesting, but it would be nice to have the dates of the meetups on the X-axis of the plot. To accomplish this, let's convert the time field from a unix epoch timestamp to a python datetime utilizing the apply method and a fun...
events_df.head(2) import datetime def get_datetime_from_epoch(epoch): return datetime.datetime.fromtimestamp(epoch/1000.0) events_df['time'] = events_df['time'].apply(get_datetime_from_epoch) events_df['time']
DataPhilly_Analysis.ipynb
mdbecker/daa_philly_2015
mit
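An alternative to apply here is pandas' vectorized pd.to_datetime, which accepts the unit directly (the timestamps are in milliseconds, hence the division by 1000.0 above). Note that unlike datetime.fromtimestamp it produces UTC-naive timestamps rather than local time; the timestamp below is made up for illustration:

```python
import pandas as pd

# A made-up millisecond epoch timestamp, like those from the Meetup API
times = pd.Series([1357063200000])
converted = pd.to_datetime(times, unit='ms')  # vectorized, UTC-naive
print(converted.iloc[0])
```

For large Series this avoids the per-row Python-function overhead of apply.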
Next let's make the time column the index of the DataFrame using the set_index method and then re-plot our data.
events_df.set_index('time', inplace=True) events_df[['total_RSVP_count']].plot()
DataPhilly_Analysis.ipynb
mdbecker/daa_philly_2015
mit
We can also easily plot multiple columns on the same plot.
all_rsvps = events_df[['yes_rsvp_count', 'waitlist_count', 'total_RSVP_count']] all_rsvps.plot(title='Attendance over time')
DataPhilly_Analysis.ipynb
mdbecker/daa_philly_2015
mit
DataPhilly members dataset Alright so I'm seeing some interesting trends here. Let's take a look at something different. The Meetup API also provides us access to member info. Let's have a look at the data we have available:
members_df = pd.read_pickle('members.pkl') for column in ['joined', 'visited']: members_df[column] = members_df[column].apply(get_datetime_from_epoch) members_df.head(3)
DataPhilly_Analysis.ipynb
mdbecker/daa_philly_2015
mit
You'll notice that I've anonymized the meetup member_id and the member's name. I've also used the python module SexMachine to infer members gender based on their first name. I ran SexMachine on the original names before I anonymized them. Let's have a closer look at the gender breakdown of our members:
gender_counts = members_df['gender'].value_counts() gender_counts
DataPhilly_Analysis.ipynb
mdbecker/daa_philly_2015
mit
Next let's use the hist method to plot a histogram of membership_count. This is the number of groups each member is in.
members_df['membership_count'].hist(bins=20)
DataPhilly_Analysis.ipynb
mdbecker/daa_philly_2015
mit
Something looks odd here let's check out the value_counts:
members_df['membership_count'].value_counts().head()
DataPhilly_Analysis.ipynb
mdbecker/daa_philly_2015
mit
Okay so most members are members of 0 meetup groups?! This seems odd! I did a little digging and came up with the answer: members can set their membership details to be private, and then this value will be zero. Let's filter out these members and recreate the histogram.
members_df_non_zero = members_df[members_df['membership_count'] != 0] members_df_non_zero['membership_count'].hist(bins=50)
DataPhilly_Analysis.ipynb
mdbecker/daa_philly_2015
mit
Okay so most members are only members of a few meetup groups. There are some outliers that are pretty hard to read; let's try plotting this on a logarithmic scale to see if that helps:
ax = members_df_non_zero['membership_count'].hist(bins=50) ax.set_yscale('log') ax.set_xlim(0, 500)
DataPhilly_Analysis.ipynb
mdbecker/daa_philly_2015
mit
Let's use a mask to filter out the outliers so we can dig into them a little further:
all_the_meetups = members_df[members_df['membership_count'] > 100] filtered = all_the_meetups[['membership_count', 'city', 'country', 'state']] filtered.sort_values(by='membership_count', ascending=False)
DataPhilly_Analysis.ipynb
mdbecker/daa_philly_2015
mit
The people from Philly might actually be legitimate members; let's use a compound mask to filter them out as well:
all_the_meetups = members_df[ (members_df['membership_count'] > 100) & (members_df['city'] != 'Philadelphia') ] filtered = all_the_meetups[['membership_count', 'city', 'country', 'state']] filtered.sort_values(by='membership_count', ascending=False)
DataPhilly_Analysis.ipynb
mdbecker/daa_philly_2015
mit
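The parentheses around each condition matter: & binds more tightly than the comparison operators in Python, so an unparenthesized version raises an error. A toy illustration with made-up members:

```python
import pandas as pd

# Each condition must be wrapped in parentheses before combining with &
df = pd.DataFrame({'membership_count': [5, 150, 300],
                   'city': ['Philadelphia', 'Philadelphia', 'Berlin']})
mask = (df['membership_count'] > 100) & (df['city'] != 'Philadelphia')
print(df[mask]['city'].tolist())  # ['Berlin']
```

The same pattern works with | for "or" and ~ for negation.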
That's strange; I don't think we've ever had any members from Berlin, San Francisco, or Jerusalem in attendance :-). The RSVP dataset Moving on, we also have all the events that each member RSVPed to:
rsvps_df = pd.read_pickle('rsvps.pkl') rsvps_df.head(3)
DataPhilly_Analysis.ipynb
mdbecker/daa_philly_2015
mit