To double-check that the merge happened without incident, we can check that every row has a State with this line:
len(weather[weather.State.isnull()])
dev_nbs/course/rossman_data_clean.ipynb
fastai/fastai
apache-2.0
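The same sanity check applies to any left join in pandas. A minimal, self-contained sketch (the tiny frames below are made up for illustration; pandas' `indicator` flag is an alternative to counting nulls):

```python
import pandas as pd

# Hypothetical stand-ins for the weather and state-name tables
weather = pd.DataFrame({'file': ['a', 'b', 'c'], 'Temp': [1, 2, 3]})
states = pd.DataFrame({'file': ['a', 'b'], 'State': ['BW', 'BY']})

merged = weather.merge(states, on='file', how='left')

# Rows that found no match come back with NaN in the joined columns
n_missing = len(merged[merged.State.isnull()])

# pandas can also flag unmatched rows directly
flagged = weather.merge(states, on='file', how='left', indicator=True)
n_unmatched = (flagged['_merge'] == 'left_only').sum()
print(n_missing, n_unmatched)  # row 'c' has no state entry
```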
We can now safely remove the columns with the state names (file and StateName) since we'll use the short codes instead.
weather.drop(columns=['file', 'StateName'], inplace=True)
To add the weather information to our store table, we first use the table store_states to match each store code with the corresponding state, then we merge with our weather table.
store = join_df(store, store_states, 'Store')
store = join_df(store, weather, 'State')
And again, we can check that the merge went well by looking at whether new NaNs were introduced.
len(store[store.Mean_TemperatureC.isnull()])
Next, we want to join the googletrend table to this store table. If you remember from our previous look at it, it's not exactly in the same format:
googletrend.head()
We will need to change the column with the states and the column with the dates:
- in the column file, the state names contain Rossmann_DE_XX with XX being the code of the state, so we want to remove Rossmann_DE. We will do this by creating a new column containing the last part of a split of the string by '_'.
- in the...
googletrend['Date'] = googletrend.week.str.split(' - ', expand=True)[0]
googletrend['State'] = googletrend.file.str.split('_', expand=True)[2]
googletrend.head()
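To see what `str.split(..., expand=True)` is doing here, a small sketch on made-up rows in the same layout (note how a row for the global Rossmann_DE trend gets None in column 2, since its split has only two parts):

```python
import pandas as pd

# Toy frame mimicking the googletrend layout (values are made up)
df = pd.DataFrame({
    'file': ['Rossmann_DE_SN', 'Rossmann_DE_BE', 'Rossmann_DE'],
    'week': ['2015-08-02 - 2015-08-08', '2015-08-09 - 2015-08-15',
             '2015-08-02 - 2015-08-08'],
})

# expand=True turns each split part into its own column
df['Date'] = df.week.str.split(' - ', expand=True)[0]
df['State'] = df.file.str.split('_', expand=True)[2]
print(df[['Date', 'State']])
```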
Let's check that everything went well by looking at the values in the new State column of our googletrend table.
store['State'].unique(), googletrend['State'].unique()
We have two additional values in the second (None and 'SL') but this isn't a problem since they'll be ignored when we join. One problem however is that 'HB,NI' in the first table is named 'NI' in the second one, so we need to change that.
googletrend.loc[googletrend.State=='NI', "State"] = 'HB,NI'
Why do we have a None in State? As we said before, there is a global trend for Germany that corresponds to Rossmann_DE in the field file. For those rows, the previous split failed, which gave the None value. We will keep this global trend and put it in a new column.
trend_de = googletrend[googletrend.file == 'Rossmann_DE'][['Date', 'trend']]
Then we can merge it with the rest of our trends, adding the suffix '_DE' so we know it's the general trend.
googletrend = join_df(googletrend, trend_de, 'Date', suffix='_DE')
At this stage, we can remove the columns file and week since they won't be useful anymore, as well as the rows where State is None (since they correspond to the global trend that we saved in another column).
googletrend.drop(columns=['file', 'week'], inplace=True)
googletrend = googletrend[~googletrend['State'].isnull()]
The last thing missing before we can join this with our store table is to extract the week from the date in this table and in the store table: we need to join them on week values since each trend is given for the full week that starts on the indicated date. This is linked to the next topic in feature engineering: extrac...
googletrend.head()
If we add the date parts, we will gain a lot more information to work with:
googletrend = add_datepart(googletrend, 'Date', drop=False)
googletrend.head().T
We chose the option drop=False as we want to keep the Date column for now. Another option is to add the time part of the date, but it's not relevant to our problem here. Now we can join our Google trends with the information in the store table: it's just a join on ['Week', 'Year'] once we apply add_datepart to that t...
googletrend = googletrend[['trend', 'State', 'trend_DE', 'Week', 'Year']]
store = add_datepart(store, 'Date', drop=False)
store = join_df(store, googletrend, ['Week', 'Year', 'State'])
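add_datepart is a fastai helper; the two join keys it contributes here can be sketched in plain pandas (made-up dates, ISO week numbering, which may differ from add_datepart for edge weeks around the new year):

```python
import pandas as pd

# Sketch of building the Week/Year join keys without fastai
df = pd.DataFrame({'Date': pd.to_datetime(['2015-08-02', '2015-08-09'])})
df['Year'] = df.Date.dt.year
df['Week'] = df.Date.dt.isocalendar().week.astype(int)
print(df[['Year', 'Week']])
```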
At this stage, store contains all the information about the stores, the weather on that day and the Google trends applicable. We only have to join it with our training and test table. We have to use make_date before being able to execute that merge, to convert the Date column of train and test to proper date format.
make_date(train, 'Date')
make_date(test, 'Date')
train_fe = join_df(train, store, ['Store', 'Date'])
test_fe = join_df(test, store, ['Store', 'Date'])
Elapsed times

Another feature that can be useful is the elapsed time before/after a certain event occurs, for instance the number of days since the last promotion or before the next school holiday. As for the date parts, there is a fastai convenience function that will automatically add them. One thing to take into a...
all_ftrs = train_fe.append(test_fe, sort=False)
We will consider the elapsed times for three events: 'Promo', 'StateHoliday' and 'SchoolHoliday'. Note that those must correspond to booleans in your dataframe. 'Promo' and 'SchoolHoliday' already are (only 0s and 1s) but 'StateHoliday' has multiple values.
all_ftrs['StateHoliday'].unique()
If we refer to the explanation on Kaggle, 'b' is for Easter, 'c' for Christmas and 'a' for the other holidays. We will just convert this into a boolean that flags any holiday.
all_ftrs.StateHoliday = all_ftrs.StateHoliday!='0'
Now we can add, for each store, the number of days since or until the next promotion, state or school holiday. This will take a little while since the whole table is big.
all_ftrs = add_elapsed_times(all_ftrs, ['Promo', 'StateHoliday', 'SchoolHoliday'], date_field='Date', base_field='Store')
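add_elapsed_times is a fastai convenience function; the "days since the last event" half of what it computes can be sketched in plain pandas (toy data, with a hypothetical column name AfterPromo):

```python
import pandas as pd

# Toy frame: one store, four days, promos on the first and last day
df = pd.DataFrame({
    'Store': [1, 1, 1, 1],
    'Date': pd.to_datetime(['2015-01-01', '2015-01-02',
                            '2015-01-03', '2015-01-04']),
    'Promo': [1, 0, 0, 1],
})

# Within each store, carry forward the date of the most recent promo day,
# then measure the gap to the current row's date
last_promo = df.Date.where(df.Promo == 1).groupby(df.Store).ffill()
df['AfterPromo'] = (df.Date - last_promo).dt.days
print(df['AfterPromo'].tolist())  # [0, 1, 2, 0]
```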
It added four new features per event. If we look at 'StateHoliday' for instance:
[c for c in all_ftrs.columns if 'StateHoliday' in c]
The column 'AfterStateHoliday' contains the number of days since the last state holiday, 'BeforeStateHoliday' the number of days until the next one. As for 'StateHoliday_bw' and 'StateHoliday_fw', they contain the number of state holidays in the past or future seven days respectively. The same four columns have been ad...
train_df = all_ftrs.iloc[:len(train_fe)]
test_df = all_ftrs.iloc[len(train_fe):]
One last thing the authors of this winning solution did was to remove the rows with no sales, which correspond to exceptional closures of the stores. This might not have been a good idea: even if we don't have access to the same features in the test data, it can explain why we have some spikes in the training data...
train_df = train_df[train_df.Sales != 0.]
We will use those for training, but since all those steps took a bit of time, it's a good idea to save our progress so far. We will just pickle those tables to the hard drive.
train_df.to_pickle(path/'train_clean')
test_df.to_pickle(path/'test_clean')
First we'll create an instance of RadarServer to point to the appropriate radar server access URL.
# The S3 URL did not work for me, despite .edu domain
#url = 'http://thredds-aws.unidata.ucar.edu/thredds/radarServer/nexrad/level2/S3/'
# Trying motherlode URL
url = 'http://thredds.ucar.edu/thredds/radarServer/nexrad/level2/IDD/'
from siphon.radarserver import RadarServer
rs = RadarServer(url)
NEXRAD/THREDDS_Radar_Server.ipynb
rsignell-usgs/notebook
mit
Next, we'll create a new query object to help request the data. Using the chaining methods, let's ask for the latest data at the radar KLVX (Louisville, KY). We see that when the query is represented as a string, it shows the encoded URL.
from datetime import datetime, timedelta
query = rs.query()
query.stations('KLVX').time(datetime.utcnow())
We can use the RadarServer instance to check our query, to make sure we have the required parameters and that we have chosen valid station(s) and variable(s).
rs.validate_query(query)
Make the request, which returns an instance of TDSCatalog; this handles parsing the returned XML information.
catalog = rs.get_catalog(query)
We can look at the datasets on the catalog to see what data we found by the query. We find one volume in the return, since we asked for the volume nearest to a single time.
catalog.datasets
We can pull that dataset out of the dictionary and look at the available access URLs. We see URLs for OPeNDAP, CDMRemote, and HTTPServer (direct download).
ds = list(catalog.datasets.values())[0]
ds.access_urls
We'll use the CDMRemote reader in Siphon and pass it the appropriate access URL.
from siphon.cdmr import Dataset
data = Dataset(ds.access_urls['CdmRemote'])
We define some helper functions to make working with the data easier. One takes the raw data and converts it to floating point values with the missing data points appropriately marked. The other helps with converting the polar coordinates (azimuth and range) to Cartesian (x and y).
import numpy as np

def raw_to_masked_float(var, data):
    # Values come back signed. If the _Unsigned attribute is set, we need to convert
    # from the range [-127, 128] to [0, 255].
    if var._Unsigned:
        data = data & 255

    # Mask missing points
    data = np.ma.array(data, mask=data==0)

    # Convert t...
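The polar_to_cartesian helper is truncated above; the usual conversion, shown here as a sketch rather than the notebook's exact implementation, accounts for radar azimuth being measured clockwise from north:

```python
import numpy as np

def polar_to_cartesian(az, rng):
    # Azimuth is clockwise from north, so x uses sin and y uses cos
    # (the reverse of the usual math convention)
    az_rad = np.deg2rad(az)[:, None]
    x = rng[None, :] * np.sin(az_rad)
    y = rng[None, :] * np.cos(az_rad)
    return x, y

# Two rays (north and east) at ranges 0 m and 1000 m
x, y = polar_to_cartesian(np.array([0.0, 90.0]), np.array([0.0, 1000.0]))
print(x.shape)  # one row per azimuth, one column per range gate
```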
The CDMRemote reader provides an interface that is almost identical to the usual python NetCDF interface. We pull out the variables we need for azimuth and range, as well as the data itself.
sweep = 0
ref_var = data.variables['Reflectivity_HI']
ref_data = ref_var[sweep]
rng = data.variables['distanceR_HI'][:]
az = data.variables['azimuthR_HI'][sweep]
Then convert the raw data to floating point values and the polar coordinates to Cartesian.
ref = raw_to_masked_float(ref_var, ref_data)
x, y = polar_to_cartesian(az, rng)
MetPy is a Python package for meteorology (Documentation: http://metpy.readthedocs.org and GitHub: http://github.com/MetPy/MetPy). We import MetPy and use it to get the colortable and value mapping information for the NWS Reflectivity data.
from metpy.plots import ctables  # For NWS colortable
ref_norm, ref_cmap = ctables.registry.get_with_steps('NWSReflectivity', 5, 5)
Finally, we plot them up using matplotlib and cartopy. We create a helper function for making a map to keep things simpler later.
import matplotlib.pyplot as plt
import cartopy

def new_map(fig, lon, lat):
    # Create projection centered on the radar. This allows us to use x
    # and y relative to the radar.
    proj = cartopy.crs.LambertConformal(central_longitude=lon, central_latitude=lat)

    # New axes with the specified projection
    ax ...
Download a collection of historical data

This time we'll make a query based on a longitude-latitude point and a time range.
query = rs.query()
#dt = datetime(2012, 10, 29, 15)  # Our specified time
dt = datetime(2016, 6, 8, 18)  # Our specified time
query.lonlat_point(-73.687, 41.175).time_range(dt, dt + timedelta(hours=1))
The specified longitude, latitude are in NY and the TDS helpfully finds the closest station to that point. We can see that for this time range we obtained multiple datasets.
cat = rs.get_catalog(query)
cat.datasets
Grab the first dataset so that we can get the longitude and latitude of the station and make a map for plotting. We'll go ahead and specify some longitude and latitude bounds for the map.
ds = list(cat.datasets.values())[0]
data = Dataset(ds.access_urls['CdmRemote'])

# Pull out the data of interest
sweep = 0
rng = data.variables['distanceR_HI'][:]
az = data.variables['azimuthR_HI'][sweep]
ref_var = data.variables['Reflectivity_HI']

# Convert data to float and coordinates to Cartesian
ref = raw_to_maske...
Use the function to make a new map and plot a colormapped view of the data.
fig = plt.figure(figsize=(10, 10))
ax = new_map(fig, data.StationLongitude, data.StationLatitude)

# Set limits in lat/lon space
ax.set_extent([-77, -70, 38, 42])

# Add ocean and land background
ocean = cartopy.feature.NaturalEarthFeature('physical', 'ocean', scale='50m', ed...
Now we can loop over the collection of returned datasets and plot them. As we plot, we collect the returned plot objects so that we can use them to make an animated plot. We also add a timestamp for each plot.
meshes = []
for item in sorted(cat.datasets.items()):
    # After looping over the list of sorted datasets, pull the actual Dataset object out
    # of our list of items and access over CDMRemote
    ds = item[1]
    data = Dataset(ds.access_urls['CdmRemote'])

    # Pull out the data of interest
    sweep = 0
    rng ...
Using matplotlib, we can take a collection of Artists that have been plotted and turn them into an animation. With matplotlib 1.5 (1.5-rc2 is available now!), this animation can be converted to HTML5 video viewable in the notebook.
# Set up matplotlib to do the conversion to HTML5 video
import matplotlib
matplotlib.rcParams['animation.html'] = 'html5'

# Create an animation
from matplotlib.animation import ArtistAnimation
ArtistAnimation(fig, meshes)
Utility method for display
def display_with_overlay(
    segmentation_number, slice_number, image, segs, window_min, window_max
):
    """
    Display a CT slice with segmented contours overlaid onto it. The contours are
    the edges of the labeled regions.
    """
    img = image[:, :, slice_number]
    msk = segs[segmentation_number][:, :, sl...
Python/34_Segmentation_Evaluation.ipynb
InsightSoftwareConsortium/SimpleITK-Notebooks
apache-2.0
Fetch the data

Retrieve a single CT scan and three manual delineations of a liver tumor. Visual inspection of the data highlights the variability between experts.
image = sitk.ReadImage(fdata("liverTumorSegmentations/Patient01Homo.mha"))
segmentation_file_names = [
    "liverTumorSegmentations/Patient01Homo_Rad01.mha",
    "liverTumorSegmentations/Patient01Homo_Rad02.mha",
    "liverTumorSegmentations/Patient01Homo_Rad03.mha",
]
segmentations = [
    sitk.ReadImage(fdata(file_n...
Derive a reference

There are a variety of ways to derive a reference segmentation from multiple expert inputs. Several options (there are more) are described in "A comparison of ground truth estimation methods", A. M. Biancardi, A. C. Jirapatnakul, A. P. Reeves. Two methods that are available in SimpleITK are <b>major...
# Use majority voting to obtain the reference segmentation. Note that this filter does not resolve ties.
# In case of ties, it will assign max_label_value+1 or a user specified label value (labelForUndecidedPixels)
# to the result. Before using the results of this filter you will have to check whether there were ties a...
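SimpleITK implements this as a filter; the underlying idea of majority voting over binary expert masks can be sketched with NumPy (toy 4-pixel masks, strict majority, no tie-breaking):

```python
import numpy as np

# Three hypothetical binary segmentations of the same 4-pixel image
segs = np.array([
    [0, 1, 1, 1],
    [0, 0, 1, 1],
    [0, 1, 0, 1],
])

# A pixel is foreground in the reference if a strict majority of
# experts marked it; ties would need a separate tie-breaking rule
votes = segs.sum(axis=0)
reference = (votes > len(segs) / 2).astype(int)
print(reference)
```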
Evaluate segmentations using the reference

Once we derive a reference from our experts' input we can compare segmentation results to it. Note that in this notebook we compare the expert segmentations to the reference derived from them. This is not relevant for algorithm evaluation, but it can potentially be used to rank...
from enum import Enum

# Use enumerations to represent the various evaluation measures
class OverlapMeasures(Enum):
    jaccard, dice, volume_similarity, false_negative, false_positive = range(5)

class SurfaceDistanceMeasures(Enum):
    (
        hausdorff_distance,
        mean_surface_distance,
        median_surfa...
Improved output

If the pandas package is installed in your Python environment, you can easily produce high-quality output.
import pandas as pd
from IPython.display import display, HTML

# Graft our results matrix into pandas data frames
overlap_results_df = pd.DataFrame(
    data=overlap_results,
    index=list(range(len(segmentations))),
    columns=[name for name, _ in OverlapMeasures.__members__.items()],
)
surface_distance_results_df =...
You can also export the data as a table for your LaTeX manuscript using the to_latex function. <b>Note</b>: You will need to add \usepackage{booktabs} to your LaTeX document's preamble. To create the minimal LaTeX document which will let you see the difference between the tables below, copy-paste: \documentcl...
# The formatting of the table using the default settings is less than ideal
print(overlap_results_df.to_latex())

# We can improve on this by specifying the table's column format and the float format
print(
    overlap_results_df.to_latex(
        column_format="ccccccc", float_format=lambda x: "%.3f" % x
    )
)
Segmentation Representation and the Hausdorff Distance

The results of segmentation can be represented as a set of closed contours/surfaces or as the discrete set of points (pixels/voxels) belonging to the segmented objects. Ideally, using either representation would yield the same values for the segmentation evaluation ...
# Create our segmentations and display
image_size = [64, 64]
circle_center = [30, 30]
circle_radius = [20, 20]

# A filled circle with radius R
seg = (
    sitk.GaussianSource(sitk.sitkUInt8, image_size, circle_radius, circle_center) > 200
)

# A torus with inner radius r
reference_segmentation1 = seg - (
    sitk.Gauss...
3.10.1 Updating the metadata after the tree has been built

Often, we want to display information that did not exist in the Project when we first built our trees. This is not an issue: we can add metadata now and propagate it to all the parts of the Project, including to our preexisting trees. For example, I add here s...
genera_with_porocalices = ['Cinachyrella', 'Cinachyra', 'Amphitethya', 'Fangophilina', 'Acanthotetilla', 'Paratetilla']
notebooks/Tutorials/Basic/3.10 Tree annotation and report.ipynb
szitenberg/ReproPhyloVagrant
mit
while others do not:
genera_without_porocalices = ['Craniella', 'Tetilla', 'Astrophorida']
The following command will add the value 'present' to a new qualifier called 'porocalices' in sequence features of species that belong to genera_with_porocalices:
for genus in genera_with_porocalices:
    pj.if_this_then_that(genus, 'genus', 'present', 'porocalices')
and the following command will add the value 'absent' to the new qualifier 'porocalices' in sequence features of species that belong to genera_without_porocalices:
for genus in genera_without_porocalices:
    pj.if_this_then_that(genus, 'genus', 'absent', 'porocalices')
The new qualifier porocalices is now updated in the SeqRecord objects within the pj.records list (more on this in section 3.4). But in order for it to exist in the Tree objects, stored in the pj.trees dictionary, we have to run this command:
pj.propagate_metadata()
Only now is the new qualifier available for tree annotation. Note that qualifiers that existed in the Project when we built the trees will be included in the Tree objects by default.

3.10.2 Configuring and writing a tree figure

The annotate Project method will produce one figure for each tree in the Project according ...
bg_colors = {'present': 'red', 'absent': 'white'}
supports = {'black': [100, 99], 'gray': [99, 80]}
pj.annotate('./images/',              # Path to write figs to
            'genus', 'Astrophorida',  # Set OTUs that have 'Astrophorida'
                                      # in t...
In the resulting figure (below), clades of species with porocalices have a red background, nodes with maximal relBootstrap support have black bullets, and nodes with branch support > 80 have gray bullets.
from IPython.display import Image
Image('./images/example1.png', width=300)
3.10.2.2 Example 2, the metadata as a heatmap

The second example introduces midpoint rooting and a heatmap. There are three columns in this heatmap, representing numerical values of three qualifiers. In this instance, the values are 0 or 1 for presence and absence. In addition, we change the branch colour to black and ...
bg_colors = {'Cinachyrella': 'gray',
             'Cinachyra': 'silver',
             'Amphitethya': 'white',
             'Fangophilina': 'white',
             'Acanthotetilla': 'silver',
             'Paratetilla': 'white',
             'Craniella': 'gray',
             'Tetilla': 'silver',
             'Astrophorida'...
And this is what it looks like:
from IPython.display import Image
Image('./images/example2.png', width=300)
3.10.3 Archive the analysis as a zip file

The publish function will produce an html human-readable report containing a description of the data, alignments, trees, and the methods that created them in various ways. The following options control this function: folder_name: zip file name or directory name for the repor...
publish(pj, 'my_report', './images/', size='large')
pickle_pj(pj, 'outputs/my_project.pkpj')
Calculate basic street network measures (topological and metric)
# get the network for Piedmont, calculate its basic stats, then show the average circuity
stats = ox.basic_stats(ox.graph_from_place('Piedmont, California, USA'))
stats['circuity_avg']
ornek/osmnx/osmnx-0.3/examples/06-example-osmnx-networkx.ipynb
kerimlcr/ab2017-dpyo
gpl-3.0
To calculate density-based metrics, you must also pass the network's bounding area in square meters (otherwise basic_stats() will just skip them in the calculation):
# get the street network for a place, and its area in square meters
place = 'Piedmont, California, USA'
gdf = ox.gdf_from_place(place)
area = ox.project_gdf(gdf).unary_union.area
G = ox.graph_from_place(place, network_type='drive_service')

# calculate basic and extended network stats, merge them together, and display ...
Streets/intersection counts and proportions are nested dicts inside the stats dict. To convert these stats to a pandas dataframe (to compare/analyze multiple networks against each other), just unpack these nested dicts first:
# unpack dicts into individual keys:values
stats = ox.basic_stats(G, area=area)
for k, count in stats['streets_per_node_counts'].items():
    stats['int_{}_count'.format(k)] = count
for k, proportion in stats['streets_per_node_proportion'].items():
    stats['int_{}_prop'.format(k)] = proportion

# delete the no longe...
Inspect betweenness centrality
G_projected = ox.project_graph(G)
max_node, max_bc = max(extended_stats['betweenness_centrality'].items(), key=lambda x: x[1])
max_node, max_bc
In the city of Piedmont, California, the node with the highest betweenness centrality has 29.4% of all shortest paths running through it. Let's highlight it in the plot:
nc = ['r' if node == max_node else '#336699' for node in G_projected.nodes()]
ns = [50 if node == max_node else 8 for node in G_projected.nodes()]
fig, ax = ox.plot_graph(G_projected, node_size=ns, node_color=nc, node_zorder=2)
29.4% of all shortest paths run through the node highlighted in red. Let's look at the relative betweenness centrality of every node in the graph:
# get a color for each node
def get_color_list(n, color_map='plasma', start=0, end=1):
    return [cm.get_cmap(color_map)(x) for x in np.linspace(start, end, n)]

def get_node_colors_by_stat(G, data, start=0, end=1):
    df = pd.DataFrame(data=pd.Series(data).sort_values(), columns=['value'])
    df['colors'] = get_col...
Above, the nodes are visualized by betweenness centrality, from low (dark violet) to high (light yellow).

Routing: calculate the network path from the centermost node to some other node

Let the origin node be the node nearest the location and let the destination node just be the last node in the network. Then find the ...
# define a lat-long point, create network around point, define origin/destination nodes
location_point = (37.791427, -122.410018)
G = ox.graph_from_point(location_point, distance=500, distance_type='network', network_type='walk')
origin_node = ox.get_nearest_node(G, location_point)
destination_node = list(G.nodes())[-1...
Routing: plot network path from one lat-long to another
# define origin/destination points then get the nodes nearest to each
origin_point = (37.792896, -122.412325)
destination_point = (37.790495, -122.408353)
origin_node = ox.get_nearest_node(G, origin_point)
destination_node = ox.get_nearest_node(G, destination_point)
origin_node, destination_node

# find the shortest pat...
Demonstrate routing with one-way streets
G = ox.graph_from_address('N. Sicily Pl., Chandler, Arizona', distance=800, network_type='drive')
origin = (33.307792, -111.894940)
destination = (33.312994, -111.894998)
origin_node = ox.get_nearest_node(G, origin)
destination_node = ox.get_nearest_node(G, destination)
route = nx.shortest_path(G, origin_node, destinat...
Also, when there are parallel edges between nodes in the route, OSMnx picks the shortest edge to plot.
location_point = (33.299896, -111.831638)
G = ox.graph_from_point(location_point, distance=500, clean_periphery=False)
origin = (33.301821, -111.829871)
destination = (33.301402, -111.833108)
origin_node = ox.get_nearest_node(G, origin)
destination_node = ox.get_nearest_node(G, destination)
route = nx.shortest_path(G, ...
The purpose of the function stupid_generator is to list the integers smaller than end. However, it does not return the list directly, but a generator over this list. Compare with the following function.
def stupid_list(end):
    i = 0
    result = []
    while i < end:
        result.append(i)
        i += 1
    return result

stupid_list(3)
TP/Python2/TP1GenerationRecursive.ipynb
hivert/CombiFIIL
gpl-2.0
To retrieve the objects of stupid_generator, you have to explicitly turn it into a list, or iterate over the objects with a loop.
it = stupid_generator(3)
it.next()

list(stupid_generator(3))

for v in stupid_generator(3):
    print v
Note: the instructions of stupid_generator are not executed when the function is first called, but only when we start iterating over the generator to retrieve the first object. The yield instruction then suspends execution and returns the first object. If we ask for a second object, the exe...
def test_generator():
    print "This instruction is executed when the first object is requested"
    yield 1
    print "This instruction is executed when the second object is requested"
    yield 2
    print "This instruction is executed when the third object is requested"
    yield 3

it = test_generator()
it.next...
Exercise: implement the following function, whose purpose is to generate the first n Fibonacci numbers. The Fibonacci sequence is defined by: $f_0 = 0$, $f_1 = 1$, $f_n = f_{n-1} + f_{n-2}$ for $n \geq 2$.
def first_fibonacci_generator(n):
    """
    Return a generator for the first ``n`` Fibonacci numbers
    """
    # write code here
Your function must pass the following tests:
import types
assert(type(first_fibonacci_generator(3)) == types.GeneratorType)
assert(list(first_fibonacci_generator(0)) == [])
assert(list(first_fibonacci_generator(1)) == [0])
assert(list(first_fibonacci_generator(2)) == [0,1])
assert(list(first_fibonacci_generator(8)) == [0,1,1,2,3,5,8,13])
In the previous cases, the generator stops by itself after a while. However, it is also possible to write infinite generators. In that case, the responsibility for stopping falls to the caller.
def powers2():
    v = 1
    while True:
        yield v
        v *= 2

for v in powers2():
    print v
    if v > 1000000:
        break
Exercise: implement the following functions
def fibonacci_generator():
    """
    Return an infinite generator for Fibonacci numbers
    """
    # write code here

it = fibonacci_generator()
it.next()

def fibonacci_finder(n):
    """
    Return the first Fibonacci number greater than or equal to n
    """
    # write code here

assert(fibonacci_finder(10) == ...
Binary words

We will now look at the recursive generation of binary words satisfying certain properties. We will represent binary words as strings of characters, for example:
binaires1 = ["0", "1"]
binaires2 = ["00", "01", "10", "11"]
The following functions generate the binary words of length 0, 1, and 2.
def binary_word_generator0():
    yield ""

def binary_word_generator1():
    yield "0"
    yield "1"

def binary_word_generator2():
    for b in binary_word_generator1():
        yield b + "0"
        yield b + "1"

list(binary_word_generator2())
Taking inspiration from the previous functions (but without using them), or starting from the function seen in class, implement recursively the following function, which generates the set of binary words of a given length.
def binary_word_generator(n):
    """
    Return a generator on binary words of size n in lexicographic order

    Input :
        - n, the length of the words
    """
    # write code here

list(binary_word_generator(3))

# tests
import types
assert(type(binary_word_generator(0)) == types.GeneratorType)
assert(lis...
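One possible recursive sketch (a sample solution): a word of size $n$ is a word of size $n-1$ followed by a 0 or a 1.

```python
def binary_word_generator(n):
    """ Return a generator on binary words of size n in lexicographic order """
    if n == 0:
        yield ""                 # the empty word is the only word of size 0
    else:
        for w in binary_word_generator(n - 1):
            yield w + "0"        # appending 0 first keeps lexicographic order
            yield w + "1"

print(list(binary_word_generator(2)))  # ['00', '01', '10', '11']
```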
Following the same model, implement the function below (a bit harder). Think of it this way: if I have a word of size $n$ that ends in 0 and contains $k$ occurrences of 1, how many 1s did the word of size $n-1$ from which it was created contain? Same question if it ends in 1. Note: the order of the words is not...
def binary_kword_generator(n,k):
    """
    Return a generator on binary words of size n such that each word
    contains exactly k occurrences of 1

    Input :
        - n, the size of the words
        - k, the number of 1
    """
    # write code here

list(binary_kword_generator(4,2))

# tests
import types
assert(type...
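One possible recursive sketch: a word of size $n$ with $k$ ones that ends in 0 comes from a word of size $n-1$ with $k$ ones; one that ends in 1 comes from a word of size $n-1$ with $k-1$ ones.

```python
def binary_kword_generator(n, k):
    """ Generate binary words of size n with exactly k occurrences of 1 """
    if n == 0:
        if k == 0:
            yield ""             # only the empty word, and only if k == 0
        return
    for w in binary_kword_generator(n - 1, k):
        yield w + "0"            # ending in 0: the prefix already had k ones
    if k > 0:
        for w in binary_kword_generator(n - 1, k - 1):
            yield w + "1"        # ending in 1: the prefix had k-1 ones

print(sorted(binary_kword_generator(4, 2)))  # ['0011', '0101', '0110', '1001', '1010', '1100']
```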
And to finish: a Dyck prefix is a binary word of size $n$ with $k$ occurrences of 1 such that, in every prefix, the number of 1s is greater than or equal to the number of 0s. For example, $1101$ is a Dyck prefix for $n=4$ and $k=3$. But $1001$ is not one, because in the prefix $100$ the number of 0s is greater than the number of 1s.
def dyck_prefix_generator(n,k):
    """
    Return a generator on binary words of size n such that each word
    contains exactly k occurrences of 1, and in any prefix, the number
    of 1 is greater than or equal to the number of 0.

    Input :
        - n, the size of the words
        - k, the number of 1
    """
    # write code here
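One possible sketch, pruning branches where the constraint can no longer hold. Since a word is a prefix of itself, every recursive sub-word of length $m$ with $j$ ones must satisfy $2j \geq m$:

```python
def dyck_prefix_generator(n, k):
    """ Generate binary words of size n with k ones where every prefix
        has at least as many 1s as 0s """
    if k > n or 2 * k < n:   # not enough room for k ones, or too many zeros
        return
    if n == 0:
        yield ""
        return
    for w in dyck_prefix_generator(n - 1, k):
        yield w + "0"        # ending in 0: the prefix already had k ones
    for w in dyck_prefix_generator(n - 1, k - 1):
        yield w + "1"        # ending in 1: the prefix had k-1 ones

print([len(set(dyck_prefix_generator(2 * n, n))) for n in range(6)])  # [1, 1, 2, 5, 14, 42]
```

The counts for $n = 2n, k = n$ are the Catalan numbers, which is what the Google search at the end of the notebook should reveal.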
Run the following line and paste the list of numbers you obtain into Google.
[len(set(dyck_prefix_generator(2*n, n))) for n in xrange(8)]
Environment Preparation Install Analytics Zoo You can install the latest pre-release version with chronos support using pip install --pre --upgrade analytics-zoo[automl].
# Install latest pre-release version of Analytics Zoo
# Installing Analytics Zoo from pip will automatically install pyspark, bigdl, and their dependencies.
!pip install --pre --upgrade analytics-zoo[automl]

exit()  # restart the runtime to refresh installed pkg
docs/docs/colab-notebook/chronos/chronos_nyc_taxi_tsdataset_forecaster.ipynb
intel-analytics/analytics-zoo
apache-2.0
Step 0: Download & prepare dataset

We use the NYC taxi passengers dataset from the Numenta Anomaly Benchmark (NAB) for this demo. It contains 10320 records, each indicating the total number of taxi passengers in NYC at a corresponding time spot.
# download the dataset
!wget https://raw.githubusercontent.com/numenta/NAB/v1.0/data/realKnownCause/nyc_taxi.csv

# load the dataset. The downloaded dataframe contains two columns, "timestamp" and "value".
import pandas as pd
df = pd.read_csv("nyc_taxi.csv", parse_dates=["timestamp"])
Time series forecasting using Chronos Forecaster

Forecaster Step 1: Data transformation and feature engineering using Chronos TSDataset

TSDataset is our abstraction of a time series dataset for data transformation and feature engineering. Here we use it to preprocess the data.
from zoo.chronos.data import TSDataset
from sklearn.preprocessing import StandardScaler
Initialize the train, validation and test TSDataset from the raw pandas dataframe.
tsdata_train, tsdata_valid, tsdata_test = TSDataset.from_pandas(df,
                                                                dt_col="timestamp",
                                                                target_col="value",
                                                                with_split=True,
                                                                val_ratio=0.1,
                                                                test_ratio=0.1)
Preprocess the datasets. Here we perform: - deduplicate: remove identical data records - impute: fill in missing values - gen_dt_feature: generate features from the datetime (e.g. month, day...) - scale: scale each feature to a standard distribution - roll: sample the data with a sliding window. For the forecasting task, we...
lookback, horizon = 6, 1
scaler = StandardScaler()
for tsdata in [tsdata_train, tsdata_valid, tsdata_test]:
    tsdata.deduplicate()\
          .impute()\
          .gen_dt_feature()\
          .scale(scaler, fit=(tsdata is tsdata_train))\
          .roll(lookback=lookback, horizon=horizon)
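The `roll` step can be pictured with a tiny pure-Python sketch. This is illustrative only, not the zoo.chronos implementation: each sample pairs `lookback` past values (x) with the next `horizon` values (y).

```python
def roll(series, lookback, horizon):
    # Slide a window over the series:
    #   x[i] = series[i : i+lookback]
    #   y[i] = series[i+lookback : i+lookback+horizon]
    n = len(series) - lookback - horizon + 1
    x = [series[i:i + lookback] for i in range(n)]
    y = [series[i + lookback:i + lookback + horizon] for i in range(n)]
    return x, y

x, y = roll(list(range(10)), lookback=6, horizon=1)
print(len(x), x[0], y[0])  # 4 [0, 1, 2, 3, 4, 5] [6]
```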
Forecaster Step 2: Time series forecasting using Chronos Forecaster

After preprocessing the datasets, we can use a Chronos Forecaster to handle the forecasting task. Transform the TSDataset to sampled numpy ndarrays and feed them to the forecaster.
from zoo.chronos.forecaster.tcn_forecaster import TCNForecaster

x, y = tsdata_train.to_numpy()
# x.shape = (num of sample, lookback, num of input feature)
# y.shape = (num of sample, horizon, num of output feature)

forecaster = TCNForecaster(past_seq_len=lookback,  # number of steps to look back
                           ...
Forecaster Step 3: Further deployment with the fitted forecaster

Use the fitted forecaster to predict on the test data and plot the result.
x_test, y_test = tsdata_test.to_numpy()
pred = forecaster.predict(x_test)
pred_unscale, groundtruth_unscale = tsdata_test.unscale_numpy(pred), tsdata_test.unscale_numpy(y_test)

import matplotlib.pyplot as plt
plt.figure(figsize=(24,6))
plt.plot(pred_unscale[:,:,0])
plt.plot(groundtruth_unscale[:,:,0])
plt.legend(["pr...
Save & restore the forecaster.
forecaster.save("nyc_taxi.fxt")
forecaster.load("nyc_taxi.fxt")
2. Good, that went smoothly, now let's go deal with twitter To run our bot we'll need to use a protocol called OAuth which sounds a little bit daunting, but really it's just a kind of secret handshake that we agree on with twitter so they know that we're cool. First thing you'll need to do is make an "app". It's pre...
f = open('secrets_example.json','rb')
print "".join(f.readlines())
f.close()
notebooks/1 - Setting Up a Twitter Bot in 5 Easy Steps.ipynb
nmacri/twitter-bots-smw-2016
mit
3. Make your Bot's Account! Twitter's onboarding process isn't really optimized for the bot use-case, but once you get to the welcome screen you'll be logged in and ready for the next step (iow, you can keep the "all the stuff you love" to yourself).
f = open('secrets.json','rb')
secrets = json.load(f)
f.close()
Use a library that knows how to implement OAuth1 (trust me, it's not fun to figure out from scratch). I'm using rauth but there are tons more out there.
tw_oauth_service = OAuth1Service(
    consumer_key=secrets['twitter']['app']['consumer_key'],
    consumer_secret=secrets['twitter']['app']['consumer_secret'],
    name='twitter',
    access_token_url='https://api.twitter.com/oauth/access_token',
    authorize_url='https://api.twitter.com/oauth/authorize',
    request_...
The cells above will open a permissions dialog for you in a new tab. If you're cool w/ it, authorize your app against your bot user; you will then be redirected to the callback url you specified when you set up your app. I get redirected to something that looks like this: http://127.0.0.1:9999/?oauth_token=JvutuAAAAAAAk...
# Once you go through the flow and land on an error page http://127.0.0.1:9999 something
# enter your token and verifier below like so.
# The example below (which won't work until you update the parameters) is from the following url:
# http://127.0.0.1:9999/?oauth_token=JvutuAAAAAAAkfBmbVABUwFD6pI&oauth_verifier...
5. Store your secrets somewhere safe
# Copy this guy into your secrets file
# {
#     "user_id": "701177805317472256",
#     "screen_name": "SmwKanye",
# HERE ---->     "token_key": "YOUR_TOKEN_KEY",
#     "token_secret": "YOUR_TOKEN_SECRET"
# },
session.access_token

# Copy this guy into your secrets file
# ...
Awesome, now we have our user access tokens and secret. Store them in secrets.json and test below to see if they work. You don't really need 3 test accounts, so if you don't want to repeat the process just keep "production". Finally, test to see that your secrets are good...
f = open('secrets.json','rb')
secrets = json.load(f)
f.close()

tw_api_client = twitter.Api(consumer_key = secrets['twitter']['app']['consumer_key'],
                            consumer_secret = secrets['twitter']['app']['consumer_secret'],
                            access_token_key = secrets['twitter']['accounts']['production']['token_key'],
                            ...
Tree shape: Best case
# All of these will create the same best case tree shape
# Each example has the same keys, but different values
BST(get_kv(['H', 'C', 'S', 'A', 'E', 'R', 'X'])).wr_png("BST_bc0.png")
BST(get_kv(['H', 'S', 'X', 'R', 'C', 'E', 'A'])).wr_png("BST_bc1.png")
BST(get_kv(['H', 'C', 'A', 'E', 'S', 'R', 'X'])).wr_png("BST_bc2....
notebooks/ElemSymbolTbls.ipynb
dvklopfenstein/PrincetonAlgorithms
gpl-2.0
Best Case Tree Shape Tree shape: Worst case
# These will create worst case tree shapes
BST(get_kv(['A', 'C', 'E', 'H', 'R', 'S', 'X'])).wr_png("BST_wc_fwd.png")
BST(get_kv(['X', 'S', 'R', 'H', 'E', 'C', 'A'])).wr_png("BST_wc_rev.png")
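To see why those key orders give the best and worst shapes, here is a minimal stand-in BST (the `Node`/`insert`/`height`/`build` helpers below are an illustrative sketch, not the repo's `BST` class): inserting keys in "root first, then subtree roots" order gives a balanced tree, while sorted order degenerates into a linked list.

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    # Standard unbalanced BST insertion
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def height(root):
    if root is None:
        return 0
    return 1 + max(height(root.left), height(root.right))

def build(keys):
    root = None
    for k in keys:
        root = insert(root, k)
    return root

print(height(build(['H', 'C', 'S', 'A', 'E', 'R', 'X'])))  # 3  (balanced, best case)
print(height(build(['A', 'C', 'E', 'H', 'R', 'S', 'X'])))  # 7  (chain, worst case)
```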
Note: the examples were written on Linux; if you are using Windows or MacOS, you may need to change the directory/file separators (to a backslash, for example). The `with open(...)` block, when executed, opens the file with the chosen option (in the case above, 'r', since we only want to read the file).
with open(os.path.join(diretorio,"file1.txt"), "r") as meuarquivo:
    for linha in meuarquivo:
        print(linha)
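A self-contained variant of the same pattern (the file name `demo_aula3.txt` is just a placeholder for this sketch, not the course's `file1.txt`): write a small file, then read it back line by line. The `with` block guarantees the file is closed when the block ends, even if an error occurs.

```python
import os
import tempfile

# Hypothetical demo file, created just for this example
caminho = os.path.join(tempfile.gettempdir(), "demo_aula3.txt")
with open(caminho, "w") as f:
    f.write("linha 1\nlinha 2\n")

# Read it back; the file is closed automatically after the block
with open(caminho, "r") as meuarquivo:
    linhas = [linha.rstrip("\n") for linha in meuarquivo]

print(linhas)  # ['linha 1', 'linha 2']
os.remove(caminho)
```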
Notebooks/Aula_3.ipynb
melissawm/oceanobiopython
gpl-3.0
Example: try to find a specific string (in our case, "sf") inside the file file1.txt.
string = "sf"
b = []
with open(os.path.join(diretorio,"file1.txt"),"r") as arquivo:
    for line in arquivo:
        if string in line:
            b.append(line.rstrip("\n"))
At this point, the list b contains all the lines of the file that contained the desired string:
print(b)