To check the frequency of male/female occurrences in the data set, a bar chart is a possible representation: <div class="alert alert-success"> **EXERCISE 5** - Make a horizontal bar chart comparing the number of male, female and unknown (`NaN`) records in the data set. <details><summary>Hints</summary>...
# %load _solutions/case2_observations_processing6.py
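A minimal sketch of the idea (not the course solution; the column name `sex` and the toy data are assumptions for illustration):

```python
import numpy as np
import pandas as pd

# Hypothetical toy data; assumes the gender column is named "sex"
survey = pd.DataFrame({"sex": ["M", "F", "M", np.nan, "F", "M"]})

# dropna=False makes value_counts include the NaN records as well
counts = survey["sex"].value_counts(dropna=False)
# counts.plot(kind="barh")  # horizontal bar chart (requires matplotlib)
counts
```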
notebooks/case2_observations_processing.ipynb
jorisvandenbossche/DS-python-data-analysis
bsd-3-clause
<div class="alert alert-warning"> <b>NOTE</b>: The usage of `groupby` combined with the `size` of each group would be an option as well. However, the latter does not support counting the `NaN` values, whereas the `value_counts` method does, via the `dropna=False` argument. </div> Solving double entry fi...
survey_data["species"].unique()
survey_data.head(10)
There apparently is a double entry: 'DM and SH', which actually defines two records and should be decoupled into two individual records (i.e. rows). Hence, we should be able to create an additional row based on this split. To do so, Pandas provides a dedicated method since version 0.25, called explode. Starting fr...
example = survey_data.loc[7:10, "species"]
example
Using the split method on strings, we can split a string on a given separator, in this case the word and:
example.str.split("and")
The explode method will create a row for each element in the list:
example_split = example.str.split("and").explode()
example_split
Hence, DM and SH are now listed in separate rows. Other rows remain unchanged. The only remaining issue is the whitespace around the values:
example_split.iloc[1], example_split.iloc[2]
This we can solve using the string method strip, which removes the whitespace before and after each value:
example_split.str.strip()
To make this reusable, let's create a dedicated function to combine these steps, called solve_double_field_entry:
def solve_double_field_entry(df, keyword="and", column="verbatimEventDate"):
    """Split on keyword in column for an enumeration and create extra record

    Parameters
    ----------
    df: pd.DataFrame
        DataFrame with a double field entry in one or more values
    keyword: str
        word/character to split...
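The actual body of the course's function is elided above; a hypothetical sketch of such a helper, simply chaining the split/explode/strip steps shown before, could look like this (the function name and toy data here are assumptions, not the course solution):

```python
import pandas as pd

# Hypothetical sketch, not the course's implementation
def split_field_entries(df, keyword="and", column="species"):
    df = df.copy()
    df[column] = df[column].str.split(keyword)   # "DM and SH" -> ["DM ", " SH"]
    df = df.explode(column)                      # one row per list element
    df[column] = df[column].str.strip()          # drop surrounding whitespace
    return df.reset_index(drop=True)

demo = pd.DataFrame({"species": ["DM and SH", "DO"]})
split_field_entries(demo)
```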
The function takes a DataFrame as input, splits the relevant records into separate rows and returns the updated DataFrame. We can use this function to update our DataFrame, with additional rows (observations) added by decoupling the specific field. Let's apply this new function. <div class="alert alert-success"> **EXE...
# %load _solutions/case2_observations_processing7.py
survey_data_decoupled["species"].unique()
survey_data_decoupled.head(11)
Create new occurrence identifier The record_id is no longer a unique identifier for each observation after the decoupling of this data set. We will make a new, data-set-specific identifier, by adding a column called occurrenceID that takes a new counter as identifier. As a simple and straightforward approach, we will us...
np.arange(1, len(survey_data_decoupled) + 1, 1)
To create a new column with header occurrenceID with the values 1 -> 35550 as field values:
survey_data_decoupled["occurrenceID"] = np.arange(1, len(survey_data_decoupled) + 1, 1)
To avoid confusion between the record_id and occurrenceID fields, we will remove the record_id column:
survey_data_decoupled = survey_data_decoupled.drop(columns="record_id")
Hence, columns can be dropped from a DataFrame:
survey_data_decoupled.head(10)
Converting the date values In the survey data set we received, the month, day, and year columns contain the information about the date, i.e. eventDate in Darwin Core terms. We want this data in the ISO format YYYY-MM-DD. A convenient option is the Pandas function to_datetime, which provides multiple options to in...
# pd.to_datetime(survey_data_decoupled[["year", "month", "day"]]) # uncomment the line and test this statement
This does not work; not all dates can be interpreted... We need some more information on the cause of the errors. With the option errors='coerce', the problematic values will be labeled as the missing value NaT. We can then count the number of dates that cannot be interpreted:
sum(pd.to_datetime(survey_data_decoupled[["year", "month", "day"]], errors='coerce').isna())
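On a toy example, errors="coerce" turns an impossible date into NaT, which isna() can then count:

```python
import pandas as pd

# Toy illustration: April has no 31st, so the second row cannot be parsed
parts = pd.DataFrame({"year": [2000, 2000], "month": [4, 4], "day": [30, 31]})
parsed = pd.to_datetime(parts, errors="coerce")
parsed.isna().sum()
```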
<div class="alert alert-success"> **EXERCISE 7** - Make a selection of `survey_data_decoupled` containing those records that can not correctly be interpreted as date values and save the resulting `DataFrame` as a new variable `trouble_makers` <details><summary>Hints</summary> - The result of the `.isna()` method is...
# %load _solutions/case2_observations_processing8.py
Checking some characteristics of the trouble_makers:
trouble_makers.head()
trouble_makers["day"].unique()
trouble_makers["month"].unique()
trouble_makers["year"].unique()
The issue is the presence of day 31 in the months April and September of the year 2000. At this point, we would have to recheck the original data in order to know how the issue can be solved. Apparently - for this specific case - there was a data-entry problem in 2000, making the 31 days during this period...
# %load _solutions/case2_observations_processing9.py
Now, we do the parsing again to create a proper eventDate field containing the dates:
survey_data_decoupled["eventDate"] = \
    pd.to_datetime(survey_data_decoupled[["year", "month", "day"]])
<div class="alert alert-success"> **EXERCISE 9** - Check the number of observations for each year. Create a horizontal bar chart with the number of rows/observations for each year. <details><summary>Hints</summary> - To get the total number of observations, both the usage of `value_counts` as using `groupby` + `siz...
# %load _solutions/case2_observations_processing10.py
# %load _solutions/case2_observations_processing11.py
survey_data_decoupled.head()
Currently, the dates are stored in a dedicated datetime data type:
survey_data_decoupled["eventDate"].dtype
This is great, because it allows for many functionalities using the .dt accessor:
survey_data_decoupled.eventDate.dt  # add a dot (.) and press TAB to explore the date options it provides
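A small illustration of a few of those .dt attributes on a toy series:

```python
import pandas as pd

dates = pd.Series(pd.to_datetime(["2000-01-01", "2000-04-30"]))
dates.dt.year       # the year of each date
dates.dt.dayofweek  # Monday=0 ... Sunday=6
```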
<div class="alert alert-success"> **EXERCISE 10** - Create a horizontal bar chart with the number of records for each year (cfr. supra), but without using the column `year`, using the `eventDate` column directly. <details><summary>Hints</summary> - Check the `groupby` + `size` solution of the previous exercise and ...
# %load _solutions/case2_observations_processing12.py
We actually do not need the day, month, year columns anymore, but feel free to use what suits you best. <div class="alert alert-success"> **EXERCISE 11** - Create a bar chart with the number of records for each day of the week (`dayofweek`) <details><summary>Hints</summary> - Pandas has an accessor for `dayofweek` ...
# %load _solutions/case2_observations_processing13.py
When saving the information to a file (e.g. a CSV file), this data type will be automatically converted to a string representation. However, we could also decide to explicitly provide the string format in which the dates are stored (losing the date-type functionalities), in order to have full control over the way these dates are fo...
survey_data_decoupled["eventDate"] = survey_data_decoupled["eventDate"].dt.strftime('%Y-%m-%d')
survey_data_decoupled["eventDate"].head()
For the remainder, let's remove the day/year/month columns.
survey_data_decoupled = survey_data_decoupled.drop(columns=["day", "month", "year"])
2. Add species names to dataset The column species only provides a short identifier in the survey overview. The name information is stored in a separate file, species.csv. We want our data set to include this information, so we read in the data and add it to our survey data set: <div class="alert alert-success"> **EXERCISE 1...
# %load _solutions/case2_observations_processing14.py
species_data.head()
Fix a wrong acronym naming When reviewing the metadata, you see that in the data file the acronym NE is used to describe Neotoma albigula, whereas in the metadata description, the acronym NA is used. <div class="alert alert-success"> **EXERCISE 13** - Convert the value of 'NE' to 'NA' by using Boolean indexing/Filter...
# %load _solutions/case2_observations_processing15.py
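One common pattern for such a replacement, sketched on a toy frame (not the course solution; the column name "species_id" is an assumption for illustration):

```python
import pandas as pd

# Hypothetical toy frame; assumes the acronym lives in a column named "species_id"
species = pd.DataFrame({"species_id": ["NE", "DM", "SH"]})

# Boolean indexing with .loc overwrites only the matching rows
species.loc[species["species_id"] == "NE", "species_id"] = "NA"
species
```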
Merging surveys and species As we have now prepared the two data sets, we can combine the data, using again the pd.merge operation. We want to add the data of the species to the survey data, in order to see the full species names in the combined data table. <div class="alert alert-success"> **EXERCISE 14** Combine the DataFr...
# %load _solutions/case2_observations_processing16.py
len(survey_data_species)  # check length after join operation
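The general pattern, sketched on toy frames (not the course solution): a how="left" join keeps every survey row, so the length should be unchanged when the key is unique on the species side.

```python
import pandas as pd

surveys = pd.DataFrame({"species_id": ["DM", "DO", "DM"], "weight": [40, 52, 41]})
species = pd.DataFrame({"species_id": ["DM", "DO"], "genus": ["Dipodomys", "Dipodomys"]})

# left join: every survey row is kept, species info is added where the key matches
merged = pd.merge(surveys, species, how="left", on="species_id")
merged
```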
The join is ok, but we are left with some redundant columns and wrong naming:
survey_data_species.head()
We do not need the species_x and species_id columns anymore, as we will use the scientific names from now on:
survey_data_species = survey_data_species.drop(["species_x", "species_id"], axis=1)
The column species_y could just be named species:
survey_data_species = survey_data_species.rename(columns={"species_y": "species"})
survey_data_species.head()
len(survey_data_species)
3. Add coordinates from the plot locations Loading the coordinate data The individual plots are only identified by a plot identification number. In order to provide sufficient information to external users, additional information about the coordinates should be added. The coordinates of the individual plots are saved i...
# %load _solutions/case2_observations_processing17.py
plot_data.head()
Transforming to other coordinate reference system These coordinates are in meters, more specifically in the UTM 12 N coordinate system. However, the agreed coordinate representation for Darwin Core is the World Geodetic System 1984 (WGS84). As this is not a GIS course, we will shortcut the discussion about different pr...
from pyproj import Transformer

transformer = Transformer.from_crs("EPSG:32612", "EPSG:4326")
The reprojection can be done with the transform method of the Transformer object (the coordinate systems were already provided at construction), given a set of x, y coordinates. For example, for a single coordinate, this can be applied as follows:
transformer.transform(681222.131658, 3.535262e+06)
Such a transformation is not supported by Pandas itself (it is in https://geopandas.org/). In such a situation, we want to apply a custom function to each row of the DataFrame. Instead of writing a for loop to do this for each of the coordinates in the list, we can .apply() this function with Pandas. <div c...
# %load _solutions/case2_observations_processing18.py
# %load _solutions/case2_observations_processing19.py
# %load _solutions/case2_observations_processing20.py
# %load _solutions/case2_observations_processing21.py
plot_data.head()
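The row-wise .apply() pattern looks like this; a placeholder conversion function stands in for the real transformer.transform call so the sketch stays self-contained (column and output names are assumptions, not the course solution):

```python
import pandas as pd

plots = pd.DataFrame({"xutm": [681222.1, 681300.0], "yutm": [3535262.0, 3535300.0]})

# Placeholder standing in for transformer.transform(x, y)
def fake_transform(x, y):
    return y / 1e5, x / 1e5

# apply the function to each row (axis=1), returning two new columns
coords = plots.apply(
    lambda row: pd.Series(fake_transform(row["xutm"], row["yutm"]),
                          index=["decimalLatitude", "decimalLongitude"]),
    axis=1)
coords
```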
The above function transform_utm_to_wgs you have created is a very specific function that knows the structure of the DataFrame you will apply it to (it assumes the 'xutm' and 'yutm' column names). We could also make a more generic function that just takes an X and Y coordinate and returns the Series of converted coordin...
# %load _solutions/case2_observations_processing22.py
<div class="alert alert-success"> **EXERCISE 18** Combine the DataFrame `plot_data_selection` and the DataFrame `survey_data_species` by adding the corresponding coordinate information to the individual observations using the `pd.merge()` function. Assign the output to a new variable `survey_data_plots`. <details><s...
# %load _solutions/case2_observations_processing23.py
survey_data_plots.head()
The plot locations need to be stored with the variable name verbatimLocality, holding the integer identifier of the plot:
survey_data_plots = survey_data_plots.rename(columns={'plot': 'verbatimLocality'})
Let's now save our cleaned data to a CSV file, so we can further analyze the data in a following notebook:
survey_data_plots.to_csv("interim_survey_data_species.csv", index=False)
(OPTIONAL SECTION) 4. Using an API service to match the scientific names As the current species names are rather short and could eventually lead to confusion when shared with other users, retrieving additional information about the different species in our dataset would be useful to integrate our work with other researc...
import requests
Example matching with Alcedo atthis For the example of Alcedo atthis:
species_name = 'Alcedo atthis'
base_string = 'http://api.gbif.org/v1/species/match?'
request_parameters = {'verbose': False, 'strict': True, 'name': species_name}
message = requests.get(base_string, params=request_parameters).json()
message
From this we get a dictionary containing more information about the taxonomy of Alcedo atthis. In the species data set available, the name to match is provided in a combination of two columns, so we have to combine those two in order to execute the name matching:
genus_name = "Callipepla"
species_name = "squamata"
name_to_match = '{} {}'.format(genus_name, species_name)
base_string = 'http://api.gbif.org/v1/species/match?'
request_parameters = {'strict': True, 'name': name_to_match}  # use strict matching(!)
message = requests.get(base_string, params=request_parameters).json()
m...
To apply this to our species data set, we will have to do this request for each individual species/genus combination. As this is recurring functionality, we will write a small function to do this: Writing a custom matching function <div class="alert alert-success"> **EXERCISE 19** - Write a function, called...
# %load _solutions/case2_observations_processing24.py
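A hedged sketch of the request setup (not the course solution); the actual HTTP call is left commented out so the example stays self-contained, and the helper name is hypothetical:

```python
GBIF_MATCH_URL = "http://api.gbif.org/v1/species/match"

# Hypothetical helper: the API expects the full name in a single "name" parameter
def build_match_params(genus, species, strict=True):
    return {"strict": strict, "name": "{} {}".format(genus, species)}

# a name_match function could then do something like:
# import requests
# requests.get(GBIF_MATCH_URL, params=build_match_params("Callipepla", "squamata")).json()
build_match_params("Callipepla", "squamata")
```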
<div class="alert alert-info"> **NOTE** For handling many of these API requests, dedicated packages exist, e.g. <a href="https://github.com/sckott/pygbif">pygbif</a> provides different functions to do requests to the GBIF API, basically wrapping the request possibilities. For any kind of service, just ask yourself:...
genus_name = "Callipepla"
species_name = "squamata"
name_match(genus_name, species_name, strict=True)
However, the matching won't provide an answer for every search:
genus_name = "Lizard"
species_name = "sp."
name_match(genus_name, species_name, strict=True)
Match each of the species names of the survey data set Hence, in order to add this information to our survey DataFrame, we need to perform the following steps: 1. extract the unique genus/species combinations in our dataset and combine them into a single column 2. match each of these names to the GBIF API service 3. proces...
# %load _solutions/case2_observations_processing25.py
len(unique_species)
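On a toy frame, drop_duplicates on the two key columns is one way to get the unique combinations (a sketch, not the course solution):

```python
import pandas as pd

obs = pd.DataFrame({"genus": ["Dipodomys", "Dipodomys", "Callipepla"],
                    "species": ["merriami", "merriami", "squamata"]})

# keep one row per unique genus/species pair
unique = obs.drop_duplicates(subset=["genus", "species"])
len(unique)
```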
<div class="alert alert-success"> **EXERCISE 21** - Extract the unique combinations of genus and species in the `survey_data_plots` using `groupby`. Save the result as the variable `unique_species`. <details><summary>Hints</summary> - As `groupby` needs an aggregation function, this can be `first()` (the first of e...
# %load _solutions/case2_observations_processing26.py
len(unique_species)
<div class="alert alert-success"> **EXERCISE 22** - Combine the columns genus and species to a single column with the complete name, save it in a new column named 'name' </div>
# %load _solutions/case2_observations_processing27.py
unique_species.head()
To perform the matching for each of the combinations, different options exist (remember apply?). Just to showcase the possibility of using for loops in such a situation, let's add the matched information with a for loop. First, we will store everything in one dictionary, where the keys of the dictionary...
# this will take a bit as we do a request to GBIF for each individual species
species_annotated = {}
for key, row in unique_species.iterrows():
    species_annotated[key] = name_match(row["genus"], row["species"], strict=True)
#species_annotated  # uncomment to see output
We can now transform this to a pandas DataFrame: <div class="alert alert-success"> **EXERCISE 23** - Convert the dictionary `species_annotated` into a pandas DataFrame with the row index the key-values corresponding to `unique_species` and the column headers the output columns of the API response. Save the result as ...
# %load _solutions/case2_observations_processing28.py
df_species_annotated.head()
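Converting such a dict-of-dicts is typically done with pd.DataFrame.from_dict using orient="index" (a sketch on toy data):

```python
import pandas as pd

annotated = {0: {"scientificName": "Dipodomys merriami", "status": "ACCEPTED"},
             1: {"scientificName": "Callipepla squamata", "status": "ACCEPTED"}}

# outer keys become the row index, inner-dict keys become the columns
df = pd.DataFrame.from_dict(annotated, orient="index")
df
```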
Select relevant information and add this to the survey data <div class="alert alert-success"> **EXERCISE 24** - Subselect the columns 'class', 'kingdom', 'order', 'phylum', 'scientificName', 'status' and 'usageKey' from the DataFrame `df_species_annotated`. Save it as the variable `df_species_annotated_subset` </di...
# %load _solutions/case2_observations_processing29.py
df_species_annotated_subset.head()
<div class="alert alert-success"> **EXERCISE 25** - Join the `df_species_annotated_subset` information to the `unique_species` overview of species. Save the result as variable `unique_species_annotated`. </div>
# %load _solutions/case2_observations_processing30.py
unique_species_annotated.head()
<div class="alert alert-success"> **EXERCISE 26** - Join the `unique_species_annotated` data to the `survey_data_plots` data set, using both the genus and species column as keys. Save the result as the variable `survey_data_completed`. </div>
# %load _solutions/case2_observations_processing31.py
len(survey_data_completed)
survey_data_completed.head()
Congratulations! You did a great cleaning job, save your result:
survey_data_completed.to_csv("survey_data_completed_.csv", index=False)
Reload the naive predictions Shows how to make use of the data produced by the script naive.py (run via the scripted module).
%matplotlib inline
import matplotlib.pyplot as plt
import open_cp.scripted
import open_cp.scripted.analysis as analysis

loaded = open_cp.scripted.Loader("naive_preds.pic.xz")
loaded.timed_points.time_range

fig, axes = plt.subplots(ncols=2, figsize=(16,7))
analysis.plot_data_scatter(loaded, axes[0])
analysis.plot_dat...
examples/Scripts/Reload naive predictions.ipynb
QuantCrimAtLeeds/PredictCode
artistic-2.0
Manually redo the predictions and scoring
import datetime
import open_cp.naive
import numpy as np
import pandas as pd
import open_cp.evaluation

start = datetime.datetime(2016, 10, 1)
our_preds = []
while start < datetime.datetime(2017, 1, 1):
    predictor = open_cp.naive.CountingGridKernel(loaded.grid.xsize, region=loaded.grid.region())
    mask = loaded.tim...
Check the scoring
frame = pd.read_csv("naive.csv")
frame.head()
frame.tail()

coverages = list(range(1,101))
start = datetime.datetime(2016, 10, 1)
rows = []
for pred in our_preds:
    end = start + datetime.timedelta(days=1)
    mask = (loaded.timed_points.timestamps >= start) & (loaded.timed_points.timestamps < end)
    rows.append(...
Some plots: average hit rate, and standard error
def plot_mean_hitrate(ax, frame, xrange):
    coverages = list(range(1,101))
    data = {}
    for pred_type in frame.Predictor.unique():
        data[pred_type] = {}
        f = frame[frame.Predictor == pred_type].describe()
        for cov in coverages:
            r = f["{}%".format(cov)]
            data[pred_type]...
Fit binomial model instead Use a beta prior
betas = analysis.hit_counts_to_beta("naive_counts.csv")

fig, ax = plt.subplots(figsize=(12,8))
analysis.plot_betas(betas, ax)

fig, ax = plt.subplots(figsize=(12,8))
analysis.plot_betas(betas, ax, range(1,21))
What does this difference actually mean? Suppose we pick 5% coverage. There is a big gap between the curves there.
tps = loaded.timed_points.bin_timestamps(datetime.datetime(2016,1,1), datetime.timedelta(days=1))

import collections, statistics
c = collections.Counter(tps.timestamps)
statistics.mean(c.values())
So we have about 5 crime events a day, on average.
import scipy.special

def BetaBinom(alpha, beta, n, k):
    """http://www.channelgrubb.com/blog/2015/2/27/beta-binomial-in-python"""
    part_1 = scipy.special.comb(n,k)
    part_2 = scipy.special.betaln(k+alpha,n-k+beta)
    part_3 = scipy.special.betaln(alpha,beta)
    result = (np.log(part_1) + part_2) - part_3
    ...
Check the version of the Graphistry module
g.__version__

# To specify Graphistry account & server, use:
# graphistry.register(api=3, username='...', password='...', protocol='https', server='hub.graphistry.com')
# For more options, see https://github.com/graphistry/pygraphistry#configure
demos/data/benchmarking/DenseDatasets.ipynb
graphistry/pygraphistry
bsd-3-clause
100 dense columns with 100K edges (restricted set of integer values 1-100)
edges = [{'src': x, 'dst': (x + 1) % 100000} for x in range(0, 100000)]
for i, edge in enumerate(edges):
    for fld in range(0, 100):
        edge['fld' + str(fld)] = (fld + i) % 100
edges = pd.DataFrame(edges)
edges[:3]

g.edges(edges).bind(source='src', destination='dst').plot()
100 dense columns with 100K edges (random floats) Each edge has 100 attributes, each a randomly selected float
edges = [{'src': x, 'dst': (x + 1) % 100000} for x in range(0, 100000)]
for i, edge in enumerate(edges):
    for fld in range(0, 100):
        edge['fld' + str(fld)] = random.random()
edges = pd.DataFrame(edges)
edges[:3]

g.edges(edges).bind(source='src', destination='dst').plot()
100 dense columns with 100K edges (random strings) Each edge has 100 attributes, each a randomly selected string
edges = [{'src': x, 'dst': (x + 1) % 100000} for x in range(0, 100000)]
for i, edge in enumerate(edges):
    for fld in range(0, 100):
        edge['fld' + str(fld)] = 'String' + str(random.random())
edges = pd.DataFrame(edges)
edges[:3]

g.edges(edges).bind(source='src', destination='dst').plot()
10 dense columns with 800K edges (restricted set of integers 1-100)
edges = [{'src': (x % 300), 'dst': ((x + 1) % 800)} for x in range(0, 800000)]
for i, edge in enumerate(edges):
    for fld in range(0, 10):
        edge['fld' + str(fld)] = (fld + i) % 100
edges = pd.DataFrame(edges)
edges[:3]

g.edges(edges).bind(source='src', destination='dst').plot()
10 dense columns with 800K edges (random float)
edges = [{'src': (x % 300), 'dst': ((x + 1) % 800)} for x in range(0, 800000)]
for i, edge in enumerate(edges):
    for fld in range(0, 10):
        edge['fld' + str(fld)] = random.random()
edges = pd.DataFrame(edges)
edges[:3]

g.edges(edges).bind(source='src', destination='dst').plot()
10 dense columns with 800K edges (random strings)
edges = [{'src': (x % 300), 'dst': ((x + 1) % 800)} for x in range(0, 800000)]
for i, edge in enumerate(edges):
    for fld in range(0, 10):
        edge['fld' + str(fld)] = 'String + ' + str(random.random())
edges = pd.DataFrame(edges)
edges[:3]

g.edges(edges).bind(source='src', destination='dst').plot()
Retrieving Stock Price Data In this case, I'm retrieving stock price data for American Express Company using its stock symbol AXP from Google Finance.
AXP = web.DataReader('AXP', data_source='google')
fin_big_data.ipynb
sanabasangare/data-visualization
mit
The "AXP" object is of type "DataFrame".
type(AXP)
Get meta information
AXP.info()
List the columns in the dataframe
AXP.columns
Display the final five rows of the data set.
AXP.tail()
Easily select single or multiple columns of a DataFrame object. .head() shows the first five rows of the selected column
AXP['Close'].head()
Here, .tail() shows the last five rows of the two selected columns
AXP[['Open', 'Close']].tail()
Similarly, single or multiple rows can be selected
AXP.loc['2017-06-05']  # single row via index value
AXP.iloc[:2]           # two rows via index numbers
Data Visualization
AXP['Close'].plot(figsize=(20, 10));
A fully vectorized operation for the log return calculation
rets = np.log(AXP['Close'] / AXP['Close'].shift(1))
The log returns can then be visualized via a histogram.
rets.hist(figsize=(20, 10), bins=35);
Calculating Moving Averages with a pandas function: vectorized calculation of the 50-day moving average/trend
AXP['MA50'] = pd.Series(AXP['Close']).rolling(window=50, center=False).mean()
AXP[['Close', 'MA50']].plot(figsize=(20, 10));
Today Melanie led the meeting with a session on the ArcGIS software and how we can use Python to automatise the geospatial data processing. The slides are available below. We started with a brief introduction to the types of data and analysis you can do in ArcGIS. Then Melanie demonstrated how to produce a 3D terrain ...
# embed pdf into an automatically resized window (requires imagemagick)
w_h_str = !identify -format "%w %h" ../pdfs/arcgis-intro.pdf[0]
HTML('<iframe src=../pdfs/arcgis-intro.pdf width={0[0]} height={0[1]}></iframe>'.format([int(i)*0.8 for i in w_h_str[0].split()]))
content/notebooks/2016-06-10-arcgis-intro.ipynb
ueapy/ueapy.github.io
mit
We all agreed that ArcGIS has a lot to offer to geoscientists. But what makes this software even more appealing is that you can work in a command-line interface using Python (ArcPy module). So we looked at how to run processes using the Python window command-by-command and how you might integrate ArcGIS processes withi...
HTML(html)
content/notebooks/2016-06-10-arcgis-intro.ipynb
ueapy/ueapy.github.io
mit
Illustrate signal & background windows Draw the average trace over all channels and illustrate the signal and background windows. This is done as a sanity check.
wfs = diagnostics.waveform_stats()
wf_mean = zeros(cfg.num_samples())
wf_var = zeros(cfg.num_samples())
for ich in range(0, wfs.high_gain_size()):
    wf_mean += calin.diagnostics.waveform.WaveformStatsVisitor.waveform_mean(wfs.high_gain(ich))
    wf_var += calin.diagnostics.waveform.WaveformStatsVisitor.waveform_var(...
examples/calib/gain estimation from photostatistics.ipynb
sfegan/calin
gpl-2.0
Calculate the gain of the high-gain channel Extract the mean and variance of the signal and background regions and decompose them to calculate the common-mode component that can be attributed to the intrinsic variance of the flasher, and the component in each channel that is independent of this. Calculate the gain in e...
enf = 1.14
smi = calin.diagnostics.functional.channel_mean(diagnostics.sig_stats().high_gain())
bmi = calin.diagnostics.functional.channel_mean(diagnostics.bkg_stats().high_gain())
svi = calin.diagnostics.functional.channel_var(diagnostics.sig_stats().high_gain())
bvi = calin.diagnostics.functional.channel_var(diagnostics.bkg...
examples/calib/gain estimation from photostatistics.ipynb
sfegan/calin
gpl-2.0
Print and display the results
for i, l in enumerate([g[i:i + 7] for i in range(0, len(g), 7)]):
    print("| Module %-2d" % cfg.configured_module_id(i), '|',
          ' | '.join(map(lambda x: '%5.3f' % x, l)), '|')
calin.plotting.plot_camera(g, cfg.camera_layout(), cfg.configured_channel_id_view())
title('Gain in high-gain channels')
examples/calib/gain estimation from photostatistics.ipynb
sfegan/calin
gpl-2.0
Calculate the gain of the low-gain channel This is mostly for fun, or more accurately as a sanity check. A better way is probably to estimate the high-to-low gain ratio using the position of the mean signal in each channel and extrapolate from the absolute gain of the high-gain channels. This is effectively done at the v...
lg_smi = calin.diagnostics.functional.channel_mean(diagnostics.sig_stats().low_gain())
lg_bmi = calin.diagnostics.functional.channel_mean(diagnostics.bkg_stats().low_gain())
lg_svi = calin.diagnostics.functional.channel_var(diagnostics.sig_stats().low_gain())
lg_bvi = calin.diagnostics.functional.channel_var(diagnostics.bkg_st...
examples/calib/gain estimation from photostatistics.ipynb
sfegan/calin
gpl-2.0
Calculate the high-to-low gain ratio First, by dividing the absolute gains calculated above; second, by dividing the means of the signal histograms
g_ratio = g / lg_g
for i, l in enumerate([g_ratio[i:i + 7] for i in range(0, len(g_ratio), 7)]):
    print("| Module %-2d" % cfg.configured_module_id(i), '|',
          ' | '.join(map(lambda x: '%5.3f' % x, l)), '|')
rg_ratio = (smi - bmi) / (lg_smi - lg_bmi)
for i, l in enumerate([rg_ratio[i:i + 7] for i in range(0, len(rg_ratio), ...
examples/calib/gain estimation from photostatistics.ipynb
sfegan/calin
gpl-2.0
Defining Regression
yh = lambda xs, ws: ws.dot(xs)
grad = lambda ys, yhs, xs: (1. / xs.shape[1]) * sum((yhs - ys) * xs).astype(float32)
delta = lambda gs, a: a * gs

def regress(y, x, alpha, T=1000, wh=None, **kwargs):
    wh = random.normal(2, size=2)
    whs = zeros((T, 2))
    whs[0, :] = wh
    for i in xrange(1, ...
_notebooks/.ipynb_checkpoints/Asymptotic Convergence of Gradient Descent for Linear Regression Least Squares Optimization-checkpoint.ipynb
theideasmith/theideasmith.github.io
mit
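The truncated `regress` routine above can be sketched as a small self-contained batch gradient descent for least squares. The data, learning rate, and iteration count below are my own illustrative choices, not the notebook's:

```python
import numpy as np

# Synthetic regression problem with known weights
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
X = rng.normal(size=(100, 2))
y = X @ true_w

# Batch gradient descent on the least-squares objective
w = np.zeros(2)
alpha = 0.1  # step size inside the convergence region
for _ in range(500):
    grad = (X.T @ (X @ w - y)) / len(y)  # gradient of mean squared error
    w -= alpha * grad
```

With a step size inside the convergence bound, `w` converges to `true_w`; too large a step makes the iterates diverge, as the following sections demonstrate.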
Running Regression above and Below the Upper Bound on $\alpha$ The theoretically derived bounds on $\alpha$ are $$\alpha \in \left( -2\frac{N}{|\mathbf{x}|^2}, 0 \right]$$ Other $\alpha$ values diverge
def plotDynamicsForAlpha(alpha, axTitle, T=1000, N=10):
    t = np.arange(T)
    mu, sig = statsRegr(y, x, alpha, T=T, N=N)
    plot(mu[:, 0], 'r:', label='$w_1$')
    plot(mu[:, 1], 'b:', label='$w_2$')
    fill_between(t,
                 mu[:, 0] + sig[:, 0],
                 mu[:, 0] - sig[:, 0],
                 facec...
_notebooks/.ipynb_checkpoints/Asymptotic Convergence of Gradient Descent for Linear Regression Least Squares Optimization-checkpoint.ipynb
theideasmith/theideasmith.github.io
mit
$\alpha = \sup A$
t = arange(0, 10)
ws = (0**t)*(w0[0] + x[0,:].dot(y)/linalg.norm(x[0,:])**2) + x[0,:].dot(y)/linalg.norm(x[0,:])**2
figure()
ax = subplot(111)
ax.set_title("alpha = sup A")
ax.plot(ws)

t = arange(0, 10)
ws = ((-1)**t)*w0[0] - (x[0,:].dot(y)/linalg.norm(x[0,:])**2) + (-2)**t*x[0,:].dot(y)/linalg.norm(x[0,:])**2
figure()
ax...
_notebooks/.ipynb_checkpoints/Asymptotic Convergence of Gradient Descent for Linear Regression Least Squares Optimization-checkpoint.ipynb
theideasmith/theideasmith.github.io
mit
(this takes a long time...)
""" some useful parallel iterated map constructs """ def icompare(pwds): import itertools res = itertools.imap(compare, pwds) for x in res: if x: return x return def uicompare(pwds, n=50): from multiprocess.dummy import Pool tp = Pool(n) res = tp.imap_unordered(compare, ...
kocham.ipynb
mmckerns/tuthpc
bsd-3-clause
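A runnable sketch of the `uicompare` pattern using the standard library's thread pool (`multiprocessing.dummy` rather than the `multiprocess` fork). The `compare` function here is a hypothetical stand-in for the notebook's checker, which is not shown in this excerpt:

```python
from multiprocessing.dummy import Pool  # thread-based pool from the stdlib

def compare(pwd):
    # Hypothetical check: return the candidate on a "match", else None
    return pwd if pwd == "secret" else None

def uicompare(pwds, n=4):
    # imap_unordered yields results in completion order, so the first
    # truthy result short-circuits the remaining work
    with Pool(n) as tp:
        for x in tp.imap_unordered(compare, pwds):
            if x:
                return x
    return None

found = uicompare(["a", "b", "secret", "c"])
```

Completion-order iteration is what makes the unordered variant faster here: the search stops as soon as any worker finds a match.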
Make sure to restart your kernel to ensure this change has taken place.
import numpy as np
import tensorflow as tf

tf.enable_eager_execution()
print(tf.__version__)
courses/machine_learning/deepdive/10_recommend/labs/content_based_by_hand.ipynb
turbomanage/training-data-analyst
apache-2.0
To start, we'll create our list of users, movies and features. While the users and movies represent elements in our database, for a content-based filtering method the features of the movies are likely hand-engineered and rely on domain knowledge to provide the best embedding space. Here we use the categories of Action,...
users = ['Ryan', 'Danielle', 'Vijay', 'Chris']
movies = ['Star Wars', 'The Dark Knight', 'Shrek', 'The Incredibles', 'Bleu', 'Memento']
features = ['Action', 'Sci-Fi', 'Comedy', 'Cartoon', 'Drama']

num_users = len(users)
num_movies = len(movies)
num_feats = len(features)
num_recommendations = 2
courses/machine_learning/deepdive/10_recommend/labs/content_based_by_hand.ipynb
turbomanage/training-data-analyst
apache-2.0
Initialize our users, movie ratings and features We'll need to enter the user's movie ratings and the k-hot encoded movie features matrix. Each row of the users_movies matrix represents a single user's rating (from 1 to 10) for each movie. A zero indicates that the user has not seen/rated that movie. The movies_feats m...
# each row represents a user's rating for the different movies
users_movies = tf.constant([
    [4, 6, 8, 0, 0, 0],
    [0, 0, 10, 0, 8, 3],
    [0, 6, 0, 0, 3, 7],
    [10, 9, 0, 5, 0, 2]], dtype=tf.float32)

# features of the movies one-hot encoded
# e.g. colum...
courses/machine_learning/deepdive/10_recommend/labs/content_based_by_hand.ipynb
turbomanage/training-data-analyst
apache-2.0
Computing the user feature matrix We will compute the user feature matrix; that is, a matrix containing each user's embedding in the five-dimensional feature space. We can calculate this as the matrix multiplication of the users_movies tensor with the movies_feats tensor. Implement this in the TODO below.
users_feats = # TODO
users_feats
courses/machine_learning/deepdive/10_recommend/labs/content_based_by_hand.ipynb
turbomanage/training-data-analyst
apache-2.0
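One way to fill in the TODO is a plain matrix multiplication, sketched here in NumPy. The `movies_feats` values below are my assumption (the lab's k-hot feature matrix is truncated in this excerpt), but the shape logic is the same: `(num_users, num_movies) @ (num_movies, num_feats)` gives a `(num_users, num_feats)` embedding:

```python
import numpy as np

# Ratings matrix from the lab (rows: users, columns: movies)
users_movies = np.array([
    [4, 6, 8, 0, 0, 0],
    [0, 0, 10, 0, 8, 3],
    [0, 6, 0, 0, 3, 7],
    [10, 9, 0, 5, 0, 2]], dtype=float)

# Hypothetical k-hot movie features: Action, Sci-Fi, Comedy, Cartoon, Drama
movies_feats = np.array([
    [1, 1, 0, 0, 0],   # Star Wars
    [1, 0, 0, 0, 1],   # The Dark Knight
    [0, 0, 1, 1, 0],   # Shrek
    [1, 0, 1, 1, 0],   # The Incredibles
    [0, 0, 0, 0, 1],   # Bleu
    [0, 0, 0, 0, 1]],  # Memento
    dtype=float)

# Each user's total rating mass per feature
users_feats = users_movies @ movies_feats
```

In TensorFlow the equivalent would be `tf.matmul(users_movies, movies_feats)`.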
Next we normalize each user feature vector to sum to 1. Normalizing isn't strictly necessary, but it makes rating magnitudes comparable between users.
users_feats = users_feats/tf.reduce_sum(users_feats,axis=1,keepdims=True) users_feats
courses/machine_learning/deepdive/10_recommend/labs/content_based_by_hand.ipynb
turbomanage/training-data-analyst
apache-2.0
Ranking feature relevance for each user We can use the users_feats computed above to represent the relative importance of each movie category for each user.
top_users_features = tf.nn.top_k(users_feats, num_feats)[1]
top_users_features

for i in range(num_users):
    feature_names = [features[index] for index in top_users_features[i]]
    print('{}: {}'.format(users[i], feature_names))
courses/machine_learning/deepdive/10_recommend/labs/content_based_by_hand.ipynb
turbomanage/training-data-analyst
apache-2.0
Determining movie recommendations. We'll now use the users_feats tensor we computed above to determine the movie ratings and recommendations for each user. To compute the projected ratings for each movie, we compute the similarity measure between the user's feature vector and the corresponding movie feature vector. W...
users_ratings = # TODO
users_ratings
courses/machine_learning/deepdive/10_recommend/labs/content_based_by_hand.ipynb
turbomanage/training-data-analyst
apache-2.0
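The projected ratings can be sketched as a dot product between each user embedding and each movie's feature vector, i.e. `users_feats @ movies_feats.T`. The tiny example below (2 users, 3 movies, 2 features, values hypothetical) shows the mechanics:

```python
import numpy as np

# Normalized user embeddings (rows: users, columns: features)
users_feats = np.array([[0.8, 0.2],
                        [0.3, 0.7]])

# k-hot movie features (rows: movies, columns: features)
movies_feats = np.array([[1, 0],
                         [0, 1],
                         [1, 1]], dtype=float)

# Dot product of every user vector with every movie vector
users_ratings = users_feats @ movies_feats.T
```

In TensorFlow this would be `tf.matmul(users_feats, tf.transpose(movies_feats))`.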
The computation above finds the similarity measure between each user and each movie in our database. To focus only on the ratings for new movies, we apply a mask to the all_users_ratings matrix. If a user has already rated a movie, we ignore that rating. This way, we only focus on ratings for previously unseen/unrate...
users_ratings_new = tf.where(tf.equal(users_movies, tf.zeros_like(users_movies)),
                             users_ratings,
                             tf.zeros_like(tf.cast(users_movies, tf.float32)))
users_ratings_new
courses/machine_learning/deepdive/10_recommend/labs/content_based_by_hand.ipynb
turbomanage/training-data-analyst
apache-2.0
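The masking step has the same shape in NumPy, with `np.where` standing in for `tf.where` (values below are hypothetical): wherever the user already rated a movie, the projected rating is zeroed out, leaving only scores for unseen movies:

```python
import numpy as np

users_movies = np.array([[4.0, 0.0, 8.0]])   # 0 means "not yet rated"
users_ratings = np.array([[0.9, 0.5, 0.7]])  # projected similarity scores

# Keep a projected rating only where the original entry is 0 (unrated)
users_ratings_new = np.where(users_movies == 0, users_ratings, 0.0)
```

Only the middle movie survives the mask, since it is the only one the user has not rated.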