mne-tools/mne-tools.github.io | 0.24/_downloads/e23ed246a9a354f899dfb3ce3b06e194/10_overview.ipynb | bsd-3-clause
import os
import numpy as np
import mne
"""
Explanation: Overview of MEG/EEG analysis with MNE-Python
This tutorial covers the basic EEG/MEG pipeline for event-related analysis:
loading data, epoching, averaging, plotting, and estimating cortical activity
from sensor data. It introduces the core MNE-Python data structures
~mne.io.Raw, ~mne.Epochs, ~mne.Evoked, and ~mne.SourceEstimate, and
covers a lot of ground fairly quickly (at the expense of depth). Subsequent
tutorials address each of these topics in greater detail.
We begin by importing the necessary Python modules:
End of explanation
"""
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_filt-0-40_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file)
"""
Explanation: Loading data
MNE-Python data structures are based around the FIF file format from
Neuromag, but there are reader functions for a wide variety of other
data formats <data-formats>. MNE-Python also has interfaces to a
variety of publicly available datasets <datasets>,
which MNE-Python can download and manage for you.
We'll start this tutorial by loading one of the example datasets (called
"sample-dataset"), which contains EEG and MEG data from one subject
performing an audiovisual experiment, along with structural MRI scans for
that subject. The mne.datasets.sample.data_path function will automatically
download the dataset if it isn't found in one of the expected locations, then
return the directory path to the dataset (see the documentation of
~mne.datasets.sample.data_path for a list of places it checks before
downloading). Note also that for this tutorial to run smoothly on our
servers, we're using a filtered and downsampled version of the data
(:file:sample_audvis_filt-0-40_raw.fif), but an unfiltered version
(:file:sample_audvis_raw.fif) is also included in the sample dataset and
could be substituted here when running the tutorial locally.
End of explanation
"""
print(raw)
print(raw.info)
"""
Explanation: By default, ~mne.io.read_raw_fif displays some information about the file
it's loading; for example, here it tells us that there are four "projection
items" in the file along with the recorded data; those are :term:SSP
projectors <projector> calculated to remove environmental noise from the MEG
signals, plus a projector to mean-reference the EEG channels; these are
discussed in the tutorial tut-projectors-background. In addition to
the information displayed during loading, you can get a glimpse of the basic
details of a ~mne.io.Raw object by printing it; even more is available by
printing its info attribute (a dictionary-like object <mne.Info> that
is preserved across ~mne.io.Raw, ~mne.Epochs, and ~mne.Evoked objects).
The info data structure keeps track of channel locations, applied
filters, projectors, etc. Notice especially the chs entry, showing that
MNE-Python detects different sensor types and handles each appropriately. See
tut-info-class for more on the ~mne.Info class.
End of explanation
"""
raw.plot_psd(fmax=50)
raw.plot(duration=5, n_channels=30)
"""
Explanation: ~mne.io.Raw objects also have several built-in plotting methods; here we
show the power spectral density (PSD) for each sensor type with
~mne.io.Raw.plot_psd, as well as a plot of the raw sensor traces with
~mne.io.Raw.plot. In the PSD plot, we'll only plot frequencies below 50 Hz
(since our data are low-pass filtered at 40 Hz). In interactive Python
sessions, ~mne.io.Raw.plot is interactive and allows scrolling, scaling,
bad channel marking, annotations, projector toggling, etc.
End of explanation
"""
# set up and fit the ICA
ica = mne.preprocessing.ICA(n_components=20, random_state=97, max_iter=800)
ica.fit(raw)
ica.exclude = [1, 2] # details on how we picked these are omitted here
ica.plot_properties(raw, picks=ica.exclude)
"""
Explanation: Preprocessing
MNE-Python supports a variety of preprocessing approaches and techniques
(maxwell filtering, signal-space projection, independent components analysis,
filtering, downsampling, etc); see the full list of capabilities in the
:mod:mne.preprocessing and :mod:mne.filter submodules. Here we'll clean
up our data by performing independent components analysis
(~mne.preprocessing.ICA); for brevity we'll skip the steps that helped us
determine which components best capture the artifacts (see
tut-artifact-ica for a detailed walk-through of that process).
End of explanation
"""
orig_raw = raw.copy()
raw.load_data()
ica.apply(raw)
# show some frontal channels to clearly illustrate the artifact removal
chs = ['MEG 0111', 'MEG 0121', 'MEG 0131', 'MEG 0211', 'MEG 0221', 'MEG 0231',
'MEG 0311', 'MEG 0321', 'MEG 0331', 'MEG 1511', 'MEG 1521', 'MEG 1531',
'EEG 001', 'EEG 002', 'EEG 003', 'EEG 004', 'EEG 005', 'EEG 006',
'EEG 007', 'EEG 008']
chan_idxs = [raw.ch_names.index(ch) for ch in chs]
orig_raw.plot(order=chan_idxs, start=12, duration=4)
raw.plot(order=chan_idxs, start=12, duration=4)
"""
Explanation: Once we're confident about which component(s) we want to remove, we pass them
as the exclude parameter and then apply the ICA to the raw signal. The
~mne.preprocessing.ICA.apply method requires the raw data to be loaded into
memory (by default it's only read from disk as-needed), so we'll use
~mne.io.Raw.load_data first. We'll also make a copy of the ~mne.io.Raw
object so we can compare the signal before and after artifact removal
side-by-side:
End of explanation
"""
events = mne.find_events(raw, stim_channel='STI 014')
print(events[:5]) # show the first 5
"""
Explanation: Detecting experimental events
The sample dataset includes several :term:"STIM" channels <stim channel>
that recorded electrical signals sent from the stimulus delivery computer (as
brief DC shifts / squarewave pulses). These pulses (often called "triggers")
are used in this dataset to mark experimental events: stimulus onset,
stimulus type, and participant response (button press). The individual STIM
channels are combined onto a single channel, in such a way that voltage
levels on that channel can be unambiguously decoded as a particular event
type. On older Neuromag systems (such as that used to record the sample data)
this summation channel was called STI 014, so we can pass that channel
name to the mne.find_events function to recover the timing and identity of
the stimulus events.
End of explanation
"""
event_dict = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,
'visual/right': 4, 'smiley': 5, 'buttonpress': 32}
"""
Explanation: The resulting events array is an ordinary 3-column :class:NumPy array
<numpy.ndarray>, with sample number in the first column and integer event ID
in the last column; the middle column is usually ignored. Rather than keeping
track of integer event IDs, we can provide an event dictionary that maps
the integer IDs to experimental conditions or events. In this dataset, the
mapping looks like this:
+----------+----------------------------------------------------------+
| Event ID | Condition |
+==========+==========================================================+
| 1 | auditory stimulus (tone) to the left ear |
+----------+----------------------------------------------------------+
| 2 | auditory stimulus (tone) to the right ear |
+----------+----------------------------------------------------------+
| 3 | visual stimulus (checkerboard) to the left visual field |
+----------+----------------------------------------------------------+
| 4 | visual stimulus (checkerboard) to the right visual field |
+----------+----------------------------------------------------------+
| 5 | smiley face (catch trial) |
+----------+----------------------------------------------------------+
| 32 | subject button press |
+----------+----------------------------------------------------------+
End of explanation
"""
fig = mne.viz.plot_events(events, event_id=event_dict, sfreq=raw.info['sfreq'],
first_samp=raw.first_samp)
"""
Explanation: Event dictionaries like this one are used when extracting epochs from
continuous data; the / character in the dictionary keys allows pooling
across conditions by requesting partial condition descriptors (i.e.,
requesting 'auditory' will select all epochs with Event IDs 1 and 2;
requesting 'left' will select all epochs with Event IDs 1 and 3). An
example of this is shown in the next section. There is also a convenient
~mne.viz.plot_events function for visualizing the distribution of events
across the duration of the recording (to make sure event detection worked as
expected). Here we'll also make use of the ~mne.Info attribute to get the
sampling frequency of the recording (so our x-axis will be in seconds instead
of in samples).
End of explanation
"""
reject_criteria = dict(mag=4000e-15, # 4000 fT
grad=4000e-13, # 4000 fT/cm
eeg=150e-6, # 150 µV
eog=250e-6) # 250 µV
"""
Explanation: For paradigms that are not event-related (e.g., analysis of resting-state
data), you can extract regularly spaced (possibly overlapping) spans of data
by creating events using mne.make_fixed_length_events and then proceeding
with epoching as described in the next section.
Epoching continuous data
The ~mne.io.Raw object and the events array are the bare minimum needed to
create an ~mne.Epochs object, which we create with the ~mne.Epochs class
constructor. Here we'll also specify some data quality constraints: we'll
reject any epoch where peak-to-peak signal amplitude is beyond reasonable
limits for that channel type. This is done with a rejection dictionary; you
may include or omit thresholds for any of the channel types present in your
data. The values given here are reasonable for this particular dataset, but
may need to be adapted for different hardware or recording conditions. For a
more automated approach, consider using the autoreject package_.
End of explanation
"""
epochs = mne.Epochs(raw, events, event_id=event_dict, tmin=-0.2, tmax=0.5,
reject=reject_criteria, preload=True)
"""
Explanation: We'll also pass the event dictionary as the event_id parameter (so we can
work with easy-to-pool event labels instead of the integer event IDs), and
specify tmin and tmax (the time relative to each event at which to
start and end each epoch). As mentioned above, by default ~mne.io.Raw and
~mne.Epochs data aren't loaded into memory (they're accessed from disk only
when needed), but here we'll force loading into memory using the
preload=True parameter so that we can see the results of the rejection
criteria being applied:
End of explanation
"""
conds_we_care_about = ['auditory/left', 'auditory/right',
'visual/left', 'visual/right']
epochs.equalize_event_counts(conds_we_care_about) # this operates in-place
aud_epochs = epochs['auditory']
vis_epochs = epochs['visual']
del raw, epochs # free up memory
"""
Explanation: Next we'll pool across left/right stimulus presentations so we can compare
auditory versus visual responses. To avoid biasing our signals to the left or
right, we'll use ~mne.Epochs.equalize_event_counts first to randomly sample
epochs from each condition to match the number of epochs present in the
condition with the fewest good epochs.
End of explanation
"""
aud_epochs.plot_image(picks=['MEG 1332', 'EEG 021'])
"""
Explanation: Like ~mne.io.Raw objects, ~mne.Epochs objects also have a number of
built-in plotting methods. One is ~mne.Epochs.plot_image, which shows each
epoch as one row of an image map, with color representing signal magnitude;
the average evoked response and the sensor location are shown below the
image:
End of explanation
"""
frequencies = np.arange(7, 30, 3)
power = mne.time_frequency.tfr_morlet(aud_epochs, n_cycles=2, return_itc=False,
freqs=frequencies, decim=3)
power.plot(['MEG 1332'])
"""
Explanation: <div class="alert alert-info"><h4>Note</h4><p>Both `~mne.io.Raw` and `~mne.Epochs` objects have `~mne.Epochs.get_data`
methods that return the underlying data as a
:class:`NumPy array <numpy.ndarray>`. Both methods have a ``picks``
parameter for subselecting which channel(s) to return; ``raw.get_data()``
has additional parameters for restricting the time domain. The resulting
matrices have dimension ``(n_channels, n_times)`` for `~mne.io.Raw` and
``(n_epochs, n_channels, n_times)`` for `~mne.Epochs`.</p></div>
Time-frequency analysis
The :mod:mne.time_frequency submodule provides implementations of several
algorithms to compute time-frequency representations, power spectral density,
and cross-spectral density. Here, for example, we'll compute for the auditory
epochs the induced power at different frequencies and times, using Morlet
wavelets. On this dataset the result is not especially informative (it just
shows the evoked "auditory N100" response); see here
<inter-trial-coherence> for a more extended example on a dataset with richer
frequency content.
End of explanation
"""
aud_evoked = aud_epochs.average()
vis_evoked = vis_epochs.average()
mne.viz.plot_compare_evokeds(dict(auditory=aud_evoked, visual=vis_evoked),
legend='upper left', show_sensors='upper right')
"""
Explanation: Estimating evoked responses
Now that we have our conditions in aud_epochs and vis_epochs, we can
get an estimate of evoked responses to auditory versus visual stimuli by
averaging together the epochs in each condition. This is as simple as calling
the ~mne.Epochs.average method on the ~mne.Epochs object, and then using
a function from the :mod:mne.viz module to compare the global field power
for each sensor type of the two ~mne.Evoked objects:
End of explanation
"""
aud_evoked.plot_joint(picks='eeg')
aud_evoked.plot_topomap(times=[0., 0.08, 0.1, 0.12, 0.2], ch_type='eeg')
"""
Explanation: We can also get a more detailed view of each ~mne.Evoked object using other
plotting methods such as ~mne.Evoked.plot_joint or
~mne.Evoked.plot_topomap. Here we'll examine just the EEG channels, and see
the classic auditory evoked N100-P200 pattern over dorso-frontal electrodes,
then plot scalp topographies at some additional arbitrary times:
End of explanation
"""
evoked_diff = mne.combine_evoked([aud_evoked, vis_evoked], weights=[1, -1])
evoked_diff.pick_types(meg='mag').plot_topo(color='r', legend=False)
"""
Explanation: Evoked objects can also be combined to show contrasts between conditions,
using the mne.combine_evoked function. A simple difference can be
generated by passing weights=[1, -1]. We'll then plot the difference wave
at each sensor using ~mne.Evoked.plot_topo:
End of explanation
"""
# load inverse operator
inverse_operator_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis-meg-oct-6-meg-inv.fif')
inv_operator = mne.minimum_norm.read_inverse_operator(inverse_operator_file)
# set signal-to-noise ratio (SNR) to compute regularization parameter (λ²)
snr = 3.
lambda2 = 1. / snr ** 2
# generate the source time course (STC)
stc = mne.minimum_norm.apply_inverse(vis_evoked, inv_operator,
lambda2=lambda2,
method='MNE') # or dSPM, sLORETA, eLORETA
"""
Explanation: Inverse modeling
Finally, we can estimate the origins of the evoked activity by projecting the
sensor data into this subject's :term:source space (a set of points either
on the cortical surface or within the cortical volume of that subject, as
estimated by structural MRI scans). MNE-Python supports lots of ways of doing
this (dynamic statistical parametric mapping, dipole fitting, beamformers,
etc.); here we'll use minimum-norm estimation (MNE) to generate a continuous
map of activation constrained to the cortical surface. MNE uses a linear
:term:inverse operator to project EEG+MEG sensor measurements into the
source space. The inverse operator is computed from the
:term:forward solution for this subject and an estimate of the
covariance of sensor measurements <tut-compute-covariance>. For this
tutorial we'll skip those computational steps and load a pre-computed inverse
operator from disk (it's included with the sample data
<sample-dataset>). Because this "inverse problem" is underdetermined (there
is no unique solution), here we further constrain the solution by providing a
regularization parameter specifying the relative smoothness of the current
estimates in terms of a signal-to-noise ratio (where "noise" here is akin to
baseline activity level across all of cortex).
End of explanation
"""
# path to subjects' MRI files
subjects_dir = os.path.join(sample_data_folder, 'subjects')
# plot the STC
stc.plot(initial_time=0.1, hemi='split', views=['lat', 'med'],
subjects_dir=subjects_dir)
"""
Explanation: Finally, in order to plot the source estimate on the subject's cortical
surface we'll also need the path to the sample subject's structural MRI files
(the subjects_dir):
End of explanation
"""
ddebrunner/streamsx.dsx.notebooks | HelloWorld.ipynb | apache-2.0
from streamsx.topology.topology import Topology
from streamsx.topology.context import *
topo = Topology("hello_dsx")
hw = topo.source(["Hello", "DSX!!"])
hw.print()
"""
Explanation: Hello World with Streaming Analytics service
Create a Hello World streaming application that simply prints "Hello" and "DSX!!" to the PE console of the application.
The application runs as a job in the Streaming Analytic service running on IBM Bluemix.
End of explanation
"""
service_name='debrunne-streams2'
import json
with open('vcap_services.json') as json_data:
vs = json.load(json_data)
"""
Explanation: Now the application can be submitted to the service; first, we need to declare the VCAP services used to connect to it.
This picks up the VCAP services from the file vcap_services.json created by the 'Create VCAP services' notebook, which must be run once before running this notebook.
End of explanation
"""
cfg = {}
cfg[ConfigParams.VCAP_SERVICES] = vs
cfg[ConfigParams.SERVICE_NAME] = service_name
submit("ANALYTICS_SERVICE", topo, cfg)
print("Done - Submitted job!")
"""
Explanation: Now we submit the topo object that represents the application's topology to the ANALYTICS_SERVICE context, passing in the VCAP services information in the configuration.
End of explanation
"""
SATHVIKRAJU/Inferential_Statistics | Racial_disc.ipynb | mit
import pandas as pd
import numpy as np
from scipy import stats
data = pd.io.stata.read_stata('data/us_job_market_discrimination.dta')
# number of callbacks for black-sounding names
df_race_b=(data[data.race=='b'])
no_calls_b=sum(df_race_b.call)
#data['race'].count()
#data['call'].count()
df_race_w=(data[data.race=='w'])
no_calls_w=sum(df_race_w.call)
print(len(df_race_b))
print(len(df_race_w))
print("The number of calls for a black sounding person is %d and the number of calls for a white sounding person is %d" %(no_calls_b,no_calls_w))
data.head()
prob_b=no_calls_b/len(df_race_b)
prob_w=no_calls_w/len(df_race_w)
print(prob_b,prob_w)
difference_prob=abs(prob_b - prob_w)
difference_prob
"""
Explanation: Examining Racial Discrimination in the US Job Market
Background
Racial discrimination continues to be pervasive in cultures throughout the world. Researchers examined the level of racial discrimination in the United States labor market by randomly assigning identical résumés to black-sounding or white-sounding names and observing the impact on requests for interviews from employers.
Data
In the dataset provided, each row represents a resume. The 'race' column has two values, 'b' and 'w', indicating black-sounding and white-sounding. The column 'call' has two values, 1 and 0, indicating whether the resume received a call from employers or not.
Note that the 'b' and 'w' values in race are assigned randomly to the resumes when presented to the employer.
Exercises
You will perform a statistical analysis to establish whether race has a significant impact on the rate of callbacks for resumes.
Answer the following questions in this notebook below and submit to your Github account.
What test is appropriate for this problem? Does CLT apply?
What are the null and alternate hypotheses?
Compute margin of error, confidence interval, and p-value.
Write a story describing the statistical significance in the context or the original problem.
Does your analysis mean that race/name is the most important factor in callback success? Why or why not? If not, how would you amend your analysis?
You can include written notes in notebook cells using Markdown:
- In the control panel at the top, choose Cell > Cell Type > Markdown
- Markdown syntax: http://nestacms.com/docs/creating-content/markdown-cheat-sheet
Resources
Experiment information and data source: http://www.povertyactionlab.org/evaluation/discrimination-job-market-united-states
Scipy statistical methods: http://docs.scipy.org/doc/scipy/reference/stats.html
Markdown syntax: http://nestacms.com/docs/creating-content/markdown-cheat-sheet
End of explanation
"""
standard_error = np.sqrt((prob_w*(1 - prob_w)/(len(df_race_w))) + (prob_b*(1 - prob_b) /(len(df_race_b))))
#print(standard_error)
critical_value=1.96 #95% confidence Interval from z-table
Margin_error=abs(standard_error*critical_value)
print("The proportion of calls received for White sounding names for thier CV's are in between %F and %F" % (difference_prob + Margin_error,difference_prob - Margin_error))
"""
Explanation: Question 1:
A z-test is more appropriate for this problem than a t-test. The outcome (callback or no callback) is a binary categorical variable, so we compare the callback proportions of the two groups rather than their means; the hypothesis concerns the difference between two proportions, for which the two-proportion z-test applies.
The Central Limit Theorem applies to sample proportions as well: both samples are large, with ample successes and failures, so the sampling distribution of the difference in proportions is approximately normal.
Question 2:
The null hypothesis (H0) is that there is no difference between the proportions of black-sounding and white-sounding names being called for interviews.
The alternate hypothesis (H1) is that there is a significant difference between the two proportions.
Question 3:
The margin of error can be calculated as:
End of explanation
"""
from statsmodels.stats.weightstats import ztest
z_test = ztest(df_race_w.call,df_race_b.call, alternative = 'two-sided')
print("The p-value is given by %F and the z -score is given by %F" %(z_test[1],z_test[0]))
"""
Explanation: Question 4:
Calculating the p-value
End of explanation
"""
ceos-seo/data_cube_notebooks | notebooks/UN_SDG/UN_SDG_11_3_1.ipynb | apache-2.0
def sdg_11_3_1(land_consumption, population_growth_rate):
return land_consumption/population_growth_rate
"""
Explanation: <a id="top"></a>
UN SDG Indicator 11.3.1:<br> Ratio of Land Consumption Rate to Population Growth Rate
<hr>
Notebook Summary
The United Nations have prescribed 17 "Sustainable Development Goals" (SDGs). This notebook attempts to monitor SDG Indicator 11.3.1 - ratio of land consumption rate to population growth rate.
UN SDG Indicator 11.3.1 provides a metric for determining whether or not land consumption is scaling responsibly with the growth of the population in a given region.
Case Study
This notebook conducts its analysis over Dar es Salaam, Tanzania, with reference years of 2000 and 2015.
Index
Define Formulas for Calculating the Indicator
Import Dependencies and Connect to the Data Cube
Show the Area
Determine Population Growth Rate
Determine Land Consumption Rate
Build Composites for the First and Last Years
Filter Out Everything Except the Survey Region
Determine Urban Extent
SDG Indicator 11.3.1
<a id="define_formulas"></a>Define Formulas for Calculating the Indicator ▴
SDG Indicator 11.3.1
The ratio between land consumption and population growth rate.
$$ SDG_{11.3.1} = \frac{LandConsumptionRate}{PopulationGrowthRate} $$
End of explanation
"""
import numpy as np
def population_growth_rate_pct(pop_t1 = None, pop_t2 = None, y = None):
"""
Calculates the average percent population growth rate per year.
Parameters
----------
pop_t1: numeric
The population of the first year.
pop_t2: numeric
The population of the last year.
y: int
The numbers of years between t1 and t2.
Returns
-------
pop_growth_rate: float
The average percent population growth rate per year.
"""
return 10**(np.log10(pop_t2/pop_t1)/y) - 1
def population_growth_rate(pop_t1 = None, pop_t2 = None, y = None):
"""
Calculates the average increase in population per year.
Parameters
----------
pop_t1: numeric
The population of the first year.
pop_t2: numeric
The population of the last year.
y: int
The numbers of years between t1 and t2.
Returns
-------
pop_growth_rate: float
The average increase in population per year.
"""
return (pop_t2 - pop_t1) / y
"""
Explanation: Population Growth Rate
For calculating the indicator value for this SDG, the formula is the simple average yearly change in population.
For calculating the average yearly population growth rate as a percent (e.g. to show on maps), the following formula
is used:
$$ PopulationGrowthRate = 10 ^ {LOG( Pop_{t_2} \space / \space Pop_{t_1}) \space / \space {y}} - 1 $$
Where:
$Pop_{t_2}$ - Total population within the area in the current/final year
$Pop_{t_1}$ - Total population within the area in the past/initial year
$y$ - The number of years between the two measurement periods $t = Year_{t_2} - Year_{t_1}$
End of explanation
"""
def land_consumption_rate(area_t1 = None, area_t2 = None, y = None):
"""
Calculates the average increase in land consumption per year.
Parameters
----------
area_t1: numeric
The number of urbanized pixels for the first year.
area_t2: numeric
The number of urbanized pixels for the last year.
y: int
The numbers of years between t1 and t2.
Returns
-------
land_consumption_rate: float
The average increase in land consumption per year.
"""
return (area_t2 - area_t1) / y
"""
Explanation: Land Consumption Rate
For calculating the indicator value for this SDG, the formula is the simple average yearly change in land consumption.
End of explanation
"""
# Supress Some Warnings
import warnings
warnings.filterwarnings('ignore')
# Allow importing of our utilities.
import sys
import os
sys.path.append(os.environ.get('NOTEBOOK_ROOT'))
# Prepare for plotting.
import matplotlib.pyplot as plt
%matplotlib inline
import datacube
dc = datacube.Datacube()
"""
Explanation: <a id="import"></a>Import Dependencies and Connect to the Data Cube ▴
End of explanation
"""
# Dar es Salaam, Tanzania
latitude_extents = (-6.95, -6.70)
longitude_extents = (39.05, 39.45)
from utils.data_cube_utilities.dc_display_map import display_map
display_map(latitude = latitude_extents, longitude = longitude_extents)
"""
Explanation: <a id="show_area"></a>Show the Area ▴
End of explanation
"""
CSV_FILE_PATH = "../data/Tanzania/population_shape/ADM2_GPWV4_population.csv"
SHAPE_FILE_PATH = "../data/Tanzania/population_shape/TZA_ADM2.geojson"
import geopandas as gpd
import pandas as pd
first_year, last_year = 2000, 2015
first_year_pop_col = 'gpw_v4_count.{}.sum'.format(first_year)
last_year_pop_col = 'gpw_v4_count.{}.sum'.format(last_year)
shape_data = gpd.read_file(SHAPE_FILE_PATH)
shape_data = shape_data[['Name', 'geometry']]
pop_data = pd.read_csv(CSV_FILE_PATH)
pop_data = pop_data[[first_year_pop_col, last_year_pop_col, 'Name']]
pop_data = pop_data.rename({first_year_pop_col: 'pop_t1',
last_year_pop_col: 'pop_t2'}, axis='columns')
country_data = shape_data.merge(pop_data, on='Name')
def shapely_geom_intersects_rect(geom, x, y):
"""
Determines whether the bounding box of a Shapely polygon intersects
a rectangle defined by `x` and `y` extents.
Parameters
----------
geom: shapely.geometry.polygon.Polygon
The object to determine intersection with the region defined by `x` and `y`.
x, y: list-like
The x and y extents, expressed as 2-tuples.
Returns
-------
intersects: bool
Whether the bounding box of `geom` intersects the rectangle.
"""
geom_bounds = np.array(list(geom.bounds))
x_shp, y_shp = geom_bounds[[0,2]], geom_bounds[[1,3]]
x_in_range = (x_shp[0] < x[1]) & (x[0] < x_shp[1])
y_in_range = (y_shp[0] < y[1]) & (y[0] < y_shp[1])
return x_in_range & y_in_range
# `intersecting_shapes` can be examined to determine which districts to ultimately keep.
intersecting_shapes = country_data[country_data.apply(
lambda row: shapely_geom_intersects_rect(row.geometry, longitude_extents, latitude_extents),
axis=1).values]
"""
Explanation: <a id="pop_rate"></a>Determine Population Growth Rate ▴
Load Population Data
<br>Shape files are based on GPW estimates. You can derive similar population figures from AidData GeoQuery at
- http://geo.aiddata.org/query
End of explanation
"""
districts = ['Kinondoni', 'Ilala', 'Temeke']
districts_mask = country_data.Name.isin(districts)
country_data.plot(column=districts_mask, cmap='jet', figsize=(10,10))
survey_region = country_data[districts_mask]
plt.show()
"""
Explanation: Show the Survey Region in the Context of the Country
End of explanation
"""
survey_region.plot( figsize = (10,10))
plt.show()
"""
Explanation: Show the Survey Region Alone
End of explanation
"""
from shapely.ops import cascaded_union
disjoint_areas = cascaded_union([*survey_region.geometry]) ## Top Right is 'disjoint' from bottom left.
"""
Explanation: Determine the Shape that Masks the Survey Region
End of explanation
"""
time_range = last_year - first_year
country_data = country_data.assign(population_growth_rate = \
population_growth_rate_pct(country_data["pop_t1"], country_data["pop_t2"], time_range))
"""
Explanation: Calculate Population Growth Rate
Calcuate Population Growth Rate for All Regions Individually
End of explanation
"""
fig, ax = plt.subplots(figsize = (10, 10))
ax.set_title("Population Growth Rate {}-{}".format(first_year, last_year))
ax1 = country_data.plot(column = "population_growth_rate", ax = ax, legend=True)
survey_region_total_pop_t1 = survey_region["pop_t1"].sum()
survey_region_total_pop_t2 = survey_region["pop_t2"].sum()
pop_growth = population_growth_rate(pop_t1 = survey_region_total_pop_t1,
pop_t2 = survey_region_total_pop_t2,
y = time_range)
print("Annual Population Growth Rate of the Survey Region: {:.2f} People per Year".format(pop_growth))
"""
Explanation: Visualize Population Growth Rate
End of explanation
"""
measurements = ["red", "green", "blue", "nir", "swir1", "swir2", "pixel_qa"]
# Determine the bounding box of the survey region to load data for.
min_lon, min_lat, max_lon, max_lat = disjoint_areas.bounds
lat = (min_lat, max_lat)
lon = (min_lon, max_lon)
product_1 = 'ls7_usgs_sr_scene'
platform_1 = 'LANDSAT_7'
collection_1 = 'c1'
level_1 = 'l2'
product_2 = 'ls8_usgs_sr_scene'
platform_2 = 'LANDSAT_8'
collection_2 = 'c1'
level_2 = 'l2'
# For a full test, each time extent should be 1 full year.
time_extents_t1 = ('2000-01-01', '2000-01-31')
time_extents_t2 = ('2017-01-01', '2017-01-31')
load_params = dict(measurements = measurements,
latitude = lat, longitude = lon, \
dask_chunks={'time':1, 'latitude':1000, 'longitude':1000})
"""
Explanation: <a id="land_consumption_rate"></a>Determine Land Consumption Rate ▴
Specify Load Parameters
End of explanation
"""
from utils.data_cube_utilities.aggregate import xr_scale_res
from utils.data_cube_utilities.clean_mask import landsat_clean_mask_full
from utils.data_cube_utilities.dc_mosaic import create_median_mosaic
# The fraction of the original resolution to use to reduce memory consumption.
frac_res = 0.25
dataset_t1 = dc.load(**load_params, product=product_1, time=time_extents_t1)
clean_mask_t1 = landsat_clean_mask_full(dc, dataset_t1, product=product_1, platform=platform_1,
collection=collection_1, level=level_1)
composite_t1 = create_median_mosaic(dataset_t1, clean_mask_t1.data).compute()
composite_t1 = xr_scale_res(composite_t1, frac_res=frac_res)
composite_t1.attrs = dataset_t1.attrs
del dataset_t1, clean_mask_t1
dataset_t2 = dc.load(**load_params, product=product_2, time=time_extents_t2)
clean_mask_t2 = landsat_clean_mask_full(dc, dataset_t2, product=product_2, platform=platform_2,
collection=collection_2, level=level_2)
composite_t2 = create_median_mosaic(dataset_t2, clean_mask_t2.data).compute()
composite_t2 = xr_scale_res(composite_t2, frac_res=frac_res)
composite_t2.attrs = dataset_t2.attrs
del dataset_t2, clean_mask_t2
"""
Explanation: <a id="false_color_composites"></a>Build Composites for the First and Last Years ▴
End of explanation
"""
from utils.data_cube_utilities.dc_rgb import rgb
rgb(composite_t1, bands = ["nir","swir1","blue"], width = 15)
plt.title('Year {}'.format(first_year))
plt.show()
"""
Explanation: First Year
False Color Composite [nir, swir1, blue]
End of explanation
"""
rgb(composite_t2, bands = ["nir","swir1","blue"], width = 15)
plt.title('Year {}'.format(last_year))
plt.show()
"""
Explanation: Last Year
False Color Composite [nir, swir1, blue]
End of explanation
"""
import rasterio.features
from datacube.utils import geometry
import xarray as xr
def generate_mask(loaded_dataset: xr.Dataset,
                  geo_polygon: geometry.Geometry):
    # Rasterize the polygon onto the dataset's grid, yielding a boolean
    # mask that is True inside the survey region.
    return rasterio.features.geometry_mask(
        [geo_polygon],
        out_shape=loaded_dataset.geobox.shape,
        transform=loaded_dataset.geobox.affine,
        all_touched=False,
        invert=True)
mask = generate_mask(composite_t1, disjoint_areas)
filtered_composite_t1 = composite_t1.where(mask)
del composite_t1
filtered_composite_t2 = composite_t2.where(mask)
del composite_t2
"""
Explanation: <a id="filter_survey_region"></a>Filter Out Everything Except the Survey Region ▴
End of explanation
"""
rgb(filtered_composite_t1, bands = ["nir","swir1","blue"],width = 15)
plt.show()
"""
Explanation: First Year Survey Region
False Color Composite [nir, swir1, blue]
End of explanation
"""
rgb(filtered_composite_t2, bands = ["nir","swir1","blue"],width = 15)
plt.show()
"""
Explanation: Last Year Survey Region
False Color Composite [nir, swir1, blue]
End of explanation
"""
def NDBI(dataset):
return (dataset.swir1 - dataset.nir)/(dataset.swir1 + dataset.nir)
"""
Explanation: <a id="urban_extent"></a>Determine Urban Extent ▴
Urbanization Index Option 1: NDBI
The Normalized Difference Built-up Index (NDBI) is quick to calculate, but is sometimes inaccurate (e.g. in very arid regions).
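As a quick numerical check of the formula, the same calculation can be run on a plain namespace standing in for an `xarray.Dataset` (any object exposing array-valued `swir1` and `nir` attributes works):

```python
import numpy as np
from types import SimpleNamespace

def NDBI(dataset):
    # Built-up surfaces reflect more strongly in SWIR1 than in NIR,
    # so NDBI is positive over built-up areas and negative over vegetation.
    return (dataset.swir1 - dataset.nir) / (dataset.swir1 + dataset.nir)

# Stand-in for an xarray Dataset with two pixels of reflectance values.
pixels = SimpleNamespace(nir=np.array([0.40, 0.20]),
                         swir1=np.array([0.20, 0.30]))
print(NDBI(pixels))  # first pixel vegetated (negative), second built-up (positive)
```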
End of explanation
"""
from utils.data_cube_utilities.dc_fractional_coverage_classifier import frac_coverage_classify
"""
Explanation: Urbanization Index Option 2: Fractional Cover Bare Soil
The fractional cover bare soil index is very slow to calculate in its current implementation, but is often more accurate than NDBI.
End of explanation
"""
# Can be 'NDBI' or 'Fractional Cover Bare Soil'.
urbanization_index = 'Fractional Cover Bare Soil'
urban_index_func = None
urban_index_range = None
if urbanization_index == 'NDBI':
urban_index_func = NDBI
urban_index_range = [-1, 1]
if urbanization_index == 'Fractional Cover Bare Soil':
urban_index_func = lambda dataset: frac_coverage_classify(dataset).bs
urban_index_range = [0, 100]
plot_kwargs = dict(vmin=urban_index_range[0], vmax=urban_index_range[1])
"""
Explanation: Choose the Urbanization Index to Use
End of explanation
"""
urban_composite_t1 = urban_index_func(filtered_composite_t1)
plt.figure(figsize = (19.5, 14))
urban_composite_t1.plot(**plot_kwargs)
plt.show()
"""
Explanation: First Year Urban Composite
End of explanation
"""
urban_composite_t2 = urban_index_func(filtered_composite_t2)
plt.figure(figsize = (19.5, 14))
urban_composite_t2.plot(**plot_kwargs)
plt.show()
"""
Explanation: Last Year Urban Composite
End of explanation
"""
def urbanization(urban_index: xr.DataArray, urbanization_index) -> xr.DataArray:
    bounds = None
    if urbanization_index == 'NDBI':
        bounds = (0, 0.3)
    if urbanization_index == 'Fractional Cover Bare Soil':
        bounds = (20, 100)
    urban = np.logical_and(urban_index > min(bounds), urban_index < max(bounds))
    is_clean = np.isfinite(urban_index)
    urban = urban.where(is_clean)
    return urban
urban_product_t1 = urbanization(urban_composite_t1, urbanization_index)
urban_product_t2 = urbanization(urban_composite_t2, urbanization_index)
"""
Explanation: Defining Binary Urbanization
End of explanation
"""
rgb(filtered_composite_t1,
bands = ["nir","swir1","blue"],
paint_on_mask = [(np.logical_and(urban_product_t1.astype(bool), mask), [255,0,0])],
width = 15)
plt.show()
"""
Explanation: First Year
Urbanization product overlaid on the false color composite
End of explanation
"""
rgb(filtered_composite_t2,
bands = ["nir","swir1","blue"],
paint_on_mask = [(np.logical_and(urban_product_t2.astype(bool), mask),[255,0,0])],
width = 15)
plt.show()
"""
Explanation: Last Year
Urbanization product overlaid on the false color composite
End of explanation
"""
fig = plt.figure(figsize = (15,5))
#T1 (LEFT)
ax1 = fig.add_subplot(121)
urban_product_t1.plot(cmap = "Reds")
ax1.set_title("Urbanization Extent {}".format(first_year))
#T2 (RIGHT)
ax2 = fig.add_subplot(122)
urban_product_t2.plot(cmap = "Reds")
ax2.set_title("Urbanization Extent {}".format(last_year))
plt.show()
comp_lat = filtered_composite_t1.latitude
meters_per_deg_lat = 111000 # 111 km per degree latitude
deg_lat = np.abs(np.diff(comp_lat[[0, -1]])[0])
meters_lat = meters_per_deg_lat * deg_lat
sq_meters_per_px = (meters_lat / len(comp_lat))**2
# Calculate the urbanized area in square meters.
urbanized_area_t1 = float( urban_product_t1.sum() * sq_meters_per_px )
urbanized_area_t2 = float( urban_product_t2.sum() * sq_meters_per_px )
consumption_rate = land_consumption_rate(area_t1 = urbanized_area_t1, area_t2 = urbanized_area_t2, y = time_range)
print("Land Consumption Rate of the Survey Region: {:.2f} Square Meters per Year".format(consumption_rate))
"""
Explanation: Urbanization Change
End of explanation
"""
indicator_val = sdg_11_3_1(consumption_rate,pop_growth)
print("The UN SDG 11.3.1 Indicator value (ratio of land consumption rate to population growth rate) "\
"for this survey region for the specified parameters "\
"is {:.2f} square meters per person.".format(indicator_val))
print("")
print("In other words, on average, according to this analysis, every new person is consuming {:.2f} square meters of land in total.".format(indicator_val))
"""
Explanation: <a id="indicator"></a>SDG Indicator 11.3.1 ▴
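The helper functions `population_growth_rate`, `land_consumption_rate`, and `sdg_11_3_1` used above are defined earlier in the notebook. A minimal sketch consistent with the simple linear "per year" units printed above might look like the following (note this is an illustrative assumption about their bodies, not the notebook's actual definitions):

```python
def population_growth_rate(pop_t1, pop_t2, y):
    # Average people added per year over a span of y years.
    return (pop_t2 - pop_t1) / y

def land_consumption_rate(area_t1, area_t2, y):
    # Average square meters urbanized per year over a span of y years.
    return (area_t2 - area_t1) / y

def sdg_11_3_1(consumption_rate, pop_growth):
    # Ratio of land consumption rate to population growth rate.
    return consumption_rate / pop_growth

# 1000 new people and 50,000 new square meters over 10 years
# -> 50 square meters consumed per additional person.
print(sdg_11_3_1(land_consumption_rate(0, 50000, 10),
                 population_growth_rate(0, 1000, 10)))  # 50.0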
End of explanation
"""
bhargavvader/pycobra | docs/notebooks/regression.ipynb | mit | from pycobra.cobra import Cobra
from pycobra.diagnostics import Diagnostics
import numpy as np
%matplotlib inline
"""
Explanation: Playing with Regression
This notebook will help us with testing different regression techniques, and demonstrate the diagnostic class which can be used to find the optimal parameters for COBRA.
So for now we will generate a random data-set and try some of the popular regression techniques on it, after it has been loaded to COBRA.
Imports
End of explanation
"""
# setting up our random data-set
rng = np.random.RandomState(1)
# D1 = train machines; D2 = create COBRA; D3 = calibrate epsilon, alpha; D4 = testing
n_features = 20
D1, D2, D3, D4 = 200, 200, 200, 200
D = D1 + D2 + D3 + D4
X = rng.uniform(-1, 1, D * n_features).reshape(D, n_features)
Y = np.power(X[:,1], 2) + np.power(X[:,3], 3) + np.exp(X[:,10])
# Y = np.power(X[:,0], 2) + np.power(X[:,1], 3)
# training data-set
X_train = X[:D1 + D2]
X_test = X[D1 + D2 + D3:D1 + D2 + D3 + D4]
X_eps = X[D1 + D2:D1 + D2 + D3]
# for testing
Y_train = Y[:D1 + D2]
Y_test = Y[D1 + D2 + D3:D1 + D2 + D3 + D4]
Y_eps = Y[D1 + D2:D1 + D2 + D3]
"""
Explanation: Setting up data set
End of explanation
"""
cobra = Cobra(random_state=0, epsilon=0.5)
cobra.fit(X_train, Y_train, default=False)
"""
Explanation: Setting up COBRA
Let's set up our COBRA machine with the data.
End of explanation
"""
cobra.split_data(D1, D1 + D2, shuffle_data=True)
"""
Explanation: When we are fitting, we initialise COBRA with an epsilon value of $0.5$ - this is because we are aware of the distribution and 0.5 is a fair guess of what would be a "good" epsilon value, because the data varies from $-1$ to $1$.
If we do not pass the $\epsilon$ parameter, we perform a CV on the training data for an optimised epsilon.
Note that the default parameter is set to false: this is so we can walk you through what happens when COBRA is set up, instead of the default settings being used.
We're now going to split our dataset into two parts, and shuffle data points.
End of explanation
"""
cobra.load_default()
"""
Explanation: Let's load the default machines to COBRA.
End of explanation
"""
query = X_test[9].reshape(1, -1)
cobra.machines_
cobra.machines_['lasso'].predict(query)
cobra.machines_['tree'].predict(query)
cobra.machines_['ridge'].predict(query)
cobra.machines_['random_forest'].predict(query)
"""
Explanation: We note here that further machines can be loaded using either the loadMachine() and loadSKMachine() methods. The only prerequisite is that the machine has a valid predict() method.
Using COBRA's machines
We've created our random dataset and now we're going to use the default sci-kit machines to see what the results look like.
End of explanation
"""
cobra.load_machine_predictions()
cobra.predict(query)
Y_test[9]
"""
Explanation: Aggregate!
By using the aggregate function we can combine our predictors.
You can read about the aggregation procedure either in the original COBRA paper or look around in the source code for the algorithm.
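As a rough illustration of that procedure, here is a hedged NumPy sketch of the combination rule as described in the COBRA paper (this is not pycobra's actual implementation, and the toy numbers are made up):

```python
import numpy as np

def cobra_aggregate(query_preds, train_preds, y_train, epsilon=0.5, alpha=None):
    # Sketch of the COBRA combination rule, NOT pycobra's implementation.
    # query_preds: (M,) machine predictions at the query point.
    # train_preds: (M, n) machine predictions on the held-out set D_l.
    # y_train:     (n,) true responses on D_l.
    M, n = train_preds.shape
    alpha = M if alpha is None else alpha
    # Retain a held-out point if at least alpha machines predict within
    # epsilon of their own prediction at the query point.
    agree = np.abs(train_preds - query_preds[:, None]) <= epsilon
    retained = agree.sum(axis=0) >= alpha
    return y_train[retained].mean() if retained.any() else np.nan

qp = np.array([1.0, 2.0])                       # two machines at the query
tp = np.array([[1.1, 5.0, 0.9],                 # machine 1 on three D_l points
               [2.2, 6.0, 1.8]])                # machine 2 on the same points
y = np.array([10.0, 99.0, 12.0])
print(cobra_aggregate(qp, tp, y, epsilon=0.5))  # averages points 1 and 3 -> 11.0
```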
We start by loading each machine's predictions now.
End of explanation
"""
cobra_diagnostics = Diagnostics(cobra, X_test, Y_test, load_MSE=True)
cobra_diagnostics.machine_MSE
"""
Explanation: Optimizing COBRA
To squeeze the best out of COBRA we make use of the COBRA diagnostics class. With a grid based approach to optimizing hyperparameters, we can find out the best epsilon value, number of machines (alpha value), and combination of machines.
Let's check the MSE for each of COBRA's machines:
End of explanation
"""
cobra_diagnostics.error_bound
"""
Explanation: This error is bound by the value $C\mathscr{l}^{\frac{-2}{M + 2}}$ upto a constant $C$, which is problem dependant. For more details, we refer the user to the original paper.
End of explanation
"""
cobra_diagnostics.optimal_split(X_eps, Y_eps)
"""
Explanation: Playing with Data-Splitting
When we initially started to set up COBRA, we split our training data into two further parts - $D_k$, and $D_l$.
This split was done 50-50, but it is up to us how we wish to do this.
The following section will compare 20-80, 60-40, 50-50, 40-60, 80-20 and check for which case we get the best MSE values, for a fixed Epsilon (or use a grid).
End of explanation
"""
split = [(0.05, 0.95), (0.10, 0.90), (0.20, 0.80), (0.40, 0.60), (0.50, 0.50), (0.60, 0.40), (0.80, 0.20), (0.90, 0.10), (0.95, 0.05)]
cobra_diagnostics.optimal_split(X_eps, Y_eps, split=split, info=True, graph=True)
"""
Explanation: What we saw was the default result, with the optimal split ratio and the corresponding MSE. We can do a further analysis here by enabling the info and graph options, and using more values to split on.
End of explanation
"""
cobra_diagnostics.optimal_epsilon(X_eps, Y_eps, line_points=100)
"""
Explanation: Alpha, Epsilon and Machines
The following are methods to identify the optimal epsilon values, alpha values, and combinations of machines.
The grid methods allow us to find, for a single point, the optimal alpha/machines and epsilon combination.
Epsilon
The epsilon parameter controls how "strict" COBRA should be while choosing the points to aggregate.
End of explanation
"""
cobra_diagnostics.optimal_alpha(X_eps, Y_eps, info=True)
"""
Explanation: Alpha
The alpha parameter decides by how many machines a point must be picked up before it is added to the aggregate. The default value is 4.
End of explanation
"""
cobra_diagnostics.optimal_machines(X_eps, Y_eps, info=True)
cobra_diagnostics.optimal_alpha_grid(X_eps[0], Y_eps[0], line_points=100)
cobra_diagnostics.optimal_machines_grid(X_eps[0], Y_eps[0], line_points=100)
"""
Explanation: In this particular case, the best performance is obtained by seeking consensus over all 4 machines.
Machines
Decide which subset of machines to select for the aggregate.
End of explanation
"""
geoffbacon/semrep | semrep/evaluate/qvec/qvec.ipynb | mit | import os
import csv
import pandas as pd
import numpy as np
from scipy import stats
data_path = '../../data'
tmp_path = '../../tmp'
feature_path = os.path.join(data_path, 'evaluation/semcor/tsvetkov_semcor.csv')
subset = pd.read_csv(feature_path, index_col=0)
subset.columns = [c.replace('semcor.', '') for c in subset.columns]
subset.head()
"""
Explanation: QVEC
This notebook is a replication of Tsvetkov et al. (2015) Evaluation of Word Vector Representations by Subspace Alignment, which introduces QVEC. QVEC is an intrinsic evaluation method of word embeddings, measuring the correlation between dimensions of the embeddings and linguistic features. The original code is available, but I'm replicating it for two reasons: i) as a learning exercise and ii) the original implementation looks messy.
To implement QVEC, I'm going to need two things:
- Gold standard linguistic features
- The QVEC model
The linguistic features used in the original paper come from SemCor, a WordNet annotated subset of the Brown corpus. This is done in here.
End of explanation
"""
subset.set_index('words', inplace=True)
#subset.drop('count_in_semcor', inplace=True, axis=1)
subset = subset.T
subset.head()
"""
Explanation: QVEC model
QVEC finds an alignment between dimensions of learnt word embeddings and dimensions (features) of linguistic features by maximising the cumulative correlation.
$N$ is the size of the vocabulary (in common between the embeddings and the linguistic features).
$D$ is the dimensionality of the embeddings.
$X \in \mathbb{R}^{D \times N}$ is the matrix of embeddings. Note that a word's embedding is a column, rows are individual dimensions.
$P$ is the number of linguistic features.
$S \in \mathbb{R}^{P \times N}$ is the matrix of linguistic features, created above. Again, each word is a column and rows are individual features.
QVEC finds an alignment between the rows of $X$ and the rows of $S$ that maximises the correlation between the aligned rows. Each row of $X$ is aligned to at most one row of $S$, but each row of $S$ may be aligned to more than one row of $X$.
$A \in {0,1}^{D \times P}$ holds the alignments. $x_{ij}$ is 1 if dimension $i$ of $X$ is aligned with linguistic feature $j$.
The sum of correlations (which can be arbitrarily large with more dimensions or features) is their measure of the quality of the word embeddings in $X$.
$QVEC = \max_{A|\sum_{j}a_{ij} \leq 1}\sum_{i=1}^{D}\sum_{j=1}^{P}r(x_i, s_j) \times a_{ij}$
In words, for any possible alignment $A$, subject to the constraint that each embedding dimension is aligned to 0 or 1 linguistic features, sum up the correlations. The sum for the best alignment is the measure of the embeddings.
Crucially, this assumes that the dimensions of the embeddings end up encoding linguistic features. The authors justify this by the effectiveness of using word embeddings in linear models in downstream tasks.
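As a sanity check, the objective can be computed directly on tiny random stand-ins for $X$ and $S$ (the names `X_toy` and `S_toy` below are illustrative, not part of the analysis). Because each embedding dimension may align with at most one feature while features are reusable, the optimal alignment is simply the row-wise maximum correlation:

```python
import numpy as np

rng = np.random.RandomState(0)
D, P, N = 4, 3, 50           # embedding dims, linguistic features, vocabulary size
X_toy = rng.randn(D, N)      # toy "embeddings": one word per column
S_toy = rng.randn(P, N)      # toy "linguistic features": one word per column

# r(x_i, s_j) for every pair of rows, then align each embedding
# dimension with its best-correlated feature and sum.
corr = np.array([[np.corrcoef(X_toy[i], S_toy[j])[0, 1] for j in range(P)]
                 for i in range(D)])
qvec_score = corr.max(axis=1).sum()
print(round(qvec_score, 3))
```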
First things first, transform my linguistic features into the format mentioned above (i.e., the matrix $S$).
End of explanation
"""
size = 50
fname = 'embeddings/glove.6B.{}d.txt'.format(size)
embedding_path = os.path.join(data_path, fname)
embeddings = pd.read_csv(embedding_path, sep=' ', header=None, index_col=0, quoting=csv.QUOTE_NONE).T
embeddings.head()
common_words = embeddings.columns.intersection(subset.columns)
embeddings = embeddings[common_words]
fname = os.path.join(tmp_path, 'glove_embeddings.csv')
embeddings.to_csv(fname)
from sklearn.metrics.pairwise import cosine_similarity as cos
pairwise = cos(embeddings.T)
distances = pd.DataFrame(pairwise, columns=common_words, index=common_words)
distances.to_csv(os.path.join(data_path, 'pairwise_sim.csv'))
"""
Explanation: Learnt word embeddings
The original paper trains various different models of varying sizes. At a later stage I could do that, but for now I'm happy with using pre-trained embeddings.
End of explanation
"""
S = subset[common_words]
X = embeddings[common_words]
"""
Explanation: The Python variables S and X refer to $S$ and $X$ exactly as above.
End of explanation
"""
correlations = pd.DataFrame({i:X.corrwith(S.iloc[i], axis=1) for i in range(len(S))})
correlations.columns = S.index
"""
Explanation: Now we want the correlation between the rows of S and the rows of X. This may not be the easiest way to do it but it works.
End of explanation
"""
alignments = correlations.idxmax(axis=1)
correlations.max(axis=1).head()
"""
Explanation: For each row of this correlation matrix (i.e. for each of the dimenions of the embeddings), we want the linguistic feature that it is most correlated with. We also get the value of that correlation.
End of explanation
"""
qvec = correlations.max(axis=1).sum()
qvec
"""
Explanation: The score of the embeddings relative to the linguistic features is the sum of the maximum correlations. Note how this value depends on how many dimensions in the embeddings there are. For 300 dimension vectors trained (by them) from GloVe, the authors get 34.4, while I get 32.4. Note that our linguistic features are still different, so the fact that the discrepancy here is not too big is encouraging.
End of explanation
"""
A = pd.DataFrame(0, index=range(len(X)), columns=S.index)
for dim, feat in alignments.items():
    A.loc[dim, feat] = 1
A.head()
"""
Explanation: We don't really need it, but just to be explicit let's get the matrix $A$ of alignments.
End of explanation
"""
from sklearn.cross_decomposition import CCA
cca = CCA(n_components=1)
cca = cca.fit(X.T, S.T)
"""
Explanation: The rest of the paper is a series of experiments training large models and evaluating them on both instrinic and extrinsic tasks, including QVEC. I'm not going to replicate that here, but the QVEC implementation is complete.
Canonical Correlation Analysis
In a follow-up 2016 paper, a subset of the original authors introduce QVEC-CCA. It's really just QVEC except instead of summing the highest row-wise correlations, they use canonical correlation analysis. I didn't know what that was, but after reading a bit I have a reasonable grasp of it. I'm going to replicate that 2016 paper, or at least the most important part of it which is the use of CCA. Note that the other new thing in the 2016 paper is the use of syntactic features, in addition to semantic, which I won't do right now.
Scikit-learn has an implementation of CCA. It took me a while to figure out what are the learnt parameters that I want, and I'm only 80% confident I have it right.
End of explanation
"""
# Project X and S onto the first canonical components; ravel to 1-D for pearsonr.
a = np.dot(X.T, cca.x_weights_).ravel()
b = np.dot(S.T, cca.y_weights_).ravel()
stats.pearsonr(a, b)
"""
Explanation: I believe the linear combinations I want are stored in the x_weights_ and y_weights_ attributes.
End of explanation
"""
def qvec(features, embeddings):
    """
    Returns the matrix of correlations between the rows of `features`
    and the rows of `embeddings`, restricted to their common words.
    The aligned feature for each embedding dimension is the one with
    the highest correlation, and the qvec score is the sum of the
    correlations of the aligned features.
    """
    common_words = embeddings.columns.intersection(features.columns)
    S = features[common_words]
    X = embeddings[common_words]
    correlations = pd.DataFrame({i: X.corrwith(S.iloc[i], axis=1) for i in range(len(S))})
    correlations.columns = S.index
    return correlations
qvec(subset, embeddings).head()
"""
Explanation: Succint implementation
End of explanation
"""
nslatysheva/data_science_blogging | model_optimization/model_optimization.ipynb | gpl-3.0 | import wget
import pandas as pd
# Import the dataset
data_url = 'https://raw.githubusercontent.com/nslatysheva/data_science_blogging/master/datasets/spam/spam_dataset.csv'
dataset = wget.download(data_url)
dataset = pd.read_csv(dataset, sep=",")
# Take a peak at the data
dataset.head()
"""
Explanation: How to tune machine learning algorithms to your dataset
Introduction
When doing machine learning using Python's scikit-learn library, we can often get reasonable predictions by using out-of-the-box algorithms with default settings. However, it is a much better idea to do at least some tuning of the algorithms to your specific problem and dataset. In this post, we will explore how hyperparameter optimization can be used to tune models, different strategies for traversing hyperparameter space, and several related concepts such as overfitting and cross-validation. We also demonstrate the process of tuning, training, and testing three different algorithms - a random forest, a support vector machine and a logistic regression classifier.
You'll be working with the famous (well, machine learning famous!) spam dataset, which contains loads of NLP-mined features of spam and non-spam emails, like the frequencies of the words "money", "free" and "viagra". Our goal is to tune and apply different algorithms to these features in order to predict whether a given email is spam.
The steps we'll cover in this blog post can be summarized as follows:
In the next blog post, you will learn how to take different tuned machine learning algorithms and combine them to build an ensemble model, which is a type of aggregated, meta-model that often has higher accuracy and less overfitting than its individual constituent models.
Let's get cracking.
Loading and exploring the dataset
We start off by retrieving the dataset. It can be found both online and (in a slightly nicer form) in our GitHub repository, so we can just fetch it via wget (note: make sure you first install wget by typing pip install wget into your Terminal since it is not a preinstalled Python library). The command will download a copy of the dataset to your current working directory.
End of explanation
"""
# Examine shape of dataset and some column names
print (dataset.shape)
print (dataset.columns.values)
# Summarise feature values
dataset.describe()
"""
Explanation: Let's examine the shape of the dataset (the number of rows and columns), the types of features it contains, and some summary statistics for each feature.
End of explanation
"""
import numpy as np
# Convert dataframe to numpy array and split
# data into input matrix X and class label vector y
npArray = np.array(dataset)
X = npArray[:,:-1].astype(float)
y = npArray[:,-1]
"""
Explanation: Next up, let's convert the pandas dataframe into a numpy array and isolate the outcome variable we'd like to predict (here, 0 means 'non-spam', 1 means 'spam'):
End of explanation
"""
from sklearn.model_selection import train_test_split
# Split into training and test sets
XTrain, XTest, yTrain, yTest = train_test_split(X, y, random_state=1)
"""
Explanation: Next up, let's split the dataset into a training and test set. The training set will be used to develop and tune our predictive models. The test will be completely left alone until the very end, at which point you'll run your finished models on it. Having a test set will allow you to get a good estimate of how well your models would perform in the wild on unseen data.
End of explanation
"""
from sklearn.ensemble import RandomForestClassifier
from sklearn import metrics
# Create and train random forest classifer
rf = RandomForestClassifier()
rf.fit(XTrain, yTrain)
# Predict classes on the test set
rf_predictions = rf.predict(XTest)
# Output performance metrics of the model
print (metrics.classification_report(yTest, rf_predictions))
print ("Overall Accuracy:", round(metrics.accuracy_score(yTest, rf_predictions),2))
"""
Explanation: First, we are first going to try to predict spam emails with a random forest classifier. Chapter 8 of the Introduction to Statistical Learning book provides a truly excellent introduction to the theory behind random forests. Briefly, random forests build a collection of classification trees, where each tree tries to predict classes by recursively splitting the data on the features (and feature values) that split the classes 'best'. Each tree is trained on bootstrapped data, and each split is only allowed to use certain variables. So, an element of randomness is introduced when constructing a variety of different trees, and the random forest ensembles these base learners together. Have a read of Chapter 8 of the ISLR book if you're interested in the inner workings of random forests, but you don't need to know the theory to understand the rest of this blog post.
Out of the box, scikit's random forest classifier already performs quite well on the spam dataset:
End of explanation
"""
# Defining a couple of HP options
n_estimators = np.array([5, 100])
max_features = np.array([10, 50])
"""
Explanation: An overall accuracy of 0.95 is very good, but keep in mind that this is a heavily idealized dataset. Next up, we are going to learn how to pick the best settings for the random forest algorithm (as well as for an SVM and logistic regression classifier) in order to get better models with (hopefully!) improved accuracy.
Better modelling through hyperparameter optimization
We've glossed over what a hyperparameter actually is. Let's explore the topic now. Often, when setting out to train a machine learning algorithm on your dataset, you must first set a number of arguments or hyperparameters (HPs). A hyperparameter is just a variable that influences the performance of your model, but isn't directly tuned during the training phase. Most machine learning algorithms have hyperparameters. For example, when using the k-nearest neighbours algorithm to do classification, the value of k (the number of nearest neighbours the model considers) is a hyperparameter that must be supplied in advance. As another example, when building a neural network, the number of layers in the network and the number of neurons per layer are both hyperparameters that must be specified before training commences. By contrast, the weights and biases in a neural network are parameters (not hyperparameters) because they are explicitly tuned during training. But how can we know what values to set the hyperparameters to in order to get the best performance from our learning algorithms?
Actually, scikit-learn generally provides reasonable HP default values, such that it is possible to quickly build an e.g. kNN classifier by simply typing kNN_clfr=sklearn.neighbors.KNeighborsClassifier() and then fitting it to your data. Behind the scenes, we can see the hyperparameter values that the classifier has automatically assigned, such as setting the number of nearest neighbours hyperparameter to 5 (n_neighbors=5), giving all datapoints equal importance (weights=uniform), and so on. Often, the default HP values will do a decent job (as we saw above), so it may be tempting to skip the topic of model tuning completely. However, it is basically always a good idea to do at least some level of hyperparameter optimization, due to the potential for substantial improvements in a learning algorithm's performance.
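Those default values are easy to inspect for yourself (a quick illustration; `get_params` is available on every scikit-learn estimator):

```python
from sklearn.neighbors import KNeighborsClassifier

# With no arguments supplied, get_params() returns the default
# hyperparameter values the classifier will train with.
defaults = KNeighborsClassifier().get_params()
print(defaults['n_neighbors'], defaults['weights'])  # 5 uniform
```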
We optimize hyperparameters in exactly the way that you might expect - we try different values and see what works best. However, some care is needed when deciding how exactly to measure if certain values work well, and which strategy to use to systematically explore
hyperparameter space.
In a later post, we will introduce model ensembling, in which individual models can in a sense be considered 'hyper-hyper parameters' (™; ©; ®; patent pending; T-shirts printing).
The perils of overfitting
In order to build the best possible model that does a good job at describing the underlying trends in the dataset, we need to pick the right HP values. In the following examples, we will introduce different strategies of searching for the set of HPs that define the best model, but we will first need to make a slight detour to explain how to avoid a major pitfall when it comes to tuning models - overfitting.
As we mentioned above, HPs are not optimised while a learning algorithm is learning. Hence, we need other strategies to optimise them. The most basic way would just be to test different possible values for the HPs and see how the model performs. In a random forest, the key HPs we need to optimise are n_estimators and max_features. n_estimators controls the number of trees in the forest - generally, the more the better, but more trees comes at the expense of longer training time. max_features controls the size of the random selection of features the algorithm is allowed to consider when splitting a node.
Let's try out some HP values for a random forest.
End of explanation
"""
import itertools
# Get grid of all possible combinations of hp values
hp_combinations = list(itertools.product(n_estimators, max_features))
for hp_combo in range(len(hp_combinations)):
print (hp_combinations[hp_combo])
# Train and output accuracies
rf = RandomForestClassifier(n_estimators=hp_combinations[hp_combo][0],
max_features=hp_combinations[hp_combo][1])
rf.fit(XTrain, yTrain)
RF_predictions = rf.predict(XTest)
print ("Overall Accuracy:", round(metrics.accuracy_score(yTest, RF_predictions),2))
"""
Explanation: We can manually write a small loop ourselves to test out how well the different combinations of these fare (later, we'll find out better ways to do this):
End of explanation
"""
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
# Search for good hyperparameter values
# Specify values to grid search over
n_estimators = list(np.arange(25, 45, 5))
max_features = list(np.arange(10, X.shape[1], 20))
hyperparameters = {'n_estimators': n_estimators,
'max_features': max_features}
print (hyperparameters)
# Grid search using cross-validation
gridCV = GridSearchCV(RandomForestClassifier(), param_grid=hyperparameters, cv=10, n_jobs=4)
gridCV.fit(XTrain, yTrain)
# Identify optimal hyperparameter values
best_n_estim = gridCV.best_params_['n_estimators']
best_max_features = gridCV.best_params_['max_features']
print("The best performing n_estimators value is: {:5.1f}".format(best_n_estim))
print("The best performing max_features value is: {:5.1f}".format(best_max_features))
# Train classifier using optimal hyperparameter values
# We could have also gotten this model out from gridCV.best_estimator_
clfRDF = RandomForestClassifier(n_estimators=best_n_estim,
max_features=best_max_features)
clfRDF.fit(XTrain, yTrain)
RF_predictions = clfRDF.predict(XTest)
print (metrics.classification_report(yTest, RF_predictions))
print ("Overall Accuracy:", round(metrics.accuracy_score(yTest, RF_predictions),2))
"""
Explanation: Looks like the higher value of n_estimators and the lower value of max_features did better. However, manually searching for the best HPs in this way is not efficient and could potentially lead to models that perform well on this specific data, but do not generalise well to a new dataset, which is what we are actually interested in. This phenomenon of building models that do not generalise well, or that are fitting too closely to the current dataset, is called overfitting. This is a key concept in machine learning and it is very much worth getting a better understanding of what it is. Let's briefly discuss the bias-variance tradeoff.
The bias-variance trade-off
When we train machine learning algorithms, what we are really interested in is how our model will perform on an independent dataset. It is not enough to predict whether emails in our training set are spam - how well would the model fare when predicting if a completely new, previously unseen datapoint is spam or not?
This is the idea behind splitting your dataset into a training set (on which models can be trained) and a test set (which is held out until the very end of your analysis, and provides an accurate measure of model performance). Essentially, we are only interested in building models that are generalizable - getting 100% accuracy on the training set is not impressive, and is simply an indicator of overfitting. Overfitting is the situation in which we have fitted our model too closely to the data, and have tuned to the noise instead of just to the signal.
In fact, this concept of wanting to fit algorithms to the training data well, but not so tightly that the model doesn't generalize, is a pervasive problem in machine learning. A common term for this balancing act is the bias-variance trade-off. Here is a nice introductory article on the topic that goes into more depth.
Have a look at how underfitting (high bias, low variance), properly fitting, and overfitting (low bias, high variance) models fare on the training compared to the test sets.
"Bias" and "variance" have got to be some of the least helpful terms in machine learning. One way to think of them is: a model that underfits (e.g. the straight line) is quite a bit wrong - it models the underlying generative process too simply, and this simple model is highly biased away from the ground truth (high bias). But, the straight line fit is not going to change very much across different datasets (low variance). The opposite trend applies to overfitted models.
Hence, we never try to optimize the model's performance on the training data, because that is a misguided endeavour. But wait, didn't we also say that the test set is not meant to be touched until we are completely done training our model? We often don't have the luxury of lots of extra data we can use just to fit loads of models. So how are we meant to optimize our hyperparameters?
k-fold cross validation to the rescue
Enter k-fold cross-validation, which is a handy technique for simulating having an abundance of test datasets available, and is used to measure a model's performance using only the training set. k-fold CV is a general method, and is not specific to hyperparameter optimization, but is very useful for that purpose. Say that we want to do e.g. 10-fold cross-validation. The process is as follows: we randomly partition the training set into 10 equal sections. Then, we train an algorithm on 9/10ths (i.e. 9 out of the 10 sections) of that training set. We then evaluate its performance on the remaining 1 section. This gives us some measure of the model's performance (e.g. overall accuracy). We then train the same algorithm on a different 9/10ths of the training set, and evaluate on the other (different from before) remaining 1 section. We continue the process 10 times, get 10 different measures of model performance, and average these values to get an overall measure. Of course, we could have chosen some number other than 10. To keep on with this example, the process behind 10-fold CV looks like this:
We can use k-fold cross validation to optimize HPs. Say we are deciding whether to use 1, 3 or 5 nearest-neighbours in our nearest-neighbours classifier. We can start by setting the n_neighbours HP in our classifier object to 1, running 10-fold CV, and getting a measurement of the model's performance. Repeating the process with the other HP values will lead to different levels of performance, and we can simply choose the n_neighbours value that worked best.
In the context of HP optimization, we perform k-fold cross validation together with grid search or randomized search to get a more robust estimate of the model performance associated with specific HP values.
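The fold-splitting machinery itself can be sketched in a few lines of NumPy (the sample count and k below are illustrative):

```python
import numpy as np

def k_fold_indices(n_samples, k=10, seed=0):
    """Randomly partition the sample indices into k roughly equal folds."""
    rng = np.random.RandomState(seed)
    return np.array_split(rng.permutation(n_samples), k)

folds = k_fold_indices(50, k=10)
splits = []
for i, test_idx in enumerate(folds):
    # train on the other k-1 folds, evaluate on this one, record the score
    train_idx = np.concatenate([f for j, f in enumerate(folds) if j != i])
    splits.append((train_idx, test_idx))
# averaging the k recorded scores gives the overall CV estimate
```

scikit-learn's KFold and cross_val_score implement this pattern, and GridSearchCV/RandomizedSearchCV use it internally via their cv argument.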
Sweeping through hyperparameter space using grid search
Traditionally and perhaps most intuitively, scanning for HPs is done using grid search (also called parameter sweep). This strategy exhaustively searches through some manually prespecified HP values and reports the best option. It is common to try to optimize multiple HPs simultaneously - grid search tries each combination in turn, hence the name. It works like this:
The combination of grid search and k-fold cross validation is very popular for finding the models with the best possible performance and generalisability. So, in HP optimisation we are actually trying to do two things: (i) find the best possible combination of HPs that define a model and (ii) make sure that the model generalises well to new data. In order to address the second concern, CV is often the method of choice. Scikit-learn makes this process of HP optimization using k-fold CV very easy and slick, and even supports parallel distributing of the HP search (via the n_jobs argument).
Note that grid search with k-fold CV simply returns the best HP values out of the available options, and is therefore not guaranteed to return a global optimum. It makes sense to choose a somewhat different collection of possible values that is centred around an empirically sensible default.
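To make the "grid" concrete, here is a sketch of how the combinations are enumerated (the candidate values below are illustrative, not the tutorial's actual grid):

```python
from itertools import product

# Illustrative candidate values for two hyperparameters
param_grid = {'n_estimators': [25, 35, 45],
              'max_features': [5, 10, 20]}
# The "grid" is the Cartesian product: every combination is tried in turn
combinations = [dict(zip(param_grid.keys(), values))
                for values in product(*param_grid.values())]
# each combination would then be scored with k-fold CV; keep the best one
```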
End of explanation
"""
from scipy.stats import uniform
from scipy.stats import norm
from sklearn.model_selection import RandomizedSearchCV
# Designate distributions to sample hyperparameters from
# Uniform distribution for n_estimators, Gaussian for max_features
n_estimators = np.random.uniform(25, 45, 5).astype(int)
# Clip so max_features is always at least 1 (a Gaussian sample can be <= 0)
max_features = np.clip(np.random.normal(20, 10, 5), 1, None).astype(int)
hyperparameters = {'n_estimators': list(n_estimators),
'max_features': list(max_features)}
print(hyperparameters)
"""
Explanation: An alternative to grid search: randomized search
Grid search is quite commonly used, but another way to search through hyperparameter space to find optima is by using randomized search. With randomized search, we don't build a grid of HP values we want to try out. Instead, we specify the regions of hyperparameter space we are interested in by specifying distributions that we want to sample from. Importantly, we also specify a computational budget (n_iter) which controls the number of HP combinations we will test out using CV. So, rather than the brute force approach of grid search, where we are forced to try out every combination, we can trade off between the computational resources we want to spend and the chances that we'll find the optimal combination of HP values. But because we prespecify regions where we think the correct HP values will lie, we are likely to cover some good possibilities even with a modest computational budget.
There is evidence that randomized search is more efficient than grid search, because grid search effectively wastes time by exhaustively checking each option when it might not be necessary. By contrast, the random experiments utilized by randomized search can explore the important dimensions of hyperparameter space with more coverage. If we were to use uniform distributions for all of our HPs and allow as many sampling iterations as there are combinations of the HPs, then randomized search just becomes grid search.
To use randomized search to tune random forests, we first specify the HP distributions:
End of explanation
"""
# Run randomized search
# n_iter controls the number of HP combinations we try out
randomCV = RandomizedSearchCV(RandomForestClassifier(), param_distributions=hyperparameters, n_iter=10)
randomCV.fit(XTrain, yTrain)
# Identify optimal hyperparameter values
best_n_estim = randomCV.best_params_['n_estimators']
best_max_features = randomCV.best_params_['max_features']
print("The best performing n_estimators value is: {:5.1f}".format(best_n_estim))
print("The best performing max_features value is: {:5.1f}".format(best_max_features))
# Train classifier using optimal hyperparameter values
# We could have also gotten this model out from randomCV.best_estimator_
clfRDF = RandomForestClassifier(n_estimators=best_n_estim,
max_features=best_max_features)
clfRDF.fit(XTrain, yTrain)
RF_predictions = clfRDF.predict(XTest)
print (metrics.classification_report(yTest, RF_predictions))
print ("Overall Accuracy:", round(metrics.accuracy_score(yTest, RF_predictions),2))
"""
Explanation: We then run the randomized search:
End of explanation
"""
from sklearn.svm import SVC
# Search for good hyperparameter values
# Specify values to grid search over
gamma_range = 2. ** np.arange(-15, 5, step=10)
C_range = 2. ** np.arange(-5, 15, step=10)
hyperparameters = [{'gamma': gamma_range,
'C': C_range}]
print(hyperparameters)
# Grid search using cross-validation
gridCV = GridSearchCV(SVC(), param_grid=hyperparameters, cv=5)
gridCV.fit(XTrain, yTrain)
best_gamma = gridCV.best_params_['gamma']
best_C = gridCV.best_params_['C']
# Train SVM and output predictions
rbfSVM = SVC(kernel='rbf', gamma=best_gamma, C=best_C)
rbfSVM.fit(XTrain, yTrain)
SVM_predictions = rbfSVM.predict(XTest)
print(metrics.classification_report(yTest, SVM_predictions))
print("Overall Accuracy:", round(metrics.accuracy_score(yTest, SVM_predictions), 2))
"""
Explanation: We tuned our random forest classifier!
So, that was an overview of the concepts and practicalities involved when tuning a random forest classifier. We could also choose to tune various other hyperparameters, like max_depth (the maximum depth of a tree, which controls how tall we grow our trees and influences overfitting) and the choice of the purity criterion (the specific formula for calculating how good or 'pure' our splits are, as judged by how well they separate the classes in our dataset). The two we chose to tune are generally regarded as the most important. Either grid search or randomized search is probably fine for tuning random forests.
Fancier techniques for hyperparameter optimization include methods based on gradient descent, grad student descent, and Bayesian approaches which make smart decisions about what part of hyperparameter space to try next based on the performance of previous combinations (see Spearmint and hyperopt).
Note that the toy spam dataset we were working on is unusually straightforward, clean, and easy, and we were getting very high accuracies. It is rare to encounter such simple datasets in real life. Also, in real life we would expect to see a much bigger performance boost as our tuning strategy improves.
Let's finish off by showing how to tune two other predictors.
Tuning a support vector machine
Let's train our second algorithm, support vector machines (SVMs) to do the same exact prediction task. A great introduction to the theory behind SVMs can be read here. Briefly, SVMs search for hyperplanes in the feature space which best divide the different classes in your dataset. Crucially, SVMs can find non-linear decision boundaries between classes using a process called kernelling, which projects the data into a higher-dimensional space. This sounds a bit abstract, but if you've ever fit a linear regression to power-transformed variables (e.g. maybe you used x^2, x^3 as features), you're already familiar with the concept. Do take a look at the guide we linked above.
SVMs can use different types of kernels, like polynomial or Gaussian (also called radial basis function, RBF) kernels, to throw the data into a different space. Let's use the latter. The main hyperparameters we must tune for RBF SVMs are gamma (a kernel parameter, controlling how far we 'throw' the data into the new feature space) and C (which controls the bias-variance tradeoff of the model). Change the step size in the hyperparameter range specification to do better; we just chose a couple of values so that the code block runs quickly.
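To make gamma's role concrete, here is the RBF kernel function itself — a standalone sketch, separate from the tutorial's pipeline; the points and gamma values are made up for illustration:

```python
import numpy as np

def rbf_kernel(x1, x2, gamma=0.5):
    """Gaussian (RBF) kernel: similarity decays with the squared distance."""
    return np.exp(-gamma * np.sum((x1 - x2) ** 2))

a, b = np.array([0.0, 0.0]), np.array([1.0, 1.0])
loose = rbf_kernel(a, b, gamma=0.1)   # small gamma: distant points still look similar
tight = rbf_kernel(a, b, gamma=10.0)  # large gamma: similarity falls off quickly
```

Small gamma tends to give smooth decision boundaries (risking underfitting), while large gamma gives very wiggly ones (risking overfitting).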
End of explanation
"""
from sklearn.linear_model import LogisticRegression
# Search for good hyperparameter values
# Specify values to grid search over
penalty = ["l1", "l2"]
C_range = np.arange(0.1, 1.1, 0.1)
hyperparameters = [{'penalty': penalty,
'C': C_range}]
# Grid search using cross-validation
grid = GridSearchCV(LogisticRegression(), param_grid=hyperparameters, cv=10)
grid.fit(XTrain, yTrain)
bestPenalty = grid.best_params_['penalty']
bestC = grid.best_params_['C']
print(bestPenalty)
print(bestC)
# Train model and output predictions
classifier_logistic = LogisticRegression(penalty=bestPenalty, C=bestC)
classifier_logistic_fit = classifier_logistic.fit(XTrain, yTrain)
logistic_predictions = classifier_logistic_fit.predict(XTest)
print(metrics.classification_report(yTest, logistic_predictions))
print("Overall Accuracy:", round(metrics.accuracy_score(yTest, logistic_predictions), 2))
"""
Explanation: How does this compare to an untuned SVM? What about an SVM with especially badly tuned hyperparameters?
Tuning a logistic regression classifier
The last algorithm we'll tune and apply to predict spam emails is a logistic regression classifier. This is a type of regression model which is used for predicting binary outcomes (like spam/non-spam). We fit a straight line through our transformed data, where the features remain the same but the dependent variable is the log odds of a data point belonging to one of the two classes. So essentially, logistic regression is just a transformed version of linear regression.
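That log-odds-to-probability mapping can be sketched directly (the weights and feature values below are made up purely for illustration):

```python
import numpy as np

def sigmoid(z):
    """Map log odds z onto a probability in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

# A linear combination of the features gives the log odds of the positive class
w, b = np.array([0.8, -1.2]), 0.5
x = np.array([1.0, 0.3])
log_odds = w @ x + b        # 0.8*1.0 - 1.2*0.3 + 0.5 = 0.94
p_positive = sigmoid(log_odds)
```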
One topic you will often encounter in machine learning is regularization, which is a class of techniques to reduce overfitting. The idea is that we often don't just want to maximize model fit, but also penalize the model for e.g. using too many parameters, or assigning coefficients or weights that are too big. Read more about regularized regression here. We can adjust just how much regularization we want by adjusting regularization hyperparameters, and scikit-learn comes with some models that can very efficiently fit data for a range of regularization hyperparameter values. This is the case for regularized linear regression models like Lasso regression and ridge regression. These classes are shortcuts to doing cross-validated selection of models with different levels of regularization.
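The shrinking effect of a regularization hyperparameter is easy to demonstrate with closed-form ridge regression — a standalone sketch on synthetic data, not part of the spam pipeline:

```python
import numpy as np

def ridge_fit(X, y, alpha=1.0):
    """Closed-form ridge regression: alpha penalizes large coefficients."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ y)

rng = np.random.RandomState(0)
X = rng.randn(50, 3)
y = X @ np.array([1.0, 2.0, 0.0]) + 0.1 * rng.randn(50)
w_weak = ridge_fit(X, y, alpha=0.01)
w_strong = ridge_fit(X, y, alpha=100.0)
# Stronger regularization shrinks the coefficient vector toward zero
```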
But we can also optimize how much regularization we want ourselves, as well as tuning the values of other hyperparameters, in the same manner as we've been doing.
End of explanation
"""
|
molpopgen/fwdpy | docs/examples/FixationTimes1.ipynb | gpl-3.0 | %load_ext rpy2.ipython
import fwdpy as fp
import numpy as np
import pandas as pd
"""
Explanation: Distribution of fixation times with background selection
This example mixes the simulation of positive selection with strongly-deleterious mutations (background selection, or "BGS" for short).
The setup of the BGS model is the same as the other example. This example adds the following:
Include a class of beneficial mutations ($s>0$) and fitnesses $1, 1+s, 1+2s$ for the three genotypes.
We will track the frequency trajectories of all selected mutations during every simulation
From those trajectories, we will get the fixation times of all beneficial mutations.
These fixation times will be recorded in a pandas DataFrame.
This is the really cool part:
We will send that DataFrame to R for plotting using ggplot.
End of explanation
"""
#We will simulate no neutral mutations
nregions = []
#These are our "BGS loci"
sregions = [fp.ConstantS(beg=-1,end=0,weight=1,s=-0.05,h=1),
fp.ConstantS(beg=1,end=2,weight=1,s=-0.05,h=1)]
#Recombination is uniform across whole region
recregions = [fp.Region(beg=-1,end=2,weight=1)]
#Population size
N=1000
##Evolve for 20N generations with constant N
nlist = np.array([N]*20*N,dtype=np.uint32)
#Random number generator
rng = fp.GSLrng(101)
"""
Explanation: Our simulation is set up in the same manner that Hudson, Kaplan, and colleagues used to study the structured coalescent:
Our locus of interest has mutations occurring along the interval $[0,1)$.
That locus is flanked by loci where mutations causing BGS occur.
The relevant details here are:
We will simulate no neutral variants.
Our positively-selected variants will occur in the "locus" of interest.
Recombination will take place as a uniform process across all regions.
End of explanation
"""
def get_fixation_times(trajectories):
"""
Takes a set of trajectories, creates a list of fixation times, which is
returned.
The elements in trajectories are a list, with element 0 a 'dict' containing
info about each variant, and element 1 being a list of frequencies over time.
"""
if len(trajectories.index)==0:
return []
groups=trajectories.groupby(['pos','esize','origin'])
for n,g in groups:
if g.freq.max() < 1.:
raise RuntimeError("this group is not a fixation")
return [len(g.index) for n,g in groups]
#return[len(i[1]) for i in trajectories if max(i[1])==1 and i[0][b'esize']>0]
"""
Explanation: We need to define a function to go from trajectories of selected mutations to lists of fixation times. Here we group the trajectory records by mutation and take the number of recorded generations for each fixation:
End of explanation
"""
#This will be our range of selection coefficients
svals=[1e-3,1e-2,5e-2,1e-1]
#This will be our number of populations/replicates
NPOPS=40
#A list to collect our intermediate pandas DataFrames
df=[]
for s in svals:
    #Make a real copy of sregions from above, so that appending below does
    #not modify the original list across iterations of this loop
    sregions_current = sregions[:]
    #Add a new region with +ve-ly selected variants.
    #The weight is relative to the weights of the other regions: 1e-3 here
    #vs. 1 for each BGS region, so beneficial mutations are a small fraction
    #of all newly-arising selected mutations.
    sregions_current.append(fp.ConstantS(beg=0,end=1,weight=1e-3,s=s,h=1))
#Create a vector of 40 pops.
#This means that fwdpy will use 40 threads to simulate the 40 replicates.
pops = fp.SpopVec(NPOPS,N)
sampler=fp.FreqSampler(len(pops))
traj = fp.evolve_regions_sampler(rng,
pops,
sampler,
nlist[0:], #List of population sizes over time.
0.0, #Neutral mutation rate = 0 (per gamete, per generation)
0.001, #Mutation rate to selected variants(per gamete, per generation)
0.005, #Recombination rate (per diploid, per generation)
nregions, #Defined above
sregions_current, #Defined above
recregions, #Defined above
1)#update mutation frequency trajectories every generation
#We now have a list of trajectory objects,
#and our task is to collect the fixations from
#them.
raw_ftimes = [get_fixation_times(sampler.fetch(i,freq_filter = lambda x : x[-1][1]==1. )) for i in range(len(sampler))]
for i in raw_ftimes:
#Create a pandas DataFrame
if len(i)>0:
df.append(pd.DataFrame({'s':[s]*len(i),'ftimes':i}))
#catenate all the DataFrames, and we'll send them to R for plotting.
dataForR=pd.concat(df)
%R require(ggplot2)
%%R -i dataForR
p = ggplot(dataForR,aes(x=ftimes,y=..density..)) +
geom_histogram() +
facet_wrap( ~s,nrow=2) +
xlab("Fixation time (generations)")
print(p)
#Take a look at the mean time to fixation
dataForR.groupby(['s']).mean()
"""
Explanation: Now, run the simulation itself.
Note: I'm only doing 40 replicates for each $s$, which is of course limiting.
This example runs in a few minutes on my machine.
End of explanation
"""
|
bakanchevn/DBCourseMirea2017 | Неделя 1/Задание в классе/Лабораторная 2-1.ipynb | gpl-3.0 | %sql select * from product;
"""
Explanation: Lab 2-1:
Simple table queries
Task #1
Try to write a query that returns all products with "Touch" in the name. List their name and price, and sort alphabetically by manufacturer.
End of explanation
"""
%%sql
PRAGMA case_sensitive_like=ON;
select pname, price from product
where pname LIKE '%Touch%'
order by manufacturer;
"""
Explanation: Write the query:
End of explanation
"""
%%sql
select distinct manufacturer
from product
where pname = 'Gizmo';
"""
Explanation: Write a query that returns the unique names of the companies that make Gizmo products:
End of explanation
"""
%sql SELECT DISTINCT category FROM product ORDER BY category;
%sql SELECT category FROM product ORDER BY pname;
%sql SELECT DISTINCT category FROM product ORDER BY pname;
"""
Explanation: Task #2:
ORDER BY
Try running the queries below, but first predict what each of them should return
End of explanation
"""
|
hktxt/MachineLearning | Kmeans.ipynb | gpl-3.0 | #produce data set near the center
import numpy as np
import matplotlib.pyplot as plt
real_center = [(1,1),(1,2),(2,2),(2,1)]
point_number = 50
points_x = []
points_y = []
for center in real_center:
offset_x, offset_y = np.random.randn(point_number) * 0.3, np.random.randn(point_number) * 0.25
x_val, y_val = center[0] + offset_x, center[1] + offset_y
points_x.append(x_val)
points_y.append(y_val)
points_x = np.concatenate(points_x)
points_y = np.concatenate(points_y)
# plot the data points
plt.scatter(points_x, points_y, color='green', marker='+')
# plot the true centers
center_x, center_y = zip(*real_center)
plt.scatter(center_x, center_y, color='red', marker='^')
plt.xlim(0, 3)
plt.ylim(0, 3)
plt.show()
"""
Explanation: Kmeans from scratch
1. Data generation
End of explanation
"""
# Step 1: randomly choose K points as the initial cluster centers
K = 4
p_list = np.stack([points_x, points_y], axis=1)
index = np.random.choice(len(p_list), size=K)
centeroid = p_list[index]
# plotting section below
for p in centeroid:
plt.scatter(p[0], p[1], marker='^')
plt.xlim(0, 3)
plt.ylim(0, 3)
plt.show()
# Step 2: for every point P, assign P to the set of its nearest cluster center
points_set = {key: [] for key in range(K)}
for p in p_list:
nearest_index = np.argmin(np.sum((centeroid - p) ** 2, axis=1) ** 0.5)
points_set[nearest_index].append(p)
# plotting section below
for k_index, p_set in points_set.items():
p_xs = [p[0] for p in p_set]
p_ys = [p[1] for p in p_set]
plt.scatter(p_xs, p_ys, color='C{}'.format(k_index))
for ix, p in enumerate(centeroid):
plt.scatter(p[0], p[1], color='C{}'.format(ix), marker='^', edgecolor='black', s=128)
plt.xlim(0, 3)
plt.ylim(0, 3)
plt.show()
# Step 3: for each point set, compute the new cluster center
for k_index, p_set in points_set.items():
p_xs = [p[0] for p in p_set]
p_ys = [p[1] for p in p_set]
centeroid[k_index, 0] = sum(p_xs) / len(p_set)
centeroid[k_index, 1] = sum(p_ys) / len(p_set)
# Step 4: repeat the steps above
for i in range(10):
points_set = {key: [] for key in range(K)}
for p in p_list:
nearest_index = np.argmin(np.sum((centeroid - p) ** 2, axis=1) ** 0.5)
points_set[nearest_index].append(p)
for k_index, p_set in points_set.items():
p_xs = [p[0] for p in p_set]
p_ys = [p[1] for p in p_set]
centeroid[k_index, 0] = sum(p_xs) / len(p_set)
centeroid[k_index, 1] = sum(p_ys) / len(p_set)
for k_index, p_set in points_set.items():
p_xs = [p[0] for p in p_set]
p_ys = [p[1] for p in p_set]
plt.scatter(p_xs, p_ys, color='C{}'.format(k_index))
for ix, p in enumerate(centeroid):
plt.scatter(p[0], p[1], color='C{}'.format(ix), marker='^', edgecolor='black', s=128)
plt.xlim(0, 3)
plt.ylim(0, 3)
plt.annotate('{} episode'.format(i + 1), xy=(2, 2.5), fontsize=14)
plt.show()
print(centeroid)
"""
Explanation: We generated randomly distributed points around the four centers (1, 1), (1, 2), (2, 2), (2, 1). If our clustering algorithm works correctly, the centers it finds should be very close to these four points. Let's first describe the steps of the K-Means algorithm in plain language:
Step 1 - Randomly choose K points as the cluster centers; this means we want to split the data into K classes.
Step 2 - For every point P, compute the distance from P to each cluster center, and assign P to the point set of the nearest cluster center. After this pass we have K point sets.
Step 3 - For each point set, compute its center and use it as the new cluster center.
Step 4 - Repeat steps 2 and 3 until the cluster centers stop moving.
End of explanation
"""
from sklearn.cluster import KMeans
loss = []
for i in range(1, 10):
kmeans = KMeans(n_clusters=i, max_iter=100).fit(p_list)
loss.append(kmeans.inertia_ / point_number / K)
plt.plot(range(1, 10), loss)
plt.show()
"""
Explanation: Finding the value of K
We have now walked through the full K-Means procedure, but one problem remains: how do we determine K? In the demonstration above we generated the data ourselves, so we knew K in advance, but with real data we often cannot tell the right K immediately.
A fairly general solution is to compute the average distance from each point to its own cluster center. In theory this average distance shrinks as K grows, but if we plot the average distance against K we find an elbow point: before the elbow, the average distance drops quickly as K increases, while after the elbow the decrease becomes slow. Now let's run the clustering with scikit-learn's KMeans class and plot how the average distance to the cluster centers changes with K.
End of explanation
"""
|
DistrictDataLabs/yellowbrick | examples/gokriznastic/Iris - clustering example.ipynb | apache-2.0 | # Load iris flower dataset
iris = datasets.load_iris()
X = iris.data #clustering is unsupervised learning hence we load only X(i.e.iris.data) and not Y(i.e. iris.target)
"""
Explanation: Yellowbrick — Clustering Evaluation Examples
The Yellowbrick library is a diagnostic visualization platform for machine learning that allows data scientists to steer the model selection process. It extends the scikit-learn API with a new core object: the Visualizer. Visualizers allow models to be fit and transformed as part of the scikit-learn pipeline process, providing visual diagnostics throughout the transformation of high-dimensional data.
In machine learning, clustering models are unsupervised methods that attempt to detect patterns in unlabeled data. There are two primary classes of clustering algorithms: agglomerative clustering which links similar data points together, and centroidal clustering which attempts to find centers or partitions in the data.
Currently, Yellowbrick provides two visualizers to evaluate centroidal mechanisms, particularly K-Means clustering, that help users discover an optimal $K$ parameter in the clustering metric:
- KElbowVisualizer visualizes the clusters according to a scoring function, looking for an "elbow" in the curve.
- SilhouetteVisualizer visualizes the silhouette scores of each cluster in a single model.
Load the Data
For the following examples, we'll use the widely famous Iris dataset. The data set contains 3 classes of 50 instances each, where each class refers to a type of iris plant. You can learn more about it here: Iris Data Set
The dataset is loaded using scikit-learn's datasets.load_iris() function.
End of explanation
"""
# Converting the data into dataframe
feature_names = iris.feature_names
iris_dataframe = pd.DataFrame(X, columns=feature_names)
iris_dataframe.head(10)
"""
Explanation: Let's have a look at the dataset
Before we dive into how this data can be evaluated efficiently using Yellowbrick, let's have a look at how the clusters actually look.
End of explanation
"""
# Fit a K-Means model with 3 clusters (we already know there are 3 classes in the Iris dataset)
k_means = KMeans(n_clusters=3)
k_means.fit(X)
# Use a 3D matplotlib scatter plot to visualize the data points
fig = plt.figure(figsize=(7,7))
ax = fig.add_subplot(111, projection='3d')
# Setting the colors to match cluster results
colors = ['red' if label == 0 else 'purple' if label==1 else 'green' for label in k_means.labels_]
ax.scatter(X[:,3], X[:,0], X[:,2], c=colors)
"""
Explanation: K-Means Algorithm
K-Means is a simple unsupervised machine learning algorithm that groups data into the number $K$ of clusters specified by the user, even if it is not the optimal number of clusters for the dataset.
End of explanation
"""
# Instantiate the clustering model and visualizer
model = KMeans()
visualizer = KElbowVisualizer(model, k=(2,11))
visualizer.fit(X) # Fit the data to the visualizer
visualizer.show()        # Draw/show the data
"""
Explanation: In the above example plot, one of the clusters is linearly separable and well separated from the other two clusters. The other two clusters are close together and not linearly separable.
Also the dataset is 4-dimensional i.e. it has 4 features, but for the sake of visualization using matplotlib, one of dimensions has been ignored. Therefore, it can be said that just visualization of data-points is not always enough for knowing optimal number of clusters $K$.
Elbow Method
Yellowbrick's KElbowVisualizer implements the “elbow” method of selecting the optimal number of clusters by fitting the K-Means model with a range of values for $K$. If the line chart looks like an arm, then the “elbow” (the point of inflection on the curve) is a good indication that the underlying model fits best at that point.
In the following example, the KElbowVisualizer fits the model for a range of $K$ values from 2 to 10, which is set by the parameter k=(2,11). When the model is fit with 3 clusters we can see an "elbow" in the graph, which in this case we know to be the optimal number since our dataset has 3 clusters of points.
End of explanation
"""
# Instantiate the clustering model and visualizer
model = KMeans()
visualizer = KElbowVisualizer(model, k=(2,11), metric='calinski_harabaz', timings=False)
visualizer.fit(X) # Fit the data to the visualizer
visualizer.show()        # Draw/show the data
"""
Explanation: By default, the scoring parameter metric is set to distortion, which computes the sum of squared distances from each point to its assigned center. However, two other metrics can also be used with the KElbowVisualizer—silhouette and calinski_harabaz. The silhouette score is the mean silhouette coefficient for all samples, while the calinski_harabaz score computes the ratio of dispersion between and within clusters.
The KElbowVisualizer also displays the amount of time to fit the model per $K$, which can be hidden by setting timings=False. In the following example, we'll use the calinski_harabaz score and hide the time to fit the model.
End of explanation
"""
# Instantiate the clustering model and visualizer
model = KMeans(3)
visualizer = SilhouetteVisualizer(model)
visualizer.fit(X) # Fit the data to the visualizer
visualizer.show()        # Draw/show the data
"""
Explanation: It is important to remember that the Elbow method does not work well if the data is not very clustered. In such cases, you might see a smooth curve and the optimal value of $K$ will be unclear.
You can learn more about the Elbow method at Robert Grove's Blocks.
Silhouette Visualizer
Silhouette analysis can be used to evaluate the density and separation between clusters. The score is calculated by averaging the silhouette coefficient for each sample, which is computed as the difference between the average intra-cluster distance and the mean nearest-cluster distance for each sample, normalized by the maximum value. This produces a score between -1 and +1, where scores near +1 indicate high separation and scores near -1 indicate that the samples may have been assigned to the wrong cluster.
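That per-sample score can be computed by hand — a minimal sketch on a made-up toy 2D dataset:

```python
import numpy as np

def silhouette_sample(X, labels, i):
    """Silhouette coefficient for sample i: (b - a) / max(a, b)."""
    d = np.linalg.norm(X - X[i], axis=1)
    own = (labels == labels[i]) & (np.arange(len(X)) != i)
    a = d[own].mean()  # mean intra-cluster distance
    b = min(d[labels == k].mean() for k in set(labels) if k != labels[i])
    return (b - a) / max(a, b)

X = np.array([[0., 0.], [0., 1.], [5., 5.], [5., 6.]])
labels = np.array([0, 0, 1, 1])
s = silhouette_sample(X, labels, 0)  # close to +1: well-separated clusters
```

scikit-learn's silhouette_samples vectorizes this computation over all samples, which is what the SilhouetteVisualizer plots.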
The SilhouetteVisualizer displays the silhouette coefficient for each sample on a per-cluster basis, allowing users to visualize the density and separation of the clusters. This is particularly useful for determining cluster imbalance or for selecting a value for $K$ by comparing multiple visualizers.
Since we are working with the Iris dataset, we already know that the data points are grouped into 3 classes. So for the first SilhouetteVisualizer example, we'll set $K$ to 3 in order to show how the plot looks when using the optimal value of $K$.
Notice that graph contains homogeneous and long silhouettes. In addition, the vertical red-dotted line on the plot indicates the average silhouette score for all observations.
End of explanation
"""
# Instantiate the clustering model and visualizer
model = KMeans(6)
visualizer = SilhouetteVisualizer(model)
visualizer.fit(X) # Fit the data to the visualizer
visualizer.show()        # Draw/show the data
"""
Explanation: For the next example, let's see what happens when using a non-optimal value for $K$, in this case, 6.
Now we see that the silhouettes of the clusters have become narrower and of unequal width, and their silhouette coefficient scores have dropped. This occurs because the width of each silhouette is proportional to the number of samples assigned to the cluster. The model is trying to fit our data into a larger than optimal number of clusters, making some of the clusters narrower but much less cohesive, as seen from the drop in the average silhouette score.
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
from yellowbrick.style import color_palette
from yellowbrick.cluster.base import ClusteringScoreVisualizer
from sklearn.metrics import silhouette_score, silhouette_samples
## Packages for export
__all__ = [
"SilhouetteVisualizer"
]
##########################################################################
## Silhouette Method for K Selection
##########################################################################
class SilhouetteVisualizer(ClusteringScoreVisualizer):
"""
The Silhouette Visualizer displays the silhouette coefficient for each
sample on a per-cluster basis, visually evaluating the density and
separation between clusters. The score is calculated by averaging the
silhouette coefficient for each sample, computed as the difference
between the average intra-cluster distance and the mean nearest-cluster
distance for each sample, normalized by the maximum value. This produces a
score between -1 and +1, where scores near +1 indicate high separation
and scores near -1 indicate that the samples may have been assigned to
the wrong cluster.
In SilhouetteVisualizer plots, clusters with higher scores have wider
silhouettes, but clusters that are less cohesive will fall short of the
average score across all clusters, which is plotted as a vertical dotted
red line.
This is particularly useful for determining cluster imbalance, or for
selecting a value for K by comparing multiple visualizers.
Parameters
----------
model : a Scikit-Learn clusterer
Should be an instance of a centroidal clustering algorithm (``KMeans``
or ``MiniBatchKMeans``).
ax : matplotlib Axes, default: None
The axes to plot the figure on. If None is passed in the current axes
will be used (or generated if required).
kwargs : dict
Keyword arguments that are passed to the base class and may influence
the visualization as defined in other Visualizers.
Attributes
----------
silhouette_score_ : float
Mean Silhouette Coefficient for all samples. Computed via scikit-learn
`sklearn.metrics.silhouette_score`.
silhouette_samples_ : array, shape = [n_samples]
Silhouette Coefficient for each sample. Computed via scikit-learn
`sklearn.metrics.silhouette_samples`.
n_samples_ : integer
Number of total samples in the dataset (X.shape[0])
n_clusters_ : integer
Number of clusters (e.g. n_clusters or k value) passed to internal
scikit-learn model.
Examples
--------
>>> from yellowbrick.cluster import SilhouetteVisualizer
>>> from sklearn.cluster import KMeans
>>> model = SilhouetteVisualizer(KMeans(10))
>>> model.fit(X)
>>> model.show()
"""
def __init__(self, model, ax=None, **kwargs):
super(SilhouetteVisualizer, self).__init__(model, ax=ax, **kwargs)
# Visual Properties
# TODO: Fix the color handling
self.colormap = kwargs.get('colormap', 'set1')
self.color = kwargs.get('color', None)
def fit(self, X, y=None, **kwargs):
"""
Fits the model and generates the silhouette visualization.
"""
# TODO: decide to use this method or the score method to draw.
# NOTE: Probably this would be better in score, but the standard score
# is a little different and I'm not sure how it's used.
# Fit the wrapped estimator
self.estimator.fit(X, y, **kwargs)
# Get the properties of the dataset
self.n_samples_ = X.shape[0]
self.n_clusters_ = self.estimator.n_clusters
# Compute the scores of the cluster
labels = self.estimator.predict(X)
self.silhouette_score_ = silhouette_score(X, labels)
self.silhouette_samples_ = silhouette_samples(X, labels)
# Draw the silhouette figure
self.draw(labels)
# Return the estimator
return self
def draw(self, labels):
"""
Draw the silhouettes for each sample and the average score.
Parameters
----------
labels : array-like
An array with the cluster label for each silhouette sample,
usually computed with ``predict()``. Labels are not stored on the
visualizer so that the figure can be redrawn with new data.
"""
# Track the positions of the lines being drawn
y_lower = 10 # The bottom of the silhouette
# Get the colors from the various properties
# TODO: Use resolve_colors instead of this
colors = color_palette(self.colormap, self.n_clusters_)
# For each cluster, plot the silhouette scores
for idx in range(self.n_clusters_):
# Collect silhouette scores for samples in the current cluster.
values = self.silhouette_samples_[labels == idx]
values.sort()
# Compute the size of the cluster and find upper limit
size = values.shape[0]
y_upper = y_lower + size
color = colors[idx]
self.ax.fill_betweenx(
np.arange(y_lower, y_upper), 0, values,
facecolor=color, edgecolor=color, alpha=0.5
)
# Label the silhouette plots with their cluster numbers
self.ax.text(-0.05, y_lower + 0.5 * size, str(idx))
# Compute the new y_lower for next plot
y_lower = y_upper + 10
# The vertical line for average silhouette score of all the values
self.ax.axvline(
x=self.silhouette_score_, color="red", linestyle="--"
)
return self.ax
def finalize(self):
"""
Prepare the figure for rendering by setting the title and adjusting
the limits on the axes, adding labels and a legend.
"""
# Set the title
self.set_title((
"Silhouette Plot of {} Clustering for {} Samples in {} Centers"
).format(
self.name, self.n_samples_, self.n_clusters_
))
# Set the X and Y limits
# The silhouette coefficient can range from -1, 1;
# but here we scale the plot according to our visualizations
# l_xlim and u_xlim are lower and upper limits of the x-axis,
# set according to our calculated maximum and minimum silhouette score along with necessary padding
l_xlim = max(-1, min(-0.1, round(min(self.silhouette_samples_) - 0.1, 1)))
u_xlim = min(1, round(max(self.silhouette_samples_) + 0.1, 1))
self.ax.set_xlim([l_xlim, u_xlim])
# The (n_clusters_+1)*10 is for inserting blank space between
# silhouette plots of individual clusters, to demarcate them clearly.
self.ax.set_ylim([0, self.n_samples_ + (self.n_clusters_ + 1) * 10])
# Set the x and y labels
self.ax.set_xlabel("silhouette coefficient values")
self.ax.set_ylabel("cluster label")
# Set the ticks on the axis object.
self.ax.set_yticks([]) # Clear the yaxis labels / ticks
self.ax.xaxis.set_major_locator(plt.MultipleLocator(0.1)) # Set the ticks at multiples of 0.1
# Instantiate the clustering model and visualizer
# (assumes a feature matrix ``X`` from earlier in the notebook)
from sklearn.cluster import KMeans

model = KMeans(6)
visualizer = SilhouetteVisualizer(model)
visualizer.fit(X)    # Fit the data to the visualizer
visualizer.show()    # Finalize and render the figure
"""
Explanation: After Gopal's improvements to Silhouette Visualizer
End of explanation
"""
|
NGSchool2016/ngschool2016-materials | jupyter/ndolgikh/.ipynb_checkpoints/NGSchool_python-checkpoint.ipynb | gpl-3.0 | %pylab inline
"""
Explanation: Set the matplotlib magic to notebook enable inline plots
End of explanation
"""
import subprocess
import matplotlib.pyplot as plt
import random
import numpy as np
"""
Explanation: Calculate the Nonredundant Read Fraction (NRF)
SAM format example:
SRR585264.8766235 0 1 4 15 35M * 0 0 CTTAAACAATTATTCCCCCTGCAAACATTTTCAAT GGGGGGGGGGGGGGGGGGGGGGFGGGGGGGGGGGG XT:A:U NM:i:1 X0:i:1 X1:i:6 XM:i:1 XO:i:0 XG:i:0 MD:Z:8T26
Import the required modules
End of explanation
"""
plt.style.use('ggplot')
figsize(10,5)
"""
Explanation: Make figures prettier and bigger
End of explanation
"""
file = "/ngschool/chip_seq/bwa/input.sorted.bam"
"""
Explanation: Parse the SAM file and extract the unique start coordinates.
First store the file name in the variable
End of explanation
"""
p = subprocess.Popen(["samtools", "view", "-q10", "-F260", file],
stdout=subprocess.PIPE)
coords = []
for line in p.stdout:
flag, ref, start = line.decode('utf-8').split()[1:4]
coords.append([flag, ref, start])
coords[:3]
"""
Explanation: Next we read the file using samtools. From each read we need to store the flag, chromosome name and start coordinate.
End of explanation
"""
len(coords)
"""
Explanation: What is the total number of our uniquely mapped reads?
End of explanation
"""
random.seed(1234)
sample = random.sample(coords, 1000000)
len(sample)
"""
Explanation: Randomly sample the coordinates to get 1M for NRF calculations
End of explanation
"""
uniqueStarts = {'watson': set(), 'crick': set()}
for coord in sample:
flag, ref, start = coord
if int(flag) & 16:
uniqueStarts['crick'].add((ref, start))
else:
uniqueStarts['watson'].add((ref, start))
"""
Explanation: How many of those coordinates are unique? (We will use the set python object, which only keeps the unique items.)
End of explanation
"""
len(uniqueStarts['watson'])
"""
Explanation: How many on the Watson strand?
End of explanation
"""
len(uniqueStarts['crick'])
"""
Explanation: And on the Crick?
End of explanation
"""
NRF_input = (len(uniqueStarts['watson']) + len(uniqueStarts['crick']))*1.0/len(sample)
print(NRF_input)
"""
Explanation: Calculate the NRF
End of explanation
"""
def calculateNRF(filePath, pickSample=True, sampleSize=10000000, seed=1234):
p = subprocess.Popen(['samtools', 'view', '-q10', '-F260', filePath],
stdout=subprocess.PIPE)
coordType = np.dtype({'names': ['flag', 'ref', 'start'],
'formats': ['uint16', 'U10', 'uint32']})
coordArray = np.empty(10000000, dtype=coordType)
i = 0
for line in p.stdout:
if i >= len(coordArray):
coordArray = np.append(coordArray, np.empty(1000000, dtype=coordType), axis=0)
fg, rf, st = line.decode('utf-8').split()[1:4]
coordArray[i] = np.array((fg, rf, st), dtype=coordType)
i += 1
coordArray = coordArray[:i]
sample = coordArray
if pickSample and len(coordArray) > sampleSize:
np.random.seed(seed)
sample = np.random.choice(coordArray, sampleSize, replace=False)
uniqueStarts = {'watson': set(), 'crick': set()}
for read in sample:
flag, ref, start = read
if flag & 16:
uniqueStarts['crick'].add((ref, start))
else:
uniqueStarts['watson'].add((ref, start))
NRF = (len(uniqueStarts['watson']) + len(uniqueStarts['crick']))*1.0/len(sample)
return NRF
"""
Explanation: Let's create a function from what we did above and apply it to all of our files!
To use our function on the real sequencing datasets (not only on a small subset) we need to optimize our method a bit: we will use the Python module called numpy.
End of explanation
"""
NRF_chip = calculateNRF("/ngschool/chip_seq/bwa/sox2_chip.sorted.bam", sampleSize=1000000)
print(NRF_chip)
"""
Explanation: Calculate the NRF for the chip-seq sample
End of explanation
"""
plt.bar([0,2],[NRF_input, NRF_chip], width=1)
plt.xlim([-0.5,3.5]), plt.xticks([0.5, 2.5], ['Input', 'ChIP'])
plt.xlabel('Sample')
plt.ylabel('NRF')
plt.ylim([0, 1.25]), plt.yticks(np.arange(0, 1.2, 0.2))
plt.plot((-0.5,3.5), (0.8,0.8), 'red', linestyle='dashed')
plt.show()
"""
Explanation: Plot the NRF!
End of explanation
"""
countList = []
with open('/ngschool/chip_seq/bedtools/input_coverage.bed', 'r') as covFile:
for line in covFile:
countList.append(int(line.strip('\n').split('\t')[3]))
countList[0:6]
countList[-15:]
"""
Explanation: Calculate the Signal Extraction Scaling
Load the results from the coverage calculations
End of explanation
"""
plt.plot(range(len(countList)), countList)
plt.xlabel('Bin number')
plt.ylabel('Bin coverage')
plt.xlim([0, len(countList)])
plt.show()
"""
Explanation: Let's see where our reads align on the genome. Plot the distribution of tags along the genome.
End of explanation
"""
countList.sort()
countList[0:6]
"""
Explanation: Now sort the list: order the windows based on their tag count
End of explanation
"""
countSum = sum(countList)
countSum
"""
Explanation: Sum all the aligned tags
End of explanation
"""
countFraction = []
for i, count in enumerate(countList):
if i == 0:
countFraction.append(count*1.0 / countSum)
else:
countFraction.append((count*1.0 / countSum) + countFraction[i-1])
"""
Explanation: Calculate the cumulative fraction of tags along the ordered windows.
End of explanation
"""
countFraction[-5:]
"""
Explanation: Look at the last five items of the list:
End of explanation
"""
winNumber = len(countFraction)
winNumber
"""
Explanation: Calculate the number of windows.
End of explanation
"""
winFraction = []
for i in range(winNumber):
winFraction.append(i*1.0 / winNumber)
"""
Explanation: Calculate each window's position as a fraction of the total number of windows.
End of explanation
"""
winFraction[-5:]
"""
Explanation: Look at the last five items of our new list:
End of explanation
"""
def calculateSES(filePath):
countList = []
with open(filePath, 'r') as covFile:
for line in covFile:
countList.append(int(line.strip('\n').split('\t')[3]))
plt.plot(range(len(countList)), countList)
plt.xlabel('Bin number')
plt.ylabel('Bin coverage')
plt.xlim([0, len(countList)])
plt.show()
countList.sort()
countSum = sum(countList)
countFraction = []
for i, count in enumerate(countList):
if i == 0:
countFraction.append(count*1.0 / countSum)
else:
countFraction.append((count*1.0 / countSum) + countFraction[i-1])
winNumber = len(countFraction)
winFraction = []
for i in range(winNumber):
winFraction.append(i*1.0 / winNumber)
return [winFraction, countFraction]
"""
Explanation: Now prepare the function!
End of explanation
"""
chipSes = calculateSES("/ngschool/chip_seq/bedtools/sox2_chip_coverage.bed")
"""
Explanation: Use our function to calculate the signal extraction scaling for the Sox2 ChIP sample:
End of explanation
"""
plt.plot(winFraction, countFraction, label='input')
plt.plot(chipSes[0], chipSes[1], label='Sox2 ChIP')
plt.ylim([0,1])
plt.xlabel('Ordered window fraction')
plt.ylabel('Genome coverage fraction')
plt.legend(loc='best')
plt.show()
"""
Explanation: Now we can plot the calculated fractions for both the input and ChIP sample:
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.20/_downloads/112f45fdd43e503d5a44dfeb8227317e/plot_read_proj.ipynb | bsd-3-clause | # Author: Joan Massich <mailsik@gmail.com>
#
# License: BSD (3-clause)
import matplotlib.pyplot as plt
import mne
from mne import read_proj
from mne.io import read_raw_fif
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
subjects_dir = data_path + '/subjects'
fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
ecg_fname = data_path + '/MEG/sample/sample_audvis_ecg-proj.fif'
"""
Explanation: Read and visualize projections (SSP and other)
This example shows how to read and visualize Signal-Space Projection (SSP)
vectors. Such projections are sometimes referred to as PCA projections.
End of explanation
"""
raw = read_raw_fif(fname)
empty_room_proj = raw.info['projs']
# Display the projections stored in `info['projs']` from the raw object
raw.plot_projs_topomap()
"""
Explanation: Load the FIF file and display the projections present in the file. Here the
projections are added to the file during the acquisition and are obtained
from empty room recordings.
End of explanation
"""
n_cols = len(empty_room_proj)
fig, axes = plt.subplots(1, n_cols, figsize=(2 * n_cols, 2))
for proj, ax in zip(empty_room_proj, axes):
proj.plot_topomap(axes=ax, info=raw.info)
"""
Explanation: Display the projections one by one
End of explanation
"""
assert isinstance(empty_room_proj, list)
mne.viz.plot_projs_topomap(empty_room_proj, info=raw.info)
"""
Explanation: Use the function in mne.viz to display a list of projections
End of explanation
"""
# read the projections
ecg_projs = read_proj(ecg_fname)
# add them to raw and plot everything
raw.add_proj(ecg_projs)
raw.plot_projs_topomap()
"""
Explanation: .. TODO: add this when the tutorial is up: "As shown in the tutorial
:doc:../auto_tutorials/preprocessing/plot_projectors, ..."
The ECG projections can be loaded from a file and added to the raw object
End of explanation
"""
|
statsmodels/statsmodels.github.io | v0.13.2/examples/notebooks/generated/ols.ipynb | bsd-3-clause | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import statsmodels.api as sm
np.random.seed(9876789)
"""
Explanation: Ordinary Least Squares
End of explanation
"""
nsample = 100
x = np.linspace(0, 10, 100)
X = np.column_stack((x, x ** 2))
beta = np.array([1, 0.1, 10])
e = np.random.normal(size=nsample)
"""
Explanation: OLS estimation
Artificial data:
End of explanation
"""
X = sm.add_constant(X)
y = np.dot(X, beta) + e
"""
Explanation: Our model needs an intercept so we add a column of 1s:
End of explanation
"""
model = sm.OLS(y, X)
results = model.fit()
print(results.summary())
"""
Explanation: Fit and summary:
End of explanation
"""
print("Parameters: ", results.params)
print("R2: ", results.rsquared)
"""
Explanation: Quantities of interest can be extracted directly from the fitted model. Type dir(results) for a full list. Here are some examples:
End of explanation
"""
nsample = 50
sig = 0.5
x = np.linspace(0, 20, nsample)
X = np.column_stack((x, np.sin(x), (x - 5) ** 2, np.ones(nsample)))
beta = [0.5, 0.5, -0.02, 5.0]
y_true = np.dot(X, beta)
y = y_true + sig * np.random.normal(size=nsample)
"""
Explanation: OLS non-linear curve but linear in parameters
We simulate artificial data with a non-linear relationship between x and y:
End of explanation
"""
res = sm.OLS(y, X).fit()
print(res.summary())
"""
Explanation: Fit and summary:
End of explanation
"""
print("Parameters: ", res.params)
print("Standard errors: ", res.bse)
print("Predicted values: ", res.predict())
"""
Explanation: Extract other quantities of interest:
End of explanation
"""
pred_ols = res.get_prediction()
iv_l = pred_ols.summary_frame()["obs_ci_lower"]
iv_u = pred_ols.summary_frame()["obs_ci_upper"]
fig, ax = plt.subplots(figsize=(8, 6))
ax.plot(x, y, "o", label="data")
ax.plot(x, y_true, "b-", label="True")
ax.plot(x, res.fittedvalues, "r--.", label="OLS")
ax.plot(x, iv_u, "r--")
ax.plot(x, iv_l, "r--")
ax.legend(loc="best")
"""
Explanation: Draw a plot to compare the true relationship to OLS predictions. Confidence intervals around the predictions are built from the results' get_prediction output.
End of explanation
"""
nsample = 50
groups = np.zeros(nsample, int)
groups[20:40] = 1
groups[40:] = 2
# dummy = (groups[:,None] == np.unique(groups)).astype(float)
dummy = pd.get_dummies(groups).values
x = np.linspace(0, 20, nsample)
# drop reference category
X = np.column_stack((x, dummy[:, 1:]))
X = sm.add_constant(X, prepend=False)
beta = [1.0, 3, -3, 10]
y_true = np.dot(X, beta)
e = np.random.normal(size=nsample)
y = y_true + e
"""
Explanation: OLS with dummy variables
We generate some artificial data. There are 3 groups which will be modelled using dummy variables. Group 0 is the omitted/benchmark category.
End of explanation
"""
print(X[:5, :])
print(y[:5])
print(groups)
print(dummy[:5, :])
"""
Explanation: Inspect the data:
End of explanation
"""
res2 = sm.OLS(y, X).fit()
print(res2.summary())
"""
Explanation: Fit and summary:
End of explanation
"""
pred_ols2 = res2.get_prediction()
iv_l = pred_ols.summary_frame()["obs_ci_lower"]
iv_u = pred_ols.summary_frame()["obs_ci_upper"]
fig, ax = plt.subplots(figsize=(8, 6))
ax.plot(x, y, "o", label="Data")
ax.plot(x, y_true, "b-", label="True")
ax.plot(x, res2.fittedvalues, "r--.", label="Predicted")
ax.plot(x, iv_u, "r--")
ax.plot(x, iv_l, "r--")
legend = ax.legend(loc="best")
"""
Explanation: Draw a plot to compare the true relationship to OLS predictions:
End of explanation
"""
R = [[0, 1, 0, 0], [0, 0, 1, 0]]
print(np.array(R))
print(res2.f_test(R))
"""
Explanation: Joint hypothesis test
F test
We want to test the hypothesis that both coefficients on the dummy variables are equal to zero, that is, $R \times \beta = 0$. An F test leads us to strongly reject the null hypothesis of identical constant terms in the 3 groups:
End of explanation
"""
print(res2.f_test("x2 = x3 = 0"))
"""
Explanation: You can also use formula-like syntax to test hypotheses
End of explanation
"""
beta = [1.0, 0.3, -0.0, 10]
y_true = np.dot(X, beta)
y = y_true + np.random.normal(size=nsample)
res3 = sm.OLS(y, X).fit()
print(res3.f_test(R))
print(res3.f_test("x2 = x3 = 0"))
"""
Explanation: Small group effects
If we generate artificial data with smaller group effects, the F test can no longer reject the null hypothesis:
End of explanation
"""
from statsmodels.datasets.longley import load_pandas
y = load_pandas().endog
X = load_pandas().exog
X = sm.add_constant(X)
"""
Explanation: Multicollinearity
The Longley dataset is well known to have high multicollinearity. That is, the exogenous predictors are highly correlated. This is problematic because it can affect the stability of our coefficient estimates as we make minor changes to model specification.
End of explanation
"""
ols_model = sm.OLS(y, X)
ols_results = ols_model.fit()
print(ols_results.summary())
"""
Explanation: Fit and summary:
End of explanation
"""
norm_x = X.values
for i, name in enumerate(X):
if name == "const":
continue
norm_x[:, i] = X[name] / np.linalg.norm(X[name])
norm_xtx = np.dot(norm_x.T, norm_x)
"""
Explanation: Condition number
One way to assess multicollinearity is to compute the condition number. Values over 20 are worrisome (see Greene 4.9). The first step is to normalize the independent variables to have unit length:
End of explanation
"""
eigs = np.linalg.eigvals(norm_xtx)
condition_number = np.sqrt(eigs.max() / eigs.min())
print(condition_number)
"""
Explanation: Then, we take the square root of the ratio of the biggest to the smallest eigen values.
End of explanation
"""
ols_results2 = sm.OLS(y.iloc[:14], X.iloc[:14]).fit()
print(
"Percentage change %4.2f%%\n"
* 7
% tuple(
[
i
for i in (ols_results2.params - ols_results.params)
/ ols_results.params
* 100
]
)
)
"""
Explanation: Dropping an observation
Greene also points out that dropping a single observation can have a dramatic effect on the coefficient estimates:
End of explanation
"""
infl = ols_results.get_influence()
"""
Explanation: We can also look at formal statistics for this such as the DFBETAS -- a standardized measure of how much each coefficient changes when that observation is left out.
End of explanation
"""
2.0 / len(X) ** 0.5
print(infl.summary_frame().filter(regex="dfb"))
"""
Explanation: In general we may consider DFBETAS in absolute value greater than $2/\sqrt{N}$ to be influential observations
End of explanation
"""
|
srippa/nn_deep | tf/tf_hellow.ipynb | mit | import tensorflow as tf
#----------------------------------------------------------
# Basic graph structure and operations
# tf.add , tf.sub, tf.mul , tf.div , tf.mod , tf.pow
# tf.less , tf.greater , tf.less_equal , tf.greater_equal
# tf.logical_and , tf.logical_or , tf.logical_xor
#------------------------------------------------------------
tf.reset_default_graph()
print tf.add(1,2)
print tf.mul(7,9)
graph = tf.get_default_graph()
for op in graph.get_operations():
print op.name
sess = tf.Session() # For regular python code
sess.run(tf.initialize_all_variables())  # variables must be initialized inside the session
print 'Addition is: {} + {} = {} '.format(sess.run('Add/x:0'),sess.run('Add/y:0'),sess.run('Add:0'))
print 'Multiplication: {} * {} = {}'.format(sess.run('Mul/x:0'),sess.run('Mul/y:0'),sess.run('Mul:0'))
"""
Explanation: Installation tips
Create Anaconda virtual environment with ipython notebook support
conda create -n tf ipython-notebook --yes
The set up as explained in the official site failed for me. Something to do with failure to update setup tools. The remedy was doing as explained in here:
pip install --ignore-installed --upgrade pip setuptools
Hellow TensorFlow
Basic graph creation and how to inspect the elements of the graph
End of explanation
"""
tf.reset_default_graph()
m1 = tf.constant([[1., 2.], [3.,4]])
m2 = tf.constant([[5.,6.],[7.,8.]])
m3 = tf.matmul(m1, m2)
# have to run the graph using a session
sess = tf.Session()
sess.run(tf.initialize_all_variables())
print 'm3 = ',sess.run(m3)
sess.close()
"""
Explanation: Constants
End of explanation
"""
tf.reset_default_graph()
v1 = tf.Variable(1, name="my_variable")
v2 = tf.Variable(tf.zeros([3,5]),name='5_zeros') # Variable with innitializer
c1 = tf.random_normal([4, 4], mean=0.0, stddev=1.0) # 4x4 matrix with normal random variables
v3 = tf.Variable(c1,name='RandomMatrix')
v4 = tf.Variable(tf.ones(6))
counter = tf.Variable(0)
sess = tf.Session()
sess.run(tf.initialize_all_variables())
print 'v1 =',sess.run(v1)
print 'v2 =',sess.run(v2)
print 'v3=',sess.run(v3)
print 'v4=',sess.run(v4)
# Changing the value of a variable
print 'Changed v1 =',sess.run(v1.assign(v1 + 7))
print 'v1 new val =',sess.run(v1)
print sess.run(counter.assign_add(1))
print sess.run(counter.assign_add(1))
sess.close()
"""
Explanation: Variables
End of explanation
"""
tf.reset_default_graph()
v1 = tf.add(1,2,name='add')
with tf.name_scope("Scope1"):
with tf.name_scope("Scope_nested"):
vs = tf.mul(5, 5,name='mul')
print v1.name
print vs.name
tf.reset_default_graph()
graph = tf.get_default_graph()
graph.get_operations()
# Model of a simple neuron: y <-- x * w
x = tf.constant(1.0,name='x')
w = tf.Variable(0.8,name='w')
y = tf.mul(w , x, name='y')
y_ = tf.constant(0.0,name='y_train')
loss = (y-y_)**2
#--------------------------------------------------------------
# Print the nodes of teh graph, also called 'operations' or 'ops'
#--------------------------------------------------------------
print 'Operations in graph \n==========================='
for op in graph.get_operations():
print op.name
"""
Explanation: Scopes
End of explanation
"""
import tensorflow as tf
x = tf.constant(1.0, name='input')
w = tf.Variable(0.8, name='weight')
y = tf.mul(w, x, name='output')
y_ = tf.constant(0.0, name='correct_value')
loss = tf.pow(y - y_, 2, name='loss')
train_step = tf.train.GradientDescentOptimizer(0.025).minimize(loss)
for value in [x, w, y, y_, loss]:
tf.scalar_summary(value.op.name, value)
summaries = tf.merge_all_summaries()
sess = tf.Session()
summary_writer = tf.train.SummaryWriter('log_simple_stats', sess.graph)
sess.run(tf.initialize_all_variables())
for i in range(100):
summary_writer.add_summary(sess.run(summaries), i)
sess.run(train_step)
"""
Explanation: Training and visualization
To see the graphs invoke the command:
tensorboard --logdir=log_simple_stat
which can then be viewed in the browser at
localhost:6006/#events
End of explanation
"""
|
adriantorrie/adriantorrie.github.io_src | content/downloads/notebooks/udacity/deep_learning_foundations_nanodegree/project_2_notes_convolutional_neural_networks.ipynb | mit | %run ../../../code/version_check.py
"""
Explanation: Table of Contents
<p><div class="lev1 toc-item"><a href="#Summary" data-toc-modified-id="Summary-1"><span class="toc-item-num">1 </span>Summary</a></div><div class="lev1 toc-item"><a href="#Version-Control" data-toc-modified-id="Version-Control-2"><span class="toc-item-num">2 </span>Version Control</a></div><div class="lev1 toc-item"><a href="#Change-Log" data-toc-modified-id="Change-Log-3"><span class="toc-item-num">3 </span>Change Log</a></div><div class="lev1 toc-item"><a href="#Setup" data-toc-modified-id="Setup-4"><span class="toc-item-num">4 </span>Setup</a></div><div class="lev1 toc-item"><a href="#Convolutional-Neural-Networks" data-toc-modified-id="Convolutional-Neural-Networks-5"><span class="toc-item-num">5 </span>Convolutional Neural Networks</a></div><div class="lev2 toc-item"><a href="#CS231n-Winter-2016:-Lecture-7:-Convolutional-Neural-Networks-for-Visual-Recognition" data-toc-modified-id="CS231n-Winter-2016:-Lecture-7:-Convolutional-Neural-Networks-for-Visual-Recognition-51"><span class="toc-item-num">5.1 </span>CS231n Winter 2016: Lecture 7: Convolutional Neural Networks for Visual Recognition</a></div><div class="lev1 toc-item"><a href="#Key-Terms" data-toc-modified-id="Key-Terms-6"><span class="toc-item-num">6 </span>Key Terms</a></div><div class="lev2 toc-item"><a href="#Architecture" data-toc-modified-id="Architecture-61"><span class="toc-item-num">6.1 </span>Architecture</a></div><div class="lev2 toc-item"><a href="#Hubel-and-Wesiel" data-toc-modified-id="Hubel-and-Wesiel-62"><span class="toc-item-num">6.2 </span>Hubel and Wesiel</a></div><div class="lev2 toc-item"><a href="#Additional-Reading" data-toc-modified-id="Additional-Reading-63"><span class="toc-item-num">6.3 </span>Additional Reading</a></div><div class="lev2 toc-item"><a href="#Addional-Videos" data-toc-modified-id="Addional-Videos-64"><span class="toc-item-num">6.4 </span>Addional Videos</a></div><div class="lev3 toc-item"><a 
href="#CS231n-Winter-2016:-Lecture-6:-Neural-Networks-Part-3-/-Intro-to-ConvNets" data-toc-modified-id="CS231n-Winter-2016:-Lecture-6:-Neural-Networks-Part-3-/-Intro-to-ConvNets-641"><span class="toc-item-num">6.4.1 </span>CS231n Winter 2016: Lecture 6: Neural Networks Part 3 / Intro to ConvNets</a></div><div class="lev2 toc-item"><a href="#Other-resources" data-toc-modified-id="Other-resources-65"><span class="toc-item-num">6.5 </span>Other resources</a></div><div class="lev3 toc-item"><a href="#CIFAR-10-ConvNet-Demo---very-good!" data-toc-modified-id="CIFAR-10-ConvNet-Demo---very-good!-651"><span class="toc-item-num">6.5.1 </span>CIFAR-10 ConvNet Demo - very good!</a></div>
# Summary
Notes taken to help for the second project, image recognition, for the [Deep Learning Foundations Nanodegree](https://www.udacity.com/course/deep-learning-nanodegree-foundation--nd101) course delivered by Udacity.
My Github repo for this project can be found here: [adriantorrie/udacity_dlfnd_project_2](https://github.com/adriantorrie/udacity_dlfnd_project_1)
# Version Control
End of explanation
"""
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from IPython.display import YouTubeVideo
plt.style.use('bmh')
matplotlib.rcParams['figure.figsize'] = (15, 4)
"""
Explanation: Change Log
Date Created: 2017-03-24
Date of Change Change Notes
-------------- ----------------------------------------------------------------
2017-03-24 Initial draft
Setup
End of explanation
"""
YouTubeVideo("LxfUGhug-iQ", height=365, width=650)
"""
Explanation: [Top]
Convolutional Neural Networks
CS231n Winter 2016: Lecture 7: Convolutional Neural Networks for Visual Recognition
Andrej Karpathy
Published 27 Jan 2016
Reddit /r/cs231n
Standard YouTube Licence
Supporting notes
End of explanation
"""
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.linspace(start=-10, stop=11, num=100)
y = sigmoid(x)
upper_bound = np.repeat([1.0,], len(x))
success_threshold = np.repeat([0.5,], len(x))
lower_bound = np.repeat([0.0,], len(x))
plt.plot(
# upper bound
x, upper_bound, 'w--',
# success threshold
x, success_threshold, 'w--',
# lower bound
x, lower_bound, 'w--',
# sigmoid
x, y
)
plt.grid(False)
plt.xlabel(r'$x$')
plt.ylabel(r'Probability of success')
plt.title('Sigmoid Function Example')
plt.show()
"""
Explanation: # Key Terms
Filter = Kernel => The filter "slides" over the input image
Stride = Number of pixels the filter moves at each step
Padding = The padding around the image to allow the filter output to be the same dimensions as the input
[Top]
Architecture
The general architecture of a convnet is shown below. It has been influenced by the Hubel and Wiesel studies (below).
<img src="https://cs231n.github.io/assets/cnn/cnn.jpeg",width=450,height=200>
## Hubel and Wesiel
Hubel and Wiesel did studies in 1959, 1962, and 1968, using cats to determine how the visual cortex works.
<span style="color:red">Inputs://cs231n.github.io/convolutional-networks/#conv)
<span style="color:blue">1st filter bank with multiple activation mapsW_1 \times H_1 \times D_1$
2nd Dotted layerhe Conv Layer:**
is another filter bank
<span style="color:gray">3rd filter bank with multiple activation mapses four hyperparameters:
All layers are 3-dimensional, with the 3 filter layers above progressing from (left to right) fine detail to large object recognition, the image below shows the same feature hierarchy for object recognition, oriented from bottom to top.
* 1st <span style="color:blue">bluetheir spatial extent $F$
layer is the simple cell / <span style="background-color: #e6e6e6">low level layerces a volume of size $W_2 \times H_2 \times D_2$ where:
in the image below
* 2nd Dotted layer is the complex cell / <span style="background-color: #e6e6e6">mid level layers a volume of size $W_2 \times H_2 \times D_2$ where:
in the image below
* 3rd <span style="color:gray">grayof zero padding $P$
layer is the hyper-complex cell / <span style="background-color: #e6e6e6">high level layerW_1 − F + 2_P) / S + 1$
in the image below
<img src="http://cns-alumni.bu.edu/~slehar/webstuff/pcave/hubel.jpg" />
Directly quoted from [here](https://cs231n.github.io/convolutional-networks/#conv)
To summarize, the Conv Layer:
Accepts a volume of size $W_1 \times H_1 \times D_1$
Requires four hyperparameters:
the number of filters $K$
their spatial extent $F$
the stride $S$
the amount of zero padding $P$
Produces a volume of size $W_2 \times H_2 \times D_2$ where:
$W_2 = (W_1 - F + 2P) / S + 1$
$H_2 = (H_1 - F + 2P) / S + 1$ (i.e. width and height are computed equally by symmetry)
$D_2 = K$
With parameter sharing, it introduces:
$F \times F \times D_1$ weights per filter
for a total of $F \times F \times D_1 \times K$ weights
and $K$ biases
In the output volume, the $d$-th depth slice (of size $W_2 \times H_2$) is the result of performing:
a valid convolution of the $d$-th filter over the input volume
with a stride of $S$, and then
offset by $d$-th bias
A common setting of the hyperparameters is:
$F = 3$
$S = 1$
$P = 1$
However, there are common conventions and rules of thumb that motivate these hyperparameters. See the ConvNet architectures section ...
[Top]
End of explanation
"""
YouTubeVideo("v=hd_KFJ5ktUc", height=365, width=650)
"""
Explanation: [Top]
Additional Reading
CS231n Convolutional Neural Networks for Visual Recognition
Deep Learning Book - Chapter 9: Convolutional Networks
[Top]
Additional Videos
CS231n Winter 2016: Lecture 6: Neural Networks Part 3 / Intro to ConvNets
Andrej Karpathy
Published 27 Jan 2016
Reddit /r/cs231n
Standard YouTube Licence
End of explanation
"""
YouTubeVideo("u6aEYuemt0M", height=365, width=650)
"""
Explanation: [Top]
End of explanation
"""
|
ecervera/mindstorms-nb | task/index.ipynb | mit | from functions import connect
connect() # Run this by pressing Shift + Enter
"""
Explanation: Connection test
Make sure the robot controller is running, and try the following code by clicking on it with the mouse and simultaneously pressing the Shift and Enter keys.
End of explanation
"""
from functions import forward, stop # click here, and press Shift + Enter
from time import sleep # to run this block of commands
forward()
sleep(1)
stop()
"""
Explanation: If a confirmation message appears, congratulations, everything works. If not, there is a problem and you will have to call the classroom teacher.
Now try some simple commands for the robot: move the robot forward for one second, then stop.
End of explanation
"""
from functions import disconnect, next_notebook
disconnect()
next_notebook('moviments')
"""
Explanation: Everything correct? Great! During the workshop you will see more commands and learn to combine them into more complicated programs. Now, before moving on to the next page, you must disconnect the robot from this page's program (there can only be one connection at a time).
End of explanation
"""
|
christophebertrand/ada-epfl | HW01-Intro_to_Pandas/intro-to-pandas-last-exo.ipynb | mit | import glob
import pandas as pd
# load all data and parse the 'date' column
def load_data():
sl_files=glob.glob('Data/ebola/sl_data/*.csv')
guinea_files=glob.glob('Data/ebola/guinea_data/*.csv')
liberia_files=glob.glob('Data/ebola/liberia_data/*.csv')
sl = pd.concat((pd.read_csv(file, parse_dates=['date']) for file in sl_files), ignore_index=True)
guinea = pd.concat((pd.read_csv(file , parse_dates=['Date']) for file in guinea_files), ignore_index=True)
liberia = pd.concat((pd.read_csv(file , parse_dates=['Date']) for file in liberia_files), ignore_index=True)
return (sl, guinea, liberia)
(sl, guinea, liberia) = load_data()
# look at the sl data
sl.columns
sl['variable'].unique()
"""
Explanation: Advanced Exo from 'Intro to Pandas'
End of explanation
"""
sl_variables_to_use = ['new_confirmed', 'death_confirmed']
# look at the guinea data
guinea.columns
guinea['Description'].unique()
guinea_variables_to_use = ['New cases of confirmed', 'New deaths registered today (confirmed)']
# look at the liberia data
liberia.columns
liberia['Variable'].unique()
liberia_variables_to_use = ['New case/s (confirmed)', 'Total death/s in confirmed cases']
def select_features(data, var_name, features):
return data[data[var_name].isin(features)]
# take the relevant variables
sl_relevant = select_features(sl, 'variable', sl_variables_to_use)
guinea_relevant = select_features(guinea, 'Description', guinea_variables_to_use)
liberia_relevant = select_features(liberia, 'Variable', liberia_variables_to_use)
"""
Explanation: We decide to take only the 'confirmed' cases and not the suspected or probable ones, since 'suspected' and 'probable' are subjective terms that may not mean the same thing across the 3 countries.
End of explanation
"""
# rename the columns
var_name = 'vars'
sl_relevant.rename(columns={'variable': var_name}, inplace=True)
guinea_relevant.rename(columns={'Description': var_name, 'Date': 'date'}, inplace=True)
liberia_relevant.rename(columns={'Variable': var_name, 'Date': 'date'}, inplace=True)
#rename the variables
new_infected = 'new_infected'
new_death= 'new_death'
sl_relevant.loc[sl_relevant[var_name] == sl_variables_to_use[0], var_name] = new_infected
sl_relevant.loc[sl_relevant[var_name] == sl_variables_to_use[1], var_name] = new_death
guinea_relevant.loc[guinea_relevant[var_name] == guinea_variables_to_use[0], var_name] = new_infected
guinea_relevant.loc[guinea_relevant[var_name] == guinea_variables_to_use[1], var_name] = new_death
liberia_relevant.loc[liberia_relevant[var_name] == liberia_variables_to_use[0], var_name] = new_infected
liberia_relevant.loc[liberia_relevant[var_name] == liberia_variables_to_use[1], var_name] = new_death
# rename the data
sl_clean = sl_relevant.copy()
guinea_clean = guinea_relevant.copy()
liberia_clean = liberia_relevant.copy()
"""
Explanation: A problem is that the column names and the variable values are not the same across the 3 countries, so we harmonize them somewhat.
End of explanation
"""
#remove al rows and columns that consist only of NaNs
def remove_rows_and_cols_with_only_nan(data):
return data.dropna(axis=1, how='all').dropna(axis=0, thresh=3)
sl_clean = remove_rows_and_cols_with_only_nan(sl_clean)
guinea_clean = remove_rows_and_cols_with_only_nan(guinea_clean)
liberia_clean = remove_rows_and_cols_with_only_nan(liberia_clean)
"""
Explanation: Handle missing data
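As a minimal illustration of the dropna semantics used above (drop all-NaN columns, then keep only rows with at least 3 non-NaN values):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [1.0, np.nan, np.nan],
                   'b': [2.0, np.nan, 5.0],
                   'c': [3.0, np.nan, 6.0],
                   'd': [np.nan, np.nan, np.nan]})
# column 'd' is all NaN -> dropped; then thresh=3 keeps only rows
# with at least 3 non-NaN values (here, just the first row)
cleaned = df.dropna(axis=1, how='all').dropna(axis=0, thresh=3)
```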
End of explanation
"""
# replace all NaNs with 0 (inplace)
sl_clean.fillna(value=0, inplace=True)
guinea_clean.fillna(value=0, inplace=True)
liberia_clean.fillna(value=0, inplace=True)
"""
Explanation: Then we can replace all NaN values with 0. We don't know enough about the data to use anything else, and removing is not an option since there would not be much left if we removed every row/column containing at least one NaN.
End of explanation
"""
sl_clean.dtypes
"""
Explanation: not all values are numerical (most are objects)
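A small sketch of the pd.to_numeric conversion applied below (toy data for illustration):

```python
import pandas as pd

raw = pd.DataFrame({'date': ['2014-09-01'],
                    'vars': ['new_infected'],
                    'total': ['12']})            # number stored as a string
cols = [c for c in raw.columns if c not in ('date', 'vars')]
raw[cols] = raw[cols].apply(pd.to_numeric)
```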
End of explanation
"""
def change_to_numeric(data):
col_list = list(data.columns)
col_list.remove('date')
col_list.remove(var_name)
data[col_list] = data[col_list].apply(pd.to_numeric)
change_to_numeric(sl_clean)
change_to_numeric(guinea_clean)
change_to_numeric(liberia_clean)
"""
Explanation: make all types numerical (excluding the date and variable columns)
End of explanation
"""
# create a total column
def add_and_fill_total_col(data, ignore_cols_list):
col_list = list(data.columns)
for c in ignore_cols_list:
col_list.remove(c)
data['total'] = data[col_list].sum(axis=1)
add_and_fill_total_col(sl_clean, ['date', var_name, 'National'])
add_and_fill_total_col(guinea_clean, ['date', var_name, 'Totals'])
add_and_fill_total_col(liberia_clean, ['date', var_name, 'National'])
# remove unused cols:
sl_clean = sl_clean[['date', var_name, 'total']]
guinea_clean = guinea_clean[['date', var_name, 'total']]
liberia_clean = liberia_clean[['date', var_name, 'total']]
#rename data again
sl_final = sl_clean.copy()
liberia_final = liberia_clean.copy()
guinea_final = guinea_clean.copy()
"""
Explanation: Now we can sum over all cities and store the result in a 'total' column.
Note that all countries have a 'National' or 'Totals' column, but it is inconsistent with the summed values across the cities, so we ignore it.
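The row-wise summation can be illustrated on a toy frame (the city names here are made up for the example):

```python
import pandas as pd

cities = pd.DataFrame({'date': ['2014-09-01'],
                       'vars': ['new_infected'],
                       'Freetown': [3.0],
                       'Bo': [2.0],
                       'National': [99.0]})      # inconsistent, so ignored
ignore = ['date', 'vars', 'National']
cols = [c for c in cities.columns if c not in ignore]
cities['total'] = cities[cols].sum(axis=1)
```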
End of explanation
"""
liberia_final.head()
guinea_final.head()
sl_final.head()
"""
Explanation: Show the data
End of explanation
"""
# create infected and death cols
def create_inf_death_cols(data):
    inf = data[data['vars'] == new_infected].copy()
    inf[new_infected] = inf['total']
    death = data[data['vars'] == new_death].copy()
    death[new_death] = death['total']
res = data.join(inf[new_infected], how='outer')
return res.join(death[new_death], how='outer')
sl_final = create_inf_death_cols(sl_final)
liberia_final = create_inf_death_cols(liberia_final)
guinea_final = create_inf_death_cols(guinea_final)
sl_final.head()
# remove vars & total col
sl_final = sl_final.drop(columns=[var_name, 'total'])
liberia_final = liberia_final.drop(columns=[var_name, 'total'])
guinea_final = guinea_final.drop(columns=[var_name, 'total'])
sl_final.head()
"""
Explanation: Move the variables into the columns
End of explanation
"""
# group by date to merge the cols
liberia_final = liberia_final.groupby('date', as_index=False).sum()
sl_final = sl_final.groupby('date', as_index=False).sum()
guinea_final = guinea_final.groupby('date', as_index=False).sum()
"""
Explanation: Then we need to merge the data
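groupby('date').sum() collapses rows that share a date by adding their counts, e.g.:

```python
import pandas as pd

demo = pd.DataFrame({'date': ['2014-09-01', '2014-09-01', '2014-09-02'],
                     'new_infected': [1.0, 2.0, 4.0],
                     'new_death': [0.0, 1.0, 2.0]})
# the two 2014-09-01 rows are merged into one, with their counts summed
merged = demo.groupby('date', as_index=False).sum()
```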
End of explanation
"""
sl_final['country'] = 'sl'
guinea_final['country'] = 'guinea'
liberia_final['country'] = 'liberia'
guinea_final.head()
liberia_final.head()
sl_final.head()
"""
Explanation: add 'country' col to distinguish the dataframes when they are put together
End of explanation
"""
final_data = pd.concat([sl_final, guinea_final, liberia_final], ignore_index=True)
"""
Explanation: Concat the dataframes
End of explanation
"""
final_data.sort_values(by='date').set_index(['date', 'country'])
"""
Explanation: And sort the data:
End of explanation
"""
|
planetlabs/notebooks | jupyter-notebooks/analytics-snippets/building_footprints_as_vector.ipynb | apache-2.0 | import os
from pprint import pprint
import fiona
import matplotlib.pyplot as plt
from planet import api
from planet.api.utils import write_to_file
import rasterio
from rasterio import features as rfeatures
from rasterio.enums import Resampling
from rasterio.plot import show
import shapely
from shapely.geometry import shape as sshape
# if your Planet API Key is not set as an environment variable, you can paste it below
API_KEY = os.environ.get('PL_API_KEY', 'PASTE_YOUR_KEY_HERE')
analytics_client = api.ClientV1(api_key=API_KEY)
"""
Explanation: Building Footprints as Vectors
This notebook demonstrates converting the building footprint raster that
is the output of the Analytics feed into a vector dataset.
It demonstrates the following techniques for converting to vector:
1. GDAL CLI
2. Rasterio (no processing)
3. Rasterio (with simplification)
4. Rasterio (arbitrary function, filtering and simplification as example)
End of explanation
"""
# # uncomment to get feed ids
# feeds = analytics_client.list_analytic_feeds({}).get()
# for d in feeds['data']:
# print('{} ({}):\n\r{}\n\r'.format(d['id'], d['created'], d['description']))
# # uncomment to get subscription ids
# FEED_ID = 'b442c53b-fc72-4bee-bab4-0b7aa318ccd9'
# subscriptions = analytics_client.list_analytic_subscriptions(FEED_ID).get()
# for d in subscriptions['data']:
# print('{} ({}):\n\r{}\n\r'.format(d['id'], d['created'], d['title']))
# building footprints in Sazgin, Turkey
SUBSCRIPTION_ID = '02c4f912-090f-45aa-a18b-ac4a55e4b9ba'
# Get subscription details
# subscription_info = analytics_client.get_subscription_info(SUBSCRIPTION_ID).get()
# pprint(subscription_info)
results = analytics_client.list_collection_features(SUBSCRIPTION_ID).get()
features = results['features']
print('{} features in collection'.format(len(features)))
# sort features by acquisition date and take latest feature
features.sort(key=lambda k: k['properties']['first_acquired'])
feature = features[-1]
print(feature['properties']['first_acquired'])
"""
Explanation: Obtain Analytics Raster
Identify building footprint feed feature for download
We want to download the most recent feature from the feed for building footprint detection in Sazgin, Turkey.
End of explanation
"""
RESOURCE_TYPE = 'target-quad'
def create_save_dir(root_dir='data'):
save_dir = root_dir
if not os.path.isdir(save_dir):
os.makedirs(save_dir)
return save_dir
dest = 'data/footprints'
create_save_dir(dest)
def download_feature(feature, subscription_id, resource_type, dest=dest):
# making a long name shorter
get_resource = analytics_client.get_associated_resource_for_analytic_feature
resource = get_resource(subscription_id, feature['id'], resource_type)
filename = download_resource(resource, dest)
return filename
def download_resource(resource, dest, overwrite=False):
writer = write_to_file(dest, overwrite=overwrite)
writer(resource)
filename = os.path.join(dest, resource.name)
print('file saved to: {}'.format(filename))
return filename
filename = download_feature(feature, SUBSCRIPTION_ID, RESOURCE_TYPE)
"""
Explanation: Download Quad Raster
End of explanation
"""
def _open(filename, factor=1):
with rasterio.open(filename) as dataset:
height = int(dataset.height / factor)
width = int(dataset.width / factor)
data = dataset.read(
out_shape=(dataset.count, height, width)
)
return data
def open_bool(filename, factor=1):
data = _open(filename, factor=factor)
return data[0,:,:]
def get_figsize(factor):
return tuple(2 * [int(25/factor)])
factor = 1
figsize = (15, 15)
roads = open_bool(filename, factor=factor)
fig = plt.figure(figsize=figsize)
# show(roads, title="footprints", cmap="binary")
show(roads[2500:3000, 0:500], title="footprints", cmap="binary")
"""
Explanation: Visualize Building Footprints Image
The output of the analytics building footprint detection is a boolean image where building pixels are given a value of True and non-building pixels are given a value of False.
End of explanation
"""
def get_layer_name(filename):
# get the default layer output layer name based on the
# output filename. I wish there was a way to specify
# the output layer name but attempts have failed thus far.
return filename.split('/')[-1].split('.')[0]
gdal_tmp_output_filename = os.path.join(dest, 'test_gdal_all.shp')
gdal_tmp_output_layer_name = get_layer_name(gdal_tmp_output_filename)
gdal_output_filename = os.path.join(dest, 'test_gdal.shp')
gdal_output_layer_name = get_layer_name(gdal_output_filename)
# convert the binary image into polygons
# creates polygons for building footprints as well as regions between
# and around building footprints
!gdal_polygonize.py $filename $gdal_tmp_output_filename
# get number of features, this includes inside and outside building footprints
!ogrinfo -so $gdal_tmp_output_filename $gdal_tmp_output_layer_name | grep 'Feature Count'
# get number of building footprint features
# building footprints are associated with image value (DN) of 255
!ogrinfo -so $gdal_tmp_output_filename -sql "SELECT * FROM $gdal_tmp_output_layer_name WHERE DN=255" \
| grep 'Feature Count'
# create a new shapefile with only building footprints
!ogr2ogr -sql "SELECT * FROM $gdal_tmp_output_layer_name WHERE DN=255" \
$gdal_output_filename $gdal_tmp_output_filename
# confirm the number of building footprint features
!ogrinfo -so $gdal_output_filename -sql "SELECT * FROM $gdal_output_layer_name WHERE DN=255" \
| grep 'Feature Count'
"""
Explanation: Convert Buildings to Vector Features
GDAL Command-Line Interface (CLI)
GDAL provides a python script that can be run via the CLI. It is quite easy to run and fast.
End of explanation
"""
def buildings_as_vectors(filename):
with rasterio.open(filename) as dataset:
buildings = dataset.read(1)
building_mask = buildings == 255 # mask non-building pixels
# transforms roads features to image crs
building_shapes = rfeatures.shapes(buildings, mask=building_mask, transform=dataset.transform)
building_geometries = (s for s, _ in building_shapes)
crs = dataset.crs
return (building_geometries, crs)
def save_as_shapefile(output_filename, geometries, crs):
driver='ESRI Shapefile'
schema = {'geometry': 'Polygon', 'properties': []}
with fiona.open(output_filename, mode='w', driver=driver, schema=schema, crs=crs) as c:
count = 0
for g in geometries:
count += 1;
c.write({'geometry': g, 'properties': {}})
print('wrote {} geometries to {}'.format(count, output_filename))
building_geometries, crs = buildings_as_vectors(filename)
output_filename = os.path.join(dest, 'test_rasterio.shp')
save_as_shapefile(output_filename, building_geometries, crs)
"""
Explanation: Rasterio
In this section we use rasterio to convert the binary buildings raster into a vector dataset. The vectors are written to disk as a shapefile. The shapefile can be imported into geospatial programs such as QGIS or ArcGIS for visualization and further processing.
This is basic conversion to vector shapes. No smoothing to remove pixel edges, or conversion to the road centerlines is performed here.
End of explanation
"""
def buildings_as_vectors_with_simplification(filename):
with rasterio.open(filename) as dataset:
buildings = dataset.read(1)
        building_mask = buildings == 255 # mask non-building pixels
# we skip transform on vectorization so we can perform filtering in pixel space
building_shapes = rfeatures.shapes(buildings, mask=building_mask)
building_geometries = (s for s, _ in building_shapes)
geo_shapes = (sshape(g) for g in building_geometries)
# simplify so we don't have a million pixel edge points
# value of 1 (in units of pixels) determined by visual comparison to non-simplified
tolerance = 1
geo_shapes = (g.simplify(tolerance, preserve_topology=False)
for g in geo_shapes)
# apply image transform
# rasterio transform: (a, b, c, d, e, f, 0, 0, 1), c and f are offsets
# shapely: a b d e c/xoff f/yoff
d = dataset.transform
shapely_transform = [d[0], d[1], d[3], d[4], d[2], d[5]]
proj_shapes = (shapely.affinity.affine_transform(g, shapely_transform)
for g in geo_shapes)
building_geometries = (shapely.geometry.mapping(s) for s in proj_shapes)
crs = dataset.crs
return (building_geometries, crs)
building_geometries_simp, crs = buildings_as_vectors_with_simplification(filename)
output_filename = os.path.join(dest, 'test_rasterio_simp.shp')
save_as_shapefile(output_filename, building_geometries_simp, crs)
"""
Explanation: Rasterio - Simplifying
In this section, we use shapely to simplify the building footprints so we don't have a million pixel edges.
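The code above also reorders rasterio's affine coefficients for shapely; the mapping can be sketched in plain Python (the helper names are illustrative):

```python
# Sketch of the coefficient reordering used above: rasterio's Affine is
# (a, b, c, d, e, f) with c and f the x/y offsets, while shapely's
# affine_transform expects [a, b, d, e, xoff, yoff].
def rasterio_to_shapely(d):
    return [d[0], d[1], d[3], d[4], d[2], d[5]]

def apply_affine(params, x, y):
    a, b, d, e, xoff, yoff = params
    return (a * x + b * y + xoff, d * x + e * y + yoff)

# e.g. a 3 m pixel grid anchored at (600000, 4500000), north-up (negative e)
params = rasterio_to_shapely((3.0, 0.0, 600000.0, 0.0, -3.0, 4500000.0))
```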
End of explanation
"""
def buildings_as_vectors_proc(filename, proc_fcn):
with rasterio.open(filename) as dataset:
buildings = dataset.read(1)
        building_mask = buildings == 255 # mask non-building pixels
# we skip transform on vectorization so we can perform filtering in pixel space
building_shapes = rfeatures.shapes(buildings, mask=building_mask)
building_geometries = (s for s, _ in building_shapes)
geo_shapes = (sshape(g) for g in building_geometries)
# apply arbitrary processing function
geo_shapes = proc_fcn(geo_shapes)
# apply image transform
# rasterio transform: (a, b, c, d, e, f, 0, 0, 1), c and f are offsets
# shapely: a b d e c/xoff f/yoff
d = dataset.transform
shapely_transform = [d[0], d[1], d[3], d[4], d[2], d[5]]
proj_shapes = (shapely.affinity.affine_transform(g, shapely_transform)
for g in geo_shapes)
building_geometries = (shapely.geometry.mapping(s) for s in proj_shapes)
crs = dataset.crs
return (building_geometries, crs)
def filter_and_simplify_footprints(footprints):
# filter to shapes consisting of 6 or more pixels
min_pixel_size = 6
geo_shapes = (s for s in footprints if s.area >= min_pixel_size)
# simplify so we don't have a million pixel edge points
# value of 1 (in units of pixels) determined by visual comparison to non-simplified
tolerance = 1
geo_shapes = (s.simplify(tolerance, preserve_topology=False)
for s in geo_shapes)
return geo_shapes
building_geometries_simp, crs = buildings_as_vectors_proc(filename, filter_and_simplify_footprints)
output_filename = os.path.join(dest, 'test_rasterio_proc.shp')
save_as_shapefile(output_filename, building_geometries_simp, crs)
"""
Explanation: Rasterio - Arbitrary Calculation
In this section we get a little bit fancy and set up the rasterio vectorization function so that it can take any calculation function, as long as that function has a generator of rasterio.shape as input and a generator of rasterio.shape as output. We will use this to filter and simplify building footprint shapes.
End of explanation
"""
|
streety/biof509 | Wk10-Paradigms.ipynb | mit | primes = []
i = 2
while len(primes) < 25:
for p in primes:
if i % p == 0:
break
else:
primes.append(i)
i += 1
print(primes)
"""
Explanation: Week 10 - Programming Paradigms
Learning Objectives
List popular programming paradigms
Demonstrate object oriented programming
Compare procedural programming and object oriented programming
Apply object oriented programming to solve sample problems
Computer programs and the elements they contain can be built in a variety of different ways. Several different styles, or paradigms, exist with differing popularity and usefulness for different tasks.
Some programming languages are designed to support a particular paradigm, while other languages support several different paradigms.
Three of the most commonly used paradigms are:
Procedural
Object oriented
Functional
Python supports each of these paradigms.
Procedural
You may not have realized it but the procedural programming paradigm is probably the approach you are currently taking with your programs.
Programs and functions are simply a series of steps to be performed.
For example:
End of explanation
"""
def square(val):
print(val)
return val ** 2
squared_numbers = [square(i) for i in range(5)]
print('Squared from list:')
print(squared_numbers)
squared_numbers = (square(i) for i in range(5))
print('Squared from iterable:')
print(squared_numbers)
"""
Explanation: Functional
Functional programming is based on the evaluation of mathematical functions. This is a more restricted form of function than you may have used previously - mutable data and changing state is avoided. This makes understanding how a program will behave more straightforward.
Python does support functional programming although it is not as widely used as procedural and object oriented programming. Some languages better known for supporting functional programming include Lisp, Clojure, Erlang, and Haskell.
Functions - Mathematical vs subroutines
In the general sense, functions can be thought of as simply wrappers around blocks of code. In this sense they can also be thought of as subroutines. Importantly they can be written to fetch data and change the program state independently of the function arguments.
In functional programming the output of a function should depend solely on the function arguments.
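A minimal sketch of the contrast:

```python
# Impure: reads and mutates state outside its arguments, so repeated
# calls with the same argument give different results.
total = 0

def impure_add(n):
    global total
    total += n
    return total

# Pure: output depends only on the inputs.
def pure_add(running_total, n):
    return running_total + n

impure_results = [impure_add(5), impure_add(5)]   # 5, then 10
pure_results = [pure_add(0, 5), pure_add(0, 5)]   # 5 both times
```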
There is an extensive howto in the python documentation.
This presentation from PyCon US 2013 is also worth watching.
This presentation from PyGotham 2014 covers decorators specifically.
List and generator comprehensions
End of explanation
"""
def squared_numbers(num):
for i in range(num):
yield i ** 2
    print('This is only printed after all the yielded numbers have been consumed')
print(squared_numbers(5))
for i in squared_numbers(5):
print(i)
import functools
def plus(val, n):
return val + n
f = functools.partial(plus, 5)
f(5)
"""
Explanation: Generators
End of explanation
"""
def decorator(inner):
def inner_decorator():
print('before')
inner()
print('after')
return inner_decorator
def decorated():
print('decorated')
f = decorator(decorated)
f()
@decorator
def decorated():
print('decorated')
decorated()
import time
@functools.lru_cache()
def slow_compute(n):
time.sleep(1)
print(n)
start = time.time()
slow_compute(1)
print('First time function runs with these arguments takes ', time.time() - start)
start = time.time()
slow_compute(1)
print('Second time function runs with these arguments takes ', time.time() - start)
start = time.time()
slow_compute(2)
print('Changing the arguments causes slow_compute to be run again and takes ', time.time() - start)
"""
Explanation: Decorators
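One common refinement (a sketch using functools.wraps from the standard library) preserves the wrapped function's name and docstring, which a plain decorator like the one above discards:

```python
import functools

def logged(inner):
    @functools.wraps(inner)          # copy inner's name/docstring to wrapper
    def wrapper(*args, **kwargs):
        print('calling', inner.__name__)
        return inner(*args, **kwargs)
    return wrapper

@logged
def greet():
    """Say hello."""
    return 'hello'
```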
End of explanation
"""
class Person(object):
"""A class definition for a person. The following attributes are supported:
Attributes:
name: A string representing the person's name.
age: An integer representing the person's age."""
mammal = True
def __init__(self, name, age):
"""Return a Person object with name and age set to the values supplied"""
self.name = name
self.age = age
person1 = Person('Alice', 25)
person2 = Person('Bob', 30)
print(person1, person2)
"""
Explanation: Object oriented
Object oriented programming is a paradigm that combines data with code into objects. The code can interact with and modify the data in an object. A program will be separated out into a number of different objects that interact with each other.
Object oriented programming is a widely used paradigm and a variety of different languages support it including Python, C++, Java, PHP, Ruby, and many others.
Each of these languages use slightly different syntax but the underlying design choices will be the same in each language.
Objects are things, their names often recognise this and are nouns. These might be physical things like a chair, or concepts like a number.
While procedural programs make use of global information, object oriented design forgoes this global knowledge in favor of local knowledge. Objects contain information and can do things. The information they contain are in attributes. The things they can do are in their methods (similar to functions, but attached to the object).
Finally, to achieve the objective of the program objects must interact.
We will look at the python syntax for creating objects later, first let's explore how objects might work in various scenarios.
Designing Object Oriented Programs
These are the simple building blocks for classes and objects. Just as with the other programming constructs available in python, although the language is relatively simple if used effectively they are very powerful.
Learn Python the Hard Way has a very good description of how to design a program using the object oriented programming paradigm. The linked exercise particularly is worth reading.
The best place to start is describing the problem. What are you trying to do? What are the items involved?
Example 1: A Laboratory Inventory
I would like to keep track of all the items in the laboratory so I can easily find them the next time I need them. Both equipment and consumables would be tracked. We have multiple rooms, and items can be on shelves, in refrigerators, in freezers, etc. Items can also be in boxes containing other items in all these places.
The words in bold would all be good ideas to turn into classes. Now we know some of the classes we will need we can start to think about what each of these classes should do, what the methods will be. Let's consider the consumables class:
For consumables we will need to manage their use so there should be an initial quantity and a quantity remaining that is updated every time we use some. We want to make sure that temperature sensitive consumables are always stored at the correct temperature, and that flammables are stored in a flammables cabinet etc.
The consumable class will need a number of attributes:
Initial quantity
Current quantity
Storage temperature
Flammability
The consumable class will need methods to:
Update the quantity remaining
Check for improper storage?
The consumable class might interact with the shelf, refrigerator, freezer, and/or box classes.
Reading back through our description of consumables there is reference to a flammables cabinet that was not mentioned in our initial description of the problem. This is an iterative design process so we should go back and add a flammables cabinet class.
Exercise: A Chart
We have used matplotlib several times now to generate charts. If we were to create a charting library ourselves what are the objects we would use?
I would like to plot some data on a chart. The data, as a series of points and lines, would be placed on a set of x-y axes that are numbered and labeled to accurately describe the data. There should be a grid so that values can be easily read from the chart.
What are the classes you would use to create this plot?
Pick one class and describe the methods it would have, and the other classes it might interact with.
Exercise 2: A Cookbook
A system to manage different recipes, with their ingredients, equipment needed and instructions. Recipes should be scalable to different numbers of servings with the amount of ingredients adjusted appropriately and viewable in metric and imperial units. Nutritional information should be tracked.
What are the classes you would use to create this system?
Pick one class and describe the methods it would have, and the other classes it might interact with.
Building Skills in Object Oriented Design is a good resource to learn more about this process.
Syntax
Now let's look at the syntax we use to work with objects in python.
There is a tutorial in the python documentation.
Before we use an object in our program we must first define it. Just as we define a function with the def keyword, we use class to define a class. What is a class? Think of it as the template, or blueprint from which our objects will be made.
Remember that in addition to code, objects can also contain data that can change so we may have many different instances of an object. Although each may contain different data they are all formed from the same class definition.
As an example:
End of explanation
"""
class Person(object):
"""A class definition for a person. The following attributes are supported:
Attributes:
name: A string representing the person's name.
age: An integer representing the person's age."""
mammal = True
def __init__(self, name, age):
"""Return a Person object with name and age set to the values supplied"""
self.name = name
self.age = age
def __str__(self):
return '{0} who is {1} years old.'.format(self.name, self.age)
person1 = Person('Alice', 25)
person2 = Person('Bob', 30)
print(person1, person2)
"""
Explanation: There is a lot happening above.
class Person(object): The class keyword begins the definition of our class. Here, we are naming the class Person. Next, (object) means that this class will inherit from the object class. This is not strictly necessary but is generally good practice. Inheritance will be discussed in greater depth next week. Finally, just as for a function definition we finish with a colon.
"""Documentation""" Next, a docstring provides important notes on usage.
mammal = True This is a class attribute. This is useful for defining data that our objects will need that is the same for all instances.
def __init__(self, name, age): This is a method definition. The def keyword is used just as for functions. The first parameter here is self which refers to the object this method will be part of. The double underscores around the method name signify that this is a special method. In this case the __init__ method is called when the object is first instantiated.
self.name = name A common reason to define an __init__ method is to set instance attributes. In this class, name and age are set to the values supplied.
That is all there is to this class definition. Next, we create two instances of this class. The values supplied will be passed to the __init__ method.
Printing these objects don't provide a useful description of what they are. We can improve on this with another special method.
End of explanation
"""
def next_fibonacci(status=[]):
if len(status) < 2:
status.append(1)
return 1
status.append(status[-2] + status[-1])
return status[-1]
print(next_fibonacci(), next_fibonacci(), next_fibonacci(), next_fibonacci(), next_fibonacci(), next_fibonacci())
"""
Explanation: There are many more special methods.
Before we go on a note of caution is needed for class attributes. Do you remember the strange fibonacci sequence function from our first class?
End of explanation
"""
class Person(object):
"""A class definition for a person. The following attributes are supported:
Attributes:
name: A string representing the person's name.
age: An integer representing the person's age."""
friends = []
def __init__(self, name, age):
"""Return a Person object with name and age set to the values supplied"""
self.name = name
self.age = age
def __str__(self):
return '{0} who is {1} years old'.format(self.name, self.age)
person1 = Person('Alice', 25)
person2 = Person('Bob', 30)
person1.friends.append('Charlie')
person2.friends.append('Danielle')
print(person1.friends, person2.friends)
"""
Explanation: The same issue can happen with classes, only this is a much more common source of bugs.
If only using strings and numbers the behaviour will likely be much as you expect. However, if using a list, dictionary, or other similar type you may get a surprise.
End of explanation
"""
class Person(object):
"""A class definition for a person. The following attributes are supported:
Attributes:
name: A string representing the person's name.
age: An integer representing the person's age."""
def __init__(self, name, age):
"""Return a Person object with name and age set to the values supplied"""
self.name = name
self.age = age
self.friends = []
def __str__(self):
return '{0} who is {1} years old'.format(self.name, self.age)
person1 = Person('Alice', 25)
person2 = Person('Bob', 30)
person1.friends.append('Charlie')
person2.friends.append('Danielle')
print(person1.friends, person2.friends)
"""
Explanation: Both of our objects point to the same instance of the list type so adding a new friend to either object shows up in both.
The solution to this is creating our friends attribute only at instantiation of the object. This can be done by creating it in the __init__ method.
End of explanation
"""
print('This works:', person1.friends)
print('This does not work:', friends)
"""
Explanation: Objects have their own namespace: although we have created variables called name, age, and friends, they can only be accessed in the context of the object.
End of explanation
"""
class Person(object):
"""A class definition for a person. The following attributes are supported:
Attributes:
name: A string representing the person's name.
age: An integer representing the person's age."""
def __init__(self, name, age):
"""Return a Person object with name and age set to the values supplied"""
self.name = name
self.age = age
self.friends = []
def __str__(self):
"""Return a string representation of the object"""
return '{0} who is {1} years old'.format(self.name, self.age)
def add_friend(self, friend):
"""Add a friend"""
self.friends.append(friend)
person1 = Person('Alice', 25)
person2 = Person('Bob', 30)
person1.add_friend('Charlie')
person2.add_friend('Danielle')
print(person1.friends, person2.friends)
"""
Explanation: We are not limited to special methods when creating classes. Standard functions, or in this context methods, are an integral part of object-oriented programming. They are defined in exactly the same way as special methods and as functions outside of classes.
End of explanation
"""
mikekestemont/lot2016 | Chapter 3 - Conditions.ipynb | mit

print(2 < 5)
print(2 <= 5)
print(3 > 7)
print(3 >= 7)
print(3 == 3)
print("school" == "school")
print("Python" != "perl")
"""
Explanation: Chapter 3: Conditions
Simple conditions
A lot of programming has to do with executing code if a particular condition holds. Here we give a brief overview of how you can express certain conditions in Python. Can you figure out what all of the conditions do?
End of explanation
"""
greater = (5 > 2)
print(greater)
greater = 5 < 2
print(greater)
print(type(greater))
"""
Explanation: The relevant 'logical operators' that we used here are: <, <=, >,>=,==,!=. In Python-speak, we say that such a logical expression gets 'evaluated' when you run the code. The outcome of such an evaluation is a 'binary value' or a so-called 'boolean' that can take only two possible values: True or False. You can assign such a boolean to a variable:
End of explanation
"""
good_reads = {"Emma":8, "Pride and Prejudice":10, "Sense and Sensibility":7, "Northanger Abbey":3}
score = good_reads["Sense and Sensibility"]
print(score)
"""
Explanation: if, elif and else
At the end of the previous chapter, we have talked about dictionaries, which are a kind of data structure which you will need a lot when writing Python. Recall our collection of good_reads from the previous chapter. Recall that we could use the key of an entry to retrieve the score of a book in our collection:
End of explanation
"""
score = good_reads["Moby Dick"]
print(score)
"""
Explanation: At some point, however, we might forget which books we have already added to our collection. What happens if we try to get the score of a book that is not in our collection?
End of explanation
"""
book = "Moby Dick"
if book in good_reads:
print(book + " is in the collection")
else:
print(book + " is NOT in the collection")
"""
Explanation: We get an error, and more specifically a KeyError, which basically means: "the key you asked me to look up is not in the dictionary...". We will learn a lot more about error handling later, but for now we would like to prevent our program from raising this error in the first place. Let's write a little program that prints "X is in the collection" if a particular book is in the collection and "X is NOT in the collection" if it is not.
End of explanation
"""
print(book in good_reads)
"""
Explanation: A lot of new syntax here. Let's go through it step by step. First we check whether the value we assigned to book is in our collection. The part after if is a logical expression which will be True or False:
End of explanation
"""
print("Emma" in good_reads)
"""
Explanation: Because our book is not in the collection, Python returns False. Let's do the same thing for a book that we know is in the collection:
End of explanation
"""
if "Emma" in good_reads:
print("Found it!")
if book in good_reads:
print("Found it!")
"""
Explanation: Indeed, it is in the collection! Back to our if statement. If the expression after if evaluates to True, our program will go on to the next line and print book + " is in the collection". Let's try that as well:
End of explanation
"""
print("a" in "banana")
"""
Explanation: Notice that the last print statement is not executed. That is because the value we assigned to book is not in our collection and thus the part after if did not evaluate to True. In our little program above we used another statement besides if, namely else. It shouldn't be too hard to figure out what's going on here. The part after else will be evaluated if the if statement evaluated to False. In English: if the book is not in the collection, print that it is not.
Indentation!
Unlike other languages, Python does not make use of curly braces to mark the start and end of pieces of code, like if-statements. The only delimiter is a colon (:) and the indentation of the code (i.e. the use of whitespace). This indentation must be used consistently throughout your code. The convention is to use 4 spaces as indentation. This means that after you have used a colon (such as in our if statement) the next line should be indented by four spaces. (The shortcut for typing these 4 spaces in many editors is inserting a TAB.)
Sometimes we have various conditions that should all evaluate to something different. For that Python provides the elif statement. We use it similarly to if and else. Note however that you can only use elif after an if statement! Above we asked whether a book was in the collection. We can do the same thing for parts of strings or for items in a list. For example we could test whether the letter a is in the word banana:
End of explanation
"""
print("z" in "banana")
"""
Explanation: Likewise the following evaluates to False:
End of explanation
"""
word = "rocket science"
if "a" in word:
print(word + " contains the letter a")
elif "s" in word:
print(word + " contains the letter s")
elif "d" in word:
print(word + " contains the letter s")
elif "c" in word:
print(word + " contains the letter c")
else:
print("What a weird word!")
"""
Explanation: Let's use this in an if-elif-else combination, a very common way to implement 'decision trees' in Python:
End of explanation
"""
# insert your code here
"""
Explanation: First the if statement will be evaluated. Only if that statement turns out to be False the computer will proceed to evaluate the elif statement. If the elif statement in turn would prove to be False, the machine will proceed and execute the lines of code associated with the else statement. You can think of this coding structure as a decision tree! Remember: if somewhere along the tree, your machine comes across a logical expression which is true, it won't bother anymore to evaluate the remaining options!
DIY
Let's practice our new condition skills a little. Write a small program that defines a variable weight. If the weight is > 50 pounds, print "There is a $25 charge for luggage that heavy." If it is not, print: "Thank you for your business." If the weight is exactly 50, print: "Pfiew! The weight is just right!". Change the value of weight a couple of times to check whether your code works. (Tip: make use of the logical operators and if-elif-else tree! Make sure you use the correct indentation.)
End of explanation
"""
word = "banana"
if ("a" in word) or ("b" in word):
print("Both a and b are in " + word)
"""
Explanation: and, or, not
Until now, our conditions consisted of single logical expresssions. However, quite often we would like to test for multiple conditions: for instance, you would like to tell your computer to do something if this and this were but this and that were not true. Python provides a number of ways to do that. The first is with the and statement which allows us to combine two expressions that need both to be true in order for the combination to be true. Let's see how that works:
End of explanation
"""
word = "banana"
if ("a" in word) and ("b" in word):
print("Both a and b are in " + word)
"""
Explanation: Note how we can use round brackets to make the code more readable (but you can just as easily leave them out):
End of explanation
"""
if ("a" in word) and ("z" in word):
print("Both a and z are in " + word)
"""
Explanation: If one of the expressions evaluates to False, nothing will be printed:
End of explanation
"""
word = "banana"
if ("a" in word) or ("z" in word):
print("Both a and b are in " + word)
"""
Explanation: Now you know that the and operator exists in Python, you won't be too surprised to learn that there is also an or operator in Python that you can use. Replace and with or in the if statement below. Can you deduce what happens?
End of explanation
"""
if ("a" in word) and ("z" in word):
print("a or z are in " + word)
else:
print("None of these were found...")
# insert your code here
"""
Explanation: In the code block below, can you add an else statement that prints that none of the letters were found?
End of explanation
"""
if ("z" not in word):
print("z is not in " + word)
"""
Explanation: Finally we can use not to test for conditions that are not true.
End of explanation
"""
numbers = [1, 2, 3, 4]
if numbers:
print("I found some numbers!")
"""
Explanation: Objects such as strings, integers, or lists are True, simply because they exist. Empty strings, lists, dictionaries, etc., on the other hand, are False, because in a way they do not exist -- an empty list is not really a list, right? This principle is often used by programmers to, for example, only execute a piece of code if a certain list contains anything at all:
End of explanation
"""
numbers = []
if numbers:
print("I found some numbers!")
"""
Explanation: Now if our list were empty, Python wouldn't print anything:
End of explanation
"""
numbers = []
# insert your code here
if not numbers:
print("Is an empty list")
"""
Explanation: DIY
Can you write code that prints "This is an empty list" if the provided list does not contain any values?
End of explanation
"""
# insert your code here
"""
Explanation: Can you do the same thing, but this time using the function len()?
End of explanation
"""
# grading system
"""
Explanation: What we have learnt
To finish this section, here is an overview of the new functions, statements and concepts we have learnt. Go through them and make sure you understand what their purpose is and how they are used.
conditions
indentation
if
elif
else
True
False
empty objects are false
not
in
and
or
multiple conditions
==
<
>
!=
KeyError
Final Exercises Chapter 3
Inspired by Think Python by Allen B. Downey (http://thinkpython.com), Introduction to Programming Using Python by Y. Liang (Pearson, 2013). Some exercises below have been taken from: http://www.ling.gu.se/~lager/python_exercises.html.
Can you implement the following grading scheme in Python?
<img src="https://raw.githubusercontent.com/mikekestemont/python-course/master/images/grade.png">
End of explanation
"""
score = 98.0
if score >= 60.0:
grade = 'D'
elif score >= 70.0:
grade = 'C'
elif score >= 80.0:
grade = 'B'
elif score >= 90.0:
grade = 'A'
else:
grade = 'F'
print(grade)
"""
Explanation: Can you spot the reasoning error in the following code?
End of explanation
"""
# code
"""
Explanation: Write Python code that defines two numbers and prints the largest one of them. Use an if-then-else tree.
End of explanation
"""
from IPython.core.display import HTML
def css_styling():
styles = open("styles/custom.css", "r").read()
return HTML(styles)
css_styling()
"""
Explanation: Congrats: you've reached the end of Chapter 3! Ignore the code block below; it's only here to make the page prettier.
End of explanation
"""
infilect/ml-course1 | keras-notebooks/ANN/4.4-overfitting-and-underfitting.ipynb | mit

from keras.datasets import imdb
import numpy as np
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
def vectorize_sequences(sequences, dimension=10000):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
results[i, sequence] = 1. # set specific indices of results[i] to 1s
return results
# Our vectorized training data
x_train = vectorize_sequences(train_data)
# Our vectorized test data
x_test = vectorize_sequences(test_data)
# Our vectorized labels
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
"""
Explanation: Overfitting and underfitting
This notebook contains the code samples found in Chapter 3, Section 6 of Deep Learning with Python. Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.
In all the examples we saw in the previous chapter -- movie review sentiment prediction, topic classification, and house price regression --
we could notice that the performance of our model on the held-out validation data would always peak after a few epochs and would then start
degrading, i.e. our model would quickly start to overfit to the training data. Overfitting happens in every single machine learning
problem. Learning how to deal with overfitting is essential to mastering machine learning.
The fundamental issue in machine learning is the tension between optimization and generalization. "Optimization" refers to the process of
adjusting a model to get the best performance possible on the training data (the "learning" in "machine learning"), while "generalization"
refers to how well the trained model would perform on data it has never seen before. The goal of the game is to get good generalization, of
course, but you do not control generalization; you can only adjust the model based on its training data.
At the beginning of training, optimization and generalization are correlated: the lower your loss on training data, the lower your loss on
test data. While this is happening, your model is said to be under-fit: there is still progress to be made; the network hasn't yet
modeled all relevant patterns in the training data. But after a certain number of iterations on the training data, generalization stops
improving, validation metrics stall then start degrading: the model is then starting to over-fit, i.e. it is starting to learn patterns
that are specific to the training data but that are misleading or irrelevant when it comes to new data.
To prevent a model from learning misleading or irrelevant patterns found in the training data, the best solution is of course to get
more training data. A model trained on more data will naturally generalize better. When that is no longer possible, the next best solution
is to modulate the quantity of information that your model is allowed to store, or to add constraints on what information it is allowed to
store. If a network can only afford to memorize a small number of patterns, the optimization process will force it to focus on the most
prominent patterns, which have a better chance of generalizing well.
The process of fighting overfitting in this way is called regularization. Let's review some of the most common regularization
techniques, and let's apply them in practice to improve our movie classification model from the previous chapter.
Note: in this notebook we will be using the IMDB test set as our validation set. It doesn't matter in this context.
Let's prepare the data using the code from Chapter 3, Section 5:
End of explanation
"""
from keras import models
from keras import layers
original_model = models.Sequential()
original_model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
original_model.add(layers.Dense(16, activation='relu'))
original_model.add(layers.Dense(1, activation='sigmoid'))
original_model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['acc'])
"""
Explanation: Fighting overfitting
Reducing the network's size
The simplest way to prevent overfitting is to reduce the size of the model, i.e. the number of learnable parameters in the model (which is
determined by the number of layers and the number of units per layer). In deep learning, the number of learnable parameters in a model is
often referred to as the model's "capacity". Intuitively, a model with more parameters will have more "memorization capacity" and therefore
will be able to easily learn a perfect dictionary-like mapping between training samples and their targets, a mapping without any
generalization power. For instance, a model with 500,000 binary parameters could easily be made to learn the class of every digit in the
MNIST training set: we would only need 10 binary parameters for each of the 50,000 digits. Such a model would be useless for classifying
new digit samples. Always keep this in mind: deep learning models tend to be good at fitting to the training data, but the real challenge
is generalization, not fitting.
On the other hand, if the network has limited memorization resources, it will not be able to learn this mapping as easily, and thus, in
order to minimize its loss, it will have to resort to learning compressed representations that have predictive power regarding the targets
-- precisely the type of representations that we are interested in. At the same time, keep in mind that you should be using models that have
enough parameters that they won't be underfitting: your model shouldn't be starved for memorization resources. There is a compromise to be
found between "too much capacity" and "not enough capacity".
Unfortunately, there is no magical formula to determine what the right number of layers is, or what the right size for each layer is. You
will have to evaluate an array of different architectures (on your validation set, not on your test set, of course) in order to find the
right model size for your data. The general workflow to find an appropriate model size is to start with relatively few layers and
parameters, and start increasing the size of the layers or adding new layers until you see diminishing returns with regard to the
validation loss.
Let's try this on our movie review classification network. Our original network was as such:
End of explanation
"""
smaller_model = models.Sequential()
smaller_model.add(layers.Dense(4, activation='relu', input_shape=(10000,)))
smaller_model.add(layers.Dense(4, activation='relu'))
smaller_model.add(layers.Dense(1, activation='sigmoid'))
smaller_model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['acc'])
"""
Explanation: Now let's try to replace it with this smaller network:
End of explanation
"""
original_hist = original_model.fit(x_train, y_train,
epochs=20,
batch_size=512,
validation_data=(x_test, y_test))
smaller_model_hist = smaller_model.fit(x_train, y_train,
epochs=20,
batch_size=512,
validation_data=(x_test, y_test))
epochs = range(1, 21)
original_val_loss = original_hist.history['val_loss']
smaller_model_val_loss = smaller_model_hist.history['val_loss']
import matplotlib.pyplot as plt
# b+ is for "blue cross"
plt.plot(epochs, original_val_loss, 'b+', label='Original model')
# "bo" is for "blue dot"
plt.plot(epochs, smaller_model_val_loss, 'bo', label='Smaller model')
plt.xlabel('Epochs')
plt.ylabel('Validation loss')
plt.legend()
plt.show()
"""
Explanation: Here's a comparison of the validation losses of the original network and the smaller network. The dots are the validation loss values of
the smaller network, and the crosses are the initial network (remember: a lower validation loss signals a better model).
End of explanation
"""
bigger_model = models.Sequential()
bigger_model.add(layers.Dense(512, activation='relu', input_shape=(10000,)))
bigger_model.add(layers.Dense(512, activation='relu'))
bigger_model.add(layers.Dense(1, activation='sigmoid'))
bigger_model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['acc'])
bigger_model_hist = bigger_model.fit(x_train, y_train,
epochs=20,
batch_size=512,
validation_data=(x_test, y_test))
"""
Explanation: As you can see, the smaller network starts overfitting later than the reference one (after 6 epochs rather than 4) and its performance
degrades much more slowly once it starts overfitting.
Now, for kicks, let's add to this benchmark a network that has much more capacity, far more than the problem would warrant:
End of explanation
"""
bigger_model_val_loss = bigger_model_hist.history['val_loss']
plt.plot(epochs, original_val_loss, 'b+', label='Original model')
plt.plot(epochs, bigger_model_val_loss, 'bo', label='Bigger model')
plt.xlabel('Epochs')
plt.ylabel('Validation loss')
plt.legend()
plt.show()
"""
Explanation: Here's how the bigger network fares compared to the reference one. The dots are the validation loss values of the bigger network, and the
crosses are the initial network.
End of explanation
"""
original_train_loss = original_hist.history['loss']
bigger_model_train_loss = bigger_model_hist.history['loss']
plt.plot(epochs, original_train_loss, 'b+', label='Original model')
plt.plot(epochs, bigger_model_train_loss, 'bo', label='Bigger model')
plt.xlabel('Epochs')
plt.ylabel('Training loss')
plt.legend()
plt.show()
"""
Explanation: The bigger network starts overfitting almost right away, after just one epoch, and overfits much more severely. Its validation loss is also
more noisy.
Meanwhile, here are the training losses for our two networks:
End of explanation
"""
from keras import regularizers
l2_model = models.Sequential()
l2_model.add(layers.Dense(16, kernel_regularizer=regularizers.l2(0.001),
activation='relu', input_shape=(10000,)))
l2_model.add(layers.Dense(16, kernel_regularizer=regularizers.l2(0.001),
activation='relu'))
l2_model.add(layers.Dense(1, activation='sigmoid'))
l2_model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['acc'])
"""
Explanation: As you can see, the bigger network gets its training loss near zero very quickly. The more capacity the network has, the quicker it will be
able to model the training data (resulting in a low training loss), but the more susceptible it is to overfitting (resulting in a large
difference between the training and validation loss).
Adding weight regularization
You may be familiar with Occam's Razor principle: given two explanations for something, the explanation most likely to be correct is the
"simplest" one, the one that makes the least amount of assumptions. This also applies to the models learned by neural networks: given some
training data and a network architecture, there are multiple sets of weights values (multiple models) that could explain the data, and
simpler models are less likely to overfit than complex ones.
A "simple model" in this context is a model where the distribution of parameter values has less entropy (or a model with fewer
parameters altogether, as we saw in the section above). Thus a common way to mitigate overfitting is to put constraints on the complexity
of a network by forcing its weights to only take small values, which makes the distribution of weight values more "regular". This is called
"weight regularization", and it is done by adding to the loss function of the network a cost associated with having large weights. This
cost comes in two flavors:
L1 regularization, where the cost added is proportional to the absolute value of the weights coefficients (i.e. to what is called the
"L1 norm" of the weights).
L2 regularization, where the cost added is proportional to the square of the value of the weights coefficients (i.e. to what is called
the "L2 norm" of the weights). L2 regularization is also called weight decay in the context of neural networks. Don't let the different
name confuse you: weight decay is mathematically the exact same as L2 regularization.
In Keras, weight regularization is added by passing weight regularizer instances to layers as keyword arguments. Let's add L2 weight
regularization to our movie review classification network:
End of explanation
"""
l2_model_hist = l2_model.fit(x_train, y_train,
epochs=20,
batch_size=512,
validation_data=(x_test, y_test))
l2_model_val_loss = l2_model_hist.history['val_loss']
plt.plot(epochs, original_val_loss, 'b+', label='Original model')
plt.plot(epochs, l2_model_val_loss, 'bo', label='L2-regularized model')
plt.xlabel('Epochs')
plt.ylabel('Validation loss')
plt.legend()
plt.show()
"""
Explanation: l2(0.001) means that every coefficient in the weight matrix of the layer will add 0.001 * weight_coefficient_value to the total loss of
the network. Note that because this penalty is only added at training time, the loss for this network will be much higher at training
than at test time.
Here's the impact of our L2 regularization penalty:
End of explanation
"""
from keras import regularizers
# L1 regularization
regularizers.l1(0.001)
# L1 and L2 regularization at the same time
regularizers.l1_l2(l1=0.001, l2=0.001)
"""
Explanation: As you can see, the model with L2 regularization (dots) has become much more resistant to overfitting than the reference model (crosses),
even though both models have the same number of parameters.
As alternatives to L2 regularization, you could use one of the following Keras weight regularizers:
End of explanation
"""
# At training time: we drop out 50% of the units in the output
layer_output *= np.random.randint(0, high=2, size=layer_output.shape)
"""
Explanation: Adding dropout
Dropout is one of the most effective and most commonly used regularization techniques for neural networks, developed by Hinton and his
students at the University of Toronto. Dropout, applied to a layer, consists of randomly "dropping out" (i.e. setting to zero) a number of
output features of the layer during training. Let's say a given layer would normally have returned a vector [0.2, 0.5, 1.3, 0.8, 1.1] for a
given input sample during training; after applying dropout, this vector will have a few zero entries distributed at random, e.g. [0, 0.5,
1.3, 0, 1.1]. The "dropout rate" is the fraction of the features that are being zeroed-out; it is usually set between 0.2 and 0.5. At test
time, no units are dropped out, and instead the layer's output values are scaled down by a factor equal to the dropout rate, so as to
balance for the fact that more units are active than at training time.
Consider a Numpy matrix containing the output of a layer, layer_output, of shape (batch_size, features). At training time, we would be
zero-ing out at random a fraction of the values in the matrix:
End of explanation
"""
# At test time:
layer_output *= 0.5
"""
Explanation: At test time, we would be scaling the output down by the dropout rate. Here we scale by 0.5 (because we were previously dropping half the
units):
End of explanation
"""
# At training time:
layer_output *= np.random.randint(0, high=2, size=layer_output.shape)
# Note that we are scaling *up* rather than scaling *down* in this case
layer_output /= 0.5
"""
Explanation: Note that this process can be implemented by doing both operations at training time and leaving the output unchanged at test time, which is
often the way it is implemented in practice:
End of explanation
"""
model.add(layers.Dropout(0.5))
"""
Explanation: This technique may seem strange and arbitrary. Why would this help reduce overfitting? Geoff Hinton has said that he was inspired, among
other things, by a fraud prevention mechanism used by banks -- in his own words: "I went to my bank. The tellers kept changing and I asked
one of them why. He said he didn’t know but they got moved around a lot. I figured it must be because it would require cooperation
between employees to successfully defraud the bank. This made me realize that randomly removing a different subset of neurons on each
example would prevent conspiracies and thus reduce overfitting".
The core idea is that introducing noise in the output values of a layer can break up happenstance patterns that are not significant (what
Hinton refers to as "conspiracies"), which the network would start memorizing if no noise was present.
In Keras you can introduce dropout in a network via the Dropout layer, which gets applied to the output of layer right before it, e.g.:
End of explanation
"""
dpt_model = models.Sequential()
dpt_model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
dpt_model.add(layers.Dropout(0.5))
dpt_model.add(layers.Dense(16, activation='relu'))
dpt_model.add(layers.Dropout(0.5))
dpt_model.add(layers.Dense(1, activation='sigmoid'))
dpt_model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['acc'])
dpt_model_hist = dpt_model.fit(x_train, y_train,
epochs=20,
batch_size=512,
validation_data=(x_test, y_test))
"""
Explanation: Let's add two Dropout layers in our IMDB network to see how well they do at reducing overfitting:
End of explanation
"""
dpt_model_val_loss = dpt_model_hist.history['val_loss']
plt.plot(epochs, original_val_loss, 'b+', label='Original model')
plt.plot(epochs, dpt_model_val_loss, 'bo', label='Dropout-regularized model')
plt.xlabel('Epochs')
plt.ylabel('Validation loss')
plt.legend()
plt.show()
"""
Explanation: Let's plot the results:
End of explanation
"""
slundberg/shap | notebooks/api_examples/explainers/GPUTree.ipynb | mit

import shap
import xgboost
# get a dataset on income prediction
X,y = shap.datasets.adult()
# train an XGBoost model (but any other model type would also work)
model = xgboost.XGBClassifier()
model.fit(X, y)
"""
Explanation: GPUTree explainer
This notebook demonstrates how to use the GPUTree explainer on some simple datasets. Like the Tree explainer, the GPUTree explainer is specifically designed for tree-based machine learning models, but it is designed to accelerate the computations using NVIDIA GPUs.
Note that in order to use the GPUTree explainer you need to have an NVIDIA GPU, and SHAP needs to have been compiled to support the current GPU libraries on your system. On a recent Ubuntu server the steps to make this happen would be:
Check to make sure you have the NVIDIA CUDA Toolkit installed by running the nvcc command (the CUDA compiler) from the terminal. If this command is not found then you need to install it with something like sudo apt install nvidia-cuda-toolkit.
Once the NVIDIA CUDA Toolkit is installed you need to set the CUDA_PATH environment variable. If which nvcc produces /usr/bin/nvcc then you can run export CUDA_PATH=/usr.
Build SHAP with CUDA support by cloning the shap repo using git clone https://github.com/slundberg/shap.git then running python setup.py install --user.
If you run into issues with the above instructions, make sure you don't still have an old version of SHAP around by ensuring import shap fails before you start the new install.
Below we demonstrate how to use the GPUTree explainer on a simple adult income classification dataset and model.
End of explanation
"""
# build a GPUTree explainer and explain the model predictions on the given dataset
explainer = shap.explainers.GPUTree(model, X)
shap_values = explainer(X)
# get just the explanations for the positive class
shap_values = shap_values
"""
Explanation: Tabular data with independent (Shapley value) masking
End of explanation
"""
shap.plots.bar(shap_values)
"""
Explanation: Plot a global summary
End of explanation
"""
shap.plots.waterfall(shap_values[0])
"""
Explanation: Plot a single instance
End of explanation
"""
explainer2 = shap.explainers.GPUTree(model, feature_perturbation="tree_path_dependent")
interaction_shap_values = explainer2(X[:100], interactions=True)
shap.plots.scatter(interaction_shap_values[:,:,0])
"""
Explanation: Interaction values
GPUTree supports Shapley-Taylor interaction values (an improvement over what the Tree explainer originally provided).
End of explanation
"""
|
phoebe-project/phoebe2-docs | development/tutorials/multiprocessing.ipynb | gpl-3.0 | #!pip install -I "phoebe>=2.4,<2.5"
import phoebe
"""
Explanation: Advanced: Running PHOEBE with Multiprocessing
Setup
Let's first make sure we have the latest version of PHOEBE 2.4 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
"""
print(phoebe.multiprocessing_get_nprocs())
"""
Explanation: Accessing/Changing Multiprocessing Settings
To check the number of processors that will be used whenever multiprocessing is invoked, call phoebe.multiprocessing_get_nprocs. By default, this will be the number of detected CPUs on the machine.
End of explanation
"""
phoebe.multiprocessing_off()
print(phoebe.multiprocessing_get_nprocs())
"""
Explanation: To disable multiprocessing, we can call phoebe.multiprocessing_off.
End of explanation
"""
phoebe.multiprocessing_on()
print(phoebe.multiprocessing_get_nprocs())
"""
Explanation: To re-enable multiprocessing with all available CPUs on the machine, we can call phoebe.multiprocessing_on.
End of explanation
"""
phoebe.multiprocessing_set_nprocs(2)
print(phoebe.multiprocessing_get_nprocs())
"""
Explanation: Or to manually set the number of processors to use, we can call phoebe.multiprocessing_set_nprocs.
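If you'd rather derive the count than hard-code it — say, all available CPUs but one — a small sketch (the phoebe call is commented out here and simply mirrors the function shown above):

```python
import os

# Leave one core free for the OS, but never go below 1.
nprocs = max(1, (os.cpu_count() or 1) - 1)
# phoebe.multiprocessing_set_nprocs(nprocs)
print(nprocs)
```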
End of explanation
"""
|
chagaz/SamSpecCoEN | Significant subnetworks.ipynb | mit | print aces_gene_names[:10]
alist = list(aces_gene_names[:10])
gn1 = 'Entrez_5982'
gn2 = 'Entrez_76'
print alist.index(gn1)
print alist.index(gn2)
aces_gene_names = list(aces_gene_names)
edges_set = set([]) # (gene_idx_1, gene_idx_2)
# gene_idx_1 < gene_idx_2
# idx in aces_gene_names, starting at 0
with open('ACES/experiments/data/I2D_edges_0411.sif') as f:
for line in f:
ls = line.split()
gene_name_1 = 'Entrez_%s' % ls[0]
gene_name_2 = 'Entrez_%s' % ls[2]
# Exclude self edges
if gene_name_1 == gene_name_2:
continue
try:
gene_idx_1 = aces_gene_names.index(gene_name_1)
gene_idx_2 = aces_gene_names.index(gene_name_2)
except ValueError:
continue
if gene_idx_1 < gene_idx_2:
e = (gene_idx_1, gene_idx_2)
else:
e = (gene_idx_2, gene_idx_1)
edges_set.add(e)
np.savetxt('I2D_edges.txt', np.array([list(x) for x in list(edges_set)]), fmt='%d')
len(edges_set)
edges_list = np.array([list(x) for x in list(edges_set)])
genes_in_network = set(np.array([list(x) for x in list(edges_set)]).flatten())
print len(genes_in_network)
len(set(np.where(np.sum(X_zeroed, axis=0)==0)[0]).intersection(genes_in_network))
"""
Explanation: I2D network: 12 643 nodes, of which 10 018 are present in the data, and 142 309 edges.
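Note that the edge-matching loop above calls list.index twice per line, which is O(n) per lookup; pre-building a dict from gene name to index makes each lookup O(1). A self-contained sketch with made-up gene names (not the ACES data):

```python
# Map each gene name to its position once, then match edges in O(1).
aces_gene_names = ['Entrez_5982', 'Entrez_76', 'Entrez_123']  # toy example
name_to_idx = {name: i for i, name in enumerate(aces_gene_names)}

edges_set = set()
for g1, g2 in [('Entrez_5982', 'Entrez_76'), ('Entrez_76', 'Entrez_999')]:
    if g1 == g2:
        continue  # exclude self edges
    try:
        i1, i2 = name_to_idx[g1], name_to_idx[g2]
    except KeyError:
        continue  # gene not present in the expression data
    edges_set.add((min(i1, i2), max(i1, i2)))

print(edges_set)
```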
End of explanation
"""
X_binary = np.where(X<-1, 0, 1)
float(np.count_nonzero(X_binary))/(X.shape[0]*X.shape[1])
np.count_nonzero(X_binary)
"""
Explanation: 9900 genes in the network. 4181 of those are always expressed.
End of explanation
"""
|
Kaggle/learntools | notebooks/geospatial/raw/tut4.ipynb | apache-2.0 | #$HIDE$
import pandas as pd
import geopandas as gpd
import numpy as np
import folium
from folium import Marker
import warnings
warnings.filterwarnings('ignore')
"""
Explanation: Introduction
In this tutorial, you'll learn about two common manipulations for geospatial data: geocoding and table joins.
End of explanation
"""
from geopy.geocoders import Nominatim
"""
Explanation: Geocoding
Geocoding is the process of converting the name of a place or an address to a location on a map. If you have ever looked up a geographic location based on a landmark description with Google Maps, Bing Maps, or Baidu Maps, for instance, then you have used a geocoder!
We'll use geopy to do all of our geocoding.
End of explanation
"""
geolocator = Nominatim(user_agent="kaggle_learn")
location = geolocator.geocode("Pyramid of Khufu")
print(location.point)
print(location.address)
"""
Explanation: In the code cell above, Nominatim refers to the geocoding software that will be used to generate locations.
We begin by instantiating the geocoder. Then, we need only pass the name or address as a Python string. (In this case, we supply "Pyramid of Khufu", also known as the Great Pyramid of Giza.)
If the geocoding is successful, it returns a geopy.location.Location object with two important attributes:
- the "point" attribute contains the (latitude, longitude) location, and
- the "address" attribute contains the full address.
End of explanation
"""
point = location.point
print("Latitude:", point.latitude)
print("Longitude:", point.longitude)
"""
Explanation: The value for the "point" attribute is a geopy.point.Point object, and we can get the latitude and longitude from the latitude and longitude attributes, respectively.
End of explanation
"""
universities = pd.read_csv("../input/geospatial-learn-course-data/top_universities.csv")
universities.head()
"""
Explanation: It's often the case that we'll need to geocode many different addresses. For instance, say we want to obtain the locations of 100 top universities in Europe.
End of explanation
"""
def my_geocoder(row):
try:
point = geolocator.geocode(row).point
return pd.Series({'Latitude': point.latitude, 'Longitude': point.longitude})
except:
return None
universities[['Latitude', 'Longitude']] = universities.apply(lambda x: my_geocoder(x['Name']), axis=1)
print("{}% of addresses were geocoded!".format(
(1 - sum(np.isnan(universities["Latitude"])) / len(universities)) * 100))
# Drop universities that were not successfully geocoded
universities = universities.loc[~np.isnan(universities["Latitude"])]
universities = gpd.GeoDataFrame(
universities, geometry=gpd.points_from_xy(universities.Longitude, universities.Latitude))
universities.crs = {'init': 'epsg:4326'}
universities.head()
"""
Explanation: Then we can use a lambda function to apply the geocoder to every row in the DataFrame. (We use a try/except statement to account for the case that the geocoding is unsuccessful.)
End of explanation
"""
# Create a map
m = folium.Map(location=[54, 15], tiles='openstreetmap', zoom_start=2)
# Add points to the map
for idx, row in universities.iterrows():
Marker([row['Latitude'], row['Longitude']], popup=row['Name']).add_to(m)
# Display the map
m
"""
Explanation: Next, we visualize all of the locations that were returned by the geocoder. Notice that a few of the locations are certainly inaccurate, as they're not in Europe!
End of explanation
"""
#$HIDE_INPUT$
world = gpd.read_file(gpd.datasets.get_path('naturalearth_lowres'))
europe = world.loc[world.continent == 'Europe'].reset_index(drop=True)
europe_stats = europe[["name", "pop_est", "gdp_md_est"]]
europe_boundaries = europe[["name", "geometry"]]
europe_boundaries.head()
"""
Explanation: Table joins
Now, we'll switch topics and think about how to combine data from different sources.
Attribute join
You already know how to use pd.DataFrame.join() to combine information from multiple DataFrames with a shared index. We refer to this way of joining data (by simply matching values in the index) as an attribute join.
When performing an attribute join with a GeoDataFrame, it's best to use gpd.GeoDataFrame.merge(). To illustrate this, we'll work with a GeoDataFrame europe_boundaries containing the boundaries for every country in Europe. The first five rows of this GeoDataFrame are printed below.
End of explanation
"""
europe_stats.head()
"""
Explanation: We'll join it with a DataFrame europe_stats containing the estimated population and gross domestic product (GDP) for each country.
End of explanation
"""
# Use an attribute join to merge data about countries in Europe
europe = europe_boundaries.merge(europe_stats, on="name")
europe.head()
"""
Explanation: We do the attribute join in the code cell below. The on argument is set to the column name that is used to match rows in europe_boundaries to rows in europe_stats.
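The same on= matching can be seen in a toy pandas example (made-up numbers, not the tutorial's data):

```python
import pandas as pd

left = pd.DataFrame({'name': ['France', 'Spain'], 'area_km2': [551695, 505990]})
right = pd.DataFrame({'name': ['Spain', 'France'], 'pop_millions': [47, 67]})

# Rows are matched wherever the 'name' values agree (an inner join by default).
merged = left.merge(right, on='name')
print(merged)
```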
End of explanation
"""
# Use spatial join to match universities to countries in Europe
european_universities = gpd.sjoin(universities, europe)
# Investigate the result
print("We located {} universities.".format(len(universities)))
print("Only {} of the universities were located in Europe (in {} different countries).".format(
len(european_universities), len(european_universities.name.unique())))
european_universities.head()
"""
Explanation: Spatial join
Another type of join is a spatial join. With a spatial join, we combine GeoDataFrames based on the spatial relationship between the objects in the "geometry" columns. For instance, we already have a GeoDataFrame universities containing geocoded addresses of European universities.
Then we can use a spatial join to match each university to its corresponding country. We do this with gpd.sjoin().
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/ncar/cmip6/models/sandbox-1/ocnbgchem.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ncar', 'sandbox-1', 'ocnbgchem')
"""
Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era: CMIP6
Institute: NCAR
Source ID: SANDBOX-1
Topic: Ocnbgchem
Sub-Topics: Tracers.
Properties: 65 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:22
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean biogeochemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean biogeochemistry model code (PISCES 2.0,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean biogeochemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
"""
Explanation: 1.4. Elemental Stoichiometry
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe elemental stoichiometry (fixed, variable, mix of the two)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.5. Elemental Stoichiometry Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe which elements have fixed/variable stoichiometry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all prognostic tracer variables in the ocean biogeochemistry component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.7. Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all diagnostic tracer variables in the ocean biogeochemistry component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.8. Damping
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any tracer damping used (such as artificial correction or relaxation to climatology,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for passive tracers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for passive tracers (if different from ocean)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for biology sources and sinks
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for biology sources and sinks (if different from ocean)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transport scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 4.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Transport scheme used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Use Different Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe transport scheme if different from that of ocean model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how atmospheric deposition is modeled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
"""
Explanation: 5.2. River Input
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how river input is modeled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Sediments From Boundary Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from boundary condition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.4. Sediments From Explicit Model
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from explicit sediment model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Gas Exchange
Properties of gas exchange in ocean biogeochemistry
6.1. CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CO2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.2. CO2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe CO2 gas exchange
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.3. O2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is O2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.4. O2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe O2 gas exchange
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.5. DMS Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is DMS gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.6. DMS Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify DMS gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.7. N2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.8. N2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.9. N2O Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2O gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.10. N2O Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2O gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.11. CFC11 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC11 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.12. CFC11 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC11 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.13. CFC12 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC12 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.14. CFC12 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC12 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.15. SF6 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is SF6 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.16. SF6 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify SF6 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.17. 13CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 13CO2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.18. 13CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 13CO2 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.19. 14CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 14CO2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.20. 14CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 14CO2 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.21. Other Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any other gas exchange
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how carbon chemistry is modeled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.2. PH Scale
Is Required: FALSE Type: ENUM Cardinality: 0.1
If NOT OMIP protocol, describe pH scale.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Constants If Not OMIP
Is Required: FALSE Type: STRING Cardinality: 0.1
If NOT OMIP protocol, list carbon chemistry constants.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tracers in ocean biogeochemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.2. Sulfur Cycle Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sulfur cycle modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Nutrients Present
Is Required: TRUE Type: ENUM Cardinality: 1.N
List nutrient species present in ocean biogeochemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.4. Nitrous Species If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous species.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.5. Nitrous Processes If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous processes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required: TRUE Type: STRING Cardinality: 1.1
Definition of upper trophic level (e.g. based on size) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Upper Trophic Levels Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Define how upper trophic level are treated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
"""
Explanation: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of phytoplankton
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.2. Pft
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton functional types (PFT) (if applicable)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.3. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton size classes (if applicable)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of zooplankton
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Zooplankton size classes (if applicable)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Tracers --> Dissolved Organic Matter
Dissolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there bacteria representation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Lability
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe treatment of lability in dissolved organic matter
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is particulate carbon represented in ocean biogeochemistry?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite)"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Types If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, type(s) of particulate matter taken into account
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
"""
Explanation: 13.3. Size If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe if a particle size spectrum is used to represent the distribution of particles in the water volume
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13.4. Size If Discrete
Is Required: FALSE Type: STRING Cardinality: 0.1
If prognostic and discrete size, describe which size classes are used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Sinking Speed If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, method for calculation of sinking speed of particles
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14"
# TODO - please enter value(s)
"""
Explanation: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which carbon isotopes are modelled (C13, C14)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 14.2. Abiotic Carbon
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is abiotic carbon modelled?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic"
# TODO - please enter value(s)
"""
Explanation: 14.3. Alkalinity
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is alkalinity modelled?
End of explanation
"""
|
ppyht2/tf-exercise | 014. RNN for Sin2/main.ipynb | gpl-3.0 | # 0. Import all the libararies and packages we will need
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.contrib import rnn
%matplotlib inline
"""
Explanation: Understanding Recurrent Neural Network Through Sine Waves
End of explanation
"""
def generate_sin(batch_size=1000, T=50, t_offset = 0, y_offset=0, past_range=10, future_range=10):
'''
For now: assume amplitude = 1
y = sin(w*t + t_offset) + y_offset
'''
f = (2 * np.pi) / T
past = np.empty((batch_size, past_range))
future = np.empty((batch_size, future_range))
for t in range(batch_size):
past[t,:] = np.sin(t_offset + f*np.array(range(t-past_range, t))) + y_offset
future[t,:] = np.sin(t_offset + f*np.array(range(t,t+future_range))) + y_offset
return np.array(range(batch_size)), past.reshape((batch_size, past_range,1)), future
t, p, f = generate_sin(100)
plt.plot(t, f[:,0])
"""
Explanation: 1. Let's define a function to generate sine waves in the format we want
Past data: [batch_size x past_range x 1]
Future data: [batch_size x future_range]
End of explanation
"""
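Before defining the network, it's worth seeing the windowing contract generate_sin implements in isolation. The sketch below builds one past/future pair for a plain sine series with NumPy; it is a standalone illustration (the names T, past_range, future_range mirror the defaults above, but this snippet does not call generate_sin):

```python
import numpy as np

T, past_range, future_range = 50, 10, 10
f = 2 * np.pi / T
t0 = 100  # an arbitrary time index

# the past_range samples before t0, and the future_range samples from t0 on
past = np.sin(f * np.arange(t0 - past_range, t0)).reshape(past_range, 1)
future = np.sin(f * np.arange(t0, t0 + future_range))

print(past.shape, future.shape)  # (10, 1) (10,)
```

Each training example is just a length-10 slice of the curve plus the 10 samples that follow it; the reshape adds the trailing feature dimension the RNN input expects.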
def my_rnn(x, W, b):
x = tf.transpose(x, [1, 0, 2])
x = tf.reshape(x, [-1, n_input])
x = tf.split(x, n_step, axis=0)
lstm_cell = rnn.BasicLSTMCell(n_hidden, forget_bias=1.0)
outputs, states = rnn.static_rnn(lstm_cell, x, dtype=tf.float32)
return tf.nn.bias_add(tf.matmul(outputs[-1], W), b)
learning_rate = 1e-3
n_hidden = 10
n_step = 10
n_input = 1
n_output = 10
n_epoch = 10
epoch_size = 100
n_iter = n_epoch * epoch_size
tf.reset_default_graph()
x = tf.placeholder(tf.float32, [None, n_step, n_input])
y = tf.placeholder(tf.float32, [None, n_output])
W = tf.Variable(tf.truncated_normal([n_hidden, n_output]))
b = tf.Variable(tf.truncated_normal([n_output]))
h = my_rnn(x, W, b)
individual_losses = tf.reduce_sum(tf.squared_difference(h,y), 1)
loss = tf.reduce_mean(individual_losses)
optimiser = tf.train.AdamOptimizer(learning_rate).minimize(loss)
%%time
batch_size = 50
sess = tf.InteractiveSession()
tf.global_variables_initializer().run()
for iter in range(n_iter+1):
# no mini-batching yet
_, p, f = generate_sin(batch_size)
optimiser.run(feed_dict={x:p, y:f})
if iter % epoch_size == 0:
print('Epoch: {} Loss: {}'.format(int(iter/epoch_size), loss.eval(feed_dict={x:p, y:f})))
t, p, f = generate_sin(2000, T = 50, t_offset=10)
l = loss.eval(feed_dict={x:p, y:f})
print(l)
fcast = h.eval(feed_dict={x:p})
plt.plot(t, f[:,0], label='Actual')
plt.plot(t, fcast[:,0], label='fcast')
plt.legend()
"""
Explanation: 2. Let's define our model
End of explanation
"""
|
shear/rppy | notebooks/QSI Sample Workflow.ipynb | bsd-2-clause | %matplotlib inline
from rppy import las
import rppy
from matplotlib import pyplot as plt
import numpy as np
from matplotlib.ticker import AutoMinorLocator
well2 = las.LASReader("data/well_2.las", null_subs=np.nan)
"""
Explanation: Quantitative Seismic Interpretation
This notebook provides a step-by-step walkthrough of a reservoir characterization workflow based on an example data-set and problem-set from Quantitative Seismic Interpretation (Avseth, Mukerji, Mavko, 2005).
The dataset used is provided free of charge on the QSI website (link). It consists of well log data from five wells, with six distinct lithofacies identified in Well 2, chosen as the type well. Alongside the well log data ia a seismic dataset containing on 2D section of NMO-corrected pre-stack CDP gathers and two 3D volumes - near and far offset partial stacks.
Rock physics modeling
First off, we'll need to load the Well 2 data. The LAS file contains both P and S velocity, density, and gamma ray curves. We'll use the LASReader from SciPy recipes (link)
End of explanation
"""
plt.figure(1)
plt.suptitle("Well #2 Log Suite")
plt.subplot(1, 3, 1)
plt.plot(well2.data['GR'], well2.data['DEPT'], 'g')
plt.ylim(2000, 2600)
plt.title('Gamma')
plt.xlim(20, 120)
plt.gca().set_xticks([20, 120])
plt.gca().xaxis.grid(True, which="minor")
minorLoc = AutoMinorLocator(6)
plt.gca().xaxis.set_minor_locator(minorLoc)
plt.gca().invert_yaxis()
plt.ylabel("Depth [m]")
plt.gca().set_yticks([2100, 2200, 2300, 2400, 2500])
plt.subplot(1, 3, 2)
plt.plot(well2.data['RHOB'], well2.data['DEPT'], 'b')
plt.ylim(2000, 2600)
plt.title('Density')
plt.xlim(1.65, 2.65)
plt.gca().set_xticks([1.65, 2.65])
plt.gca().xaxis.grid(True, which="minor")
minorLoc = AutoMinorLocator(6)
plt.gca().xaxis.set_minor_locator(minorLoc)
plt.gca().invert_yaxis()
plt.gca().axes.get_yaxis().set_ticks([])
plt.subplot(1, 3, 3)
plt.plot(well2.data['Vp'], well2.data['DEPT'], 'b')
plt.ylim(2000, 2600)
plt.title('VP')
plt.xlim(0.5, 4.5)
plt.gca().set_xticks([0.5, 4.5])
plt.gca().xaxis.grid(True, which="minor")
minorLoc = AutoMinorLocator(6)
plt.gca().xaxis.set_minor_locator(minorLoc)
plt.gca().invert_yaxis()
plt.gca().axes.get_yaxis().set_ticks([])
plt.plot(well2.data['Vs'], well2.data['DEPT'], 'r')
plt.ylim(2000, 2600)
plt.title('Velocity')
plt.xlim(0.5, 4.5)
plt.gca().set_xticks([0.5, 4.5])
plt.gca().xaxis.grid(True, which="minor")
minorLoc = AutoMinorLocator(6)
plt.gca().xaxis.set_minor_locator(minorLoc)
plt.gca().invert_yaxis()
plt.gca().axes.get_yaxis().set_ticks([])
plt.show()
"""
Explanation: First things first, let's take a look at our well logs. Most of the code below isn't necessary just to get a quick look, but it makes the plots nice and pretty.
End of explanation
"""
phi = (well2.data['RHOB'] - 2.65)/(1.05 - 2.65)
"""
Explanation: Now, we'll derive a porosity curve from the bulk density assuming a uniform grain density of 2.65 g/cm^3 and a fluid density of 1.05 g/cm^3.
End of explanation
"""
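The mass balance behind that one-liner: bulk density is the volume-weighted average of matrix and fluid densities, rho_b = (1 - phi) * rho_ma + phi * rho_fl, which rearranges to phi = (rho_b - rho_ma) / (rho_fl - rho_ma). A standalone sanity check with NumPy (the example bulk densities are made up for illustration):

```python
import numpy as np

rho_ma, rho_fl = 2.65, 1.05            # quartz matrix and brine densities (g/cm^3)
rho_b = np.array([2.65, 2.17, 1.69])   # example bulk densities (g/cm^3)

# density-porosity transform
phi = (rho_b - rho_ma) / (rho_fl - rho_ma)
print(phi)  # approximately [0.0, 0.3, 0.6]
```

At rho_b = 2.65 the rock is all matrix (phi = 0); lighter bulk densities map linearly onto higher porosities.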
plt.figure(2)
plt.subplot(1, 3, 1)
plt.plot(phi, well2.data['DEPT'], 'k')
plt.ylim(2000, 2600)
plt.title('Porosity')
plt.xlim(0, 0.6)
plt.gca().set_xticks([0, 0.6])
plt.gca().xaxis.grid(True, which="minor")
minorLoc = AutoMinorLocator(6)
plt.gca().xaxis.set_minor_locator(minorLoc)
plt.gca().invert_yaxis()
plt.ylabel("Depth [m]")
plt.gca().set_yticks([2100, 2200, 2300, 2400, 2500])
plt.show()
"""
Explanation: Let's take a look at our porosity curve and sanity-check it.
End of explanation
"""
plt.figure(3)
fig, ax = plt.subplots()
im = plt.scatter(phi, well2.data["Vp"], s=20, c=well2.data["GR"], alpha=0.5)
plt.xlabel('POROSITY')
plt.ylabel('VP')
plt.xlim([0, 1])
plt.ylim([0, 6])
plt.clim([50, 100])
cbar = fig.colorbar(im, ax=ax)
plt.show()
"""
Explanation: Looks plausible, no values above 0.6, average porosity hovering around or below 30%. Now let's start looking at log relationships. We'll create a crossplot of Vp vs. Porosity for well 2, and colour it by gamma as a quick-look lithology indicator.
End of explanation
"""
phi_2 = np.arange(0, 1, 0.001)
Ku = np.empty(np.shape(phi_2))
Kl = np.empty(np.shape(phi_2))
uu = np.empty(np.shape(phi_2))
ul = np.empty(np.shape(phi_2))
Vpu = np.empty(np.shape(phi_2))
Vpl = np.empty(np.shape(phi_2))
K = np.array([37., 2.25])
u = np.array([44., 0.001])
for n in np.arange(0, len(phi_2)):
Ku[n], Kl[n], uu[n], ul[n] = rppy.media.hashin_shtrikman(K, u, np.array([1-phi_2[n], phi_2[n]]))
Vpu[n] = rppy.moduli.Vp(2.65, K=Ku[n], u=uu[n])
Vpl[n] = rppy.moduli.Vp(2.65, K=Kl[n], u=ul[n])
plt.figure(4)
fig, ax = plt.subplots()
im = plt.scatter(phi, well2.data["Vp"], s=20, c=well2.data["GR"], alpha=0.5)
plt.xlabel('POROSITY')
plt.ylabel('VP')
plt.xlim([0, 1])
plt.ylim([0, 6])
plt.clim([50, 100])
cbar = fig.colorbar(im, ax=ax)
plt.plot(phi_2, Vpu, 'k')
plt.plot(phi_2, Vpl, 'k')
plt.show()
"""
Explanation: We'll now compute the Hashin-Shtrikman upper and lower bounds, and add them to our crossplot. In order to do this, we'll need to make some assumptions about the composition of the rock. We'll assume a solid quartz matrix, with a bulk modulus K=37 GPa and a shear modulus u=44 GPa. For the fluid phase we'll begin by using water-filled porosity, by assuming a fluid bulk modulus K=2.25 GPa, shear modulus u=0 GPa.
End of explanation
"""
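As a quick cross-check on the Hashin-Shtrikman curves, the looser Voigt (arithmetic) and Reuss (harmonic) bounds can be computed directly, without rppy. A minimal sketch at a single porosity, reusing the quartz/water bulk moduli from above:

```python
import numpy as np

K = np.array([37.0, 2.25])  # bulk moduli: quartz, water (GPa)
f = np.array([0.7, 0.3])    # volume fractions at 30% porosity

K_voigt = np.sum(f * K)        # arithmetic average: stiff (upper) bound
K_reuss = 1.0 / np.sum(f / K)  # harmonic average: soft (lower) bound

print(K_voigt, K_reuss)
```

Hashin-Shtrikman bounds always fall inside the Voigt-Reuss envelope, so an HS value outside this range would signal a bug.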
Vphan, Vshan = rppy.media.han(phi_2, 0.5)
plt.figure(5)
fig, ax = plt.subplots()
im = plt.scatter(phi, well2.data["Vp"], s=20, c=well2.data["GR"], alpha=0.5)
plt.xlabel('POROSITY')
plt.ylabel('VP')
plt.xlim([0, 1])
plt.ylim([0, 6])
plt.clim([50, 100])
cbar = fig.colorbar(im, ax=ax)
plt.plot(phi_2, Vpu, 'k')
plt.plot(phi_2, Vpl, 'k')
plt.plot(phi_2, Vphan, 'k')
plt.show()
"""
Explanation: Now we'll compute, and add to the mix, Han's empirical sandstone line, for a water-saturated sand at 40 MPa, with a clay content of 50% (the 0.5 passed to rppy.media.han).
End of explanation
"""
Kss, uss = rppy.media.soft_sand(36.6, 45, phi_2, phi_0=0.36, C=9, P=0.02)
Vpss = rppy.moduli.Vp(2.65, K=Kss, u=uss)
plt.figure(6)
fig, ax = plt.subplots()
im = plt.scatter(phi, well2.data["Vp"], s=20, c=well2.data["GR"], alpha=0.5)
plt.xlabel('POROSITY')
plt.ylabel('VP')
plt.xlim([0, 1])
plt.ylim([0, 6])
plt.clim([50, 100])
cbar = fig.colorbar(im, ax=ax)
plt.plot(phi_2, Vpu, 'k')
plt.plot(phi_2, Vpl, 'k')
plt.plot(phi_2, Vphan, 'k')
plt.plot(phi_2, Vpss, 'b')
plt.show()
"""
Explanation: Now, we'll compute the modified Hashin-Shtrikman lower bound, using Hertz-Mindlin theory to define the moduli of the high-porosity end member. This approach is commonly referred to as the "soft", "unconsolidated", or "friable" sand model. It represents a physical model where intergranular cements are deposited away from, rather than at, the grain-to-grain contacts, and as such, models a 'sorting' trend with reduced porosity (rather than a 'diagenetic' trend where clays and cements are deposited at grain contacts, significantly stiffening the model).
For this, we'll use a quartz matrix, with a bulk modulus of 36.6 GPa and a shear modulus of 45 GPa. We'll model using a critical porosity of 0.36, number of grain contacts C=9, and an effective pressure of 20 MPa.
End of explanation
"""
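For reference, the Hertz-Mindlin dry-pack moduli that anchor the high-porosity end member have closed forms (the standard expressions from the rock-physics literature). The sketch below evaluates them with the same inputs passed to rppy.media.soft_sand above. Note this is an independent hand calculation, not a call into rppy, and it assumes pressure is given in GPa:

```python
import numpy as np

K_min, u_min = 36.6, 45.0    # mineral bulk and shear moduli (GPa)
phi0, C, P = 0.36, 9, 0.02   # critical porosity, coordination number, pressure (GPa)

# mineral Poisson's ratio from K and mu
nu = (3 * K_min - 2 * u_min) / (2 * (3 * K_min + u_min))

# Hertz-Mindlin dry-pack moduli at critical porosity
K_hm = (C**2 * (1 - phi0)**2 * u_min**2 * P
        / (18 * np.pi**2 * (1 - nu)**2)) ** (1.0 / 3.0)
u_hm = ((5 - 4 * nu) / (5 * (2 - nu))) * (
    3 * C**2 * (1 - phi0)**2 * u_min**2 * P
    / (2 * np.pi**2 * (1 - nu)**2)) ** (1.0 / 3.0)

print(round(K_hm, 2), round(u_hm, 2))
```

These dry-frame values come out at only a couple of GPa, which is what makes the soft-sand lower bound so much softer than the mineral point.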
|
skkandrach/foundations-homework | data-databases/Twitter_API.ipynb | mit | api_key = ""
api_secret = ""
access_token = ""
token_secret = ""
"""
Explanation: The Twitter API
This tutorial presents an overview of how to use the Python programming language to interact with the Twitter API, both for acquiring data and for posting it. We're using the Twitter API because it's useful in its own right but also presents an interesting "case study" of how to work with APIs offered by commercial entities and social media sites.
About the API
The Twitter API allows you to programmatically enact many of the same actions that you can perform with the Twitter app and the Twitter website, such as searching for Tweets, following users, reading your timeline, posting tweets and direct messages, etc, though there are some parts of the Twitter user experience, like polls, that are (as of this writing) unavailable through the API. You can use the API to do things like collect data from tweets and write automated agents that post to Twitter.
In particular, we're going to be talking about Twitter's REST API. ("REST" stands for Representational State Transfer, a popular style of API design). For the kind of work we'll be doing, the streaming API is also worth a look, but left as an exercise for the reader.
Authorization
All requests to the REST API—making a post, running a search, following or unfollowing an account—must be made on behalf of a user. Your code must be authorized to submit requests to the API on that user's behalf. A user can authorize code to act on their behalf through the medium of something called an application. You can see which applications you've authorized by logging into Twitter and following this link.
When making requests to the Twitter API, you don't use your username and password: instead you use four different unique identifiers: the Consumer (Application) Key, the Consumer (Application) Secret, an Access Token, and a Token Secret. The Consumer Key/Secret identify the application, and the Token/Token Secret identify a particular account as having access through a particular application. You don't choose these values; they're strings of random numbers automatically generated by Twitter. These strings, together, act as a sort of "password" for the Twitter API.
In order to obtain these four magical strings, we need to...
Create a Twitter "application," which is associated with a Twitter account;
the "API Key" and "API Secret" are created with this application;
Create an access token and token secret;
Copy all four strings to use when creating Python programs that access Twitter.
This site has a good overview of the steps you need to perform in order to create a Twitter application. I'll demonstrate the process in class. You'll need to have already signed up for a Twitter account!
When you're done creating your application and getting your token for the application, assign them to the variables below:
End of explanation
"""
!pip3 install twython
"""
Explanation: Working with Twython
Twitter's API operates over HTTP and returns JSON objects, and technically you could use the requests library (or any other HTTP client) to make requests to and receive responses from the API. However, the Twitter API uses a somewhat complicated authentication process called Oauth, which requires the generation of cryptographic signatures of requests in order to ensure their security. This process is a bit complicated, and not worth implementing from scratch. For this reason, most programmers making use of the Twitter API use a third-party library to do so. These libraries wrap up and abstract away the particulars of working with Oauth authentication so that programmers don't have to worry about them. As a happy side-effect, the libraries provide particular abstractions for API calls which make it slightly easier to use the API—you can just call methods with parameters instead of constructing URLs in your code "by hand".
There are a number of different libraries for accessing the Twitter API. We're going to use one called Twython. You can install Twython with pip:
End of explanation
"""
import twython
# create a Twython object by passing the necessary secret passwords
twitter = twython.Twython(api_key, api_secret, access_token, token_secret)
response = twitter.search(q="data journalism", result_type="recent", count=20)
[r['text'] for r in response['statuses']]
"""
Explanation: Searching Twitter
Here's our first example Twython snippet, which uses the search resource to find tweets that match a particular search term.
End of explanation
"""
response = twitter.search(q="data journalism", result_type="recent", count=2)
response
"""
Explanation: The .search() method performs a Twitter search, just as though you'd gone to the Twitter search page and typed in your query. The method returns a JSON object, which Twython converts to a dictionary for us. This dictionary contains a number of items; importantly, the value for the key statuses is a list of tweets that match the search term. Let's look at the underlying response in detail. I'm going to run the search again, this time asking for only two results instead of twenty:
End of explanation
"""
def tweet_url(tweet):
return 'https://twitter.com/' + \
tweet['user']['screen_name'] + \
"/statuses/" + \
tweet['id_str']
tweet_url(response['statuses'][1])
"""
Explanation: As you can see, there's a lot of stuff in here, even for just two tweets. In the top-level dictionary, there's a key search_metadata whose value is a dictionary with, well, metadata about the search: how many results it returned, what the query was, and what URL to use to get the next page of results. The value for the statuses key is a list of dictionaries, each of which contains information about the matching tweets. Tweets are limited to 140 characters, but have much more than 140 characters of metadata. Twitter has a good guide to what each of the fields mean here, but here are the most interesting key/value pairs from our perspective:
id_str: the unique numerical ID of the tweet
in_reply_to_status_id_str: the ID of the tweet that this tweet is a reply to, if applicable
retweet_count: number of times this tweet has been retweeted
retweet_status: the tweet that this tweet is a retweet of, if applicable
favorite_count: the number of times that this tweet has been favorited
text: the actual text of the tweet
user: a dictionary with information on the user who wrote the tweet, including the screen_name key which has the Twitter screen name of the user
NOTE: You can do much more with the query than just search for raw strings. The "Query operators" section on this page shows the different bits of syntax you can use to make your query more expressive.
You can form the URL of a particular tweet by combining the tweet's ID and the user's screen name using the following function. (This is helpful if you want to view the tweet in a web browser.)
End of explanation
"""
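To make the field list concrete, here's how those keys would be pulled out of a single tweet dictionary. The tweet below is a made-up stand-in (not real API output), but it has the same shape as an element of response['statuses']:

```python
# hypothetical tweet dict with the keys discussed above (invented values)
tweet = {
    'id_str': '1234567890',
    'text': 'an example tweet',
    'retweet_count': 3,
    'favorite_count': 7,
    'in_reply_to_status_id_str': None,
    'user': {'screen_name': 'example_user'},
}

# rebuild the tweet's URL from the nested user dict and the id
url = ('https://twitter.com/' + tweet['user']['screen_name'] +
       '/statuses/' + tweet['id_str'])
print(tweet['retweet_count'], url)
```

The nested user dictionary is why the tweet_url() helper above indexes tweet['user']['screen_name'] before tweet['id_str'].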
response = twitter.search(q="data journalism", result_type="recent", count=2)
"""
Explanation: Twython, the REST API, and method parameters
In general, Twython has one method for every "endpoint" in the Twitter REST API. Usually the Twython method has a name that resembles or is identical to the corresponding URL in the REST API. The Twython API documentation lists the available methods and which parts of the Twitter API they map to.
As a means of becoming more familiar with this, let's dig a bit deeper into search. The .search() method of Twython takes a number of different parameters, which match up with the query string parameters of the REST API's search/tweets endpoint as documented here. Every parameter that can be specified on the query string in a REST API call can also be included as a named parameter in the call to Twython's .search() method. The preceding examples already show some examples of this:
End of explanation
"""
response = twitter.search(q="data journalism",
result_type="recent",
count=100,
geocode="40.807511,-73.963265,4mi")
for resp in response['statuses']:
print(resp['user']['screen_name'])
"""
Explanation: This call to .search() includes the parameters q (which specifies the search query), result_type (which can be set to either popular, recent or mixed, depending on how you want results to be returned) and count (which specifies how many tweets you want returned in the response, with an upper limit of 100). Looking at the documentation, it appears that there's another interesting parameter we could play with: the geocode parameter, which will make our search respond with tweets only within a given radius of a particular latitude/longitude. Let's use this to find the screen names of people tweeting about data journalism within a few miles of Columbia University:
End of explanation
"""
response = twitter.get_user_timeline(screen_name='columbiajourn',
count=20,
include_rts=False,
exclude_replies=True)
for item in response:
print(item['text'])
"""
Explanation: Getting a user's timeline
The Twitter API provides an endpoint for fetching the tweets of a particular user. The endpoint in the API for this is statuses/user_timeline and the function in Twython is .get_user_timeline(). This function looks a lot like .search() in the shape of its response. Let's try it out, fetching the last few tweets of the Twitter account of the Columbia Journalism School.
End of explanation
"""
response = twitter.get_user_timeline(screen_name='columbiajourn', count=1)
response
"""
Explanation: The screen_name parameter specifies whose timeline we want; the count parameter specifies how many tweets we want from that account. The include_rts and exclude_replies parameters control whether or not particular kinds of tweets are included in the response: setting include_rts to False means we don't see any retweets in our results, while the exclude_replies parameter set to True means we don't see any tweets that are replies to other users. (According to the API documentation, "Using exclude_replies with the count parameter will mean you will receive up-to count tweets — this is because the count parameter retrieves that many tweets before filtering out retweets and replies," which is why asking for 20 tweets doesn't necessarily return 20 tweets in this case.)
Note that the .get_user_timeline() function returns not a dictionary with a key whose value is a list of tweets, like the .search() function. Instead, it simply returns a JSON list:
End of explanation
"""
cursor = twitter.cursor(twitter.search, q='"data journalism" -filter:retweets', count=100)
all_text = list()
for tweet in cursor:
all_text.append(tweet['text'])
if len(all_text) > 500: # stop after 500 tweets
break
"""
Explanation: Using cursors to fetch more results
The .search() and .get_user_timeline() functions by default only return the most recent results, up to the number specified with count (and sometimes even fewer than that). In order to find older tweets, you need to page through the results. If you were doing this "by hand," you would use the max_id or since_id parameters to find tweets older than the last tweet in the current result, repeating that process until you'd exhausted the results (or found as many tweets as you need). This is delicate work and thankfully Twython includes pre-built functionality to make this easier: the .cursor() function.
The .cursor() function takes the function you want to page through as the first parameter, and after that the keyword parameters that you would normally pass to that function. Given this information, it can repeatedly call the given function on your behalf, going back as far as it can. The object returned from the .cursor() function can be used as the iterable object in a for loop, allowing you to iterate over all of the results. Here's an example using .search():
End of explanation
"""
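The max_id bookkeeping that .cursor() hides can be simulated without touching the API. In this toy model, tweets are plain dictionaries with integer ids stored newest-first, and fake_search stands in for twitter.search:

```python
# toy model: tweets are dicts with integer ids, newest first
all_tweets = [{'id': i} for i in range(100, 0, -1)]

def fake_search(max_id=None, count=25):
    # return the newest `count` tweets with id <= max_id (max_id is inclusive)
    pool = [t for t in all_tweets if max_id is None or t['id'] <= max_id]
    return pool[:count]

collected, max_id = [], None
while True:
    batch = fake_search(max_id=max_id)
    if not batch:
        break
    collected.extend(batch)
    max_id = batch[-1]['id'] - 1  # ask only for strictly older tweets next time

print(len(collected))  # 100 tweets, each fetched exactly once
```

Subtracting 1 from the last id before the next request is the "strictly older" trick from Twitter's paging documentation; forgetting it would re-fetch the boundary tweet on every page.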
from collections import Counter
import re
c = Counter()
for text in all_text:
c.update([t.lower() for t in text.split()])
# most common ten words that have a length greater than three and aren't
# "data" or "journalism"
[k for k, v in c.most_common() \
if len(k) > 3 and not(re.search(r"data|journalism", k))][:25]
"""
Explanation: This snippet finds 500 tweets containing the phrase data journalism (excluding retweets) and stores the text of those tweets in a list. We can then use this text for data analysis, like a simple word count:
End of explanation
"""
twitter.update_status(status="This is a test tweet for a tutorial I'm going through, please ignore")
"""
Explanation: Rate limits
TK!
Entities
TK!
Posting tweets
You can also use the Twitter API to post tweets on behalf of a user. In this tutorial, we're going to use this ability of the API to make a simple bot.
Simple example
When you first create a Twitter application, the credentials you have by default (i.e., the ones you get when you click "Create my access token") are for your own user. This means that you can post tweets to your own account using these credentials, and to your own account only. This isn't normally very desirable, but let's give it a shot, just to see how to update your status (i.e., post a tweet) with Twython. Here we go:
End of explanation
"""
twitter = twython.Twython(api_key, api_secret)
auth = twitter.get_authentication_tokens()
print("Log into Twitter as the user you want to authorize and visit this URL:")
print("\t" + auth['auth_url'])
"""
Explanation: Check your account, and you'll see that your status has been updated! (You can safely delete this tweet if you'd like.) As you can see, the .update_status() function takes a single named parameter, status, which should have a string as its value. Twitter will update your status with the given string. The function returns a dictionary with information about the tweet that was just created.
Authorizing another user
Of course, you generally don't want to update your own status. You want to write a program that updates someone else's status, even if that someone else is a bot user of your own creation.
Before you proceed, create a new Twitter account. You'll need to log out of your current account and then open up the Twitter website, or (preferably) use your browser's "private" or "incognito" functionality. Every Twitter account requires a unique e-mail address, and you'll need to have access to the e-mail address to "verify" your account, so make sure you have an e-mail address you can use (and check) other than the one you used for your primary Twitter account. (We'll go over this process in class.)
Once you've created a new Twitter account, you'll need to have that user authorize the Twitter application we created earlier to tweet on its behalf. Doing this is a two-step process. Run the cell below (making sure that the api_key and api_secret variables have been set to the consumer key and consumer secret of your application, respectively), and then open the URL it prints out while you are logged into your bot's account.
End of explanation
"""
pin = ""
twitter = twython.Twython(api_key, api_secret, auth['oauth_token'], auth['oauth_token_secret'])
tokens = twitter.get_authorized_tokens(pin)
new_access_token = tokens['oauth_token']
new_token_secret = tokens['oauth_token_secret']
print("your access token:", new_access_token)
print("your token secret:", new_token_secret)
"""
Explanation: On the page that appears, confirm that you want to authorize the application. A PIN will appear. Paste this PIN into the cell below, as the value assigned to the variable pin.
End of explanation
"""
twitter = twython.Twython(api_key, api_secret, new_access_token, new_token_secret)
"""
Explanation: Great! Now you have an access token and token secret for your bot's account. Run the cell below to create a new Twython object authorized with these credentials.
End of explanation
"""
twitter.update_status(status="hello, world!")
"""
Explanation: And run the following cell to post a test tweet:
End of explanation
"""
import pg8000
lakes = list()
conn = pg8000.connect(database="mondial")
cursor = conn.cursor()
cursor.execute("SELECT name, area, depth, elevation, type, river FROM lake")
for row in cursor.fetchall():
lakes.append({'name': row[0],
'area': row[1],
'depth': row[2],
'elevation': row[3],
'type': row[4],
'river': row[5]})
len(lakes)
"""
Explanation: Simple bot example
We're now going to make a simple bot that posts tweets with information about a randomly selected lake from the MONDIAL database. First, we'll make a big list of dictionaries with all of the information from the table:
End of explanation
"""
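Indexing row[0], row[1], ... by hand is fragile if the SELECT list ever changes. A more generic pattern reads the column names from the DB-API cursor's description attribute; the sketch below uses a stub cursor so it runs without a database (the stub's data is invented):

```python
# stand-in for a real DB-API cursor, so the sketch is self-contained
class StubCursor:
    description = [('name',), ('area',), ('depth',)]
    def fetchall(self):
        return [('Lake Placid', 3.9, 15.0)]

cursor = StubCursor()
columns = [d[0] for d in cursor.description]          # column names, in order
rows = [dict(zip(columns, row)) for row in cursor.fetchall()]
print(rows[0])
```

With a real pg8000 cursor, the same two lines (collect column names, then zip each row) would replace the hand-written lakes.append(...) block.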
sentences = {
'area': 'The area of {} is {} square kilometers.',
'depth': 'The depth of {} is {} meters.',
'elevation': 'The elevation of {} is {} meters.',
'type': 'The type of {} is "{}."',
'river': '{} empties into a river named {}.'
}
"""
Explanation: The following dictionary maps each column to a sentence frame:
End of explanation
"""
import random
def random_lake_sentence(lakes, sentences):
rlake = random.choice(lakes)
# get the keys in the dictionary whose value is not None; we'll only try to
# make sentences for these
possible_keys = [k for k, v in rlake.items() if v is not None and k != 'name']
rframe = random.choice(possible_keys)
return sentences[rframe].format(rlake['name'], rlake[rframe])
for i in range(10):
print(random_lake_sentence(lakes, sentences))
"""
Explanation: The following cell selects a random lake from the list, and a random sentence frame from the sentences dictionary, and attempts to fill in the frame with relevant information from the lake.
End of explanation
"""
twitter.update_status(status=random_lake_sentence(lakes, sentences))
"""
Explanation: We can now call the .update_status() function with the result of the random text generation function:
End of explanation
"""
twitter = twython.Twython(api_key, api_secret)
auth = twitter.get_authentication_tokens()
print("Log into Twitter as the user you want to authorize and visit this URL")
"""
Explanation: To make this into a "bot," we'd need to go an extra step: move all of this code into a standalone Python script, and set a cron job to run it every so often (maybe every few hours).
Further reading
TK
posting to twitter
authorizing into twitter
getting access tokens for yourself:
created an app
authorize yourself to use that app using the "create token" button
getting access tokens for someone else:
create the app
have the other user authorize the application <- complicated process!!!
End of explanation
"""
brettavedisian/phys202-2015-work | assignments/assignment04/MatplotlibEx01.ipynb | mit
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
"""
Explanation: Matplotlib Exercise 1
Imports
End of explanation
"""
import os
assert os.path.isfile('yearssn.dat')
"""
Explanation: Line plot of sunspot data
Download the .txt data for the "Yearly mean total sunspot number [1700 - now]" from the SILSO website. Upload the file to the same directory as this notebook.
End of explanation
"""
data = np.loadtxt('yearssn.dat')
year = data[:, 0]  # splits the data into two arrays
ssc = data[:, 1]
assert len(year)==315
assert year.dtype==np.dtype(float)
assert len(ssc)==315
assert ssc.dtype==np.dtype(float)
"""
Explanation: Use np.loadtxt to read the data into a NumPy array called data. Then create two new 1d NumPy arrays named years and ssc that have the sequence of year and sunspot counts.
End of explanation
"""
y=plt.figure(figsize=(15,1))
plt.plot(year, ssc) #makes a plot of the two arrays
plt.xlabel('Year')
plt.ylabel('SSC')
plt.xlim(right=2016)
plt.title('Sunspot Count vs. Year')
assert True # leave for grading
"""
Explanation: Make a line plot showing the sunspot count as a function of year.
Customize your plot to follow Tufte's principles of visualizations.
Adjust the aspect ratio/size so that the steepest slope in your plot is approximately 1.
Customize the box, grid, spines and ticks to match the requirements of this data.
End of explanation
"""
# referenced http://goo.gl/lle8W6 for ticks and spines
f, ((graph0, graph2), (graph1, graph3)) = plt.subplots(nrows=2, ncols=2,
                                                       figsize=(12, 6), sharey=True)
# make subplots of the different centuries of data
centuries = [(graph0, '1700-1800'), (graph1, '1800-1900'),
             (graph2, '1900-2000'), (graph3, '2000-2100')]
for i, (graph, title) in enumerate(centuries):
    graph.plot(year[i*100:(i+1)*100], ssc[i*100:(i+1)*100])
    # hide the right and top spines of each subplot to reduce ink "use",
    # and give each subplot a title
    graph.spines['right'].set_visible(False)
    graph.spines['top'].set_visible(False)
    graph.yaxis.set_ticks_position('left')
    graph.xaxis.set_ticks_position('bottom')
    graph.set_title(title)
graph3.set_xlim(right=2100)
graph0.set_ylabel('SSC')
graph1.set_ylabel('SSC')
graph1.set_xlabel('Years in Century Blocks')
graph3.set_xlabel('Years in Century Blocks')
plt.tight_layout()
assert True # leave for grading
"""
Explanation: Describe the choices you have made in building this visualization and how they make it effective.
So far, I have just labeled the axes, given the visualization a title, set the limits of the x-axis, and increased the size of the visualization.
Now make 4 subplots, one for each century in the data set. This approach works well for this dataset as it allows you to maintain mild slopes while limiting the overall width of the visualization. Perform similar customizations as above:
Customize your plot to follow Tufte's principles of visualizations.
Adjust the aspect ratio/size so that the steepest slope in your plot is approximately 1.
Customize the box, grid, spines and ticks to match the requirements of this data.
End of explanation
"""
VectorBlox/PYNQ | Pynq-Z1/notebooks/examples/arduino_analog.ipynb | bsd-3-clause
# Make sure the base overlay is loaded
from pynq import Overlay
Overlay("base.bit").download()
"""
Explanation: Arduino Analog Example
This example shows how to read out analog values on Arduino analog pins. Users can either wire the test pins, or use the PYNQ shield.
For this notebook, a PYNQ Arduino shield is used. The grove joystick is connected to group A1, while a grove potentiometer is connected to group A4 on this shield.
End of explanation
"""
from pynq.iop import Arduino_Analog
from pynq.iop import ARDUINO
from pynq.iop import ARDUINO_GROVE_A1
from pynq.iop import ARDUINO_GROVE_A4
analog1 = Arduino_Analog(ARDUINO,ARDUINO_GROVE_A1)
"""
Explanation: 1. Instantiate individual analog controller
In this example, connect the grove joystick to group A1.
End of explanation
"""
analog1.read()
"""
Explanation: 2. Read voltage value out
Read out the individual analog voltage values. The voltage (volts) is in the range of [0.0, 3.3].
End of explanation
"""
analog1.read_raw()[0]
"""
Explanation: 3. Read raw value out
Read out the individual raw values. Since the XADC is 16-bit, the raw value is in the range of [0, 65535].
End of explanation
"""
from time import sleep
analog1.set_log_interval_ms(100)
analog1.start_log()
"""
Explanation: 4. Logging multiple sample values
Step 1: Start logging once every 100 milliseconds
Once the interval is set, users can change the analog values.
For example, if the grove potentiometer is used, move the slider back and forth slowly.
If the joystick is used, rotate it slowly.
End of explanation
"""
log1 = analog1.get_log()
"""
Explanation: Step 2: Get the log
Stop and retrieve the log whenever you are done.
The log is a nested list, where each list inside log records the samples for one channel.
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
from matplotlib.legend_handler import HandlerLine2D
line1, = plt.plot(range(len(log1[0])), log1[0],
'ro', label="X-axis of joystick")
line2, = plt.plot(range(len(log1[1])), log1[1],
'bs', label="Y-axis of joystick")
plt.title('Arduino Analog Voltage Log')
plt.axis([0, len(log1[0]), 0.0, 3.3])
plt.legend(loc=4,bbox_to_anchor=(1, -0.3),
ncol=2, borderaxespad=0.,
handler_map={line1: HandlerLine2D(numpoints=1),
line2: HandlerLine2D(numpoints=1)})
plt.show()
"""
Explanation: Step 3. Plot values over time
The voltage values can be logged and displayed.
End of explanation
"""
analog2 = Arduino_Analog(ARDUINO,[0,1,4])
analog2.set_log_interval_ms(100)
analog2.start_log()
"""
Explanation: 5. Logging multiple devices
We can also repeat the above steps to track multiple analog devices.
In this example:
* connect the grove joystick to group A1.
* connect the grove potentiometer to group A4.
Both analog devices will be monitored.
Step 1: Start logging once every 100 milliseconds
Once the interval is set, users can change the analog values by:
* rotating the joystick.
* moving the slider back and forth.
End of explanation
"""
log2 = analog2.get_log()
"""
Explanation: Step 2: Get the log
Stop and retrieve the log whenever you are done.
The log is a nested list, where each list inside log records the samples for one channel.
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
from matplotlib.legend_handler import HandlerLine2D
line1, = plt.plot(range(len(log2[0])), log2[0],
'ro', label="X-axis of joystick")
line2, = plt.plot(range(len(log2[1])), log2[1],
'bs', label="Y-axis of joystick")
line3, = plt.plot(range(len(log2[2])), log2[2],
'g^', label="potentiometer")
plt.title('Arduino Analog Voltage Log')
plt.axis([0, len(log2[0]), 0.0, 3.3])
plt.legend(loc=4,bbox_to_anchor=(1, -0.3),
ncol=2, borderaxespad=0.,
handler_map={line1: HandlerLine2D(numpoints=1),
line2: HandlerLine2D(numpoints=1),
line3: HandlerLine2D(numpoints=1)})
plt.show()
"""
Explanation: Step 3. Plot values over time
The voltage values can be logged and displayed.
End of explanation
"""
CrowdTruth/CrowdTruth-core | tutorial/notebooks/.ipynb_checkpoints/Dimensionality Reduction - Stopword Removal from Media Unit & Annotation-checkpoint.ipynb | apache-2.0
import pandas as pd
test_data = pd.read_csv("data/person-video-highlight.csv")
test_data["taggedinsubtitles"][0:30]
"""
Explanation: Stopword Removal from Media Unit & Annotation
In this tutorial, we will show how dimensionality reduction can be applied over both the media units and the annotations of a crowdsourcing task, and how this impacts the results of the CrowdTruth quality metrics. We start with an open-ended extraction task, where the crowd was asked to highlight words or phrases in a text that identify or refer to people in a video. The task was executed on Figure Eight. This is how the task looked like to the workers:
A sample dataset for this task is available in this file, containing raw output from the crowd on FigureEight. Download the file and place it in a folder named data that has the same root as this notebook. The answers from the crowd are stored in the taggedinsubtitles column.
End of explanation
"""
import crowdtruth
from crowdtruth.configuration import DefaultConfig
class Config(DefaultConfig):
inputColumns = ["ctunitid", "videolocation", "subtitles"]
outputColumns = ["taggedinsubtitles"]
open_ended_task = True
annotation_separator = ","
remove_empty_rows = False
def processJudgments(self, judgments):
# build annotation vector just from words
judgments[self.outputColumns[0]] = judgments[self.outputColumns[0]].apply(
lambda x: str(x).replace(' ',self.annotation_separator))
# normalize vector elements
judgments[self.outputColumns[0]] = judgments[self.outputColumns[0]].apply(
lambda x: str(x).replace('[',''))
judgments[self.outputColumns[0]] = judgments[self.outputColumns[0]].apply(
lambda x: str(x).replace(']',''))
judgments[self.outputColumns[0]] = judgments[self.outputColumns[0]].apply(
lambda x: str(x).replace('"',''))
return judgments
"""
Explanation: Notice the diverse behavior of the crowd workers. While most annotated each word individually, the worker on row 5 annotated chunks of the sentence together as one multi-word phrase. Also, when no answer was picked by the worker, the value in the cell is NaN.
A basic pre-processing configuration
Our basic pre-processing configuration attempts to normalize the different ways of performing the crowd annotations.
We set remove_empty_rows = False to keep the empty rows from the crowd. This configuration option will set all empty cell values to correspond to a NONE token in the annotation vector.
We build the annotation vector to have one component for each word in the sentence. To do this, we break up multiple-word annotations into a list of single words in the processJudgments call:
judgments[self.outputColumns[0]] = judgments[self.outputColumns[0]].apply(
lambda x: str(x).replace(' ',self.annotation_separator))
The final configuration class Config is this:
End of explanation
"""
data_with_stopwords, config_with_stopwords = crowdtruth.load(
file = "data/person-video-highlight.csv",
config = Config()
)
processed_results_with_stopwords = crowdtruth.run(
data_with_stopwords,
config_with_stopwords
)
"""
Explanation: Now we can pre-process the data and run the CrowdTruth metrics:
End of explanation
"""
import nltk
from nltk.corpus import stopwords
import string
stopword_set = set(stopwords.words('english'))
stopword_set.update(['s'])
def remove_stop_words(words_string, sep):
'''
words_string: string containing all words
sep: separator character for the words in words_string
'''
words_list = words_string.replace("'", sep).split(sep)
corrected_words_list = ""
for word in words_list:
        if word.translate(str.maketrans('', '', string.punctuation)) not in stopword_set:
if corrected_words_list != "":
corrected_words_list += sep
corrected_words_list += word
return corrected_words_list
"""
Explanation: Removing stopwords from Media Units and Annotations
A more complex dimensionality reduction technique involves removing the stopwords from both the media units and the crowd annotations. Stopwords (i.e. words that are very common in the English language) do not usually contain much useful information. Also, the behavior of the crowds w.r.t them is inconsistent - some workers omit them, some annotate them.
The first step is to build a function that removes stopwords from strings. We will use the stopwords corpus in the nltk package to get the list of words. We want to build a function that can be reused for both the text in the media units and in the annotations column. Also, we need to be careful about omitting punctuation.
The function remove_stop_words does all of these things:
End of explanation
"""
import pandas as pd
class ConfigDimRed(Config):
def processJudgments(self, judgments):
judgments = Config.processJudgments(self, judgments)
# remove stopwords from input sentence
for idx in range(len(judgments[self.inputColumns[2]])):
judgments.at[idx, self.inputColumns[2]] = remove_stop_words(
judgments[self.inputColumns[2]][idx], " ")
for idx in range(len(judgments[self.outputColumns[0]])):
judgments.at[idx, self.outputColumns[0]] = remove_stop_words(
judgments[self.outputColumns[0]][idx], self.annotation_separator)
if judgments[self.outputColumns[0]][idx] == "":
judgments.at[idx, self.outputColumns[0]] = self.none_token
return judgments
"""
Explanation: In the new configuration class ConfigDimRed, we apply the function we just built to both the column that contains the media unit text (inputColumns[2]), and the column containing the crowd annotations (outputColumns[0]):
End of explanation
"""
data_without_stopwords, config_without_stopwords = crowdtruth.load(
file = "data/person-video-highlight.csv",
config = ConfigDimRed()
)
processed_results_without_stopwords = crowdtruth.run(
data_without_stopwords,
config_without_stopwords
)
"""
Explanation: Now we can pre-process the data and run the CrowdTruth metrics:
End of explanation
"""
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.scatter(
processed_results_with_stopwords["units"]["uqs"],
processed_results_without_stopwords["units"]["uqs"],
)
plt.plot([0, 1], [0, 1], 'red', linewidth=1)
plt.title("Sentence Quality Score")
plt.xlabel("with stopwords")
plt.ylabel("without stopwords")
"""
Explanation: Effect on CrowdTruth metrics
Finally, we can compare the effect of the stopword removal on the CrowdTruth sentence quality score.
End of explanation
"""
plt.scatter(
processed_results_with_stopwords["workers"]["wqs"],
processed_results_without_stopwords["workers"]["wqs"],
)
plt.plot([0, 0.6], [0, 0.6], 'red', linewidth=1)
plt.title("Worker Quality Score")
plt.xlabel("with stopwords")
plt.ylabel("without stopwords")
"""
Explanation: The red line in the plot runs through the diagonal. All sentences above the line have a higher sentence quality score when the stopwords were removed.
The plot shows that removing the stopwords improved the quality for a majority of the sentences. Surprisingly though, some sentences decreased in quality. This effect can be understood when plotting the worker quality scores.
End of explanation
"""
AllenDowney/DataScienceBestPractices | hypothesis.ipynb | mit
from __future__ import print_function, division
import numpy
import scipy.stats
import matplotlib.pyplot as pyplot
from IPython.html.widgets import interact, fixed
from IPython.html import widgets
import first
# seed the random number generator so we all get the same results
numpy.random.seed(19)
# some nicer colors from http://colorbrewer2.org/
COLOR1 = '#7fc97f'
COLOR2 = '#beaed4'
COLOR3 = '#fdc086'
COLOR4 = '#ffff99'
COLOR5 = '#386cb0'
%matplotlib inline
"""
Explanation: Hypothesis Testing
Copyright 2015 Allen Downey
License: Creative Commons Attribution 4.0 International
End of explanation
"""
live, firsts, others = first.MakeFrames()
"""
Explanation: As an example, let's look at differences between groups. The example I use in Think Stats is first babies compared with others. The first module provides code to read the data into three pandas Dataframes.
End of explanation
"""
def TestStatistic(data):
group1, group2 = data
test_stat = abs(group1.mean() - group2.mean())
return test_stat
"""
Explanation: The apparent effect we're interested in is the difference in the means. Other examples might include a correlation between variables or a coefficient in a linear regression. The number that quantifies the size of the effect, whatever it is, is the "test statistic".
End of explanation
"""
group1 = firsts.prglngth
group2 = others.prglngth
"""
Explanation: For the first example, I extract the pregnancy length for first babies and others. The results are pandas Series objects.
End of explanation
"""
actual = TestStatistic((group1, group2))
actual
"""
Explanation: The actual difference in the means is 0.078 weeks, which is only 13 hours.
End of explanation
"""
n, m = len(group1), len(group2)
pool = numpy.hstack((group1, group2))
"""
Explanation: The null hypothesis is that there is no difference between the groups. We can model that by forming a pooled sample that includes first babies and others.
End of explanation
"""
def RunModel():
numpy.random.shuffle(pool)
data = pool[:n], pool[n:]
return data
"""
Explanation: Then we can simulate the null hypothesis by shuffling the pool and dividing it into two groups, using the same sizes as the actual sample.
End of explanation
"""
RunModel()
"""
Explanation: The result of running the model is two NumPy arrays with the shuffled pregnancy lengths:
End of explanation
"""
TestStatistic(RunModel())
"""
Explanation: Then we compute the same test statistic using the simulated data:
End of explanation
"""
test_stats = numpy.array([TestStatistic(RunModel()) for i in range(1000)])
test_stats.shape
"""
Explanation: If we run the model 1000 times and compute the test statistic, we can see how much the test statistic varies under the null hypothesis.
End of explanation
"""
def VertLine(x):
"""Draws a vertical line at x."""
pyplot.plot([x, x], [0, 300], linewidth=3, color='0.8')
VertLine(actual)
pyplot.hist(test_stats, color=COLOR5)
pyplot.xlabel('difference in means')
pyplot.ylabel('count')
None
"""
Explanation: Here's the sampling distribution of the test statistic under the null hypothesis, with the actual difference in means indicated by a gray line.
End of explanation
"""
pvalue = sum(test_stats >= actual) / len(test_stats)
pvalue
"""
Explanation: The p-value is the probability that the test statistic under the null hypothesis exceeds the actual value.
End of explanation
"""
class HypothesisTest(object):
"""Represents a hypothesis test."""
def __init__(self, data):
"""Initializes.
data: data in whatever form is relevant
"""
self.data = data
self.MakeModel()
self.actual = self.TestStatistic(data)
self.test_stats = None
def PValue(self, iters=1000):
"""Computes the distribution of the test statistic and p-value.
iters: number of iterations
returns: float p-value
"""
self.test_stats = numpy.array([self.TestStatistic(self.RunModel())
for _ in range(iters)])
count = sum(self.test_stats >= self.actual)
return count / iters
def MaxTestStat(self):
"""Returns the largest test statistic seen during simulations.
"""
return max(self.test_stats)
def PlotHist(self, label=None):
"""Draws a Cdf with vertical lines at the observed test stat.
"""
def VertLine(x):
"""Draws a vertical line at x."""
pyplot.plot([x, x], [0, max(ys)], linewidth=3, color='0.8')
ys, xs, patches = pyplot.hist(ht.test_stats, color=COLOR4)
VertLine(self.actual)
pyplot.xlabel('test statistic')
pyplot.ylabel('count')
def TestStatistic(self, data):
"""Computes the test statistic.
data: data in whatever form is relevant
"""
raise UnimplementedMethodException()
def MakeModel(self):
"""Build a model of the null hypothesis.
"""
pass
def RunModel(self):
"""Run the model of the null hypothesis.
returns: simulated data
"""
raise UnimplementedMethodException()
"""
Explanation: In this case the result is about 15%, which means that even if there is no difference between the groups, it is plausible that we could see a sample difference as big as 0.078 weeks.
We conclude that the apparent effect might be due to chance, so we are not confident that it would appear in the general population, or in another sample from the same population.
Part Two
We can take the pieces from the previous section and organize them in a class that represents the structure of a hypothesis test.
End of explanation
"""
class DiffMeansPermute(HypothesisTest):
"""Tests a difference in means by permutation."""
def TestStatistic(self, data):
"""Computes the test statistic.
data: data in whatever form is relevant
"""
group1, group2 = data
test_stat = abs(group1.mean() - group2.mean())
return test_stat
def MakeModel(self):
"""Build a model of the null hypothesis.
"""
group1, group2 = self.data
self.n, self.m = len(group1), len(group2)
self.pool = numpy.hstack((group1, group2))
def RunModel(self):
"""Run the model of the null hypothesis.
returns: simulated data
"""
numpy.random.shuffle(self.pool)
data = self.pool[:self.n], self.pool[self.n:]
return data
"""
Explanation: HypothesisTest is an abstract parent class that encodes the template. Child classes fill in the missing methods. For example, here's the test from the previous section.
End of explanation
"""
data = (firsts.prglngth, others.prglngth)
ht = DiffMeansPermute(data)
p_value = ht.PValue(iters=1000)
print('\nmeans permute pregnancy length')
print('p-value =', p_value)
print('actual =', ht.actual)
print('ts max =', ht.MaxTestStat())
"""
Explanation: Now we can run the test by instantiating a DiffMeansPermute object:
End of explanation
"""
ht.PlotHist()
"""
Explanation: And we can plot the sampling distribution of the test statistic under the null hypothesis.
End of explanation
"""
class DiffStdPermute(DiffMeansPermute):
"""Tests a difference in means by permutation."""
def TestStatistic(self, data):
"""Computes the test statistic.
data: data in whatever form is relevant
"""
group1, group2 = data
test_stat = abs(group1.std() - group2.std())
return test_stat
data = (firsts.prglngth, others.prglngth)
ht = DiffStdPermute(data)
p_value = ht.PValue(iters=1000)
print('\nstd permute pregnancy length')
print('p-value =', p_value)
print('actual =', ht.actual)
print('ts max =', ht.MaxTestStat())
"""
Explanation: We can write a class named DiffStdPermute that extends DiffMeansPermute and overrides TestStatistic to compute the difference in standard deviations.
End of explanation
"""
data = (firsts.totalwgt_lb.dropna(), others.totalwgt_lb.dropna())
ht = DiffMeansPermute(data)
p_value = ht.PValue(iters=1000)
print('\nmeans permute birthweight')
print('p-value =', p_value)
print('actual =', ht.actual)
print('ts max =', ht.MaxTestStat())
"""
Explanation: Now let's run DiffMeansPermute again to see if there is a difference in birth weight between first babies and others.
End of explanation
"""
kubeflow/kfp-tekton-backend | samples/contrib/local_development_quickstart/Local Development Quickstart.ipynb | apache-2.0
# PROJECT_ID is used to construct the docker image registry. We will use Google Container Registry,
# but any other accessible registry works as well.
PROJECT_ID='Your-Gcp-Project-Id'
# Install Pipeline SDK
!pip3 install kfp --upgrade
!mkdir -p tmp/pipelines
"""
Explanation: KubeFlow Pipeline local development quickstart
In this notebook, we will demo:
Author components with the lightweight method and ContainerOp based on existing images.
Author pipelines.
Note: Make sure that you have docker installed in the local environment
Setup
End of explanation
"""
def list_blobs(bucket_name: str) -> str:
'''Lists all the blobs in the bucket.'''
import subprocess
subprocess.call(['pip', 'install', '--upgrade', 'google-cloud-storage'])
from google.cloud import storage
storage_client = storage.Client()
bucket = storage_client.get_bucket(bucket_name)
list_blobs_response = bucket.list_blobs()
blobs = ','.join([blob.name for blob in list_blobs_response])
print(blobs)
return blobs
"""
Explanation: Part 1
Two ways to author a component to list blobs in a GCS bucket
A pipeline is composed of one or more components. In this section, you will build a single component that lists the blobs in a GCS bucket. Then you build a pipeline that consists of this component. There are two ways to author a component. In the following sections we will go through each of them.
1. Create a lightweight python component from a Python function.
1.1 Define component function
The requirements for the component function:
* The function must be stand-alone.
* The function can only import packages that are available in the base image.
* If the function operates on numbers, the parameters must have type hints. Supported types are int, float, bool. Everything else is passed as str, that is, string.
* To build a component with multiple output values, use Python’s typing.NamedTuple type hint syntax.
End of explanation
"""
import kfp.components as comp
# Converts the function to a lightweight Python component.
list_blobs_op = comp.func_to_container_op(list_blobs)
"""
Explanation: 1.2 Create a lightweight Python component
End of explanation
"""
import kfp.dsl as dsl
# Defines the pipeline.
@dsl.pipeline(name='List GCS blobs', description='Lists GCS blobs.')
def pipeline_func(bucket_name):
list_blobs_task = list_blobs_op(bucket_name)
# Use the following commented code instead if you want to use GSA key for authentication.
#
# from kfp.gcp import use_gcp_secret
# list_blobs_task = list_blobs_op(bucket_name).apply(use_gcp_secret('user-gcp-sa'))
# Same for below.
# Compile the pipeline to a file.
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, 'tmp/pipelines/list_blobs.pipeline.tar.gz')
"""
Explanation: 1.3 Define pipeline
Note that when accessing the Google Cloud file system, you need to make sure the pipeline can authenticate to GCP. Refer to Authenticating Pipelines to GCP for details.
End of explanation
"""
%%bash
# Create folders if they don't exist.
mkdir -p tmp/components/list-gcs-blobs
# Create the Python file that lists GCS blobs.
cat > ./tmp/components/list-gcs-blobs/app.py <<HERE
import argparse
from google.cloud import storage
# Parse agruments.
parser = argparse.ArgumentParser()
parser.add_argument(
'--bucket', type=str, required=True, help='GCS bucket name.')
args = parser.parse_args()
# Create a client.
storage_client = storage.Client()
# List blobs.
bucket = storage_client.get_bucket(args.bucket)
list_blobs_response = bucket.list_blobs()
blobs = ','.join([blob.name for blob in list_blobs_response])
print(blobs)
with open('/blobs.txt', 'w') as f:
f.write(blobs)
HERE
"""
Explanation: 2. Wrap an existing Docker container image using ContainerOp
2.1 Create a Docker container
Create your own container image that includes your program. If your component creates some outputs to be fed as inputs to the downstream components, each separate output must be written as a string to a separate local text file inside the container image. For example, if a trainer component needs to output the trained model path, it can write the path to a local file /output.txt. The string written to an output file cannot be too big. If it is too big (>> 100 kB), it is recommended to save the output to an external persistent storage and pass the storage path to the next component.
Start by entering the value of your Google Cloud Platform Project ID.
The following cell creates a file app.py that contains a Python script. The script takes a GCS bucket name as an input argument, gets the lists of blobs in that bucket, prints the list of blobs and also writes them to an output file.
End of explanation
"""
%%bash
# Create Dockerfile.
cat > ./tmp/components/list-gcs-blobs/Dockerfile <<EOF
FROM python:3.6-slim
WORKDIR /app
COPY . /app
RUN pip install --upgrade google-cloud-storage
EOF
"""
Explanation: Now create a container that runs the script. Start by creating a Dockerfile. A Dockerfile contains the instructions to assemble a Docker image. The FROM statement specifies the Base Image from which you are building. WORKDIR sets the working directory. When you assemble the Docker image, COPY will copy the required files and directories (for example, app.py) to the filesystem of the container. RUN will execute a command (for example, install the dependencies) and commits the results.
End of explanation
"""
%%bash -s "{PROJECT_ID}"
IMAGE_NAME="listgcsblobs"
TAG="latest" # "v_$(date +%Y%m%d_%H%M%S)"
# Create script to build docker image and push it.
cat > ./tmp/components/list-gcs-blobs/build_image.sh <<HERE
PROJECT_ID="${1}"
IMAGE_NAME="${IMAGE_NAME}"
TAG="${TAG}"
GCR_IMAGE="gcr.io/\${PROJECT_ID}/\${IMAGE_NAME}:\${TAG}"
docker build -t \${IMAGE_NAME} .
docker tag \${IMAGE_NAME} \${GCR_IMAGE}
docker push \${GCR_IMAGE}
docker image rm \${IMAGE_NAME}
docker image rm \${GCR_IMAGE}
HERE
"""
Explanation: Now that we have created our Dockerfile, we can build our Docker image and push it to a registry that hosts it. The following cell creates a shell script that builds the container image and stores it in the Google Container Registry.
End of explanation
"""
%%bash
# Build and push the image.
cd tmp/components/list-gcs-blobs
bash build_image.sh
"""
Explanation: Run the script.
End of explanation
"""
import kfp.dsl
def list_gcs_blobs_op(name, bucket):
return kfp.dsl.ContainerOp(
name=name,
image='gcr.io/{}/listgcsblobs:latest'.format(PROJECT_ID),
command=['python', '/app/app.py'],
file_outputs={'blobs': '/blobs.txt'},
arguments=['--bucket', bucket]
)
"""
Explanation: 2.2 Define each component
Define a component by creating an instance of kfp.dsl.ContainerOp that describes the interactions with the Docker container image created in the previous step. You need to specify the component name, the image to use, the command to run after the container starts, the input arguments, and the file outputs.
End of explanation
"""
# Create folders if they don't exist.
!mkdir -p tmp/pipelines
"""
Explanation: 2.3 Create your workflow as a Python function
Start by creating a folder to store the pipeline file.
End of explanation
"""
import datetime
import kfp.compiler as compiler
# Define the pipeline
@kfp.dsl.pipeline(
name='List GCS Blobs',
description='Takes a GCS bucket name as input and lists the blobs.'
)
def pipeline_func(bucket='Enter your bucket name here.'):
list_blobs_task = list_gcs_blobs_op('List', bucket)
# Compile the pipeline to a file.
filename = 'tmp/pipelines/list_blobs{dt:%Y%m%d_%H%M%S}.pipeline.tar.gz'.format(
dt=datetime.datetime.now())
compiler.Compiler().compile(pipeline_func, filename)
"""
Explanation: Define your pipeline as a Python function. @kfp.dsl.pipeline is a required decoration including name and description properties. Then compile the pipeline function. After the compilation is completed, a pipeline file is created.
End of explanation
"""
%%bash -s "{PROJECT_ID}"
# Create folders if they don't exist.
mkdir -p tmp/components/view-input
# Create the Python file that selects and views the input CSV.
cat > ./tmp/components/view-input/app.py <<HERE
import argparse
import json
from google.cloud import storage
# Parse arguments.
parser = argparse.ArgumentParser()
parser.add_argument('--blobs', type=str, required=True, help='List of blobs.')
args = parser.parse_args()
blobs = args.blobs.split(',')
inputs = filter(lambda s: s.endswith('iris.csv'), blobs)
input = list(inputs)[0]
print('The CSV file is {}'.format(input))
# CSV header.
header = [
'sepal_length',
'sepal_width',
'petal_length',
'petal_width',
'species',
]
# Add metadata for the output artifact.
metadata = {
    'outputs' : [{
        'type': 'table',
        'storage': 'gcs',
        'format': 'csv',
        'header': header,
        'source': input
    }]
}
print(metadata)
# Create an artifact.
with open('/mlpipeline-ui-metadata.json', 'w') as f:
    json.dump(metadata, f)
HERE
# Create Dockerfile.
cat > ./tmp/components/view-input/Dockerfile <<HERE
FROM python:3.6-slim
WORKDIR /app
COPY . /app
RUN pip install --upgrade google-cloud-storage
HERE
# Create script to build docker image and push it.
IMAGE_NAME="viewinput"
TAG="latest" # "v_$(date +%Y%m%d_%H%M%S)"
cat > ./tmp/components/view-input/build_image.sh <<HERE
PROJECT_ID="${1}"
IMAGE_NAME="${IMAGE_NAME}"
TAG="${TAG}"
GCR_IMAGE="gcr.io/\${PROJECT_ID}/\${IMAGE_NAME}:\${TAG}"
docker build -t \${IMAGE_NAME} .
docker tag \${IMAGE_NAME} \${GCR_IMAGE}
docker push \${GCR_IMAGE}
docker image rm \${IMAGE_NAME}
docker image rm \${GCR_IMAGE}
HERE
# Build and push the image.
cd tmp/components/view-input
bash build_image.sh
"""
Explanation: Follow the instructions on kubeflow.org to access Kubeflow UIs. Upload the created pipeline and run it.
Warning: When the pipeline is run, it pulls the image from the registry to the Kubernetes cluster to create a container. Kubernetes caches pulled images, so an updated image with an unchanged tag may not be re-pulled. One solution is to use the image digest instead of the tag in your component DSL, for example, s/v1/sha256:9509182e27dcba6d6903fccf444dc6188709cc094a018d5dd4211573597485c9/g. Alternatively, if you don't want to update the digest every time, you can use the :latest tag, which forces Kubernetes to always pull the latest image.
Part 2
Create a pipeline using Kubeflow Pipelines
In this section, you will build another component. Then you will see how to connect components to build a multi-component pipeline. You will build the new component by building a Docker container image and wrapping it using ContainerOp.
1 Create a container to view CSV
Build a component that takes the output of the first component explained in the preceding section (that is, the list of GCS blobs), selects the file ending in iris.csv, and displays its content as an artifact. Start by uploading to your Storage bucket the quickstart_iris.csv file that is included in the repository.
End of explanation
"""
import kfp.dsl
def list_gcs_blobs_op(name, bucket):
    return kfp.dsl.ContainerOp(
        name=name,
        image='gcr.io/{}/listgcsblobs:latest'.format(PROJECT_ID),
        command=['python', '/app/app.py'],
        arguments=['--bucket', bucket],
        file_outputs={'blobs': '/blobs.txt'},
        output_artifact_paths={'mlpipeline-ui-metadata': '/mlpipeline-ui-metadata.json'},
    )

def view_input_op(name, blobs):
    return kfp.dsl.ContainerOp(
        name=name,
        image='gcr.io/{}/viewinput:latest'.format(PROJECT_ID),
        command=['python', '/app/app.py'],
        arguments=['--blobs', blobs]
    )
"""
Explanation: 2 Define each component
Define each of your components by using kfp.dsl.ContainerOp. Describe the interactions with the Docker container image created in the previous step by specifying the component name, the image to use, the command to run after the container starts, the input arguments, and the file outputs.
End of explanation
"""
# Create folders if they don't exist.
!mkdir -p tmp/pipelines
import datetime
import kfp.compiler as compiler
# Define the pipeline
@kfp.dsl.pipeline(
name='Quickstart pipeline',
description='Takes a GCS bucket name views a CSV input file in the bucket.'
)
def pipeline_func(bucket='Enter your bucket name here.'):
    list_blobs_task = list_gcs_blobs_op('List', bucket)
    view_input_task = view_input_op('View', list_blobs_task.outputs['blobs'])
# Compile the pipeline to a file.
filename = 'tmp/pipelines/quickstart_pipeline{dt:%Y%m%d_%H%M%S}.pipeline.tar.gz'.format(
dt=datetime.datetime.now())
compiler.Compiler().compile(pipeline_func, filename)
"""
Explanation: 3 Create your workflow as a Python function
Define your pipeline as a Python function. @kfp.dsl.pipeline is a required decoration including name and description properties. pipeline_func defines the pipeline with the bucket parameter. When the user uploads the pipeline to the system and starts creating a new run from it, they'll see an input box for the bucket parameter with the initial value Enter your bucket name here., which they can replace with their bucket name at runtime. list_gcs_blobs_op('List', bucket) creates a component named List that lists the blobs. view_input_op('View', list_blobs_task.outputs['blobs']) creates a component named View that views a CSV. list_blobs_task.outputs['blobs'] tells the pipeline to take the output of the first component, stored as a string in blobs.txt, as an input for the second component.
End of explanation
"""
import shutil
import pathlib
path = pathlib.Path("tmp")
shutil.rmtree(path)
"""
Explanation: Follow the instructions on kubeflow.org to access Kubeflow UIs. Upload the created pipeline and run it.
Clean up
End of explanation
"""
|
kit-cel/wt | nt2_ce2/vorlesung/ch_1_basics/pulse_shaping.ipynb | gpl-2.0 | # importing
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
# showing figures inline
%matplotlib inline
# plotting options
font = {'size' : 20}
plt.rc('font', **font)
plt.rc('text', usetex=True)
matplotlib.rc('figure', figsize=(18, 10) )
"""
Explanation: Content and Objectives
Show pulse shaping (rect and raised-cosine) for random data
Spectra are determined based on the theoretical pulse shape as well as for the random signals when applying estimation
Import
End of explanation
"""
########################
# find impulse response of an RC filter
########################
def get_rc_ir(K, n_sps, t_symbol, r):
    '''
    Determines coefficients of an RC filter

    Formula out of: K.-D. Kammeyer, Nachrichtenübertragung
    At the poles, l'Hôpital's rule is applied

    NOTE: Length of the IR has to be an odd number

    IN: length of IR, upsampling factor, symbol time, roll-off factor
    OUT: filter coefficients
    '''

    # check that IR length is odd
    assert K % 2 == 1, 'Length of the impulse response should be an odd number'

    # map zero r to close-to-zero
    if r == 0:
        r = 1e-32

    # initialize output length and sample time
    rc = np.zeros(K)
    t_sample = t_symbol / n_sps

    # time indices and sampled time
    k_steps = np.arange(-(K-1) / 2.0, (K-1) / 2.0 + 1)
    t_steps = k_steps * t_sample

    for k in k_steps.astype(int):
        if t_steps[k] == 0:
            rc[k] = 1. / t_symbol
        elif np.abs(t_steps[k]) == t_symbol / (2.0 * r):
            rc[k] = r / (2.0 * t_symbol) * np.sin(np.pi / (2.0 * r))
        else:
            rc[k] = np.sin(np.pi * t_steps[k] / t_symbol) / np.pi / t_steps[k] \
                * np.cos(r * np.pi * t_steps[k] / t_symbol) \
                / (1.0 - (2.0 * r * t_steps[k] / t_symbol)**2)

    return rc
"""
Explanation: Function for determining the impulse response of an RC filter
End of explanation
"""
# modulation scheme and constellation points
M = 2
constellation_points = [ -1, 1 ]
# symbol time and number of symbols
t_symb = 1.0
n_symb = 100
# parameters of the RRC filter
r = .33
n_sps = 8 # samples per symbol
syms_per_filt = 4 # symbols per filter (plus minus in both directions)
K_filt = 2 * syms_per_filt * n_sps + 1 # length of the fir filter
# parameters for frequency regime
N_fft = 512
Omega = np.linspace( -np.pi, np.pi, N_fft)
f_vec = Omega / ( 2 * np.pi * t_symb / n_sps )
"""
Explanation: Parameters
End of explanation
"""
# get RC pulse and rectangular pulse,
# both being normalized to energy 1
rc = get_rc_ir( K_filt, n_sps, t_symb, r )
rc /= np.linalg.norm( rc )
rect = np.append( np.ones( n_sps ), np.zeros( len( rc ) - n_sps ) )
rect /= np.linalg.norm( rect )
# get pulse spectra
RC_PSD = np.abs( np.fft.fftshift( np.fft.fft( rc, N_fft ) ) )**2
RC_PSD /= n_sps
RECT_PSD = np.abs( np.fft.fftshift( np.fft.fft( rect, N_fft ) ) )**2
RECT_PSD /= n_sps
"""
Explanation: Signals and their spectra
End of explanation
"""
# number of realizations along which to average the psd estimate
n_real = 10
# initialize two-dimensional field for collecting several realizations along which to average
S_rc = np.zeros( (n_real, N_fft ), dtype=complex )
S_rect = np.zeros( (n_real, N_fft ), dtype=complex )
# loop for multiple realizations in order to improve spectral estimation
for k in range( n_real ):

    # generate random binary vector and
    # modulate the specified modulation scheme
    data = np.random.randint( M, size = n_symb )
    s = [ constellation_points[ d ] for d in data ]

    # apply RC filtering/pulse-shaping
    s_up_rc = np.zeros( n_symb * n_sps )
    s_up_rc[ : : n_sps ] = s
    s_rc = np.convolve( rc, s_up_rc)

    # apply RECTANGULAR filtering/pulse-shaping
    s_up_rect = np.zeros( n_symb * n_sps )
    s_up_rect[ : : n_sps ] = s
    s_rect = np.convolve( rect, s_up_rect)

    # collect spectra; averaging the periodograms below implements Bartlett's method
    S_rc[k, :] = np.fft.fftshift( np.fft.fft( s_rc, N_fft ) )
    S_rect[k, :] = np.fft.fftshift( np.fft.fft( s_rect, N_fft ) )
# average along realizations
RC_PSD_sim = np.average( np.abs( S_rc )**2, axis=0 )
RC_PSD_sim /= np.max( RC_PSD_sim )
RECT_PSD_sim = np.average( np.abs( S_rect )**2, axis=0 )
RECT_PSD_sim /= np.max( RECT_PSD_sim )
"""
Explanation: Real data-modulated Tx-signal
End of explanation
"""
plt.subplot(221)
plt.plot( np.arange( np.size( rc ) ) * t_symb / n_sps, rc, linewidth=2.0, label='RC' )
plt.plot( np.arange( np.size( rect ) ) * t_symb / n_sps, rect, linewidth=2.0, label='Rect' )
plt.ylim( (-.1, 1.1 ) )
plt.grid( True )
plt.legend( loc='upper right' )
plt.title( '$g(t), s(t)$' )
plt.subplot(222)
np.seterr(divide='ignore') # ignore warning for logarithm of 0
plt.plot( f_vec, 10*np.log10( RC_PSD ), linewidth=2.0, label='RC theory' )
plt.plot( f_vec, 10*np.log10( RECT_PSD ), linewidth=2.0, label='Rect theory' )
np.seterr(divide='warn') # enable warning for logarithm of 0
plt.grid( True )
plt.legend( loc='upper right' )
plt.title( '$|S(f)|^2$' )
plt.ylim( (-60, 10 ) )
plt.subplot(223)
# upper limit for number of symbols shown in the plot
ul = 20 * n_sps
plt.plot( np.arange( np.size( s_rc[ : ul ])) * t_symb / n_sps, s_rc[ : ul ], linewidth=2.0, label='RC' )
plt.plot( np.arange( np.size( s_rect[ : ul ])) * t_symb / n_sps, s_rect[ : ul ], linewidth=2.0, label='Rect' )
plt.plot( np.arange( np.size( s_up_rc[ : ul ])) * t_symb / n_sps, s_up_rc[ : ul ], 'o', linewidth=2.0, label='Syms' )
plt.ylim( (-1.1, 1.1 ) )
plt.grid(True)
plt.legend(loc='upper right')
plt.xlabel('$t/T$')
plt.subplot(224)
np.seterr(divide='ignore') # ignore warning for logarithm of 0
plt.plot( f_vec, 10*np.log10( RC_PSD_sim ), linewidth=2.0, label='RC' )
plt.plot( f_vec, 10*np.log10( RECT_PSD_sim ), linewidth=2.0, label='Rect' )
np.seterr(divide='warn') # enable warning for logarithm of 0
plt.grid(True);
plt.xlabel('$fT$');
plt.legend(loc='upper right')
plt.ylim( (-60, 10 ) )
"""
Explanation: Plotting
End of explanation
"""
|
mpurg/qtools | docs/examples/q2gmx/q2gmx.ipynb | mit | from __future__ import print_function, division, absolute_import
import time
from Qpyl.core.qparameter import QPrm
from Qpyl.core.qlibrary import QLib
from Qpyl.core.qstructure import QStruct
from Qpyl.core.qtopology import QTopology
from Qpyl.common import init_logger
# load the logger
logger = init_logger('Qpyl')
"""
Explanation: Q to GMX conversion
This notebook provides an example of Qtools/Qpyl library usage in Python.
It loads Q parameters (.lib and .prm) and the structure file (.pdb) of a single residue (in this case phenol) and converts it to Gromacs format (.itp and .gro).
Notes:
1. The parameters might not be exactly the same as in original OPLS due to rounding errors, especially in the A,B -> $\sigma$,$\epsilon$ conversion.
2. The output from this script has been validated by comparing the zeroth-step energies (via gmx dump, v5.0.2) of Tyr ($\Delta E = 0.01\%$) and Trp ($\Delta E = 0.4\%$) residues, produced with the generated topology and the topology built with GMX opls parameters via pdb2gmx on the same structure.
2.1. Bonds to -C were removed from the GMX library to prevent 'dangling bond error' in pdb2gmx.
2.2. Two impropers are missing in TRP/GMX_opls (on CD2 and CE2). They were removed from Q for the test.
2.3. Fixed a typo in Qoplsaa.lib v1.2 in the TRP improper section (HH2 CH2 CE2 CZ3, should be HH2 CH2 CZ2 CZ3).
2.4. TRP proper dihedrals differ by about 3% in GMX vs Q. The "problematic" dihedrals that do not match are defined explicitly in aminoacids.rtp and come from Kaminski et al (JPCB, 2001). These parameters appear to match "opls2005" (ffld_server -version 14 output). When removed, the difference drops to 0.4 % (rounding errors).
Load the modules
End of explanation
"""
ignore_errors = False
qlib = QLib("oplsaa", ignore_errors=ignore_errors)
qprm = QPrm("oplsaa", ignore_errors=ignore_errors)
qstr = QStruct("fnl.pdb", "pdb", ignore_errors=ignore_errors)
qlib.read_lib("fnl.lib")
qprm.read_prm("fnl.prm")
qtop = QTopology(qlib, qprm, qstr)
if len(qtop.residues) != 1:
    raise Exception("Only single residue allowed")
resname = qtop.residues[0].name
"""
Explanation: Load Q parameters
Set ignore_errors = True if you experience issues with bad parameters/non-integer charges/...
End of explanation
"""
crds = []
for atom in qtop.atoms:
    x, y, z = [crd/10.0 for crd in atom.struct.coordinates]   # A to nm
    crds.append("{:>5d}{:<5s}{:>5s}{:>5d}{:>8.3f}{:>8.3f}{:>8.3f}{:>8.4f}{:>8.4f}{:>8.4f}"
                "".format(1, resname, atom.name, atom.index, x, y, z, 0, 0, 0))
gro = """\
{} from Q
{:>5d}
{}
0.0 0.0 0.0
""".format(resname, len(qtop.atoms), "\n".join(crds))
"""
Explanation: Make the GRO
End of explanation
"""
typs, atms, bnds, angs, dihs, imps, pairs = [], [], [], [], [], [], set([])
for aprm in sorted(set([atom.prm for atom in qtop.atoms]), key=lambda x: x.prm_id):
    if aprm.lj_B < 1e-7 and aprm.lj_A < 1e-7:
        sig, eps = 0, 0
    elif aprm.lj_B < 1e-7:
        # when B is 0, we need to tell GMX this by setting the B to a random (1) value and
        # then setting the calculated "fake" sigma to a negative value
        # GMX will recalculate c6 (B) and c12 (A) from the fake sigma/epsilons and set c6=B=0
        # https://github.com/gromacs/gromacs/blob/5fb87d63ce5df628bfca85f1cebdbc845ec89b40/src/gromacs/gmxpreprocess/convparm.cpp#L100
        new_B = 1.0
        sig, eps = -(aprm.lj_A/new_B)**(2/6) / 10, (new_B**4) / 4 / (aprm.lj_A**2) * 4.184
    else:
        sig, eps = (aprm.lj_A/aprm.lj_B)**(2/6) / 10, (aprm.lj_B**4) / 4 / (aprm.lj_A**2) * 4.184
    atype = "op_{}".format(aprm.atom_type)
    typs.append(" {:<10s} {:<10s} {:>10.6f} 0.000 A {:>15e} {:>15e}"
                "".format(atype, atype, aprm.mass, sig, eps))
charge_groups = qtop.residues[0].lib.charge_groups
for atom in qtop.atoms:
    atype = "op_{}".format(atom.prm.atom_type)
    charge_group = [i+1 for i, ch_grp in enumerate(charge_groups) if atom.name in ch_grp][0]
    atms.append("{:>5d} {:<10s} {:>5d} {:5s} {:5s} {:5d} {:10.6f} {:10.6f}"
                "".format(atom.index, atype, 1, resname, atom.name,
                          charge_group, atom.charge, atom.prm.mass))
for bond in qtop.bonds:
    a1, a2 = [atom.index for atom in bond.atoms]
    bnds.append("{:>5d} {:>5d} {:>5d} {:>10.6f} {:>10.3f}"
                "".format(a1, a2, 1, bond.prm.r0/10.0, bond.prm.fc*4.184*100))

for angle in qtop.angles:
    a1, a2, a3 = [atom.index for atom in angle.atoms]
    angs.append("{:>5d} {:>5d} {:>5d} {:>5d} {:>10.3f} {:>10.3f}"
                "".format(a1, a2, a3, 1, angle.prm.theta0, angle.prm.fc*4.184))
# Use type 5 dihedral (Fourier, GMX Manual 4.2.13, table 5.5)
dih_type = 5
for torsion in qtop.torsions:
    opls_torsion = [0, 0, 0, 0]   # F1, F2, F3, F4
    for prm in torsion.prm.get_prms():
        fc, mult, phase, npaths = prm
        mult = abs(mult)
        if int(mult) != mult or npaths != 1.0 or \
                (mult % 2 == 0 and phase != 180.0) or \
                int(mult) not in (1, 2, 3, 4):
            raise Exception("Bad parameter: " + str(torsion.prm))
        opls_torsion[abs(int(mult))-1] = fc * 2 * 4.184   # Q to ffld to kJ/mol
    c1, c2, c3, c4 = opls_torsion
    # Conversion to RB (type 3) would be:
    # f1, f2, f3, f4 = opls_torsion
    # c0 = f2 + (f1 + f3)/2.0
    # c1 = (-f1 + 3*f3)/2.0
    # c2 = -f2 + 4*f4
    # c3 = -2*f3
    # c4 = -4*f4
    # c5 = 0
    a1, a2, a3, a4 = [a.index for a in torsion.atoms]
    dihs.append("{:>5d} {:>5d} {:>5d} {:>5d} {:>5d} {:>10.6f} {:>10.6f} {:>10.6f} {:>10.6f}"
                "".format(a1, a2, a3, a4, dih_type, c1, c2, c3, c4))
    # find 1-4 pairs:
    # check that atoms don't share bonds/angles (four/five member rings)
    # and avoid duplicates (six member rings)
    if not (set(torsion.atoms[0].bonds) & set(torsion.atoms[3].bonds)) and \
            not (set(torsion.atoms[0].angles) & set(torsion.atoms[3].angles)):
        pairs.add(tuple(sorted((a1, a4))))

pairs = sorted(["{:>5d} {:>5d} {:>5d}".format(a1, a4, 1) for a1, a4 in pairs])
# Use type 4 periodic improper dihedral (GMX Manual 4.2.12, table 5.5)
imp_type = 4
for improper in qtop.impropers:
    a1, a2, a3, a4 = [a.index for a in improper.atoms]
    imps.append("{:>5d} {:>5d} {:>5d} {:>5d} {:>5d} {:>10.3f} {:>10.5f} {:>10.3f}"
                "".format(a1, a2, a3, a4, imp_type, improper.prm.phi0,
                          improper.prm.fc*4.184, improper.prm.multiplicity))
prms = {"atomtypes": typs,
"atoms": atms,
"bonds": bnds,
"angles" : angs,
"dihedrals": dihs,
"impropers": imps,
"pairs": pairs}
for k, v in prms.iteritems():
prms[k] = "\n".join(v)
itp = """;
; OPLS/AA topology for '{resname}'
; Converted from Q with q2gmx.ipynb
; Date: {date}
;
[ atomtypes ]
; name mass charge ptype sigma(nm) epsilon (kJ/mol)
{atomtypes}
[ moleculetype ]
; Name nrexcl
{resname} 3
[ atoms ]
; nr type resnr residue atom cgnr charge mass
{atoms}
[ bonds ]
; ai aj type r0 (nm) fc (kJ/(mol nm2))
{bonds}
[ angles ]
; ai aj ak type theta0 (degr) fc (kJ/(mol rad2)
{angles}
[ dihedrals ]
; Type 5 Fourier
; ai aj ak al type coefficients
{dihedrals}
[ dihedrals ]
; Periodic improper dihedrals (type 4)
; ai aj ak al type phi0 fc (kJ/mol) n
{impropers}
[ pairs ]
; ai aj f_qq qi qj sigma (nm) epsilon (kJ/mol)
{pairs}
""".format(resname=resname, date=time.ctime(), **prms)
"""
Explanation: Make the ITP
End of explanation
"""
#open(resname+".gro", "w").write(gro)
#open(resname+".itp", "w").write(itp)
print(gro)
print(itp)
"""
Explanation: Write files
End of explanation
"""
|
srnas/barnaba | examples/example_01_ermsd.ipynb | gpl-3.0 | # import barnaba
import barnaba as bb
# define trajectory and topology files
native="uucg2.pdb"
traj = "../test/data/UUCG.xtc"
top = "../test/data/UUCG.pdb"
# calculate eRMSD between native and all frames in trajectory
ermsd = bb.ermsd(native,traj,topology=top)
"""
Explanation: RMSD/eRMSD calculation
Here we show how to calculate distances between three-dimensional structures.
eRMSD can be calculated using the function
python
ermsd = bb.ermsd(reference_file,target_file)
reference_file and target_file can be e.g. PDB files. eRMSD between reference and all frames in a simulation can be calculated by specifying the trajectory and topology files:
python
ermsd = bb.ermsd(reference_file,target_traj_file,topology=topology_file)
All trajectory formats accepted by MDTRAJ (e.g. pdb, xtc, trr, dcd, binpos, netcdf, mdcrd, prmtop) can be used.
Let us see a practical example:
End of explanation
"""
import matplotlib.pyplot as plt
plt.xlabel("Frame")
plt.ylabel("eRMSD from native")
plt.plot(ermsd[::50])
plt.show()
plt.hist(ermsd,density=True,bins=50)
plt.xlabel("eRMSD from native")
plt.show()
"""
Explanation: We plot the eRMSD over time (every 50 frames to make the plot nicer) and make a histogram
End of explanation
"""
# calculate RMSD
rmsd = bb.rmsd(native,traj,topology=top,heavy_atom=False)
# plot time series
plt.xlabel("Frame")
plt.ylabel("RMSD from native (nm)")
plt.plot(rmsd[::50])
plt.show()
# make histogram
plt.hist(rmsd,density=True,bins=50)
plt.ylabel("RMSD from native (nm)")
plt.show()
"""
Explanation: As a rule of thumb, eRMSD values below 0.7-0.8 can be considered low; as such, the peak around 0.4 eRMSD corresponds to structures that are very similar to the native.
Nota Bene
- eRMSD is a dimensionless number.
- Remember to remove periodic boundary conditions before performing the analysis.
We can also calculate the root mean squared deviation (RMSD) after optimal superposition by using
python
rmsd = bb.rmsd(reference_file,target_file)
or
python
rmsd = bb.rmsd(reference_file,target_traj_file,topology=topology_file)
for trajectories. By default RMSD is calculated using backbone atoms only (heavy_atom=False): this makes it possible to calculate RMSD between structures with different sequences. If heavy_atom=True, RMSD is calculated using all heavy atoms. Values are expressed in nanometers.
End of explanation
"""
plt.xlabel("eRMSD from native")
plt.ylabel("RMSD from native (nm)")
plt.axhline(0.4,ls = "--", c= 'k')
plt.axvline(0.7,ls = "--", c= 'k')
plt.scatter(ermsd,rmsd,s=2.5)
plt.show()
"""
Explanation: Structures with eRMSD lower than 0.7 are typically significantly similar to the reference.
Note that structures with low RMSD (less than 0.4 nm) may be very different from native. We can check if this is true by comparing RMSD and eRMSD.
End of explanation
"""
import numpy as np
low_rmsd = np.where(rmsd<0.3)
idx_a = np.argsort(ermsd[low_rmsd])[-1]
low_e = low_rmsd[0][idx_a]
print("Highest eRMSD for structures with RMSD ~ 0.3nm")
print("eRMSD:%5.3f; RMSD: %5.3f nm" % (ermsd[low_e],rmsd[low_e]))
plt.xlabel("eRMSD from native")
plt.ylabel("RMSD from native (nm)")
plt.axhline(0.4,ls = "--", c= 'k')
plt.axvline(0.7,ls = "--", c= 'k')
plt.scatter(ermsd,rmsd,s=2.5)
plt.scatter(ermsd[low_e],rmsd[low_e],s=50,c='r')
plt.show()
"""
Explanation: We can clearly see that the two measures are correlated, but several structures with low RMSD have very large eRMSD.
We cherry-pick a structure with RMSD from native $\approx$ 0.3 nm, but high eRMSD.
End of explanation
"""
import mdtraj as md
# load trajectory
tt = md.load(traj,top=top)
# save low ermsd
tt[low_e].save("low_rmsd.pdb")
# align to native and write aligned PDB to disk
rmsd1 = bb.rmsd(native,'low_rmsd.pdb',out='low_rmsd_align.pdb')
"""
Explanation: We can extract a frame from the simulation using the save function from MDTraj.
Aligned structures are written to disk by passing a filename string as the out argument to the rmsd function.
End of explanation
"""
import py3Dmol
pdb_e = open('low_rmsd_align.pdb','r').read()
pdb_n = open(native,'r').read()
p = py3Dmol.view(width=900,height=600,viewergrid=(1,2))
p.addModel(pdb_n,'pdb',viewer=(0,0))
p.addModel(pdb_e,'pdb',viewer=(0,1))
p.setStyle({'stick':{}})
p.setBackgroundColor('0xeeeeee')
p.zoomTo()
"""
Explanation: Finally, we use the py3Dmol module to visualize the native and the low-RMSD/high-eRMSD structure.
End of explanation
"""
|
tensorflow/docs-l10n | site/ja/tutorials/keras/regression.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
"""
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
"""
# Use seaborn for pairplot.
!pip install -q seaborn
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
# Make NumPy printouts easier to read.
np.set_printoptions(precision=3, suppress=True)
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
print(tf.__version__)
"""
Explanation: Basic regression: Predict fuel efficiency
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/tutorials/keras/regression"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/tutorials/keras/regression.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a> </td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/tutorials/keras/regression.ipynb"> <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png"> View source on GitHub</a></td>
<td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/tutorials/keras/regression.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a> </td>
</table>
In a regression problem, the aim is to predict the output of a continuous value, like a price or a probability. Contrast this with a classification problem, where the aim is to predict a discrete label (for example, whether a picture contains an apple or an orange).
This notebook uses the classic Auto MPG dataset and builds a model to predict the fuel efficiency of late-1970s and early-1980s automobiles. To do this, the model is given a description of many automobiles from that time period, including attributes such as cylinders, displacement, horsepower, and weight.
This example uses the tf.keras API; see this guide for details.
End of explanation
"""
url = 'http://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data'
column_names = ['MPG', 'Cylinders', 'Displacement', 'Horsepower', 'Weight',
'Acceleration', 'Model Year', 'Origin']
raw_dataset = pd.read_csv(url, names=column_names,
na_values='?', comment='\t',
sep=' ', skipinitialspace=True)
dataset = raw_dataset.copy()
dataset.tail()
"""
Explanation: The Auto MPG dataset
The dataset is available from the UCI Machine Learning Repository.
Get the data
First, download the dataset:
End of explanation
"""
dataset.isna().sum()
"""
Explanation: Clean the data
The dataset contains a few missing values:
End of explanation
"""
dataset = dataset.dropna()
"""
Explanation: Drop those rows to keep this initial tutorial simple:
End of explanation
"""
dataset['Origin'] = dataset['Origin'].map({1: 'USA', 2: 'Europe', 3: 'Japan'})
dataset = pd.get_dummies(dataset, columns=['Origin'], prefix='', prefix_sep='')
dataset.tail()
"""
Explanation: "Origin" 列はカテゴリであり、数値ではないので、pd.get_dummies でワンホットに変換します。
注意: keras.Model を設定して、このような変換を行うことができます。これについては、このチュートリアルでは取り上げません。例については、前処理レイヤーまたは CSV データの読み込みのチュートリアルをご覧ください。
End of explanation
"""
train_dataset = dataset.sample(frac=0.8, random_state=0)
test_dataset = dataset.drop(train_dataset.index)
"""
Explanation: Split the data into training and test sets
Now split the dataset into a training set and a test set. You will use the test set in the final evaluation of your models.
End of explanation
"""
sns.pairplot(train_dataset[['MPG', 'Cylinders', 'Displacement', 'Weight']], diag_kind='kde')
"""
Explanation: Inspect the data
Review the joint distribution of a few pairs of columns from the training set.
Looking at the top row, it should be clear that the fuel efficiency (MPG) is a function of all the other parameters. Looking at the other rows, it should be clear that they are functions of each other.
End of explanation
"""
train_dataset.describe().transpose()
"""
Explanation: Let's also check the overall statistics:
End of explanation
"""
train_features = train_dataset.copy()
test_features = test_dataset.copy()
train_labels = train_features.pop('MPG')
test_labels = test_features.pop('MPG')
"""
Explanation: Split features from labels
Separate the target value, the label, from the features. This label is the value that you will train the model to predict.
End of explanation
"""
train_dataset.describe().transpose()[['mean', 'std']]
"""
Explanation: Normalization
In the table of statistics, note how different the ranges of each feature are:
End of explanation
"""
normalizer = tf.keras.layers.Normalization(axis=-1)
"""
Explanation: It is good practice to normalize features that use different scales and ranges.
One reason this is important is because the features are multiplied by the model weights, so the scale of the outputs and the scale of the gradients are affected by the scale of the inputs.
Although a model might converge without feature normalization, normalization makes training much more stable.
Note: There is no advantage to normalizing the one-hot features; it is done here for simplicity. For more details on how to use the preprocessing layers, refer to the Working with preprocessing layers guide and the Classify structured data using Keras preprocessing layers tutorial.
The Normalization layer
The preprocessing.Normalization layer is a clean and simple way to build that preprocessing into your model.
The first step is to create the layer:
End of explanation
"""
normalizer.adapt(np.array(train_features))
"""
Explanation: Then, fit the state of the preprocessing layer to the data by calling .adapt():
End of explanation
"""
print(normalizer.mean.numpy())
"""
Explanation: This calculates the mean and variance, and stores them in the layer:
End of explanation
"""
first = np.array(train_features[:1])
with np.printoptions(precision=2, suppress=True):
    print('First example:', first)
    print()
    print('Normalized:', normalizer(first).numpy())
"""
Explanation: When the layer is called, it returns the input data with each feature independently normalized:
End of explanation
"""
horsepower = np.array(train_features['Horsepower'])
horsepower_normalizer = layers.Normalization(input_shape=[1,], axis=None)
horsepower_normalizer.adapt(horsepower)
"""
Explanation: Linear regression
Before building a deep neural network model, start with linear regression using one and several variables.
One variable
Begin with single-variable linear regression to predict MPG from Horsepower.
Training a model with tf.keras typically starts by defining the model architecture. Here, use a tf.keras.Sequential model, which represents a sequence of steps.
There are two steps in a single-variable linear regression model:
Normalize the horsepower input.
Apply a linear transformation ($y = mx+b$) to produce 1 output using layers.Dense.
The number of inputs can be set by the input_shape argument, or automatically when the model is run for the first time.
First, create the horsepower Normalization layer:
End of explanation
"""
horsepower_model = tf.keras.Sequential([
horsepower_normalizer,
layers.Dense(units=1)
])
horsepower_model.summary()
"""
Explanation: Build the Sequential model:
End of explanation
"""
horsepower_model.predict(horsepower[:10])
"""
Explanation: This model will predict MPG from Horsepower.
Run the untrained model on the first 10 horsepower values. The output won't be good, but notice that it has the expected shape of (10, 1):
End of explanation
"""
horsepower_model.compile(
optimizer=tf.optimizers.Adam(learning_rate=0.1),
loss='mean_absolute_error')
"""
Explanation: Once the model is built, configure the training procedure using the Model.compile() method. The most important arguments to compile are the loss and the optimizer, since these define what will be optimized (mean_absolute_error) and how (using optimizers.Adam).
End of explanation
"""
%%time
history = horsepower_model.fit(
train_features['Horsepower'],
train_labels,
epochs=100,
# Suppress logging.
verbose=0,
# Calculate validation results on 20% of the training data.
validation_split = 0.2)
"""
Explanation: Once the training is configured, use Model.fit() to execute the training:
End of explanation
"""
hist = pd.DataFrame(history.history)
hist['epoch'] = history.epoch
hist.tail()
def plot_loss(history):
    plt.plot(history.history['loss'], label='loss')
    plt.plot(history.history['val_loss'], label='val_loss')
    plt.ylim([0, 10])
    plt.xlabel('Epoch')
    plt.ylabel('Error [MPG]')
    plt.legend()
    plt.grid(True)
plot_loss(history)
"""
Explanation: Visualize the model's training progress using the stats stored in the history object:
End of explanation
"""
test_results = {}
test_results['horsepower_model'] = horsepower_model.evaluate(
test_features['Horsepower'],
test_labels, verbose=0)
"""
Explanation: Collect the results on the test set for later use:
End of explanation
"""
x = tf.linspace(0.0, 250, 251)
y = horsepower_model.predict(x)
def plot_horsepower(x, y):
    plt.scatter(train_features['Horsepower'], train_labels, label='Data')
    plt.plot(x, y, color='k', label='Predictions')
    plt.xlabel('Horsepower')
    plt.ylabel('MPG')
    plt.legend()
plot_horsepower(x, y)
"""
Explanation: Since this is a single-variable regression, it's easy to view the model's predictions as a function of the input.
End of explanation
"""
linear_model = tf.keras.Sequential([
normalizer,
layers.Dense(units=1)
])
"""
Explanation: Multiple inputs
You can use an almost identical setup to make predictions based on multiple inputs. This model still does the same $y = mx+b$, except that $m$ is a matrix and $b$ is a vector.
Here, use the Normalization layer that was adapted to the whole dataset.
End of explanation
"""
linear_model.predict(train_features[:10])
"""
Explanation: Calling this model on a batch of inputs produces units=1 outputs for each example.
End of explanation
"""
linear_model.layers[1].kernel
"""
Explanation: Calling the model builds its weight matrices. Now you can see that the kernel (the $m$ in $y=mx+b$) has a shape of (9, 1).
End of explanation
"""
linear_model.compile(
optimizer=tf.optimizers.Adam(learning_rate=0.1),
loss='mean_absolute_error')
%%time
history = linear_model.fit(
train_features,
train_labels,
epochs=100,
# Suppress logging.
verbose=0,
# Calculate validation results on 20% of the training data.
validation_split = 0.2)
"""
Explanation: Configure the model with Keras Model.compile and train with Model.fit for 100 epochs.
End of explanation
"""
plot_loss(history)
"""
Explanation: Using all the inputs in this regression model achieves much lower training and validation error than the horsepower_model, which had a single input.
End of explanation
"""
test_results['linear_model'] = linear_model.evaluate(
test_features, test_labels, verbose=0)
"""
Explanation: Collect the results on the test set for later.
End of explanation
"""
def build_and_compile_model(norm):
model = keras.Sequential([
norm,
layers.Dense(64, activation='relu'),
layers.Dense(64, activation='relu'),
layers.Dense(1)
])
model.compile(loss='mean_absolute_error',
optimizer=tf.keras.optimizers.Adam(0.001))
return model
"""
Explanation: DNN regression
The previous section implemented linear models for single and multiple inputs.
This section implements single-input and multiple-input DNN models. The code is basically the same, except the model is expanded to include some "hidden" non-linear layers. "Hidden" here means not directly connected to the inputs or outputs.
These models contain a few more layers than the linear model:
The normalization layer, as before (horsepower_normalizer for the single-input model, normalizer for the multiple-input model).
Two hidden, non-linear Dense layers using the relu nonlinearity.
A linear single-output layer.
Both use the same training procedure, so the compile method is included in the build_and_compile_model function below.
End of explanation
"""
dnn_horsepower_model = build_and_compile_model(horsepower_normalizer)
"""
Explanation: Regression using a DNN and a single input
Create a DNN model with only 'Horsepower' as input and horsepower_normalizer (defined earlier) as the normalization layer.
End of explanation
"""
dnn_horsepower_model.summary()
"""
Explanation: This model has quite a few more trainable parameters than the linear model.
End of explanation
"""
%%time
history = dnn_horsepower_model.fit(
train_features['Horsepower'],
train_labels,
validation_split=0.2,
verbose=0, epochs=100)
"""
Explanation: Train the model with Keras Model.fit.
End of explanation
"""
plot_loss(history)
"""
Explanation: This model does slightly better than the linear single-input horsepower_model.
End of explanation
"""
x = tf.linspace(0.0, 250, 251)
y = dnn_horsepower_model.predict(x)
plot_horsepower(x, y)
"""
Explanation: If you plot the predictions as a function of Horsepower, you can see how this model takes advantage of the nonlinearity provided by the hidden layers.
End of explanation
"""
test_results['dnn_horsepower_model'] = dnn_horsepower_model.evaluate(
test_features['Horsepower'], test_labels,
verbose=0)
"""
Explanation: Collect the results on the test set for later.
End of explanation
"""
dnn_model = build_and_compile_model(normalizer)
dnn_model.summary()
%%time
history = dnn_model.fit(
train_features,
train_labels,
validation_split=0.2,
verbose=0, epochs=100)
plot_loss(history)
"""
Explanation: Full model
If you repeat this process using all the inputs, it slightly improves performance on the validation dataset.
End of explanation
"""
test_results['dnn_model'] = dnn_model.evaluate(test_features, test_labels, verbose=0)
"""
Explanation: Collect the results on the test set for later.
End of explanation
"""
pd.DataFrame(test_results, index=['Mean absolute error [MPG]']).T
"""
Explanation: Performance
Since all the models have been trained, review their test-set performance.
End of explanation
"""
test_predictions = dnn_model.predict(test_features).flatten()
a = plt.axes(aspect='equal')
plt.scatter(test_labels, test_predictions)
plt.xlabel('True Values [MPG]')
plt.ylabel('Predictions [MPG]')
lims = [0, 50]
plt.xlim(lims)
plt.ylim(lims)
_ = plt.plot(lims, lims)
"""
Explanation: These results match the validation error seen during training.
Make predictions with the model
Use Keras Model.predict to make predictions with dnn_model on the test set and review the loss.
End of explanation
"""
error = test_predictions - test_labels
plt.hist(error, bins=25)
plt.xlabel('Prediction Error [MPG]')
_ = plt.ylabel('Count')
"""
Explanation: The model predicts reasonably well.
Next, take a look at the error distribution.
End of explanation
"""
dnn_model.save('dnn_model')
"""
Explanation: If you're happy with the model, save it for later use.
End of explanation
"""
reloaded = tf.keras.models.load_model('dnn_model')
test_results['reloaded'] = reloaded.evaluate(
test_features, test_labels, verbose=0)
pd.DataFrame(test_results, index=['Mean absolute error [MPG]']).T
"""
Explanation: Reloading the model gives identical output.
End of explanation
"""
|
mediagit2016/workcamp-maschinelles-lernen-grundlagen | wc-arbeiten-tf-20-aufgabe.ipynb | gpl-3.0 | # import the pandas library
# import matplotlib.pyplot as plt
# upload the file "sensordaten.csv" to your hub
# load the file "sensordaten.csv" into a DataFrame df
# read the files with header=None
# inspect the first rows of the DataFrame df
# produce a statistical description
"""
Explanation: <h1>ANN - First steps with TensorFlow - Binary Classification</h1>
End of explanation
"""
# import tensorflow as tf
# load the libraries
# import the keras.model Sequential()
# import the Keras layers Dense and Activation
# separate the features and the labels
x_input = df.iloc[:,:-1]
y_input = df.iloc[:,-1]
# print the feature values
print(x_input)
# print the labels
"""
Explanation: Exploring the data shows that we have 60 features, X0 - X59, <br> in 208 records.<br>
The classification is in column 60<br>
Good part: 0<br>
Bad part: 1<br>
End of explanation
"""
# build the multilayer network model
model = Sequential()
model.add(Dense(300, input_dim = 60, activation = 'relu'))
model.add(Dense(1, activation = 'sigmoid'))  # output layer for binary classification (fills in the exercise's ### placeholder)
# compile the model
model.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
# check the configuration
epoch_num=30
history = model.fit(x_input, y_input, epochs=epoch_num)
# evaluate the results
X_test = [[0.0039,0.0063,0.0152,0.0336,0.0310,0.0284,0.0396,0.0272,0.0323,0.0452,0.0492,0.0996,0.1424,0.1194,0.0628,0.0907,0.1177,0.1429,0.1223,0.1104,0.1847,0.3715,0.4382,0.5707,0.6654,0.7476,0.7654,0.8555,0.9720,0.9221,0.7502,0.7209,0.7757,0.6055,0.5021,0.4499,0.3947,0.4281,0.4427,0.3749,0.1972,0.0511,0.0793,0.1269,0.1533,0.0690,0.0402,0.0534,0.0228,0.0073,0.0062,0.0062,0.0120,0.0052,0.0056,0.0093,0.0042,0.0003,0.0053,0.0036]]
result=model.predict(X_test)
print(result)
X_test = [[0.0129,0.0141,0.0309,0.0375,0.0767,0.0787,0.0662,0.1108,0.1777,0.2245,0.2431,0.3134,0.3206,0.2917,0.2249,0.2347,0.2143,0.2939,0.4898,0.6127,0.7531,0.7718,0.7432,0.8673,0.9308,0.9836,1.0000,0.9595,0.8722,0.6862,0.4901,0.3280,0.3115,0.1969,0.1019,0.0317,0.0756,0.0907,0.1066,0.1380,0.0665,0.1475,0.2470,0.2788,0.2709,0.2283,0.1818,0.1185,0.0546,0.0219,0.0204,0.0124,0.0093,0.0072,0.0019,0.0027,0.0054,0.0017,0.0024,0.0029]]
result=model.predict(X_test)
print(result)
history_dict = history.history
history_dict.keys()
acc = history_dict['accuracy']
#val_acc = history_dict['val_accuracy']
loss = history_dict['loss']
#val_loss = history_dict['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'b', label='Training loss')
plt.plot(epochs, acc, 'r', label='Accuracy')
# b is for "solid blue line"
#plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training loss und accuracy')
plt.xlabel('Epochs')
plt.ylabel('Loss / Accuracy')
plt.legend()
plt.show()
"""
Explanation: The labels are already available as numbers.
If the labels were in a text format such as good/bad ("gut"/"schlecht"),
we would have to re-encode them.
For that, one uses the LabelEncoder().
<h2>First neural network - multi layer</h2>
End of explanation
"""
# initialize a new network (model2)
model2 = Sequential()
# add the layers
model2.add(Dense(units = 100, input_dim = 60, activation = 'relu')) #50
model2.add(Dense(units=50, activation = 'relu')) #20
model2.add(Dense(units=1, activation = 'sigmoid'))
# compile the new model
model2.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# print the model structure
# train the model
epoch_num = 50
history = model2.fit(x_input,y_input, epochs=epoch_num)
# evaluate the results
history_dict = history.history  # refresh the dict for model2 (previously it still held the first model's history)
acc = history_dict['accuracy']
#val_acc = history_dict['val_accuracy']
loss = history_dict['loss']
#val_loss = history_dict['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'b', label='Training loss')
plt.plot(epochs, acc, 'r', label='Accuracy')
# b is for "solid blue line"
#plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training loss und accuracy')
plt.xlabel('Epochs')
plt.ylabel('Loss / Accuracy')
plt.legend()
plt.show()
# Test (previous result: 0.999)
X_test = [[0.0129,0.0141,0.0309,0.0375,0.0767,0.0787,0.0662,0.1108,0.1777,0.2245,0.2431,0.3134,0.3206,0.2917,0.2249,0.2347,0.2143,0.2939,0.4898,0.6127,0.7531,0.7718,0.7432,0.8673,0.9308,0.9836,1.0000,0.9595,0.8722,0.6862,0.4901,0.3280,0.3115,0.1969,0.1019,0.0317,0.0756,0.0907,0.1066,0.1380,0.0665,0.1475,0.2470,0.2788,0.2709,0.2283,0.1818,0.1185,0.0546,0.0219,0.0204,0.0124,0.0093,0.0072,0.0019,0.0027,0.0054,0.0017,0.0024,0.0029]]
result=model2.predict(X_test)
print(result)
"""
Explanation: <h2>Initializing a second neural network - multi layer</h2>
End of explanation
"""
|
deepmind/deepmind-research | rl_unplugged/rwrl_d4pg.ipynb | apache-2.0 | !pip install dm-acme
!pip install dm-acme[reverb]
!pip install dm-acme[tf]
!pip install dm-sonnet
"""
Explanation: Copyright 2020 DeepMind Technologies Limited.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use
this file except in compliance with the License. You may obtain a copy of the
License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed
under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
RL Unplugged: Offline D4PG - RWRL
Guide to training an Acme D4PG agent on RWRL data.
<a href="https://colab.research.google.com/github/deepmind/deepmind-research/blob/master/rl_unplugged/rwrl_d4pg.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Installation
End of explanation
"""
#@title Edit and run
mjkey = """
REPLACE THIS LINE WITH YOUR MUJOCO LICENSE KEY
""".strip()
mujoco_dir = "$HOME/.mujoco"
# Install OpenGL deps
!apt-get update && apt-get install -y --no-install-recommends \
libgl1-mesa-glx libosmesa6 libglew2.0
# Fetch MuJoCo binaries from Roboti
!wget -q https://www.roboti.us/download/mujoco200_linux.zip -O mujoco.zip
!unzip -o -q mujoco.zip -d "$mujoco_dir"
# Copy over MuJoCo license
!echo "$mjkey" > "$mujoco_dir/mjkey.txt"
# Configure dm_control to use the OSMesa rendering backend
%env MUJOCO_GL=osmesa
# Install dm_control
!pip install dm_control
"""
Explanation: MuJoCo
More detailed instructions in this tutorial.
Institutional MuJoCo license.
End of explanation
"""
#@title Add your MuJoCo License and run
mjkey = """
""".strip()
mujoco_dir = "$HOME/.mujoco"
# Install OpenGL dependencies
!apt-get update && apt-get install -y --no-install-recommends \
libgl1-mesa-glx libosmesa6 libglew2.0
# Get MuJoCo binaries
!wget -q https://www.roboti.us/download/mujoco200_linux.zip -O mujoco.zip
!unzip -o -q mujoco.zip -d "$mujoco_dir"
# Copy over MuJoCo license
!echo "$mjkey" > "$mujoco_dir/mjkey.txt"
# Configure dm_control to use the OSMesa rendering backend
%env MUJOCO_GL=osmesa
# Install dm_control, including extra dependencies needed for the locomotion
# mazes.
!pip install dm_control[locomotion_mazes]
"""
Explanation: Machine-locked MuJoCo license.
End of explanation
"""
!git clone https://github.com/google-research/realworldrl_suite.git
!pip install realworldrl_suite/
"""
Explanation: RWRL
End of explanation
"""
!git clone https://github.com/deepmind/deepmind-research.git
%cd deepmind-research
"""
Explanation: RL Unplugged
End of explanation
"""
import collections
import copy
from typing import Mapping, Sequence
import acme
from acme import specs
from acme.agents.tf import actors
from acme.agents.tf import d4pg
from acme.tf import networks
from acme.tf import utils as tf2_utils
from acme.utils import loggers
from acme.wrappers import single_precision
import numpy as np
import realworldrl_suite.environments as rwrl_envs
from reverb import replay_sample
import six
from rl_unplugged import rwrl
import sonnet as snt
import tensorflow as tf
"""
Explanation: Imports
End of explanation
"""
domain_name = 'cartpole' #@param
task_name = 'swingup' #@param
difficulty = 'easy' #@param
combined_challenge = 'easy' #@param
combined_challenge_str = str(combined_challenge).lower()
tmp_path = '/tmp/rwrl'
gs_path = f'gs://rl_unplugged/rwrl'
data_path = (f'combined_challenge_{combined_challenge_str}/{domain_name}/'
f'{task_name}/offline_rl_challenge_{difficulty}')
!mkdir -p {tmp_path}/{data_path}
!gsutil cp -r {gs_path}/{data_path}/* {tmp_path}/{data_path}
num_shards_str, = !ls {tmp_path}/{data_path}/* | wc -l
num_shards = int(num_shards_str)
"""
Explanation: Data
End of explanation
"""
#@title Auxiliary functions
def flatten_observation(observation):
"""Flattens multiple observation arrays into a single tensor.
Args:
observation: A mutable mapping from observation names to tensors.
Returns:
A flattened and concatenated observation array.
Raises:
    ValueError: If `observation` is not a `collections.abc.MutableMapping`.
"""
  if not isinstance(observation, collections.abc.MutableMapping):
raise ValueError('Can only flatten dict-like observations.')
if isinstance(observation, collections.OrderedDict):
keys = six.iterkeys(observation)
else:
# Keep a consistent ordering for other mappings.
keys = sorted(six.iterkeys(observation))
observation_arrays = [tf.reshape(observation[key], [-1]) for key in keys]
return tf.concat(observation_arrays, 0)
def preprocess_fn(sample):
o_tm1, a_tm1, r_t, d_t, o_t = sample.data[:5]
o_tm1 = flatten_observation(o_tm1)
o_t = flatten_observation(o_t)
return replay_sample.ReplaySample(
info=sample.info, data=(o_tm1, a_tm1, r_t, d_t, o_t))
batch_size = 10 #@param
environment = rwrl_envs.load(
domain_name=domain_name,
task_name=f'realworld_{task_name}',
environment_kwargs=dict(log_safety_vars=False, flat_observation=True),
combined_challenge=combined_challenge)
environment = single_precision.SinglePrecisionWrapper(environment)
environment_spec = specs.make_environment_spec(environment)
act_spec = environment_spec.actions
obs_spec = environment_spec.observations
dataset = rwrl.dataset(
tmp_path,
combined_challenge=combined_challenge_str,
domain=domain_name,
task=task_name,
difficulty=difficulty,
num_shards=num_shards,
shuffle_buffer_size=10)
dataset = dataset.map(preprocess_fn).batch(batch_size)
"""
Explanation: Dataset and environment
End of explanation
"""
#@title Auxiliary functions
def make_networks(
action_spec: specs.BoundedArray,
hidden_size: int = 1024,
num_blocks: int = 4,
num_mixtures: int = 5,
vmin: float = -150.,
vmax: float = 150.,
num_atoms: int = 51,
):
"""Creates networks used by the agent."""
num_dimensions = np.prod(action_spec.shape, dtype=int)
policy_network = snt.Sequential([
networks.LayerNormAndResidualMLP(
hidden_size=hidden_size, num_blocks=num_blocks),
# Converts the policy output into the same shape as the action spec.
snt.Linear(num_dimensions),
# Note that TanhToSpec applies tanh to the input.
networks.TanhToSpec(action_spec)
])
# The multiplexer concatenates the (maybe transformed) observations/actions.
critic_network = snt.Sequential([
networks.CriticMultiplexer(
critic_network=networks.LayerNormAndResidualMLP(
hidden_size=hidden_size, num_blocks=num_blocks),
observation_network=tf2_utils.batch_concat),
networks.DiscreteValuedHead(vmin, vmax, num_atoms)
])
return {
'policy': policy_network,
'critic': critic_network,
}
# Create the networks to optimize.
online_networks = make_networks(act_spec)
target_networks = copy.deepcopy(online_networks)
# Create variables.
tf2_utils.create_variables(online_networks['policy'], [obs_spec])
tf2_utils.create_variables(online_networks['critic'], [obs_spec, act_spec])
tf2_utils.create_variables(target_networks['policy'], [obs_spec])
tf2_utils.create_variables(target_networks['critic'], [obs_spec, act_spec])
# The learner updates the parameters (and initializes them).
learner = d4pg.D4PGLearner(
policy_network=online_networks['policy'],
critic_network=online_networks['critic'],
target_policy_network=target_networks['policy'],
target_critic_network=target_networks['critic'],
dataset=dataset,
discount=0.99,
target_update_period=100)
"""
Explanation: D4PG learner
End of explanation
"""
for _ in range(100):
learner.step()
"""
Explanation: Training loop
End of explanation
"""
# Create a logger.
logger = loggers.TerminalLogger(label='evaluation', time_delta=1.)
# Create an environment loop.
loop = acme.EnvironmentLoop(
environment=environment,
actor=actors.DeprecatedFeedForwardActor(online_networks['policy']),
logger=logger)
loop.run(5)
"""
Explanation: Evaluation
End of explanation
"""
|
willvousden/emcee | docs/_static/notebooks/parallel.ipynb | mit | import emcee
print(emcee.__version__)
"""
Explanation: Parallelization
With emcee, it's easy to make use of multiple CPUs to speed up slow sampling.
There will always be some computational overhead introduced by parallelization so it will only be beneficial in the case where the model is expensive, but this is often true for real research problems.
All parallelization techniques are accessed using the pool keyword argument in the :class:EnsembleSampler class but, depending on your system and your model, there are a few pool options that you can choose from.
In general, a pool is any Python object with a map method that can be used to apply a function to a list of numpy arrays.
Below, we will discuss a few options.
This tutorial was executed with the following version of emcee:
End of explanation
"""
import time
import numpy as np
def log_prob(theta):
t = time.time() + np.random.uniform(0.005, 0.008)
while True:
if time.time() >= t:
break
return -0.5*np.sum(theta**2)
"""
Explanation: In all of the following examples, we'll test the code with the following convoluted model:
End of explanation
"""
np.random.seed(42)
initial = np.random.randn(32, 5)
nwalkers, ndim = initial.shape
nsteps = 100
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
start = time.time()
sampler.run_mcmc(initial, nsteps, progress=True)
end = time.time()
serial_time = end - start
print("Serial took {0:.1f} seconds".format(serial_time))
"""
Explanation: This probability function will randomly sleep for a fraction of a second every time it is called.
This is meant to emulate a more realistic situation where the model is computationally expensive to compute.
To start, let's sample the usual (serial) way:
End of explanation
"""
from multiprocessing import Pool
with Pool() as pool:
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, pool=pool)
start = time.time()
sampler.run_mcmc(initial, nsteps, progress=True)
end = time.time()
multi_time = end - start
print("Multiprocessing took {0:.1f} seconds".format(multi_time))
print("{0:.1f} times faster than serial".format(serial_time / multi_time))
"""
Explanation: Multiprocessing
The simplest method of parallelizing emcee is to use the multiprocessing module from the standard library.
To parallelize the above sampling, you could update the code as follows:
End of explanation
"""
from multiprocessing import cpu_count
ncpu = cpu_count()
print("{0} CPUs".format(ncpu))
"""
Explanation: I have 4 cores on the machine where this is being tested:
End of explanation
"""
with open("script.py", "w") as f:
f.write("""
import sys
import time
import emcee
import numpy as np
from schwimmbad import MPIPool
def log_prob(theta):
t = time.time() + np.random.uniform(0.005, 0.008)
while True:
if time.time() >= t:
break
return -0.5*np.sum(theta**2)
with MPIPool() as pool:
if not pool.is_master():
pool.wait()
sys.exit(0)
np.random.seed(42)
initial = np.random.randn(32, 5)
nwalkers, ndim = initial.shape
nsteps = 100
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, pool=pool)
start = time.time()
sampler.run_mcmc(initial, nsteps)
end = time.time()
print(end - start)
""")
mpi_time = !mpiexec -n {ncpu} python script.py
mpi_time = float(mpi_time[0])
print("MPI took {0:.1f} seconds".format(mpi_time))
print("{0:.1f} times faster than serial".format(serial_time / mpi_time))
"""
Explanation: We don't quite get the factor of 4 runtime decrease that you might expect because there is some overhead in the parallelization, but we're getting pretty close with this example and this will get even closer for more expensive models.
MPI
Multiprocessing can only be used for distributing calculations across processors on one machine.
If you want to take advantage of a bigger cluster, you'll need to use MPI.
In that case, you need to execute the code using the mpiexec executable, so this demo is slightly more convoluted.
For this example, we'll write the code to a file called script.py and then execute it using MPI, but when you really use the MPI pool, you'll probably just want to edit the script directly.
To run this example, you'll first need to install the schwimmbad library because emcee no longer includes its own MPIPool.
End of explanation
"""
def log_prob_data(theta, data):
a = data[0] # Use the data somehow...
t = time.time() + np.random.uniform(0.005, 0.008)
while True:
if time.time() >= t:
break
return -0.5*np.sum(theta**2)
data = np.random.randn(5000, 200)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob_data, args=(data,))
start = time.time()
sampler.run_mcmc(initial, nsteps, progress=True)
end = time.time()
serial_data_time = end - start
print("Serial took {0:.1f} seconds".format(serial_data_time))
"""
Explanation: There is often more overhead introduced by MPI than multiprocessing so we get less of a gain this time.
That being said, MPI is much more flexible and it can be used to scale to huge systems.
Pickling, data transfer & arguments
All parallel Python implementations work by spinning up multiple Python processes with identical environments and then passing information between the processes using pickle.
This means that the probability function must be picklable.
Some users might hit issues when they use args to pass data to their model.
These args must be pickled and passed every time the model is called.
This can be a problem if you have a large dataset, as you can see here:
End of explanation
"""
with Pool() as pool:
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob_data, pool=pool, args=(data,))
start = time.time()
sampler.run_mcmc(initial, nsteps, progress=True)
end = time.time()
multi_data_time = end - start
print("Multiprocessing took {0:.1f} seconds".format(multi_data_time))
print("{0:.1f} times faster(?) than serial".format(serial_data_time / multi_data_time))
"""
Explanation: We basically get no change in performance when we include the data argument here.
Now let's try including this naively using multiprocessing:
End of explanation
"""
def log_prob_data_global(theta):
a = data[0] # Use the data somehow...
t = time.time() + np.random.uniform(0.005, 0.008)
while True:
if time.time() >= t:
break
return -0.5*np.sum(theta**2)
with Pool() as pool:
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob_data_global, pool=pool)
start = time.time()
sampler.run_mcmc(initial, nsteps, progress=True)
end = time.time()
multi_data_global_time = end - start
print("Multiprocessing took {0:.1f} seconds".format(multi_data_global_time))
print("{0:.1f} times faster than serial".format(serial_data_time / multi_data_global_time))
"""
Explanation: Brutal.
We can do better than that though.
It's a bit ugly, but if we just make data a global variable and use that variable within the model calculation, then we take no hit at all.
End of explanation
"""
|
gabicfa/RedesSociais | encontro03/.ipynb_checkpoints/show-graph-checkpoint.ipynb | gpl-3.0 | import sys
sys.path.append('..')
import socnet as sn
"""
Explanation: Session 03: Real Graphs
Importing the library:
End of explanation
"""
sn.node_size = 3
sn.node_color = (0, 0, 0)
sn.edge_width = 1
sn.edge_color = (192, 192, 192)
sn.node_label_position = 'top center'
g = sn.load_graph('tarefa1.gml')
sn.show_graph(g, nlab=True)
"""
Explanation: Loading and visualizing the graph:
End of explanation
"""
sn.node_size = 3
sn.node_color = (0, 0, 0)
sn.edge_width = 1
sn.edge_color = (192, 192, 192)
sn.node_label_position = 'top center'
g = sn.load_graph('tarefa2.gml')
sn.show_graph(g, nlab=True)
sn.node_size = 1
sn.node_color = (0, 0, 0)
sn.edge_width = 1
sn.edge_color = (192, 192, 192)
sn.node_label_position = 'top center'
g = sn.load_graph('tarefa5.gml')
sn.show_graph(g, nlab=True)
"""
Explanation: Depending on its size, the graph may take a while to appear.
End of explanation
"""
|
GoogleCloudPlatform/tf-estimator-tutorials | 05_Autoencoding/03.0 - Dimensionality Reduction - Autoencoding + Normalizer + XEntropy Loss.ipynb | apache-2.0 | MODEL_NAME = 'auto-encoder-01'
TRAIN_DATA_FILES_PATTERN = 'data/data-*.csv'
RESUME_TRAINING = False
MULTI_THREADING = True
"""
Explanation: TF Custom Estimator to Build a NN Autoencoder for Feature Extraction
End of explanation
"""
FEATURE_COUNT = 64
HEADER = ['key']
HEADER_DEFAULTS = [[0]]
UNUSED_FEATURE_NAMES = ['key']
CLASS_FEATURE_NAME = 'CLASS'
FEATURE_NAMES = []
for i in range(FEATURE_COUNT):
HEADER += ['x_{}'.format(str(i+1))]
FEATURE_NAMES += ['x_{}'.format(str(i+1))]
HEADER_DEFAULTS += [[0.0]]
HEADER += [CLASS_FEATURE_NAME]
HEADER_DEFAULTS += [['NA']]
print("Header: {}".format(HEADER))
print("Features: {}".format(FEATURE_NAMES))
print("Class Feature: {}".format(CLASS_FEATURE_NAME))
print("Unused Features: {}".format(UNUSED_FEATURE_NAMES))
"""
Explanation: 1. Define Dataset Metadata
End of explanation
"""
def parse_csv_row(csv_row):
columns = tf.decode_csv(csv_row, record_defaults=HEADER_DEFAULTS)
features = dict(zip(HEADER, columns))
for column in UNUSED_FEATURE_NAMES:
features.pop(column)
target = features.pop(CLASS_FEATURE_NAME)
return features, target
def csv_input_fn(files_name_pattern, mode=tf.estimator.ModeKeys.EVAL,
skip_header_lines=0,
num_epochs=None,
batch_size=200):
shuffle = True if mode == tf.estimator.ModeKeys.TRAIN else False
print("")
print("* data input_fn:")
print("================")
print("Input file(s): {}".format(files_name_pattern))
print("Batch size: {}".format(batch_size))
print("Epoch Count: {}".format(num_epochs))
print("Mode: {}".format(mode))
print("Shuffle: {}".format(shuffle))
print("================")
print("")
file_names = tf.matching_files(files_name_pattern)
dataset = data.TextLineDataset(filenames=file_names)
dataset = dataset.skip(skip_header_lines)
if shuffle:
dataset = dataset.shuffle(buffer_size=2 * batch_size + 1)
num_threads = multiprocessing.cpu_count() if MULTI_THREADING else 1
dataset = dataset.batch(batch_size)
dataset = dataset.map(lambda csv_row: parse_csv_row(csv_row), num_parallel_calls=num_threads)
dataset = dataset.repeat(num_epochs)
iterator = dataset.make_one_shot_iterator()
features, target = iterator.get_next()
return features, target
features, target = csv_input_fn(files_name_pattern="")
print("Feature read from CSV: {}".format(list(features.keys())))
print("Target read from CSV: {}".format(target))
"""
Explanation: 2. Define CSV Data Input Function
End of explanation
"""
df_params = pd.read_csv("data/params.csv", header=0, index_col=0)
len(df_params)
df_params['feature_name'] = FEATURE_NAMES
df_params.head()
"""
Explanation: 3. Define Feature Columns
a. Load normalizarion params
End of explanation
"""
def standard_scaler(x, mean, stdv):
return (x-mean)/stdv
def maxmin_scaler(x, max_value, min_value):
return (x-min_value)/(max_value-min_value)
def get_feature_columns():
feature_columns = {}
# feature_columns = {feature_name: tf.feature_column.numeric_column(feature_name)
# for feature_name in FEATURE_NAMES}
for feature_name in FEATURE_NAMES:
feature_max = df_params[df_params.feature_name == feature_name]['max'].values[0]
feature_min = df_params[df_params.feature_name == feature_name]['min'].values[0]
normalizer_fn = lambda x: maxmin_scaler(x, feature_max, feature_min)
feature_columns[feature_name] = tf.feature_column.numeric_column(feature_name,
normalizer_fn=normalizer_fn
)
return feature_columns
print(get_feature_columns())
"""
Explanation: b. Create normalized feature columns
End of explanation
"""
def autoencoder_model_fn(features, labels, mode, params):
feature_columns = list(get_feature_columns().values())
input_layer_size = len(feature_columns)
encoder_hidden_units = params.encoder_hidden_units
# decoder units are the reverse of the encoder units, without the middle layer (redundant)
decoder_hidden_units = encoder_hidden_units.copy()
decoder_hidden_units.reverse()
decoder_hidden_units.pop(0)
output_layer_size = len(FEATURE_NAMES)
he_initialiser = tf.contrib.layers.variance_scaling_initializer()
l2_regulariser = tf.contrib.layers.l2_regularizer(scale=params.l2_reg)
print("[{}]->{}-{}->[{}]".format(len(feature_columns)
,encoder_hidden_units
,decoder_hidden_units,
output_layer_size))
is_training = (mode == tf.estimator.ModeKeys.TRAIN)
# input layer
input_layer = tf.feature_column.input_layer(features=features,
feature_columns=feature_columns)
# Adding Gaussian Noise to input layer
noisy_input_layer = input_layer + (params.noise_level * tf.random_normal(tf.shape(input_layer)))
# Dropout layer
dropout_layer = tf.layers.dropout(inputs=noisy_input_layer,
rate=params.dropout_rate,
training=is_training)
# # Dropout layer without Gaussian Nosing
# dropout_layer = tf.layers.dropout(inputs=input_layer,
# rate=params.dropout_rate,
# training=is_training)
# Encoder layers stack
encoding_hidden_layers = tf.contrib.layers.stack(inputs= dropout_layer,
layer= tf.contrib.layers.fully_connected,
stack_args=encoder_hidden_units,
#weights_initializer = he_init,
weights_regularizer =l2_regulariser,
activation_fn = tf.nn.relu
)
# Decoder layers stack
decoding_hidden_layers = tf.contrib.layers.stack(inputs=encoding_hidden_layers,
layer=tf.contrib.layers.fully_connected,
stack_args=decoder_hidden_units,
#weights_initializer = he_init,
weights_regularizer =l2_regulariser,
activation_fn = tf.nn.relu
)
# Output (reconstructed) layer
output_layer = tf.layers.dense(inputs=decoding_hidden_layers,
units=output_layer_size, activation=None)
# Encoding output (i.e., extracted features) reshaped
encoding_output = tf.squeeze(encoding_hidden_layers)
# Reconstruction output reshaped (for serving function)
reconstruction_output = tf.squeeze(tf.nn.sigmoid(output_layer))
# Provide an estimator spec for `ModeKeys.PREDICT`.
if mode == tf.estimator.ModeKeys.PREDICT:
# Convert predicted_indices back into strings
predictions = {
'encoding': encoding_output,
'reconstruction': reconstruction_output
}
export_outputs = {
'predict': tf.estimator.export.PredictOutput(predictions)
}
# Provide an estimator spec for `ModeKeys.PREDICT` modes.
return tf.estimator.EstimatorSpec(mode,
predictions=predictions,
export_outputs=export_outputs)
# Define loss based on reconstruction and regularization
# reconstruction_loss = tf.losses.mean_squared_error(tf.squeeze(input_layer), reconstruction_output)
# loss = reconstruction_loss + tf.losses.get_regularization_loss()
reconstruction_loss = tf.losses.sigmoid_cross_entropy(multi_class_labels=tf.squeeze(input_layer), logits=tf.squeeze(output_layer))
loss = reconstruction_loss + tf.losses.get_regularization_loss()
# Create Optimiser
optimizer = tf.train.AdamOptimizer(params.learning_rate)
# Create training operation
train_op = optimizer.minimize(
loss=loss, global_step=tf.train.get_global_step())
# Calculate root mean squared error as additional eval metric
eval_metric_ops = {
"rmse": tf.metrics.root_mean_squared_error(
tf.squeeze(input_layer), reconstruction_output)
}
# Provide an estimator spec for `ModeKeys.EVAL` and `ModeKeys.TRAIN` modes.
estimator_spec = tf.estimator.EstimatorSpec(mode=mode,
loss=loss,
train_op=train_op,
eval_metric_ops=eval_metric_ops)
return estimator_spec
def create_estimator(run_config, hparams):
estimator = tf.estimator.Estimator(model_fn=autoencoder_model_fn,
params=hparams,
config=run_config)
print("")
print("Estimator Type: {}".format(type(estimator)))
print("")
return estimator
"""
Explanation: 4. Define Autoencoder Model Function
End of explanation
"""
TRAIN_SIZE = 2000
NUM_EPOCHS = 1000
BATCH_SIZE = 100
NUM_EVAL = 10
TOTAL_STEPS = int((TRAIN_SIZE/BATCH_SIZE)*NUM_EPOCHS)
CHECKPOINT_STEPS = int((TRAIN_SIZE/BATCH_SIZE) * (NUM_EPOCHS/NUM_EVAL))
hparams = tf.contrib.training.HParams(
num_epochs = NUM_EPOCHS,
batch_size = BATCH_SIZE,
encoder_hidden_units=[30,3],
learning_rate = 0.01,
l2_reg = 0.0001,
noise_level = 0.0,
max_steps = TOTAL_STEPS,
dropout_rate = 0.05
)
model_dir = 'trained_models/{}'.format(MODEL_NAME)
run_config = tf.contrib.learn.RunConfig(
save_checkpoints_steps=CHECKPOINT_STEPS,
tf_random_seed=19830610,
model_dir=model_dir
)
print(hparams)
print("Model Directory:", run_config.model_dir)
print("")
print("Dataset Size:", TRAIN_SIZE)
print("Batch Size:", BATCH_SIZE)
print("Steps per Epoch:",TRAIN_SIZE/BATCH_SIZE)
print("Total Steps:", TOTAL_STEPS)
print("Required Evaluation Steps:", NUM_EVAL)
print("That is 1 evaluation step after each", NUM_EPOCHS/NUM_EVAL, "epochs")
print("Save Checkpoint After",CHECKPOINT_STEPS,"steps")
"""
Explanation: 5. Run Experiment using Estimator Train_And_Evaluate
a. Set the parameters
End of explanation
"""
train_spec = tf.estimator.TrainSpec(
input_fn = lambda: csv_input_fn(
TRAIN_DATA_FILES_PATTERN,
mode = tf.contrib.learn.ModeKeys.TRAIN,
num_epochs=hparams.num_epochs,
batch_size=hparams.batch_size
),
max_steps=hparams.max_steps,
hooks=None
)
eval_spec = tf.estimator.EvalSpec(
input_fn = lambda: csv_input_fn(
TRAIN_DATA_FILES_PATTERN,
mode=tf.contrib.learn.ModeKeys.EVAL,
num_epochs=1,
batch_size=hparams.batch_size
),
# exporters=[tf.estimator.LatestExporter(
# name="encode", # the name of the folder in which the model will be exported to under export
# serving_input_receiver_fn=csv_serving_input_fn,
# exports_to_keep=1,
# as_text=True)],
steps=None,
hooks=None
)
"""
Explanation: b. Define TrainSpec and EvalSpec
End of explanation
"""
if not RESUME_TRAINING:
print("Removing previous artifacts...")
shutil.rmtree(model_dir, ignore_errors=True)
else:
print("Resuming training...")
tf.logging.set_verbosity(tf.logging.INFO)
time_start = datetime.utcnow()
print("Experiment started at {}".format(time_start.strftime("%H:%M:%S")))
print(".......................................")
estimator = create_estimator(run_config, hparams)
tf.estimator.train_and_evaluate(
estimator=estimator,
train_spec=train_spec,
eval_spec=eval_spec
)
time_end = datetime.utcnow()
print(".......................................")
print("Experiment finished at {}".format(time_end.strftime("%H:%M:%S")))
print("")
time_elapsed = time_end - time_start
print("Experiment elapsed time: {} seconds".format(time_elapsed.total_seconds()))
"""
Explanation: d. Run Experiment via train_and_evaluate
End of explanation
"""
import itertools
DATA_SIZE = 2000
input_fn = lambda: csv_input_fn(
TRAIN_DATA_FILES_PATTERN,
mode=tf.contrib.learn.ModeKeys.INFER,
num_epochs=1,
batch_size=500
)
estimator = create_estimator(run_config, hparams)
predictions = estimator.predict(input_fn=input_fn)
predictions = itertools.islice(predictions, DATA_SIZE)
predictions = list(map(lambda item: list(item["encoding"]), predictions))
print(predictions[:5])
"""
Explanation: 6. Use the trained model to encode data (prediction)
End of explanation
"""
y = pd.read_csv("data/data-01.csv", header=None, index_col=0)[65]
data_reduced = pd.DataFrame(predictions, columns=['c1','c2','c3'])
data_reduced['class'] = y
data_reduced.head()
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(15,10))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(xs=data_reduced.c2/1000000, ys=data_reduced.c3/1000000, zs=data_reduced.c1/1000000, c=data_reduced['class'], marker='o')
plt.show()
"""
Explanation: Visualise Encoded Data
End of explanation
"""
|
LSSTC-DSFP/LSSTC-DSFP-Sessions | Sessions/Session09/Day4/Matched_filter_tutorial.ipynb | mit | # ! pip install lalsuite pycbc
"""
Explanation: Welcome to the matched filtering tutorial!
Installation
Make sure you have PyCBC and some basic lalsuite tools installed.
Only execute the below cell if you have not already installed pycbc
Note –– if you were not able to install pycbc, or you got errors preventing you from importing pycbc, please upload this notebook to Google Colaboratory, where you can easily pip install lalsuite pycbc and run the entire notebook.
End of explanation
"""
from pycbc.waveform import get_td_waveform
import matplotlib.pyplot as plt
"""
Explanation: <span style="color:gray">Jess notes: this notebook was made with a PyCBC 1.8.0 kernel. </span>
Learning goals
With this tutorial, you learn how to:
Generate source waveforms detectable by LIGO, Virgo, KAGRA
Use PyCBC to run a matched filter search on gravitational wave detector data
Estimate the significance of a trigger given a background distribution
Challenge: Code up a trigger coincidence algorithm
This tutorial borrows heavily from tutorials made for the LIGO-Virgo Open Data Workshop by Alex Nitz. You can find PyCBC documentation and additional examples here.
Let's get started!
Generate a gravitational wave signal waveform
We'll use a popular waveform approximant (SEOBNRv4) to generate waveforms that would be detectable by LIGO, Virgo, or KAGRA.
First we import the packages we'll need.
End of explanation
"""
for m in [5, 10, 30, 100]:
hp, hc = get_td_waveform(approximant="SEOBNRv4_opt",
mass1=m,
mass2=m,
delta_t=1.0/4096,
f_lower=30)
plt.plot(hp.sample_times, hp, label='$M_{\odot 1,2}=%s$' % m)
plt.legend(loc='upper left')
plt.ylabel('GW strain (plus polarization)')
plt.grid()
plt.xlabel('Time (s)')
plt.show()
"""
Explanation: Let's see what these waveforms look like for different component masses. We'll assume the two compact objects have equal masses, and we'll set a lower frequency bound of 30 Hz (determined by the sensitivity of our detectors).
We can also set a time sample rate with get_td_waveform. Let's try a rate of 4096 Hz.
Let's make a plot of the plus polarization (hp) to get a feel for what the waveforms look like.
Hint –– you may want to zoom in on the plot to see the waveforms in detail.
End of explanation
"""
for m in [5, 10, 30, 100]:
hp, hc = get_td_waveform(approximant="SEOBNRv4_opt",
mass1=m,
mass2=m,
delta_t=1.0/4096,
f_lower= # complete
plt.plot(hp.sample_times, hp, label='$M_{\odot 1,2}=%s$' % m)
plt.legend(loc='upper left')
plt.ylabel('GW strain (plus polarization)')
plt.grid()
plt.xlabel('Time (s)')
plt.show()
"""
Explanation: Now let's see what happens if we decrease the lower frequency bound from 30 Hz to 15 Hz.
End of explanation
"""
# complete
"""
Explanation: Exercise 1
What happens to the waveform when the total mass (let's say 20 M<sub>sol</sub>) stays the same, but the mass ratio between the component masses changes?
Compare the waveforms for a m<sub>1</sub> = m<sub>2</sub> = 10 M<sub>sol</sub> system, a m<sub>1</sub> = 5 M<sub>sol</sub>, m<sub>2</sub> = 15 M<sub>sol</sub>, and a m<sub>1</sub> = 2 M<sub>sol</sub>, m<sub>2</sub> = 18 M<sub>sol</sub> system. What do you notice?
End of explanation
"""
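A hint for the comparison above, assuming nothing beyond the standard definition: at fixed total mass, the chirp mass (which sets the leading-order phase evolution) shrinks as the component masses become more unequal. This is a hedged sketch, not the tutorial's official solution:

```python
# Chirp mass M_c = (m1*m2)**(3/5) / (m1+m2)**(1/5), in solar masses.
def chirp_mass(m1, m2):
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

# The three systems from Exercise 1 all have total mass 20 Msun,
# but their chirp masses differ:
for m1, m2 in [(10, 10), (5, 15), (2, 18)]:
    print(m1, m2, round(chirp_mass(m1, m2), 2))
```

A lower chirp mass at fixed total mass means a slower frequency sweep, which is one way to interpret the waveform differences you should see.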
# complete
"""
Explanation: Exercise 2
How much longer (in signal duration) would LIGO and Virgo (and KAGRA) be able to detect a 1.4-1.4 M<sub>sol</sub> binary neutron star system if our detectors were sensitive down to 10 Hz instead of 30 Hz? Note you'll need to use a different waveform approximant here. Try TaylorF2.
<span style="color:gray">Jess notes: this would be a major benefit of next-generation ("3G") ground-based gravitational wave detectors.</span>
End of explanation
"""
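As a rough cross-check for this exercise (an order-of-magnitude sketch, not a substitute for the TaylorF2 answer): to leading Newtonian order, the time a binary spends above a low-frequency cutoff scales as f_lower**(-8/3), so dropping the cutoff from 30 Hz to 10 Hz lengthens the observable signal by roughly a factor of 19:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
MSUN = 1.989e30    # solar mass, kg

def newtonian_chirp_time(m1_sol, m2_sol, f_low):
    """Leading-order (Newtonian) time to coalescence from frequency f_low."""
    m1, m2 = m1_sol * MSUN, m2_sol * MSUN
    mchirp = (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2
    return (5.0 / 256.0) * (math.pi * f_low) ** (-8.0 / 3.0) \
        * (G * mchirp / C ** 3) ** (-5.0 / 3.0)

t30 = newtonian_chirp_time(1.4, 1.4, 30.0)
t10 = newtonian_chirp_time(1.4, 1.4, 10.0)
print(round(t30, 1), round(t10, 1), round(t10 / t30, 1))
```

Note that the ratio t10/t30 equals (30/10)**(8/3) regardless of the masses; compare these Newtonian estimates with the durations of the TaylorF2 waveforms you generate.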
for d in [100, 500, 1000]:
hp, hc = get_td_waveform(approximant="SEOBNRv4_opt",
mass1=10,
mass2=10,
delta_t=1.0/4096,
f_lower=30,
distance=d)
plt.plot(hp.sample_times, hp, label='Distance=%s Mpc' % d)
plt.grid()
plt.xlabel('Time (s)')
plt.ylabel('GW strain (plus polarization)')
plt.legend(loc='upper left')
plt.show()
"""
Explanation: Distance vs. signal amplitude
Let's see what happens when we scale the distance (in units of Megaparsecs) for a system with a total mass of 20 M<sub>sol</sub>.
<span style="color:gray">Note: redshift effects are not included here.</span>
End of explanation
"""
from pycbc.catalog import Merger
from pycbc.filter import resample_to_delta_t, highpass
merger = Merger("GW150914")
# Get the data from the Hanford detector
strain = merger.strain('H1')
"""
Explanation: Run a matched filter search on gravitational wave detector data
PyCBC also maintains a catalog of open data as PyCBC time series objects, easy to manipulate with PyCBC tools. Let's try using that and importing the data around the first detection, GW150914.
End of explanation
"""
# Remove the low frequency content and downsample the data to 2048Hz
strain = resample_to_delta_t(highpass(strain, 15.0), 1.0/2048)
plt.plot(strain.sample_times, strain)
plt.xlabel('Time (s)')
"""
Explanation: Data pre-conditioning
Once we've imported the open data from this alternate source, the first thing we'll need to do is pre-condition the data. This serves a few purposes:
* 1) reduces the dynamic range of the data
* 2) supresses high amplitudes at low frequencies, which can introduce numerical artifacts
* 3) if we don't need high frequency information, downsampling allows us to compute our matched filter result faster
Let's try highpassing above 15 Hz and downsampling to 2048 Hz, and we'll make a plot to see what the result looks like:
End of explanation
"""
# Remove 2 seconds of data from both the beginning and end
conditioned = strain.crop(2, 2)
plt.plot(conditioned.sample_times, conditioned)
plt.xlabel('Time (s)')
"""
Explanation: Notice the large amplitude excursions in the data at the start and end of our data segment. This is spectral leakage: the filters we applied ring off the discontinuities where the data suddenly starts and ends, for a time up to the length of the filter.
To avoid this we should trim the ends of the data in all steps of our filtering. Let's try cropping a couple seconds off of either side.
End of explanation
"""
from pycbc.psd import interpolate, inverse_spectrum_truncation
# Estimate the power spectral density
# We use 4 second samples of our time series in the Welch method.
psd = conditioned.psd(4)
# Now that we have the psd we need to interpolate it to match our data
# and then limit the filter length of 1 / PSD. After this, we can
# directly use this PSD to filter the data in a controlled manner
psd = interpolate(psd, conditioned.delta_f)
# 1/PSD will now act as a filter with an effective length of 4 seconds
# Since the data has been highpassed above 15 Hz and will have low values
# below that frequency, we need to inform the function not to include
# frequencies below it.
psd = inverse_spectrum_truncation(psd, 4 * conditioned.sample_rate,
low_frequency_cutoff=15)
"""
Explanation: That's better.
Calculating the spectral density of the data
Optimal matched filtering requires whitening; weighting the frequency components of the potential signal and data by the estimated noise amplitude.
Let's compute the power spectral density (PSD) of our conditioned data.
End of explanation
"""
m = 36 # Solar masses
hp, hc = get_td_waveform(approximant="SEOBNRv4_opt",
mass1=m,
mass2=m,
delta_t=conditioned.delta_t,
f_lower=20)
# We should resize the vector of our template to match our data
hp.resize(len(conditioned))
plt.plot(hp)
plt.xlabel('Time samples')
"""
Explanation: Define a signal model
Recall that matched filtering is essentially integrating the inner product between your data and your signal model in frequency or time (after weighting frequencies correctly) as you slide your signal model over your data in time.
If there is a signal in the data that matches your 'template', we will see a large value of this inner product (the SNR, or 'signal to noise ratio') at that time.
In a full search, we would grid over the parameters and calculate the SNR time series for each template in our template bank
Here we'll define just one template. Let's assume equal masses (which is within the posterior probability of GW150914). Because we want to match our signal model with each time sample in our data, let's also rescale our signal model vector to match the same number of time samples as our data vector (<- very important!).
Let's also plot the output to see what it looks like.
End of explanation
"""
template = hp.cyclic_time_shift(hp.start_time)
plt.plot(template)
plt.xlabel('Time samples')
"""
Explanation: Note that the waveform template currently begins at the start of the vector. However, we want our SNR time series (the inner product between our data and our template) to track with the approximate merger time. To do this, we need to shift our template so that the merger is approximately at the first bin of the data.
For this reason, waveforms returned from get_td_waveform have their merger stamped with time zero, so we can easily shift the merger into the right position to compute our SNR time series.
Let's try shifting our template time and plot the output.
End of explanation
"""
from pycbc.filter import matched_filter
import numpy
snr = matched_filter(template, conditioned,
psd=psd, low_frequency_cutoff=20)
plt.figure(figsize=[10, 4])
plt.plot(snr.sample_times, abs(snr))
plt.xlabel('Time (s)')
plt.ylabel('SNR')
"""
Explanation: Calculate an SNR time series
Now that we've pre-conditioned our data and defined a signal model, we can compute the output of our matched filter search.
End of explanation
"""
snr = snr.crop(4 + 4, 4)
plt.figure(figsize=[10, 4])
plt.plot(snr.sample_times, abs(snr))
plt.ylabel('Signal-to-noise')
plt.xlabel('Time (s)')
plt.show()
"""
Explanation: Note that, as we expect, there is some corruption at the start and end of our SNR time series, caused by the template filter and the PSD filter.
To account for this, we can smoothly zero out 4 seconds (the length of the PSD filter) at both the beginning and the end of the series.
We should remove an additional 4 seconds at the beginning to account for the template length, although this is somewhat generous for so short a template. A longer signal, such as one from a BNS, would require much more padding at the beginning of the vector.
End of explanation
"""
peak = abs(snr).numpy().argmax()
snrp = snr[peak]
time = snr.sample_times[peak]
print("We found a signal at {}s with SNR {}".format(time,
abs(snrp)))
"""
Explanation: Finally, now that the output is properly cropped, we can find the peak of our SNR time series and estimate the merger time and associated SNR of any event candidate within the data.
End of explanation
"""
# complete
"""
Explanation: You found the first gravitational wave detection in LIGO Hanford data! Nice work.
Exercise 3
How does the SNR change if you re-compute the matched filter result using a signal model with component masses that are closer to the current estimates for GW150914, say m<sub>1</sub> = 36 M<sub>sol</sub> and m<sub>2</sub> = 31 M<sub>sol</sub>?
End of explanation
"""
# complete
"""
Explanation: Exercise 4
Network SNR is the quadrature sum of the single-detector SNR from each contributing detector. GW150914 was detected by H1 and L1. Try calculating the network SNR (you'll need to estimate the SNR in L1 first), and compare your answer to the network PyCBC SNR as reported in the GWTC-1 catalog.
End of explanation
"""
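A minimal sketch of the quadrature sum described above, with hypothetical single-detector values standing in for the peaks you measure in H1 and L1 (compare your result against the catalog value as suggested):

```python
import math

# Hypothetical single-detector SNR peaks -- replace these with the
# values returned by your matched filters for H1 and L1.
snr_h1 = 19.7
snr_l1 = 13.2

# Network SNR is the quadrature sum over contributing detectors.
network_snr = math.sqrt(snr_h1 ** 2 + snr_l1 ** 2)
print(round(network_snr, 1))
```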
# import what we need
from scipy.stats import norm
from math import pi
from math import exp
# make a histogram of SNR values
background = (abs(snr))
# plot the histogram to check out any other outliers
plt.hist(background, bins=50)
plt.xlabel('SNR')
plt.semilogy()
# use norm.fit to fit a normal (Gaussian) distribution
(mu, sigma) = norm.fit(background)
# print out the mean and standard deviation of the fit
print('The fit mean = %f and the fit std dev = %f' % (mu, sigma))
"""
Explanation: Estimate the single-detector significance of an event candidate
Great, we found a large spike in SNR! What are the chances this is a real astrophysical signal? How often would detector noise produce this by chance?
Let's plot a histogram of SNR values output by our matched filtering analysis for this time and see how much this trigger stands out.
End of explanation
"""
# complete
"""
Explanation: Exercise 5
At what single-detector SNR is the significance of a trigger > 5 sigma?
Remember that sigma is constant for a normal distribution (read: this should be simple multiplication now that we have estimated what 1 sigma is).
End of explanation
"""
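A sketch of that multiplication, assuming `mu` and `sigma` come from the `norm.fit` call above (the numbers below are placeholders, not fitted values):

```python
# Placeholder fit values -- substitute the mean and standard deviation
# you obtained from norm.fit on the SNR background.
mu, sigma = 1.3, 0.6

# Under the Gaussian model, a trigger is more than 5 sigma above the
# mean noise SNR once it exceeds mu + 5 * sigma.
threshold_5sigma = mu + 5 * sigma
print(threshold_5sigma)
```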
# complete if time
"""
Explanation: Challenge
Our match filter analysis assumes the noise is stationary and Gaussian, which is not a good assumption, and this short data set isn't representative of all the various things that can go bump in the detector (remember the phone?).
The simple significance estimate above won't work as soon as we encounter a glitch! We need a better noise background estimate, and we can leverage our detector network to help make our signals stand out from our background.
Observing a gravitational wave signal between detectors is an important cross-check to minimize the impact of transient detector noise. Our strategy:
We look for loud triggers within a time window to identify foreground events that occur within the gravitational wave travel time (v=c) between detectors, but could come from any sky position.
We use time slides to estimate the noise background for a network of detectors.
If you still have time, try coding up an algorithm that checks for time coincidence between triggers in different detectors. Remember that the maximum gravitational wave travel time between LIGO detectors is ~10 ms. Check your code with the GPS times for the H1 and L1 triggers you identified for GW150914.
End of explanation
"""
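One possible starting point for the challenge (a naive O(n·m) sketch; a real pipeline would use sorted trigger lists and time slides):

```python
def coincident_triggers(times_a, times_b, window=0.010):
    """Return (t_a, t_b) pairs of trigger GPS times that fall within
    `window` seconds of each other (~10 ms, roughly the maximum
    gravitational-wave travel time between the LIGO sites)."""
    pairs = []
    for t_a in times_a:
        for t_b in times_b:
            if abs(t_a - t_b) <= window:
                pairs.append((t_a, t_b))
    return pairs

# Hypothetical trigger times for illustration -- check your own code
# against the H1/L1 peak times you found for GW150914.
h1_triggers = [1126259462.421, 1126259500.000]
l1_triggers = [1126259462.414, 1126259600.000]
print(coincident_triggers(h1_triggers, l1_triggers))
```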
|
dragoon/kilogram | notebooks/entity_linking_for_types.ipynb | apache-2.0 | import matplotlib.pyplot as plt
from mpltools import style
import numpy as np
style.use('ggplot')
%matplotlib inline
import pandas as pd
import shelve
from collections import defaultdict
"""
Explanation: <small><i>This notebook was put together by Roman Prokofyev@eXascale Infolab. Source and license info is on GitHub.</i></small>
Prerequisites
Pandas: pip install pandas
Matplotlib
End of explanation
"""
count_dict = {}
for line in open('../mapreduce/predicted_label_counts.txt'):
uri, label, values = line.split('\t')
upper_count, lower_count = values.split(',')
count_dict[(uri, label)] = {'infer_normal': int(upper_count), 'infer_lower': int(lower_count), 'len': len(label.split('_')),
'label': label, 'organ_normal': 0, 'organ_lower': 0, 'uri': uri}
for line in open('../mapreduce/organic_label_counts.txt'):
uri, label, values = line.split('\t')
if (uri, label) in count_dict:
upper_count, lower_count = values.split(',')
count_dict[(uri, label)].update({'organ_normal': int(upper_count), 'organ_lower': int(lower_count)})
counts_df = pd.DataFrame(count_dict.values())
del count_dict
counts_df.head()
"""
Explanation: Construct original counts file
End of explanation
"""
from __future__ import division
"""
We never exclude uppercase labels since we don't match at the beginning of a sentence
"""
includes = open('../mapreduce/unambiguous_labels.txt', 'w')
for row in counts_df.iterrows():
row = row[1]
exclude = False
label = row['label']
uri = row['uri']
# skip uppercase
if label.isupper():
includes.write(label+'\t'+uri+'\n')
continue
# if label appears only in lowercase - add to lower includes
if row['organ_normal'] == 0: # means label is lowercase
if row['organ_lower'] > 1:
includes.write(label+'\t'+uri+'\n')
continue
else:
infer_ratio = row['infer_normal']/(row['infer_lower'] or 1)
orig_ratio = row['organ_normal']/(row['organ_lower'] or 1)
if infer_ratio == 0:
# weird label, e.g. 中华人民共和国
continue
# always write a normal-case label
includes.write(label+'\t'+uri+'\n')
if orig_ratio/infer_ratio < 2 and row['infer_lower'] > 0:
includes.write(label.lower()+'\t'+uri+'\n')
includes.close()
"""
Explanation: Generate excludes by ambiguity
End of explanation
"""
counts_df[(counts_df.uri == 'Cicada')]
counts_df[(counts_df.organ_normal > 0) & (counts_df.infer_lower > 0) & (counts_df.infer_normal == 0)]
"""
Explanation: Generate typed n-grams
hdfs dfs -cat /user/roman/wikipedia_ngrams/* | python spark_typed_ngrams_from_plain.py > typed_ngrams.txt
hdfs dfs -put typed_ngrams.txt /user/roman/wikipedia_typed_ngrams/
Hbase-suitable format:
./run_job.py -m ./type_prediction/mapper.py -r ./type_prediction/reducer.py "/user/roman/wikipedia_typed_ngrams" /user/roman/hbase_wikipedia_typed_ngrams
Put into Hbase:
pig -p table=typogram -p path=/user/roman/hbase_wikipedia_typed_ngrams ../extra/hbase_upload_array.pig
End of explanation
"""
|
JaviMerino/lisa | ipynb/thermal/ThermalSensorCharacterisation.ipynb | apache-2.0 | import logging
reload(logging)
log_fmt = '%(asctime)-9s %(levelname)-8s: %(message)s'
logging.basicConfig(format=log_fmt)
# Change to info once the notebook runs ok
logging.getLogger().setLevel(logging.INFO)
%pylab inline
import os
# Support to access the remote target
import devlib
from env import TestEnv
# Support to configure and run RTApp based workloads
from wlgen import RTA, Periodic
# Support for trace events analysis
from trace import Trace
# Support for FTrace events parsing and visualization
import trappy
"""
Explanation: Thermal Sensor Measurements
The goal of this experiment is to measure temperature on Juno R2 board using the available sensors. In order to do that we will run a busy-loop workload of about 5 minutes and collect traces for the thermal_temperature event.
Measurements must be done with and without fan.
End of explanation
"""
# Setup a target configuration
my_target_conf = {
# Target platform and board
"platform" : 'linux',
"board" : 'juno',
# Target board IP/MAC address
"host" : '192.168.0.1',
# Login credentials
"username" : 'root',
"password" : '',
# RTApp calibration values (comment to let LISA do a calibration run)
"rtapp-calib" : {
"0": 318, "1": 125, "2": 124, "3": 318, "4": 318, "5": 319
},
# Tools required by the experiments
"tools" : [ 'rt-app', 'trace-cmd' ],
"exclude_modules" : ['hwmon'],
# FTrace events to collect for all the tests configuration which have
# the "ftrace" flag enabled
"ftrace" : {
"events" : [
"thermal_temperature",
# Use sched_switch event to recognize tasks on kernelshark
"sched_switch",
# cdev_update has been used to show that "step_wise" thermal governor introduces noise
# because it keeps changing the state of the cooling devices and therefore
# the available OPPs
#"cdev_update",
],
"buffsize" : 80 * 1024,
},
}
"""
Explanation: Target Configuration
Our target is a Juno R2 development board running Linux.
End of explanation
"""
# Initialize a test environment using:
# the provided target configuration (my_target_conf)
te = TestEnv(target_conf=my_target_conf)
target = te.target
"""
Explanation: Tests execution
End of explanation
"""
# Create a new RTApp workload generator using the calibration values
# reported by the TestEnv module
rtapp_big = RTA(target, 'big', calibration=te.calibration())
big_tasks = dict()
for cpu in target.bl.bigs:
big_tasks['busy_big'+str(cpu)] = Periodic(duty_cycle_pct=100,
duration_s=360, # 6 minutes
cpus=str(cpu) # pinned to a given cpu
).get()
# Configure this RTApp instance to:
rtapp_big.conf(
# 1. generate a "profile based" set of tasks
kind='profile',
# 2. define the "profile" of each task
params=big_tasks,
# 3. Set load reference for task calibration
loadref='big',
# 4. use this folder for task logfiles
run_dir=target.working_directory
);
rtapp_little = RTA(target, 'little', calibration=te.calibration())
little_tasks = dict()
for cpu in target.bl.littles:
little_tasks['busy_little'+str(cpu)] = Periodic(duty_cycle_pct=100,
duration_s=360,
cpus=str(cpu)).get()
rtapp_little.conf(
kind='profile',
params=little_tasks,
# Allow the task duration to be calibrated for the littles (default is for big)
loadref='little',
run_dir=target.working_directory
);
"""
Explanation: Workloads configuration
End of explanation
"""
logging.info('#### Setup FTrace')
te.ftrace.start()
logging.info('#### Start RTApp execution')
# Run tasks on the bigs in background to allow execution of following instruction
rtapp_big.run(out_dir=te.res_dir, background=True)
# Run tasks on the littles and then wait 2 minutes for device to cool down
rtapp_little.run(out_dir=te.res_dir, end_pause_s=120.0)
logging.info('#### Stop FTrace')
te.ftrace.stop()
"""
Explanation: Workload execution
End of explanation
"""
# Collect the trace
trace_file = os.path.join(te.res_dir, 'trace.dat')
logging.info('#### Save FTrace: %s', trace_file)
te.ftrace.get_trace(trace_file)
# Parse trace
therm_trace = trappy.FTrace(trace_file)
therm_trace.thermal.data_frame.tail(10)
# Plot the data
therm_plot = trappy.ILinePlot(therm_trace,
signals=['thermal:temp'],
filters={'thermal_zone': ["soc"]},
title='Juno R2 SoC Temperature w/o fans')
therm_plot.view()
"""
Explanation: Trace Analysis
In order to analyze the trace we will plot it using TRAPpy.
End of explanation
"""
# Extract a data frame for each zone
df = therm_trace.thermal.data_frame
soc_df = df[df.thermal_zone == "soc"]
big_df = df[df.thermal_zone == "big_cluster"]
little_df = df[df.thermal_zone == "little_cluster"]
gpu0_df = df[df.thermal_zone == "gpu0"]
gpu1_df = df[df.thermal_zone == "gpu1"]
# Build new trace
juno_trace = trappy.BareTrace(name = "Juno_R2")
juno_trace.add_parsed_event("SoC", soc_df)
juno_trace.add_parsed_event("big_Cluster", big_df)
juno_trace.add_parsed_event("LITTLE_Cluster", little_df)
juno_trace.add_parsed_event("gpu0", gpu0_df)
juno_trace.add_parsed_event("gpu1", gpu1_df)
# Plot the data for all sensors
juno_signals = ['SoC:temp', 'big_Cluster:temp', 'LITTLE_Cluster:temp', 'gpu0:temp', 'gpu1:temp']
therm_plot = trappy.ILinePlot([juno_trace],
signals=juno_signals,
title='Juno R2 Temperature all traces')
therm_plot.view()
"""
Explanation: The pmic sensor is off-chip, and therefore its temperature reading is not useful here.
End of explanation
"""
|
hashiprobr/redes-sociais | encontro07/hub-authority.ipynb | gpl-3.0 | import sys
sys.path.append('..')
import numpy as np
import socnet as sn
"""
Explanation: Session 07: Hub/Authority Simulation and Demonstration
Importing the libraries:
End of explanation
"""
sn.graph_width = 225
sn.graph_height = 225
"""
Explanation: Configuring the library:
End of explanation
"""
g = sn.load_graph('graph.gml', has_pos=True)
sn.show_graph(g)
"""
Explanation: Loading the graph:
End of explanation
"""
k = 10
# arbitrarily initialize hubs
g.node[0]['h'] = 0
g.node[1]['h'] = 0
g.node[2]['h'] = 0
g.node[3]['h'] = 0
# arbitrarily initialize authorities
g.node[0]['a'] = 2
g.node[1]['a'] = 6
g.node[2]['a'] = 4
g.node[3]['a'] = 3
for _ in range(k):
# update hubs from authorities
for n in g.nodes():
g.node[n]['h'] = sum([g.node[m]['a'] for m in g.successors(n)])
# update authorities from hubs
for n in g.nodes():
g.node[n]['a'] = sum([g.node[m]['h'] for m in g.predecessors(n)])
# sum the hubs
sh = sum([g.node[n]['h'] for n in g.nodes()])
# sum the authorities
sa = sum([g.node[n]['a'] for n in g.nodes()])
# print normalized hubs and authorities
for n in g.nodes():
print('{}: hub {:04.2f}, authority {:04.2f}'.format(n, g.node[n]['h'] / sh, g.node[n]['a'] / sa))
"""
Explanation: Let's run a simulation of $k$ iterations of the Hub/Authority algorithm:
End of explanation
"""
k = 10
# build the adjacency matrix
A = sn.build_matrix(g)
# build the transposed matrix
At = A.transpose()
# arbitrarily initialize hubs
h = np.array([[0], [0], [0], [0]])
# arbitrarily initialize authorities
a = np.array([[2], [6], [4], [3]])
for _ in range(k):
# update hubs from authorities
h = A.dot(a)
# update authorities from hubs
a = At.dot(h)
# sum the hubs
sh = np.sum(h)
# sum the authorities
sa = np.sum(a)
# print normalized hubs and authorities
for n in g.nodes():
print('{}: hub {:04.2f}, authority {:04.2f}'.format(n, h[n, 0] / sh, a[n, 0] / sa))
"""
Explanation: Consider the following definitions:
$A$ is the adjacency matrix of g;
$h^k$ is the vector of hubs at the end of iteration $k$;
$a^k$ is the vector of authorities at the end of iteration $k$.
Note that:
$h^k = Aa^{k-1}$;
$a^k = A^th^k$.
Let's run a new simulation of $k$ iterations of the Hub/Authority algorithm, this time using matrix algebra:
End of explanation
"""
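Since each round effectively applies $a \leftarrow A^tAa$ (and $h \leftarrow AA^th$), the updates are power iteration, so after normalization the authority vector converges to the principal eigenvector of $A^tA$. A small self-contained sketch, using a hypothetical adjacency matrix rather than the graph.gml data:

```python
import numpy as np

# Hypothetical 4-node adjacency matrix for illustration only.
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 1],
              [1, 0, 0, 1],
              [0, 1, 0, 0]], dtype=float)

a = np.ones(4)
for _ in range(50):
    h = A.dot(a)        # update hubs from authorities
    a = A.T.dot(h)      # update authorities from hubs
    a /= a.sum()        # normalize each round so values stay bounded

print(np.round(a, 4))
```

After enough iterations a further update no longer changes the normalized vector, which is the fixed point the simulations above are approaching.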
|
GoogleCloudPlatform/training-data-analyst | courses/machine_learning/deepdive2/building_production_ml_systems/labs/4b_streaming_data_inference.ipynb | apache-2.0 | !sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
import os
import googleapiclient.discovery
import shutil
from google.cloud import bigquery
from google.api_core.client_options import ClientOptions
from matplotlib import pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.layers import Dense, DenseFeatures
from tensorflow.keras.models import Sequential
print(tf.__version__)
PROJECT = 'cloud-training-demos' # REPLACE WITH YOUR PROJECT ID
BUCKET = 'cloud-training-demos' # REPLACE WITH YOUR BUCKET NAME
REGION = 'us-central1' # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# For Bash Code
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION'] = REGION
"""
Explanation: Working with Streaming Data
Learning Objectives
1. Learn how to process real-time data for ML models using Cloud Dataflow
2. Learn how to serve online predictions using real-time data
Introduction
It can be useful to leverage real time data in a machine learning model when making a prediction. However, doing so requires setting up a streaming data pipeline which can be non-trivial.
Typically you will have the following:
- A series of IoT devices generating and sending data from the field in real-time (in our case these are the taxis)
- A messaging bus to that receives and temporarily stores the IoT data (in our case this is Cloud Pub/Sub)
- A streaming processing service that subscribes to the messaging bus, windows the messages and performs data transformations on each window (in our case this is Cloud Dataflow)
- A persistent store to keep the processed data (in our case this is BigQuery)
These steps happen continuously and in real-time, and are illustrated by the blue arrows in the diagram below.
Once this streaming data pipeline is established, we need to modify our model serving to leverage it. This simply means adding a call to the persistent store (BigQuery) to fetch the latest real-time data when a prediction request comes in. This flow is illustrated by the red arrows in the diagram below.
<img src='../assets/taxi_streaming_data.png' width='80%'>
In this lab we will address how to process real-time data for machine learning models. We will use the same data as our previous 'taxifare' labs, but with the addition of trips_last_5min data as an additional feature. This is our proxy for real-time traffic.
End of explanation
"""
bq = bigquery.Client()
dataset = bigquery.Dataset(bq.dataset("taxifare"))
try:
bq.create_dataset(dataset) # will fail if dataset already exists
print("Dataset created.")
except Exception:
print("Dataset already exists.")
"""
Explanation: Re-train our model with trips_last_5min feature
In this lab, we want to show how to process real-time data for training and prediction. So, we need to retrain our previous model with this additional feature. Go through the notebook training-data-analyst/courses/machine_learning/deepdive2/building_production_ml_systems/labs/4a_streaming_data_training.ipynb. Open and run the notebook to train and save a model. This notebook is very similar to what we did in the Introduction to Tensorflow module but note the added feature for trips_last_5min in the model and the dataset.
Simulate Real Time Taxi Data
Since we don’t actually have real-time taxi data we will synthesize it using a simple python script. The script publishes events to Google Cloud Pub/Sub.
Inspect the iot_devices.py script in the taxicab_traffic folder. It is configured to send about 2,000 trip messages every five minutes with some randomness in the frequency to mimic traffic fluctuations. These numbers come from looking at the historical average of taxi ride frequency in BigQuery.
In production this script would be replaced with actual taxis with IoT devices sending trip data to Cloud Pub/Sub.
To execute the iot_devices.py script, launch a terminal and navigate to the training-data-analyst/courses/machine_learning/deepdive2/building_production_ml_systems/labs directory. Then run the following two commands.
bash
PROJECT_ID=$(gcloud config list project --format "value(core.project)")
python3 ./taxicab_traffic/iot_devices.py --project=$PROJECT_ID
You will see new messages being published every 5 seconds. Keep this terminal open so it continues to publish events to the Pub/Sub topic. If you open Pub/Sub in your Google Cloud Console, you should be able to see a topic called taxi_rides.
Create a BigQuery table to collect the processed data
In the next section, we will create a dataflow pipeline to write processed taxifare data to a BigQuery Table, however that table does not yet exist. Execute the following commands to create a BigQuery dataset called taxifare and a table within that dataset called traffic_realtime.
End of explanation
"""
dataset = bigquery.Dataset(bq.dataset("taxifare"))
table_ref = dataset.table("traffic_realtime")
SCHEMA = [
bigquery.SchemaField("trips_last_5min", "INTEGER", mode="REQUIRED"),
bigquery.SchemaField("time", "TIMESTAMP", mode="REQUIRED"),
]
table = bigquery.Table(table_ref, schema=SCHEMA)
try:
bq.create_table(table)
print("Table created.")
except Exception:
print("Table already exists.")
"""
Explanation: Next, we create a table called traffic_realtime and set up the schema.
End of explanation
"""
%load_ext google.cloud.bigquery
%%bigquery
SELECT
*
FROM
`taxifare.traffic_realtime`
ORDER BY
time DESC
LIMIT 10
"""
Explanation: Launch Streaming Dataflow Pipeline
Now that we have our taxi data being pushed to Pub/Sub, and our BigQuery table set up, let’s consume the Pub/Sub data using a streaming DataFlow pipeline.
The pipeline is defined in ./taxicab_traffic/streaming_count.py. Open that file and inspect it.
There are 5 transformations being applied:
- Read from PubSub
- Window the messages
- Count number of messages in the window
- Format the count for BigQuery
- Write results to BigQuery
TODO: Open the file ./taxicab_traffic/streaming_count.py and find the TODO there. Specify a sliding window that is 5 minutes long, and gets recalculated every 15 seconds. Hint: Reference the beam programming guide for guidance. To check your answer reference the solution.
For the second transform, we specify a sliding window that is 5 minutes long, and recalculate values every 15 seconds.
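As a rough pure-Python sketch of what that sliding window computes (illustrative only — the real pipeline uses Beam's windowing transforms, and the helper below is made up for this example):

```python
# Count events in 5-minute (300 s) windows that slide every 15 s.
# This mimics, in plain Python, what the Beam sliding-window + count
# transforms produce; timestamps are in seconds, made up for illustration.
def sliding_window_counts(timestamps, size=300, period=15):
    """Return {window_start: event count} for each sliding window."""
    if not timestamps:
        return {}
    counts = {}
    start = 0
    while start <= max(timestamps):
        counts[start] = sum(start <= t < start + size for t in timestamps)
        start += period
    return counts

print(sliding_window_counts([0, 10, 100, 400]))
```

Each event therefore lands in size/period = 20 overlapping windows, which is why the table receives a fresh row every 15 seconds.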
In a new terminal, launch the dataflow pipeline using the command below. You can change the BUCKET variable, if necessary. Here it is assumed to be your PROJECT_ID.
bash
PROJECT_ID=$(gcloud config list project --format "value(core.project)")
BUCKET=$PROJECT_ID # CHANGE AS NECESSARY
python3 ./taxicab_traffic/streaming_count.py \
--input_topic taxi_rides \
--runner=DataflowRunner \
--project=$PROJECT_ID \
--temp_location=gs://$BUCKET/dataflow_streaming
Once you've submitted the command above you can examine the progress of that job in the Dataflow section of Cloud console.
Explore the data in the table
After a few moments, you should also see new data written to your BigQuery table as well.
Re-run the query periodically to observe new data streaming in! You should see a new row every 15 seconds.
End of explanation
"""
# TODO 2a. Write a function to take most recent entry in `traffic_realtime` table and add it to instance.
def add_traffic_last_5min(instance):
bq = bigquery.Client()
query_string = """
TODO: Your code goes here
"""
trips = bq.query(query_string).to_dataframe()['trips_last_5min'][0]
instance['traffic_last_5min'] = # TODO: Your code goes here.
return instance
"""
Explanation: Make predictions from the new data
In the rest of the lab, we'll reference the model we trained and deployed from the previous labs, so make sure you have run the code in the 4a_streaming_data_training.ipynb notebook.
The add_traffic_last_5min function below will query the traffic_realtime table to find the most recent traffic information and add that feature to our instance for prediction.
Exercise. Complete the code in the function below. Write a SQL query that will return the most recent entry in traffic_realtime and add it to the instance.
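To make the shape of that logic concrete, here is an in-memory stand-in (the rows and helper below are hypothetical; the lab's function issues a BigQuery SQL query instead):

```python
# Hypothetical stand-in for the traffic_realtime table.
rows = [
    {"trips_last_5min": 1900, "time": "2024-01-01T12:00:00"},
    {"trips_last_5min": 2100, "time": "2024-01-01T12:00:15"},
]

def add_traffic_last_5min_local(instance, rows):
    # Take the most recent entry (largest timestamp) and attach its count.
    latest = max(rows, key=lambda r: r["time"])
    instance["traffic_last_5min"] = latest["trips_last_5min"]
    return instance

enriched = add_traffic_last_5min_local({"dayofweek": 4, "hourofday": 13}, rows)
print(enriched["traffic_last_5min"])  # 2100
```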
End of explanation
"""
add_traffic_last_5min(instance={'dayofweek': 4,
'hourofday': 13,
'pickup_longitude': -73.99,
'pickup_latitude': 40.758,
'dropoff_latitude': 41.742,
'dropoff_longitude': -73.07})
"""
Explanation: The traffic_realtime table is updated in realtime using Cloud Pub/Sub and Dataflow so, if you run the cell below periodically, you should see the traffic_last_5min feature added to the instance and change over time.
End of explanation
"""
# TODO 2b. Write code to call prediction on instance using realtime traffic info.
#Hint: Look at the "Serving online predictions" section of this page https://cloud.google.com/ml-engine/docs/tensorflow/custom-prediction-routine-keras
MODEL_NAME = 'taxifare'
VERSION_NAME = 'traffic'
endpoint = f'https://{REGION}-ml.googleapis.com'
client_options = ClientOptions(api_endpoint=endpoint)
service = googleapiclient.discovery.build('ml', 'v1', cache_discovery=False, client_options=client_options)
name = 'projects/{}/models/{}/versions/{}'.format(PROJECT,
MODEL_NAME,
VERSION_NAME)
instance = {'dayofweek': 4,
'hourofday': 13,
'pickup_longitude': -73.99,
'pickup_latitude': 40.758,
'dropoff_latitude': 41.742,
'dropoff_longitude': -73.07}
instance = # TODO: Your code goes here.
response = # TODO: Your code goes here.
if 'error' in response:
raise RuntimeError(response['error'])
else:
print( # TODO: Your code goes here
"""
Explanation: Finally, we'll use the Python API to call predictions on an instance, using the real-time traffic information in our prediction. Just as above, you should notice that the resulting predictions change over time as the real-time traffic information changes.
Exercise. Complete the code below to call prediction on an instance incorporating realtime traffic info. You should
- use the function add_traffic_last_5min to add the most recent realtime traffic data to the prediction instance
- call prediction on your model for this realtime instance and save the result as a variable called response
- parse the json of response to print the predicted taxifare cost
End of explanation
"""
|
diegocavalca/Studies | programming/Python/tensorflow/exercises/Neural_Network_Part1.ipynb | cc0-1.0 | from __future__ import print_function
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
%matplotlib inline
from datetime import date
date.today()
author = "kyubyong. https://github.com/Kyubyong/tensorflow-exercises"
tf.__version__
np.__version__
"""
Explanation: Neural Network
End of explanation
"""
_x = np.linspace(-10., 10., 1000)
x = tf.convert_to_tensor(_x)
relu = ...
elu = ...
softplus = ...
with tf.Session() as sess:
_relu, _elu, _softplus = sess.run([relu, elu, softplus])
plt.plot(_x, _relu, label='relu')
plt.plot(_x, _elu, label='elu')
plt.plot(_x, _softplus, label='softplus')
plt.legend(bbox_to_anchor=(0.5, 1.0))
plt.show()
"""
Explanation: Activation Functions
Q1. Apply relu, elu, and softplus to x.
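For reference, the three activations in plain Python (illustrative definitions only; in the exercise itself the TensorFlow ops tf.nn.relu, tf.nn.elu, and tf.nn.softplus apply element-wise to the tensor):

```python
import math

def relu(v):
    return max(0.0, v)

def elu(v, alpha=1.0):
    return v if v > 0 else alpha * (math.exp(v) - 1.0)

def softplus(v):
    # Smooth approximation to relu: log(1 + e^v)
    return math.log1p(math.exp(v))

print(relu(-2.0), elu(-2.0), softplus(0.0))
```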
End of explanation
"""
_x = np.linspace(-10., 10., 1000)
x = tf.convert_to_tensor(_x)
sigmoid = ...
tanh = ...
with tf.Session() as sess:
_sigmoid, _tanh = sess.run([sigmoid, tanh])
plt.plot(_x, _sigmoid, label='sigmoid')
plt.plot(_x, _tanh, label='tanh')
plt.legend(bbox_to_anchor=(0.5, 1.0))
plt.grid()
plt.show()
"""
Explanation: Q2. Apply sigmoid and tanh to x.
End of explanation
"""
_x = np.array([[1, 2, 4, 8], [2, 4, 6, 8]], dtype=np.float32)
x = tf.convert_to_tensor(_x)
out = ...
with tf.Session() as sess:
_out = sess.run(out)
print(_out)
assert np.allclose(np.sum(_out, axis=-1), 1)
"""
Explanation: Q3. Apply softmax to x.
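A plain-Python softmax for one row (illustrative; the exercise should use the corresponding TensorFlow op, which applies along the last axis):

```python
import math

def softmax(row):
    m = max(row)                          # subtract max for numerical stability
    exps = [math.exp(v - m) for v in row]
    s = sum(exps)
    return [e / s for e in exps]

print(softmax([1.0, 2.0, 3.0]))  # entries sum to 1
```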
End of explanation
"""
_x = np.array([[1, 2, 4, 8], [2, 4, 6, 8]], dtype=np.float32)
print("_x =\n" , _x)
x = tf.convert_to_tensor(_x)
out = ...
with tf.Session() as sess:
_out = sess.run(out)
print("_out =\n", _out)
"""
Explanation: Q4. Apply dropout with keep_prob=.5 to x.
End of explanation
"""
x = tf.random_normal([8, 10])
"""
Explanation: Fully Connected
Q5. Apply a fully connected layer to x with 2 outputs and then a sigmoid function.
End of explanation
"""
tf.reset_default_graph()
x = tf.random_uniform(shape=(2, 3, 3, 3), dtype=tf.float32)
filter = tf.get_variable("filter", shape=..., dtype=tf.float32,
initializer=tf.random_uniform_initializer())
out = ...
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
_out = sess.run(out)
print(_out.shape)
"""
Explanation: Convolution
Q6. Apply 2 kernels of width-height (2, 2), stride 1, and same padding to x.
End of explanation
"""
tf.reset_default_graph()
x = tf.random_uniform(shape=(4, 10, 10, 3), dtype=tf.float32)
filter = tf.get_variable("filter", shape=..., dtype=tf.float32,
initializer=tf.random_uniform_initializer())
out = ...
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
_out = sess.run(out)
print(_out.shape)
"""
Explanation: Q7. Apply 3 kernels of width-height (2, 2), stride 1, dilation_rate 2 and valid padding to x.
End of explanation
"""
tf.reset_default_graph()
x = tf.random_uniform(shape=(4, 10, 10, 5), dtype=tf.float32)
filter = tf.get_variable("filter", shape=..., dtype=tf.float32,
initializer=tf.random_uniform_initializer())
out = ...
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
_out = sess.run(out)
print(_out.shape)
"""
Explanation: Q8. Apply 4 kernels of width-height (3, 3), stride 2, and same padding to x.
End of explanation
"""
tf.reset_default_graph()
x = tf.random_uniform(shape=(4, 10, 10, 5), dtype=tf.float32)
filter = tf.get_variable("filter", shape=..., dtype=tf.float32,
initializer=tf.random_uniform_initializer())
out = ...
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
_out = sess.run(out)
print(_out.shape)
"""
Explanation: Q9. Apply a depthwise convolution with channel multiplier 4, kernels of width-height (3, 3), stride 2, and same padding to x.
End of explanation
"""
tf.reset_default_graph()
x = tf.random_uniform(shape=(4, 10, 5), dtype=tf.float32)
filter = tf.get_variable("filter", shape=..., dtype=tf.float32,
initializer=tf.random_uniform_initializer())
out = ...
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
_out = sess.run(out)
print(_out.shape)
"""
Explanation: Q10. Apply 5 kernels of height 3, stride 2, and valid padding to x.
End of explanation
"""
tf.reset_default_graph()
x = tf.random_uniform(shape=(4, 5, 5, 4), dtype=tf.float32)
filter = tf.get_variable("filter", shape=..., dtype=tf.float32,
initializer=tf.random_uniform_initializer())
shp = x.get_shape().as_list()
output_shape = ...
out = ...
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
_out = sess.run(out)
print(_out.shape)
"""
Explanation: Q11. Apply conv2d transpose with 5 kernels of width-height (3, 3), stride 2, and same padding to x.
End of explanation
"""
tf.reset_default_graph()
x = tf.random_uniform(shape=(4, 5, 5, 4), dtype=tf.float32)
filter = tf.get_variable("filter", shape=(3, 3, 5, 4), dtype=tf.float32,
initializer=tf.random_uniform_initializer())
shp = x.get_shape().as_list()
output_shape = ...
out = ...
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
_out = sess.run(out)
print(_out.shape)
"""
Explanation: Q12. Apply conv2d transpose with 5 kernels of width-height (3, 3), stride 2, and valid padding to x.
End of explanation
"""
_x = np.zeros((1, 3, 3, 3), dtype=np.float32)
_x[0, :, :, 0] = np.arange(1, 10, dtype=np.float32).reshape(3, 3)
_x[0, :, :, 1] = np.arange(10, 19, dtype=np.float32).reshape(3, 3)
_x[0, :, :, 2] = np.arange(19, 28, dtype=np.float32).reshape(3, 3)
print("1st channel of x =\n", _x[:, :, :, 0])
print("\n2nd channel of x =\n", _x[:, :, :, 1])
print("\n3rd channel of x =\n", _x[:, :, :, 2])
x = tf.constant(_x)
maxpool = ...
avgpool = ...
with tf.Session() as sess:
_maxpool, _avgpool = sess.run([maxpool, avgpool])
print("\n1st channel of max pooling =\n", _maxpool[:, :, :, 0])
print("\n2nd channel of max pooling =\n", _maxpool[:, :, :, 1])
print("\n3rd channel of max pooling =\n", _maxpool[:, :, :, 2])
print("\n1st channel of avg pooling =\n", _avgpool[:, :, :, 0])
print("\n2nd channel of avg pooling =\n", _avgpool[:, :, :, 1])
print("\n3rd channel of avg pooling =\n", _avgpool[:, :, :, 2])
"""
Explanation: Q13. Apply max pooling and average pooling of window size 2, stride 1, and valid padding to x.
End of explanation
"""
|
GoogleCloudPlatform/asl-ml-immersion | notebooks/feature_engineering/solutions/1_bqml_basic_feat_eng.ipynb | apache-2.0 | PROJECT = !gcloud config get-value project
PROJECT = PROJECT[0]
%env PROJECT=$PROJECT
"""
Explanation: Basic Feature Engineering in BQML
Learning Objectives
Create SQL statements to evaluate the model
Extract temporal features
Perform a feature cross on temporal features
Overview
In this lab, we utilize feature engineering to improve the prediction of the fare amount for a taxi ride in New York City. We will use BigQuery ML to build a taxifare prediction model, using feature engineering to improve and create a final model.
In this Notebook we set up the environment, create the project dataset, create a feature engineering table, create and evaluate a baseline model, extract temporal features, perform a feature cross on temporal features, and evaluate model performance throughout the process.
End of explanation
"""
%%bash
# Create a BigQuery dataset for feat_eng if it doesn't exist
datasetexists=$(bq ls -d | grep -w feat_eng)
if [ -n "$datasetexists" ]; then
echo -e "BigQuery dataset already exists, let's not recreate it."
else
echo "Creating BigQuery dataset titled: feat_eng"
bq --location=US mk --dataset \
--description 'Taxi Fare' \
$PROJECT:feat_eng
echo "Here are your current datasets:"
bq ls
fi
"""
Explanation: The source dataset
Our dataset is hosted in BigQuery. The taxi fare data is a publicly available dataset, meaning anyone with a GCP account has access. Click here to access the dataset.
The Taxi Fare dataset is relatively large at 55 million training rows, but simple to understand, with only six features. The fare_amount is the target, the continuous value we’ll train a model to predict.
Create a BigQuery Dataset
A BigQuery dataset is a container for tables, views, and models built with BigQuery ML. Let's create one called feat_eng if we have not already done so in an earlier lab. We'll do the same for a GCS bucket for our project too.
End of explanation
"""
%%bigquery
CREATE OR REPLACE TABLE
feat_eng.feateng_training_data AS
SELECT
(tolls_amount + fare_amount) AS fare_amount,
passenger_count*1.0 AS passengers,
pickup_datetime,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat
FROM
`nyc-tlc.yellow.trips`
WHERE
MOD(ABS(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING))), 10000) = 1
AND fare_amount >= 2.5
AND passenger_count > 0
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
"""
Explanation: Create the training data table
Since there is already a publicly available dataset, we can simply create the training data table using this raw input data. Note the WHERE clause in the below query: This clause allows us to TRAIN a portion of the data (e.g. one hundred thousand rows versus one million rows), which keeps your query costs down. If you need a refresher on using MOD() for repeatable splits see this post.
Note: The dataset in the create table code below is the one created previously, e.g. feat_eng. The table name is feateng_training_data. Run the query to create the table.
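The MOD(ABS(FARM_FINGERPRINT(...)), 10000) = 1 trick hashes each row's timestamp into a bucket, so the same rows are selected on every run. A pure-Python sketch of the idea (md5 stands in for FARM_FINGERPRINT, which is BigQuery-specific):

```python
import hashlib

def keep_row(key, buckets=10000, bucket=1):
    """Deterministically keep rows whose hash falls in one bucket."""
    h = int(hashlib.md5(key.encode("utf-8")).hexdigest(), 16)
    return h % buckets == bucket

# The decision depends only on the key, so the sample is repeatable
# across runs and across machines -- unlike RAND()-based sampling.
key = "2015-01-01 00:30:00"
print(keep_row(key) == keep_row(key))  # True
```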
End of explanation
"""
%%bigquery
# LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT
*
FROM
feat_eng.feateng_training_data
LIMIT
0
"""
Explanation: Verify table creation
Verify that you created the training data table.
End of explanation
"""
%%bigquery
CREATE OR REPLACE MODEL
feat_eng.baseline_model OPTIONS (model_type='linear_reg',
input_label_cols=['fare_amount']) AS
SELECT
fare_amount,
passengers,
pickup_datetime,
pickuplon,
pickuplat,
dropofflon,
dropofflat
FROM
feat_eng.feateng_training_data
"""
Explanation: Baseline Model: Create the baseline model
Next, you create a linear regression baseline model with no feature engineering. Recall that a model in BigQuery ML represents what an ML system has learned from the training data. A baseline model is a solution to a problem without applying any machine learning techniques.
When creating a BQML model, you must specify the model type (in our case linear regression) and the input label (fare_amount). Note also that we are using the training data table as the data source.
Now we create the SQL statement to create the baseline model.
End of explanation
"""
%%bigquery
# Eval statistics on the held out data.
SELECT
*,
SQRT(loss) AS rmse
FROM
ML.TRAINING_INFO(MODEL feat_eng.baseline_model)
%%bigquery
SELECT
*
FROM
ML.EVALUATE(MODEL feat_eng.baseline_model)
"""
Explanation: Note, the query takes several minutes to complete. After the first iteration is complete, your model (baseline_model) appears in the navigation panel of the BigQuery web UI. Because the query uses a CREATE MODEL statement to create a model, you do not see query results.
You can observe the model as it's being trained by viewing the Model stats tab in the BigQuery web UI. As soon as the first iteration completes, the tab is updated. The stats continue to update as each iteration completes.
Once the training is done, visit the BigQuery Cloud Console and look at the model that has been trained. Then, come back to this notebook.
Evaluate the baseline model
Note that BigQuery automatically split the data we gave it, and trained on only a part of the data and used the rest for evaluation. After creating your model, you evaluate the performance of the regressor using the ML.EVALUATE function. The ML.EVALUATE function evaluates the predicted values against the actual data.
NOTE: The results are also displayed in the BigQuery Cloud Console under the Evaluation tab.
Review the learning and eval statistics for the baseline_model.
End of explanation
"""
%%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL feat_eng.baseline_model)
"""
Explanation: NOTE: Because you performed a linear regression, the results include the following columns:
mean_absolute_error
mean_squared_error
mean_squared_log_error
median_absolute_error
r2_score
explained_variance
Resource for an explanation of the Regression Metrics.
Mean squared error (MSE) - Measures the difference between the values our model predicted using the test set and the actual values. You can also think of it as the distance between your regression (best fit) line and the predicted values.
Root mean squared error (RMSE) - The primary evaluation metric for this ML problem is the root mean-squared error. RMSE measures the difference between the predictions of a model, and the observed values. A large RMSE is equivalent to a large average error, so smaller values of RMSE are better. One nice property of RMSE is that the error is given in the units being measured, so you can tell very directly how incorrect the model might be on unseen data.
R2: An important metric in the evaluation results is the R2 score. The R2 score is a statistical measure that determines if the linear regression predictions approximate the actual data. Zero (0) indicates that the model explains none of the variability of the response data around the mean. One (1) indicates that the model explains all the variability of the response data around the mean.
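These metrics are straightforward to compute by hand; a toy example with three observations:

```python
import math

y_true = [3.0, 5.0, 7.0]   # actual fares
y_pred = [2.5, 5.5, 7.0]   # model predictions

n = len(y_true)
rss = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
mse = rss / n
rmse = math.sqrt(mse)                      # same units as fare_amount
mean = sum(y_true) / n
tss = sum((t - mean) ** 2 for t in y_true)
r2 = 1 - rss / tss                         # 1.0 = perfect fit
print(rmse, r2)
```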
Next, we write a SQL query to take the SQRT() of the mean squared error as your loss metric for evaluation for the baseline_model.
End of explanation
"""
%%bigquery
CREATE OR REPLACE MODEL
feat_eng.model_1 OPTIONS (model_type='linear_reg',
input_label_cols=['fare_amount']) AS
SELECT
fare_amount,
passengers,
pickup_datetime,
EXTRACT(DAYOFWEEK
FROM
pickup_datetime) AS dayofweek,
pickuplon,
pickuplat,
dropofflon,
dropofflat
FROM
feat_eng.feateng_training_data
"""
Explanation: Model 1: EXTRACT dayofweek from the pickup_datetime feature.
As you recall, dayofweek is an integer representing the 7 days of the week. Note that the ISO-8601 standard numbers the days from 1 (Monday) to 7 (Sunday), whereas BigQuery's EXTRACT(DAYOFWEEK ...) returns values from 1 (Sunday) to 7 (Saturday).
If you extract the dayofweek from pickup_datetime using BigQuery SQL, the datatype returned is an integer.
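Day-of-week numbering conventions vary by system, which is worth double-checking; Python's standard library alone exposes two of them:

```python
import datetime

d = datetime.date(2016, 1, 3)  # 2016-01-03 was a Sunday
print(d.weekday())             # 6 -- Python convention: Monday=0 .. Sunday=6
print(d.isoweekday())          # 7 -- ISO-8601 convention: Monday=1 .. Sunday=7
# BigQuery's EXTRACT(DAYOFWEEK ...) uses a third convention: Sunday=1 .. Saturday=7.
```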
Next, we create a model titled "model_1" from the benchmark model and extract out the DayofWeek.
End of explanation
"""
%%bigquery
SELECT
*,
SQRT(loss) AS rmse
FROM
ML.TRAINING_INFO(MODEL feat_eng.model_1)
%%bigquery
SELECT
*
FROM
ML.EVALUATE(MODEL feat_eng.model_1)
"""
Explanation: Once the training is done, visit the BigQuery Cloud Console and look at the model that has been trained. Then, come back to this notebook.
Next, two distinct SQL statements show the TRAINING and EVALUATION metrics of model_1.
End of explanation
"""
%%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL feat_eng.model_1)
"""
Explanation: Here we run a SQL query to take the SQRT() of the mean squared error as your loss metric for evaluation for model_1.
End of explanation
"""
%%bigquery
CREATE OR REPLACE MODEL
feat_eng.model_2 OPTIONS (model_type='linear_reg',
input_label_cols=['fare_amount']) AS
SELECT
fare_amount,
passengers,
#pickup_datetime,
EXTRACT(DAYOFWEEK
FROM
pickup_datetime) AS dayofweek,
EXTRACT(HOUR
FROM
pickup_datetime) AS hourofday,
pickuplon,
pickuplat,
dropofflon,
dropofflat
FROM
`feat_eng.feateng_training_data`
%%bigquery
SELECT
*
FROM
ML.EVALUATE(MODEL feat_eng.model_2)
%%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL feat_eng.model_2)
"""
Explanation: Model 2: EXTRACT hourofday from the pickup_datetime feature
As you recall, pickup_datetime is stored as a TIMESTAMP, where the Timestamp format is retrieved in the standard output format – year-month-day hour:minute:second (e.g. 2016-01-01 23:59:59). Hourofday returns the integer number representing the hour number of the given date.
Hourofday is best thought of as a discrete ordinal variable (and not a categorical feature), as the hours can be ranked (e.g. there is a natural ordering of the values). Hourofday has an added characteristic of being cyclic, since 12am follows 11pm and precedes 1am.
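The cyclic nature of hourofday can be captured explicitly by mapping hours onto the unit circle (a common alternative encoding, shown here for illustration rather than used in this lab):

```python
import math

def hour_features(hour):
    """Encode hour-of-day as (sin, cos) so 23:00 and 00:00 end up close."""
    angle = 2 * math.pi * hour / 24
    return math.sin(angle), math.cos(angle)

def dist(h1, h2):
    (s1, c1), (s2, c2) = hour_features(h1), hour_features(h2)
    return math.hypot(s1 - s2, c1 - c2)

print(dist(23, 0))  # small: adjacent hours
print(dist(12, 0))  # large: opposite sides of the clock
```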
Next, we create a model titled "model_2" and EXTRACT the hourofday from the pickup_datetime feature to improve our model's rmse.
End of explanation
"""
%%bigquery
CREATE OR REPLACE MODEL
feat_eng.model_3 OPTIONS (model_type='linear_reg',
input_label_cols=['fare_amount']) AS
SELECT
fare_amount,
passengers,
#pickup_datetime,
#EXTRACT(DAYOFWEEK FROM pickup_datetime) AS dayofweek,
#EXTRACT(HOUR FROM pickup_datetime) AS hourofday,
CONCAT(CAST(EXTRACT(DAYOFWEEK
FROM
pickup_datetime) AS STRING), CAST(EXTRACT(HOUR
FROM
pickup_datetime) AS STRING)) AS hourofday,
pickuplon,
pickuplat,
dropofflon,
dropofflat
FROM
`feat_eng.feateng_training_data`
%%bigquery
SELECT
*
FROM
ML.EVALUATE(MODEL feat_eng.model_3)
%%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL feat_eng.model_3)
"""
Explanation: Model 3: Feature cross dayofweek and hourofday using CONCAT
First, let’s allow the model to learn traffic patterns by creating a new feature that combines the time of day and day of week (this is called a feature cross).
Note: BQML by default assumes that numbers are numeric features, and strings are categorical features. We need to convert both the dayofweek and hourofday features to strings because the model will otherwise treat any integer as a numerical value rather than a categorical value. Thus, if not cast as a string, the dayofweek feature will be interpreted as numeric values (e.g. 1,2,3,4,5,6,7) and hourofday will also be interpreted as numeric values (e.g. 0 through 23). As such, there is no way to distinguish the "feature cross" of hourofday and dayofweek "numerically". Casting the dayofweek and hourofday as strings ensures that each element will be treated like a label and will get its own coefficient associated with it.
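A tiny pure-Python illustration of why the cross must be categorical (toy values; the lab's SQL achieves the same labeling by concatenating the two casts):

```python
# Distinct (dayofweek, hourofday) pairs whose numeric combination collides:
pairs = [(2, 6), (3, 4), (1, 12)]
numeric = [d * h for d, h in pairs]            # [12, 12, 12] -- indistinguishable
categorical = [f"{d}_{h}" for d, h in pairs]   # ['2_6', '3_4', '1_12'] -- distinct labels
print(numeric, categorical)
```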
Create the SQL statement to feature cross the dayofweek and hourofday using the CONCAT function. Name the model "model_3"
End of explanation
"""
|
deculler/MachineLearningTables | Chapter3-2.ipynb | bsd-2-clause | # HIDDEN
# For Tables reference see http://data8.org/datascience/tables.html
# This useful nonsense should just go at the top of your notebook.
from datascience import *
%matplotlib inline
import matplotlib.pyplot as plots
import numpy as np
from sklearn import linear_model
plots.style.use('fivethirtyeight')
plots.rc('lines', linewidth=1, color='r')
from ipywidgets import interact, interactive, fixed
import ipywidgets as widgets
# datascience version number of last run of this notebook
version.__version__
import sys
sys.path.append("..")
from ml_table import ML_Table
import locale
locale.setlocale( locale.LC_ALL, 'en_US.UTF-8' )
# Getting the data
advertising = ML_Table.read_table("./data/Advertising.csv")
advertising = advertising.drop(0)
advertising
"""
Explanation: Chapter 3-2 Multiple Linear Regression
Concepts and data from "An Introduction to Statistical Learning, with applications in R" (Springer, 2013) with permission from the authors: G. James, D. Witten, T. Hastie and R. Tibshirani " available at www.StatLearning.com.
For Tables reference see http://data8.org/datascience/tables.html
End of explanation
"""
advertising.linear_regression('Sales').params
adver_model = advertising.linear_regression('Sales').model
adver_model(0,0,0)
"""
Explanation: 3.2.1 Estimating the Regression Coefficients
The multiple linear regression model takes the form
$Y = β_0 + β_1X_1 +···+β_{p}X_{p} + ε$,
where $X_j$ represents the jth predictor and $β_j$ quantifies the association between that variable and the response. We interpret βj as the average effect on Y of a one unit increase in Xj, holding all other predictors fixed.
In the advertising example, this becomes
$sales = β_0 + β_1×TV + β_2×radio + β_3×newspaper + ε$.
End of explanation
"""
ad2 = advertising.drop('Newspaper')
ad2.linear_regression('Sales').summary()
# Linear model with two input variables is a plane
ad2.plot_fit('Sales', ad2.linear_regression('Sales').model, width=8)
"""
Explanation: Visualizing a 2D regression
End of explanation
"""
# response vector
Y = advertising['Sales']
labels = [lbl for lbl in advertising.labels if lbl != 'Sales']
p = len(labels) # number of parameters
n = len(Y) # number of observations
labels
# Transform the table into a matrix
advertising.select(labels).rows
# Design matrix
X = np.array([np.append([1], row) for row in advertising.select(labels).rows])
# slope vector
b0, slopes = advertising.linear_regression('Sales').params
b = np.append([b0], slopes)
np.shape(X), np.shape(b)
# residual
res = np.dot(X, b) - advertising['Sales']
# Variance of the residual
sigma2 = sum(res**2)/(n-p-1)
sigma2
Xt = np.transpose(X)
# The matrix that needs to be inverted is only p x p
np.dot(Xt, X)
np.shape(np.dot(Xt, X))
# standard error matrix
SEM = sigma2*np.linalg.inv(np.dot(Xt, X))
SEM
# variance of the coefficients are the diagonal elements
variances = [SEM[i,i] for i in range(len(SEM))]
variances
# standard error of the coefficients
SE = [np.sqrt(v) for v in variances]
SE
# t-statistics
b/SE
advertising.linear_regression('Sales').summary()
advertising.RSS_model('Sales', adver_model)
advertising.R2_model('Sales', adver_model)
"""
Explanation: Multiple regression inference and goodness of fit
At this point "ISL" skips over how to compute the standard error of the multiple regression parameters - relying on R to just produce the answer. It requires some matrix notation and a numerical computation of the matrix inverse, but involves a bunch of standard terminology that is specific to the inference aspect, as opposed to the general notion in linear algebra of approximating a function over a basis.
A nice treatment can be found at this reference
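As a miniature, pure-Python version of the computation below (toy data; with an intercept plus one predictor the matrix to invert is just 2×2 — numpy handles the general p×p case):

```python
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.0, 4.1, 5.9, 8.2, 9.8]
n, p = len(x), 1

# (X^T X) for a design matrix [[1, x_i]] and its 2x2 inverse.
sx, sxx = sum(x), sum(xi * xi for xi in x)
det = n * sxx - sx * sx
inv = [[sxx / det, -sx / det], [-sx / det, n / det]]

# Coefficients from the normal equations b = (X^T X)^{-1} X^T y.
sy, sxy = sum(y), sum(xi * yi for xi, yi in zip(x, y))
b0 = inv[0][0] * sy + inv[0][1] * sxy
b1 = inv[1][0] * sy + inv[1][1] * sxy

# Residual variance, then standard errors from the diagonal of sigma2 * inv.
res = [yi - (b0 + b1 * xi) for xi, yi in zip(x, y)]
sigma2 = sum(r * r for r in res) / (n - p - 1)
se_b0, se_b1 = (sigma2 * inv[0][0]) ** 0.5, (sigma2 * inv[1][1]) ** 0.5
print(b0, b1, se_b0, se_b1)
```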
End of explanation
"""
advertising.Cor()
"""
Explanation: 3.2.2 Some Important questions
Is at least one of the predictors X1 , X2 , . . . , Xp useful in predicting the response?
Do all the predictors help to explain Y, or is only a subset of the predictors useful?
How well does the model fit the data?
Given a set of predictor values, what response value should we predict,
and how accurate is our prediction?
Correlation matrix
The correlation matrix above shows that spending on newspaper appears to have no effect on sales. The apparent effect when looking at newspaper versus sales in isolation reflects the tendency to spend more on newspaper when spending more on radio.
End of explanation
"""
advertising.F_model('Sales', adver_model)
advertising.lm_fit('Sales', adver_model)
# Using this tool for the 1D model within the table
advertising.lm_fit('Sales', advertising.regression_1d('Sales', 'TV'), 'TV')
"""
Explanation: F-statistic
$F = \frac{(TSS - RSS)/p}{RSS/(n - p - 1)}$
When there is no relationship between the response and predictors, one would expect the F-statistic to take on a value close to 1. On the other hand, if $H_a$ is true, then E{(TSS − RSS)/p} > σ², so we expect F to be greater than 1.
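The formula translates directly into code; the numbers below are made up purely to exercise it:

```python
def f_statistic(tss, rss, n, p):
    """Overall F-statistic for a linear model with p predictors on n rows."""
    return ((tss - rss) / p) / (rss / (n - p - 1))

# Toy values: a model whose predictors explain most of the variance
# yields F far above 1, supporting rejection of the null hypothesis.
print(f_statistic(tss=100.0, rss=20.0, n=50, p=3))
```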
End of explanation
"""
ad2_model = ad2.linear_regression('Sales').model
ad2.lm_fit('Sales', ad2_model)
RSS0 = ad2.RSS_model('Sales', ad2_model)
RSS = advertising.RSS_model('Sales', adver_model)
# F = ((RSS0 - RSS)/q) / (RSS/(n - p - 1)) with q = 1, p = 3
((RSS0 - RSS)/1)/(RSS/(advertising.num_rows - 3 - 1))
"""
Explanation: Sometimes we want to test that a particular subset of q of the coefficients are zero. This corresponds to a null hypothesis
H0 : $β_{p−q+1} =β_{p−q+2} =...=β_{p} =0$
where for convenience we have put the variables chosen for omission at the end of the list. In this case we fit a second model that uses all the variables except those last q. Suppose that the residual sum of squares for that model is $RSS_0$. Then the appropriate F-statistic is
$F = \frac{(RSS_0 − RSS)/q}{RSS/(n−p−1)}$.
End of explanation
"""
input_labels = [lbl for lbl in advertising.labels if lbl != 'Sales']
fwd_labels = ['Sales']
for lbl in input_labels:
fwd = advertising.select(fwd_labels + [lbl])
model = fwd.linear_regression('Sales').model
print(lbl, fwd.RSS_model('Sales', model))
"""
Explanation: Variable selection
Forward selection - start with the null model and add predictors one at a time, choosing the variable that results in the lowest RSS
Backward selection - start with all variables and iteratively remove the one with the largest P-value (smallest T-statistic)
Mixed selection - add variables as in forward selection, but remove any whose P-value becomes too high
End of explanation
"""
|
GoogleCloudPlatform/asl-ml-immersion | notebooks/text_models/solutions/rnn_encoder_decoder.ipynb | apache-2.0 | pip install nltk
import os
import pickle
import sys
import nltk
import numpy as np
import pandas as pd
import tensorflow as tf
import utils_preproc
from sklearn.model_selection import train_test_split
from tensorflow.keras.layers import GRU, Dense, Embedding, Input
from tensorflow.keras.models import Model, load_model
print(tf.__version__)
SEED = 0
MODEL_PATH = "translate_models/baseline"
DATA_URL = (
"http://storage.googleapis.com/download.tensorflow.org/data/spa-eng.zip"
)
LOAD_CHECKPOINT = False
tf.random.set_seed(SEED)
"""
Explanation: Simple RNN Encoder-Decoder for Translation
Learning Objectives
1. Learn how to create a tf.data.Dataset for seq2seq problems
1. Learn how to train an encoder-decoder model in Keras
1. Learn how to save the encoder and the decoder as separate models
1. Learn how to piece together the trained encoder and decoder into a translation function
1. Learn how to use the BLEU score to evaluate a translation model
Introduction
In this lab we'll build a translation model from Spanish to English using a RNN encoder-decoder model architecture.
We will start by creating train and eval datasets (using the tf.data.Dataset API) that are typical for seq2seq problems. Then we will use the Keras functional API to train an RNN encoder-decoder model, which we will save as two separate models, the encoder and the decoder. Using these two separate pieces we will implement the translation function.
Finally, we'll benchmark our results using the industry-standard BLEU score.
End of explanation
"""
path_to_zip = tf.keras.utils.get_file(
"spa-eng.zip", origin=DATA_URL, extract=True
)
path_to_file = os.path.join(os.path.dirname(path_to_zip), "spa-eng/spa.txt")
print("Translation data stored at:", path_to_file)
data = pd.read_csv(
path_to_file, sep="\t", header=None, names=["english", "spanish"]
)
data.sample(3)
"""
Explanation: Downloading the Data
We'll use a language dataset provided by http://www.manythings.org/anki/. The dataset contains Spanish-English translation pairs in the format:
May I borrow this book? ¿Puedo tomar prestado este libro?
The dataset is a curated list of 120K translation pairs from http://tatoeba.org/, a platform for community contributed translations by native speakers.
End of explanation
"""
raw = [
"No estamos comiendo.",
"Está llegando el invierno.",
"El invierno se acerca.",
"Tom no comio nada.",
"Su pierna mala le impidió ganar la carrera.",
"Su respuesta es erronea.",
"¿Qué tal si damos un paseo después del almuerzo?",
]
processed = [utils_preproc.preprocess_sentence(s) for s in raw]
processed
"""
Explanation: From the utils_preproc package we have written for you,
we will use the following functions to pre-process our dataset of sentence pairs.
Sentence Preprocessing
The utils_preproc.preprocess_sentence() method does the following:
1. Converts sentence to lower case
2. Adds a space between punctuation and words
3. Replaces tokens that aren't a-z or punctuation with space
4. Adds <start> and <end> tokens
For example:
End of explanation
"""
integerized, tokenizer = utils_preproc.tokenize(processed)
integerized
"""
Explanation: Sentence Integerizing
The utils_preproc.tokenize() method does the following:
Splits each sentence into a token list
Maps each token to an integer
Pads to length of longest sentence
It returns an instance of a Keras Tokenizer
containing the token-integer mapping along with the integerized sentences:
End of explanation
"""
tokenizer.sequences_to_texts(integerized)
"""
Explanation: The outputted tokenizer can be used to get back the actual words
from the integers representing them:
End of explanation
"""
def load_and_preprocess(path, num_examples):
with open(path_to_file) as fp:
lines = fp.read().strip().split("\n")
# TODO 1a
sentence_pairs = [
[utils_preproc.preprocess_sentence(sent) for sent in line.split("\t")]
for line in lines[:num_examples]
]
return zip(*sentence_pairs)
en, sp = load_and_preprocess(path_to_file, num_examples=10)
print(en[-1])
print(sp[-1])
"""
Explanation: Creating the tf.data.Dataset
load_and_preprocess
Let's first implement a function that will read the raw sentence-pair file
and preprocess the sentences with utils_preproc.preprocess_sentence.
The load_and_preprocess function takes as input
- the path where the sentence-pair file is located
- the number of examples one wants to read in
It returns a tuple whose first component contains the English
preprocessed sentences, while the second component contains the
Spanish ones:
End of explanation
"""
def load_and_integerize(path, num_examples=None):
targ_lang, inp_lang = load_and_preprocess(path, num_examples)
# TODO 1b
input_tensor, inp_lang_tokenizer = utils_preproc.tokenize(inp_lang)
target_tensor, targ_lang_tokenizer = utils_preproc.tokenize(targ_lang)
return input_tensor, target_tensor, inp_lang_tokenizer, targ_lang_tokenizer
"""
Explanation: load_and_integerize
Using utils_preproc.tokenize, let us now implement the function load_and_integerize that takes as input the data path along with the number of examples we want to read in and returns the following tuple:
python
(input_tensor, target_tensor, inp_lang_tokenizer, targ_lang_tokenizer)
where
input_tensor is an integer tensor of shape (num_examples, max_length_inp) containing the integerized versions of the source language sentences
target_tensor is an integer tensor of shape (num_examples, max_length_targ) containing the integerized versions of the target language sentences
inp_lang_tokenizer is the source language tokenizer
targ_lang_tokenizer is the target language tokenizer
End of explanation
"""
TEST_PROP = 0.2
NUM_EXAMPLES = 30000
"""
Explanation: Train and eval splits
We'll split this data 80/20 into train and validation, and we'll use only the first 30K examples, since we'll be training on a single GPU.
Let us set variable for that:
End of explanation
"""
input_tensor, target_tensor, inp_lang, targ_lang = load_and_integerize(
path_to_file, NUM_EXAMPLES
)
"""
Explanation: Now let's load and integerize the sentence pairs, and store the tokenizers for the source and the target language into the inp_lang and targ_lang variables respectively:
End of explanation
"""
max_length_targ = target_tensor.shape[1]
max_length_inp = input_tensor.shape[1]
"""
Explanation: Let us store the maximal sentence length of both languages into two variables:
End of explanation
"""
splits = train_test_split(
input_tensor, target_tensor, test_size=TEST_PROP, random_state=SEED
)
input_tensor_train = splits[0]
input_tensor_val = splits[1]
target_tensor_train = splits[2]
target_tensor_val = splits[3]
"""
Explanation: We are now using scikit-learn train_test_split to create our splits:
End of explanation
"""
(
len(input_tensor_train),
len(target_tensor_train),
len(input_tensor_val),
len(target_tensor_val),
)
"""
Explanation: Let's make sure the number of examples in each split looks good:
End of explanation
"""
print("Input Language; int to word mapping")
print(input_tensor_train[0])
print(utils_preproc.int2word(inp_lang, input_tensor_train[0]), "\n")
print("Target Language; int to word mapping")
print(target_tensor_train[0])
print(utils_preproc.int2word(targ_lang, target_tensor_train[0]))
"""
Explanation: The utils_preproc.int2word function allows you to transform the integerized sentences back into words. Note that the <start> token is always encoded as 1, while the <end> token is always encoded as 0:
End of explanation
"""
def create_dataset(encoder_input, decoder_input):
# TODO 1c
# shift ahead by 1
target = tf.roll(decoder_input, -1, 1)
# replace last column with 0s
zeros = tf.zeros([target.shape[0], 1], dtype=tf.int32)
target = tf.concat((target[:, :-1], zeros), axis=-1)
dataset = tf.data.Dataset.from_tensor_slices(
((encoder_input, decoder_input), target)
)
return dataset
"""
Explanation: Create tf.data dataset for train and eval
Below we implement the create_dataset function that takes as input
* encoder_input which is an integer tensor of shape (num_examples, max_length_inp) containing the integerized versions of the source language sentences
* decoder_input which is an integer tensor of shape (num_examples, max_length_targ)containing the integerized versions of the target language sentences
It returns a tf.data.Dataset containing examples of the form
python
((source_sentence, target_sentence), shifted_target_sentence)
where source_sentence and target_sentence are the integer versions of source-target language pairs and shifted_target_sentence is the same as target_sentence but with indices shifted by 1.
Remark: In the training code, source_sentence (resp. target_sentence) will be fed as the encoder (resp. decoder) input, while shifted_target will be used to compute the cross-entropy loss by comparing the decoder output with the shifted target sentences.
End of explanation
"""
BUFFER_SIZE = len(input_tensor_train)
BATCH_SIZE = 64
train_dataset = (
create_dataset(input_tensor_train, target_tensor_train)
.shuffle(BUFFER_SIZE)
.repeat()
.batch(BATCH_SIZE, drop_remainder=True)
)
eval_dataset = create_dataset(input_tensor_val, target_tensor_val).batch(
BATCH_SIZE, drop_remainder=True
)
"""
Explanation: Let's now create the actual train and eval dataset using the function above:
End of explanation
"""
EMBEDDING_DIM = 256
HIDDEN_UNITS = 1024
INPUT_VOCAB_SIZE = len(inp_lang.word_index) + 1
TARGET_VOCAB_SIZE = len(targ_lang.word_index) + 1
"""
Explanation: Training the RNN encoder-decoder model
We use an encoder-decoder architecture, however we embed our words into a latent space prior to feeding them into the RNN.
End of explanation
"""
encoder_inputs = Input(shape=(None,), name="encoder_input")
# TODO 2a
encoder_inputs_embedded = Embedding(
input_dim=INPUT_VOCAB_SIZE,
output_dim=EMBEDDING_DIM,
input_length=max_length_inp,
)(encoder_inputs)
encoder_rnn = GRU(
units=HIDDEN_UNITS,
return_sequences=True,
return_state=True,
recurrent_initializer="glorot_uniform",
)
encoder_outputs, encoder_state = encoder_rnn(encoder_inputs_embedded)
"""
Explanation: Let's implement the encoder network with Keras functional API. It will
* start with an Input layer that will consume the source language integerized sentences
* then feed them to an Embedding layer of EMBEDDING_DIM dimensions
* which in turn will pass the embeddings to a GRU recurrent layer with HIDDEN_UNITS
The output of the encoder will be the encoder_outputs and the encoder_state.
End of explanation
"""
decoder_inputs = Input(shape=(None,), name="decoder_input")
# TODO 2b
decoder_inputs_embedded = Embedding(
input_dim=TARGET_VOCAB_SIZE,
output_dim=EMBEDDING_DIM,
input_length=max_length_targ,
)(decoder_inputs)
decoder_rnn = GRU(
units=HIDDEN_UNITS,
return_sequences=True,
return_state=True,
recurrent_initializer="glorot_uniform",
)
decoder_outputs, decoder_state = decoder_rnn(
decoder_inputs_embedded, initial_state=encoder_state
)
"""
Explanation: We now implement the decoder network, which is very similar to the encoder network.
It will
* start with an Input layer that will consume the target language integerized sentences
* then feed that input to an Embedding layer of EMBEDDING_DIM dimensions
* which in turn will pass the embeddings to a GRU recurrent layer with HIDDEN_UNITS
Important: The main difference with the encoder, is that the recurrent GRU layer will take as input not only the decoder input embeddings, but also the encoder_state as outputted by the encoder above. This is where the two networks are linked!
The output of the decoder will be the decoder_outputs and the decoder_state.
End of explanation
"""
decoder_dense = Dense(TARGET_VOCAB_SIZE, activation="softmax")
predictions = decoder_dense(decoder_outputs)
"""
Explanation: The last part of the encoder-decoder architecture is a softmax Dense layer that will create the next word probability vector or next word predictions from the decoder_output:
End of explanation
"""
# TODO 2c
model = Model(inputs=[encoder_inputs, decoder_inputs], outputs=predictions)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
"""
Explanation: To be able to train the encoder-decoder network defined above, we now need to create a trainable Keras Model by specifying which are the inputs and the outputs of our problem. They should correspond exactly to the types of inputs/outputs in our train and eval tf.data.Dataset, since that's what will be fed to the inputs and outputs we declare while instantiating the Keras Model.
While compiling our model, we should make sure that the loss is the sparse_categorical_crossentropy so that we can compare the true word indices for the target language as outputted by our train tf.data.Dataset with the next word predictions vector as outputted by the decoder:
End of explanation
"""
STEPS_PER_EPOCH = len(input_tensor_train) // BATCH_SIZE
EPOCHS = 1
history = model.fit(
train_dataset,
steps_per_epoch=STEPS_PER_EPOCH,
validation_data=eval_dataset,
epochs=EPOCHS,
)
"""
Explanation: Let's now train the model!
End of explanation
"""
if LOAD_CHECKPOINT:
encoder_model = load_model(os.path.join(MODEL_PATH, "encoder_model.h5"))
decoder_model = load_model(os.path.join(MODEL_PATH, "decoder_model.h5"))
else:
# TODO 3a
encoder_model = Model(inputs=encoder_inputs, outputs=encoder_state)
decoder_state_input = Input(
shape=(HIDDEN_UNITS,), name="decoder_state_input"
)
# Reuses weights from the decoder_rnn layer
decoder_outputs, decoder_state = decoder_rnn(
decoder_inputs_embedded, initial_state=decoder_state_input
)
# Reuses weights from the decoder_dense layer
predictions = decoder_dense(decoder_outputs)
decoder_model = Model(
inputs=[decoder_inputs, decoder_state_input],
outputs=[predictions, decoder_state],
)
"""
Explanation: Implementing the translation (or decoding) function
We can't just use model.predict(), because we don't know all the inputs we used during training. We only know the encoder_input (source language) but not the decoder_input (target language), which is what we want to predict (i.e., the translation of the source language)!
We do however know the first token of the decoder input, which is the <start> token. So using this plus the state of the encoder RNN, we can predict the next token. We will then use that token to be the second token of decoder input, and continue like this until we predict the <end> token, or we reach some defined max length.
So, the strategy now is to split our trained network into two independent Keras models:
an encoder model with signature encoder_inputs -> encoder_state
a decoder model with signature [decoder_inputs, decoder_state_input] -> [predictions, decoder_state]
This way, we will be able to encode the source language sentence into the vector encoder_state using the encoder and feed it to the decoder model along with the <start> token at step 1.
Given that input, the decoder will produce the first word of the translation, by sampling from the predictions vector (for simplicity, our sampling strategy here will be to take the next word to be the one whose index has the maximum probability in the predictions vector) along with a new state vector, the decoder_state.
At this point, we can feed again to the decoder the predicted first word and as well as the new decoder_state to predict the translation second word.
This process can be continued until the decoder produces the token <end>.
This is how we will implement our translation (or decoding) function, but let us first extract a separate encoder and a separate decoder from our trained encoder-decoder model.
Remark: If we have already trained and saved the models (i.e., LOAD_CHECKPOINT is True) we will just load the models; otherwise, we extract them from the trained network above by explicitly creating the encoder and decoder Keras Models with the signatures we want.
End of explanation
"""
def decode_sequences(input_seqs, output_tokenizer, max_decode_length=50):
"""
Arguments:
input_seqs: int tensor of shape (BATCH_SIZE, SEQ_LEN)
    output_tokenizer: Tokenizer used to convert from int to words
Returns translated sentences
"""
# Encode the input as state vectors.
states_value = encoder_model.predict(input_seqs)
# Populate the first character of target sequence with the start character.
batch_size = input_seqs.shape[0]
target_seq = tf.ones([batch_size, 1])
decoded_sentences = [[] for _ in range(batch_size)]
# TODO 4: Sampling loop
for i in range(max_decode_length):
output_tokens, decoder_state = decoder_model.predict(
[target_seq, states_value]
)
# Sample a token
sampled_token_index = np.argmax(output_tokens[:, -1, :], axis=-1)
tokens = utils_preproc.int2word(output_tokenizer, sampled_token_index)
for j in range(batch_size):
decoded_sentences[j].append(tokens[j])
# Update the target sequence (of length 1).
target_seq = tf.expand_dims(tf.constant(sampled_token_index), axis=-1)
# Update states
states_value = decoder_state
return decoded_sentences
"""
Explanation: Now that we have a separate encoder and a separate decoder, let's implement a translation function, to which we will give the generic name of decode_sequences (to stress that this procedure is general to all seq2seq problems).
decode_sequences will take as input
* input_seqs which is the integerized source language sentence tensor that the encoder can consume
* output_tokenizer which is the target language tokenizer we will need to extract back words from the predicted word integers
* max_decode_length which is the length after which we stop decoding if the <end> token has not been predicted
Note: Now that the encoder and decoder have been turned into Keras models, to feed them their input, we need to use the .predict method.
End of explanation
"""
sentences = [
"No estamos comiendo.",
"Está llegando el invierno.",
"El invierno se acerca.",
"Tom no comio nada.",
"Su pierna mala le impidió ganar la carrera.",
"Su respuesta es erronea.",
"¿Qué tal si damos un paseo después del almuerzo?",
]
reference_translations = [
"We're not eating.",
"Winter is coming.",
"Winter is coming.",
"Tom ate nothing.",
"His bad leg prevented him from winning the race.",
"Your answer is wrong.",
"How about going for a walk after lunch?",
]
machine_translations = decode_sequences(
utils_preproc.preprocess(sentences, inp_lang), targ_lang, max_length_targ
)
for i in range(len(sentences)):
print("-")
print("INPUT:")
print(sentences[i])
print("REFERENCE TRANSLATION:")
print(reference_translations[i])
print("MACHINE TRANSLATION:")
print(machine_translations[i])
"""
Explanation: Now we're ready to predict!
End of explanation
"""
if not LOAD_CHECKPOINT:
os.makedirs(MODEL_PATH, exist_ok=True)
# TODO 3b
model.save(os.path.join(MODEL_PATH, "model.h5"))
encoder_model.save(os.path.join(MODEL_PATH, "encoder_model.h5"))
decoder_model.save(os.path.join(MODEL_PATH, "decoder_model.h5"))
with open(os.path.join(MODEL_PATH, "encoder_tokenizer.pkl"), "wb") as fp:
pickle.dump(inp_lang, fp)
with open(os.path.join(MODEL_PATH, "decoder_tokenizer.pkl"), "wb") as fp:
pickle.dump(targ_lang, fp)
"""
Explanation: Checkpoint Model
Now let's save the full training encoder-decoder model, as well as the separate encoder and decoder models, to disk for later reuse:
End of explanation
"""
def bleu_1(reference, candidate):
reference = list(filter(lambda x: x != "", reference)) # remove padding
candidate = list(filter(lambda x: x != "", candidate)) # remove padding
smoothing_function = nltk.translate.bleu_score.SmoothingFunction().method1
return nltk.translate.bleu_score.sentence_bleu(
reference, candidate, (1,), smoothing_function
)
def bleu_4(reference, candidate):
reference = list(filter(lambda x: x != "", reference)) # remove padding
candidate = list(filter(lambda x: x != "", candidate)) # remove padding
smoothing_function = nltk.translate.bleu_score.SmoothingFunction().method1
return nltk.translate.bleu_score.sentence_bleu(
reference, candidate, (0.25, 0.25, 0.25, 0.25), smoothing_function
)
"""
Explanation: Evaluation Metric (BLEU)
Unlike say, image classification, there is no one right answer for a machine translation. However our current loss metric, cross entropy, only gives credit when the machine translation matches the exact same word in the same order as the reference translation.
Many attempts have been made to develop a better metric for natural language evaluation. The most popular currently is Bilingual Evaluation Understudy (BLEU).
It is quick and inexpensive to calculate.
It allows flexibility for the ordering of words and phrases.
It is easy to understand.
It is language independent.
It correlates highly with human evaluation.
It has been widely adopted.
The score is from 0 to 1, where 1 is an exact match.
It works by counting matching n-grams between the machine and reference texts, regardless of order. BLEU-4 counts matching n-grams from 1 to 4 (1-gram, 2-gram, 3-gram and 4-gram). It is common to report both BLEU-1 and BLEU-4
It still is imperfect, since it gives no credit to synonyms and so human evaluation is still best when feasible. However BLEU is commonly considered the best among bad options for an automated metric.
The NLTK framework has an implementation that we will use.
We can't calculate BLEU during training, because at that time the correct decoder input is used. Instead we'll calculate it now.
For more info: https://machinelearningmastery.com/calculate-bleu-score-for-text-python/
End of explanation
"""
%%time
num_examples = len(input_tensor_val)
bleu_1_total = 0
bleu_4_total = 0
for idx in range(num_examples):
# TODO 5
reference_sentence = utils_preproc.int2word(
targ_lang, target_tensor_val[idx][1:]
)
decoded_sentence = decode_sequences(
input_tensor_val[idx : idx + 1], targ_lang, max_length_targ
)[0]
bleu_1_total += bleu_1(reference_sentence, decoded_sentence)
bleu_4_total += bleu_4(reference_sentence, decoded_sentence)
print(f"BLEU 1: {bleu_1_total / num_examples}")
print(f"BLEU 4: {bleu_4_total / num_examples}")
"""
Explanation: Let's now average the bleu_1 and bleu_4 scores for all the sentence pairs in the eval set. The next cell takes some time to run, the bulk of which is decoding the 6000 sentences in the validation set. Please wait until it completes.
End of explanation
"""
|
tpin3694/tpin3694.github.io | machine-learning/detecting_outliers.ipynb | mit | # Load libraries
import numpy as np
from sklearn.covariance import EllipticEnvelope
from sklearn.datasets import make_blobs
"""
Explanation: Title: Detecting Outliers
Slug: detecting_outliers
Summary: How to detect outliers for machine learning in Python.
Date: 2016-09-06 12:00
Category: Machine Learning
Tags: Preprocessing Structured Data
Authors: Chris Albon
Preliminaries
End of explanation
"""
# Create simulated data
X, _ = make_blobs(n_samples = 10,
n_features = 2,
centers = 1,
random_state = 1)
# Replace the first observation's values with extreme values
X[0,0] = 10000
X[0,1] = 10000
"""
Explanation: Create Data
End of explanation
"""
# Create detector
outlier_detector = EllipticEnvelope(contamination=.1)
# Fit detector
outlier_detector.fit(X)
# Predict outliers
outlier_detector.predict(X)
"""
Explanation: Detect Outliers
EllipticEnvelope assumes the data is normally distributed and based on that assumption "draws" an ellipse around the data, classifying any observation inside the ellipse as an inlier (labeled as 1) and any observation outside the ellipse as an outlier (labeled as -1). A major limitation of this approach is the need to specify a contamination parameter which is the proportion of observations that are outliers, a value that we don't know.
End of explanation
"""
|
therealAJ/python-sandbox | data-science/learning/ud1/DataScience/TrainTest.ipynb | gpl-3.0 | %matplotlib inline
import numpy as np
from pylab import *
np.random.seed(2)
pageSpeeds = np.random.normal(3.0, 1.0, 100)
purchaseAmount = np.random.normal(50.0, 30.0, 100) / pageSpeeds
scatter(pageSpeeds, purchaseAmount)
"""
Explanation: Train / Test
We'll start by creating some data set that we want to build a model for (in this case a polynomial regression):
End of explanation
"""
trainX = pageSpeeds[:80]
testX = pageSpeeds[80:]
trainY = purchaseAmount[:80]
testY = purchaseAmount[80:]
"""
Explanation: Now we'll split the data in two - 80% of it will be used for "training" our model, and the other 20% for testing it. This way we can avoid overfitting.
End of explanation
"""
scatter(trainX, trainY)
"""
Explanation: Here's our training dataset:
End of explanation
"""
scatter(testX, testY)
"""
Explanation: And our test dataset:
End of explanation
"""
x = np.array(trainX)
y = np.array(trainY)
p4 = np.poly1d(np.polyfit(x, y, 8))
"""
Explanation: Now we'll try to fit an 8th-degree polynomial to this data (which is almost certainly overfitting, given what we know about how it was generated!)
End of explanation
"""
import matplotlib.pyplot as plt
xp = np.linspace(0, 7, 100)
axes = plt.axes()
axes.set_xlim([0,7])
axes.set_ylim([0, 200])
plt.scatter(x, y)
plt.plot(xp, p4(xp), c='r')
plt.show()
"""
Explanation: Let's plot our polynomial against the training data:
End of explanation
"""
testx = np.array(testX)
testy = np.array(testY)
axes = plt.axes()
axes.set_xlim([0,7])
axes.set_ylim([0, 200])
plt.scatter(testx, testy)
plt.plot(xp, p4(xp), c='r')
plt.show()
"""
Explanation: And against our test data:
End of explanation
"""
from sklearn.metrics import r2_score
r2 = r2_score(testy, p4(testx))
print r2
"""
Explanation: Doesn't look that bad when you just eyeball it, but the r-squared score on the test data is kind of horrible! This tells us that our model isn't all that great...
End of explanation
"""
from sklearn.metrics import r2_score
r2 = r2_score(np.array(trainY), p4(np.array(trainX)))
print r2
"""
Explanation: ...even though it fits the training data better:
End of explanation
"""
|
datamicroscopes/release | examples/enron-email.ipynb | bsd-3-clause | %matplotlib inline
import pickle
import time
import itertools as it
import numpy as np
import matplotlib.pylab as plt
import matplotlib.patches as patches
from multiprocessing import cpu_count
import seaborn as sns
sns.set_context('talk')
"""
Explanation: Clustering the Enron e-mail corpus using the Infinite Relational Model
Let's set up our environment
End of explanation
"""
from microscopes.common.rng import rng
from microscopes.common.relation.dataview import numpy_dataview
from microscopes.models import bb as beta_bernoulli
from microscopes.irm.definition import model_definition
from microscopes.irm import model, runner, query
from microscopes.kernels import parallel
from microscopes.common.query import groups, zmatrix_heuristic_block_ordering, zmatrix_reorder
"""
Explanation: Below are the functions from datamicroscopes we'll be using to cluster the data
End of explanation
"""
import enron_utils
"""
Explanation: We've made a set of utilities especially for this dataset, enron_utils. We'll include these as well.
We have downloaded the data and preprocessed it as suggested by Ishiguro et al. 2012. The results of the script have been stored in results.p.
enron_crawler.py in the kernels repo includes the script to create results.p
End of explanation
"""
with open('results.p') as fp:
communications = pickle.load(fp)
def allnames(o):
for k, v in o:
yield [k] + list(v)
names = set(it.chain.from_iterable(allnames(communications)))
names = sorted(list(names))
namemap = { name : idx for idx, name in enumerate(names) }
N = len(names)
communications_relation = np.zeros((N, N), dtype=np.bool)
for sender, receivers in communications:
sender_id = namemap[sender]
for receiver in receivers:
receiver_id = namemap[receiver]
communications_relation[sender_id, receiver_id] = True
print "%d names in the corpus" % N
"""
Explanation: Let's load the data and make a binary matrix to represent email communication between individuals
In this matrix, $X_{i,j} = 1$ if and only if person $i$ sent an email to person $j$
End of explanation
"""
blue_cmap = sns.light_palette("#34495e", as_cmap=True)
labels = [i if i%20 == 0 else '' for i in xrange(N)]
sns.heatmap(communications_relation, cmap=blue_cmap, linewidths=0, cbar=False, xticklabels=labels, yticklabels=labels)
plt.xlabel('person number')
plt.ylabel('person number')
plt.title('Email Communication Matrix')
"""
Explanation: Let's visualize the communication matrix
End of explanation
"""
defn = model_definition([N], [((0, 0), beta_bernoulli)])
views = [numpy_dataview(communications_relation)]
prng = rng()
"""
Explanation: Now, let's learn the underlying clusters using the Infinite Relational Model
Let's import the necessary functions from datamicroscopes
There are 5 steps necessary in inferring a model with datamicroscopes:
1. define the model
2. load the data
3. initialize the model
4. define the runners (MCMC chains)
5. run the runners
Let's start by defining the model and loading the data
To define our model, we need to specify our domains and relations
Our domains are described in a list of the cardinalities of each domain
Our relations are described in a list of tuples, each giving the indices of the domains involved and the model type
In this case, our domain is users, which is of size $N$
Our relations are users to users, both of cardinality $N$, and we model the relation with beta-bernoulli distribution since our data is binary
End of explanation
"""
nchains = cpu_count()
latents = [model.initialize(defn, views, r=prng, cluster_hps=[{'alpha':1e-3}]) for _ in xrange(nchains)]
kc = runner.default_assign_kernel_config(defn)
runners = [runner.runner(defn, views, latent, kc) for latent in latents]
r = parallel.runner(runners)
"""
Explanation: Next, let's initialize the model and define the runners.
These runners are our MCMC chains. We'll use cpu_count to define our number of chains.
End of explanation
"""
start = time.time()
r.run(r=prng, niters=1000)
print "inference took {} seconds".format(time.time() - start)
"""
Explanation: From here, we can finally run each chain of the sampler 1000 times
End of explanation
"""
infers = r.get_latents()
clusters = groups(infers[0].assignments(0), sort=True)
ordering = list(it.chain.from_iterable(clusters))
"""
Explanation: Now that we have learned our model let's get our cluster assignments
End of explanation
"""
z = communications_relation.copy()
z = z[ordering]
z = z[:,ordering]
sizes = map(len, clusters)
boundaries = np.cumsum(sizes)[:-1]
"""
Explanation: Let's sort the communications matrix to highlight our inferred clusters
End of explanation
"""
def cluster_with_name(clusters, name, payload=None):
ident = namemap[name]
for idx, cluster in enumerate(clusters):
if ident in cluster:
return idx, (cluster, payload)
raise ValueError("could not find name")
suspicious = [
cluster_with_name(clusters, "horton-s", {"color":"#66CC66", "desc":"The pipeline/regulatory group"}),
cluster_with_name(clusters, "skilling-j", {"color":"#FF6600", "desc":"The VIP/executives group"}),
]
suspicious = dict(suspicious)
for idx, (boundary, size) in enumerate(zip(boundaries, sizes)):
if size < 5:
continue
plt.plot(range(N), boundary*np.ones(N), color='#0066CC')
plt.plot(boundary*np.ones(N), range(N), color='#0066CC')
if idx in suspicious:
rect = patches.Rectangle((boundary-size, boundary-size),
width=size, height=size, alpha=0.5, fc=suspicious[idx][1]["color"])
plt.gca().add_patch(rect)
plt.imshow(z, cmap=blue_cmap, interpolation='nearest', aspect='auto')
plt.xlabel('Person ID')
plt.ylabel('Person ID')
plt.title('Cluster Assignments in Enron Dataset')
plt.savefig('enron.png')
"""
Explanation: Our model finds suspicious clusters based on the communication data. Let's color and label these clusters in our communications matrix.
End of explanation
"""
def cluster_names(cluster):
return [names[idx] for idx in cluster]
def get_full_name(name):
return enron_utils.FULLNAMES.get(name, name)
def get_title(name):
return enron_utils.TITLES.get(name, "?")
for cluster, payload in suspicious.values():
cnames = cluster_names(cluster)
ctitles = map(get_title, cnames)
print payload["desc"]
for n, t in zip(cnames, ctitles):
print "\t", get_full_name(n), '\t\t"{}"'.format(t)
print
"""
Explanation: We've identified two suspicious clusters. Let's look at the data to find out who these individuals are
End of explanation
"""
zmat = query.zmatrix(domain=0, latents=infers)
zmat = zmatrix_reorder(zmat, zmatrix_heuristic_block_ordering(zmat))
sns.heatmap(zmat, cmap=blue_cmap, cbar=False, xticklabels=labels, yticklabels=labels)
plt.xlabel('people (sorted)')
plt.ylabel('people (sorted)')
plt.title('Z-Matrix of IRM Cluster Assignments')
"""
Explanation: Given the uncertainty behind these latent clusters, we can visualize the variability within these assignments with a z-matrix
Ordering the z-matrix allows us to group members of each possible cluster together
End of explanation
"""
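The z-matrix above is computed by the library, but the underlying idea is simple. As a minimal NumPy sketch (the assignment samples below are made up for illustration), entry (i, j) is the fraction of posterior samples in which people i and j land in the same cluster:

```python
import numpy as np

# hypothetical cluster assignments from 3 posterior samples over 4 people
assignment_samples = [
    [0, 0, 1, 1],
    [0, 0, 0, 1],
    [0, 1, 1, 1],
]

n = len(assignment_samples[0])
zmat = np.zeros((n, n))
for assign in assignment_samples:
    a = np.asarray(assign)
    # count, for this sample, every pair that shares a cluster
    zmat += (a[:, None] == a[None, :])
zmat /= len(assignment_samples)
```

The diagonal is always 1, and off-diagonal values near 1 indicate pairs that are stably clustered together across samples.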
|
ktakagaki/kt-2015-DSPHandsOn | MedianFilter/Python/04. Summaries/Summary sine with more samples(1024).ipynb | gpl-2.0 | import numpy as np
import matplotlib.pyplot as plt
import sys
# Add a new path with the needed .py files (raw string so the backslashes are not treated as escapes)
sys.path.insert(0, r'C:\Users\Dominik\Documents\GitRep\kt-2015-DSPHandsOn\MedianFilter\Python')
import functions
import gitInformation
gitInformation.printInformation()
%matplotlib inline
"""
Explanation: Test: Error of the median filter with different wave numbers, with a higher sample rate (2015.10.14. DW)
End of explanation
"""
fig = plt.figure()
for i in range (0, 40):
functions.ErrorPlotWave(i, 127,1024)
"""
Explanation: Testing with more samples (now 1024, before 128)
Plots
End of explanation
"""
fig = plt.figure(1, figsize=(15, 3))
functions.medianSinPlot(15, 127)
plt.title('Wave number 15')
fig = plt.figure(2, figsize=(15, 3))
functions.medianSinPlot(16, 127)
plt.title('Wave number 16')
fig = plt.figure(3, figsize=(15, 3))
functions.medianSinPlot(17, 127)
plt.title('Wave number 17')
fig = plt.figure(1, figsize=(15, 3))
functions.medianSinPlot(31, 127,1024)
plt.title('Wave number 31')
fig = plt.figure(2, figsize=(15, 3))
functions.medianSinPlot(32, 127,1024)
plt.title('Wave number 32')
fig = plt.figure(3, figsize=(15, 3))
functions.medianSinPlot(33, 127,1024)
plt.title('Wave number 33')
"""
Explanation: With more samples, the error rates at wave numbers 16 and 32 are no longer lower than expected.
End of explanation
"""
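The plots above depend on the repository's `functions` helper module. For readers without it, a self-contained sketch of the underlying operation — median-filtering a sine wave and measuring the squared error, with the same window (127) and sample count (1024) used above — might look like this:

```python
import numpy as np

def median_filter(signal, window):
    # edge-pad so the output has the same length as the input
    half = window // 2
    padded = np.pad(signal, half, mode='edge')
    return np.array([np.median(padded[i:i + window])
                     for i in range(len(signal))])

samples, window = 1024, 127
t = np.arange(samples) / float(samples)
wave = np.sin(2 * np.pi * 16 * t)          # wave number 16
error = np.sum((median_filter(wave, window) - wave) ** 2)
```

This is a plain (slow) reference implementation; `scipy.signal.medfilt` would be the usual vectorized alternative.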
|
CCI-Tools/sandbox | notebooks/norman/xarray-ex-1.ipynb | gpl-3.0 | import numpy as np
import pandas as pd
import xarray as xr
"""
Explanation: Quick overview
Here are some quick examples of what you can do with xarray.DataArray objects. Everything is explained in much more detail in the rest of the documentation.
To begin, import numpy, pandas and xarray using their customary abbreviations:
End of explanation
"""
xr.DataArray(np.random.randn(2, 3))
data = xr.DataArray(np.random.randn(2, 3), [('x', ['a', 'b']), ('y', [-2, 0, 2])])
data
"""
Explanation: Create a DataArray
You can make a DataArray from scratch by supplying data in the form of a numpy array or list, with optional dimensions and coordinates:
End of explanation
"""
xr.DataArray(pd.Series(range(3), index=list('abc'), name='foo'))
"""
Explanation: If you supply a pandas Series or DataFrame, metadata is copied directly:
End of explanation
"""
data.values
data.dims
data.coords
len(data.coords)
data.coords['x']
data.attrs
"""
Explanation: Here are the key properties for a DataArray:
End of explanation
"""
data[[0, 1]]             # positional, by integer (like numpy)
data.loc['a':'b']        # positional, by coordinate label (like pandas)
data.loc
data.isel(x=slice(2))    # by dimension name and integer
data.sel(x=['a', 'b'])   # by dimension name and coordinate label
"""
Explanation: Indexing
xarray supports four kinds of indexing. These operations are just as fast as in pandas, because we borrow pandas’ indexing machinery.
End of explanation
"""
data
data + 10
np.sin(data)
data.T
data.sum()
"""
Explanation: Computation
Data arrays work very similarly to numpy ndarrays:
End of explanation
"""
data.mean(dim='x')
"""
Explanation: However, aggregation operations can use dimension names instead of axis numbers:
End of explanation
"""
a = xr.DataArray(np.random.randn(3), [data.coords['y']])
b = xr.DataArray(np.random.randn(4), dims='z')
a
b
a + b
"""
Explanation: Arithmetic operations broadcast based on dimension name. This means you don’t need to insert dummy dimensions for alignment:
End of explanation
"""
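For contrast, plain NumPy has no dimension names, so the same outer combination requires inserting the dummy axes by hand — a small sketch:

```python
import numpy as np

a = np.random.randn(3)  # stands in for the 'y'-indexed array above
b = np.random.randn(4)  # stands in for the 'z'-indexed array above

# without named dimensions, alignment must be done explicitly
result = a[:, np.newaxis] + b[np.newaxis, :]
```

xarray performs this alignment automatically because the arrays carry their dimension names.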
v1 = xr.DataArray(np.random.rand(3, 2, 4), dims=['t', 'y', 'x'])
v2 = xr.DataArray(np.random.rand(2, 4), dims=['y', 'x'])
v1
v2
v1 + v2
"""
Explanation: Another broadcast example:
End of explanation
"""
data - data.T
"""
Explanation: It also means that in most cases you do not need to worry about the order of dimensions:
End of explanation
"""
data[:-1]
data[:1]
data[:-1] - data[:1]
"""
Explanation: Operations also align based on index labels:
End of explanation
"""
labels = xr.DataArray(['E', 'F', 'E'], [data.coords['y']], name='labels')
labels
data
data.groupby(labels).mean('y')
data.groupby(labels).apply(lambda x: x - x.min())
"""
Explanation: GroupBy
xarray supports grouped operations using a very similar API to pandas:
End of explanation
"""
data.to_series()
data.to_pandas()
"""
Explanation: Convert to pandas
A key feature of xarray is robust conversion to and from pandas objects:
End of explanation
"""
ds = data.to_dataset(name='foo')
ds
"""
Explanation: Datasets and NetCDF
xarray.Dataset is a dict-like container of DataArray objects that share index labels and dimensions. It looks a lot like a netCDF file:
End of explanation
"""
ds.to_netcdf('example.nc')
xr.open_dataset('example.nc')
"""
Explanation: You can do almost everything you can do with DataArray objects with Dataset objects if you prefer to work with multiple variables at once.
Datasets also let you easily read and write netCDF files:
End of explanation
"""
|
hypergravity/cham_hates_python | notebook/cham_hates_python_07_high_performance_computing.ipynb | mit | %pylab inline
# with plt.xkcd():
fig = plt.figure(figsize=(10, 10))
ax = plt.axes(frameon=False)
plt.xlim(-1.5,1.5)
plt.ylim(-1.5,1.5)
circle = plt.Circle((0.,0.), 1., color='w', fill=False)
rect = plt.Rectangle((-1,-1), 2, 2, color='gray')
plt.gca().add_artist(rect)
plt.gca().add_artist(circle)
plt.arrow(-2., 0., 3.3, 0., head_width=0.1, head_length=0.2)
plt.arrow(0., -2., 0., 3.3, head_width=0.1, head_length=0.2)
randx = np.random.uniform(-1, 1, (100,))
randy = np.random.uniform(-1, 1, (100,))
plot(randx, randy, 'kx')
plt.gca().axis('off')
plt.text(-1.3, -0.1, '(-1, 0)', fontsize=20)
plt.text( 1.1, -0.1, '(+1, 0)', fontsize=20)
plt.text( 0.1, 1.1, '(0, +1)', fontsize=20)
plt.text( 0.1, -1.1, '(0, -1)', fontsize=20);
%%time
import random
samples = 1E5
hits = 0
for i in range(int(samples)):
x = random.uniform(-1.0, 1.0)
y = random.uniform(-1.0, 1.0)
if x**2 + y**2 <= 1.0:
hits += 1
pi = 4.0*hits/samples
print pi
"""
Explanation: <img src="https://www.python.org/static/img/python-logo.png">
Welcome to my lessons
Bo Zhang (NAOC, bozhang@nao.cas.cn) will have a few lessons on python.
These are very useful knowledge, skills and code styles when you use python to process astronomical data.
All materials can be found on my github page.
jupyter notebook (formerly named ipython notebook) is recommeded to use
These lectures are organized as below:
1. install python
2. basic syntax
3. numerical computing
4. scientific computing
5. plotting
6. astronomical data processing
7. high performance computing
8. version control
flowchart
test your code BEFORE you do ANY optimization!
find the bottleneck of your code (tip: learn to use a profiler)
use tricks, experience to optimize code
use as many computing resources as possible
parallel computing in multi-CPU/core computer (multiprocessing, ...)
run code on multi-node computer cluster (PBS, ...)
some simple principles for optimization
1. memory vs. speed
2. vectorization
3. type check
4. parallel
recommended packages
1. numexpr
2. Cython
- parallel
1. multiprocessing (standard library)
2. ipcluster/ipyparallel (support PBS)
further reading
Parallel Programming with Python
Python High performance Programming
Learning Cython Programming
Parallel computing
threads: shared memory, involves locks
processes: isolated memory for each process, inter-process communication is less efficient
the easiest way to do parallel computing: embarrassingly parallel (no inter-process communication), which is the case we meet most often
Monte Carlo approximation for $\pi$
End of explanation
"""
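Before reaching for multiple processes, it is worth applying the vectorization principle from the list above: drawing all the samples in one NumPy call removes the Python-level loop entirely, and is often faster than naive multiprocessing. A sketch (using NumPy's modern `Generator` API, which this older notebook predates):

```python
import numpy as np

rng = np.random.default_rng(0)
samples = 1000000
# draw every point at once instead of looping in Python
x = rng.uniform(-1.0, 1.0, samples)
y = rng.uniform(-1.0, 1.0, samples)
hits = np.count_nonzero(x**2 + y**2 <= 1.0)
pi_estimate = 4.0 * hits / samples
```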
%%time
import multiprocessing
def sample():
x = random.uniform(-1.0, 1.0)
y = random.uniform(-1.0, 1.0)
if x**2 + y**2 <= 1.0:
return 1
else:
return 0
pool = multiprocessing.Pool()
results_async = [pool.apply_async(sample) for i in range(int(samples))]
hits = sum(r.get() for r in results_async)
pool.close()
pi = 4.0*hits/samples
print pi
%%time
import multiprocessing
def sample_multiple(samples_partial):
return sum(sample() for i in range(samples_partial))
ntasks = 10
chunk_size = int(samples/ntasks)
pool = multiprocessing.Pool()
results_async = [pool.apply_async(sample_multiple, [chunk_size]) for i in range(ntasks)]
hits = sum(r.get() for r in results_async)
pool.close()
pi = 4.0*hits/samples
print pi
"""
Explanation: DO NOTICE, this is extremely SLOW! Dispatching one task per sample means the inter-process communication overhead dwarfs the trivial computation, so the next version chunks the work into a few larger tasks.
End of explanation
"""
# to creat an instance of Client, import Client from IPython.parallel
from IPython.parallel import Client
# from ipyparallel import Client
%%bash
#ipcluster start -n 12
rc = Client() # creat an Client instance
rc.ids # show IDs of each engine
dview = rc[0] # select the first engine
dview
dview = rc[::2] # select every other engine
dview
dview = rc[:] # select all engines
dview
dview.execute('a = 1')
dview.pull('a').get() # equivalent to dview['a']
dview.push({'a':2}) # equivalent to dview['a'] = 2
dview['a']
res = dview.execute('a = T_T') # got error
res.get()
res = dview.execute('b = a+1')
dview['b']
res = dview.execute('b = b+1')
dview['b']
"""
Explanation: IPython parallel
IPython's power is not limited to its advanced shell. Its parallel package includes a framework to set up and run calculations on single and multi-core machines, as well as on multiple nodes connected to a network. IPython is great because it gives an interactive twist to parallel computing and provides a common interface to different
communication protocols.
how to start engines?
type $ ipcluster start -n 12 in a terminal to start a 12-engine cluster
how to use engines?
direct view (specify tasks for engines!)
task-base view (load balanced)
Direct interface
End of explanation
"""
with dview.sync_imports():
import numpy
# the syntax import _ as _ is not supported
"""
Explanation: Engines should be treated as independent IPython sessions, and imports and custom-defined functions must be synchronized over the network. To import some libraries, both locally and in the engines, you can use the DirectView.sync_imports context manager:
End of explanation
"""
a = range(100)
def square(x):
return x*x
results_async = dview.map_async(square, a)
print results_async.get()
"""
Explanation: DirectView.map_async
End of explanation
"""
@dview.parallel(block=False)
def square(x):
return x * x
print square.map(range(100)).get()
"""
Explanation: parallel decorator
End of explanation
"""
def square(x):
return x*x
result_async = dview.apply(square, 2)
result_async.get()
"""
Explanation: DirectView.apply
executed on every engine
End of explanation
"""
dview.scatter('a', [0, 1, 2, 3])
print dview['a']
dview.scatter('a', np.arange(16))
print dview['a']
dview.execute('a = a**2')
print dview['a']
dview.gather('a').get()
"""
Explanation: scatter & gather
End of explanation
"""
from IPython.parallel import Client
rc = Client()
tview = rc.load_balanced_view()
def square(x):
return x * x
dview.apply(square, 2) # executes in every engine!
tview.apply(square, 2).get() # executed in ONLY 1 engine
tview.apply(square, np.arange(10)).get()
"""
Explanation: task-based interface (load balanced)
End of explanation
"""
def sample():
x = numpy.random.uniform(-1.0, 1.0)
y = numpy.random.uniform(-1.0, 1.0)
if x**2 + y**2 <= 1.0:
return 1
else:
return 0
@dview.parallel()
def sample_multiple(samples_partial):
return sum(sample() for i in range(samples_partial))
%%time
samples = 1E8
ntasks = 10
chunk_size = int(samples/ntasks)
results = sample_multiple.map([chunk_size for i in range(ntasks)]) # ntask determines the # of processes-->10
print 'pi: ', sum(results.get())/np.double(samples)*4.
"""
Explanation: Run the Monte Carlo for $\pi$ on the IPython cluster
1. using @dview.parallel() decorator
End of explanation
"""
def sample():
x = numpy.random.uniform(-1.0, 1.0)
y = numpy.random.uniform(-1.0, 1.0)
if x**2 + y**2 <= 1.0:
return 1
else:
return 0
def sample_multiple(samples_partial):
return sum(sample() for i in range(samples_partial))
dview.push({'sample':sample, 'sample_multiple':sample_multiple})
"""
Explanation: 2. using direct/task-based interface
End of explanation
"""
%%time
samples = int(1E8)
ntasks = len(dview)
chunk_size = int(samples/ntasks)
dview.block = True
results = dview.map(sample_multiple, [chunk_size for i in range(ntasks)])
# tasks should be split evenly across the engines
print 'pi: ', sum(results)/np.double(samples)*4.
"""
Explanation: 2.1 dview.map - blocking mode
tasks are split evenly across the engines
End of explanation
"""
%%time
samples = int(1E8)
ntasks = len(tview)
chunk_size = int(samples/ntasks)
tview.block = True
results = tview.map(sample_multiple, [chunk_size for i in range(ntasks)])
# tasks should be split evenly across the engines
print 'pi: ', sum(results)/np.double(samples)*4.
"""
Explanation: 2.2 tview.map (load balanced)
End of explanation
"""
%%time
samples = 1E8
ntasks = 10
chunk_size = int(samples/ntasks)
dview.block = False
results_async = dview.map_async(sample_multiple,
[chunk_size for i in range(ntasks)])
print results_async.ready()
print 'pi: ', sum(results_async.get())/np.double(samples)*4.
print results_async.ready()
"""
Explanation: 2.3 dview.map_async
End of explanation
"""
%%time
samples = 1E8
ntasks = 10
chunk_size = int(samples/ntasks)
tview.block = False
results_async = tview.map_async(sample_multiple,
[chunk_size for i in range(ntasks)]) #determines the tasks for each engine
# print 'pi: ', sum(results_async.get())/np.double(samples)*4.
print results_async.ready()
print 'pi: ', sum(results_async.get())/np.double(samples)*4.
print results_async.ready()
"""
Explanation: 2.4 tview.map_async
End of explanation
"""
%%time
samples = 1E8
ntasks = len(dview)
chunk_size = int(samples/ntasks)
dview.block = True
results = dview.apply(sample_multiple, chunk_size)
print 'pi: ', sum(results)/np.double(samples)*4.
"""
Explanation: 2.5 dview.apply
End of explanation
"""
%%time
samples = 1E8
ntasks = len(tview)
chunk_size = int(samples/ntasks)
dview.block = True
results = tview.apply(sample_multiple, chunk_size)
print 'pi: ', sum(results.get())/np.double(samples)*4.
print 'pi: ', sum(results.get())/np.double(samples)*4.*ntasks
"""
Explanation: 2.6 tview.apply (single engine execution!)
End of explanation
"""
samples = 1E8
ntasks = 50
chunk_size = int(samples/ntasks)
dview.scatter('chunk_size', [chunk_size for i in range(ntasks)])
dview.scatter('sum_sample', [0 for i in range(ntasks)])
for cz in dview['chunk_size']:
print cz
dview['sample_multiple']
dview.execute('sum_sample = [sample_multiple(chunk_size_) for chunk_size_ in chunk_size]')
dview['sum_sample']
sum(dview.gather('sum_sample'))/samples*4.
"""
Explanation: 2.7 scatter & gather
End of explanation
"""
%%time
samples = 1E8
ntasks = 10
chunk_size = int(samples/ntasks)
dview.block = False
results_async = dview.map_async(sample_multiple,
[chunk_size for i in range(ntasks)])
results_async.ready()
dview.wait(results_async)
print 'pi: ', sum(results_async.get())/np.double(samples)*4.
"""
Explanation: view.wait()
can be used to block until the async results are ready
End of explanation
"""
dview = rc[::4]
dview.execute('qtconsole')
"""
Explanation: open qtconsole to engines?
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/noaa-gfdl/cmip6/models/sandbox-2/aerosol.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'noaa-gfdl', 'sandbox-2', 'aerosol')
"""
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: NOAA-GFDL
Source ID: SANDBOX-2
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 70 (38 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-20 15:02:35
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestep Framework
Timestepping framework of the aerosol model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensional forcing variables, e.g. land-sea mask definition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteological forcings are applied (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Resolution
Resolution in the aerosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
"""
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convention
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species are taken into account in the emissions scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
"""
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_aod_plus_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.5. Prescribed Fields Aod Plus Ccn
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11. Optical Radiative Properties --> Absorption
Absorption properties in aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixing rule
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact aerosol internal mixture?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.external_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.3. External Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact aerosol external mixture?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
"""
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation
"""
|
ConnectedSystems/veneer-py | doc/training/7_Parallel_Processing_and_Veneer_Command_Line.ipynb | isc | from veneer.manage import create_command_line
help(create_command_line)
veneer_install = 'D:\\src\\projects\\Veneer\\Compiled\\Source 4.1.1.4484 (public version)'
source_version = '4.1.1'
cmd_directory = 'E:\\temp\\veneer_cmd'
veneer_cmd = create_command_line(veneer_install,source_version,dest=cmd_directory)
veneer_cmd
"""
Explanation: Session 7 - Parallel Processing and the Veneer command line
This session looks at options for parallel processing with Veneer - that is, by running multiple copies of Source, each with a Veneer server running, and giving instructions to each running copy in parallel.
You can establish multiple copies of Source/Veneer by running multiple copies of the Source application, loading a project and starting the Web Server Monitoring window on each one. Alternatively, you can use the Veneer Command Line, which presents the same interface to Python and other systems, without the overheads of the user interface.
This session shows how you can run the Veneer command line and use it from Python as you would the main Source application.
Overview
Launching multiple copies of Veneer command line using veneer-py
Running simulations in parallel
Which Model?
Note: This session uses ExampleProject/RiverModel1.rsproj. You are welcome to work with your own model instead, however you will need to change the notebook text at certain points to reflect the names of nodes, links and functions in your model file.
The Veneer Command Line
The Veneer command line is a standalone executable program that runs the Source engine and exposes the Veneer network interface, without the main Source user interface. This means that all of the veneer-py functionality can be used whether you are running the Source application or the Veneer command line.
Setting up the command line
The Veneer Command Line is distributed with Veneer, although the setup process is a little different to the regular Veneer. The Veneer Command Line is a standalone, executable program (specifically FlowMatters.Source.VeneerCmd.exe) and can be found in the directory where you installed (probably unzipped) Veneer.
Now, whereas the Veneer plugin DLL can be installed and used from any directory, the Veneer Command Line needs access to all of the main libraries (DLLs) supplied with Source - and so the Veneer Command Line and the Source DLLs must reside in the same directory.
There are two options. You can either
copy the program and the other files supplied with Veneer, into your main Source installation directory, or
copy ALL of the files from the main Source directory into a common directory with Veneer.
Once you've done so, you should be able to run the Veneer command line. You can launch the Veneer command line directly from a Windows command prompt. Alternatively, you can start one or more copies directly from Python using veneer.manage.start (described below).
Because it is a common requirement for everyone to co-locate the Veneer Command Line with the files from the Source software, veneer-py includes a create_command_line function that performs the copy for you:
End of explanation
"""
from veneer.manage import start,kill_all_now
help(start)
"""
Explanation: Starting the Command Line
You can run FlowMatters.Source.VeneerCmd.exe program from a windows command prompt, but throughout these tutorials, we will use veneer-py functions for starting and stopping the program.
Specifically, we will use start to start one or more copies of the program and kill_all_now to shutdown the program. (Alternatively, they will shutdown when the Jupyter notebook is shutdown).
End of explanation
"""
project='ExampleProject/RiverModel1.rsproj'
"""
Explanation: The main things you need, in order to call start are a Source project file (a path to the .rsproj file) and a path to the Veneer command line exe. (The latter we saved to variable veneer_cmd on calling create_command_line).
End of explanation
"""
num_copies=4
first_port=9990
processes, ports = start(project,n_instances=num_copies,ports=first_port,debug=True,remote=False,veneer_exe=veneer_cmd)
"""
Explanation: We can now start up a number of Veneer command line 'servers'.
We'll specify how many we want using num_copies - It's a good idea to set this based on the number of CPU cores available.
We also set first_port - Which is used for the first server. This number is incremented by one for each extra server.
End of explanation
"""
import veneer
ports # Saved when we called start()
vs = [veneer.Veneer(port=p) for p in ports]
"""
Explanation: You should see several lines like [3] Server started. Ctrl-C to exit..., indicating that the servers have started.
These servers will now run until your current python session ends. (To make that happen, without closing the notebook, use the Kernel|Restart menu option in Jupyter)
Parallel Simulations
You can now work with each of these Veneer servers in the same way that you worked with a single server in the earlier sessions.
You will need an instance of the Veneer client object - one for each instance.
Here, we'll create a list of Veneer clients, each connected to a different instance based on the port number
End of explanation
"""
vs[0].run_model()
vs[0].retrieve_multiple_time_series(criteria={'RecordingVariable':'Downstream Flow Volume'})[0:10]
"""
Explanation: You can now ask one of these servers to run a model for you:
End of explanation
"""
for v in vs:
veneer.log('Running on port %d'%v.port)
v.run_model()
print('All runs finished')
"""
Explanation: You could run a model on each server using a for loop:
End of explanation
"""
for v in vs:
veneer.log('Running on port %d'%v.port)
v.run_model(async=True)
print('All runs started... But when will they finish? And how will we know?')
"""
Explanation: But that is a sequential run - one run won't start until the previous run has finished.
The async option on v.run_model will trigger the run on the server and then allow Python to continue:
End of explanation
"""
responses = []
for v in vs:
veneer.log('Running on port %d'%v.port)
responses.append(v.run_model(async=True))
veneer.log("All runs started... Now we'll wait when until they finish")
for r,v in zip(responses,vs):
code = r.getresponse().getcode()
veneer.log('Run finished on port %d. Returned a HTTP %d code'%(v.port,code))
"""
Explanation: The above code block flies through quickly, because it doesn't wait for the simulation to finish. But how will we know when the run has finished, so that we can continue our script?
(We can look at Windows Task Manager to see the CPU load - but it's not exactly a precise mechanism...)
When run with async=True, v.run_model returns an HTTP connection object that can be queried for the success code of the run. We can use this to block until a particular run has finished. Assuming we don't want to do anything else in our script until ALL runs are finished, this is a good approach:
End of explanation
"""
kill_all_now(processes)
"""
Explanation: You can use the async approach to run multiple, parallel simulations from a notebook.
Note: We will shutdown those 4 copies of the server now as the next exercise will use a different model
End of explanation
"""
project='ExampleProject/RiverModel2.rsproj'
num_copies=4
first_port=9990
processes, ports = start(project,n_instances=num_copies,ports=first_port,debug=True,remote=False,veneer_exe=veneer_cmd)
"""
Explanation: Running a batch simulation with parallel simulations...
In Tutorial 5, we performed exponential sampling for an inflow scaling factor. We ran the model 50 times. Lets run that process again, using parallel processing.
The code looked like this (combining a few notebook cells and removing some interim visualisation)
```python
import numpy as np
NUMBER_OF_SIMULATIONS=50
sampled_scaling_factors = np.random.exponential(size=NUMBER_OF_SIMULATIONS)
sampled_scaling_factors
spill_results=[]
Store our time series criteria in a variable to use it in configuring recording and retrieving results
ts_match_criteria = {'NetworkElement':'Recreational Lake','RecordingVariable':'Spill Volume'}
v.configure_recording(enable=[ts_match_criteria])
for scaling_factor in sampled_scaling_factors:
veneer.log('Running for $InflowScaling=%f'%scaling_factor)
    # We are running the model many times in this case - so let's drop any results we already have...
v.drop_all_runs()
# Set $InflowScaling to current scaling factor
v.update_function('$InflowScaling',scaling_factor)
v.run_model()
# Retrieve the spill time series, as an annual sum, with the column named for the variable ('Spill Volume')
run_results = v.retrieve_multiple_time_series(criteria=ts_match_criteria,timestep='annual',name_fn=veneer.name_for_variable)
# Store the mean spill volume and the scaling factor we used
spill_results.append({'ScalingFactor':scaling_factor,'SpillVolume':run_results['Spill Volume'].mean()})
Convert the results to a Data Frame
spill_results_df = pd.DataFrame(spill_results)
spill_results_df
```
Lets convert this to something that runs in parallel
First, let's set up the servers
End of explanation
"""
vs = [veneer.Veneer(port=p) for p in ports]
"""
Explanation: We need a veneer client object for each running server:
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
NUMBER_OF_SIMULATIONS=100
sampled_scaling_factors = np.random.exponential(size=NUMBER_OF_SIMULATIONS)
sampled_scaling_factors
plt.hist(sampled_scaling_factors)
"""
Explanation: The sampling process is the same as before...
Except we'll use more samples and make sure the number of samples is a multiple of the number of servers!
End of explanation
"""
samples = sampled_scaling_factors.reshape(NUMBER_OF_SIMULATIONS // len(ports), len(ports))
samples
"""
Explanation: Now we will organise our samples based on the number of servers we're running - effectively creating batches.
End of explanation
"""
for row in samples:
print(row)
break
"""
Explanation: If we iterate over our samples array now, we'll get groups of four. (The break statement stops after the first iteration of the loop.)
End of explanation
"""
# Store our time series criteria in a variable to use it in configuring recording and retrieving results
ts_match_criteria = {'NetworkElement':'Recreational Lake','RecordingVariable':'Spill Volume'}
for v in vs:
v.configure_recording(enable=[ts_match_criteria])
"""
Explanation: Importantly, in switching on the output recording, we need to do so on each of the running servers:
End of explanation
"""
spill_results=[]
total_runs=0
for group in samples:
group_run_responses = [] # Somewhere
for i in range(len(vs)): # Will be 0,1.. #ports
total_runs += 1
scaling_factor = group[i]
v = vs[i]
        # We are running the model many times in this case - so let's drop any results we already have...
v.drop_all_runs()
# Set $InflowScaling to current scaling factor
v.update_function('$InflowScaling',scaling_factor)
response = v.run_model(async=True)
group_run_responses.append(response)
#### NOW, All runs for this group have been triggered. Now go back and retrieve results
# Retrieve the spill time series, as an annual sum, with the column named for the variable ('Spill Volume')
for i in range(len(vs)): # Will be 0,1.. #ports
scaling_factor = group[i]
v = vs[i]
r = group_run_responses[i]
code = r.getresponse().getcode() # Wait until the job is finished
run_results = v.retrieve_multiple_time_series(criteria=ts_match_criteria,timestep='annual',name_fn=veneer.name_for_variable)
# Store the mean spill volume and the scaling factor we used
spill_results.append({'ScalingFactor':scaling_factor,'SpillVolume':run_results['Spill Volume'].mean()})
veneer.log('Completed %d runs'%total_runs)
# Convert the results to a Data Frame
import pandas as pd
spill_results_df = pd.DataFrame(spill_results)
spill_results_df
spill_results_df['SpillVolumeGL'] = spill_results_df['SpillVolume'] * 1e-6 # Convert to GL
spill_results_df['SpillVolumeGL'].hist()
"""
Explanation: Now, we want to trigger all our runs. We'll use the async=True option and wait for each group of runs to finish before starting the next group.
End of explanation
"""
# Terminate the veneer servers
kill_all_now(processes)
"""
Explanation: Final remarks
The above example isn't as efficient as it could be, but it may be good enough for many circumstances.
The simulations run in parallel, but everything else (configuring recorders, retrieving and post-processing results) is done sequentially. (And no simulations are taking place while that's happening).
Furthermore, if some model runs complete quicker than others, one or more Veneer servers will be idle waiting for further instructions.
The example above is a reasonable approach if the simulations take much longer than the post processing and if the simulations will typically take around the same amount of time.
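As a rough sketch of how the retrieval and post-processing could be overlapped with simulation, each client can be given its own worker thread. Here `clients` stands in for the list of Veneer clients (vs) and `simulate` for the drop_all_runs / update_function / run_model / retrieve_multiple_time_series sequence used above; both are placeholders so the threading pattern itself can run without live servers:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def simulate(client, scaling_factor):
    # Placeholder: the real version would run the model on `client`
    # and return the mean spill volume for this scaling factor.
    return {'ScalingFactor': scaling_factor,
            'SpillVolume': scaling_factor * 2.0}

def run_batch(client, factors):
    # Each worker thread owns one client, so a client never runs two
    # simulations at once, and post-processing on one server overlaps
    # with simulations still running on the others.
    return [simulate(client, f) for f in factors]

clients = ['port 9990', 'port 9991', 'port 9992', 'port 9993']
samples = np.random.exponential(size=100)
batches = np.array_split(samples, len(clients))

with ThreadPoolExecutor(max_workers=len(clients)) as pool:
    spill_results = [r for batch in pool.map(run_batch, clients, batches)
                     for r in batch]
```

With the real Veneer clients substituted in, a slow run on one server no longer delays result retrieval on the others.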
End of explanation
"""
|
jbn/itikz | Quickstart.ipynb | mit | %load_ext itikz
"""
Explanation: Quick Start
Note: If you're viewing this notebook on nbviewer.jupyter.org some of the SVGs render improperly, even across cell output contexts. The bug is not in itikz.
Installation
Install TeX and pdf2svg
This is platform-dependent.
See:
Texlive
pdf2svg
Install itikz
sh
pip install itikz
Usage
Load itikz. It's a Jupyter extension.
End of explanation
"""
%%itikz
\documentclass[tikz]{standalone}
\begin{document}
\begin{tikzpicture}
\draw[help lines] grid (5, 5);
\draw[fill=black] (1, 1) rectangle (2, 2);
\draw[fill=black] (2, 1) rectangle (3, 2);
\draw[fill=black] (3, 1) rectangle (4, 2);
\draw[fill=black] (3, 2) rectangle (4, 3);
\draw[fill=black] (2, 3) rectangle (3, 4);
\end{tikzpicture}
\end{document}
%%itikz
\documentclass[tikz]{standalone}
\begin{document}
\begin{tikzpicture}
\draw[help lines] grid (5, 5);
\draw[fill=black] (1, 1) rectangle (2, 2);
\draw[fill=black] (2, 1) rectangle (3, 2);
\draw[fill=black] (3, 1) rectangle (4, 2);
\draw[fill=black] (3, 2) rectangle (4, 3);
\draw[fill=black] (2, 3) rectangle (3, 4);
\end{tikzpicture}
\end{document}
"""
Explanation: Create a simple standalone document.
End of explanation
"""
!ls *.svg *.tex
"""
Explanation: The extension:
Writes the cell as a .tex file;
Runs pdflatex on the source;
Runs pdf2svg on the generated pdf;
Removes the intermediary artifacts.
By default, the filenames are the md5 hash of the source. The extension uses the hash to see if regeneration is necessary. If it's not, it just loads the SVG file.
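For instance, the cache key for a cell could be reproduced like this (a sketch only: the exact normalization itikz applies to the cell text before hashing is an assumption):

```python
import hashlib

src = r"""\documentclass[tikz]{standalone}
\begin{document}
\begin{tikzpicture}
\draw[help lines] grid (5, 5);
\end{tikzpicture}
\end{document}"""

# Assumption: the key is the md5 hex digest of the raw cell text.
cache_key = hashlib.md5(src.encode('utf-8')).hexdigest()
svg_name = cache_key + '.svg'
```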
End of explanation
"""
%%itikz --file-prefix conway-
\documentclass[tikz]{standalone}
\begin{document}
\begin{tikzpicture}
\draw[help lines] grid (5, 5);
\draw[fill=black] (1, 1) rectangle (2, 2);
\draw[fill=black] (2, 1) rectangle (3, 2);
\draw[fill=black] (3, 1) rectangle (4, 2);
\draw[fill=black] (3, 2) rectangle (4, 3);
\draw[fill=black] (2, 3) rectangle (3, 4);
\end{tikzpicture}
\end{document}
!ls *.svg *.tex
"""
Explanation: This is annoying sometimes if you want to look for a specific file outside of the notebook. So, you can prefix it, attaching semantical meaning.
End of explanation
"""
!rm -f *.svg *.tex
%%itikz --temp-dir --file-prefix conway-
\documentclass[tikz]{standalone}
\begin{document}
\begin{tikzpicture}
\draw[help lines] grid (5, 5);
\draw[fill=black] (1, 1) rectangle (2, 2);
\draw[fill=black] (2, 1) rectangle (3, 2);
\draw[fill=black] (3, 1) rectangle (4, 2);
\draw[fill=black] (3, 2) rectangle (4, 3);
\draw[fill=black] (2, 3) rectangle (3, 4);
\end{tikzpicture}
\end{document}
!ls *.svg *.tex
"""
Explanation: Of course, writing TikZ files entails lots of tiny tweaks, resulting in a lot of accumulated cruft. For development, you probably want to use your system temp directory to keep your project directory clean.
End of explanation
"""
import os
os.environ['ITIKZ_TEMP_DIR'] = '1'
%%itikz --file-prefix conway-
\documentclass[tikz]{standalone}
\begin{document}
\begin{tikzpicture}
\draw[help lines] grid (5, 5);
\draw[fill=blue!10] (1, 1) rectangle (2, 2);
\draw[fill=blue!10] (2, 1) rectangle (3, 2);
\draw[fill=blue!10] (3, 1) rectangle (4, 2);
\draw[fill=blue!10] (3, 2) rectangle (4, 3);
\draw[fill=blue!10] (2, 3) rectangle (3, 4);
\end{tikzpicture}
\end{document}
!ls *.svg *.tex
del os.environ['ITIKZ_TEMP_DIR']
"""
Explanation: To make it easier to switch from development to production mode, setting the ITIKZ_TEMP_DIR environmental to any value enables --temp-dir.
End of explanation
"""
conway_str = r"""\documentclass[tikz]{standalone}
\begin{document}
\begin{tikzpicture}
\draw[help lines] grid (5, 5);
\draw[fill=magenta] (1, 1) rectangle (2, 2);
\draw[fill=magenta] (2, 1) rectangle (3, 2);
\draw[fill=magenta] (3, 1) rectangle (4, 2);
\draw[fill=magenta] (3, 2) rectangle (4, 3);
\draw[fill=magenta] (2, 3) rectangle (3, 4);
\end{tikzpicture}
\end{document}"""
%itikz --temp-dir --file-prefix conway- conway_str
"""
Explanation: Sometimes, you want to generate a TikZ document from a string, rather than a cell. You can do that using the line magic.
End of explanation
"""
%%itikz --file-prefix implicit-demo- --implicit-pic
\draw[help lines] grid (5, 5);
\draw[fill=magenta!10] (1, 1) rectangle (2, 2);
\draw[fill=magenta!10] (2, 1) rectangle (3, 2);
\draw[fill=magenta!10] (3, 1) rectangle (4, 2);
\draw[fill=magenta!10] (3, 2) rectangle (4, 3);
\draw[fill=magenta!10] (2, 3) rectangle (3, 4);
"""
Explanation: Generally, string-generation is bad. One useful thing you can do without it is use an implicit tikzpicture environment.
End of explanation
"""
!cat implicit-demo-a6fdb3ecbc22048b7f090c20b5039b38.tex
!rm implicit-demo*
"""
Explanation: Note that the resulting .tex artifact is a full standalone document, so you can reuse it later when writing a larger LaTeX document.
End of explanation
"""
%%itikz --temp-dir --implicit-pic --tikz-libraries=quotes,angles --tex-packages=amsfonts --scale=2
% Example from Paul Gaborit
% http://www.texample.net/tikz/examples/angles-quotes/
\draw
(3,-1) coordinate (a) node[right] {a}
-- (0,0) coordinate (b) node[left] {b}
-- (2,2) coordinate (c) node[above right] {c}
pic["$\alpha$", draw=orange, <->, angle eccentricity=1.2, angle radius=1cm]
{angle=a--b--c};
\node[rotate=10] (r) at (2.5, 0.65) {Something about in $\mathbb{R}^2$};
"""
Explanation: In an --implicit-pic, it's often useful to:
Set the \tikzpicture[scale=X] via --scale=<X> while iterating.
Set the \usepackage{X,Y,Z} via --tex-packages=<X,Y,Z>
Set the \usetikzlibrary{X,Y,Z} via --tikz-libraries=<X,Y,Z>
End of explanation
"""
%%itikz --temp-dir --implicit-standalone --tex-packages=smartdiagram,amsfonts
\smartdiagramset{uniform sequence color=true,
sequence item border color=black,
sequence item font size=\footnotesize,
sequence item text color=white
}
\smartdiagram[sequence diagram]{
$\mathbb{N}$,
$\mathbb{Z}$,
$\mathbb{Q}$,
$\mathbb{R}$,
$\mathbb{I}$,
$\mathbb{C}$
}
"""
Explanation: Sometimes, tikz-based packages don't use tikzpicture environments. To save a few keystrokes, you may want to use an implicit standalone flag. Note: You have to use --tex-packages=tikz in this environment if you need tikz itself!
End of explanation
"""
node_names = "ABCDEF"
nodes = {s: int(365/len(node_names) * i) for i, s in enumerate(node_names)}
n = len(nodes)
nodes
"""
Explanation: To help ensure that your tikz pictures stay aligned with your data, and to reduce the need for properly knowing PGF, you can use jinja2 templates! For example, let's say you had six nodes in a DAG, ${A,B,C,D,E,F}$. You could figure out positioning and such in the notebook, where your brain lives.
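For reference, the interpolation that the magic performs can be reproduced directly with the jinja2 library (this sketch assumes jinja2 is installed; the template fragment below is illustrative, not the full picture source):

```python
from jinja2 import Template

# Same data preparation as in the cell above.
node_names = "ABCDEF"
nodes = {s: int(365 / len(node_names) * i) for i, s in enumerate(node_names)}

# A minimal fragment of the TikZ template: one \node per entry.
template = Template(
    r"{% for name, angle in nodes.items() -%}"
    r"\node (v{{ loop.index0 }}) at ({{ angle }}:1) {${{ name }}$};"
    r"{% endfor %}"
)
rendered = template.render(nodes=nodes)
print(rendered)
```

This is what --as-jinja does under the hood before handing the result to LaTeX.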
End of explanation
"""
%%itikz --as-jinja --temp-dir
\documentclass[tikz]{standalone}
\usetikzlibrary{arrows,automata}
\definecolor{mymagenta}{RGB}{226,0,116}
\begin{document}
\begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=2.8cm,
semithick]
\tikzstyle{every state}=[fill=mymagenta,draw=none,text=white]
{% for name, angle in nodes.items() -%}
\node[color=mymagenta] (v{{loop.index0}}) at ({{angle}}:1) {${{name}}$};
{% endfor -%}
{% for n1 in range(n) -%}
{% for n2 in range(n) -%}
{%if n1 < n2 -%}
\path (v{{n1}}) edge (v{{n2}});
{% endif -%}
{% endfor -%}
{% endfor -%}
\end{tikzpicture}
\end{document}
"""
Explanation: Then, you can interpret the cell magic source as a jinja2 template.
End of explanation
"""
%%itikz --as-jinja --temp-dir --tex-packages=tikz --tikz-libraries=arrows,automata --implicit-standalone
\begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=2.8cm,
semithick]
\tikzstyle{every state}=[fill=mymagenta,draw=none,text=white]
{% for name, angle in nodes.items() -%}
\node[color=red] (v{{loop.index0}}) at ({{angle}}:1) {${{name}}$};
{% endfor -%}
{% for n1 in range(n) -%}
{% for n2 in range(n) -%}
{%if n1 < n2 -%}
\path (v{{n1}}) edge (v{{n2}});
{% endif -%}
{% endfor -%}
{% endfor -%}
\end{tikzpicture}
"""
Explanation: Which also works with the implicit environments.
End of explanation
"""
%%itikz --as-jinja --print-jinja --temp-dir --tex-packages=tikz --tikz-libraries=arrows,automata --implicit-standalone
\begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=2.8cm,
semithick]
\tikzstyle{every state}=[fill=mymagenta,draw=none,text=white]
{% for name, angle in nodes.items() -%}
\node[color=red] (v{{loop.index0}}) at ({{angle}}:1) {${{name}}$};
{% endfor -%}
{% for n1 in range(n) -%}
{% for n2 in range(n) -%}
{%if n1 < n2 -%}
\path (v{{n1}}) edge (v{{n2}});
{% endif -%}
{% endfor -%}
{% endfor %}
\end{tikzpicture}
"""
Explanation: Sometimes, you'll make mistakes. Debugging transpiled code is hard, especially without a mapping. To help, you can print the interpolated source.
End of explanation
"""
%%writefile dag_demo.tex
\documentclass[tikz]{standalone}
\usetikzlibrary{arrows,automata}
\definecolor{mymagenta}{RGB}{226,0,116}
\begin{document}
\begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=2.8cm,
semithick]
\tikzstyle{every state}=[fill=mymagenta,draw=none,text=white]
{% block content %}
{% endblock %}
\end{tikzpicture}
\end{document}
!ls dag_demo.tex
%%itikz --as-jinja --temp-dir
{% extends "dag_demo.tex" %}
{% block content %}
{% for name, angle in nodes.items() %}
\node[color=mymagenta] (v{{loop.index0}}) at ({{angle}}:1) {${{name}}$};
{% endfor -%}
{% for n1 in range(n) %}
{% for n2 in range(n) %}
{%if n1 < n2 %}
\path (v{{n1}}) edge (v{{n2}});
{% endif %}
{% endfor -%}
{% endfor %}
{% endblock %}
!rm dag_demo.tex # ignore this, it's just housekeeping
"""
Explanation: Finally, it's worth noting that jinja templating assumes a jinja2 file loader rooted in the $CWD. This means you can stick to the DRY principle with blocks and template extension.
End of explanation
"""
%%itikz --implicit-pic --temp-dir --rasterize
\draw[fill=black] (1, 1) rectangle (2, 2);
\draw[fill=black] (2, 1) rectangle (3, 2);
\draw[fill=black] (3, 1) rectangle (4, 2);
\draw[fill=black] (3, 2) rectangle (4, 3);
\draw[fill=black] (2, 3) rectangle (3, 4);
"""
Explanation: Sometimes, you may want to send a collaborator a quick snapshot of your work. But, lots of messaging windows won't accept SVG drag-and-drop. As a time saver, you can rasterize the SVG as a PNG (if you have cairosvg installed).
End of explanation
"""
%%itikz --implicit-pic
4$
"""
Explanation: Also, the latex command line error messages tend to be...verbose. By default, only the tail is shown.
End of explanation
"""
%%itikz --implicit-pic --full-error
4$
"""
Explanation: But, if this isn't enough, you can see the whole message.
End of explanation
"""
%itikz -h
"""
Explanation: Finally, if you forget the usage, ask for help.
End of explanation
"""
|
nansencenter/nansat-lectures | notebooks/09 Nansat introduction.ipynb | gpl-3.0 | import os
import shutil
import nansat
idir = os.path.join(os.path.dirname(nansat.__file__), 'tests', 'data/')
"""
Explanation: Nansat: First Steps
Copy sample data
End of explanation
"""
import matplotlib.pyplot as plt
%matplotlib inline
from nansat import Nansat
n = Nansat(idir+'gcps.tif')
"""
Explanation: Open file with Nansat
End of explanation
"""
print (n)
"""
Explanation: Read information ABOUT the data (METADATA)
End of explanation
"""
b1 = n[1]
"""
Explanation: Read the actual DATA
End of explanation
"""
%whos
plt.imshow(b1);plt.colorbar()
plt.show()
"""
Explanation: Check what kind of data we have
End of explanation
"""
|
ANNarchy/ANNarchy | examples/tensorboard/BasalGanglia.ipynb | gpl-2.0 | from ANNarchy import *
from ANNarchy.extensions.tensorboard import Logger
import matplotlib.pyplot as plt
"""
Explanation: Logging with tensorboard
The tensorboard extension allows to log various information (scalars, images, etc) during training for visualization using tensorboard.
It has to be explicitly imported:
End of explanation
"""
stimuli = [
([1, 0, 0, 0], 0), # A : left
([0, 1, 0, 0], 0), # B : left
([0, 0, 1, 0], 1), # C : right
([0, 0, 0, 1], 1), # D : right
]
"""
Explanation: As it is just for demonstration purposes, we will build an extremely simplified model of the basal ganglia that learns, through reinforcement learning, a stimulus-response task with 4 stimuli and 2 responses (left and right). The first two stimuli should be responded to with left, the other two with right.
End of explanation
"""
cortex = Population(4, Neuron(parameters="r=0.0"))
"""
Explanation: We keep the model as simple as possible here. It is inspired by the rate-coded model described here:
Vitay J, Hamker FH. 2010. A computational model of Basal Ganglia and its role in memory retrieval in rewarded visual memory tasks. Frontiers in computational neuroscience 4. doi:10.3389/fncom.2010.00013
The input population is composed of 4 static neurons to represent the inputs:
End of explanation
"""
msn = Neuron(
parameters="tau = 10.0 : population; noise = 0.1 : population",
equations="""
tau*dv/dt + v = sum(exc) - sum(inh) + noise * Uniform(-1, 1)
r = clip(v, 0.0, 1.0)
""")
striatum = Population(10, msn)
"""
Explanation: The cortex projects on the striatum, which is composed of 10 neurons integrating excitatory and inhibitory inputs:
End of explanation
"""
gp_neuron = Neuron(
parameters="tau = 10.0 : population; B = 1.0",
equations="tau*dv/dt + v = B - sum(inh); r= pos(v)")
gpi = Population(2, gp_neuron)
"""
Explanation: The striatum projects inhibitorily onto GPi, whose neurons are tonically active (high baseline). Normally, GPi would project to the thalamus and back to the cortex, but here we read the output of the network directly from GPi: if the first neuron (corresponding to the left action) is less active than the second one, the selected action is left.
End of explanation
"""
corticostriatal = Synapse(
parameters="""
eta = 0.1 : projection
alpha = 0.5 : projection
dopamine = 0.0 : projection""",
equations="w += eta*(dopamine * pre.r * post.r - alpha*w*post.r*post.r) : min=0.0"
)
cx_str = Projection(cortex, striatum, "exc", corticostriatal)
cx_str.connect_all_to_all(weights=Uniform(0.0, 0.5))
"""
Explanation: Learning occurs at the cortico-striatal synapses, using a reward-modulated Hebbian learning rule, with Oja regularization:
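Written out, the update rule in the Synapse definition above is (with $DA$ the dopamine level set on the projection, and $\eta$, $\alpha$ as in the parameters; notation is ours):

```latex
\Delta w = \eta \left( DA \cdot r_{\text{pre}} \cdot r_{\text{post}}
                       - \alpha \, w \, r_{\text{post}}^2 \right),
\qquad w \ge 0
```

The first term is the dopamine-gated Hebbian product; the second is the Oja-style regularization that keeps the weights bounded.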
End of explanation
"""
str_str = Projection(striatum, striatum, "inh")
str_str.connect_all_to_all(weights=0.6)
"""
Explanation: Some lateral competition between the striatal neurons:
End of explanation
"""
str_gpi1 = Projection(striatum[:int(striatum.size/2)], gpi[0], 'inh').connect_all_to_all(1.0)
str_gpi2 = Projection(striatum[int(striatum.size/2):], gpi[1], 'inh').connect_all_to_all(1.0)
"""
Explanation: One half of the striatal population is connected to the left GPi neuron, the other half to the right neuron:
End of explanation
"""
m = Monitor(gpi, 'r')
compile()
"""
Explanation: We add a monitor on GPi and compile:
End of explanation
"""
def training_trial(x, t):
# Delay period
cortex.r = 0.0
cx_str.dopamine = 0.0
simulate(40.0)
# Set inputs
cortex.r = np.array(x)
simulate(50.0)
# Read output
output = gpi.r
answer = np.argmin(output)
# Provide reward
reward = 1.0 if answer == t else -1.0
cx_str.dopamine = reward
simulate(10.0)
# Get recordings
data = m.get('r')
return reward, data
"""
Explanation: Each trial is very simple: we get a stimulus x from the stimuli array and a correct response t, reset the network for 40 ms, set the input and simulate for 50 ms, observe the activity in GPi to decide what the answer of the network is, provide reward accordingly to the corticostriatal projection and let learn for 10 ms.
Here the "dopamine" signal is directly the reward (+1 for success, -1 for failure), not the reward prediction error, but it is just for demonstration.
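For illustration, if we did want a reward prediction error, a hypothetical helper could maintain a running estimate of the expected reward and return the difference. This class is an assumption for demonstration only, not part of the model above:

```python
class RewardPredictionError:
    """Hypothetical helper: dopamine = reward - expected reward."""
    def __init__(self, alpha=0.5):
        self.alpha = alpha   # learning rate of the value estimate
        self.value = 0.0     # running estimate of the expected reward

    def __call__(self, reward):
        delta = reward - self.value       # prediction error
        self.value += self.alpha * delta  # update the expectation
        return delta

rpe = RewardPredictionError(alpha=0.5)
print(rpe(1.0))  # 1.0 : a fully unexpected reward
print(rpe(1.0))  # 0.5 : partially expected by now
```

One could then set cx_str.dopamine to this delta instead of the raw reward.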
End of explanation
"""
%rm -rf runs
with Logger() as logger:
for trial in range(100):
# Get a stimulus
x, t = stimuli[trial%len(stimuli)]
# Perform a trial
reward, data = training_trial(x, t)
# Log received rewards
logger.add_scalar("Reward", reward, trial)
# Log outputs depending on the task
if trial%len(stimuli) == 0:
label = "GPi activity/A"
elif trial%len(stimuli) == 1:
label = "GPi activity/B"
elif trial%len(stimuli) == 2:
label = "GPi activity/C"
elif trial%len(stimuli) == 3:
label = "GPi activity/D"
logger.add_scalars(label, {"Left neuron": gpi.r[0], "Right neuron": gpi.r[1]}, trial)
# Log striatal activity as a 2*5 image
logger.add_image("Activity/Striatum", striatum.r.reshape((2, 5)), trial)
# Log histogram of cortico-striatal weights
w = np.array(cx_str.w)
logger.add_histogram("Cortico-striatal weights/Left - AB/CD", np.mean(w[:5, :2] - w[:5, 2:], axis=1), trial)
logger.add_histogram("Cortico-striatal weights/Right - AB/CD", np.mean(w[5:, :2] - w[5:, 2:], axis=1), trial)
# Log matplotlib figure of GPi activity
fig = plt.figure(figsize=(10, 8))
plt.plot(data[:, 0], label="left")
plt.plot(data[:, 1], label="right")
plt.legend()
logger.add_figure("Activity/GPi", fig, trial)
"""
Explanation: The whole training procedure will simply iterate over the four stimuli for 100 trials:
python
for trial in range(100):
# Get a stimulus
x, t = stimuli[trial%len(stimuli)]
# Perform a trial
reward, data = training_trial(x, t)
We use the Logger class of the tensorboard extension to keep track of various data:
python
with Logger() as logger:
for trial in range(100):
# Get a stimulus
x, t = stimuli[trial%len(stimuli)]
# Perform a trial
reward, data = training_trial(x, t)
# Log data...
Note that it would be equivalent to manually close the Logger after training:
python
logger = Logger()
for trial in range(100):
# Get a stimulus
x, t = stimuli[trial%len(stimuli)]
# Perform a trial
reward, data = training_trial(x, t)
# Log data...
logger.close()
We log here different quantities, just to demonstrate the different methods of the Logger class:
The reward received after each trial:
python
logger.add_scalar("Reward", reward, trial)
The tag "Reward" will be the name of the plot in tensorboard. reward is the value that will be displayed, while trial is the index of the current trial (x-axis).
The activity of the two GPi cells at the end of the trial, in separate plots depending on the stimulus:
python
if trial%len(stimuli) == 0:
label = "GPi activity/A"
elif trial%len(stimuli) == 1:
label = "GPi activity/B"
elif trial%len(stimuli) == 2:
label = "GPi activity/C"
elif trial%len(stimuli) == 3:
label = "GPi activity/D"
logger.add_scalars(label, {"Left neuron": gpi.r[0], "Right neuron": gpi.r[1]}, trial)
The four plots will be grouped under the label "GPi activity", with a title A, B, C or D. Note that add_scalars() requires a dictionary of values that will be plotted together.
The activity in the striatum as a 2*5 image:
python
logger.add_image("Activity/Striatum", striatum.r.reshape((2, 5)), trial)
The activity should be reshaped to the correct dimensions. Note that activity in the striatum is bounded between 0 and 1, so there is no need for equalization.
An histogram of the preference for the stimuli A and B of striatal cells:
python
w = np.array(cx_str.w)
logger.add_histogram("Cortico-striatal weights/Left - AB/CD", np.mean(w[:5, :2] - w[:5, 2:], axis=1), trial)
logger.add_histogram("Cortico-striatal weights/Right - AB/CD", np.mean(w[5:, :2] - w[5:, 2:], axis=1), trial)
We make here two plots, one for the first 5 striatal cells, the other for the rest. We plot the difference between the mean weights of each cell for the stimuli A and B, and the mean weights for the stimuli C and D. If learning goes well, the first five striatal cells should have stronger weights for A and B than for C and D, as they project to the left GPi cell.
A matplotlib figure showing the time course of the two GPi cells (as recorded by the monitor):
python
fig = plt.figure(figsize=(10, 8))
plt.plot(data[:, 0], label="left")
plt.plot(data[:, 1], label="right")
plt.legend()
logger.add_figure("Activity/GPi", fig, trial)
Note that the figure will be automatically closed by the logger, no need to call show(). Logging figures is extremely slow, use that feature wisely.
By default, the logs are saved in the subfolder runs/, but this can be changed when creating the Logger:
python
with Logger("/tmp/experiment") as logger:
Each run of the network will be saved in this folder. You may want to delete the folder before each run, in order to only visualize the last run:
End of explanation
"""
%load_ext tensorboard
%tensorboard --logdir runs --samples_per_plugin images=100
"""
Explanation: You can now visualize the logged information by running tensorboard in a separate terminal and opening the corresponding page:
bash
tensorboard --logdir runs
or directly in the notebook if you have the tensorboard extension installed:
End of explanation
"""
|
mfinkle/user-data-analytics | fennec-events.ipynb | mit | update_channel = "nightly"
import datetime as dt
import json
import pandas as pd
# Imports assumed from the Mozilla Telemetry analysis environment,
# which also provides the SparkContext `sc` and `sqlContext`.
from moztelemetry import get_pings, get_pings_properties

now = dt.datetime.now()
start = now - dt.timedelta(3)
end = now - dt.timedelta(1)
pings = get_pings(sc, app="Fennec", channel=update_channel,
submission_date=(start.strftime("%Y%m%d"), end.strftime("%Y%m%d")),
build_id=("20100101000000", "99999999999999"),
fraction=1)
subset = get_pings_properties(pings, ["meta/clientId",
"meta/documentId",
"meta/submissionDate",
"payload/UIMeasurements"])
"""
Explanation: Let's collect some data that can occur in multiple pings per client per day. We'll need to aggregate by client+day, then dump the data.
End of explanation
"""
def dedupe_pings(rdd):
return rdd.filter(lambda p: p["meta/clientId"] is not None)\
.map(lambda p: (p["meta/clientId"] + p["meta/documentId"], p))\
.reduceByKey(lambda x, y: x)\
.map(lambda x: x[1])
subset = dedupe_pings(subset)
print subset.first()
"""
Explanation: Take the set of pings, make sure we have actual clientIds and remove duplicate pings.
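The reduceByKey(lambda x, y: x) pattern keeps a single (arbitrary) ping per clientId+documentId key. Outside of Spark, the same idea can be sketched in plain Python; the helper and sample records here are illustrative, not part of the pipeline:

```python
def dedupe_by_key(records, key):
    # Keep one record per key value (the first seen here; Spark's
    # reduceByKey keeps an arbitrary one).
    seen = {}
    for record in records:
        k = key(record)
        if k not in seen:
            seen[k] = record
    return list(seen.values())

pings = [
    {"meta/clientId": "c1", "meta/documentId": "d1", "n": 1},
    {"meta/clientId": "c1", "meta/documentId": "d1", "n": 2},  # duplicate submission
    {"meta/clientId": "c1", "meta/documentId": "d2", "n": 3},
]
unique = dedupe_by_key(pings, key=lambda p: p["meta/clientId"] + p["meta/documentId"])
print(len(unique))  # 2
```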
End of explanation
"""
def safe_str(obj):
""" return the byte string representation of obj """
if obj is None:
return unicode("")
return unicode(obj)
def transform(ping):
output = []
clientId = ping["meta/clientId"] # Should not be None since we filter those out
submissionDate = ping["meta/submissionDate"] # Added via the ingestion process so should not be None
events = ping["payload/UIMeasurements"]
if events:
for event in events:
if event["type"] == "event":
# Force all fields to strings
timestamp = safe_str(event["timestamp"])
action = safe_str(event["action"])
method = safe_str(event["method"])
# The extras is an optional field
extras = unicode("")
if "extras" in event and event["extras"] is not None:
extras = safe_str(event["extras"])
sessions = {}
experiments = []
for session in event["sessions"]:
if "experiment.1:" in session:
experiments.append(safe_str(session[13:]))
elif "firstrun.1:" in session:
sessions[unicode("firstrun")] = 1
elif "awesomescreen.1:" in session:
sessions[unicode("awesomescreen")] = 1
elif "reader.1:" in session:
sessions[unicode("reader")] = 1
output.append([clientId, submissionDate, timestamp, action, method, extras, json.dumps(sessions.keys()), json.dumps(experiments)])
return output
rawEvents = subset.flatMap(transform)
print "Raw count: " + str(rawEvents.count())
print rawEvents.first()
"""
Explanation: We're going to dump each event from the pings. Do a little empty data sanitization so we don't get NoneType errors during the dump. We create a JSON array of active experiments as part of the dump.
End of explanation
"""
def dedupe_events(rdd):
return rdd.map(lambda p: (p[0] + p[2] + p[3] + p[4], p))\
.reduceByKey(lambda x, y: x)\
.map(lambda x: x[1])
uniqueEvents = dedupe_events(rawEvents)
print "Unique count: " + str(uniqueEvents.count())
print uniqueEvents.first()
"""
Explanation: The data can have duplicate events, due to a bug in the data collection that was fixed (bug 1246973). We still need to de-dupe the events. Because pings can be archived on device and submitted on later days, we can't assume dupes only happen on the same submission day. We don't use submission date when de-duping.
End of explanation
"""
grouped = pd.DataFrame(uniqueEvents.collect(), columns=["clientid", "submissiondate", "timestamp", "action", "method", "extras", "sessions", "experiments"])
!mkdir -p ./output
grouped.to_csv("./output/fennec-events-" + update_channel + "-" + end.strftime("%Y%m%d") + ".csv", index=False, encoding="utf-8")
s3_output = "s3n://net-mozaws-prod-us-west-2-pipeline-analysis/mfinkle/android_events"
s3_output += "/v1/channel=" + update_channel + "/end_date=" + end.strftime("%Y%m%d")
grouped = sqlContext.createDataFrame(uniqueEvents, ["clientid", "submissiondate", "timestamp", "action", "method", "extras", "sessions", "experiments"])
grouped.saveAsParquetFile(s3_output)
"""
Explanation: Output the set of events
End of explanation
"""
|
seifip/udacity-deep-learning-nanodegree | batch-norm/Batch_Normalization_Exercises.ipynb | mit | import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True, reshape=False)
"""
Explanation: Batch Normalization – Practice
Batch normalization is most useful when building deep neural networks. To demonstrate this, we'll create a convolutional neural network with 20 convolutional layers, followed by a fully connected layer. We'll use it to classify handwritten digits in the MNIST dataset, which should be familiar to you by now.
This is not a good network for classfying MNIST digits. You could create a much simpler network and get better results. However, to give you hands-on experience with batch normalization, we had to make an example that was:
1. Complicated enough that training would benefit from batch normalization.
2. Simple enough that it would train quickly, since this is meant to be a short exercise just to give you some practice adding batch normalization.
3. Simple enough that the architecture would be easy to understand without additional resources.
This notebook includes two versions of the network that you can edit. The first uses higher level functions from the tf.layers package. The second is the same network, but uses only lower level functions in the tf.nn package.
Batch Normalization with tf.layers.batch_normalization
Batch Normalization with tf.nn.batch_normalization
The following cell loads TensorFlow, downloads the MNIST dataset if necessary, and loads it into an object named mnist. You'll need to run this cell before running anything else in the notebook.
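As a quick refresher before we start: batch normalization standardizes each feature using the statistics of the current batch, then applies a learned scale (gamma) and shift (beta). Here is a minimal NumPy sketch of the training-time transform (the function name and epsilon value are our own, for illustration only):

```python
import numpy as np

def batch_norm_train(x, gamma, beta, epsilon=1e-5):
    # Standardize each feature (column) with the batch mean/variance,
    # then scale by gamma and shift by beta (both learned in practice).
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + epsilon)
    return gamma * x_hat + beta

x = np.random.randn(64, 10) * 5.0 + 3.0   # a batch with non-zero mean and variance
out = batch_norm_train(x, gamma=np.ones(10), beta=np.zeros(10))
print(out.mean(), out.std())  # close to 0 and 1
```

The tf.layers and tf.nn versions below do the same math, plus bookkeeping for population statistics used at inference time.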
End of explanation
"""
"""
DO NOT MODIFY THIS CELL
"""
def fully_connected(prev_layer, num_units):
"""
Create a fully connectd layer with the given layer as input and the given number of neurons.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param num_units: int
The size of the layer. That is, the number of units, nodes, or neurons.
:returns Tensor
A new fully connected layer
"""
layer = tf.layers.dense(prev_layer, num_units, activation=tf.nn.relu)
return layer
"""
Explanation: Batch Normalization using tf.layers.batch_normalization<a id="example_1"></a>
This version of the network uses tf.layers for almost everything, and expects you to implement batch normalization using tf.layers.batch_normalization
We'll use the following function to create fully connected layers in our network. We'll create them with the specified number of neurons and a ReLU activation function.
This version of the function does not include batch normalization.
End of explanation
"""
"""
DO NOT MODIFY THIS CELL
"""
def conv_layer(prev_layer, layer_depth):
"""
Create a convolutional layer with the given layer as input.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param layer_depth: int
We'll set the strides and number of feature maps based on the layer's depth in the network.
This is *not* a good way to make a CNN, but it helps us create this example with very little code.
:returns Tensor
A new convolutional layer
"""
strides = 2 if layer_depth % 3 == 0 else 1
conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', activation=tf.nn.relu)
return conv_layer
"""
Explanation: We'll use the following function to create convolutional layers in our network. They are very basic: we're always using a 3x3 kernel, ReLU activation functions, strides of 1x1 on layers with odd depths, and strides of 2x2 on layers with even depths. We aren't bothering with pooling layers at all in this network.
This version of the function does not include batch normalization.
End of explanation
"""
"""
DO NOT MODIFY THIS CELL
"""
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100)
# Create the output layer with 1 node for each class
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]]})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
"""
Explanation: Run the following cell, along with the earlier cells (to load the dataset and define the necessary functions).
This cell builds the network without batch normalization, then trains it on the MNIST dataset. It displays loss and accuracy data periodically while training.
End of explanation
"""
def fully_connected(prev_layer, num_units, is_training):
"""
Create a fully connectd layer with the given layer as input and the given number of neurons.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param num_units: int
The size of the layer. That is, the number of units, nodes, or neurons.
:param is_training: bool or Tensor
Indicates whether or not the network is currently training, which tells the batch normalization
layer whether or not it should update or use its population statistics.
:returns Tensor
A new fully connected layer
"""
layer = tf.layers.dense(prev_layer, num_units, use_bias=False, activation=None)
layer = tf.layers.batch_normalization(layer, training=is_training)
layer = tf.nn.relu(layer)
return layer
"""
Explanation: With this many layers, it's going to take a lot of iterations for this network to learn. By the time you're done training these 800 batches, your final test and validation accuracies probably won't be much better than 10%. (It will be different each time, but will most likely be less than 15%.)
Using batch normalization, you'll be able to train this same network to over 90% in that same number of batches.
Add batch normalization
We've copied the previous three cells to get you started. Edit these cells to add batch normalization to the network. For this exercise, you should use tf.layers.batch_normalization to handle most of the math, but you'll need to make a few other changes to your network to integrate batch normalization. You may want to refer back to the lesson notebook to remind yourself of important things, like how your graph operations need to know whether or not you are performing training or inference.
If you get stuck, you can check out the Batch_Normalization_Solutions notebook to see how we did things.
TODO: Modify fully_connected to add batch normalization to the fully connected layers it creates. Feel free to change the function's parameters if it helps.
End of explanation
"""
def conv_layer(prev_layer, layer_depth, is_training):
"""
Create a convolutional layer with the given layer as input.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param layer_depth: int
We'll set the strides and number of feature maps based on the layer's depth in the network.
This is *not* a good way to make a CNN, but it helps us create this example with very little code.
:returns Tensor
A new convolutional layer
"""
strides = 2 if layer_depth % 3 == 0 else 1
conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', use_bias=False, activation=None)
conv_layer = tf.layers.batch_normalization(conv_layer, training=is_training)
conv_layer = tf.nn.relu(conv_layer)
return conv_layer
"""
Explanation: TODO: Modify conv_layer to add batch normalization to the convolutional layers it creates. Feel free to change the function's parameters if it helps.
End of explanation
"""
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
#Placeholder for training Boolean
is_training = tf.placeholder(tf.bool, name="is_training")
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i, is_training)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100, is_training)
# Create the output layer with 1 node for each class
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys, is_training: True})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels,
is_training: False})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys, is_training: False})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels,
is_training: False})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels,
is_training: False})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]],
is_training: False})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
"""
Explanation: TODO: Edit the train function to support batch normalization. You'll need to make sure the network knows whether or not it is training, and you'll need to make sure it updates and uses its population statistics correctly.
End of explanation
"""
def fully_connected(prev_layer, num_units):
"""
Create a fully connected layer with the given layer as input and the given number of neurons.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param num_units: int
The size of the layer. That is, the number of units, nodes, or neurons.
:returns Tensor
A new fully connected layer
"""
layer = tf.layers.dense(prev_layer, num_units, activation=tf.nn.relu)
return layer
"""
Explanation: With batch normalization, you should now get an accuracy over 90%. Notice also the last line of the output: Accuracy on 100 samples. If this value is low while everything else looks good, that means you did not implement batch normalization correctly. Specifically, it means you either did not calculate the population mean and variance while training, or you are not using those values during inference.
Batch Normalization using tf.nn.batch_normalization<a id="example_2"></a>
Most of the time you will be able to use higher level functions exclusively, but sometimes you may want to work at a lower level. For example, if you ever want to implement a new feature – something new enough that TensorFlow does not already include a high-level implementation of it, like batch normalization in an LSTM – then you may need to know these sorts of things.
This version of the network uses tf.nn for almost everything, and expects you to implement batch normalization using tf.nn.batch_normalization.
Optional TODO: You can run the next three cells before you edit them just to see how the network performs without batch normalization. However, the results should be pretty much the same as you saw with the previous example before you added batch normalization.
TODO: Modify fully_connected to add batch normalization to the fully connected layers it creates. Feel free to change the function's parameters if it helps.
Note: For convenience, we continue to use tf.layers.dense for the fully_connected layer. By this point in the class, you should have no problem replacing that with matrix operations between the prev_layer and explicit weights and biases variables.
End of explanation
"""
def conv_layer(prev_layer, layer_depth):
"""
Create a convolutional layer with the given layer as input.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param layer_depth: int
We'll set the strides and number of feature maps based on the layer's depth in the network.
This is *not* a good way to make a CNN, but it helps us create this example with very little code.
:returns Tensor
A new convolutional layer
"""
strides = 2 if layer_depth % 3 == 0 else 1
in_channels = prev_layer.get_shape().as_list()[3]
out_channels = layer_depth*4
weights = tf.Variable(
tf.truncated_normal([3, 3, in_channels, out_channels], stddev=0.05))
bias = tf.Variable(tf.zeros(out_channels))
conv_layer = tf.nn.conv2d(prev_layer, weights, strides=[1,strides, strides, 1], padding='SAME')
conv_layer = tf.nn.bias_add(conv_layer, bias)
conv_layer = tf.nn.relu(conv_layer)
return conv_layer
"""
Explanation: TODO: Modify conv_layer to add batch normalization to the fully connected layers it creates. Feel free to change the function's parameters if it helps.
Note: Unlike in the previous example that used tf.layers, adding batch normalization to these convolutional layers does require some slight differences to what you did in fully_connected.
End of explanation
"""
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100)
# Create the output layer with 1 node for each class
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]]})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
"""
Explanation: TODO: Edit the train function to support batch normalization. You'll need to make sure the network knows whether or not it is training.
End of explanation
"""
|
AllenDowney/ModSimPy | notebooks/hopper.ipynb | mit | # If you want the figures to appear in the notebook,
# and you want to interact with them, use
# %matplotlib notebook
# If you want the figures to appear in the notebook,
# and you don't want to interact with them, use
# %matplotlib inline
# If you want the figures to appear in separate windows, use
# %matplotlib qt5
# To switch from one to another, you have to select Kernel->Restart
%matplotlib inline
from modsim import *
kg = UNITS.kilogram
m = UNITS.meter
s = UNITS.second
N = UNITS.newton
condition = Condition(mass = 0.03 * kg,
fraction = 1 / 3,
k = 9810.0 * N / m,
duration = 0.3 * s,
L = 0.05 * m,
d = 0.005 * m,
v1 = 0 * m / s,
v2 = 0 * m / s,
g = 9.8 * m / s**2)
condition = Condition(mass = 0.03,
fraction = 1 / 3,
k = 9810.0,
duration = 0.3,
L = 0.05,
d = 0.005,
v1 = 0,
v2 = 0,
g = 9.8)
def make_system(condition):
"""Make a system object.
condition: Condition with mass, fraction, k, duration, L, d, v1, v2, g
returns: System with init, m1, m2, k, L, ts
"""
unpack(condition)
x1 = L - d # upper mass
x2 = 0 # lower mass
init = State(x1=x1, x2=x2, v1=v1, v2=v2)
m1, m2 = fraction*mass, (1-fraction)*mass
ts = linspace(0, duration, 1001)
return System(init=init, m1=m1, m2=m2, k=k, L=L, ts=ts)
"""
Explanation: Modeling and Simulation in Python
Case study: Hopper optimization
Copyright 2017 Allen Downey
License: Creative Commons Attribution 4.0 International
End of explanation
"""
system = make_system(condition)
system
system.init
def slope_func(state, t, system):
"""Computes the derivatives of the state variables.
state: State object with x1, x2, v1, v2
t: time
system: System object with m1, m2, k, L
returns: sequence of derivatives
"""
x1, x2, v1, v2 = state
unpack(system)
dx = x1 - x2
f_spring = k * (L - dx)
a1 = f_spring/m1 - g
a2 = -f_spring/m2 - g
if t < 0.003 and a2 < 0:
a2 = 0
return v1, v2, a1, a2
"""
Explanation: Testing make_system
End of explanation
"""
slope_func(system.init, 0, system)
"""
Explanation: Testing slope_func
End of explanation
"""
run_odeint(system, slope_func)
system.results.tail()
plot(system.results.x1)
plot(system.results.x2)
plot(system.results.x1 - system.results.x2)
decorate(xlabel='Time (s)',
ylabel='Length (m)')
"""
Explanation: Now we can run the simulation.
End of explanation
"""
plot(rs, color='red', label='r')
decorate(xlabel='Time (s)',
ylabel='Radius (mm)')
"""
Explanation: Plotting r
End of explanation
"""
plot(rs, ys, color='purple')
decorate(xlabel='Radius (mm)',
ylabel='Length (m)',
legend=False)
"""
Explanation: We can also see the relationship between y and r, which I derive analytically in the book.
End of explanation
"""
subplot(3, 1, 1)
plot(thetas, label='theta')
decorate(ylabel='Angle (rad)')
subplot(3, 1, 2)
plot(ys, color='green', label='y')
decorate(ylabel='Length (m)')
subplot(3, 1, 3)
plot(rs, color='red', label='r')
decorate(xlabel='Time(s)',
ylabel='Radius (mm)')
savefig('chap11-fig01.pdf')
"""
Explanation: And here's the figure from the book.
End of explanation
"""
T = interp_inverse(ys, kind='cubic')
t_end = T(47)
t_end
"""
Explanation: We can use interpolation to find the time when y is 47 meters.
End of explanation
"""
R = interpolate(rs, kind='cubic')
R(t_end)
"""
Explanation: At that point r is 55 mm, which is Rmax, as expected.
End of explanation
"""
THETA = interpolate(thetas, kind='cubic')
THETA(t_end)
"""
Explanation: The total amount of rotation is 1253 rad.
End of explanation
"""
kg = UNITS.kilogram
N = UNITS.newton
"""
Explanation: Unrolling
For unrolling the paper, we need more units:
End of explanation
"""
condition = Condition(Rmin = 0.02 * m,
Rmax = 0.055 * m,
Mcore = 15e-3 * kg,
Mroll = 215e-3 * kg,
L = 47 * m,
tension = 2e-4 * N,
duration = 180 * s)
"""
Explanation: And a few more parameters in the Condition object.
End of explanation
"""
def make_system(condition):
"""Make a system object.
condition: Condition with Rmin, Rmax, Mcore, Mroll,
L, tension, and duration
returns: System with init, k, rho_h, Rmin, Rmax,
Mcore, Mroll, ts
"""
unpack(condition)
init = State(theta = 0 * radian,
omega = 0 * radian/s,
y = L)
area = pi * (Rmax**2 - Rmin**2)
rho_h = Mroll / area
k = (Rmax**2 - Rmin**2) / 2 / L / radian
ts = linspace(0, duration, 101)
return System(init=init, k=k, rho_h=rho_h,
Rmin=Rmin, Rmax=Rmax,
Mcore=Mcore, Mroll=Mroll,
ts=ts)
"""
Explanation: make_system computes rho_h, which we'll need to compute moment of inertia, and k, which we'll use to compute r.
End of explanation
"""
system = make_system(condition)
system
system.init
"""
Explanation: Testing make_system
End of explanation
"""
def moment_of_inertia(r, system):
"""Moment of inertia for a roll of toilet paper.
r: current radius of roll in meters
system: System object with Mcore, rho, Rmin, Rmax
returns: moment of inertia in kg m**2
"""
unpack(system)
Icore = Mcore * Rmin**2
Iroll = pi * rho_h / 2 * (r**4 - Rmin**4)
return Icore + Iroll
"""
Explanation: Here's how we compute I as a function of r:
End of explanation
"""
moment_of_inertia(system.Rmin, system)
"""
Explanation: When r is Rmin, I is small.
End of explanation
"""
moment_of_inertia(system.Rmax, system)
"""
Explanation: As r increases, so does I.
End of explanation
"""
def slope_func(state, t, system):
"""Computes the derivatives of the state variables.
state: State object with theta, omega, y
t: time
system: System object with Rmin, k, Mcore, rho_h, tension
returns: sequence of derivatives
"""
theta, omega, y = state
unpack(system)
r = sqrt(2*k*y + Rmin**2)
I = moment_of_inertia(r, system)
tau = r * tension
alpha = tau / I
dydt = -r * omega
return omega, alpha, dydt
"""
Explanation: Here's the slope function.
End of explanation
"""
slope_func(system.init, 0*s, system)
"""
Explanation: Testing slope_func
End of explanation
"""
run_odeint(system, slope_func)
"""
Explanation: Now we can run the simulation.
End of explanation
"""
system.results.tail()
"""
Explanation: And look at the results.
End of explanation
"""
thetas = system.results.theta
omegas = system.results.omega
ys = system.results.y
"""
Explanation: Extracting the time series
End of explanation
"""
plot(thetas, label='theta')
decorate(xlabel='Time (s)',
ylabel='Angle (rad)')
"""
Explanation: Plotting theta
End of explanation
"""
plot(omegas, color='orange', label='omega')
decorate(xlabel='Time (s)',
ylabel='Angular velocity (rad/s)')
"""
Explanation: Plotting omega
End of explanation
"""
plot(ys, color='green', label='y')
decorate(xlabel='Time (s)',
ylabel='Length (m)')
"""
Explanation: Plotting y
End of explanation
"""
subplot(3, 1, 1)
plot(thetas, label='theta')
decorate(ylabel='Angle (rad)')
subplot(3, 1, 2)
plot(omegas, color='orange', label='omega')
decorate(ylabel='Angular velocity (rad/s)')
subplot(3, 1, 3)
plot(ys, color='green', label='y')
decorate(xlabel='Time(s)',
ylabel='Length (m)')
savefig('chap11-fig02.pdf')
"""
Explanation: Here's the figure from the book.
End of explanation
"""
condition = Condition(Rmin = 8e-3 * m,
Rmax = 16e-3 * m,
Rout = 35e-3 * m,
mass = 50e-3 * kg,
L = 1 * m,
g = 9.8 * m / s**2,
duration = 1 * s)
"""
Explanation: Yo-yo
Exercise: Simulate the descent of a yo-yo. How long does it take to reach the end of the string?
I provide a Condition object with the system parameters:
Rmin is the radius of the axle.
Rmax is the radius of the axle plus rolled string.
Rout is the radius of the yo-yo body.
mass is the total mass of the yo-yo, ignoring the string.
L is the length of the string.
g is the acceleration of gravity.
End of explanation
"""
def make_system(condition):
"""Make a system object.
condition: Condition with Rmin, Rmax, Rout,
mass, L, g, duration
returns: System with init, k, Rmin, Rmax, mass,
I, g, ts
"""
unpack(condition)
init = State(theta = 0 * radian,
omega = 0 * radian/s,
y = L,
v = 0 * m / s)
I = mass * Rout**2 / 2
k = (Rmax**2 - Rmin**2) / 2 / L / radian
ts = linspace(0, duration, 101)
return System(init=init, k=k,
Rmin=Rmin, Rmax=Rmax,
mass=mass, I=I, g=g,
ts=ts)
"""
Explanation: Here's a make_system function that computes I and k based on the system parameters.
I estimated I by modeling the yo-yo as a solid cylinder with uniform density (see here). In reality, the distribution of weight in a yo-yo is often designed to achieve desired effects. But we'll keep it simple.
End of explanation
"""
system = make_system(condition)
system
system.init
"""
Explanation: Testing make_system
End of explanation
"""
# Solution goes here
"""
Explanation: Write a slope function for this system, using these results from the book:
$ r = \sqrt{2 k y + R_{min}^2} $
$ T = m g I / I^* $
$ a = -m g r^2 / I^* $
$ \alpha = m g r / I^* $
where $I^*$ is the augmented moment of inertia, $I + m r^2$.
Hint: If y is less than 0, it means you have reached the end of the string, so the equation for r is no longer valid. In this case, the simplest thing to do is return the sequence of derivatives 0, 0, 0, 0.
End of explanation
"""
slope_func(system.init, 0*s, system)
"""
Explanation: Test your slope function with the initial conditions.
End of explanation
"""
run_odeint(system, slope_func)
"""
Explanation: Then run the simulation.
End of explanation
"""
system.results.tail()
"""
Explanation: Check the final conditions. If things have gone according to plan, the final value of y should be close to 0.
End of explanation
"""
thetas = system.results.theta
ys = system.results.y
"""
Explanation: Plot the results.
End of explanation
"""
plot(thetas, label='theta')
decorate(xlabel='Time (s)',
ylabel='Angle (rad)')
"""
Explanation: theta should increase and accelerate.
End of explanation
"""
plot(ys, color='green', label='y')
decorate(xlabel='Time (s)',
ylabel='Length (m)')
"""
Explanation: y should decrease and accelerate down.
End of explanation
"""
|
isb-cgc/examples-Python | notebooks/UNC HiSeq mRNAseq gene expression.ipynb | apache-2.0 | import gcp.bigquery as bq
mRNAseq_BQtable = bq.Table('isb-cgc:tcga_201607_beta.mRNA_UNC_HiSeq_RSEM')
"""
Explanation: UNC HiSeq mRNAseq gene expression (RSEM)
The goal of this notebook is to introduce you to the mRNAseq gene expression BigQuery table.
This table contains all available TCGA Level-3 gene expression data produced by UNC's RNAseqV2 pipeline using the Illumina HiSeq platform, as of July 2016. The most recent archive (eg unc.edu_BRCA.IlluminaHiSeq_RNASeqV2.Level_3.1.11.0) for each of the 33 tumor types was downloaded from the DCC, and data extracted from all files matching the pattern %.rsem.genes.normalized_results. Each of these raw “RSEM genes normalized results” files has two columns: gene_id and normalized_count. The gene_id string contains two parts: the gene symbol, and the Entrez gene ID, separated by | eg: TP53|7157. During ETL, the gene_id string is split and the gene symbol is stored in the original_gene_symbol field, and the Entrez gene ID is stored in the gene_id field. In addition, the Entrez ID is used to look up the current HGNC approved gene symbol, which is stored in the HGNC_gene_sybmol field.
In order to work with BigQuery, you need to import the python bigquery module (gcp.bigquery) and you need to know the name(s) of the table(s) you are going to be working with:
End of explanation
"""
%bigquery schema --table $mRNAseq_BQtable
"""
Explanation: From now on, we will refer to this table using this variable ($mRNAseq_BQtable), but we could just as well explicitly give the table name each time.
Let's start by taking a look at the table schema:
End of explanation
"""
%%sql --module count_unique
DEFINE QUERY q1
SELECT COUNT (DISTINCT $f, 25000) AS n
FROM $t
fieldList = ['ParticipantBarcode', 'SampleBarcode', 'AliquotBarcode']
for aField in fieldList:
field = mRNAseq_BQtable.schema[aField]
rdf = bq.Query(count_unique.q1,t=mRNAseq_BQtable,f=field).results().to_dataframe()
print " There are %6d unique values in the field %s. " % ( rdf.iloc[0]['n'], aField)
"""
Explanation: Now let's count up the number of unique patients, samples and aliquots mentioned in this table. We will do this by defining a very simple parameterized query. (Note that when using a variable for the table name in the FROM clause, you should not also use the square brackets that you usually would if you were specifying the table name as a string.)
End of explanation
"""
fieldList = ['original_gene_symbol', 'HGNC_gene_symbol', 'gene_id']
for aField in fieldList:
field = mRNAseq_BQtable.schema[aField]
rdf = bq.Query(count_unique.q1,t=mRNAseq_BQtable,f=field).results().to_dataframe()
print " There are %6d unique values in the field %s. " % ( rdf.iloc[0]['n'], aField)
"""
Explanation: We can do the same thing to look at how many unique gene symbols and gene ids exist in the table:
End of explanation
"""
%%sql
SELECT
HGNC_gene_symbol,
original_gene_symbol,
gene_id
FROM
$mRNAseq_BQtable
WHERE
( original_gene_symbol IS NOT NULL
AND HGNC_gene_symbol IS NOT NULL
AND original_gene_symbol=HGNC_gene_symbol
AND gene_id IS NOT NULL )
GROUP BY
original_gene_symbol,
HGNC_gene_symbol,
gene_id
ORDER BY
HGNC_gene_symbol
"""
Explanation: Based on the counts, we can see that there are a few instances where the original gene symbol (from the underlying TCGA data file), or the HGNC gene symbol or the gene id (also from the original TCGA data file) is missing, but for the majority of genes, all three values should be available and for the most part the original gene symbol and the HGNC gene symbol that was added during ETL should all match up. This next query will generate the complete list of genes for which none of the identifiers are null, and where the original gene symbol and the HGNC gene symbol match. This list has over 18000 genes in it.
End of explanation
"""
%%sql
SELECT
HGNC_gene_symbol,
original_gene_symbol,
gene_id
FROM
$mRNAseq_BQtable
WHERE
( original_gene_symbol IS NOT NULL
AND HGNC_gene_symbol IS NOT NULL
AND original_gene_symbol!=HGNC_gene_symbol
AND gene_id IS NOT NULL )
GROUP BY
original_gene_symbol,
HGNC_gene_symbol,
gene_id
ORDER BY
HGNC_gene_symbol
"""
Explanation: We might also want to know how often the gene symbols do not agree:
End of explanation
"""
%%sql
SELECT
Study,
n,
exp_mean,
exp_sigma,
(exp_sigma/exp_mean) AS exp_cv
FROM (
SELECT
Study,
AVG(LOG2(normalized_count+1)) AS exp_mean,
STDDEV_POP(LOG2(normalized_count+1)) AS exp_sigma,
COUNT(AliquotBarcode) AS n
FROM
$mRNAseq_BQtable
WHERE
( SampleTypeLetterCode="TP"
AND HGNC_gene_symbol="EGFR" )
GROUP BY
Study )
ORDER BY
exp_sigma DESC
"""
Explanation: BigQuery is not just a "look-up" service -- you can also use it to perform calculations. In this next query, we take a look at the mean, standard deviation, and coefficient of variation for the expression of EGFR, within each tumor-type, as well as the number of primary tumor samples that went into each summary statistic.
End of explanation
"""
%%sql --module highVar
SELECT
Study,
HGNC_gene_symbol,
n,
exp_mean,
exp_sigma,
(exp_sigma/exp_mean) AS exp_cv
FROM (
SELECT
Study,
HGNC_gene_symbol,
AVG(LOG2(normalized_count+1)) AS exp_mean,
STDDEV_POP(LOG2(normalized_count+1)) AS exp_sigma,
COUNT(AliquotBarcode) AS n
FROM
$t
WHERE
( SampleTypeLetterCode="TP" )
GROUP BY
Study,
HGNC_gene_symbol )
ORDER BY
exp_sigma DESC
"""
Explanation: We can also easily move the gene-symbol out of the WHERE clause and into the SELECT and GROUP BY clauses and have BigQuery do this same calculation over all genes and all tumor types. This time we will use the --module option to define the query and then call it in the next cell from python.
End of explanation
"""
q = bq.Query(highVar,t=mRNAseq_BQtable)
print q.sql
"""
Explanation: Once we have defined a query, we can put it into a python object and print out the SQL statement to make sure it looks as expected:
End of explanation
"""
r = bq.Query(highVar,t=mRNAseq_BQtable).results()
#r.to_dataframe()
"""
Explanation: And then we can run it and save the results in another python object:
End of explanation
"""
%%sql --module hv_genes
SELECT *
FROM ( $hv_result )
HAVING
( exp_mean > 6.
AND n >= 200
AND exp_cv > 0.5 )
ORDER BY
exp_cv DESC
bq.Query(hv_genes,hv_result=r).results().to_dataframe()
"""
Explanation: Since the result of the previous query is quite large (over 600,000 rows representing ~20,000 genes x ~30 tumor types), we might want to put those results into one or more subsequent queries that further refine these results, for example:
End of explanation
"""
|
jan-rybizki/Chempy | tutorials/2-Nucleosynthetic_yields.ipynb | mit | %pylab inline
from Chempy.parameter import ModelParameters
from Chempy.yields import SN2_feedback, AGB_feedback, SN1a_feedback, Hypernova_feedback
from Chempy.infall import PRIMORDIAL_INFALL, INFALL
# This loads the default parameters, you can check and change them in paramter.py
a = ModelParameters()
# Implemented SN Ia yield tables
a.yield_table_name_1a_list
# AGB yields implemented
a.yield_table_name_agb_list
# CC-SN yields implemented
a.yield_table_name_sn2_list
# Hypernova yields (is mixed with Nomoto2013 CC-SN yields for stars more massive than 25Msun)
a.yield_table_name_hn_list
# Here we show the available mass and metallicity range for each yield set
# First for CC-SNe
print('Available CC-SN yield parameter range')
for item in a.yield_table_name_sn2_list:
basic_sn2 = SN2_feedback()
getattr(basic_sn2, item)()
print('----------------------------------')
print('yield table name: ',item)
print('provided masses: ', basic_sn2.masses)
print('provided metallicities',basic_sn2.metallicities)
"""
Explanation: Nucleosynthetic yields
These are key to every chemical evolution model. Chempy supports three nucleosynthetic channels at the moment:
- Core-Collapse Supernova (CC-SN)
- Supernova of type Ia (SN Ia)
- Winds from Asymptotic Giant Branch phase of stars (AGB)
End of explanation
"""
# Then for Hypernovae
print('Available HN yield parameter range')
for item in a.yield_table_name_hn_list:
basic_hn = Hypernova_feedback()
getattr(basic_hn, item)()
print('----------------------------------')
print('yield table name: ',item)
print('provided masses: ', basic_hn.masses)
print('provided metallicities',basic_hn.metallicities)
# Here for AGB stars
print('Available AGB yield parameter range')
for item in a.yield_table_name_agb_list:
basic_agb = AGB_feedback()
getattr(basic_agb, item)()
print('----------------------------------')
print('yield table name: ',item)
print('provided masses: ', basic_agb.masses)
print('provided metallicities',basic_agb.metallicities)
# And for SN Ia
print('Available SN Ia yield parameter range')
for item in a.yield_table_name_1a_list:
basic_1a = SN1a_feedback()
getattr(basic_1a, item)()
print('----------------------------------')
print('yield table name: ',item)
print('provided masses: ', basic_1a.masses)
print('provided metallicities',basic_1a.metallicities)
from Chempy.data_to_test import elements_plot
from Chempy.solar_abundance import solar_abundances
"""
Explanation: Hypernova (HN) yields are only provided for the Nomoto 2013 CC-SN yields, and they are mixed 50/50 with the CC-SN yields for stars with mass >= 25 Msun
End of explanation
"""
# To get the element list we initialise the solar abundance class
basic_solar = solar_abundances()
# we load the default yield set:
basic_sn2 = SN2_feedback()
getattr(basic_sn2, "Nomoto2013")()
basic_1a = SN1a_feedback()
getattr(basic_1a, "Seitenzahl")()
basic_agb = AGB_feedback()
getattr(basic_agb, "Karakas_net_yield")()
#Now we plot the elements available for the default yield set and which elements are available for specific surveys and come from which nucleosynthetic channel
elements_plot('default', basic_agb.elements,basic_sn2.elements,basic_1a.elements,['C','N','O'], basic_solar.table,40)
# Then we load the alternative yield set:
basic_sn2 = SN2_feedback()
getattr(basic_sn2, "chieffi04")()
basic_1a = SN1a_feedback()
getattr(basic_1a, "Thielemann")()
basic_agb = AGB_feedback()
getattr(basic_agb, "Ventura_net")()
#And again plot the elements available
elements_plot('alternative', basic_agb.elements,basic_sn2.elements,basic_1a.elements,['C','N','O'], basic_solar.table,40)
"""
Explanation: Element availability
Usually not all elements are provided by a yield table. We have a handy plotting routine to show which elements are given. We check this for both the default and the alternative yield tables.
End of explanation
"""
# We need solar abundances for normalisation of the feedback
basic_solar.Asplund09()
# Then we plot the [Mg/Fe] of Nomoto+ 2013 for all masses and metallicities
from Chempy.data_to_test import yield_plot
basic_sn2 = SN2_feedback()
getattr(basic_sn2, "Nomoto2013")()
yield_plot('Nomoto+2013', basic_sn2, basic_solar, 'Mg')
# And we plot the same for Chieffi+ 2004 CC-yields
basic_sn2 = SN2_feedback()
getattr(basic_sn2, "chieffi04")()
yield_plot('Chieffi+04', basic_sn2, basic_solar, 'Mg')
"""
Explanation: CC-SN yields
Here we visualise the yield in [X/Fe] for the whole grid of masses and metallicities for two different yield sets
- Interestingly, CC-SN ejecta can be Solar in their alpha-enhancement for low-mass progenitors (13 Msun)
- This effect is even stronger for the Chieffi04 yields
End of explanation
"""
# Now we plot a comparison for different elements between Nomoto+ 2013 and Chieffi+ 2004 CC-yields:
# You can look into the output/ folder and see the comparison for all those elements
from Chempy.data_to_test import yield_comparison_plot
basic_sn2 = SN2_feedback()
getattr(basic_sn2, "Nomoto2013")()
basic_sn2_chieffi = SN2_feedback()
getattr(basic_sn2_chieffi, "chieffi04")()
for element in ['C', 'N', 'O', 'Mg', 'Ca', 'Na', 'Al', 'Mn','Ti']:
yield_comparison_plot('Nomoto13', 'Chieffi04', basic_sn2, basic_sn2_chieffi, basic_solar, element)
"""
Explanation: Yield comparison
We can plot the differences of the two yield tables for different elements (They are copied into the output/ folder). Here only the result for Ti is displayed.
End of explanation
"""
# We can also plot a comparison between Karakas+ 2010 and Ventura+ 2013 AGB-yields
# Here we plot the fractional N yield
from Chempy.data_to_test import fractional_yield_comparison_plot
basic_agb = AGB_feedback()
getattr(basic_agb, "Karakas_net_yield")()
basic_agb_ventura = AGB_feedback()
getattr(basic_agb_ventura, "Ventura_net")()
fractional_yield_comparison_plot('Karakas10', 'Ventura13', basic_agb, basic_agb_ventura, basic_solar, 'N')
#The next line produces an error in the 0.2 version. Needs checking
#fractional_yield_comparison_plot('Karakas10', 'Ventura13', basic_agb, basic_agb_ventura, basic_solar, 'C')
"""
Explanation: AGB yield comparison
We have a look at the Carbon and Nitrogen yields.
We see that high-mass AGB stars produce a smaller fraction of C than low-mass AGB stars, and vice versa for N. The C/N ratio should therefore be IMF-sensitive.
End of explanation
"""
# Different entries of the yield table are queried
print('Mass, Remnant mass fraction, Unprocessed mass in winds fraction, destroyed Hydrogen of total mass')
for i in range(len(basic_agb.masses)):
print(basic_agb.table[0.02]['Mass'][i],basic_agb.table[0.02]['mass_in_remnants'][i],basic_agb.table[0.02]['unprocessed_mass_in_winds'][i],basic_agb.table[0.02]['H'][i])
"""
Explanation: Yield table query and remnant fraction
Here you see how the yield tables are queried (the table is indexed by the metallicity).
For net yields, the remnant mass fraction plus the 'unprocessed mass in winds' fraction sums to unity.
The changes come from destroyed Hydrogen that is fused into other elements.
End of explanation
"""
# Here we compare the yields for different iron-peak elements for Seitenzahl+ 2013 and Thielemann+ 2003 SNIa tables
basic_1a = SN1a_feedback()
getattr(basic_1a, 'Seitenzahl')()
basic_1a_alternative = SN1a_feedback()
getattr(basic_1a_alternative, 'Thielemann')()
print('Mass fraction of SN1a ejecta: Cr, Mn, Fe and Ni')
print('Seitenzahl2013')
print(basic_1a.table[0.02]['Cr'],basic_1a.table[0.02]['Mn'],basic_1a.table[0.02]['Fe'],basic_1a.table[0.02]['Ni'])
print('Thielemann2003')
print(basic_1a_alternative.table[0.02]['Cr'],basic_1a_alternative.table[0.02]['Mn'],basic_1a_alternative.table[0.02]['Fe'],basic_1a_alternative.table[0.02]['Ni'])
"""
Explanation: SN Ia yields
Here we see that the SNIa ejecta differ quite strongly for our two yield tables
End of explanation
"""
|
chrlttv/Teaching | Session2/Clustering.ipynb | mit | import pandas as pd
import numpy as np
df = pd.read_csv('NAm2.txt', sep=" ")
print(df.head())
print(df.shape)
# List of populations/tribes
tribes = df.Pop.unique()
country = df.Country.unique()
print(tribes)
print(country)
# The features that we need for clustering starts from the 9th one
# Subset of the dataframe
df_micro = df.iloc[0:494,8:5717]
df_micro.shape
"""
Explanation: Clustering with K-means
In the unsupervised setting, one of the most straightforward tasks we can perform is to find groups of data instances which are similar between each other. We call such groups of data points clusters.
We position ourselves in the setting where we have access to a dataset $D$ that consists of instances $x \in \mathbb{R}^n$. For example, if our instances have two features $x_1$ and $x_2$ we are in the $\mathbb{R}^2$ space. For simplicity and visualization purposes in this session, we assume our data to be 2-dimensional. That said, the method (as well as the implementation) generalizes to more dimensions in a straightforward way.
$k$-Means is one of the most popular and representative "clustering" algorithms. $k$-means stores $k$ centroids, that is points in the $n$-dimensional space which are then used to define clusters. A point is considered to be in a particular cluster if it is closer to that cluster's centroid than any other centroid.
The optimization algorithm
The most common algorithm uses an iterative refinement technique. $k$-means is a special case of the Expectation-Maximization (EM) algorithm for clustering; it is also referred to as Lloyd's algorithm.
Given an initial set of $k$ centroids $m_1(1), \ldots, m_k(1)$ , the algorithm proceeds by alternating between two steps:
Assignment step: Assign each observation to the cluster whose mean yields the least within-cluster sum of squares (WCSS). Since the sum of squares is the squared Euclidean distance, this is intuitively the "nearest" mean.
Update step: Calculate the new means to be the centroids of the observations in the new clusters. Since the arithmetic mean is a least-squares estimator, this also minimizes the within-cluster sum of squares (WCSS) objective.
The algorithm has converged when the assignments no longer change. Since both steps optimize the WCSS objective, and there only exists a finite number of such partitionings, the algorithm must converge to a (local) optimum. There is no guarantee that the global optimum is found using this algorithm.
The algorithm is often presented as assigning objects to the nearest cluster by distance. The standard algorithm aims at minimizing the WCSS objective, and thus assigns by "least sum of squares", which is exactly equivalent to assigning by the smallest Euclidean distance. Using a different distance function other than (squared) Euclidean distance may stop the algorithm from converging.
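The two alternating steps above can be sketched in a few lines of NumPy — a minimal illustration for intuition only (later we will use scikit-learn's implementation); the optional init argument simply lets you pass explicit starting centroids:

```python
import numpy as np

def kmeans(X, k, n_iter=100, init=None, seed=0):
    rng = np.random.default_rng(seed)
    # Forgy initialization by default: k random data points as centroids
    centroids = (np.asarray(init, dtype=float) if init is not None
                 else X[rng.choice(len(X), size=k, replace=False)])
    for _ in range(n_iter):
        # Assignment step: each point goes to its nearest centroid (squared Euclidean)
        dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each centroid becomes the mean of its assigned points
        new_centroids = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                  else centroids[j] for j in range(k)])
        if np.allclose(new_centroids, centroids):
            break  # assignments are stable, so the algorithm has converged
        centroids = new_centroids
    return labels, centroids
```

Note that this sketch converges to a local optimum that depends on the starting centroids, exactly as discussed above.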
Illustration of training
To make it easier to understand, the figure belows illustrates the process.
The figure depicts the k-means algorithm (Images courtesy of Michael Jordan and adapted from http://stanford.edu/~cpiech/cs221/handouts/kmeans.html). The training examples are shown as dots, and the cluster centroids are shown as crosses. (a) the dataset, (b) random initial cluster centroids -- one may initialize the algorithm using data points as centroids also, (c-f) illustration of running two iterations of k-means. In each iteration, we assign each training example to the closest cluster centroid (shown by "painting" the training examples the same color as the cluster centroid to which is assigned); then we move each cluster centroid to the mean of the points assigned to it.
Today
Our goal today is to run K-means on a real dataset. This dataset was first created to study genetic diversity across America and consists of 494 individuals coming from 27 different tribes (across 10 countries). These individuals are described by their genetic profile in terms of micro-satellites. In addition, we have information about the precise location of the tribes, given by the latitude and longitude features.
TO DO :
Import the data
* import the data NAm2.txt into a pandas dataframe that you will name df.
* print the first lines of df and its dimensions
* Create two lists containing the names of the tribes (Pop) and of the countries (Country). -> see unique() from pandas
Pre-processing
* create a subset of df by keeping only the genetic features. This new dataframe is named df_micro.
* do you need to scale the data?
End of explanation
"""
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
pca.fit(df_micro)
projected = pca.transform(df_micro)
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10, 6)
plt.scatter(projected[:, 0], projected[:, 1],
c=df.Pop.astype('category').cat.codes, edgecolor='none', alpha=0.5,
cmap=plt.get_cmap('nipy_spectral', 10))
plt.xlabel('component 1')
plt.ylabel('component 2')
plt.colorbar();
"""
Explanation: Visualisation with PCA
Wikipedia
Principal component analysis (PCA) is a statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components. The number of distinct principal components is equal to the smaller of the number of original variables or the number of observations minus one. This transformation is defined in such a way that the first principal component has the largest possible variance (that is, accounts for as much of the variability in the data as possible), and each succeeding component in turn has the highest variance possible under the constraint that it is orthogonal to the preceding components. The resulting vectors are an uncorrelated orthogonal basis set.
Basically we will only use PCA for a visualisation purpose. Our goal is to get a 2D visualisation of a 5717-dimensional dataset. Just keep in mind that PCA is a linear method — it only captures linear structure in the data, which is not always a faithful summary!
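To demystify what the PCA transform is doing, here is a minimal sketch of PCA as centering followed by a singular value decomposition (for intuition only; scikit-learn adds solver choices and sign conventions on top of this):

```python
import numpy as np

def pca_project(X, n_components=2):
    # center each feature, then project onto the top right-singular vectors
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T
```

The first returned column has the largest variance, the second the next largest, matching the definition above.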
TO DO : execute the following code!
End of explanation
"""
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters=10)
res = kmeans.fit(df_micro)
labels = res.labels_
res.cluster_centers_
plt.scatter(projected[:, 0], projected[:, 1],
c=labels, edgecolor='none', alpha=0.5,
cmap=plt.get_cmap('nipy_spectral', 10))
plt.xlabel('component 1')
plt.ylabel('component 2')
plt.colorbar();
"""
Explanation: Running K-means
We are now ready to run K-means! To this end, we will use the implementation provided by scikit-learn.
As in the previous session, we first initialise the algorithm and then fit it to our data.
TO DO :
* run Kmeans and set the number of clusters to 10
* describe the obtained labels (distribution of objects among them)
* print the obtained centroids
* use the PCA plot to visualise the labels obtained with K-means. To this end, you just need to change the parameter c in the previous scatter plot, replacing the current values with the obtained labels.
End of explanation
"""
from sklearn import metrics
# 1 random initialisation
kmeans = KMeans(n_clusters=10, init='random')
res = kmeans.fit(df_micro)
label_1 = res.labels_
centroids_1 = res.cluster_centers_
kmeans = KMeans(n_clusters=10, init='random')
res = kmeans.fit(df_micro)
label_2 = res.labels_
centroids_2 = res.cluster_centers_
metrics.adjusted_rand_score(label_1, label_2)
# 50 random initialisations
kmeans = KMeans(n_clusters=10, init='random', n_init=50)
res = kmeans.fit(df_micro)
label_1 = res.labels_
centroids_1 = res.cluster_centers_
kmeans = KMeans(n_clusters=10, init='random',n_init=50)
res = kmeans.fit(df_micro)
label_2 = res.labels_
centroids_2 = res.cluster_centers_
metrics.adjusted_rand_score(label_1, label_2)
# 50 initialisations and improved (k-means++) strategy
kmeans = KMeans(n_clusters=10, init='k-means++', n_init=50)
res = kmeans.fit(df_micro)
label_1 = res.labels_
centroids_1 = res.cluster_centers_
kmeans = KMeans(n_clusters=10, init='k-means++',n_init=50)
res = kmeans.fit(df_micro)
label_2 = res.labels_
centroids_2 = res.cluster_centers_
metrics.adjusted_rand_score(label_1, label_2)
"""
Explanation: Initialisation : be careful!
The initialisation step requires one to set the number of clusters K. To this end, one can use a priori information and set it manually, but there also exist several approaches to determine it automatically, including, for instance, the Elbow method and the Gap Statistic.
In addition, one needs to initialise the centroids. Commonly used initialization methods are Forgy and Random Partition. The Forgy method randomly chooses $k$ observations from the data set and uses them as the initial means. The Random Partition method first randomly assigns a cluster to each observation and then proceeds to the update step, thus computing the initial mean to be the centroid of the cluster's randomly assigned points. The Forgy method tends to spread the initial means out, while Random Partition places all of them close to the center of the data set. For expectation maximization and standard k-means algorithms, the Forgy method of initialization is preferable.
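A hedged sketch of both schemes (the names and signatures here are ours, for illustration only; scikit-learn's init='random' corresponds to the Forgy scheme, and 'k-means++' is the smarter default):

```python
import numpy as np

def forgy_init(X, k, rng):
    # Forgy: k randomly chosen observations become the initial centroids
    return X[rng.choice(len(X), size=k, replace=False)]

def random_partition_init(X, k, rng):
    # Random Partition: randomly assign every point to a cluster,
    # then use the centroid of each random group
    labels = rng.integers(0, k, size=len(X))
    return np.array([X[labels == j].mean(axis=0) for j in range(k)])
```

Running both on the same data makes the difference visible: Forgy centroids are actual data points spread across the cloud, while Random Partition centroids all huddle near the global mean.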
TO DO :
* run Kmeans twice (with K=10 to speed things up) on the df_micro data with random initialisations, then compare the labels obtained from the two runs with the adjusted Rand index from the metrics module (available in sklearn).
same as before, but this time set the number of initialisations to 50.
switch the initialisation method to k-means++ and run the previous experiments once again
End of explanation
"""
cluster_range = range(1,50)
cluster_errors = []
for num_clusters in cluster_range:
clust = KMeans(n_clusters=num_clusters, random_state=0, n_init=10)
clust.fit(df_micro)
cluster_errors.append(clust.inertia_)
clusters_df = pd.DataFrame( { "num_clusters":cluster_range, "cluster_errors": cluster_errors } )
clusters_df[0:10]
plt.figure(figsize=(12,6))
plt.plot( clusters_df.num_clusters, clusters_df.cluster_errors, marker = "o" )
"""
Explanation: How to set k ?
The Elbow method
The Elbow method is a method of interpretation and validation of consistency within cluster analysis, designed to help find the appropriate number of clusters in a dataset. This method looks at the percentage of variance explained as a function of the number of clusters. One should choose a number of clusters such that adding another cluster doesn't give much better modeling of the data.
If one plots the percentage of variance explained by the clusters against the number of clusters the first clusters will add much information (explain a lot of variance), but at some point the marginal gain in explained variance will drop, giving an angle in the graph. The number of clusters is chosen at this point, hence the "elbow criterion"
End of explanation
"""
def optimalK(data, nrefs=3, maxClusters=15):
"""
Calculates KMeans optimal K using Gap Statistic from Tibshirani, Walther, Hastie
Params:
data: ndarray of shape (n_samples, n_features)
nrefs: number of sample reference datasets to create
maxClusters: Maximum number of clusters to test for
Returns: (gaps, optimalK)
"""
gaps = np.zeros((len(range(1, maxClusters)),))
resultsdf = pd.DataFrame({'clusterCount':[], 'gap':[]})
for gap_index, k in enumerate(range(1, maxClusters)):
# Holder for reference dispersion results
refDisps = np.zeros(nrefs)
# For n references, generate random sample and perform kmeans getting resulting dispersion of each loop
for i in range(nrefs):
# Create new random reference set
randomReference = np.random.random_sample(size=data.shape)
# Fit to it
km = KMeans(k)
km.fit(randomReference)
refDisp = km.inertia_
refDisps[i] = refDisp
# Fit cluster to original data and create dispersion
km = KMeans(k)
km.fit(data)
origDisp = km.inertia_
# Calculate gap statistic
gap = np.log(np.mean(refDisps)) - np.log(origDisp)
# Assign this loop's gap statistic to gaps
gaps[gap_index] = gap
resultsdf = pd.concat([resultsdf, pd.DataFrame({'clusterCount': [k], 'gap': [gap]})], ignore_index=True)
return (gaps.argmax() + 1, resultsdf) # Plus 1 because index of 0 means 1 cluster is optimal, index 2 = 3 clusters are optimal
k, gapdf = optimalK(df_micro, nrefs=5, maxClusters=30)
print ('Optimal k is: ', k)
plt.plot(gapdf.clusterCount, gapdf.gap, linewidth=3)
plt.scatter(gapdf[gapdf.clusterCount == k].clusterCount, gapdf[gapdf.clusterCount == k].gap, s=250, c='r')
plt.xlabel('Cluster Count')
plt.ylabel('Gap Value')
plt.title('Gap Values by Cluster Count')
plt.show()
"""
Explanation: Gap Statistic
The Gap Statistic was developed by researchers from Stanford and compares the within-cluster dispersion to its expectation under a reference null distribution. For more details, see the original paper by Tibshirani, Walther and Hastie.
To compute it we will use the implementation of https://github.com/milesgranger/gap_statistic
TO DO
* Install the library gap statistic
* Run the optimalK function on df_micro with a maximum number of clusters set to 30
End of explanation
"""
# 1 run of the Gaussian mixture models
from sklearn import mixture
gmm = mixture.GaussianMixture(n_components=10).fit(df_micro)
labels_gmm = gmm.predict(df_micro)
"""
Explanation: What about mixture models?
In the probabilistic framework of mixture models, we assume that the data are generated according to a mixture of probability density functions, with cluster-specific parameters.
$$ p(x,\theta) = \sum_k \pi_k f(x,\theta_k)$$
where, $\pi_k$ can be interpreted as the proportion of each cluster and $\theta_k$ is the set of parameters. For instance, in the case of a Gaussian mixture we have $\theta_k = (\mu_k,\sigma_k)$.
Then, the goal is to estimate the set of parameters $\theta_k$ and to compute the partition of the objects, which is assumed to be a hidden variable of the model.
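In one dimension, the hidden cluster membership is recovered via posterior 'responsibilities' computed with Bayes' rule — a toy sketch with made-up parameters to illustrate the idea (this is essentially what GaussianMixture's predict_proba returns):

```python
import numpy as np

def responsibilities(x, pis, mus, sigmas):
    # posterior of cluster k is proportional to pi_k * N(x | mu_k, sigma_k)
    dens = pis * np.exp(-0.5 * ((x - mus) / sigmas) ** 2) / (sigmas * np.sqrt(2 * np.pi))
    return dens / dens.sum()

# two equally weighted 1D Gaussians centred at 0 and 5 (made-up parameters)
r = responsibilities(4.8, np.array([0.5, 0.5]), np.array([0.0, 5.0]), np.array([1.0, 1.0]))
```

A point at 4.8 is assigned almost entirely to the second component, while a point exactly halfway between the means gets a 50/50 split.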
End of explanation
"""
|
hershaw/data-science-101 | course/class1/pca/iris/PCA - Iris dataset.ipynb | mit | from sklearn import datasets
from sklearn.decomposition import PCA
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from mpl_toolkits.mplot3d import Axes3D
# %matplotlib inline
%matplotlib notebook
"""
Explanation: Principal Component Analysis with Iris Dataset
End of explanation
"""
iris = datasets.load_iris()
X = pd.DataFrame(iris.data, columns=iris.feature_names)
y = pd.Series(iris.target, name='FlowerType')
X.head()
plt.figure(2, figsize=(8, 6))
plt.clf()
# Plot the training points
plt.scatter(X['sepal length (cm)'], X['sepal width (cm)'], s=35, c=y, cmap=plt.cm.brg)
plt.xlabel('Sepal length')
plt.ylabel('Sepal width')
plt.title('Sepal length vs. Sepal width')
plt.show()
"""
Explanation: Load Iris dataset
The Iris Dataset here.
This data set consists of the petal and sepal measurements of 3 different types of irises (Setosa, Versicolour, and Virginica), stored in a 150x4 numpy.ndarray.
The rows being the samples and the columns being: Sepal Length, Sepal Width, Petal Length and Petal Width.
End of explanation
"""
pca_iris = PCA(n_components=3).fit(iris.data)
pca_iris.explained_variance_ratio_
pca_iris.transform(iris.data)
"""
Explanation: PCA
Can we reduce the dimensionality of our dataset without losing much information? PCA will help us decide.
End of explanation
"""
iris_reduced = PCA(n_components=3).fit(iris.data)
iris_reduced.components_
iris_reduced = PCA(n_components=3).fit_transform(iris.data)
fig = plt.figure(1, figsize=(8, 6))
ax = Axes3D(fig, elev=-150, azim=110)
ax.scatter(iris_reduced[:, 0], iris_reduced[:, 1], iris_reduced[:, 2],
cmap=plt.cm.Paired, c=iris.target)
for k in range(3):
ax.scatter(iris_reduced[y==k, 0], iris_reduced[y==k, 1], iris_reduced[y==k, 2], label=iris.target_names[k])
ax.set_title("First three P.C.")
ax.set_xlabel("P.C. 1")
ax.xaxis.set_ticklabels([])
ax.set_ylabel("P.C. 2")
ax.yaxis.set_ticklabels([])
ax.set_zlabel("P.C. 3")
ax.zaxis.set_ticklabels([])
plt.legend(numpoints=1)
plt.show()
"""
Explanation: The explained variance of P.C. #0 is one order of magnitude higher than that of P.C. #1 and #2, and two orders of magnitude higher than that of P.C. #3. We can use this knowledge to reduce our dataset from 4D to 3D.
We could have done everything in one line by setting the number of components we want (3), fitting the PCA and transforming the data to 3D:
End of explanation
"""
|
Diyago/Machine-Learning-scripts | DEEP LEARNING/Pytorch from scratch/TODO/GAN/project-face-generation/dlnd_face_generation.ipynb | apache-2.0 | # can comment out after executing
!unzip processed_celeba_small.zip
data_dir = 'processed_celeba_small/'
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import pickle as pkl
import matplotlib.pyplot as plt
import numpy as np
import problem_unittests as tests
#import helper
%matplotlib inline
"""
Explanation: Face Generation
In this project, you'll define and train a DCGAN on a dataset of faces. Your goal is to get a generator network to generate new images of faces that look as realistic as possible!
The project will be broken down into a series of tasks from loading in data to defining and training adversarial networks. At the end of the notebook, you'll be able to visualize the results of your trained Generator to see how it performs; your generated samples should look like fairly realistic faces with small amounts of noise.
Get the Data
You'll be using the CelebFaces Attributes Dataset (CelebA) to train your adversarial networks.
This dataset is more complex than the number datasets (like MNIST or SVHN) you've been working with, and so, you should prepare to define deeper networks and train them for a longer time to get good results. It is suggested that you utilize a GPU for training.
Pre-processed Data
Since the project's main focus is on building the GANs, we've done some of the pre-processing for you. Each of the CelebA images has been cropped to remove parts of the image that don't include a face, then resized down to 64x64x3 NumPy images. Some sample data is shown below.
<img src='assets/processed_face_data.png' width=60% />
If you are working locally, you can download this data by clicking here
This is a zip file that you'll need to extract in the home directory of this notebook for further loading and processing. After extracting the data, you should be left with a directory of data processed_celeba_small/
End of explanation
"""
# necessary imports
import torch
from torchvision import datasets
from torchvision import transforms
def get_dataloader(batch_size, image_size, data_dir='processed_celeba_small/'):
"""
Batch the neural network data using DataLoader
:param batch_size: The size of each batch; the number of images in a batch
:param img_size: The square size of the image data (x, y)
:param data_dir: Directory where image data is located
:return: DataLoader with batched data
"""
# TODO: Implement function and return a dataloader
return None
"""
Explanation: Visualize the CelebA Data
The CelebA dataset contains over 200,000 celebrity images with annotations. Since you're going to be generating faces, you won't need the annotations, you'll only need the images. Note that these are color images with 3 color channels (RGB) each.
Pre-process and Load the Data
Since the project's main focus is on building the GANs, we've done some of the pre-processing for you. Each of the CelebA images has been cropped to remove parts of the image that don't include a face, then resized down to 64x64x3 NumPy images. This pre-processed dataset is a smaller subset of the very large CelebA data.
There are a few other steps that you'll need to transform this data and create a DataLoader.
Exercise: Complete the following get_dataloader function, such that it satisfies these requirements:
Your images should be square, Tensor images of size image_size x image_size in the x and y dimension.
Your function should return a DataLoader that shuffles and batches these Tensor images.
ImageFolder
To create a dataset given a directory of images, it's recommended that you use PyTorch's ImageFolder wrapper, with a root directory processed_celeba_small/ and data transformation passed in.
End of explanation
"""
# Define function hyperparameters
batch_size =
img_size =
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# Call your function and get a dataloader
celeba_train_loader = get_dataloader(batch_size, img_size)
"""
Explanation: Create a DataLoader
Exercise: Create a DataLoader celeba_train_loader with appropriate hyperparameters.
Call the above function and create a dataloader to view images.
* You can decide on any reasonable batch_size parameter
* Your image_size must be 32. Resizing the data to a smaller size will make for faster training, while still creating convincing images of faces!
End of explanation
"""
# helper display function
def imshow(img):
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# obtain one batch of training images
dataiter = iter(celeba_train_loader)
images, _ = next(dataiter) # _ for no labels
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(20, 4))
plot_size=20
for idx in np.arange(plot_size):
ax = fig.add_subplot(2, plot_size//2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
"""
Explanation: Next, you can view some images! You should see square images of somewhat-centered faces.
Note: You'll need to convert the Tensor images into a NumPy type and transpose the dimensions to correctly display an image, suggested imshow code is below, but it may not be perfect.
End of explanation
"""
# TODO: Complete the scale function
def scale(x, feature_range=(-1, 1)):
''' Scale takes in an image x and returns that image, scaled
with a feature_range of pixel values from -1 to 1.
This function assumes that the input x is already scaled from 0-1.'''
# assume x is scaled to (0, 1)
# scale to feature_range and return scaled x
return x
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# check scaled range
# should be close to -1 to 1
img = images[0]
scaled_img = scale(img)
print('Min: ', scaled_img.min())
print('Max: ', scaled_img.max())
"""
Explanation: Exercise: Pre-process your image data and scale it to a pixel range of -1 to 1
You need to do a bit of pre-processing; you know that the output of a tanh activated generator will contain pixel values in a range from -1 to 1, and so, we need to rescale our training images to a range of -1 to 1. (Right now, they are in a range from 0-1.)
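One possible solution is a single linear map (a sketch — try writing your own before peeking; it works elementwise on tensors as well as on floats):

```python
def scale(x, feature_range=(-1, 1)):
    # assumes x is already in [0, 1]; map it linearly to [lo, hi]
    lo, hi = feature_range
    return x * (hi - lo) + lo
```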
End of explanation
"""
import torch.nn as nn
import torch.nn.functional as F
class Discriminator(nn.Module):
def __init__(self, conv_dim):
"""
Initialize the Discriminator Module
:param conv_dim: The depth of the first convolutional layer
"""
super(Discriminator, self).__init__()
# complete init function
def forward(self, x):
"""
Forward propagation of the neural network
:param x: The input to the neural network
:return: Discriminator logits; the output of the neural network
"""
# define feedforward behavior
return x
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_discriminator(Discriminator)
"""
Explanation: Define the Model
A GAN is comprised of two adversarial networks, a discriminator and a generator.
Discriminator
Your first task will be to define the discriminator. This is a convolutional classifier like you've built before, only without any maxpooling layers. To deal with this complex data, it's suggested you use a deep network with normalization. You are also allowed to create any helper functions that may be useful.
Exercise: Complete the Discriminator class
The inputs to the discriminator are 32x32x3 tensor images
The output should be a single value that will indicate whether a given image is real or fake
End of explanation
"""
class Generator(nn.Module):
def __init__(self, z_size, conv_dim):
"""
Initialize the Generator Module
:param z_size: The length of the input latent vector, z
:param conv_dim: The depth of the inputs to the *last* transpose convolutional layer
"""
super(Generator, self).__init__()
# complete init function
def forward(self, x):
"""
Forward propagation of the neural network
:param x: The input to the neural network
:return: A 32x32x3 Tensor image as output
"""
# define feedforward behavior
return x
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_generator(Generator)
"""
Explanation: Generator
The generator should upsample an input and generate a new image of the same size as our training data 32x32x3. This should be mostly transpose convolutional layers with normalization applied to the outputs.
Exercise: Complete the Generator class
The inputs to the generator are vectors of some length z_size
The output should be a image of shape 32x32x3
End of explanation
"""
def weights_init_normal(m):
"""
Applies initial weights to certain layers in a model .
The weights are taken from a normal distribution
with mean = 0, std dev = 0.02.
:param m: A module or layer in a network
"""
# classname will be something like:
# `Conv`, `BatchNorm2d`, `Linear`, etc.
classname = m.__class__.__name__
# TODO: Apply initial weights to convolutional and linear layers
"""
Explanation: Initialize the weights of your networks
To help your models converge, you should initialize the weights of the convolutional and linear layers in your model. From reading the original DCGAN paper, they say:
All weights were initialized from a zero-centered Normal distribution with standard deviation 0.02.
So, your next task will be to define a weight initialization function that does just this!
You can refer back to the lesson on weight initialization or even consult existing model code, such as that from the networks.py file in CycleGAN Github repository to help you complete this function.
Exercise: Complete the weight initialization function
This should initialize only convolutional and linear layers
Initialize the weights to a normal distribution, centered around 0, with a standard deviation of 0.02.
The bias terms, if they exist, may be left alone or set to 0.
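One possible implementation looks like this (a sketch — matching on the class name string is the common pattern, as in the CycleGAN code referenced above):

```python
import torch.nn as nn

def weights_init_normal(m):
    classname = m.__class__.__name__
    # only convolutional (incl. transpose) and linear layers
    if 'Conv' in classname or 'Linear' in classname:
        nn.init.normal_(m.weight.data, mean=0.0, std=0.02)
        if m.bias is not None:
            nn.init.constant_(m.bias.data, 0.0)
```

Applied with model.apply(weights_init_normal), this visits every submodule; layers whose class name contains neither 'Conv' nor 'Linear' (e.g. BatchNorm) are left untouched.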
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
def build_network(d_conv_dim, g_conv_dim, z_size):
# define discriminator and generator
D = Discriminator(d_conv_dim)
G = Generator(z_size=z_size, conv_dim=g_conv_dim)
# initialize model weights
D.apply(weights_init_normal)
G.apply(weights_init_normal)
print(D)
print()
print(G)
return D, G
"""
Explanation: Build complete network
Define your models' hyperparameters and instantiate the discriminator and generator from the classes defined above. Make sure you've passed in the correct input arguments.
End of explanation
"""
# Define model hyperparams
d_conv_dim =
g_conv_dim =
z_size =
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
D, G = build_network(d_conv_dim, g_conv_dim, z_size)
"""
Explanation: Exercise: Define model hyperparameters
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
else:
print('Training on GPU!')
"""
Explanation: Training on GPU
Check if you can train on GPU. Here, we'll set this as a boolean variable train_on_gpu. Later, you'll be responsible for making sure that
Models,
Model inputs, and
Loss function arguments
Are moved to GPU, where appropriate.
End of explanation
"""
def real_loss(D_out):
'''Calculates how close discriminator outputs are to being real.
param, D_out: discriminator logits
return: real loss'''
loss =
return loss
def fake_loss(D_out):
'''Calculates how close discriminator outputs are to being fake.
param, D_out: discriminator logits
return: fake loss'''
loss =
return loss
"""
Explanation: Discriminator and Generator Losses
Now we need to calculate the losses for both types of adversarial networks.
Discriminator Losses
For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_real_loss + d_fake_loss.
Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
Generator Loss
The generator loss will look similar only with flipped labels. The generator's goal is to get the discriminator to think its generated images are real.
Exercise: Complete real and fake loss functions
You may choose to use either cross entropy or a least squares error loss to complete the following real_loss and fake_loss functions.
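A cross-entropy-based sketch using BCEWithLogitsLoss — one of several valid choices (a least-squares loss against the same 1/0 targets would also work):

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

def real_loss(D_out):
    # real images should be classified as 1
    targets = torch.ones(D_out.size(0), device=D_out.device)
    return bce(D_out.squeeze(), targets)

def fake_loss(D_out):
    # fake images should be classified as 0
    targets = torch.zeros(D_out.size(0), device=D_out.device)
    return bce(D_out.squeeze(), targets)
```

Note that BCEWithLogitsLoss expects raw logits, which is why the discriminator's output layer should not apply a sigmoid.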
End of explanation
"""
import torch.optim as optim
# Create optimizers for the discriminator D and generator G
d_optimizer =
g_optimizer =
"""
Explanation: Optimizers
Exercise: Define optimizers for your Discriminator (D) and Generator (G)
Define optimizers for your models with appropriate hyperparameters.
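One hedged possibility, wrapped in a small helper so the hyperparameters are explicit (the values follow the DCGAN paper's recommendation of Adam with lr=0.0002 and beta1=0.5; they are a starting point, not a requirement):

```python
import torch.nn as nn
import torch.optim as optim

def build_optimizers(D, G, lr=0.0002, beta1=0.5, beta2=0.999):
    # one Adam optimizer per network, with DCGAN-style betas
    d_optimizer = optim.Adam(D.parameters(), lr=lr, betas=(beta1, beta2))
    g_optimizer = optim.Adam(G.parameters(), lr=lr, betas=(beta1, beta2))
    return d_optimizer, g_optimizer
```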
End of explanation
"""
def train(D, G, n_epochs, print_every=50):
'''Trains adversarial networks for some number of epochs
param, D: the discriminator network
param, G: the generator network
param, n_epochs: number of epochs to train for
param, print_every: when to print and record the models' losses
return: D and G losses'''
# move models to GPU
if train_on_gpu:
D.cuda()
G.cuda()
# keep track of loss and generated, "fake" samples
samples = []
losses = []
# Get some fixed latent vectors for sampling. These are held
# constant throughout training, and allow us to inspect the model's performance
sample_size=16
fixed_z = np.random.uniform(-1, 1, size=(sample_size, z_size))
fixed_z = torch.from_numpy(fixed_z).float()
# move z to GPU if available
if train_on_gpu:
fixed_z = fixed_z.cuda()
# epoch training loop
for epoch in range(n_epochs):
# batch training loop
for batch_i, (real_images, _) in enumerate(celeba_train_loader):
batch_size = real_images.size(0)
real_images = scale(real_images)
# ===============================================
# YOUR CODE HERE: TRAIN THE NETWORKS
# ===============================================
# 1. Train the discriminator on real and fake images
d_loss =
# 2. Train the generator with an adversarial loss
g_loss =
# ===============================================
# END OF YOUR CODE
# ===============================================
# Print some loss stats
if batch_i % print_every == 0:
# append discriminator loss and generator loss
losses.append((d_loss.item(), g_loss.item()))
# print discriminator and generator loss
print('Epoch [{:5d}/{:5d}] | d_loss: {:6.4f} | g_loss: {:6.4f}'.format(
epoch+1, n_epochs, d_loss.item(), g_loss.item()))
## AFTER EACH EPOCH##
# this code assumes your generator is named G, feel free to change the name
# generate and save sample, fake images
G.eval() # for generating samples
samples_z = G(fixed_z)
samples.append(samples_z)
G.train() # back to training mode
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
# finally return losses
return losses
"""
Explanation: Training
Training will involve alternating between training the discriminator and the generator. You'll use your functions real_loss and fake_loss to help you calculate the discriminator losses.
You should train the discriminator by alternating on real and fake images
Then the generator, which tries to trick the discriminator and should have an opposing loss function
Saving Samples
You've been given some code to print out some loss statistics and save some generated "fake" samples.
Exercise: Complete the training function
Keep in mind that, if you've moved your models to GPU, you'll also have to move any model inputs to GPU.
End of explanation
"""
# set number of epochs
n_epochs =
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# call training function
losses = train(D, G, n_epochs=n_epochs)
"""
Explanation: Set your number of training epochs and train your GAN!
End of explanation
"""
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator', alpha=0.5)
plt.plot(losses.T[1], label='Generator', alpha=0.5)
plt.title("Training Losses")
plt.legend()
"""
Explanation: Training loss
Plot the training losses for the generator and discriminator, recorded after each epoch.
End of explanation
"""
# helper function for viewing a list of passed in sample images
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(16,4), nrows=2, ncols=8, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
img = img.detach().cpu().numpy()
img = np.transpose(img, (1, 2, 0))
img = ((img + 1)*255 / (2)).astype(np.uint8)
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((32,32,3)))
# Load samples from generator, taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
_ = view_samples(-1, samples)
"""
Explanation: Generator samples from training
View samples of images from the generator, and answer a question about the strengths and weaknesses of your trained models.
End of explanation
"""
|
opencobra/cobrapy | documentation_builder/building_model.ipynb | gpl-2.0 | from cobra import Model, Reaction, Metabolite
model = Model('example_model')
reaction = Reaction('R_3OAS140')
reaction.name = '3 oxoacyl acyl carrier protein synthase n C140 '
reaction.subsystem = 'Cell Envelope Biosynthesis'
reaction.lower_bound = 0. # This is the default
reaction.upper_bound = 1000. # This is the default
"""
Explanation: Building a Model
Model, Reactions and Metabolites
This simple example demonstrates how to create a model, create a reaction, and then add the reaction to the model.
We'll use the '3OAS140' reaction from the STM_1.0 model:
1.0 malACP[c] + 1.0 h[c] + 1.0 ddcaACP[c] $\rightarrow$ 1.0 co2[c] + 1.0 ACP[c] + 1.0 3omrsACP[c]
First, create the model and reaction.
End of explanation
"""
ACP_c = Metabolite(
'ACP_c',
formula='C11H21N2O7PRS',
name='acyl-carrier-protein',
compartment='c')
omrsACP_c = Metabolite(
'M3omrsACP_c',
formula='C25H45N2O9PRS',
name='3-Oxotetradecanoyl-acyl-carrier-protein',
compartment='c')
co2_c = Metabolite('co2_c', formula='CO2', name='CO2', compartment='c')
malACP_c = Metabolite(
'malACP_c',
formula='C14H22N2O10PRS',
name='Malonyl-acyl-carrier-protein',
compartment='c')
h_c = Metabolite('h_c', formula='H', name='H', compartment='c')
ddcaACP_c = Metabolite(
'ddcaACP_c',
formula='C23H43N2O8PRS',
name='Dodecanoyl-ACP-n-C120ACP',
compartment='c')
"""
Explanation: We need to create metabolites as well. If we were using an existing model, we could use Model.get_by_id to get the appropriate Metabolite objects instead.
End of explanation
"""
reaction.add_metabolites({
malACP_c: -1.0,
h_c: -1.0,
ddcaACP_c: -1.0,
co2_c: 1.0,
ACP_c: 1.0,
omrsACP_c: 1.0
})
reaction.reaction # This gives a string representation of the reaction
"""
Explanation: Side note: SId
It is highly recommended that the ids for reactions, metabolites and genes are valid SBML identifiers (SId).
SId is a data type derived from the basic XML type string, but with restrictions on the characters
permitted and the sequences in which those characters may appear.
letter ::= ’a’..’z’,’A’..’Z’
digit ::= ’0’..’9’
idChar ::= letter | digit | ’_’
SId ::= ( letter | ’_’ ) idChar*
The main limitation is that ids cannot start with a number. Using SIds allows serialization to SBML. In addition,
features such as code completion and object access via the dot syntax will work in cobrapy.
Adding metabolites to a reaction uses a dictionary of the metabolites and their stoichiometric coefficients. A group of metabolites can be added all at once, or they can be added one at a time.
End of explanation
"""
reaction.gene_reaction_rule = '( STM2378 or STM1197 )'
reaction.genes
"""
Explanation: The gene_reaction_rule is a boolean representation of the gene requirements for this reaction to be active as described in Schellenberger et al 2011 Nature Protocols 6(9):1290-307. We will assign the gene reaction rule string, which will automatically create the corresponding gene objects.
End of explanation
"""
print(f'{len(model.reactions)} reactions initially')
print(f'{len(model.metabolites)} metabolites initially')
print(f'{len(model.genes)} genes initially')
"""
Explanation: At this point in time, the model is still empty
End of explanation
"""
model.add_reactions([reaction])
# The objects have been added to the model
print(f'{len(model.reactions)} reactions')
print(f'{len(model.metabolites)} metabolites')
print(f'{len(model.genes)} genes')
"""
Explanation: We will add the reaction to the model, which will also add all associated metabolites and genes
End of explanation
"""
# Iterate through the objects in the model
print("Reactions")
print("---------")
for x in model.reactions:
print("%s : %s" % (x.id, x.reaction))
print("")
print("Metabolites")
print("-----------")
for x in model.metabolites:
print('%9s : %s' % (x.id, x.formula))
print("")
print("Genes")
print("-----")
for x in model.genes:
associated_ids = (i.id for i in x.reactions)
print("%s is associated with reactions: %s" %
(x.id, "{" + ", ".join(associated_ids) + "}"))
"""
Explanation: We can iterate through the model objects to observe the contents
End of explanation
"""
model.objective = 'R_3OAS140'
"""
Explanation: Objective
Last we need to set the objective of the model. Here, we just want this to be the maximization of the flux in the single reaction we added and we do this by assigning the reaction's identifier to the objective property of the model.
End of explanation
"""
print(model.objective.expression)
print(model.objective.direction)
"""
Explanation: The created objective is a symbolic algebraic expression and we can examine it by printing it
End of explanation
"""
import tempfile
from pprint import pprint
from cobra.io import write_sbml_model, validate_sbml_model
with tempfile.NamedTemporaryFile(suffix='.xml') as f_sbml:
write_sbml_model(model, filename=f_sbml.name)
report = validate_sbml_model(filename=f_sbml.name)
pprint(report)
"""
Explanation: which here shows that the solver will maximize the flux in the forward direction.
Model Validation
For exchange with other tools you can validate and export the model to SBML.
For more information on serialization and available formats see the section "Reading and Writing Models"
End of explanation
"""
print("exchanges", model.exchanges)
print("demands", model.demands)
print("sinks", model.sinks)
"""
Explanation: The model is valid with no COBRA or SBML errors or warnings.
Exchanges, Sinks and Demands
Boundary reactions can be added using the model's method add_boundary.
There are three different types of pre-defined boundary reactions: exchange, demand, and sink reactions. All of them are unbalanced pseudo-reactions, meaning they serve a modeling function by adding metabolites to, or removing them from, the model system, but they are not based on real biology. An exchange reaction is a reversible reaction that adds an extracellular metabolite to, or removes it from, the extracellular compartment. A demand reaction is an irreversible reaction that consumes an intracellular metabolite. A sink is similar to an exchange but specifically for intracellular metabolites, i.e., a reversible reaction that adds or removes an intracellular metabolite.
End of explanation
"""
model.add_metabolites([
Metabolite(
'glycogen_c',
name='glycogen',
compartment='c'
),
Metabolite(
'co2_e',
name='CO2',
compartment='e'
),
])
# create exchange reaction
model.add_boundary(model.metabolites.get_by_id("co2_e"), type="exchange")
# create exchange reaction
model.add_boundary(model.metabolites.get_by_id("glycogen_c"), type="sink")
# Now we have an additional exchange and sink reaction in the model
print("exchanges", model.exchanges)
print("sinks", model.sinks)
print("demands", model.demands)
"""
Explanation: Boundary reactions are defined on metabolites. First we add two metabolites to the model then
we define the boundary reactions. We add glycogen to the cytosolic compartment c and CO2 to the external compartment e.
End of explanation
"""
# boundary reactions
model.boundary
"""
Explanation: To create a demand reaction instead of a sink use type demand instead of sink.
Information on all boundary reactions is available via the model's property boundary.
End of explanation
"""
# metabolic reactions
set(model.reactions) - set(model.boundary)
"""
Explanation: A neat trick to get all metabolic reactions is
End of explanation
"""
|
crawles/automl_service | modelling_and_usage.ipynb | mit | %matplotlib inline
import json
import matplotlib.pylab as plt
import numpy as np
import pandas as pd
import pprint
import requests
import seaborn as sns
from sklearn.metrics import roc_auc_score
import tsfresh
from tsfresh.examples.har_dataset import download_har_dataset, load_har_dataset, load_har_classes
from tsfresh import extract_features, extract_relevant_features, select_features
from tsfresh.utilities.dataframe_functions import impute
from tsfresh.feature_extraction import ComprehensiveFCParameters, MinimalFCParameters
from sklearn.tree import DecisionTreeClassifier
from sklearn.cross_validation import train_test_split
from sklearn.metrics import classification_report
import logging
"""
Explanation: AutoML service
End of explanation
"""
# python automl_service:app
"""
Explanation: Start service
End of explanation
"""
c = sns.color_palette()
def get_model_results(evaluated_individuals):
"""For processing model run results, store results as dictionary of AUCS"""
tpot_results = []
for i,(k, (steps, auc)) in enumerate(evaluated_individuals.iteritems()):
model_type = k.split('(')[0]
tpot_results.append([model_type, i, auc])
return tpot_results
params = {'legend.fontsize': 'x-large',
'figure.figsize': (15, 5),
'axes.labelsize': 'x-large',
'axes.titlesize':'x-large',
'xtick.labelsize':'x-large',
'ytick.labelsize':'x-large'}
plt.rcParams.update(params)
label_train = pd.read_json('data/label_train.json')
df = pd.read_json('data/data_train.json')
x_train = df.groupby('example_id').sum()
df.index = df.index.astype(np.int)
label_train = pd.read_json('data/label_train.json')
"""
Explanation: Prep
End of explanation
"""
df = pd.read_json('data/data_train.json')
df.index = df.index.astype(int)
for i,(_,_df) in enumerate(df[df.example_id.isin([0, 1])].groupby('example_id')):
plt.figure()
_df.index = _df.index%128
_df.sort_index().measurement.plot(linewidth=2, color=c[0])
plt.ylabel('Amplitude')
if i==0:
plt.title('Class 0: Raw time series data examples', size=24)
plt.xlabel('Sample');
for i,(_,_df) in enumerate(df[df.example_id.isin([1246, 1248])].groupby('example_id')):
plt.figure()
_df.index = _df.index%128
_df.sort_index().measurement.plot(linewidth=2, color=c[2])
plt.ylabel('Amplitude')
if i==0:
plt.title('Class 1: Raw time series data examples', size=24)
plt.xlabel('Sample');
"""
Explanation: Show raw time series data
End of explanation
"""
train_url = 'http://0.0.0.0:8080/train_pipeline'
train_files = {'raw_data': open('data/data_train.json', 'rb'),
'labels' : open('data/label_train.json', 'rb'),
'params' : open('parameters/train_parameters_model2.yml', 'rb')}
r_train = requests.post(train_url, files=train_files)
result_df = json.loads(r_train.json())
r=requests.get('http://0.0.0.0:8080/models')
pipelines = json.loads(r.json())
automl_experiments = get_model_results(pipelines['2']['evaluated_models'])
del result_df['evaluated_models'] # too long to print out
pprint.pprint(result_df)
automl_experiments = pd.DataFrame(automl_experiments, columns=['model', 'id', 'auc']).sort_values('model')
# automl_experiments = automl_experiments.query('model != "LinearSVC"').query('model != "MultinomialNB"').query('model != "BernoulliNB"')
sns.set_style(style='darkgrid')
f, ax = plt.subplots(figsize=(10,10))
box = automl_experiments.boxplot(column='auc', by='model', rot=0, vert=False,
ax=ax, patch_artist=True, return_type='dict',
widths=0.8)
ax.grid(axis='y')
ax.set_title('AUC range by model type', size=20)
plt.suptitle('')
for b in box['auc']['boxes']:
color = sns.color_palette()[4]
b.set(color=color, linewidth=2)
b.set(facecolor=color, linewidth=2)
# for median in bp['medians']:
for median in box['auc']['medians']:
median.set(color='grey', linewidth=3)
plt.xlim(0.8)
"""
Explanation: Use model serve API
Train Model
End of explanation
"""
serve_url = 'http://0.0.0.0:8080/serve_prediction'
test_files = {'raw_data': open('data/data_test.json', 'rb'),
'params' : open('parameters/test_parameters_model2.yml', 'rb')}
r_test = requests.post(serve_url, files=test_files)
result = pd.read_json(r_test.json()).set_index('id')
result.head()
label_test = pd.read_json('data/label_test.json')
result.index = result.index.astype(np.int)
result = result.loc[label_test.example_id]
auc = roc_auc_score(label_test.label, result.score)
print "AUC: {:1.2f}".format(auc)
"""
Explanation: Serve model prediction
End of explanation
"""
# fetch dataset from uci
download_har_dataset()
# load data
df = load_har_dataset()
y = load_har_classes()
# binary classification
class1, class2 = 2, 3
two_classes = (y==class1) | (y==class2)
df = df[two_classes]
y = y[two_classes]
# change label names
y[y==class1] = 0
y[y==class2] = 1
df = df.reset_index(drop=True)
y = y.reset_index(drop=True)
df.loc[0].plot()
plt.xlabel('Samples')
plt.ylabel('Value');
"""
Explanation: Appendix
Build an AutoML Pipeline
Load Data
The dataset consists of timeseries for 7352 accelerometer readings. Each reading represents an accelerometer reading for 2.56 sec at 50hz (for a total of 128 samples per reading). Each reading corresponds one of six activities (walking, walking upstairs, walking downstairs, sitting, standing and laying). We use only two labels to create a binary classification problem for demonstration purposes.
The dataset is available here: https://archive.ics.uci.edu/ml/datasets/Human+Activity+Recognition+Using+Smartphones
End of explanation
"""
# values
v = df.stack().values
# ids
ids = []
for i in range(len(y)):
ids.extend(128*[i])
ids = np.array(ids)
master_df = pd.DataFrame(v, columns=['measurement'])
master_df['example_id'] = ids
"""
Explanation: Prep data for feature building
We need to get the data in the format required by TSFRESH:
If there are 100 examples, where each example has 50 samples, we need to go from a (100, 50) dataframe to a (100*50, 2) dataframe as follows:
~~~
measurement|example_id
0.5235 |0
0.4284 |0
0.9042 |0
...
0.9042 |100
~~~
See the TSFRESH docs for more details
End of explanation
"""
# build label dataframe
label_df = pd.DataFrame(y.reset_index(drop=True))\
.reset_index()
label_df.columns = ['example_id', 'label']
# split into training and test
train_id, test_id = train_test_split(label_df.example_id, random_state=43, test_size=0.2)
train_id = pd.DataFrame(train_id)
test_id = pd.DataFrame(test_id)
data_train = master_df.merge(train_id, on='example_id')
data_test = master_df.merge(test_id, on='example_id')
print float(data_train.shape[0])/(data_train.shape[0] + data_test.shape[0])
label_train = label_df.merge(train_id, on='example_id')
label_test = label_df.merge(test_id, on='example_id')
"""
Explanation: Build Train/Test Set
End of explanation
"""
%%time
extraction_settings = MinimalFCParameters()
X_train = extract_features(data_train, column_id='example_id', impute_function=impute, default_fc_parameters=extraction_settings);
X_test = extract_features(data_test, column_id='example_id', impute_function=impute, default_fc_parameters=extraction_settings);
from tpot import TPOTClassifier
from sklearn.metrics import roc_auc_score
tpot = TPOTClassifier(generations=5, population_size=20, max_time_mins=0.2)
tpot.fit(X_train, label_train.label)
roc_auc_score(label_test.label, tpot.predict_proba(X_test)[:,1])
"""
Explanation: Build a model
End of explanation
"""
from sklearn.ensemble import RandomForestClassifier
cl = RandomForestClassifier(n_estimators=100, n_jobs=-1)
cl.fit(X_train, label_train.label)
roc_auc_score(label_test.label, cl.predict_proba(X_test)[:,1])
import sklearn
scoring = ['roc_auc', 'accuracy']
cv = sklearn.model_selection.cross_validate(cl, X_train, label_train.label, cv=5, scoring=scoring)
mean_accuracy = cv['test_accuracy'].mean()
mean_roc_auc = cv['test_roc_auc'].mean()
mean_accuracy, mean_roc_auc
def plot_importances(cl, column_names, n_features=10, ax=None, error_bars = True):
df_imp = pd.DataFrame({'features': column_names,
'importances': cl.feature_importances_})
errors = np.std([tree.feature_importances_ for tree in cl.estimators_], axis=0)
df_imp_sub = df_imp.set_index('features').sort_values('importances').tail(n_features)
if error_bars:
df_errors = pd.DataFrame({'features': column_names,
'importances': errors})
df_err_sub = df_errors.set_index('features').loc[df_imp_sub.index]
else:
df_err_sub = None
ax = df_imp_sub.plot(kind='barh', width=.7, legend=False, ax=ax, xerr=df_err_sub, ecolor='g')
for i,t in enumerate(df_imp_sub.index.tolist()):
t = ax.text(0.001, i-.06,t)
t.set_bbox(dict(facecolor='white', alpha=0.4, edgecolor='grey'))
ax.yaxis.set_major_formatter(plt.NullFormatter())
ax.set_title('Feature Importances')
ax.set_xlim(0.0)
ax.set_xlabel('importance')
return df_imp_sub
plot_importances(cl, X_train.columns);
"""
Explanation: Compare to Random Forest (baseline)
End of explanation
"""
# import os
# output_dir = 'data'
# data_train.to_json(os.path.join(output_dir, 'data_train.json'))
# data_test.to_json(os.path.join(output_dir, 'data_test.json'))
# label_train.to_json(os.path.join(output_dir, 'label_train.json'))
# label_test.to_json(os.path.join(output_dir, 'label_test.json'))
"""
Explanation: Export Data
Save training/testing data so we can build and test the AutoML Flask service
End of explanation
"""
|
danielfather7/teach_Python | DSMCER_Hw/dsmcer-hw-3-statistics-danielfather7/HW3-Tai-Yu Pan.ipynb | gpl-3.0 | import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
%matplotlib inline
"""
Explanation: If a cell begins with DNC: do not change it and leave the markdown there so I can expect a basic level of organization that is common to all HW (will help me with grading). This also clearly delineates the sections for me
DNC: preamble leave any general comments here and, in keeping with good practice, I suggest you load all needed modules in the preamble
End of explanation
"""
unirand = np.random.uniform(low = 25.0, high = 35.0, size = 10)
print(unirand)
unidataframe = pd.DataFrame(unirand)
print(unidataframe)
print('\nnp.std = %f; pd.std = %f' %(np.std(unidataframe), unidataframe.std()))
print('By default, the delta degrees of freedom (ddof) of np.std and pd.std are 0 and 1, respectively.')
unirand = np.random.uniform(low = 25.0, high = 35.0, size = 1000000)
unidataframe = pd.DataFrame(unirand)
print('1E6 random numbers:\nnp.std = %f; pd.std = %f' %(np.std(unidataframe), unidataframe.std()))
print('The values are very close because when n is very large, the sample statistic (ddof=1) converges to the population statistic (ddof=0).')
print('We can get the same results by setting ddof explicitly.\nddof = 0:\nnp.std = %f; pd.std(ddof=0) = %f\nddof = 1:\nnp.std(ddof=1) = %f; pd.std = %f' %(np.std(unidataframe), unidataframe.std(ddof = 0),np.std(unidataframe, ddof = 1), unidataframe.std()))
"""
Explanation: DNC: Begin Part 1: Descriptive Statistics
Part 1: Problems for descriptive statistics
1-1: Understanding statistical calculations in python
Create a numpy array that has 10 uniform random numbers between 25.0 and 35.0, store it as a variable
Create a pandas dataframe with one column based on your numpy array
Use "np.std" and "pd.std" formulas to calculate the standard deviation from both arrays, do not change any of the default arguments
The numbers should be different, explain why
Repeat the exercise with 1E6 uniform random numbers drawn from the same range
Comment on whether the numbers are different or not and explain why
Demonstrate that by using the proper function arguments you can obtain the same answer in both methods
1-2: Box plots
The data file GerberdingElectricityChilledWater.csv shows chilled water (energy) and electricity usage for Gerberding Hall over approximately an 18 month period from Jan 2013 to June 2014.
Note: these are real data! If you make any executive decisions (e.g., remove some points for very specific reasons), clearly explain your decision and motivation for doing so
Load the data into Python (your choice of method) and prepare a box plot summary of the data.
Present the plot in the nicest possible format (e.g., improve it from the default and prepare it for publication or presentation format) - this is open ended and you can use your judgement
Explain in your own words what each part on the box plot means
Part 1-1
End of explanation
"""
data = pd.read_csv('GerberdingElectricityChilledWater.csv')
print(data.head())
plt.figure(figsize=(10,5))
plt.subplot(121)
data.boxplot(['Btu'])
plt.xticks([1], ['Chilled water usage'])
plt.ylabel('Btu')
plt.subplot(122)
data.boxplot(['kWh'])
plt.xticks([1],['Electricity usage'])
plt.ylabel('kWh')
plt.suptitle('Chilled Water and Electricity Usage for Gerberding Hall (with outliers)')
print(data.describe())
"""
Explanation: Part 1-2
End of explanation
"""
IQR = data.quantile(q = 0.75) - data.quantile(q = 0.25)
upermild = data.quantile(q = 0.75) + 1.5*IQR
uperextre = data.quantile(q = 0.75) + 3*IQR
lowermild = data.quantile(q = 0.25) - 1.5*IQR
lowerextre = data.quantile(q = 0.25) - 3*IQR
print(lowerextre)
print(lowermild)
print(upermild)
print(uperextre)
"""
Explanation: As shown above, the line (the green one) in the box is the median (Q2, the 50th percentile: the value separating the higher half of the data from the lower half). The lower boundary of the box is the first quartile (Q1, 25th percentile), defined as the middle value between the smallest observation and the median. The upper boundary is the third quartile (Q3, 75th percentile), the middle value between the median and the largest observation. The interquartile range (IQR) is Q3 - Q1. The upper whisker outside the box is drawn at Q3 + 1.5 x IQR, and the lower whisker at Q1 - 1.5 x IQR; data beyond these whiskers can be viewed as outliers. By one common convention, values in the ranges [Q3 + 1.5 x IQR, Q3 + 3 x IQR] and [Q1 - 3 x IQR, Q1 - 1.5 x IQR] are called mild outliers, and values beyond 3 x IQR are called extreme outliers. I calculate these values below.
End of explanation
"""
for index, row in data.iterrows():
if row['Btu'] > uperextre[0]:
data.drop(index,inplace = True)
elif row['kWh'] > uperextre[1]:
data.drop(index,inplace = True)
elif row['kWh'] < 0:
data.drop(index,inplace = True)
#plot again
plt.figure(figsize=(10,5))
plt.subplot(121)
data.boxplot(['Btu'])
plt.xticks([1], ['Chilled water usuage'])
plt.ylabel('Btu')
plt.subplot(122)
data.boxplot(['kWh'])
plt.xticks([1],['Electricity usage'])
plt.ylabel('kWh')
plt.suptitle('Chilled Water and Electricity Usage for Gerberding Hall (without extreme outliers)')
"""
Explanation: Data points falling in the extreme-outlier range are most likely erroneous (or at least highly atypical). Furthermore, usage in this case cannot be negative. Thus, I clean the data by removing those values.
End of explanation
"""
plt.figure(figsize=(10,5))
plt.subplot(121)
plt.boxplot(data.Btu,showfliers = False)
plt.xticks([1], ['Chilled water usage'])
plt.ylabel('Btu')
plt.grid()
plt.subplot(122)
plt.boxplot(data.kWh,showfliers = False)
plt.xticks([1],['Electricity usage'])
plt.ylabel('kWh')
plt.ylim([-2,42])
plt.grid()
plt.suptitle('Chilled Water and Electricity Usage for Gerberding Hall (without any outliers)')
"""
Explanation: Also, I can plot without showing any outliers, shown below.
End of explanation
"""
from scipy.stats import norm
#left
x=np.arange(lowerextre[0],uperextre[0])
y = norm.pdf(x,data['Btu'].mean(),data['Btu'].std())
plt.figure(figsize=(10,5))
plt.subplot(121)
plt.plot(x, y, label = 'PDF of the normal distribution', linewidth = 1.5,color = 'k')
plt.hist(data['Btu'], bins = 20, normed = True, label = 'normalized histogram', color = '#668cff',edgecolor='k', lw=1.5)
plt.ylim([0,0.000027])
plt.grid()
plt.legend()
plt.xlabel('Chilled water usage (Btu)')
plt.ylabel('Frequency')
#right
y = norm.cdf(x,data['Btu'].mean(),data['Btu'].std())
counts, bin_edges = np.histogram(data['Btu'], bins= 20, density=True)
cdf = np.cumsum(counts * np.diff(bin_edges))
plt.subplot(122)
plt.plot(x, y, label = 'CDF from the normal distribution',color = 'k')
plt.plot(bin_edges[1:], cdf, label = 'measured CDF',color = '#668cff')
plt.grid()
plt.legend(loc='upper right')
plt.xlabel('Chilled water usage (Btu)')
plt.ylim([0,1.2])
plt.ylabel('Cumulative frequency')
plt.suptitle('Chilled Water Usage for Gerberding Hall')
#left
x=np.arange(lowerextre[1],uperextre[1])
y = norm.pdf(x,data['kWh'].mean(),data['kWh'].std())
plt.figure(figsize=(10,5))
plt.subplot(121)
plt.plot(x, y, label = 'PDF of the normal distribution', linewidth = 1.5,color = 'k')
plt.hist(data['kWh'], bins = 20, normed = True, label = 'normalized histogram', color = '#668cff',edgecolor='k', lw=1.5)
plt.ylim([0,0.14])
plt.grid()
plt.legend()
plt.xlabel('Electricity usage (kWh)')
plt.ylabel('Frequency')
#right
y = norm.cdf(x,data['kWh'].mean(),data['kWh'].std())
counts, bin_edges = np.histogram(data['kWh'], bins= 20, density=True)
cdf = np.cumsum(counts * np.diff(bin_edges))
plt.subplot(122)
plt.plot(x, y, label = 'CDF from the normal distribution',color = 'k')
plt.plot(bin_edges[1:], cdf, label = 'measured CDF',color = '#668cff')
plt.grid()
plt.legend(loc='upper right')
plt.xlabel('Electricity usage (kWh)')
plt.ylim([0,1.2])
plt.ylabel('Cumulative frequency')
plt.suptitle('Electricity Usage for Gerberding Hall')
"""
Explanation: DNC: Begin Part 2
Part 2: Distributions
Your goal is to prepare a side-by-side plot describing the distribution of data from part 1 related to Gerberding Hall.
1) To complete the plot you should assume the data are normally distributed and determine the mean and standard deviation of the chilled water data series.
2) The left panel of the plot should be a normalized histogram of the chilled water data with an overlay of the PDF of the normal distribution based on the $\bar x$ and $s$ values from the data series. The right panel of the plot should be contain two lines corresponding to the measured CDF and CDF from the normal distribution estimated from the data.
3) Repeat (2) for the electricity data
4) Comment in a Markdown cell on the ability of a normal distribution to describe this data
End of explanation
"""
eusedata = pd.read_csv('energyuse.csv')
eusedata
"""
Explanation: Comment:
Chilled water usage is closer to a normal distribution than electricity usage is. We can see that the CDF from the normal distribution is a good approximation to the measured CDF, so the normal distribution describes chilled water usage well. In contrast, for electricity usage, both plots show that the PDF and CDF from the normal distribution do not fit the measured data very well.
DNC: Begin Part 3
Part 3: Hypothesis testing
The file energyuse.csv contains energy use data for 6 UW students. The data include electricity for lighting, all other electricity use and total electricity use. The final entry in the data file is the national average of the same values.
Please do the following
1) Formulate a statistical hypothesis to test about the three data sets and clearly state it
2) Perform a test a significance level of P=0.05 (make sure to clearly comment your work so I can follow what you are doing)
3) Clearly state the meaning of the results in plain language
End of explanation
"""
from scipy import stats
#Calculate statistic value.
mean = eusedata['Total'][0:6].mean()
std = eusedata['Total'][0:6].std(ddof = 1)
N = np.shape(eusedata)[0]-1
sem = stats.sem(eusedata['Total'][0:6],ddof=1)
t = (mean - 3.040)/ sem
p = stats.t.sf(np.abs(t),N-1)*2
print('mean = %.4f, std = %.4f, sem = %.4f, t = %.4f, p = %.4f' %(mean, std, sem, t, p))
#Confirm t and p
[tcalc,p]=stats.ttest_1samp(eusedata['Total'][0:6],3.040)
tcalc,p
"""
Explanation: Statement
Null hypothesis (H0): The average total usage for the 6 UW students is equal to the national average of 3.040.
Alternative hypothesis (Ha): The average total usage for the 6 UW students is not equal to the national average of 3.040.
End of explanation
"""
x=np.arange(2.0,4.0,0.01)
#Normal distribution based on mu and sem
y = norm.pdf(x,3.040,sem)
plt.figure(figsize=(10,10))
plt.subplot(111)
plt.plot(x, y, label = 'Normal distribution by sem', linewidth = 1.5,color = 'k')
plt.xlabel('Total energy usuage')
plt.ylabel('Frequency')
plt.ylim([-0.15,2.25])
plt.xlim([1.8,4.2])
#tag mu
x = np.full(2,3.040)
plt.plot(x,[-0.15,2.0],'--',label = '$\mu$ = 3.040')
#tag mean
x = np.full(2,mean)
plt.plot(x,[-0.15, p],'--',label = 'mean = 2.363, p=0.199', color = 'g')
y = np.full(2,p)
plt.plot([1.8,mean],y,'--',color = 'g')
#tag alpha
[L,U]=stats.t.interval(.95,N-1,loc=3.040, scale=sem)
x = np.full(2,L)
plt.plot(x,[-0.15, 0.05],label = 'Rejection area based on alpha = 0.05', color = 'g')
y = np.full(2,0.05)
plt.plot([1.8,L],y,color = 'g')
plt.grid()
plt.legend(loc='upper right')
"""
Explanation: Explanation
In statistics, the p-value is the probability of observing a result at least as extreme as the one measured, assuming H0 is true. When the p-value is small enough (below the desired significance level, α), such an extreme value is very unlikely under H0 (though not impossible), so we are justified in rejecting H0 and concluding in favor of Ha. In contrast, when the p-value is larger than α, we can only state that we do not have enough evidence to reject H0.
In this case, the p-value is 0.0192, which is smaller than α = 0.05, so we reject H0 in favor of Ha. That is, the average total usage for the 6 UW students is not equal to the national average of 3.040.
I plot the p-value concept (based on my understanding) below. We can see that the average total usage for the 6 UW students falls into the rejection region, so we reject H0.
End of explanation
"""
|
jhjungCode/pytorch-tutorial | 08_Flowers_retraining.ipynb | mit | !if [ ! -d "/tmp/flower_photos" ]; then curl http://download.tensorflow.org/example_images/flower_photos.tgz | tar xz -C /tmp ;rm /tmp/flower_photos/LICENSE.txt; fi
%matplotlib inline
"""
Explanation: Flowers retraining example
This example predicts flower species using a well-known pretrained model.
It is almost identical to the earlier MNIST example, differing in only a few respects:
Instead of digit images, the dataset is a folder of flower images organized into subfolders named by class.
We reuse a well-established neural network architecture together with its pretrained parameters.
We rebuild the network with the number of classification outputs adjusted.
We use PyTorch's ImageFolder dataset class.
python
traindir = './flower_photos'
batch_size = 8
train_loader = torch.utils.data.DataLoader(
datasets.ImageFolder(traindir,
transforms.Compose([
transforms.RandomSizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
normalize,])),
batch_size=batch_size,
shuffle=True)
We use ResNet-152, released by Microsoft, with pretrained weights.
python
model = torchvision.models.resnet152(pretrained=True)
We then reconstruct it as a new network:
```python
# don't update model parameters
for param in model.parameters() :
param.requires_grad = False
# modify the last fully connected layer
model.fc = nn.Linear(model.fc.in_features, cls_num)
```
First, run the command below to download the flower image archive and extract it (under /tmp).
End of explanation
"""
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision
from torchvision import datasets, transforms
from torch.autograd import Variable
import matplotlib.pyplot as plt
import numpy as np
is_cuda = torch.cuda.is_available() # True if CUDA is available
traindir = '/tmp/flower_photos'
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
batch_size = 256
train_loader = torch.utils.data.DataLoader(
datasets.ImageFolder(traindir,
transforms.Compose([
transforms.RandomSizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
normalize,])),
batch_size=batch_size,
shuffle=True,
num_workers=4)
cls_num = len(datasets.folder.find_classes(traindir)[0])
test_loader = torch.utils.data.DataLoader(
datasets.ImageFolder(traindir,
transforms.Compose([
transforms.RandomSizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
normalize,])),
batch_size=batch_size,
shuffle=True,
num_workers=1)
"""
Explanation: 1. Set up the input DataLoaders
Define the train loader (ImageFolder dataset, batch size 256, shuffle enabled)
Define the test loader (ImageFolder dataset, batch size 256, shuffle enabled)
End of explanation
"""
model = torchvision.models.resnet152(pretrained = True)
### don't update model parameters
for param in model.parameters() :
param.requires_grad = False
# modify the last fully connected layer
model.fc = nn.Linear(model.fc.in_features, cls_num)
fc_parameters = [
{'params': model.fc.parameters()},
]
optimizer = torch.optim.Adam(fc_parameters, lr=1e-4, weight_decay=1e-4)
loss_fn = nn.CrossEntropyLoss()
if is_cuda : model.cuda(), loss_fn.cuda()
"""
Explanation: 2. Preliminary setup
model
Two special adjustments are needed here:
1. The pretrained resnet152 classifies 1000 classes, so we reconfigure its final layer to output the 5 classes found in the flower folder.
2. We set requires_grad to False on all other layers so that only the last layer's parameters can be updated.
loss
optimizer
The optimizer is also configured to update only the reconstructed last layer.
End of explanation
"""
# training
model.train()
train_loss = []
train_accu = []
i = 0
for epoch in range(35):
for image, target in train_loader:
        image, target = Variable(image.float()), Variable(target) # wrap input image and target
        if is_cuda : image, target = image.cuda(), target.cuda()
        output = model(image) # forward pass
        loss = loss_fn(output, target) # compute loss
optimizer.zero_grad() # zero_grad
loss.backward() # calc backward grad
optimizer.step() # update parameter
pred = output.data.max(1)[1]
accuracy = pred.eq(target.data).sum()/batch_size
train_loss.append(loss.data[0])
train_accu.append(accuracy)
if i % 300 == 0:
print(i, loss.data[0])
i += 1
plt.plot(train_accu)
plt.plot(train_loss)
"""
Explanation: 3. Training loop
* (create input)
* forward pass through the model
* compute the loss
* zero the gradients
* backpropagation
* optimizer step (update model parameters)
End of explanation
"""
model.eval()
correct = 0
for image, target in test_loader:
if is_cuda : image, target = image.cuda(), target.cuda()
image, target = Variable(image, volatile=True), Variable(target)
output = model(image)
prediction = output.data.max(1)[1]
correct += prediction.eq(target.data).sum()
print('\nTest set: Accuracy: {:.2f}%'.format(100. * correct / len(test_loader.dataset)))
"""
Explanation: 4. Predict & Evaluate
End of explanation
"""
|
igabr/Metis_Projects_Chicago_2017 | 05-project-kojack/Notebook_1_DataFrame_Construction.ipynb | mit | import pandas as pd
import arrow # way better than datetime
import numpy as np
import random
import re
%run helper_functions.py
"""
Explanation: Notebook 1
This notebook contains code used to construct the dataframe that contains our raw data.
End of explanation
"""
df = pd.read_csv("tweets_formatted.txt", sep="| |", header=None)
df.shape
list_of_dicts = []
for i in range(df.shape[0]):
temp_dict = {}
temp_lst = df.iloc[i,0].split("||")
temp_dict['handle'] = temp_lst[0]
temp_dict['tweet'] = temp_lst[1]
try: #sometimes the date/time is missing - we will have to infer
temp_dict['date'] = arrow.get(temp_lst[2]).date()
except:
temp_dict['date'] = np.nan
try:
temp_dict['time'] = arrow.get(temp_lst[2]).time()
except:
temp_dict['time'] = np.nan
list_of_dicts.append(temp_dict)
list_of_dicts[0].keys()
new_df = pd.DataFrame(list_of_dicts) #magic!
new_df.head() #unsorted!
new_df.sort_values(by=['date', 'time'], ascending=False, inplace=True)
new_df.reset_index(inplace=True)
del new_df['index']
pickle_object(new_df, "new_df")
new_df.head() #sorted first on date and then on time
"""
Explanation: Above, I used the arrow library instead of datetime. In my opinion, Arrow overcomes a lot of the shortfalls and syntactic complexity of the datetime library!
Here is the documentation: https://arrow.readthedocs.io/en/latest/
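If arrow is unavailable, the date/time split used above can be approximated with the standard library. The timestamp format below is an assumed ISO-8601-like string; the real tweets file may differ.

```python
from datetime import datetime

def split_timestamp(raw):
    """Return (date, time), or (None, None) when the field is missing or malformed."""
    try:
        dt = datetime.strptime(raw, "%Y-%m-%dT%H:%M:%S")
        return dt.date(), dt.time()
    except (TypeError, ValueError):
        return None, None

d, t = split_timestamp("2017-06-01T13:45:30")
print(d, t)                    # 2017-06-01 13:45:30
print(split_timestamp(None))   # (None, None)
```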
End of explanation
"""
sample_duplicate_indicies = []
for i in new_df.index:
if "Multiplayer #Poker" in new_df.iloc[i, 3]:
sample_duplicate_indicies.append(i)
new_df.iloc[sample_duplicate_indicies, :]
"""
Explanation: Evidence of Duplicates
It is clear that we have some duplicates. Let's first clean out the URLs.
End of explanation
"""
|
linhbngo/cpsc-4770_6770 | 11-intro-to-hadoop-03.ipynb | gpl-3.0 | !hdfs dfs -rm -r intro-to-hadoop/output-movielens-02
!yarn jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-streaming.jar \
-input /repository/movielens/ratings.csv \
-output intro-to-hadoop/output-movielens-02 \
-file ./codes/avgRatingMapper04.py \
-mapper avgRatingMapper04.py \
-file ./codes/avgRatingReducer01.py \
-reducer avgRatingReducer01.py \
-file ./movielens/movies.csv
"""
Explanation: <center> Introduction to Hadoop MapReduce </center>
3. Optimization
First principle of optimizing Hadoop workflow: Reduce data movement in the shuffle phase
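What the streaming job does can be simulated in plain Python to make the shuffle cost visible: every key/value line a mapper emits must be sorted and moved before a reducer sees it. The three ratings records below are made up for illustration.

```python
from itertools import groupby

ratings = ["1,31,2.5", "1,1029,3.0", "2,31,4.0"]  # userId,movieId,rating

# Map: one (movieId, rating) pair per input record.
mapped = []
for line in ratings:
    _, movie, rating = line.split(",")
    mapped.append((movie, float(rating)))

# Shuffle: everything emitted by the mappers is sorted by key and moved.
shuffled = sorted(mapped)
shuffle_bytes = sum(len("%s\t%s" % (k, v)) for k, v in shuffled)

# Reduce: consume key-grouped runs, emitting one average per movie.
averages = {}
for key, group in groupby(shuffled, key=lambda kv: kv[0]):
    vals = [v for _, v in group]
    averages[key] = sum(vals) / len(vals)

print(averages, shuffle_bytes)
```

Emitting fewer or smaller pairs on the map side directly shrinks the shuffled volume, which is the lever the rest of this section pulls.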
End of explanation
"""
%%writefile codes/avgRatingReducer02.py
#!/usr/bin/env python
import sys
import csv
movieFile = "./movies.csv"
movieList = {}
with open(movieFile, mode = 'r') as infile:
reader = csv.reader(infile)
for row in reader:
movieList[row[0]] = {}
movieList[row[0]]["title"] = row[1]
movieList[row[0]]["genre"] = row[2]
current_movie = None
current_rating_sum = 0
current_rating_count = 0
for line in sys.stdin:
line = line.strip()
movie, rating = line.split("\t", 1)
try:
rating = float(rating)
except ValueError:
continue
if current_movie == movie:
current_rating_sum += rating
current_rating_count += 1
else:
if current_movie:
rating_average = current_rating_sum / current_rating_count
movieTitle = movieList[current_movie]["title"]
movieGenres = movieList[current_movie]["genre"]
print ("%s\t%s\t%s" % (movieTitle, rating_average, movieGenres))
current_movie = movie
current_rating_sum = rating
current_rating_count = 1
if current_movie == movie:
rating_average = current_rating_sum / current_rating_count
movieTitle = movieList[current_movie]["title"]
movieGenres = movieList[current_movie]["genre"]
print ("%s\t%s\t%s" % (movieTitle, rating_average, movieGenres))
!hdfs dfs -rm -r intro-to-hadoop/output-movielens-03
!yarn jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-streaming.jar \
-input /repository/movielens/ratings.csv \
-output intro-to-hadoop/output-movielens-03 \
-file ./codes/avgRatingMapper02.py \
-mapper avgRatingMapper02.py \
-file ./codes/avgRatingReducer02.py \
-reducer avgRatingReducer02.py \
-file ./movielens/movies.csv
!hdfs dfs -ls intro-to-hadoop/output-movielens-02
!hdfs dfs -ls intro-to-hadoop/output-movielens-03
!hdfs dfs -cat intro-to-hadoop/output-movielens-03/part-00000 \
2>/dev/null | head -n 10
"""
Explanation: What is being passed from Map to Reduce?
Can the reducer do the same thing as the mapper, that is, load external data?
If we load external data on the reduce side, do we need to do so on the map side?
End of explanation
"""
%%writefile codes/avgGenreMapper01.py
#!/usr/bin/env python
import sys
import csv
# for nonHDFS run
movieFile = "./movielens/movies.csv"
# for HDFS run
#movieFile = "./movies.csv"
movieList = {}
with open(movieFile, mode = 'r') as infile:
reader = csv.reader(infile)
for row in reader:
movieList[row[0]] = {}
movieList[row[0]]["title"] = row[1]
movieList[row[0]]["genre"] = row[2]
for oneMovie in sys.stdin:
oneMovie = oneMovie.strip()
ratingInfo = oneMovie.split(",")
try:
genreList = movieList[ratingInfo[1]]["genre"]
rating = float(ratingInfo[2])
for genre in genreList.split("|"):
print ("%s\t%s" % (genre, rating))
except ValueError:
continue
%%writefile codes/avgGenreReducer01.py
#!/usr/bin/env python
import sys
import csv
import json
current_genre = None
current_rating_sum = 0
current_rating_count = 0
for line in sys.stdin:
line = line.strip()
genre, rating = line.split("\t", 1)
if current_genre == genre:
try:
current_rating_sum += float(rating)
current_rating_count += 1
except ValueError:
continue
else:
if current_genre:
rating_average = current_rating_sum / current_rating_count
print ("%s\t%s" % (current_genre, rating_average))
current_genre = genre
try:
current_rating_sum = float(rating)
current_rating_count = 1
except ValueError:
continue
if current_genre == genre:
rating_average = current_rating_sum / current_rating_count
print ("%s\t%s" % (current_genre, rating_average))
!hdfs dfs -rm -r intro-to-hadoop/output-movielens-04
!yarn jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-streaming.jar \
-input /repository/movielens/ratings.csv \
-output intro-to-hadoop/output-movielens-04 \
-file ./codes/avgGenreMapper01.py \
-mapper avgGenreMapper01.py \
-file ./codes/avgGenreReducer01.py \
-reducer avgGenreReducer01.py \
-file ./movielens/movies.csv
!hdfs dfs -ls intro-to-hadoop/output-movielens-04
!hdfs dfs -cat intro-to-hadoop/output-movielens-04/part-00000
"""
Explanation: How does the number of shuffle bytes in this example compare to the previous example?
Find genres which have the highest average ratings over the years
Common optimization approaches:
In-mapper reduction of key/value pairs
Additional combiner function
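The first approach can be sketched in plain Python: rather than emitting one pair per rating, the mapper accumulates into a local dict and emits one partial (sum, count) per key, so far fewer records cross the network. The sample ratings are illustrative.

```python
ratings = [("Comedy", 3.0), ("Comedy", 4.0), ("Drama", 5.0), ("Comedy", 2.0)]

# Naive mapper: one key/value pair per input rating.
naive_output = list(ratings)

# In-mapper reduction: aggregate locally, emit one partial per key.
partials = {}
for genre, rating in ratings:
    total, count = partials.get(genre, (0.0, 0))
    partials[genre] = (total + rating, count + 1)
in_mapper_output = sorted(partials.items())

print(len(naive_output), "->", len(in_mapper_output))  # 4 -> 2
```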
End of explanation
"""
!hdfs dfs -cat /repository/movielens/ratings.csv 2>/dev/null \
| head -n 10
!hdfs dfs -cat /repository/movielens/ratings.csv 2>/dev/null \
| head -n 10 \
| python ./codes/avgGenreMapper01.py \
%%writefile codes/avgGenreMapper02.py
#!/usr/bin/env python
import sys
import csv
import json
# for nonHDFS run
# movieFile = "./movielens/movies.csv"
# for HDFS run
movieFile = "./movies.csv"
movieList = {}
genreList = {}
with open(movieFile, mode = 'r') as infile:
reader = csv.reader(infile)
for row in reader:
movieList[row[0]] = {}
movieList[row[0]]["title"] = row[1]
movieList[row[0]]["genre"] = row[2]
for oneMovie in sys.stdin:
oneMovie = oneMovie.strip()
ratingInfo = oneMovie.split(",")
try:
genres = movieList[ratingInfo[1]]["genre"]
rating = float(ratingInfo[2])
for genre in genres.split("|"):
if genre in genreList:
genreList[genre]["total_rating"] += rating
genreList[genre]["total_count"] += 1
else:
genreList[genre] = {}
genreList[genre]["total_rating"] = rating
genreList[genre]["total_count"] = 1
except ValueError:
continue
for genre in genreList:
print ("%s\t%s" % (genre, json.dumps(genreList[genre])))
!hdfs dfs -cat /repository/movielens/ratings.csv 2>/dev/null \
| head -n 10 \
| python ./codes/avgGenreMapper02.py \
%%writefile codes/avgGenreReducer02.py
#!/usr/bin/env python
import sys
import csv
import json
current_genre = None
current_rating_sum = 0
current_rating_count = 0
for line in sys.stdin:
line = line.strip()
genre, ratingString = line.split("\t", 1)
ratingInfo = json.loads(ratingString)
if current_genre == genre:
try:
current_rating_sum += ratingInfo["total_rating"]
current_rating_count += ratingInfo["total_count"]
except ValueError:
continue
else:
if current_genre:
rating_average = current_rating_sum / current_rating_count
print ("%s\t%s" % (current_genre, rating_average))
current_genre = genre
try:
current_rating_sum = ratingInfo["total_rating"]
current_rating_count = ratingInfo["total_count"]
except ValueError:
continue
if current_genre == genre:
rating_average = current_rating_sum / current_rating_count
print ("%s\t%s" % (current_genre, rating_average))
!hdfs dfs -cat /repository/movielens/ratings.csv 2>/dev/null \
| head -n 10 \
| python ./codes/avgGenreMapper02.py \
| sort \
| python ./codes/avgGenreReducer02.py
# make sure that the path to movies.csv is correct inside avgGenreMapper02.py
!hdfs dfs -rm -R intro-to-hadoop/output-movielens-05
!yarn jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-streaming.jar \
-input /repository/movielens/ratings.csv \
-output intro-to-hadoop/output-movielens-05 \
-file ./codes/avgGenreMapper02.py \
-mapper avgGenreMapper02.py \
-file ./codes/avgGenreReducer02.py \
-reducer avgGenreReducer02.py \
-file ./movielens/movies.csv
!hdfs dfs -cat intro-to-hadoop/output-movielens-05/part-00000
!hdfs dfs -cat intro-to-hadoop/output-movielens-04/part-00000
"""
Explanation: 2.2.1 Optimization through in-mapper reduction of Key/Value pairs
End of explanation
"""
!hdfs dfs -ls /repository/
!yarn jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-streaming.jar \
-input /repository/complete-shakespeare.txt \
-output intro-to-hadoop/output-wordcount-01 \
-file ./codes/wordcountMapper.py \
-mapper wordcountMapper.py \
-file ./codes/wordcountReducer.py \
-reducer wordcountReducer.py
!yarn jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-streaming.jar \
-input /repository/complete-shakespeare.txt \
-output intro-to-hadoop/output-wordcount-02 \
-file ./codes/wordcountMapper.py \
-mapper wordcountMapper.py \
-file ./codes/wordcountReducer.py \
-reducer wordcountReducer.py \
-combiner wordcountReducer.py
%%writefile codes/avgGenreCombiner.py
#!/usr/bin/env python
import sys
import csv
import json
genreList = {}
for line in sys.stdin:
line = line.strip()
genre, ratingString = line.split("\t", 1)
ratingInfo = json.loads(ratingString)
if genre in genreList:
genreList[genre]["total_rating"] += ratingInfo["total_rating"]
genreList[genre]["total_count"] += ratingInfo["total_count"]
else:
genreList[genre] = {}
genreList[genre]["total_rating"] = ratingInfo["total_rating"]
genreList[genre]["total_count"] = 1
for genre in genreList:
print ("%s\t%s" % (genre, json.dumps(genreList[genre])))
!hdfs dfs -rm -r intro-to-hadoop/output-movielens-06
!yarn jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-streaming.jar \
-input /repository/movielens/ratings.csv \
-output intro-to-hadoop/output-movielens-06 \
-file ./codes/avgGenreMapper02.py \
-mapper avgGenreMapper02.py \
-file ./codes/avgGenreReducer02.py \
-reducer avgGenreReducer02.py \
-file ./codes/avgGenreCombiner.py \
-combiner avgGenreCombiner.py \
-file ./movielens/movies.csv
"""
Explanation: How do the numbers of shuffle bytes differ between the two jobs?
2.2.2 Optimization through combiner function
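A combiner is a reducer-shaped function that Hadoop may run on each mapper's local output before the shuffle. Its effect can be sketched in plain Python with two toy map partitions (the data is illustrative):

```python
def combine(pairs):
    """Reducer-shaped aggregation applied locally to one mapper's output."""
    acc = {}
    for genre, (total, count) in pairs:
        t, c = acc.get(genre, (0.0, 0))
        acc[genre] = (t + total, c + count)
    return sorted(acc.items())

# Raw (genre, (rating, 1)) pairs from two map partitions.
part1 = [("Comedy", (3.0, 1)), ("Comedy", (4.0, 1))]
part2 = [("Comedy", (2.0, 1)), ("Drama", (5.0, 1))]

shuffled = combine(part1) + combine(part2)   # 3 records cross the wire, not 4
final = dict(combine(shuffled))
averages = {g: t / c for g, (t, c) in final.items()}
print(len(shuffled), averages)
```

Note the combiner must be safe to apply zero or more times, which is why it works on (sum, count) partials rather than on averages.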
End of explanation
"""
|
OpenWeavers/openanalysis | doc/OpenAnalysis/03 - Searching.ipynb | gpl-3.0 | x = list(range(10))
x
6 in x
100 in x
x.index(6)
x.index(100)
"""
Explanation: Searching Analysis
Consider a finite collection of elements. Finding whether an element exists in the collection is known as searching. The following are some comparison-based searching algorithms:
Linear Search
Binary Search
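For contrast with the binary search implemented in this notebook, a linear search with a basic-operation counter looks like this (a simple sketch):

```python
def linear_search(arr, key):
    """Scan left to right; return (found, number_of_comparisons)."""
    count = 0
    for item in arr:
        count += 1
        if item == key:
            return True, count
    return False, count

print(linear_search(list(range(10)), 6))    # (True, 7)
print(linear_search(list(range(10)), 100))  # (False, 10)
```

Linear search costs O(n) comparisons in the worst case, against O(log n) for binary search on sorted input.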
Before looking at the analysis part, we shall examine the language's built-in methods for searching.
The in operator and list.index()
We have already seen the in operator in several contexts. Let's look at how the in operator works again:
End of explanation
"""
from openanalysis.searching import SearchingAlgorithm,SearchAnalyzer
%matplotlib inline
%config InlineBackend.figure_formats={"svg", "pdf"}
"""
Explanation: Standard import statement
End of explanation
"""
class BinarySearch(SearchingAlgorithm): # Inheriting
def __init__(self):
        SearchingAlgorithm.__init__(self, "Binary Search") # Initializing with name
def search(self, arr, key):
SearchingAlgorithm.search(self, arr, key) # call base class search
low, high = 0, arr.size - 1
while low <= high:
mid = int((low + high) / 2)
self.count += 1 # Increment for each basic operation performed
if arr[mid] == key:
return True
elif arr[mid] < key:
low = mid + 1
else:
high = mid - 1
return False
"""
Explanation: SearchingAlgorithm is the base class providing the standard interface for implementing searching algorithms; SearchAnalyzer analyses the algorithm.
SearchingAlgorithm class
Any searching algorithm to be implemented has to be derived from this class. Now we shall look at the data members and member functions of this class.
Data Members
name - Name of the Searching Algorithm
count - Holds the number of basic operations performed
Member Functions
__init__(self, name): - Initializes the algorithm with a name
search(self, array, key): - The base searching function. Sets count to 0. array is a 1D numpy array; key is the element to search for.
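The contract above can be pictured with a minimal stand-in. This is a sketch of the interface, not openanalysis's actual source, and it uses a plain list instead of a numpy array:

```python
class SearchingAlgorithmSketch:
    """Illustrative shape of the base class: a name plus an operation counter."""
    def __init__(self, name):
        self.name = name
        self.count = 0
    def search(self, arr, key):
        self.count = 0  # each run starts the basic-operation counter at zero

class LinearSearchSketch(SearchingAlgorithmSketch):
    def __init__(self):
        SearchingAlgorithmSketch.__init__(self, "Linear Search")
    def search(self, arr, key):
        SearchingAlgorithmSketch.search(self, arr, key)  # reset the counter
        for item in arr:
            self.count += 1
            if item == key:
                return True
        return False

algo = LinearSearchSketch()
found = algo.search([2, 4, 6, 8], 6)
print(found, algo.count)  # True 3
```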
An example: Binary Search
Now we shall implement the class BinarySearch
End of explanation
"""
bin_visualizer = SearchAnalyzer(BinarySearch)
bin_visualizer.analyze(progress=False)
"""
Explanation: SearchAnalyzer class
This class provides the visualization and analysis methods. Let's see its methods in detail
__init__(self, searcher): Initializes visualizer with a Searching Algorithm.
searcher is a class derived from SearchingAlgorithm
analyze(self, maxpts=1000):
Plots the running time of the searching algorithm by searching in 3 cases:
key is the first element, key is the last element, key is at a random index.
The analysis is done by feeding in sorted integer arrays with sizes starting
from 100 and varying up to maxpts in steps of 100, counting the number of
basic operations.
maxpts - upper bound on the array sizes chosen for analysing efficiency
End of explanation
"""
|
ijingo/incubator-singa | doc/en/docs/notebook/model.ipynb | apache-2.0 | from singa import tensor, device, layer
#help(layer.Layer)
layer.engine='singacpp'
"""
Explanation: Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements; and to You under the Apache License, Version 2.0.
SINGA Model Classes
<img src="http://singa.apache.org/en/_static/images/singav1-sw.png" width="500px"/>
Layer
Typically, the life cycle of a layer instance includes:
1. construct the layer without input_sample_shapes, go to 2; or construct the layer with input_sample_shapes, go to 3;
2. call setup to create the parameters and set up other meta fields;
3. initialize the parameters of the layer;
4. call forward or access layer members;
5. call backward and get parameters for update.
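A framework-free stub of that life cycle (illustrative only; the method signatures are simplified and not SINGA's real API):

```python
class ToyLayer:
    """Stub tracing the life cycle: construct -> setup -> forward -> backward."""
    def __init__(self, name, input_sample_shape=None):
        self.name = name
        self.has_setup = False
        if input_sample_shape is not None:   # shape known at construction time
            self.setup(input_sample_shape)
    def setup(self, input_sample_shape):
        self.in_shape = input_sample_shape   # create params / meta fields here
        self.has_setup = True
    def forward(self, flag, x):
        assert self.has_setup, "setup must run before forward"
        return x                              # identity stand-in
    def backward(self, flag, dy):
        return dy, []                         # (input gradient, param gradients)

lyr = ToyLayer("dense", input_sample_shape=(2,))
y = lyr.forward(True, [1.0, 2.0])
dx, grads = lyr.backward(True, [0.1, 0.2])
print(y, dx, grads)  # [1.0, 2.0] [0.1, 0.2] []
```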
End of explanation
"""
from singa.layer import Dense, Conv2D, MaxPooling2D, Activation, BatchNormalization, Softmax
"""
Explanation: Common layers
End of explanation
"""
dense = Dense('dense', 3, input_sample_shape=(2,))
#dense.param_names()
w, b = dense.param_values()
print(w.shape, b.shape)
w.gaussian(0, 0.1)
b.set_value(0)
x = tensor.Tensor((2,2))
x.uniform(-1, 1)
y = dense.forward(True, x)
tensor.to_numpy(y)
gx, [gw, gb] = dense.backward(True, y)
print(gx.shape, gw.shape, gb.shape)
"""
Explanation: Dense Layer
End of explanation
"""
conv = Conv2D('conv', 4, 3, 1, input_sample_shape=(3, 6, 6))
print(conv.get_output_sample_shape())
"""
Explanation: Convolution Layer
End of explanation
"""
pool = MaxPooling2D('pool', 3, 2, input_sample_shape=(4, 6, 6))
print(pool.get_output_sample_shape())
"""
Explanation: Pooling Layer
End of explanation
"""
from singa.layer import Split, Merge, Slice, Concat
split = Split('split', 2, input_sample_shape=(4, 6, 6))
print(split.get_output_sample_shape())
merge = Merge('merge', input_sample_shape=(4, 6, 6))
print(merge.get_output_sample_shape())
sli = Slice('slice', 1, [2], input_sample_shape=(4, 6, 6))
print(sli.get_output_sample_shape())
concat = Concat('concat', 1, input_sample_shapes=[(3, 6, 6), (1, 6, 6)])
print(concat.get_output_sample_shape())
"""
Explanation: Branch layers
End of explanation
"""
from singa import metric
import numpy as np
x = tensor.Tensor((3, 5))
x.uniform(0, 1) # randomly generate the prediction activation
x = tensor.softmax(x) # normalize the prediction into probabilities
print(tensor.to_numpy(x))
y = tensor.from_numpy(np.array([0, 1, 3], dtype=np.int)) # set the truth
f = metric.Accuracy()
acc = f.evaluate(x, y) # averaged accuracy over all 3 samples in x
print(acc)
from singa import loss
x = tensor.Tensor((3, 5))
x.uniform(0, 1) # randomly generate the prediction activation
y = tensor.from_numpy(np.array([0, 1, 3], dtype=np.int)) # set the truth
f = loss.SoftmaxCrossEntropy()
l = f.forward(True, x, y) # l is tensor with 3 loss values
g = f.backward() # g is a tensor containing all gradients of x w.r.t l
print(l.l1())
print(tensor.to_numpy(g))
"""
Explanation: Metric and Loss
End of explanation
"""
from singa import optimizer
sgd = optimizer.SGD(lr=0.01, momentum=0.9, weight_decay=1e-4)
p = tensor.Tensor((3,5))
p.uniform(-1, 1)
g = tensor.Tensor((3,5))
g.gaussian(0, 0.01)
sgd.apply(1, g, p, 'param') # use the global lr=0.1 for epoch 1
sgd.apply_with_lr(2, 0.03, g, p, 'param') # use lr=0.03 for epoch 2
"""
Explanation: Optimizer
End of explanation
"""
from singa import net as ffnet
layer.engine = 'singacpp'
net = ffnet.FeedForwardNet(loss.SoftmaxCrossEntropy(), metric.Accuracy())
net.add(layer.Conv2D('conv1', 32, 5, 1, input_sample_shape=(3,32,32,)))
net.add(layer.Activation('relu1'))
net.add(layer.MaxPooling2D('pool1', 3, 2))
net.add(layer.Flatten('flat'))
net.add(layer.Dense('dense', 10))
# init parameters
for p in net.param_values():
if len(p.shape) == 0:
p.set_value(0)
else:
p.gaussian(0, 0.01)
print(net.param_names())
layer.engine = 'cudnn'
net = ffnet.FeedForwardNet(loss.SoftmaxCrossEntropy(), metric.Accuracy())
net.add(layer.Conv2D('conv1', 32, 5, 1, input_sample_shape=(3,32,32,)))
net.add(layer.Activation('relu1'))
net.add(layer.MaxPooling2D('pool1', 3, 2))
net.add(layer.Flatten('flat'))
net.add(layer.Dense('dense', 10))
# init parameters
for p in net.param_values():
if len(p.shape) == 0:
p.set_value(0)
else:
p.gaussian(0, 0.01)
# move net onto gpu
dev = device.create_cuda_gpu()
net.to_device(dev)
"""
Explanation: FeedForwardNet
End of explanation
"""
|
y2ee201/Deep-Learning-Nanodegree | sentiment-analysis/Sentiment Analysis with TFLearn.ipynb | mit | import pandas as pd
import numpy as np
import tensorflow as tf
import tflearn
from tflearn.data_utils import to_categorical
"""
Explanation: Sentiment analysis with TFLearn
In this notebook, we'll continue Andrew Trask's work by building a network for sentiment analysis on the movie review data. Instead of a network written with Numpy, we'll be using TFLearn, a high-level library built on top of TensorFlow. TFLearn makes it simpler to build networks just by defining the layers. It takes care of most of the details for you.
We'll start off by importing all the modules we'll need, then load and prepare the data.
End of explanation
"""
reviews = pd.read_csv('reviews.txt', header=None)
labels = pd.read_csv('labels.txt', header=None)
"""
Explanation: Preparing the data
Following along with Andrew, our goal here is to convert our reviews into word vectors. The word vectors will have elements representing words in the total vocabulary. If the second position represents the word 'the', for each review we'll count up the number of times 'the' appears in the text and set the second position to that count. I'll show you examples as we build the input data from the reviews data. Check out Andrew's notebook and video for more about this.
Read the data
Use the pandas library to read the reviews and postive/negative labels from comma-separated files. The data we're using has already been preprocessed a bit and we know it uses only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way.
End of explanation
"""
from collections import Counter
total_counts = Counter()
words = []
for index, review in reviews.iterrows():
words = review[0].split(" ")
for word in words:
total_counts[word]+=1
print("Total words in data set: ", len(total_counts))
total_counts.most_common()
"""
Explanation: Counting word frequency
To start off we'll need to count how often each word appears in the data. We'll use this count to create a vocabulary we'll use to encode the review data. This resulting count is known as a bag of words. We'll use it to select our vocabulary and build the word vectors. You should have seen how to do this in Andrew's lesson. Try to implement it here using the Counter class.
Exercise: Create the bag of words from the reviews data and assign it to total_counts. The reviews are stored in the reviews Pandas DataFrame. If you want the reviews as a Numpy array, use reviews.values. You can iterate through the rows in the DataFrame with for idx, row in reviews.iterrows(): (documentation). When you break up the reviews into words, use .split(' ') instead of .split() so your results match ours.
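On a tiny made-up sample, the Counter pattern looks like this (note .split(" ") rather than .split()):

```python
from collections import Counter

sample_reviews = ["the movie was great", "the plot was thin"]  # stand-in data
counts = Counter()
for review in sample_reviews:
    counts.update(review.split(" "))

print(counts["the"], counts["was"], counts["movie"])  # 2 2 1
```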
End of explanation
"""
vocab = sorted(total_counts, key=total_counts.get, reverse=True)[:10000]
print(vocab[:60])
"""
Explanation: Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.
End of explanation
"""
print(vocab[-1], ': ', total_counts[vocab[-1]])
"""
Explanation: What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.
End of explanation
"""
word2idx = {vocab[i]: i for i in range(0, len(vocab))}
print(word2idx)
"""
Explanation: The last word in our vocabulary shows up 30 times in 25000 reviews. I think it's fair to say this is a tiny proportion. We are probably fine with this number of words.
Note: When you run, you may see a different word from the one shown above, but it will also have the value 30. That's because there are many words tied for that number of counts, and the Counter class does not guarantee which one will be returned in the case of a tie.
Now for each review in the data, we'll make a word vector. First we need to make a mapping of word to index, pretty easy to do with a dictionary comprehension.
Exercise: Create a dictionary called word2idx that maps each word in the vocabulary to an index. The first word in vocab has index 0, the second word has index 1, and so on.
End of explanation
"""
def text_to_vector(text):
wordvec = np.zeros(len(word2idx))
for word in text.split(" "):
if(word2idx.get(word)!=None):
wordvec[word2idx.get(word)]+=1
return wordvec
print(text_to_vector('oogamooga'))
"""
Explanation: Text to vector function
Now we can write a function that converts a some text to a word vector. The function will take a string of words as input and return a vector with the words counted up. Here's the general algorithm to do this:
Initialize the word vector with np.zeros; it should be the length of the vocabulary.
Split the input string of text into a list of words with .split(' '). Again, if you call .split() instead, you'll get slightly different results than what we show here.
For each word in that list, increment the element in the index associated with that word, which you get from word2idx.
Note: Since all words aren't in the vocab dictionary, you'll get a key error if you run into one of those words. You can use the .get method of the word2idx dictionary to specify a default returned value when you make a key error. For example, word2idx.get(word, None) returns None if word doesn't exist in the dictionary.
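On a toy vocabulary, the .get pattern from the note plays out like this (the vocabulary and text here are made up):

```python
toy_word2idx = {"the": 0, "movie": 1}        # toy vocabulary, not the real one
vec = [0] * len(toy_word2idx)
for word in "the movie the sequel".split(" "):
    idx = toy_word2idx.get(word, None)       # None for out-of-vocab words
    if idx is not None:
        vec[idx] += 1
print(vec)  # [2, 1] -- 'sequel' is silently skipped
```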
End of explanation
"""
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
"""
Explanation: If you do this right, the following code should return
```
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
array([0, 1, 0, 0, 2, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0,
0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0])
```
End of explanation
"""
word_vectors = np.zeros((len(reviews), len(vocab)), dtype=np.int_)
for ii, (_, text) in enumerate(reviews.iterrows()):
word_vectors[ii] = text_to_vector(text[0])
# Printing out the first 5 word vectors
word_vectors[:5, :23]
"""
Explanation: Now, run through our entire review data set and convert each review to a word vector.
End of explanation
"""
Y = (labels=='positive').astype(np.int_)
records = len(labels)
shuffle = np.arange(records)
np.random.shuffle(shuffle)
test_fraction = 0.9
train_split, test_split = shuffle[:int(records*test_fraction)], shuffle[int(records*test_fraction):]
trainX, trainY = word_vectors[train_split,:], to_categorical(Y.values[train_split], 2)
testX, testY = word_vectors[test_split,:], to_categorical(Y.values[test_split], 2)
trainY
trainX.shape
"""
Explanation: Train, Validation, Test sets
Now that we have the word_vectors, we're ready to split our data into train, validation, and test sets. Remember that we train on the train data, use the validation data to set the hyperparameters, and at the very end measure the network performance on the test data. Here we're using the function to_categorical from TFLearn to reshape the target data so that we'll have two output units and can classify with a softmax activation function. We actually won't be creating the validation set here; TFLearn will do that for us later.
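What to_categorical produces can be sketched in plain Python: each integer class label becomes a one-hot row of length n_classes.

```python
def to_categorical_sketch(labels, n_classes):
    """Plain-Python stand-in for tflearn's to_categorical."""
    rows = []
    for label in labels:
        row = [0] * n_classes
        row[label] = 1
        rows.append(row)
    return rows

print(to_categorical_sketch([0, 1, 1], 2))  # [[1, 0], [0, 1], [0, 1]]
```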
End of explanation
"""
# Network building
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
#### Your code ####
net = tflearn.input_data([None, 10000])
net = tflearn.fully_connected(net, 200, activation='ReLU')
net = tflearn.fully_connected(net, 25, activation='ReLU')
net = tflearn.fully_connected(net, 2, activation='softmax')
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
model = tflearn.DNN(net)
return model
"""
Explanation: Building the network
TFLearn lets you build the network by defining the layers.
Input layer
For the input layer, you just need to tell it how many units you have. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 input units. The first element in the list, None in this case, sets the batch size. Setting it to None here leaves it at the default batch size.
The number of inputs to your network needs to match the size of your data. For this example, we're using 10000 element long vectors to encode our input data, so we need 10000 input units.
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='ReLU')
This adds a fully connected layer where every unit in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call. It's telling the network to use the output of the previous layer as the input to this layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeated calling net = tflearn.fully_connected(net, n_units).
Output layer
The last layer you add is used as the output layer. Therefore, you need to set the number of units to match the target data. In this case we are predicting two classes, positive or negative sentiment. You also need to set the activation function so it's appropriate for your model. Again, we're trying to predict if some input data belongs to one of two classes, so we should use softmax.
net = tflearn.fully_connected(net, 2, activation='softmax')
Training
To set how you train the network, use
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords:
optimizer sets the training method, here stochastic gradient descent
learning_rate is the learning rate
loss determines how the network error is calculated. In this example, with the categorical cross-entropy.
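The softmax output and categorical cross-entropy named above come down to the following arithmetic (a numerical sketch, not tflearn code):

```python
import math

def softmax(z):
    exps = [math.exp(v) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy(probs, onehot):
    # loss = -sum_i y_i * log(p_i); only the true-class term survives
    return -sum(y * math.log(p) for y, p in zip(onehot, probs))

p = softmax([2.0, 0.5])          # two-class scores -> probabilities summing to 1
loss = cross_entropy(p, [1, 0])  # small loss when the true class gets high probability
print(round(sum(p), 6), round(loss, 4))
```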
Finally you put all this together to create the model with tflearn.DNN(net). So it ends up looking something like
net = tflearn.input_data([None, 10]) # Input
net = tflearn.fully_connected(net, 5, activation='ReLU') # Hidden
net = tflearn.fully_connected(net, 2, activation='softmax') # Output
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
model = tflearn.DNN(net)
Exercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.
End of explanation
"""
model = build_model()
"""
Explanation: Initializing the model
Next we need to call the build_model() function to actually build the model. In my solution I haven't included any arguments to the function, but you can add arguments so you can change parameters in the model if you want.
Note: You might get a bunch of warnings here. TFLearn uses a lot of deprecated code in TensorFlow. Hopefully it gets updated to the new TensorFlow version soon.
End of explanation
"""
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=128, n_epoch=50)
"""
Explanation: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively. Below is the code to fit the network to our word vectors.
You can rerun model.fit to train the network further if you think you can increase the validation accuracy. Remember, all hyperparameter adjustments must be done using the validation set. Only use the test set after you're completely done training the network.
End of explanation
"""
predictions = (np.array(model.predict(testX))[:,0] >= 0.5).astype(np.int_)
test_accuracy = np.mean(predictions == testY[:,0], axis=0)
print("Test accuracy: ", test_accuracy)
"""
Explanation: Testing
After you're satisfied with your hyperparameters, you can run the network on the test set to measure its performance. Remember, only do this after finalizing the hyperparameters.
End of explanation
"""
# Helper function that uses your model to predict sentiment
def test_sentence(sentence):
    positive_prob = model.predict([text_to_vector(sentence.lower())])[0][1]
    print('Sentence: {}'.format(sentence))
    print('P(positive) = {:.3f} :'.format(positive_prob),
          'Positive' if positive_prob > 0.5 else 'Negative')
sentence = "Moonlight is by far the best movie of 2016."
test_sentence(sentence)
sentence = "It's amazing anyone could be talented enough to make something this spectacularly awful"
test_sentence(sentence)
"""
Explanation: Try out your own text!
End of explanation
"""
|
nanodan/branca | examples/Elements.ipynb | mit | e = Element("This is fancy text")
"""
Explanation: Element
This is the base brick of branca. You can create an Element in providing a template string:
End of explanation
"""
print(e._name, e._id)
print(e.get_name())
"""
Explanation: Each element has an attribute _name and a unique _id. You also have a method get_name to get a unique string representation of the element.
End of explanation
"""
e.render()
"""
Explanation: You can render an Element using the method render:
End of explanation
"""
e = Element("Hello {{kwargs['you']}}, my name is `{{this.get_name()}}`.")
e.render(you='World')
"""
Explanation: In the template, you can use keyword this for accessing the object itself ; and the keyword kwargs for accessing any keyword argument provided in the render method:
End of explanation
"""
child = Element('This is the child.')
parent = Element('This is the parent.').add_child(child)
parent = Element('This is the parent.')
child = Element('This is the child.').add_to(parent)
"""
Explanation: Well, this is not really cool for now. What makes elements useful lies in the fact that you can create trees out of them. To do so, you can either use the method add_child or the method add_to.
End of explanation
"""
print(parent.render(), child.render())
"""
Explanation: Now, in the example above, embedding one element in the other does not change anything in the rendering.
End of explanation
"""
parent = Element("<parent>{% for child in this._children.values() %}{{child.render()}}{% endfor %}</parent>")
Element('<child1/>').add_to(parent)
Element('<child2/>').add_to(parent)
parent.render()
"""
Explanation: But you can use the tree structure in the template.
End of explanation
"""
parent = Element("<parent>{% for child in this._children.values() %}{{child.render()}}{% endfor %}</parent>")
Element('<child1/>').add_to(parent, name='child_1')
parent._children
"""
Explanation: As you can see, the children of an element are referenced in the _children attribute in the form of an OrderedDict. You can choose the key of each child by specifying a name in the add_child (or add_to) method:
End of explanation
"""
Element('<child1_overwritten/>').add_to(parent, name='child_1')
parent.render()
"""
Explanation: That way, it's possible to overwrite a child by specifying the same name:
End of explanation
"""
f = Figure()
print(f.render())
"""
Explanation: I hope you start to find it useful.
In fact, the real interest of Element lies in the classes that inherit from it. The most important one is Figure described in the next section.
Figure
A Figure represents an HTML document. It's composed of 3 parts (attributes):
header : corresponds to the <head> part of the HTML document,
html : corresponds to the <body> part,
script : corresponds to a <script> section that will be appended after the <body> section.
End of explanation
"""
f.header.add_child(Element("<style>body {background-color: #00ffff}</style>"))
f.html.add_child(Element("<h1>Hello world</h1>"))
print(f.render())
"""
Explanation: You can, for example, create a beautiful cyan "hello-world" webpage by doing:
End of explanation
"""
f.save('foo.html')
print(open('foo.html').read())
"""
Explanation: You can simply save the content of the Figure to a file, thanks to the save method:
End of explanation
"""
f
"""
Explanation: If you want to visualize it in the notebook, you can let the Figure._repr_html_ method do its job by typing:
End of explanation
"""
f.width = 300
f.height = 200
f
"""
Explanation: If this rendering is too large for you, you can force its width and height:
End of explanation
"""
Figure(figsize=(5,5))
"""
Explanation: Note that you can also define a Figure's size in a matplotlib way:
End of explanation
"""
macro = MacroElement()
macro._template = Template(
'{% macro header(this, kwargs) %}'
'This is header of {{this.get_name()}}'
'{% endmacro %}'
'{% macro html(this, kwargs) %}'
'This is html of {{this.get_name()}}'
'{% endmacro %}'
'{% macro script(this, kwargs) %}'
'This is script of {{this.get_name()}}'
'{% endmacro %}'
)
print(Figure().add_child(macro).render())
"""
Explanation: MacroElement
It happens you need to create elements that have multiple effects on a Figure. For this, you can use MacroElement whose template contains macros ; each macro writes something into the parent Figure's header, body and script.
End of explanation
"""
js_link = JavascriptLink('https://example.com/javascript.js')
js_link.render()
css_link = CssLink('https://example.com/style.css')
css_link.render()
"""
Explanation: Link
To embed javascript and css links in the header, you can use these class:
End of explanation
"""
html = Html('Hello world')
html.render()
"""
Explanation: Html
An Html element enables you to create custom div to put in the body of your page.
End of explanation
"""
Html('<b>Hello world</b>').render()
"""
Explanation: It's designed to render the text exactly as you gave it, so it won't work directly if you want to embed HTML code inside the div.
End of explanation
"""
Html('<b>Hello world</b>', script=True).render()
"""
Explanation: For this, you have to set script=True and it will work:
End of explanation
"""
iframe = IFrame('Hello World')
iframe.render()
"""
Explanation: IFrame
If you need to embed a full webpage (with separate javascript environment), you can use IFrame.
End of explanation
"""
f = Figure(height=180)
f.html.add_child(Element("Before the frame"))
f.html.add_child(IFrame('In the frame', height='100px'))
f.html.add_child(Element("After the frame"))
f
"""
Explanation: As you can see, it will embed the full content of the iframe in a base64 string so that the output looks like:
End of explanation
"""
div = Div()
div.html.add_child(Element('Hello world'))
print(Figure().add_child(div).render())
"""
Explanation: Div
At last, you have the Div element that behaves almost like Html with a few differences:
The style is put in the header, while Html's style is embedded inline.
Div inherits from MacroElement so that:
It cannot be rendered unless it's embedded in a Figure.
It is a useful object to inherit from when you create new classes.
End of explanation
"""
|
bt3gl/Machine-Learning-Resources | deep_art/deepdream/examples/00-classification.ipynb | gpl-2.0 | import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# Make sure that caffe is on the python path:
caffe_root = '../' # this file is expected to be in {caffe_root}/examples
import sys
sys.path.insert(0, caffe_root + 'python')
import caffe
plt.rcParams['figure.figsize'] = (10, 10)
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
import os
if not os.path.isfile(caffe_root + 'models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel'):
print("Downloading pre-trained CaffeNet model...")
!../scripts/download_model_binary.py ../models/bvlc_reference_caffenet
"""
Explanation: Instant Recognition with Caffe
In this example we'll classify an image with the bundled CaffeNet model based on the network architecture of Krizhevsky et al. for ImageNet. We'll compare CPU and GPU operation then reach into the model to inspect features and the output.
(These feature visualizations follow the DeCAF visualizations originally by Yangqing Jia.)
First, import required modules, set plotting parameters, and run ./scripts/download_model_binary.py models/bvlc_reference_caffenet to get the pretrained CaffeNet model if it hasn't already been fetched.
End of explanation
"""
caffe.set_mode_cpu()
net = caffe.Net(caffe_root + 'models/bvlc_reference_caffenet/deploy.prototxt',
caffe_root + 'models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel',
caffe.TEST)
# input preprocessing: 'data' is the name of the input blob == net.inputs[0]
transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2,0,1))
transformer.set_mean('data', np.load(caffe_root + 'python/caffe/imagenet/ilsvrc_2012_mean.npy').mean(1).mean(1)) # mean pixel
transformer.set_raw_scale('data', 255) # the reference model operates on images in [0,255] range instead of [0,1]
transformer.set_channel_swap('data', (2,1,0)) # the reference model has channels in BGR order instead of RGB
"""
Explanation: Set Caffe to CPU mode, load the net in the test phase for inference, and configure input preprocessing.
End of explanation
"""
# set net to batch size of 50
net.blobs['data'].reshape(50,3,227,227)
"""
Explanation: Let's start with a simple classification. We'll set a batch of 50 to demonstrate batch processing, even though we'll only be classifying one image. (Note that the batch size can also be changed on-the-fly.)
End of explanation
"""
net.blobs['data'].data[...] = transformer.preprocess('data', caffe.io.load_image(caffe_root + 'examples/images/cat.jpg'))
out = net.forward()
print("Predicted class is #{}.".format(out['prob'][0].argmax()))
"""
Explanation: Feed in the image (with some preprocessing) and classify with a forward pass.
End of explanation
"""
plt.imshow(transformer.deprocess('data', net.blobs['data'].data[0]))
"""
Explanation: What did the input look like?
End of explanation
"""
# load labels
imagenet_labels_filename = caffe_root + 'data/ilsvrc12/synset_words.txt'
try:
    labels = np.loadtxt(imagenet_labels_filename, str, delimiter='\t')
except:
    !../data/ilsvrc12/get_ilsvrc_aux.sh
    labels = np.loadtxt(imagenet_labels_filename, str, delimiter='\t')
# sort top k predictions from softmax output
top_k = net.blobs['prob'].data[0].flatten().argsort()[-1:-6:-1]
print(labels[top_k])
"""
Explanation: Adorable, but was our classification correct?
End of explanation
"""
# CPU mode
net.forward() # call once for allocation
%timeit net.forward()
"""
Explanation: Indeed! But how long did it take?
End of explanation
"""
# GPU mode
caffe.set_device(0)
caffe.set_mode_gpu()
net.forward() # call once for allocation
%timeit net.forward()
"""
Explanation: That's a while, even for a batch size of 50 images. Let's switch to GPU mode.
End of explanation
"""
[(k, v.data.shape) for k, v in net.blobs.items()]
"""
Explanation: Much better. Now let's look at the net in more detail.
First, the layer features and their shapes (the first dimension is the batch size, which we set to 50 above).
End of explanation
"""
[(k, v[0].data.shape) for k, v in net.params.items()]
"""
Explanation: The parameters and their shapes. The parameters are net.params['name'][0] while biases are net.params['name'][1].
End of explanation
"""
# take an array of shape (n, height, width) or (n, height, width, channels)
# and visualize each (height, width) thing in a grid of size approx. sqrt(n) by sqrt(n)
def vis_square(data, padsize=1, padval=0):
    data -= data.min()
    data /= data.max()
    # force the number of filters to be square
    n = int(np.ceil(np.sqrt(data.shape[0])))
    padding = ((0, n ** 2 - data.shape[0]), (0, padsize), (0, padsize)) + ((0, 0),) * (data.ndim - 3)
    data = np.pad(data, padding, mode='constant', constant_values=(padval, padval))
    # tile the filters into an image
    data = data.reshape((n, n) + data.shape[1:]).transpose((0, 2, 1, 3) + tuple(range(4, data.ndim + 1)))
    data = data.reshape((n * data.shape[1], n * data.shape[3]) + data.shape[4:])
    plt.imshow(data)
"""
Explanation: Helper functions for visualization
End of explanation
"""
# the parameters are a list of [weights, biases]
filters = net.params['conv1'][0].data
vis_square(filters.transpose(0, 2, 3, 1))
"""
Explanation: The input image
The first layer filters, conv1
End of explanation
"""
feat = net.blobs['conv1'].data[0, :36]
vis_square(feat, padval=1)
"""
Explanation: The first layer output, conv1 (rectified responses of the filters above, first 36 only)
End of explanation
"""
filters = net.params['conv2'][0].data
vis_square(filters[:48].reshape(48**2, 5, 5))
"""
Explanation: The second layer filters, conv2
There are 256 filters, each of which has dimension 5 x 5 x 48. We show only the first 48 filters, with each channel shown separately, so that each filter is a row.
End of explanation
"""
feat = net.blobs['conv2'].data[0, :36]
vis_square(feat, padval=1)
"""
Explanation: The second layer output, conv2 (rectified, only the first 36 of 256 channels)
End of explanation
"""
feat = net.blobs['conv3'].data[0]
vis_square(feat, padval=0.5)
"""
Explanation: The third layer output, conv3 (rectified, all 384 channels)
End of explanation
"""
feat = net.blobs['conv4'].data[0]
vis_square(feat, padval=0.5)
"""
Explanation: The fourth layer output, conv4 (rectified, all 384 channels)
End of explanation
"""
feat = net.blobs['conv5'].data[0]
vis_square(feat, padval=0.5)
"""
Explanation: The fifth layer output, conv5 (rectified, all 256 channels)
End of explanation
"""
feat = net.blobs['pool5'].data[0]
vis_square(feat, padval=1)
"""
Explanation: The fifth layer after pooling, pool5
End of explanation
"""
feat = net.blobs['fc6'].data[0]
plt.subplot(2, 1, 1)
plt.plot(feat.flat)
plt.subplot(2, 1, 2)
_ = plt.hist(feat.flat[feat.flat > 0], bins=100)
"""
Explanation: The first fully connected layer, fc6 (rectified)
We show the output values and the histogram of the positive values
End of explanation
"""
feat = net.blobs['fc7'].data[0]
plt.subplot(2, 1, 1)
plt.plot(feat.flat)
plt.subplot(2, 1, 2)
_ = plt.hist(feat.flat[feat.flat > 0], bins=100)
"""
Explanation: The second fully connected layer, fc7 (rectified)
End of explanation
"""
feat = net.blobs['prob'].data[0]
plt.plot(feat.flat)
"""
Explanation: The final probability output, prob
End of explanation
"""
# load labels
imagenet_labels_filename = caffe_root + 'data/ilsvrc12/synset_words.txt'
try:
    labels = np.loadtxt(imagenet_labels_filename, str, delimiter='\t')
except:
    !../data/ilsvrc12/get_ilsvrc_aux.sh
    labels = np.loadtxt(imagenet_labels_filename, str, delimiter='\t')
# sort top k predictions from softmax output
top_k = net.blobs['prob'].data[0].flatten().argsort()[-1:-6:-1]
print(labels[top_k])
"""
Explanation: Let's see the top 5 predicted labels.
End of explanation
"""
|
dtamayo/reboundx | ipython_examples/GeneralRelativity.ipynb | gpl-3.0 | import rebound
sim = rebound.Simulation()
sim.add(m=1., hash="star") # Sun
sim.add(m=1.66013e-07,a=0.387098,e=0.205630, hash="planet") # Mercury-like
sim.move_to_com() # Moves to the center of momentum frame
ps = sim.particles
sim.integrate(10.)
print("pomega = %.16f"%sim.particles[1].pomega)
"""
Explanation: Adding Post-Newtonian general relativity corrections
It's easy to add post-newtonian corrections to your REBOUND simulations with REBOUNDx. Let's start with a simulation without GR:
End of explanation
"""
import reboundx
rebx = reboundx.Extras(sim)
gr = rebx.load_force("gr")
rebx.add_force(gr)
"""
Explanation: As expected, the pericenter did not move at all. Now let's add GR
End of explanation
"""
from reboundx import constants
gr.params["c"] = constants.C
"""
Explanation: The GR effects need you to set the speed of light in the right units. The constants module has a set of constants in REBOUND's default units of AU, solar masses and yr/$2\pi$ (such that G=1). If you want to use other units, you'd need to calculate c.
End of explanation
"""
ps["star"].params["gr_source"] = 1
"""
Explanation: By default, the gr and gr_potential effects assume that the massive particle is at index 0 (with gr_full all particles are "sources" so this is not an issue). If the massive particle has a different index, or you think it might move from index 0 in the particles array (e.g. due to a custom merger routine), you can attach a gr_source flag to it to identify it as the massive particle with:
End of explanation
"""
deltat = 100.
E0 = rebx.gr_hamiltonian(gr)
sim.integrate(sim.t + deltat)
Ef = rebx.gr_hamiltonian(gr)
print("pomega = %.16f"%sim.particles[1].pomega)
juliancentury = 628.33195 # in yr/2pi
arcsec = 4.8481368e-06 # in rad
print("Rate of change of pomega = %.4f [arcsec / Julian century]"% (sim.particles[1].pomega/deltat*juliancentury/arcsec))
print("Relative error on the relativistic Hamiltonian = {0}".format(abs(Ef-E0)/abs(E0)))
"""
Explanation: Now we integrate as normal. We monitor the total Hamiltonian. Unlike other forces where we can calculate a separate potential, here and with gr_full the forces are velocity dependent, which means the momentum is not just mv in a Hamiltonian framework. So rather than using sim.calculate_energy and adding a potential, gr_hamiltonian calculates the full thing (classical Hamiltonian + gr).
End of explanation
"""
|
tcfuji/python-cn-workflow | PresentableNotebook.ipynb | mit | from pandas import Series, read_csv
from igraph import *
from numba import jit
import numpy as np
import os
import time
"""
Explanation: Motivation:
An application of the Louvain algorithm on fMRI time series data.
Necessary Packages
End of explanation
"""
# Gather all the files.
files = os.listdir('timeseries/')
# Concatenate (or stack) all the files.
# Approx 12.454981 seconds
i = 0
for f in files:
if i == 0:
ts_matrix = np.loadtxt('timeseries/' + f).T
i += 1
else:
new_ts = np.loadtxt('timeseries/' + f).T
ts_matrix = np.hstack((ts_matrix, new_ts))
"""
Explanation: Phase 1: Construction
Step 1:
Concatenate time series from all subjects into a 264 x Time matrix
where times 1-720 are from person 1, times 721 - 1441 are from person
2, etc. through person N.
End of explanation
"""
"""
Compute the correlation matrix
"""
corr_mat = np.corrcoef(ts_matrix.T)
# Save in .npz file
# np.savez_compressed('corr_mat.npz', corr_mat=corr_mat)
"""
Explanation: Step 2:
Compute a Time x Time correlation matrix using Pearson correlation
coefficients.
End of explanation
"""
# X = np.load('corr_mat.npz')
# X = X['corr_mat']
# a flatten function optimized by numba
@jit
def fast_flatten(X):
    k = 0
    length = X.shape[0] * X.shape[1]
    X_flat = np.empty(length)
    for i in range(X.shape[0]):
        for j in range(X.shape[1]):
            X_flat[k] = X[i, j]
            k += 1
    return X_flat
# helper function that returns the smallest value among the top
# `threshold` fraction of unique correlation values
def min_thresh_val(X, threshold):
    X_flat = fast_flatten(X)
    index = int(len(X_flat) * threshold)
    return np.unique(X_flat)[::-1][:index].min()
# Computes the threshold matrix without killing the python kernel
@jit
def thresh_mat(X, threshold):
    min_val = min_thresh_val(X, threshold)
    print("Done with min_thresh_val")
    # zero out (in place) every entry below the cutoff
    for i in range(X.shape[0]):
        for j in range(X.shape[1]):
            if X[i, j] < min_val:
                X[i, j] = 0
thresh_mat(X, .01)
print("Finished Threshold Matrix")
# savez_compressed('threshold_mat.npz', threshold_mat=X)
"""
Explanation: Step 3:
Threshold the Time x Time correlation matrix to retain x% of the
strongest connections.
End of explanation
"""
# from: http://stackoverflow.com/questions/29655111/igraph-graph-from-numpy-or-pandas-adjacency-matrix
# get the row, col indices of the non-zero elements in your adjacency matrix
conn_indices = np.where(X)
# get the weights corresponding to these indices
weights = X[conn_indices]
# a sequence of (i, j) tuples, each corresponding to an edge from i -> j
edges = zip(*conn_indices)
# initialize the graph from the edge sequence
G = Graph(edges=edges, directed=False)
# assign node names and weights to be attributes of the vertices and edges
# respectively
G.vs['label'] = np.arange(X.shape[0])
G.es['weight'] = weights
# get the vertex clustering corresponding to the best modularity
cm = G.community_multilevel()
# save the cluster membership of each node in a csv file
Series(cm.membership).to_csv('mem.csv', index=False)
"""
Explanation: Step 4:
Feed the thresholded Time x Time correlation matrix into igraph to
maximize modularity (a community detection technique) which will
provide us with an association of time points to brain states (a.k.a.
modules, communities, or clusters).
End of explanation
"""
def index_list(num, ind_list, ts_matrix):
    # Collect the rows of ts_matrix whose cluster label equals num.
    rows = [row for label, row in zip(ind_list, ts_matrix) if label == num]
    return np.array(rows)
louvain_ind = read_csv('mem.csv').values.T
for f in files:
    ts_matrix = np.loadtxt('timeseries/' + f).T
    for i in range(1, 65):
        # cluster membership for subject i (722 time points per subject)
        subject = louvain_ind[0][722 * (i - 1):722 * i]
        for j in range(4):
            i_list = index_list(j, subject, ts_matrix)
            avg = np.average(i_list, axis=1)
            Series(avg).to_csv("module_matrices/subject" + str(i)
                               + "mod" + str(j), index=False, sep="\t")
"""
Explanation: Phase 2: Validation
Retrieve the vector containing the list of assigned clusters at each time (0 - 46208).
Each number in the vector of cluster assignments tells you which time
point and which subject is assigned to which cluster. So now you need
to go back and time the original data for that time point. Remember
that the original data you had is a vector of 264 activities for each
time point.
End of explanation
"""
|
HAOzj/Classic-ML-Methods-Algo | ipynbs/appendix/topics_in_sklearn/sklearn构建管道.ipynb | mit | import numpy as np
from sklearn.preprocessing import FunctionTransformer
transformer = FunctionTransformer(np.log1p)
X = np.array([[0, 1], [2, 3]])
transformer.transform(X)
"""
Explanation: Building pipelines in sklearn
sklearn supports connecting multiple sklearn estimator instances with a pipeline (Pipeline). Every intermediate object in the chain must have a transform method, and the last step must be a classifier, a regressor, or another object with a transform method.
Objects with a transform method are called transformers; you can define a custom one with sklearn.preprocessing.FunctionTransformer.
End of explanation
"""
from sklearn.pipeline import Pipeline
from sklearn.naive_bayes import MultinomialNB
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.datasets import fetch_20newsgroups
categories = ['alt.atheism', 'soc.religion.christian', 'comp.graphics', 'sci.med']
twenty_train = fetch_20newsgroups(subset='train',categories=categories, shuffle=True, random_state=42)
text_clf = Pipeline([('vect', CountVectorizer()),   # tokenize and vectorize
                     ('tfidf', TfidfTransformer()), # re-weight terms with tf-idf
                     ('clf', MultinomialNB())])     # classifier
text_clf.fit(twenty_train.data, twenty_train.target)
"""
Explanation: Pipelines are typically used to wrap a vectorization (vectorizer) => transformation (transformer) => classification (classifier) sequence into one coherent process.
Example: building a naive Bayes classifier on the fetch_20newsgroups data
End of explanation
"""
import numpy as np
twenty_test = fetch_20newsgroups(subset='test',categories=categories, shuffle=True, random_state=42)
docs_test = twenty_test.data
predicted = text_clf.predict(docs_test)
np.mean(predicted == twenty_test.target)
"""
Explanation: Evaluating performance
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst | courses/machine_learning/deepdive2/introduction_to_tensorflow/solutions/int_logistic_regression.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
"""
import os
import matplotlib.pyplot as plt
import tensorflow as tf
print("TensorFlow version: {}".format(tf.__version__))
print("Eager execution: {}".format(tf.executing_eagerly()))
"""
Explanation: Custom training: walkthrough
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/customization/custom_training_walkthrough"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/customization/custom_training_walkthrough.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/customization/custom_training_walkthrough.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/customization/custom_training_walkthrough.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This guide uses machine learning to categorize Iris flowers by species. It uses TensorFlow to:
1. Build a model,
2. Train this model on example data, and
3. Use the model to make predictions about unknown data.
TensorFlow programming
This guide uses these high-level TensorFlow concepts:
Use TensorFlow's default eager execution development environment,
Import data with the Datasets API,
Build models and layers with TensorFlow's Keras API.
This tutorial is structured like many TensorFlow programs:
Import and parse the dataset.
Select the type of model.
Train the model.
Evaluate the model's effectiveness.
Use the trained model to make predictions.
Setup program
Configure imports
Import TensorFlow and the other required Python modules. By default, TensorFlow uses eager execution to evaluate operations immediately, returning concrete values instead of creating a computational graph that is executed later. If you are used to a REPL or the python interactive console, this feels familiar.
End of explanation
"""
train_dataset_url = "https://storage.googleapis.com/download.tensorflow.org/data/iris_training.csv"
train_dataset_fp = tf.keras.utils.get_file(fname=os.path.basename(train_dataset_url),
origin=train_dataset_url)
print("Local copy of the dataset file: {}".format(train_dataset_fp))
"""
Explanation: The Iris classification problem
Imagine you are a botanist seeking an automated way to categorize each Iris flower you find. Machine learning provides many algorithms to classify flowers statistically. For instance, a sophisticated machine learning program could classify flowers based on photographs. Our ambitions are more modest—we're going to classify Iris flowers based on the length and width measurements of their sepals and petals.
The Iris genus entails about 300 species, but our program will only classify the following three:
Iris setosa
Iris virginica
Iris versicolor
<table>
<tr><td>
<img src="https://www.tensorflow.org/images/iris_three_species.jpg"
alt="Petal geometry compared for three iris species: Iris setosa, Iris virginica, and Iris versicolor">
</td></tr>
<tr><td align="center">
<b>Figure 1.</b> <a href="https://commons.wikimedia.org/w/index.php?curid=170298">Iris setosa</a> (by <a href="https://commons.wikimedia.org/wiki/User:Radomil">Radomil</a>, CC BY-SA 3.0), <a href="https://commons.wikimedia.org/w/index.php?curid=248095">Iris versicolor</a>, (by <a href="https://commons.wikimedia.org/wiki/User:Dlanglois">Dlanglois</a>, CC BY-SA 3.0), and <a href="https://www.flickr.com/photos/33397993@N05/3352169862">Iris virginica</a> (by <a href="https://www.flickr.com/photos/33397993@N05">Frank Mayfield</a>, CC BY-SA 2.0).<br/>
</td></tr>
</table>
Fortunately, someone has already created a dataset of 120 Iris flowers with the sepal and petal measurements. This is a classic dataset that is popular for beginner machine learning classification problems.
Import and parse the training dataset
Download the dataset file and convert it into a structure that can be used by this Python program.
Download the dataset
Download the training dataset file using the tf.keras.utils.get_file function. This returns the file path of the downloaded file:
End of explanation
"""
!head -n5 {train_dataset_fp}
"""
Explanation: Inspect the data
This dataset, iris_training.csv, is a plain text file that stores tabular data formatted as comma-separated values (CSV). Use the head -n5 command to take a peek at the first five entries:
End of explanation
"""
# column order in CSV file
column_names = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'species']
feature_names = column_names[:-1]
label_name = column_names[-1]
print("Features: {}".format(feature_names))
print("Label: {}".format(label_name))
"""
Explanation: From this view of the dataset, notice the following:
The first line is a header containing information about the dataset:
There are 120 total examples. Each example has four features and one of three possible label names.
Subsequent rows are data records, one example per line, where:
The first four fields are features: these are the characteristics of an example. Here, the fields hold float numbers representing flower measurements.
The last column is the label: this is the value we want to predict. For this dataset, it's an integer value of 0, 1, or 2 that corresponds to a flower name.
Let's write that out in code:
End of explanation
"""
class_names = ['Iris setosa', 'Iris versicolor', 'Iris virginica']
"""
Explanation: Each label is associated with string name (for example, "setosa"), but machine learning typically relies on numeric values. The label numbers are mapped to a named representation, such as:
0: Iris setosa
1: Iris versicolor
2: Iris virginica
For more information about features and labels, see the ML Terminology section of the Machine Learning Crash Course.
End of explanation
"""
batch_size = 32
train_dataset = tf.data.experimental.make_csv_dataset(
train_dataset_fp,
batch_size,
column_names=column_names,
label_name=label_name,
num_epochs=1)
"""
Explanation: Create a tf.data.Dataset
TensorFlow's Dataset API handles many common cases for loading data into a model. This is a high-level API for reading data and transforming it into a form used for training.
Since the dataset is a CSV-formatted text file, use the tf.data.experimental.make_csv_dataset function to parse the data into a suitable format. Since this function generates data for training models, the default behavior is to shuffle the data (shuffle=True, shuffle_buffer_size=10000), and repeat the dataset forever (num_epochs=None). We also set the batch_size parameter:
End of explanation
"""
features, labels = next(iter(train_dataset))
print(features)
"""
Explanation: The make_csv_dataset function returns a tf.data.Dataset of (features, label) pairs, where features is a dictionary: {'feature_name': value}
These Dataset objects are iterable. Let's look at a batch of features:
End of explanation
"""
plt.scatter(features['petal_length'],
features['sepal_length'],
c=labels,
cmap='viridis')
plt.xlabel("Petal length")
plt.ylabel("Sepal length")
plt.show()
"""
Explanation: Notice that like-features are grouped together, or batched. Each example row's fields are appended to the corresponding feature array. Change the batch_size to set the number of examples stored in these feature arrays.
You can start to see some clusters by plotting a few features from the batch:
End of explanation
"""
def pack_features_vector(features, labels):
"""Pack the features into a single array."""
features = tf.stack(list(features.values()), axis=1)
return features, labels
"""
Explanation: To simplify the model building step, create a function to repackage the features dictionary into a single array with shape: (batch_size, num_features).
This function uses the tf.stack method which takes values from a list of tensors and creates a combined tensor at the specified dimension:
End of explanation
"""
train_dataset = train_dataset.map(pack_features_vector)
"""
Explanation: Then use the tf.data.Dataset#map method to pack the features of each (features,label) pair into the training dataset:
End of explanation
"""
features, labels = next(iter(train_dataset))
print(features[:5])
"""
Explanation: The features element of the Dataset are now arrays with shape (batch_size, num_features). Let's look at the first few examples:
End of explanation
"""
model = tf.keras.Sequential([
tf.keras.layers.Dense(10, activation=tf.nn.relu, input_shape=(4,)), # input shape required
tf.keras.layers.Dense(10, activation=tf.nn.relu),
tf.keras.layers.Dense(3)
])
"""
Explanation: Select the type of model
Why model?
A model is a relationship between features and the label. For the Iris classification problem, the model defines the relationship between the sepal and petal measurements and the predicted Iris species. Some simple models can be described with a few lines of algebra, but complex machine learning models have a large number of parameters that are difficult to summarize.
Could you determine the relationship between the four features and the Iris species without using machine learning? That is, could you use traditional programming techniques (for example, a lot of conditional statements) to create a model? Perhaps—if you analyzed the dataset long enough to determine the relationships between petal and sepal measurements to a particular species. And this becomes difficult—maybe impossible—on more complicated datasets. A good machine learning approach determines the model for you. If you feed enough representative examples into the right machine learning model type, the program will figure out the relationships for you.
Select the model
We need to select the kind of model to train. There are many types of models and picking a good one takes experience. This tutorial uses a neural network to solve the Iris classification problem. Neural networks can find complex relationships between features and the label. It is a highly-structured graph, organized into one or more hidden layers. Each hidden layer consists of one or more neurons. There are several categories of neural networks and this program uses a dense, or fully-connected neural network: the neurons in one layer receive input connections from every neuron in the previous layer. For example, Figure 2 illustrates a dense neural network consisting of an input layer, two hidden layers, and an output layer:
<table>
<tr><td>
<img src="https://www.tensorflow.org/images/custom_estimators/full_network.png"
alt="A diagram of the network architecture: Inputs, 2 hidden layers, and outputs">
</td></tr>
<tr><td align="center">
<b>Figure 2.</b> A neural network with features, hidden layers, and predictions.<br/>
</td></tr>
</table>
When the model from Figure 2 is trained and fed an unlabeled example, it yields three predictions: the likelihood that this flower is the given Iris species. This prediction is called inference. For this example, the sum of the output predictions is 1.0. In Figure 2, this prediction breaks down as: 0.02 for Iris setosa, 0.95 for Iris versicolor, and 0.03 for Iris virginica. This means that the model predicts—with 95% probability—that an unlabeled example flower is an Iris versicolor.
Create a model using Keras
The TensorFlow tf.keras API is the preferred way to create models and layers. This makes it easy to build models and experiment while Keras handles the complexity of connecting everything together.
The tf.keras.Sequential model is a linear stack of layers. Its constructor takes a list of layer instances, in this case, two tf.keras.layers.Dense layers with 10 nodes each, and an output layer with 3 nodes representing our label predictions. The first layer's input_shape parameter corresponds to the number of features from the dataset, and is required:
End of explanation
"""
predictions = model(features)
predictions[:5]
"""
Explanation: The activation function determines the output shape of each node in the layer. These non-linearities are important—without them the model would be equivalent to a single layer. There are many tf.keras.activations, but ReLU is common for hidden layers.
The ideal number of hidden layers and neurons depends on the problem and the dataset. Like many aspects of machine learning, picking the best shape of the neural network requires a mixture of knowledge and experimentation. As a rule of thumb, increasing the number of hidden layers and neurons typically creates a more powerful model, which requires more data to train effectively.
Using the model
Let's have a quick look at what this model does to a batch of features:
End of explanation
"""
tf.nn.softmax(predictions[:5])
"""
Explanation: Here, each example returns a logit for each class.
To convert these logits to a probability for each class, use the softmax function:
End of explanation
"""
print("Prediction: {}".format(tf.argmax(predictions, axis=1)))
print(" Labels: {}".format(labels))
"""
Explanation: Taking the tf.argmax across classes gives us the predicted class index. But, the model hasn't been trained yet, so these aren't good predictions:
End of explanation
"""
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
def loss(model, x, y, training):
# training=training is needed only if there are layers with different
# behavior during training versus inference (e.g. Dropout).
y_ = model(x, training=training)
return loss_object(y_true=y, y_pred=y_)
l = loss(model, features, labels, training=False)
print("Loss test: {}".format(l))
"""
Explanation: Train the model
Training is the stage of machine learning when the model is gradually optimized, or the model learns the dataset. The goal is to learn enough about the structure of the training dataset to make predictions about unseen data. If you learn too much about the training dataset, then the predictions only work for the data it has seen and will not be generalizable. This problem is called overfitting—it's like memorizing the answers instead of understanding how to solve a problem.
The Iris classification problem is an example of supervised machine learning: the model is trained from examples that contain labels. In unsupervised machine learning, the examples don't contain labels. Instead, the model typically finds patterns among the features.
Define the loss and gradient function
Both training and evaluation stages need to calculate the model's loss. This measures how far off a model's predictions are from the desired label; in other words, how badly the model is performing. We want to minimize, or optimize, this value.
Our model will calculate its loss using the tf.keras.losses.SparseCategoricalCrossentropy function which takes the model's class probability predictions and the desired label, and returns the average loss across the examples.
End of explanation
"""
def grad(model, inputs, targets):
with tf.GradientTape() as tape:
loss_value = loss(model, inputs, targets, training=True)
return loss_value, tape.gradient(loss_value, model.trainable_variables)
"""
Explanation: Use the tf.GradientTape context to calculate the gradients used to optimize your model:
End of explanation
"""
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
"""
Explanation: Create an optimizer
An optimizer applies the computed gradients to the model's variables to minimize the loss function. You can think of the loss function as a curved surface (see Figure 3) and we want to find its lowest point by walking around. The gradients point in the direction of steepest ascent—so we'll travel the opposite way and move down the hill. By iteratively calculating the loss and gradient for each batch, we'll adjust the model during training. Gradually, the model will find the best combination of weights and bias to minimize loss. And the lower the loss, the better the model's predictions.
<table>
<tr><td>
<img src="https://cs231n.github.io/assets/nn3/opt1.gif" width="70%"
alt="Optimization algorithms visualized over time in 3D space.">
</td></tr>
<tr><td align="center">
<b>Figure 3.</b> Optimization algorithms visualized over time in 3D space.<br/>(Source: <a href="http://cs231n.github.io/neural-networks-3/">Stanford class CS231n</a>, MIT License, Image credit: <a href="https://twitter.com/alecrad">Alec Radford</a>)
</td></tr>
</table>
TensorFlow has many optimization algorithms available for training. This model uses the tf.keras.optimizers.SGD that implements the stochastic gradient descent (SGD) algorithm. The learning_rate sets the step size to take for each iteration down the hill. This is a hyperparameter that you'll commonly adjust to achieve better results.
Let's set up the optimizer:
End of explanation
"""
loss_value, grads = grad(model, features, labels)
print("Step: {}, Initial Loss: {}".format(optimizer.iterations.numpy(),
loss_value.numpy()))
optimizer.apply_gradients(zip(grads, model.trainable_variables))
print("Step: {}, Loss: {}".format(optimizer.iterations.numpy(),
loss(model, features, labels, training=True).numpy()))
"""
Explanation: We'll use this to calculate a single optimization step:
End of explanation
"""
## Note: Rerunning this cell uses the same model variables
# Keep results for plotting
train_loss_results = []
train_accuracy_results = []
num_epochs = 201
for epoch in range(num_epochs):
epoch_loss_avg = tf.keras.metrics.Mean()
epoch_accuracy = tf.keras.metrics.SparseCategoricalAccuracy()
# Training loop - using batches of 32
for x, y in train_dataset:
# Optimize the model
loss_value, grads = grad(model, x, y)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
# Track progress
epoch_loss_avg.update_state(loss_value) # Add current batch loss
# Compare predicted label to actual label
# training=True is needed only if there are layers with different
# behavior during training versus inference (e.g. Dropout).
epoch_accuracy.update_state(y, model(x, training=True))
# End epoch
train_loss_results.append(epoch_loss_avg.result())
train_accuracy_results.append(epoch_accuracy.result())
if epoch % 50 == 0:
print("Epoch {:03d}: Loss: {:.3f}, Accuracy: {:.3%}".format(epoch,
epoch_loss_avg.result(),
epoch_accuracy.result()))
"""
Explanation: Training loop
With all the pieces in place, the model is ready for training! A training loop feeds the dataset examples into the model to help it make better predictions. The following code block sets up these training steps:
Iterate each epoch. An epoch is one pass through the dataset.
Within an epoch, iterate over each example in the training Dataset grabbing its features (x) and label (y).
Using the example's features, make a prediction and compare it with the label. Measure the inaccuracy of the prediction and use that to calculate the model's loss and gradients.
Use an optimizer to update the model's variables.
Keep track of some stats for visualization.
Repeat for each epoch.
The num_epochs variable is the number of times to loop over the dataset collection. Counter-intuitively, training a model longer does not guarantee a better model. num_epochs is a hyperparameter that you can tune. Choosing the right number usually requires both experience and experimentation:
End of explanation
"""
fig, axes = plt.subplots(2, sharex=True, figsize=(12, 8))
fig.suptitle('Training Metrics')
axes[0].set_ylabel("Loss", fontsize=14)
axes[0].plot(train_loss_results)
axes[1].set_ylabel("Accuracy", fontsize=14)
axes[1].set_xlabel("Epoch", fontsize=14)
axes[1].plot(train_accuracy_results)
plt.show()
"""
Explanation: Visualize the loss function over time
While it's helpful to print out the model's training progress, it's often more helpful to see this progress. TensorBoard is a nice visualization tool that is packaged with TensorFlow, but we can create basic charts using the matplotlib module.
Interpreting these charts takes some experience, but you really want to see the loss go down and the accuracy go up:
End of explanation
"""
test_url = "https://storage.googleapis.com/download.tensorflow.org/data/iris_test.csv"
test_fp = tf.keras.utils.get_file(fname=os.path.basename(test_url),
origin=test_url)
test_dataset = tf.data.experimental.make_csv_dataset(
test_fp,
batch_size,
column_names=column_names,
label_name='species',
num_epochs=1,
shuffle=False)
test_dataset = test_dataset.map(pack_features_vector)
"""
Explanation: Evaluate the model's effectiveness
Now that the model is trained, we can get some statistics on its performance.
Evaluating means determining how effectively the model makes predictions. To determine the model's effectiveness at Iris classification, pass some sepal and petal measurements to the model and ask the model to predict what Iris species they represent. Then compare the model's predictions against the actual label. For example, a model that picked the correct species on half the input examples has an accuracy of 0.5. Figure 4 shows a slightly more effective model, getting 4 out of 5 predictions correct at 80% accuracy:
<table cellpadding="8" border="0">
<colgroup>
<col span="4" >
<col span="1" bgcolor="lightblue">
<col span="1" bgcolor="lightgreen">
</colgroup>
<tr bgcolor="lightgray">
<th colspan="4">Example features</th>
<th colspan="1">Label</th>
<th colspan="1" >Model prediction</th>
</tr>
<tr>
<td>5.9</td><td>3.0</td><td>4.3</td><td>1.5</td><td align="center">1</td><td align="center">1</td>
</tr>
<tr>
<td>6.9</td><td>3.1</td><td>5.4</td><td>2.1</td><td align="center">2</td><td align="center">2</td>
</tr>
<tr>
<td>5.1</td><td>3.3</td><td>1.7</td><td>0.5</td><td align="center">0</td><td align="center">0</td>
</tr>
<tr>
<td>6.0</td> <td>3.4</td> <td>4.5</td> <td>1.6</td> <td align="center">1</td><td align="center" bgcolor="red">2</td>
</tr>
<tr>
<td>5.5</td><td>2.5</td><td>4.0</td><td>1.3</td><td align="center">1</td><td align="center">1</td>
</tr>
<tr><td align="center" colspan="6">
<b>Figure 4.</b> An Iris classifier that is 80% accurate.<br/>
</td></tr>
</table>
Set up the test dataset
Evaluating the model is similar to training the model. The biggest difference is that the examples come from a separate test set rather than the training set. To fairly assess a model's effectiveness, the examples used to evaluate a model must be different from the examples used to train the model.
The setup for the test Dataset is similar to the setup for the training Dataset. Download the CSV text file and parse the values. Note that this time we pass shuffle=False, since shuffling isn't needed for evaluation:
End of explanation
"""
test_accuracy = tf.keras.metrics.Accuracy()
for (x, y) in test_dataset:
# training=False is needed only if there are layers with different
# behavior during training versus inference (e.g. Dropout).
logits = model(x, training=False)
prediction = tf.argmax(logits, axis=1, output_type=tf.int32)
test_accuracy(prediction, y)
print("Test set accuracy: {:.3%}".format(test_accuracy.result()))
"""
Explanation: Evaluate the model on the test dataset
Unlike the training stage, the model only evaluates a single epoch of the test data. In the following code cell, we iterate over each example in the test set and compare the model's prediction against the actual label. This is used to measure the model's accuracy across the entire test set:
End of explanation
"""
tf.stack([y,prediction],axis=1)
"""
Explanation: We can see on the last batch, for example, the model is usually correct:
End of explanation
"""
predict_dataset = tf.convert_to_tensor([
[5.1, 3.3, 1.7, 0.5,],
[5.9, 3.0, 4.2, 1.5,],
[6.9, 3.1, 5.4, 2.1]
])
# training=False is needed only if there are layers with different
# behavior during training versus inference (e.g. Dropout).
predictions = model(predict_dataset, training=False)
for i, logits in enumerate(predictions):
class_idx = tf.argmax(logits).numpy()
p = tf.nn.softmax(logits)[class_idx]
name = class_names[class_idx]
print("Example {} prediction: {} ({:4.1f}%)".format(i, name, 100*p))
"""
Explanation: Use the trained model to make predictions
We've trained a model and "proven" that it's good—but not perfect—at classifying Iris species. Now let's use the trained model to make some predictions on unlabeled examples; that is, on examples that contain features but not a label.
In real-life, the unlabeled examples could come from lots of different sources including apps, CSV files, and data feeds. For now, we're going to manually provide three unlabeled examples to predict their labels. Recall, the label numbers are mapped to a named representation as:
0: Iris setosa
1: Iris versicolor
2: Iris virginica
End of explanation
"""
|
nohmapp/acme-for-now | essential_algorithms/Graphs and Trees.ipynb | mit | '''
Depth-First Search
Here is a depth-first search for an undirected graph that may be
disconnected. The graph is written as an adjacency list.
'''
graph = {
    'a': ['b'],
    'b': ['a']
}
# Definition for a binary tree node.
# class TreeNode:
# def __init__(self, x):
# self.val = x
# self.left = None
# self.right = None
def dfs(graph, start, visited=None):
    """Recursive DFS over an adjacency list; returns the set of visited vertices."""
    if visited is None:
        visited = set()
    visited.add(start)
    for neighbor in graph[start]:
        if neighbor not in visited:
            dfs(graph, neighbor, visited)
    return visited

print(dfs(graph, 'a'))
"""
Explanation: 4.1 Route between Nodes. Given a directed graph, design an algorithm to find out whether there is a route between two nodes.
4.2 Minimal Tree. Given a sorted array with unique integer elements, write an algorithm to create a binary search tree with minimal height.
4.9 BST sequences. A binary search tree was created by traversing through an array from left to right and inserting each element. Given a binary search tree with distinct elements, print all possible arrays that could have led to this tree.
4.10 Check subtree. t1 and t2 are two very large binary trees, with t1 much bigger than t2. Create an algorithm to determine if t2 is a subtree of t1. A tree t2 is a subtree of t1 if there exists a node n in t1 such that the subtree of n is identical to t2. That is, if you cut off the tree at node n, the two trees would be identical.
4.11 Random node. You are implementing a binary tree class from scratch which, in addition to insert, find, and delete, has a method getRandomNode() which returns a random node from the tree. All nodes should be equally likely to be chosen. Design and implement an algorithm for getRandomNode, and explain how you would implement the rest of the methods.
4.12 Paths with sum. You are given a binary tree in which each node contains an integer value (which might be positive or negative). Design an algorithm to count the number of paths that sum to a given value. The path does not need to start or end at the root or a leaf, but it must go downwards (traveling only from parent nodes to child nodes).
End of explanation
"""
class Node:
def __init__(self, value):
self.right = None
self.left = None
self.val = value
def __str__(self):
return '('+str(self.left)+':L ' + "V:" + str(self.val) + " R:" + str(self.right)+')'
def buildBinaryTree(nodes):
if len(nodes) == 0:
raise ValueError('list is empty')
return binaryTree(nodes, 0, len(nodes)-1)
def binaryTree(nodes, start, end):
    if start > end:
        return None
    middle = (start + end) // 2
    root = Node(nodes[middle])
    root.left = binaryTree(nodes, start, middle - 1)
    root.right = binaryTree(nodes, middle + 1, end)
    return root
test1 = [1, 2, 3, 4, 5, 6, 7, 8]
test2 = [-1, 0, 9, 10]
#test3 = []
test4 = [0, 1, 2, 3, 3, 3, 5]
print(buildBinaryTree(test1))
"""
Explanation: Build Binary Search Tree from Sorted Array
End of explanation
"""
class Node:
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None
def checkBalanced(root):
    """Return the height of the tree, or -1 as soon as any subtree is unbalanced."""
    if root is None:
        return 0
    left = checkBalanced(root.left)
    right = checkBalanced(root.right)
    if left == -1 or right == -1 or abs(left - right) > 1:
        return -1
    return max(left, right) + 1
"""
Explanation: 4.4 Check balanced.
Implement a function to check if a binary tree is balanced. For the purposes of this question, a balanced tree is defined to be a tree such that the heights of the two subtrees of any node never differ by more than one.
End of explanation
"""
class Node:
def __init__(self, value):
self.v = value
self.right = None
self.left= None
import math

def checkBST(node):
    return checkBSThelper(node, -math.inf, math.inf)
def checkBSThelper(node, mini, maxi):
if node is None:
return True
if node.v < mini or node.v >= maxi:
return False
return checkBSThelper(node.left, mini, node.v) and checkBSThelper(node.right, node.v, maxi)
"""
Explanation: 4.5 Validate BST.
Implement a function to check if a binary tree is a binary search tree.
Solution1: In-order traversal, cannot handle duplicate values
Solution2: Pass down min/max values that each node must fall between
End of explanation
"""
class Node:
def __init__(self, data):
self.data = data
self.left = None
self.right = None
def inOrderSucc(root, n):
if n.right is not None:
#get minimum value in right subtree
return minValue(n.right)
p = n.parent
#succ is parent of leftchild to parent
while(p is not None):
if n != p.right:
break
n = p
p = p.parent
return p
def minValue(node):
current = node
while(current is not None):
if current.left is None:
break
current = current.left
return current
"""
Explanation: 4.6 Successor.
Write an algorithm to find the 'next' node of a given node in a binary search tree. You may assume that each node has a link to its parent.
The inorder successor can be defined as the node with the smallest key greater than the key of input node.
If right subtree of node is not null, then succ lies in the right subtree, get the minimum value in the right subtree.
If right subtree is null, then succ is one of the ancestors. Travel up the parent pointers until you see a node which is the left child of its parent. The parent of that node is succ.
This works if all nodes have a parent pointer.
If no parent pointer
If right subtree of node is not null, then succ lies in the right subtree. Get minimum value in the right subtree
If the right subtree is null, then start from root and search. Travel down the tree, if a node's data is greater than root's data then go right, otherwise go left.
End of explanation
"""
from collections import defaultdict
class Graph:
def __init__(self, vertices):
self.graph = defaultdict(list)
self.v = vertices
def addEdge(self, u, v):
self.graph[u].append(v)
    def topologicalSortUtil(self, v, visited, stack):
        visited[v] = 'visiting'
        for i in self.graph[v]:
            if visited[i] == 'visiting':
                raise ValueError('cycle: %s' % i)
            if visited[i] == False:
                self.topologicalSortUtil(i, visited, stack)
        visited[v] = True
        stack.insert(0, v)
def topologicalSort(self):
visited = [False]*self.v
stack = []
print(self.graph)
for i in range(self.v):
if visited[i] == False:
self.topologicalSortUtil(i, visited, stack)
print(stack)
g= Graph(6)
g.addEdge(5, 2)
g.addEdge(5, 0)
g.addEdge(4, 3)
g.addEdge(4, 1)
g.topologicalSort()
"""
Explanation: 4.7 Build order.
You are given a list of projects and a list of dependencies. All of a project's dependencies must be built before the project is. Find a build order that will allow the projects to be built. If there is no valid build order, return an error.
This is a topological sort problem. It can also be approached as a depth-first search problem, but the most important aspect is how we deal with cycles: when using a DFS approach we keep a 'visiting' label, so that if we come back to a node that is still labeled 'visiting' we have found a cycle.
End of explanation
"""
from collections import deque
class Node:
def __init__(self, val):
self.val = val
self.edges = []
def __eq__(self, other):
return self.val == other.val
def __hash__(self):
return self.val
class Graph:
def __init__(self, nodes=[]):
self.nodes = nodes
def add_node(self, val):
new_node = Node(val)
self.nodes.append(new_node)
def add_edge(self, node1, node2):
node1.edges.append(node2)
node2.edges.append(node1)
def bfs(self):
if not self.nodes:
return []
start = self.nodes[0]
visited, queue, result = set([start]), deque([start]), []
while queue:
node = queue.popleft()
result.append(node)
for nd in node.edges:
if nd not in visited:
queue.append(nd)
visited.add(nd)
return result
def dfs(self):
if not self.nodes:
return []
start = self.nodes[0]
visited, stack, result = set([start]), [start], []
while stack:
node = stack.pop()
result.append(node)
for nd in node.edges:
if nd not in visited:
stack.append(nd)
visited.add(nd)
return result
graph = Graph()
graph.add_node(5)
graph.add_node(3)
graph.add_node(8)
graph.add_node(1)
graph.add_node(9)
graph.add_node(2)
graph.add_node(10)
# 2
# /
# 5 - 3 - 8 - 9 - 10
# \ /
# 1
graph.add_edge(graph.nodes[0], graph.nodes[1])
graph.add_edge(graph.nodes[0], graph.nodes[3])
graph.add_edge(graph.nodes[1], graph.nodes[2])
graph.add_edge(graph.nodes[2], graph.nodes[3])
graph.add_edge(graph.nodes[2], graph.nodes[5])
graph.add_edge(graph.nodes[2], graph.nodes[4])
graph.add_edge(graph.nodes[4], graph.nodes[6])
dfs_result = graph.dfs()
bfs_result = graph.bfs()
print("DFS")
for i in range(len(dfs_result)):
print(dfs_result[i].val)
print("BFS")
for i in range(len(bfs_result)):
print(bfs_result[i].val)
"""
Explanation: 4.8 First common ancestor.
Design an algorithm and write code to find the first common ancestor of two nodes in a binary tree. Avoid storing additional nodes in a data structure.
There are several variations on this problem. In a binary search tree we can begin at root and we know from the values of our two nodes that the lca, or lowest common ancestor is a number between node 1 and node 2. We can use this to make decisions where we should go.
In a binary tree we do not have any information abou tthe order, so all we can do is follow each node up to the root, and see where the two nodes share a path.
BFS and DFS
A few things matter about this implementation of breadth-first search, in particular the use of a set to check membership in visited. To put user-defined objects (which are not usefully hashable by default) in a set, implement __hash__ and __eq__.
A way of approaching the difference between dfs and bfs is that dfs stores discovered nodes in a stack while bfs stores them in a queue, and each then traverses from there.
End of explanation
"""
def invert_tree(self, root):
    if root is not None:
        temp = root.left
        root.left = self.invert_tree(root.right)
        root.right = self.invert_tree(temp)
        # or, with multiple assignment:
        # root.left, root.right = (self.invert_tree(root.right),
        #                          self.invert_tree(root.left))
    return root
'''
Complete: A binary tree in which every level of the tree is fully
filled except for perhaps the last level.
'''
'''
Full: A binary tree in which every node has either zero or two
children. No nodes have only one child.
'''
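The "full" property is easy to check recursively; a sketch (using a throwaway node class for the example):

```python
class BT:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def is_full(node):
    # a tree is full if every node has either zero or two children
    if node is None:
        return True
    if (node.left is None) != (node.right is None):
        return False  # exactly one child -> not full
    return is_full(node.left) and is_full(node.right)

full_tree = BT(1, BT(2), BT(3))
not_full = BT(1, BT(2))
print(is_full(full_tree), is_full(not_full))  # True False
```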
'''
Check if two binary trees are identical
Runtime complexity: O(n)
Memory: O(h) # height of the tree
'''
def level_order_traversal(root):
if root == None:
return
q = deque()
q.append(root)
while q:
temp = q.popleft()
print(str(temp.data) + ",")
if temp.left != None:
q.append(temp.left)
if temp.right != None:
q.append(temp.right)
def are_identical(root1, root2):
if root1 == None and root2 == None:
return True
if root1 != None and root2 != None:
return (root1.data == root2.data and
are_identical(root1.left, root2.left) and
are_identical(root1.right, root2.right))
return False
'''
Clone Directed Graph
Runtime: O(n)
Memory: O(n)
'''
class Node:
def __init__(self, d):
self.data = d
self.neighbors = []
def clone_rec(root, nodes_completed):
if root == None:
return None
pNew = Node(root.data)
nodes_completed[root] = pNew
for p in root.neighbors:
x = nodes_completed.get(p)
if x == None:
pNew.neighbors += [clone_rec(p, nodes_completed)]
else:
pNew.neighbors += [x]
return pNew
def clone(root):
nodes_completed = {}
return clone_rec(root, nodes_completed)
# this is an un-directed graph, i.e.
# if there is an edge from x to y
# then there must be an edge from y to x,
# and there is no edge from a node to itself;
# hence there can be a maximum of (nodes * nodes - nodes) / 2 edges in this graph
from random import shuffle

def create_test_graph_undirected(nodes_count, edges_count):
    vertices = []
    for i in range(0, nodes_count):
        vertices += [Node(i)]
    all_edges = []
    for i in range(0, nodes_count):
        for j in range(i + 1, nodes_count):
            all_edges.append([i, j])
    shuffle(all_edges)
    for i in range(0, min(edges_count, len(all_edges))):
        edge = all_edges[i]
        vertices[edge[0]].neighbors += [vertices[edge[1]]]
        vertices[edge[1]].neighbors += [vertices[edge[0]]]
    return vertices

def print_graph(vertices):
    for n in vertices:
        print(str(n.data) + ": {", end=" ")
        for t in n.neighbors:
            print(str(t.data) + " ", end="")
        print("}")

def print_graph_rec(root, visited_nodes):
    if root == None or root in visited_nodes:
        return
    visited_nodes.add(root)
    print(str(root.data) + ": {", end=" ")
    for n in root.neighbors:
        print(str(n.data) + " ", end="")
    print("}")
    for n in root.neighbors:
        print_graph_rec(n, visited_nodes)

def print_graph(root):
    visited_nodes = set()
    print_graph_rec(root, visited_nodes)

def main():
    vertices = create_test_graph_undirected(7, 18)
    print_graph(vertices[0])
    cp = clone(vertices[0])
    print()
    print("After copy.")
    print_graph(cp)

main()
"""
Explanation: Invert a Binary Tree
Invert a binary tree from left to right. It should now be re-sorted to a descending order if traversed inorder.
End of explanation
"""
import collections
import math
class Graph:
def __init__(self):
self.vertices = set()
# makes the default value for all vertices an empty list
self.edges = collections.defaultdict(list)
self.weights = {}
def add_vertex(self, value):
self.vertices.add(value)
    def add_edge(self, from_vertex, to_vertex, distance):
        if from_vertex == to_vertex:
            return  # no self-loops allowed
        self.edges[from_vertex].append(to_vertex)
        self.weights[(from_vertex, to_vertex)] = distance
def __str__(self):
string = "Vertices: " + str(self.vertices) + "\n"
string += "Edges: " + str(self.edges) + "\n"
string += "Weights: " + str(self.weights)
return string
def dijkstra(graph, start):
# initializations
S = set()
    # delta maps each vertex v to the length of the shortest known path from start to v.
    # We initialize every vertex's distance to infinity (math.inf requires Python 3)
    delta = dict.fromkeys(list(graph.vertices), math.inf)
previous = dict.fromkeys(list(graph.vertices), None)
# then we set the path length of the start vertex to 0
delta[start] = 0
# while there exists a vertex v not in S
while S != graph.vertices:
# let v be the closest vertex that has not been visited...it will begin at 'start'
v = min((set(delta.keys()) - S), key=delta.get)
# for each neighbor of v not in S
for neighbor in set(graph.edges[v]) - S:
            new_path = delta[v] + graph.weights[v, neighbor]
            # is the new path through v shorter than the best known path to neighbor?
            if new_path < delta[neighbor]:
# since it's optimal, update the shortest path for neighbor
delta[neighbor] = new_path
# set the previous vertex of neighbor to v
previous[neighbor] = v
S.add(v)
return (delta, previous)
def shortest_path(graph, start, end):
delta, previous = dijkstra(graph, start)
path = []
vertex = end
while vertex is not None:
path.append(vertex)
vertex = previous[vertex]
path.reverse()
return path
G = Graph()
G.add_vertex('a')
G.add_vertex('b')
G.add_vertex('c')
G.add_vertex('d')
G.add_vertex('e')
G.add_edge('a', 'b', 2)
G.add_edge('a', 'c', 8)
G.add_edge('a', 'd', 5)
G.add_edge('b', 'c', 1)
G.add_edge('c', 'e', 3)
G.add_edge('d', 'e', 4)
print(G)
print(dijkstra(G, 'a'))
print(shortest_path(G, 'a', 'e'))
"""
Explanation: Dijkstra's Algorithm
Get the shortest path between source and any node in a weighted graph, called the shortest path tree. A shortest path tree is not the same as the MST
Dijkstra vs Prim:
Dijkstra greedily adds the edge with the minimal weight from the current tree to a new node, calculating the entire path distance from the source to the new node. Prim considers only the weight of the edge from the current tree to the new node. What is 'minimal' for the two algorithms is therefore different.
The result is two different trees where Dijkstra may construct a graph that has a higher total weight than the MST but a minimal weight for the paths from the root to the vertices.
Kruskal is different in that is solves a minimum spanning tree but chooses edges that may not form a tree- it merely avoids cycles and thus the partial solutions may not be connected.
Time Complexity: O((|E|+|V|))log(|V|)
Space Complexity: O(|V|), up to |v| vertices have to be stored.
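The O((|E|+|V|) log |V|) bound assumes a binary-heap priority queue in place of the linear min-scan used above. A minimal sketch of that variant, using a plain adjacency dict (the function name and data layout are illustrative, not part of the Graph class above) and the same toy edges as the example:

```python
import heapq
import math

def dijkstra_heap(adj, start):
    # adj maps each vertex to a list of (neighbor, weight) pairs
    dist = {v: math.inf for v in adj}
    dist[start] = 0
    heap = [(0, start)]
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist[v]:
            continue  # stale entry superseded by an earlier relaxation
        for w, weight in adj[v]:
            if d + weight < dist[w]:
                dist[w] = d + weight
                heapq.heappush(heap, (dist[w], w))
    return dist

adj = {'a': [('b', 2), ('c', 8), ('d', 5)],
       'b': [('a', 2), ('c', 1)],
       'c': [('a', 8), ('b', 1), ('e', 3)],
       'd': [('a', 5), ('e', 4)],
       'e': [('c', 3), ('d', 4)]}
print(dijkstra_heap(adj, 'a'))  # {'a': 0, 'b': 2, 'c': 3, 'd': 5, 'e': 6}
```

Each vertex is popped at most once and each edge pushes at most one heap entry, which is where the (|E|+|V|) log |V| term comes from.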
End of explanation
"""
import math
class Graph:
def __init__(self, vertices):
self.v = vertices
self.graph = [[0 for column in range(vertices)]
for row in range(vertices)]
def printMST(self, parent):
for i in range(1, self.v):
print(parent[i], "-", i, "\t", self.graph[i][parent[i]])
def minKey(self, key, mstSet):
minimum = math.inf
for v in range(self.v):
if key[v] < minimum and mstSet[v] == False:
minimum = key[v]
min_index = v
return min_index
def primMST(self):
key = [math.inf] * self.v #I think this should be frontier
parent = [None] * self.v
key[0] = 0
mstSet = [False] * self.v
parent[0] = -1
for _ in range(self.v):  # one new MST vertex is added per iteration
minimum = (self.minKey(key, mstSet))
mstSet[minimum] = True
# update the key of every unvisited neighbor of the newly added vertex
for v in range(self.v):
if self.graph[minimum][v] > 0 and mstSet[v] == False and key[v] > self.graph[minimum][v]:
key[v] = self.graph[minimum][v]
parent[v] = minimum
self.printMST(parent)
g = Graph(5)
g.graph = [[0, 2, 0, 6, 0],
[2, 0, 3, 8, 5],
[0, 3, 0, 0, 7],
[6, 8, 0, 0, 9],
[0, 5, 7, 9, 0],
]
g.primMST()
"""
Explanation: Prim's Algorithm
Prim's algorithm works on undirected graphs only, whereas Dijkstra's algorithm works on both directed and undirected graphs.
There might be multiple minimum spanning trees for a graph and not all edges are necessarily used.
Initialize all values with infinite and begin at the root with a value of zero
Look at all the edges connected to the nodes currently in our tree that go to vertices we haven't yet visited. Choose the lowest-weight edge.
In the case of a tie, pick one at random
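The steps above can be sketched with a heap instead of the linear minKey scan; this toy version (a standalone function, not part of the class above, and assuming a connected graph) returns the total MST weight for the same adjacency matrix:

```python
import heapq

def prim_total_weight(adj, start=0):
    # adj is an adjacency matrix where 0 means "no edge"
    n = len(adj)
    visited = [False] * n
    heap = [(0, start)]  # (weight of the edge into the tree, vertex)
    total = 0
    while heap:
        w, v = heapq.heappop(heap)
        if visited[v]:
            continue  # vertex already reached by a cheaper edge
        visited[v] = True
        total += w
        for u in range(n):
            if adj[v][u] > 0 and not visited[u]:
                heapq.heappush(heap, (adj[v][u], u))
    return total

graph = [[0, 2, 0, 6, 0],
         [2, 0, 3, 8, 5],
         [0, 3, 0, 0, 7],
         [6, 8, 0, 0, 9],
         [0, 5, 7, 9, 0]]
print(prim_total_weight(graph))  # 16
```

For this matrix the MST uses edges 0-1 (2), 1-2 (3), 1-4 (5) and 0-3 (6), for a total weight of 16.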
End of explanation
"""
class Graph:
def __init__(self, vertices):
self.V= vertices
self.graph= []
def addEdge(self,u,v,w):
self.graph.append([u, v, w])
# utility function used to print the solution
def printArr(self, dist):
print("Vertex Distance from Source")
for i in range(self.V):
print("%d \t\t %d" % (i, dist[i]))
# The main function that finds shortest distances from src to
# all other vertices using Bellman-Ford algorithm. The function
# also detects negative weight cycle
def BellmanFord(self, src):
# Step 1: Initialize distances from src to all other vertices
# as INFINITE
dist = [float("Inf")] * self.V
dist[src] = 0
# Step 2: Relax all edges |V| - 1 times. A simple shortest
# path from src to any other vertex can have at-most |V| - 1
# edges
for i in range(self.V - 1):
# Relax all edges once in this pass
for u, v, w in self.graph:
if dist[u] != float("Inf") and dist[u] + w < dist[v]:
dist[v] = dist[u] + w
# Step 3: check for negative-weight cycles. The above step
# guarantees shortest distances if graph doesn't contain
# negative weight cycle. If we get a shorter path, then there
# is a cycle.
for u, v, w in self.graph:
if dist[u] != float("Inf") and dist[u] + w < dist[v]:
print("Graph contains negative weight cycle")
return
# print all distances
self.printArr(dist)
g = Graph(5)
g.addEdge(0, 1, -1)
g.addEdge(0, 2, 4)
g.addEdge(1, 2, 3)
g.addEdge(1, 3, 2)
g.addEdge(1, 4, 2)
g.addEdge(3, 2, 5)
g.addEdge(3, 1, 1)
g.addEdge(4, 3, -3)
#Print the solution
g.BellmanFord(0)
"""
Explanation: Bellman-Ford
The Bellman-Ford algorithm finds shortest paths on a weighted directed graph and, unlike Dijkstra, it handles negative edge weights and detects negative-weight cycles. Bellman is also the creator of dynamic programming.
A way to think of Bellman-Ford is that the first run is a depth first search to do a topological sort followed by one round of Bellman-Ford where we find the shortest paths. The first round is about finding any path and discovering the nodes and the subsequent rounds are about finding short-cuts between the nodes.
Here's another bellman-ford: https://gist.github.com/joninvski/701720
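A compact edge-list version of the same algorithm, with an early exit once a full pass relaxes nothing (a common optimization; the function name and tuple layout here are illustrative). It reproduces the distances from the example above:

```python
import math

def bellman_ford(n, edges, src):
    # edges is a list of (u, v, w) tuples; returns (dist, has_negative_cycle)
    dist = [math.inf] * n
    dist[src] = 0
    for _ in range(n - 1):
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:  # no relaxation in a full pass -> distances are final
            break
    has_cycle = any(dist[u] + w < dist[v] for u, v, w in edges)
    return dist, has_cycle

edges = [(0, 1, -1), (0, 2, 4), (1, 2, 3), (1, 3, 2), (1, 4, 2),
         (3, 2, 5), (3, 1, 1), (4, 3, -3)]
print(bellman_ford(5, edges, 0))  # ([0, -1, 2, -2, 1], False)
```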
End of explanation
"""
|
dawenl/cofactor | src/Cofactorization_ML20M.ipynb | apache-2.0 | import itertools
import glob
import os
import sys
os.environ['OPENBLAS_NUM_THREADS'] = '1'
import numpy as np
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
%matplotlib inline
import pandas as pd
from scipy import sparse
import seaborn as sns
sns.set(context="paper", font_scale=1.5, rc={"lines.linewidth": 2}, font='DejaVu Serif')
import cofacto
import rec_eval
"""
Explanation: Fit CoFactor model to the binarized ML20M
End of explanation
"""
DATA_DIR = '/hdd2/dawen/data/ml-20m/pro/'
unique_uid = list()
with open(os.path.join(DATA_DIR, 'unique_uid.txt'), 'r') as f:
for line in f:
unique_uid.append(line.strip())
unique_sid = list()
with open(os.path.join(DATA_DIR, 'unique_sid.txt'), 'r') as f:
for line in f:
unique_sid.append(line.strip())
n_items = len(unique_sid)
n_users = len(unique_uid)
print n_users, n_items
def load_data(csv_file, shape=(n_users, n_items)):
tp = pd.read_csv(csv_file)
timestamps, rows, cols = np.array(tp['timestamp']), np.array(tp['uid']), np.array(tp['sid'])
seq = np.concatenate((rows[:, None], cols[:, None], np.ones((rows.size, 1), dtype='int'), timestamps[:, None]), axis=1)
data = sparse.csr_matrix((np.ones_like(rows), (rows, cols)), dtype=np.int16, shape=shape)
return data, seq
train_data, train_raw = load_data(os.path.join(DATA_DIR, 'train.csv'))
watches_per_movie = np.asarray(train_data.astype('int64').sum(axis=0)).ravel()
print("The mean (median) watches per movie is %d (%d)" % (watches_per_movie.mean(), np.median(watches_per_movie)))
user_activity = np.asarray(train_data.sum(axis=1)).ravel()
print("The mean (median) movies each user watched is %d (%d)" % (user_activity.mean(), np.median(user_activity)))
vad_data, vad_raw = load_data(os.path.join(DATA_DIR, 'validation.csv'))
plt.semilogx(1 + np.arange(n_users), -np.sort(-user_activity), 'o')
plt.ylabel('Number of items that this user clicked on')
plt.xlabel('User rank by number of consumed items')
pass
plt.semilogx(1 + np.arange(n_items), -np.sort(-watches_per_movie), 'o')
plt.ylabel('Number of users who watched this movie')
plt.xlabel('Movie rank by number of watches')
pass
"""
Explanation: Construct the positive pairwise mutual information (PPMI) matrix
Change this to wherever you saved the pre-processed data following this notebook.
End of explanation
"""
def _coord_batch(lo, hi, train_data):
rows = []
cols = []
for u in xrange(lo, hi):
for w, c in itertools.permutations(train_data[u].nonzero()[1], 2):
rows.append(w)
cols.append(c)
np.save(os.path.join(DATA_DIR, 'coo_%d_%d.npy' % (lo, hi)),
np.concatenate([np.array(rows)[:, None], np.array(cols)[:, None]], axis=1))
pass
from joblib import Parallel, delayed
batch_size = 5000
start_idx = range(0, n_users, batch_size)
end_idx = start_idx[1:] + [n_users]
Parallel(n_jobs=8)(delayed(_coord_batch)(lo, hi, train_data) for lo, hi in zip(start_idx, end_idx))
pass
X = sparse.csr_matrix((n_items, n_items), dtype='float32')
for lo, hi in zip(start_idx, end_idx):
coords = np.load(os.path.join(DATA_DIR, 'coo_%d_%d.npy' % (lo, hi)))
rows = coords[:, 0]
cols = coords[:, 1]
tmp = sparse.coo_matrix((np.ones_like(rows), (rows, cols)), shape=(n_items, n_items), dtype='float32').tocsr()
X = X + tmp
print("User %d to %d finished" % (lo, hi))
sys.stdout.flush()
"""
Explanation: Generate co-occurrence matrix based on the user's entire watching history
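The per-user permutation logic in _coord_batch above can be illustrated on a tiny example; here a Counter stands in for the sparse-matrix accumulation (toy data, not the ML20M set):

```python
import itertools
from collections import Counter

# Two toy users and the items each one watched.
user_items = [[0, 1, 2], [1, 2]]

cooc = Counter()
for items in user_items:
    # Every ordered pair of items a user consumed counts as one co-occurrence.
    for w, c in itertools.permutations(items, 2):
        cooc[(w, c)] += 1

print(cooc[(1, 2)])  # 2: items 1 and 2 co-occur for both users
print(cooc[(0, 1)])  # 1: only the first user watched both 0 and 1
```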
End of explanation
"""
np.save(os.path.join(DATA_DIR, 'coordinate_co_binary_data.npy'), X.data)
np.save(os.path.join(DATA_DIR, 'coordinate_co_binary_indices.npy'), X.indices)
np.save(os.path.join(DATA_DIR, 'coordinate_co_binary_indptr.npy'), X.indptr)
float(X.nnz) / np.prod(X.shape)
"""
Explanation: Note: Don't forget to delete all the temporary coo_LO_HI.npy files
End of explanation
"""
# Or load the co-occurrence matrix from the entire user history
dir_prefix = DATA_DIR
data = np.load(os.path.join(dir_prefix, 'coordinate_co_binary_data.npy'))
indices = np.load(os.path.join(dir_prefix, 'coordinate_co_binary_indices.npy'))
indptr = np.load(os.path.join(dir_prefix, 'coordinate_co_binary_indptr.npy'))
X = sparse.csr_matrix((data, indices, indptr), shape=(n_items, n_items))
float(X.nnz) / np.prod(X.shape)
def get_row(Y, i):
lo, hi = Y.indptr[i], Y.indptr[i + 1]
return lo, hi, Y.data[lo:hi], Y.indices[lo:hi]
count = np.asarray(X.sum(axis=1)).ravel()
n_pairs = X.data.sum()
"""
Explanation: Or load the pre-saved co-occurrence matrix
End of explanation
"""
M = X.copy()
for i in xrange(n_items):
lo, hi, d, idx = get_row(M, i)
M.data[lo:hi] = np.log(d * n_pairs / (count[i] * count[idx]))
M.data[M.data < 0] = 0
M.eliminate_zeros()
print float(M.nnz) / np.prod(M.shape)
"""
Explanation: Construct the SPPMI matrix
End of explanation
"""
# number of negative samples
k_ns = 1
M_ns = M.copy()
if k_ns > 1:
offset = np.log(k_ns)
else:
offset = 0.
M_ns.data -= offset
M_ns.data[M_ns.data < 0] = 0
M_ns.eliminate_zeros()
plt.hist(M_ns.data, bins=50)
plt.yscale('log')
pass
float(M_ns.nnz) / np.prod(M_ns.shape)
"""
Explanation: Now $M$ is the PPMI matrix. Depending on the number of negative examples $k$, we can obtain the shifted PPMI matrix as $\max(M_{wc} - \log k, 0)$
End of explanation
"""
scale = 0.03
n_components = 100
max_iter = 20
n_jobs = 8
lam_theta = lam_beta = 1e-5 * scale
lam_gamma = 1e-5
c0 = 1. * scale
c1 = 10. * scale
save_dir = os.path.join(DATA_DIR, 'ML20M_ns%d_scale%1.2E' % (k_ns, scale))
reload(cofacto)
coder = cofacto.CoFacto(n_components=n_components, max_iter=max_iter, batch_size=1000, init_std=0.01, n_jobs=n_jobs,
random_state=98765, save_params=True, save_dir=save_dir, early_stopping=True, verbose=True,
lam_theta=lam_theta, lam_beta=lam_beta, lam_gamma=lam_gamma, c0=c0, c1=c1)
coder.fit(train_data, M_ns, vad_data=vad_data, batch_users=5000, k=100)
test_data, _ = load_data(os.path.join(DATA_DIR, 'test.csv'))
test_data.data = np.ones_like(test_data.data)
n_params = len(glob.glob(os.path.join(save_dir, '*.npz')))
params = np.load(os.path.join(save_dir, 'CoFacto_K%d_iter%d.npz' % (n_components, n_params - 1)))
U, V = params['U'], params['V']
print 'Test Recall@20: %.4f' % rec_eval.recall_at_k(train_data, test_data, U, V, k=20, vad_data=vad_data)
print 'Test Recall@50: %.4f' % rec_eval.recall_at_k(train_data, test_data, U, V, k=50, vad_data=vad_data)
print 'Test NDCG@100: %.4f' % rec_eval.normalized_dcg_at_k(train_data, test_data, U, V, k=100, vad_data=vad_data)
print 'Test MAP@100: %.4f' % rec_eval.map_at_k(train_data, test_data, U, V, k=100, vad_data=vad_data)
np.savez('CoFactor_K100_ML20M.npz', U=U, V=V)
"""
Explanation: Train the model
End of explanation
"""
|
mtasende/Machine-Learning-Nanodegree-Capstone | notebooks/dev/.ipynb_checkpoints/n04B_evaluation_infrastructure-checkpoint.ipynb | mit | from predictor import evaluation as ev
from predictor.dummy_mean_predictor import DummyPredictor
predictor = DummyPredictor()
y_train_true_df, y_train_pred_df, y_val_true_df, y_val_pred_df = ev.run_single_val(x, y, ahead_days, predictor)
print(y_train_true_df.shape)
print(y_train_pred_df.shape)
print(y_val_true_df.shape)
print(y_val_pred_df.shape)
y_train_true_df.head()
y_train_pred_df.head()
y_val_true_df.head()
y_val_pred_df.head()
"""
Explanation: Get the results of a single run
End of explanation
"""
y_train_true_rs = ev.reshape_by_symbol(y_train_true_df)
print(y_train_true_rs.shape)
y_train_true_rs.head()
y_train_pred_rs = ev.reshape_by_symbol(y_train_pred_df)
print(y_train_pred_rs.shape)
y_train_pred_rs.head()
y_val_true_rs = ev.reshape_by_symbol(y_val_true_df)
print(y_val_true_rs.shape)
y_val_true_rs.head()
"""
Explanation: Done. Let's test the reshape_by_symbol function
End of explanation
"""
u = x.index.levels[0][0]
print(u)
fe.SPY_DF.sort_index().index.unique()
md = fe.SPY_DF.index.unique()
u in md
fe.add_market_days(u,6)
"""
Explanation: So, the reshape_by_symbol function seems to work with run_single_val. It could be added to it. Let's test the roll_evaluate function.
End of explanation
"""
# Getting the data
GOOD_DATA_RATIO = 0.99
data_df = pd.read_pickle('../../data/data_train_val_df.pkl')
sys.path.append('../../')
import predictor.feature_extraction as fe
import utils.preprocessing as pp
data_df = pp.drop_irrelevant_symbols(data_df, GOOD_DATA_RATIO)
train_time = -1 # In real time days
base_days = 7 # In market days
step_days = 7 # market days
ahead_days = 1 # market days
today = data_df.index[-1] # Real date
tic = time()
x, y = fe.generate_train_intervals(data_df,
train_time,
base_days,
step_days,
ahead_days,
today,
fe.feature_close_one_to_one)
toc = time()
print('Elapsed time: %i seconds.' % (toc-tic))
print(data_df.shape)
data_df.head()
SAMPLES_GOOD_DATA_RATIO = 0.9
x_y_df = pd.concat([x, y], axis=1)
x_y_df = pp.drop_irrelevant_samples(x_y_df, SAMPLES_GOOD_DATA_RATIO)
x = x_y_df.iloc[:, :-1]
y = x_y_df.iloc[:, -1]
x = pp.fill_missing(x)
x_y_df.isnull().sum()
x.isnull().sum().sum()
y.isnull().sum()
x_reshaped = ev.reshape_by_symbol(x)
x_reshaped.head()
x_reshaped.isnull().sum().max()
x.shape
x_reshaped.shape
x_reshaped[x_reshaped.notnull()]
y_train_true_df, y_train_pred_df, y_val_true_df, y_val_pred_df = ev.run_single_val(x, y, ahead_days, predictor)
from sklearn.metrics import r2_score
r2_score(y_train_true_df, y_train_pred_df, multioutput='raw_values')
tickers = y_train_true_df.index.levels[1]
tickers
y_train_true_df.loc[(slice(None), 'AAPL'),:]
from sklearn.metrics import mean_absolute_error
from sklearn.metrics import r2_score
r2_train_score = []
mae_train = []
for ticker in tickers:
y_true = y_train_true_df.loc[(slice(None), ticker),:]
y_pred = y_train_pred_df.loc[(slice(None), ticker),:]
r2_train_score.append(r2_score(y_true, y_pred))
mae_train.append(mean_absolute_error(y_true, y_pred))
np.mean(r2_train_score)
np.mean(mae_train)
train_days = 252
step_eval_days = 252
r2_train_means, r2_train_stds, y_val_true_df, y_val_pred_df = ev.roll_evaluate(x,
y,
train_days,
step_eval_days,
ahead_days,
predictor,
verbose=True)
print(len(r2_train_means))
print(len(r2_train_stds))
print(y_val_true_df.shape)
print(y_val_pred_df.shape)
plt.plot(r2_train_means)
from sklearn.metrics import mean_absolute_error
mae = mean_absolute_error(y_val_true_df, y_val_pred_df)
mae
"""
Explanation: Let's do some preliminary filtering to avoid problems
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst | courses/machine_learning/deepdive/05_review/4_preproc.ipynb | apache-2.0 | #Ensure that we have Apache Beam version installed.
!pip freeze | grep apache-beam || sudo pip install apache-beam[gcp]==2.12.0
import tensorflow as tf
import apache_beam as beam
import shutil
import os
print(tf.__version__)
"""
Explanation: Preprocessing Using Dataflow
Learning Objectives
- Creating datasets for Machine Learning using Dataflow
Introduction
While Pandas is fine for experimenting, for operationalization of your workflow, it is better to do preprocessing in Apache Beam. This will also help if you need to preprocess data in flight, since Apache Beam also allows for streaming.
End of explanation
"""
PROJECT = "cloud-training-demos" # Replace with your PROJECT
BUCKET = "cloud-training-bucket" # Replace with your BUCKET
REGION = "us-central1" # Choose an available region for Cloud MLE
TFVERSION = "1.14" # TF version for CMLE to use
import os
os.environ["BUCKET"] = BUCKET
os.environ["PROJECT"] = PROJECT
os.environ["REGION"] = REGION
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
"""
Explanation: Next, set the environment variables related to your GCP Project.
End of explanation
"""
# Create SQL query using natality data after the year 2000
query_string = """
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
FROM
publicdata.samples.natality
WHERE
year > 2000
"""
# Call BigQuery and examine in dataframe
from google.cloud import bigquery
bq = bigquery.Client(project = PROJECT)
df = bq.query(query_string + "LIMIT 100").to_dataframe()
df.head()
"""
Explanation: Save the query from earlier
The data is natality data (record of births in the US). My goal is to predict the baby's weight given a number of factors about the pregnancy and the baby's mother. Later, we will want to split the data into training and eval datasets. The hash of the year-month will be used for that.
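The month-hash split idea can be sketched in plain Python. FARM_FINGERPRINT is BigQuery-specific, so the MD5 below is only an illustrative stand-in; the point is that the assignment is deterministic per year-month, so all rows from the same month land in the same split:

```python
import hashlib

def assign_split(year, month):
    # Deterministic 80/10/10 split keyed on the concatenated year-month string.
    digest = hashlib.md5("{}{}".format(year, month).encode()).hexdigest()
    bucket = int(digest, 16) % 100
    if bucket < 80:
        return "train"
    elif bucket < 90:
        return "eval"
    return "test"

# Rows from the same month always get the same split.
print(assign_split(2005, 3) == assign_split(2005, 3))  # True
```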
End of explanation
"""
import apache_beam as beam
import datetime, os
def to_csv(rowdict):
# Pull columns from BQ and create a line
import hashlib
import copy
CSV_COLUMNS = "weight_pounds,is_male,mother_age,plurality,gestation_weeks".split(',')
# Create synthetic data where we assume that no ultrasound has been performed
# and so we don't know the sex of the baby. Let's assume that we can tell the difference
# between single and multiple births, but that the error rate in determining the exact number
# is high in the absence of an ultrasound.
no_ultrasound = copy.deepcopy(rowdict)
w_ultrasound = copy.deepcopy(rowdict)
no_ultrasound["is_male"] = "Unknown"
if rowdict["plurality"] > 1:
no_ultrasound["plurality"] = "Multiple(2+)"
else:
no_ultrasound["plurality"] = "Single(1)"
# Change the plurality column to strings
w_ultrasound["plurality"] = ["Single(1)", "Twins(2)", "Triplets(3)", "Quadruplets(4)", "Quintuplets(5)"][rowdict["plurality"] - 1]
# Write out two rows for each input row, one with ultrasound and one without
for result in [no_ultrasound, w_ultrasound]:
data = ','.join([str(result[k]) if k in result else "None" for k in CSV_COLUMNS])
yield str("{}".format(data))
def preprocess(in_test_mode):
import shutil, os, subprocess
job_name = "preprocess-babyweight-features" + "-" + datetime.datetime.now().strftime("%y%m%d-%H%M%S")
if in_test_mode:
print("Launching local job ... hang on")
OUTPUT_DIR = "./preproc"
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
os.makedirs(OUTPUT_DIR)
else:
print("Launching Dataflow job {} ... hang on".format(job_name))
OUTPUT_DIR = "gs://{0}/babyweight/preproc/".format(BUCKET)
try:
subprocess.check_call("gsutil -m rm -r {}".format(OUTPUT_DIR).split())
except:
pass
options = {
"staging_location": os.path.join(OUTPUT_DIR, "tmp", "staging"),
"temp_location": os.path.join(OUTPUT_DIR, "tmp"),
"job_name": job_name,
"project": PROJECT,
"teardown_policy": "TEARDOWN_ALWAYS",
"no_save_main_session": True
}
opts = beam.pipeline.PipelineOptions(flags = [], **options)
if in_test_mode:
RUNNER = "DirectRunner"
else:
RUNNER = "DataflowRunner"
p = beam.Pipeline(RUNNER, options = opts)
query = """
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
FROM
publicdata.samples.natality
WHERE
year > 2000
AND weight_pounds > 0
AND mother_age > 0
AND plurality > 0
AND gestation_weeks > 0
AND month > 0
"""
if in_test_mode:
query = query + " LIMIT 100"
for step in ["train", "eval"]:
if step == "train":
selquery = "SELECT * FROM ({}) WHERE ABS(MOD(hashmonth, 100)) < 80".format(query)
elif step == "eval":
selquery = "SELECT * FROM ({}) WHERE ABS(MOD(hashmonth, 100)) >= 80 AND ABS(MOD(hashmonth, 100)) < 90".format(query)
else:
selquery = "SELECT * FROM ({}) WHERE ABS(MOD(hashmonth, 100)) >= 90".format(query)
(p
| "{}_read".format(step) >> beam.io.Read(beam.io.BigQuerySource(query = selquery, use_standard_sql = True))
| "{}_csv".format(step) >> beam.FlatMap(to_csv)
| "{}_out".format(step) >> beam.io.Write(beam.io.WriteToText(os.path.join(OUTPUT_DIR, "{}.csv".format(step))))
)
job = p.run()
if in_test_mode:
job.wait_until_finish()
print("Done!")
preprocess(in_test_mode = False)
"""
Explanation: Create ML dataset using Dataflow
Let's use Cloud Dataflow to read in the BigQuery data, do some preprocessing, and write it out as CSV files.
Instead of using Beam/Dataflow, I had three other options:
Use Cloud Dataprep to visually author a Dataflow pipeline. Cloud Dataprep also allows me to explore the data, so we could have avoided much of the handcoding of Python/Seaborn calls above as well!
Read from BigQuery directly using TensorFlow.
Use the BigQuery console (http://bigquery.cloud.google.com) to run a Query and save the result as a CSV file. For larger datasets, you may have to select the option to "allow large results" and save the result into a CSV file on Google Cloud Storage.
However, in this case, I want to do some preprocessing, modifying data so that we can simulate what is known if no ultrasound has been performed. If I didn't need preprocessing, I could have used the web console. Also, I prefer to script it out rather than run queries on the user interface, so I am using Cloud Dataflow for the preprocessing.
The preprocess function below includes an arugment in_test_mode. When this is set to True, running preprocess initiates a local Beam job. This is helpful for quickly debugging your pipeline and ensuring it works before submitting a job to the Cloud. Setting in_test_mode to False will launch a processing that is happening on the Cloud. Go to the GCP webconsole to the Dataflow section and monitor the running job. It took about 20 minutes for me.
If you wish to continue without doing this step, you can copy my preprocessed output:
<pre>
gsutil -m cp -r gs://cloud-training-demos/babyweight/preproc gs://YOUR_BUCKET/
</pre>
End of explanation
"""
!gsutil ls gs://$BUCKET/babyweight/preproc/*-00000*
"""
Explanation: For a Cloud preprocessing job (i.e. setting in_test_mode to False), the above step will take 20+ minutes. Go to the GCP web console, navigate to the Dataflow section and <b>wait for the job to finish</b> before you run the follwing step.
View results
We can have a look at the elements in our bucket to see the results of our pipeline above.
End of explanation
"""
query = """
WITH CTE_Raw_Data AS (
SELECT
weight_pounds,
CAST(is_male AS STRING) AS is_male,
mother_age,
plurality,
gestation_weeks,
FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
FROM
publicdata.samples.natality
WHERE
year > 2000
AND weight_pounds > 0
AND mother_age > 0
AND plurality > 0
AND gestation_weeks > 0
AND month > 0)
-- Ultrasound
SELECT
weight_pounds,
is_male,
mother_age,
CASE
WHEN plurality = 1 THEN "Single(1)"
WHEN plurality = 2 THEN "Twins(2)"
WHEN plurality = 3 THEN "Triplets(3)"
WHEN plurality = 4 THEN "Quadruplets(4)"
WHEN plurality = 5 THEN "Quintuplets(5)"
ELSE "NULL"
END AS plurality,
gestation_weeks,
hashmonth
FROM
CTE_Raw_Data
UNION ALL
-- No ultrasound
SELECT
weight_pounds,
"Unknown" AS is_male,
mother_age,
CASE
WHEN plurality = 1 THEN "Single(1)"
WHEN plurality > 1 THEN "Multiple(2+)"
END AS plurality,
gestation_weeks,
hashmonth
FROM
CTE_Raw_Data
"""
"""
Explanation: Preprocessing with BigQuery
Create SQL query for BigQuery that will union all both the ultrasound and no ultrasound datasets.
End of explanation
"""
from google.cloud import bigquery
# Construct a BigQuery client object.
client = bigquery.Client()
# Set dataset_id to the ID of the dataset to create.
dataset_name = "temp_babyweight_dataset"
dataset_id = "{}.{}".format(client.project, dataset_name)
# Construct a full Dataset object to send to the API.
dataset = bigquery.Dataset.from_string(dataset_id)
# Specify the geographic location where the dataset should reside.
dataset.location = "US"
# Send the dataset to the API for creation.
# Raises google.api_core.exceptions.Conflict if the Dataset already
# exists within the project.
try:
dataset = client.create_dataset(dataset) # API request
print("Created dataset {}.{}".format(client.project, dataset.dataset_id))
except:
print("Dataset {}.{} already exists".format(client.project, dataset.dataset_id))
"""
Explanation: Create temporary BigQuery dataset
End of explanation
"""
job_config = bigquery.QueryJobConfig()
for step in ["train", "eval"]:
if step == "train":
selquery = "SELECT * FROM ({}) WHERE ABS(MOD(hashmonth, 100)) < 80".format(query)
elif step == "eval":
selquery = "SELECT * FROM ({}) WHERE ABS(MOD(hashmonth, 100)) >= 80 AND ABS(MOD(hashmonth, 100)) < 90".format(query)
else:
selquery = "SELECT * FROM ({}) WHERE ABS(MOD(hashmonth, 100)) >= 90".format(query)
# Set the destination table
table_name = "babyweight_{}".format(step)
table_ref = client.dataset(dataset_name).table(table_name)
job_config.destination = table_ref
job_config.write_disposition = "WRITE_TRUNCATE"
# Start the query, passing in the extra configuration.
query_job = client.query(
query=selquery,
# Location must match that of the dataset(s) referenced in the query
# and of the destination table.
location="US",
job_config=job_config) # API request - starts the query
query_job.result() # Waits for the query to finish
print("Query results loaded to table {}".format(table_ref.path))
"""
Explanation: Execute query and write to BigQuery table.
End of explanation
"""
dataset_ref = client.dataset(dataset_id=dataset_name, project=PROJECT)
for step in ["train", "eval"]:
destination_uri = "gs://{}/{}".format(BUCKET, "babyweight/bq_data/{}*.csv".format(step))
table_name = "babyweight_{}".format(step)
table_ref = dataset_ref.table(table_name)
extract_job = client.extract_table(
table_ref,
destination_uri,
# Location must match that of the source table.
location="US",
) # API request
extract_job.result() # Waits for job to complete.
print("Exported {}:{}.{} to {}".format(PROJECT, dataset_name, table_name, destination_uri))
"""
Explanation: Export BigQuery table to CSV in GCS.
End of explanation
"""
!gsutil ls gs://$BUCKET/babyweight/bq_data/*000000000000*
"""
Explanation: View results
We can have a look at the elements in our bucket to see the results of our pipeline above.
End of explanation
"""
|
plipp/informatica-pfr-2017 | nbs/3/2-Geo-Plotting-with-Cartopy-Exercise.ipynb | mit | birds = pd.read_csv('../../data/bird_tracking.csv')
birds.head()
"""
Explanation: Birds Migration Data
End of explanation
"""
# TODO
"""
Explanation: Exercise 1
The migration data of which birds (bird_names) are in the tracking dataset?
End of explanation
"""
# TODO
"""
Explanation: Exercise 2
Draw a basic plot of the track (x-axis: longitude, y-axis: latitude) of each bird:
The title of the plot should be Bird Migration.
The axes should be named.
A legend should show to which bird the single tracks belong.
End of explanation
"""
import cartopy.crs as ccrs
"""
Explanation: Exercise 3
Draw the flight route on Cartopy
```bash
conda install -c scitools cartopy=0.15.0
or (if former does not work)
conda install -c conda-forge cartopy=0.15.1
```
End of explanation
"""
plt.figure(figsize=(10,10))
ax = plt.axes(projection=ccrs.PlateCarree())
ax.coastlines()
# TODO 1: comment in/out and see what happens
# ax.set_extent((-25,20,52,10))
# TODO 2: draw the single tracks with title and legend as in Exercise 2
"""
Explanation: Exercise 3.1
Draw the flight route with PlateCarree-Projection
See TODOs, where to change/insert your code.
End of explanation
"""
import cartopy.feature as cfeature
# TODO add your code here
"""
Explanation: Exercise 3.2
Draw the flight route with Mercator-Projection
Use Features to show
- LAND
- OCEAN
- COASTLINE
- BORDERS
- LAKES
- RIVERS
See TODOs, where to change/insert your code.
End of explanation
"""
|
GeoNet/fits | examples/Notebook_4.ipynb | apache-2.0 | # Import packages
import cairosvg
import io
from PIL import Image
import matplotlib.pyplot as plt
"""
Explanation: Station location plotting using FITS (FIeld Time Series) database
In this notebook we will look at discovering the location of sites in the FITS (FIeld Time Series) database. However, as the Python packages we have used previously are not capable of handling SVG files (the format FITS uses for its maps), we will need a few extra packages to handle SVG images pulled from the web: cairosvg, io, and PIL.
Before downloading any packages or running any code, take a look at the map at the bottom of this notebook. Through some clever Python scripting you too could plot this figure, but it may prove easier to just build the URL and save the figure from the FITS web service!
If you do want to run the code, first import the packages we need:
End of explanation
"""
sites = ['AUCK', 'WGTN', 'MQZG', 'DUND']
region = 'NewZealand'
#region = '165,-48,-175,-34' # uncomment and change this if a user-defined extent is desired
"""
Explanation: We will now plot the locations of a list of sites. The FITS API for maps takes a comma separated list of sites, so we will need to shape this list before we insert it into the query. The map query takes this site list and one other argument: the plotting region. The plotting region sets the map extent, and while it is optional the alternative is to allow the automatic system to take over (which doesn't always work as expected). The regions available are:
- ChathamIsland
- LakeTaupo
- NewZealand
- NewZealandRegion
- RaoulIsland
- WhiteIsland
- latmin, longmin, latmax, longmax (i.e. a user-defined extent)
The default region is NewZealand.
Let's set our sites and region:
End of explanation
"""
# Build query
query_suffix = ''
for site in sites:
query_suffix += site + ','
query_suffix = query_suffix[:-1]
URL = 'http://fits.geonet.org.nz/map/site?sites=' + query_suffix + '&bbox=' + region + '&width=300'
# Open image from query result
# Create a file-object in the system memory
memory_stream = io.BytesIO()
# Convert the SVG returned by our query into a PNG stored in the file-object
cairosvg.svg2png(url = URL, write_to = memory_stream)
# Return the memory buffer to the start of the file object
memory_stream.seek(0)
# Use PIL to open the PNG in an easy to use format
im = Image.open(memory_stream)
# Plot the PNG using matplotlib
plt.imshow(im)
plt.show()
"""
Explanation: Now we will build the query and plot the result. As python's SVG plotting ability is limited (even with matplotlib), we will use the combination of cairosvg - which will convert our query result into a PNG image - io - which will store the PNG in the system's memory - and PIL - which will allow us to convert the PNG in system memory into a plottable object in matplotlib.
End of explanation
"""
print(URL)
"""
Explanation: In this map all water-land borders are drawn and each station's location is overlain as a red triangle. Station labels cannot be added due to the loss of information in the SVG -> PNG translation, so while this map shows where the stations are located, we cannot distinguish which station is which! For this we'll need a much more sophisticated plotting system. The best alternative is to run the cell below and to click on the URL output - this gives the map above, but in the web browser hovering your mouse over a station will show its name.
If you want to see a higher resolution map, change the width specified in the query to something >300. This width is the number of pixels wide the PNG image will be, so it can be increased - but be warned: too many pixels will make the station locations very small!
End of explanation
"""
|
RuthAngus/granola | granola/inference_explore.ipynb | mit | import pandas as pd
import numpy as np
import matplotlib.pyplot as pl
%matplotlib inline
import emcee
dw = pd.read_csv("data/dwarf.txt")
c = -3
a1, j1 = dw.age.values[:c], dw.jz.values[:c]
a2, j2 = dw.age.values[c-1:], dw.jz.values[c-1:]
x1, y1 = np.log(a1), np.log(j1)
x2, y2 = np.log(a2), np.log(j2)
xlabel, ylabel = "$\ln (\mathrm{Age, Gyr})$", "$\ln(\sigma J_z, \mathrm{Kpc kms}^{-1})$"
pl.plot(x1, y1)
pl.plot(x2, y2)
pl.xlabel(xlabel)
pl.ylabel(ylabel)
def fit_line(x, y, yerr):
AT = np.vstack((x, np.ones(len(x))))
ATA = np.dot(AT, AT.T)
return np.linalg.solve(ATA, np.dot(AT, y))
m1, c1 = fit_line(x1, y1, np.ones_like(x1)*.1)
pl.plot(x1, m1*x1 + c1, "k", ls="--")
m2, c2 = fit_line(x2, y2, np.ones_like(x2)*.1)
pl.plot(x2, m2*x2 + c2, "k", ls="--")
pl.axvline(x2[c-1], color=".7", ls="--")
print("Line break at sigma Jz =", y1[c-1], "ln age = ", x2[c-1])
print("m1 =", m1, "c1 =", c1)
"""
Explanation: How to infer gyrochronal age precision using vertical actions
I want to infer the precision on gyrochronal age as a function of vertical action. I need to marginalise over the radial velocities as I don't have these for the majority of the stars.
Here is a model of the vertical action dispersion as a function of time:
End of explanation
"""
data = pd.read_csv("data/ages_and_actions_mod.csv")
data
pl.plot(np.log(data.Jz), np.log(data.age), "k.")
pl.xlabel("ln(sigma_Jz)")
pl.ylabel("ln(Age)")
"""
Explanation: Now, infer age as function of observed vertical action dispersion.
End of explanation
"""
def lnprob(beta, *args):
"""
Given a vertical action, calculate an age.
Vertical action dispersion increases with time.
Vertical action is drawn from a Normal distribution with zero mean and
dispersion that is a function of time.
"""
ages, Jz, Jz_err = args
if beta > 0:
lnlike = np.sum(-.5*(Jz**2/(beta*ages + Jz_err**2)) - .5*np.log(2*np.pi*(beta*ages + Jz_err**2)))
if np.isfinite(lnlike):
return lnlike
else:
return -np.inf
else:
return -np.inf
beta_init = 100
args = [data.age.values, data.Jz.values, data.Jz.values*.1 + .1]
nwalkers, ndim, nsteps, bi = 64, 1, 2000, 1000
p0 = [beta_init + np.random.randn(1)*1e-4 for i in range(nwalkers)]
sampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob, args=args)
sampler.run_mcmc(p0, nsteps);
flat = np.reshape(sampler.chain[:, bi:, :], (nwalkers*(nsteps - bi), ndim))
pl.hist(flat[:, 0])
print(np.median(flat[:, 0]))
for i in range(nwalkers):
pl.plot(sampler.chain[i, bi:, 0])
"""
Explanation: $$ \ln(\mathrm{A}) = m\ln(\sigma_{Jz}^2) + c$$
$$ p(A|\sigma_A, Jz, \beta) = \ln\mathcal{L} = -\frac{1}{2}\sum_{i=1}^{N} \frac{(A_i - [m\sigma_{Jz}^2 + c])^2}{\sigma_A^2} - \frac{1}{2} \ln(2\pi \sigma_A) $$
$$ J_z^2 \sim \mathcal{N} \left(0, [\beta A + \sigma_{Jz}^2 ]\right) $$
$$ p(A, J_z, \sigma_{Jz}, \beta) = \ln\mathcal{L} = -\frac{1}{2}\sum_{i=1}^{N} \frac{J_{z,i}^2}{\beta A + \sigma_{Jz}^2} - \frac{1}{2} \ln(2\pi [\beta A + \sigma_{Jz}^2]) $$
Here is a likelihood function (also the log-prob):
End of explanation
"""
pl.plot(data.age, abs(np.random.randn(len(data.age))*(22.66*data.age + data.Jz.values*.1 + .1)), ".", alpha=.5)
pl.plot(data.age, data.Jz**2, "r.", alpha=.5)
pl.ylim(0, 100)
pl.xlim(0, 20)
pl.ylabel("$\sigma_{Jz}$")
pl.xlabel("Age")
pl.plot(np.log(data.age), np.log(abs(np.random.randn(len(data.age))*(22.66*data.age + data.Jz.values*.1 + .1))),
".", alpha=.5)
pl.plot(np.log(data.age), np.log(data.Jz), "r.", alpha=.5)
pl.ylabel("ln(sigma_Jz)")
pl.xlabel("ln(Age)")
import scipy.stats as sps
xs = np.linspace(-40, 40, 2*len(data.Jz))
d2 = np.abs(np.random.randn(len(data.age))*(1*data.age+data.Jz.values*.1+.1))
d3 = np.abs(np.random.randn(len(data.age))*2)
N = len(data.Jz)
data1, data2, data3 = np.zeros(2*N), np.zeros(2*N), np.zeros(2*N)
data1[N:], data1[:N] = data.Jz, -data.Jz[::-1]
data2[N:], data2[:N] = d2, -d2[::-1]
data3[N:], data3[:N] = d3, -d3[::-1]
kernel1 = sps.gaussian_kde(data1, bw_method=.1)
kde1 = kernel1(xs)
kernel2 = sps.gaussian_kde(data2, bw_method=.1)
kde2 = kernel2(xs)
kernel3 = sps.gaussian_kde(data3, bw_method=.1)
kde3 = kernel3(xs)
#pl.hist(data1, 200, normed=True);
#pl.hist(data2, 200, normed=True, alpha=.7);
#pl.hist(data3, 20, normed=True, alpha=.5, edgecolor="k", histtype="stepfilled");
gauss = lambda xs, A, mu, sigma: A**2*np.exp(-.5*(xs - mu)**2/sigma**2)
#pl.plot(xs, gauss(xs, .428, 0, 2), ls="--")
#pl.plot(xs, gauss(xs, .428, 0, 1*np.mean(data.age)+np.mean(data.Jz.values)*.1+.1), ls="--")
pl.plot(xs, kde1, label="Data")
pl.plot(xs, kde2, label="Model")
pl.xlim(0, 20)
pl.plot(np.log(data.age), np.log(.05+abs(np.random.randn(len(data.age))*(1*data.age+data.Jz.values*.1+.1))),
".", alpha=.5)
pl.plot(np.log(data.age), np.log(data.Jz), "r.", alpha=.5)
pl.ylabel("ln(sigma_Jz)")
pl.xlabel("ln(Age)")
pl.plot(np.log(data.age), np.random.randn(len(data.age))*(1+np.log(data.age)*.3),
".", alpha=.5)
pl.plot(np.log(data.age), np.log(data.Jz), "r.", alpha=.5)
pl.ylabel("ln(sigma_Jz)")
pl.xlabel("ln(Age)")
def lnprob(par, *args):
"""
Given a vertical action, calculate an age.
Vertical action dispersion increases with time.
Vertical action is drawn from a Normal distribution with zero mean and
dispersion that is a function of time.
"""
beta, alpha = par
ages, Jz, Jz_err = args
if beta > 0 and 0 < alpha < 100:
lnlike = np.sum(-.5*(Jz**2/(beta*ages+alpha + Jz_err**2))
- .5*np.log(2*np.pi*(beta*ages+alpha + Jz_err**2)))
if np.isfinite(lnlike):
return lnlike
else:
return -np.inf
else:
return -np.inf
par_init = [.3, 1]
m = data.age.values > 0
args = [data.age.values[m], data.Jz.values[m], data.Jz.values[m]*.35 + .1]
nwalkers, ndim, nsteps, bi = 64, len(par_init), 5000, 1000
p0 = [par_init + np.random.randn(1)*1e-4 for i in range(nwalkers)]
sampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob, args=args)
sampler.run_mcmc(p0, nsteps);
flat = np.reshape(sampler.chain[:, bi:, :], (nwalkers*(nsteps - bi), ndim))
beta, alpha = np.median(flat, axis=0)
pl.plot(np.log(data.age), np.random.randn(len(data.age))*((alpha+data.age*beta)**.5),
".", alpha=.5)
pl.plot(np.log(data.age), np.log(data.Jz), "r.", alpha=.5)
pl.ylabel("ln(sigma_Jz)")
pl.xlabel("ln(Age)")
xs = np.linspace(-40, 40, 2*len(data.Jz))
gauss = lambda xs, A, mu, sigma: A**2*np.exp(-.5*(xs - mu)**2/sigma**2)
pl.plot(xs, gauss(xs, .428, 0, (alpha+np.mean(data.age)*beta)**.5), ls="--")
pl.plot(xs, kde1, label="Data")
pl.plot(xs, kde2, label="Model")
pl.xlim(0, 20)
for i in range(nwalkers):
pl.plot(sampler.chain[i, bi:, 0])
for i in range(nwalkers):
pl.plot(sampler.chain[i, bi:, 1])
import corner
corner.corner(flat, labels=["beta", "alpha"])
"""
Explanation: So $$ J_z^2 \sim \mathcal{N}\left( 0, 22.66A + \sigma_{Jz}^2 \right) $$
End of explanation
"""
true_a, true_b = 100, 10
x = np.random.uniform(0, 10, 1000)
err = 2
y = np.random.randn(len(x))*((true_a+x*true_b)**.5) + np.random.randn(len(x))*.5
yerr = np.ones_like(y)*err
pl.errorbar(x, y, yerr=yerr, fmt="k.", alpha=.5)
pl.ylabel("Jz")
pl.xlabel("Age")
def lnprob(par, *args):
"""
Given a vertical action, calculate an age.
Vertical action dispersion increases with time.
Vertical action is drawn from a Normal distribution with zero mean and
dispersion that is a function of time.
"""
beta, alpha = par
x, y, yerr = args
if beta > 0 and 0 < alpha < 1000:
lnlike = np.sum(-.5*(y**2/(beta*x+alpha + yerr**2))
- .5*np.log(2*np.pi*(beta*x+alpha + yerr**2)))
if np.isfinite(lnlike):
return lnlike
else:
return -np.inf
else:
return -np.inf
par_init = [.3, 5]
args = [x, y, yerr]
nwalkers, ndim, nsteps, bi = 64, len(par_init), 5000, 1000
p0 = [par_init + np.random.randn(1)*1e-4 for i in range(nwalkers)]
sampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob, args=args)
sampler.run_mcmc(p0, nsteps);
flat = np.reshape(sampler.chain[:, bi:, :], (nwalkers*(nsteps - bi), ndim))
corner.corner(flat, labels=["slope", "intercept"], truths=[true_b, true_a]);
b, a = np.median(flat, axis=0)
print("b = ", true_b, b, "a = ", true_a, a)
pl.errorbar(x, y, yerr=yerr, fmt="k.", alpha=.5)
pl.plot(x, np.random.randn(len(x))*((b*x+a)**.5), "r.", alpha=.5)
pl.ylabel("Jz")
pl.xlabel("Age")
"""
Explanation: Test the LHF on some fake data.
End of explanation
"""
|
NEONInc/NEON-Data-Skills | code/Python/lidar/Calc_Biomass.ipynb | gpl-2.0 | import numpy as np
import os
import gdal, osr
import matplotlib.pyplot as plt
import sys
from scipy import ndimage as ndi
%matplotlib inline
"""
Explanation: Calculating Biomass
Background
In this lesson we will calculate the biomass for a section of the SJER site. We will be using the Canopy Height Model discrete LiDAR data product as well as field data collected by the TOS group at NEON. This lesson will calculate biomass for individual trees in the forest. The calculation of biomass consists of four primary steps:
1) Delineating individual tree crowns
2) Calculating predictor variables for all individuals
3) Collecting training data
4) Applying a regression model to estimate biomass from predictors
In this lesson we will use a watershed segmentation algorithm for delineating tree crowns (step 1) and a Random Forest (RF) machine learning algorithm for relating the predictor variables to biomass (step 4). The predictor variables were selected following suggestions by Gleason et al. (2012) and biomass estimates were determined from DBH (diameter at breast height) measurements following relationships given in Jenkins et al. (2003).
Objectives
In this lesson we will
1) Learn how to apply a Gaussian smoothing kernel for high-frequency spatial filtering
2) Apply a watershed segmentation algorithm for delineating tree crowns
3) Calculate biomass predictor variables from a CHM
4) See how to setup training data for Biomass predictions
5) Apply a Random Forest machine learning approach to calculate biomass
First we will import several of the typical libraries
End of explanation
"""
#Import biomass specific libraries
from skimage.morphology import watershed
from skimage.feature import peak_local_max
from skimage.measure import regionprops
from sklearn.ensemble import RandomForestRegressor
"""
Explanation: Next we will add libraries from scikit-image and scikit-learn, which will help with the watershed delineation, determination of predictor variables, and the Random Forest algorithm
End of explanation
"""
#Define plot band array function
def plot_band_array(band_array,image_extent,title,cmap_title,colormap,colormap_limits):
plt.imshow(band_array,extent=image_extent)
cbar = plt.colorbar(); plt.set_cmap(colormap); plt.clim(colormap_limits)
cbar.set_label(cmap_title,rotation=270,labelpad=20)
plt.title(title); ax = plt.gca()
ax.ticklabel_format(useOffset=False, style='plain')
rotatexlabels = plt.setp(ax.get_xticklabels(),rotation=90)
"""
Explanation: Define a function that will allow us to plot our spatial data
End of explanation
"""
def array2raster(newRasterfn,rasterOrigin,pixelWidth,pixelHeight,array,epsg):
cols = array.shape[1]
rows = array.shape[0]
originX = rasterOrigin[0]
originY = rasterOrigin[1]
driver = gdal.GetDriverByName('GTiff')
outRaster = driver.Create(newRasterfn, cols, rows, 1, gdal.GDT_Float32)
outRaster.SetGeoTransform((originX, pixelWidth, 0, originY, 0, pixelHeight))
outband = outRaster.GetRasterBand(1)
outband.WriteArray(array)
outRasterSRS = osr.SpatialReference()
outRasterSRS.ImportFromEPSG(epsg)
outRaster.SetProjection(outRasterSRS.ExportToWkt())
outband.FlushCache()
"""
Explanation: Define a function that will allow us to output geotiff files
End of explanation
"""
chm_file = 'C:/RSDI_2017/Day4/Biomass/NEON_D17_SJER_DP3_256000_4106000_CHM.tif'
"""
Explanation: Now we will define the file path to our CHM file
End of explanation
"""
#Get info from chm file for outputting results
just_chm_file = os.path.basename(chm_file)
just_chm_file_split = just_chm_file.split(sep="_")
#Open the CHM file with GDAL
chm_dataset = gdal.Open(chm_file)
#Get the raster band object
chm_raster = chm_dataset.GetRasterBand(1)
#Get the NO DATA value
noDataVal_chm = chm_raster.GetNoDataValue()
#Get required metadata from CHM file
cols_chm = chm_dataset.RasterXSize
rows_chm = chm_dataset.RasterYSize
bands_chm = chm_dataset.RasterCount
mapinfo_chm =chm_dataset.GetGeoTransform()
xMin = mapinfo_chm[0]
yMax = mapinfo_chm[3]
xMax = xMin + chm_dataset.RasterXSize/mapinfo_chm[1]
yMin = yMax + chm_dataset.RasterYSize/mapinfo_chm[5]
image_extent = (xMin,xMax,yMin,yMax)
"""
Explanation: We will want to output the results with the same file information as the input, so we will gather the file name information
End of explanation
"""
#Plot the original CHM
plt.figure(1)
chm_array = chm_raster.ReadAsArray(0,0,cols_chm,rows_chm).astype(np.float)
#PLot the CHM figure
plot_band_array(chm_array,image_extent,'Canopy height Model','Canopy height (m)','Greens',[0, 9])
plt.savefig(just_chm_file_split[0]+'_'+just_chm_file_split[1]+'_'+just_chm_file_split[2]+'_'+just_chm_file_split[3]+'_'+just_chm_file_split[4]+'_'+just_chm_file_split[5]+'_'+'CHM.png',dpi=300,orientation='landscape',bbox_inches='tight',pad_inches=0.1)
"""
Explanation: Now we will get the CHM data, plot it and save the figure
End of explanation
"""
#Smooth the CHM using a gaussian filter to remove spurious points
chm_array_smooth = ndi.gaussian_filter(chm_array,2,mode='constant',cval=0,truncate=2.0)
chm_array_smooth[chm_array==0] = 0
"""
Explanation: Now we will run a Gaussian smoothing kernel (convolution) across the data set to remove spurious high vegetation points. This will help ensure we are finding the treetops properly before running the watershed segmentation algorithm. For different forest types it may be necessary to change the input parameters. Information on the function can be found at (https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.ndimage.filters.gaussian_filter.html). Of most importance are the second and fourth inputs. The second input defines the standard deviation of the Gaussian smoothing kernel. Too large a value will apply too much smoothing; too small and some spurious high points may be left behind. The truncate value controls after how many standard deviations the Gaussian kernel will get cut off (since it theoretically goes to infinity).
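As a toy illustration (the 9x9 array below is made up, not the SJER CHM), the same filter call attenuates an isolated one-pixel spike:

```python
import numpy as np
from scipy import ndimage as ndi

toy_chm = np.zeros((9, 9))
toy_chm[4, 4] = 10.  # a single spurious high point
# Same call signature as used on the real CHM above
smoothed = ndi.gaussian_filter(toy_chm, 2, mode='constant', cval=0, truncate=2.0)
# The spike is spread over neighbouring cells and strongly attenuated
print(smoothed[4, 4])
```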
End of explanation
"""
#Save the smoothed CHM
array2raster('C:/RSDI_2017/Day4/Biomass/chm_filter.tif',(xMin,yMax),1,-1,np.array(chm_array_smooth/10000,dtype=float),32611)
"""
Explanation: Now save a copy of the filtered CHM
End of explanation
"""
#Calculate local maximum points in the smoothed CHM
local_maxi = peak_local_max(chm_array_smooth,indices=False, footprint=np.ones((5, 5)))
"""
Explanation: Now we will run an algorithm to determine local maximums within the image. Setting indices to 'False' returns a raster of the maximum points, as opposed to a list of coordinates. The footprint parameter is an area where only a single peak can be found. This should be approximately the size of the smallest tree. Information on more sophisticated methods to define the window can be found in Chen (2006).
End of explanation
"""
#Plot the local maximums
plt.figure(2)
plot_band_array(local_maxi,image_extent,'Maximum','Maxi','Greys',[0, 1])
plt.savefig(just_chm_file_split[0]+'_'+just_chm_file_split[1]+'_'+just_chm_file_split[2]+'_'+just_chm_file_split[3]+'_'+just_chm_file_split[4]+'_'+just_chm_file_split[5]+'_'+'Maximums.png',dpi=300,orientation='landscape',bbox_inches='tight',pad_inches=0.1)
"""
Explanation: Plot the raster of local maximums. The following figure shows the difference in finding local maximums for a filtered vs. non-filtered CHM.
![Difference in local maximums for a filtered vs. non-filtered CHM](Max_filtred_non_filtered.JPG)
End of explanation
"""
#Identify all the maximum points
markers = ndi.label(local_maxi)[0]
"""
Explanation: Apply labels to all of the local maximum points
End of explanation
"""
#Create a CHM mask so the segmentation will only occur on the trees
chm_mask = chm_array_smooth
chm_mask[chm_array_smooth != 0] = 1
"""
Explanation: Next we will create a mask layer of all of the vegetation points so that the watershed segmentation will only occur on the trees and not extend into the surrounding ground points. Since 0 represents ground points in the CHM, setting the mask to 1 where the CHM is not zero will define the mask
End of explanation
"""
#Perfrom watershed segmentation
labels = watershed(chm_array_smooth, markers, mask=chm_mask)
"""
Explanation: Next we will perform the watershed segmentation, which produces a raster of labels
End of explanation
"""
#Get the properties of each segment
tree_properties = regionprops(labels,chm_array, ['Area','BoundingBox','Centroid','Orientation','MajorAxisLength','MinorAxisLength','MaxIntensity','MinIntensity'])
"""
Explanation: Now we will get several properties of the individual trees which are used as predictor variables
End of explanation
"""
#Determine how many individual trees were identified
max_labels = labels.max()
segment_labels = np.zeros(max_labels+1)
segment_id = np.zeros(max_labels+1)
for counter in range (1,max_labels+1):
segment_labels[counter] = len(labels[labels==counter])
segment_id[counter]=counter
#Remove the non-zero elements
segment_id = segment_id[np.nonzero(segment_labels)]
"""
Explanation: It was found that occasionally the segmentation skipped an integer label. We want to be able to match our segments to the trees in later steps, so we will create an array with only the segment numbers actually used.
End of explanation
"""
#Convert the labels to float, plot them, and save them as a raster
labels = np.array((labels),dtype=float)
plt.figure(3)
array2raster('C:/RSDI_2017/Day4/Biomass/SegmentedData.tif',(xMin,yMax),1,-1,labels,32611)
#Change the zero labels to nans so they won't show up in the plot
labels[labels==0] = np.nan
#Plot the segments
plot_band_array(labels,image_extent,'Crown Segmentation','Tree Crown Number','Spectral',[0, max_labels])
plt.savefig(just_chm_file_split[0]+'_'+just_chm_file_split[1]+'_'+just_chm_file_split[2]+'_'+just_chm_file_split[3]+'_'+just_chm_file_split[4]+'_'+just_chm_file_split[5]+'_'+'Segmentation.png',dpi=300,orientation='landscape',bbox_inches='tight',pad_inches=0.1)
"""
Explanation: Next we will save the segments as a geotiff and plot them
End of explanation
"""
#Define several of the predictor variables
area=np.zeros(len(tree_properties))
diameter=np.zeros(len(tree_properties))
max_tree_height=np.zeros(len(tree_properties))
min_tree_height=np.zeros(len(tree_properties))
#Retrieve the predictor variables from the region properties
for counter in range(0,len(tree_properties)):
area[counter] = tree_properties[counter]['Area']
diameter[counter] = tree_properties[counter]['MajorAxisLength']
max_tree_height[counter] = tree_properties[counter]['MaxIntensity']
min_tree_height[counter] = tree_properties[counter]['MinIntensity']
"""
Explanation: Now we will define the predictor variables and begin to fill out their values
End of explanation
"""
#Define the remaining predictor variables
crown_geometric_volume_full=np.zeros(len(segment_id))
crown_geometric_volume_50th_percentile=np.zeros(len(segment_id))
crown_geometric_volume_60th_percentile=np.zeros(len(segment_id))
crown_geometric_volume_70th_percentile=np.zeros(len(segment_id))
percentile_50th=np.zeros(len(segment_id))
percentile_60th=np.zeros(len(segment_id))
percentile_70th=np.zeros(len(segment_id))
"""
Explanation: Now we will define the remaining predictor variables
End of explanation
"""
#Cycle through all of the tree segments
counter=0
for segment in segment_id:
#Pull out the tree of interest
indexes_of_tree = np.asarray(np.where(labels==segment)).T
tree_data = chm_array[indexes_of_tree[:,0],indexes_of_tree[:,1]]
#Calculate the geometric volume
crown_geometric_volume_full[counter]=np.sum([tree_data-np.min(tree_data)])
#Pull out 50th percentile stats
percentile_50th[counter]=np.percentile(tree_data,50)
tree_data_50th = chm_array[indexes_of_tree[:,0],indexes_of_tree[:,1]]
tree_data_50th[tree_data_50th>percentile_50th[counter]] = percentile_50th[counter]
crown_geometric_volume_50th_percentile[counter]=np.sum([tree_data_50th-min_tree_height[counter]])
#Pull out 60th percentile stats
percentile_60th[counter]=np.percentile(tree_data,60)
tree_data_60th = chm_array[indexes_of_tree[:,0],indexes_of_tree[:,1]]
tree_data_60th[tree_data_60th>percentile_60th[counter]] = percentile_60th[counter]
crown_geometric_volume_60th_percentile[counter]=np.sum([tree_data_60th-min_tree_height[counter]])
    #Pull out 70th percentile stats
percentile_70th[counter]=np.percentile(tree_data,70)
tree_data_70th = chm_array[indexes_of_tree[:,0],indexes_of_tree[:,1]]
tree_data_70th[tree_data_70th>percentile_70th[counter]] = percentile_70th[counter]
crown_geometric_volume_70th_percentile[counter]=np.sum([tree_data_70th-min_tree_height[counter]])
counter=counter+1
"""
Explanation: We will now run through a loop of all tree segments and gather the remaining predictor variables which include height percentiles and crown geometric volume percentiles. Inside the loop, we use logical indexing to retrieve each individual tree. We then calculate our predictor variables of interest.
End of explanation
"""
#Define the file of training data
training_data_file = 'C:/RSDI_2017/Day4/Biomass/training/SJER_Biomass_Training.csv'
#Read in the training data from a CSV file
training_data = np.genfromtxt(training_data_file,delimiter=',')
#Grab the biomass (y) from the first column
biomass = training_data[:,0]
#Grab the biomass predictors from the remaining columns
biomass_predictors = training_data[:,1:12]
"""
Explanation: We now bring in the training data file, which is a simple CSV file with no header. The first column is biomass, and the remaining columns are the same predictor variables defined above. The tree diameter and max height were defined in the TOS data along with the DBH. The field-validated values are used for training, while the others were determined from the CHM and camera images by manually delineating the tree crowns and pulling out the relevant information from the CHM. Biomass was calculated from DBH according to the formulas in Jenkins et al. (2003).
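The Jenkins et al. (2003) relationships take the general form $bm = \exp(\beta_0 + \beta_1 \ln(\mathrm{dbh}))$; as a hedged sketch, with placeholder coefficients (not the published species-group values):

```python
import numpy as np

def jenkins_biomass_kg(dbh_cm, b0=-2.5, b1=2.4):
    """Above-ground biomass (kg) from DBH (cm): bm = exp(b0 + b1 * ln(dbh)).
    The default coefficients here are illustrative placeholders only, not a
    specific species group from Jenkins et al. (2003)."""
    return np.exp(b0 + b1 * np.log(dbh_cm))
```

The fitted coefficients differ by species group, so in practice each field tree's DBH would be run through the equation for its own group.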
End of explanation
"""
#Define parameters for the Random Forest regressor
max_depth = 30
#Define regressor rules
regr_rf = RandomForestRegressor(max_depth=max_depth, random_state=2)
#Fit the biomass to regressor variables
regr_rf.fit(biomass_predictors,biomass)
"""
Explanation: We then define the parameters of the Random Forest regressor and fit the predictor variables from the training data to the biomass estimates.
End of explanation
"""
#Stack the predictor variables for all the individual trees
all_training_data = np.stack([area,diameter,max_tree_height,min_tree_height,percentile_50th,percentile_60th,percentile_70th,crown_geometric_volume_full,crown_geometric_volume_50th_percentile,crown_geometric_volume_60th_percentile,crown_geometric_volume_70th_percentile],axis=-1)
"""
Explanation: Now we will stack the predictor variables from all of the segmented trees into a single array
End of explanation
"""
#Apply the model to the predictor variables for all trees
pred_biomass = regr_rf.predict(all_training_data)
"""
Explanation: We now apply the Random Forest model to the predictor variables to predict biomass
End of explanation
"""
#Set an output raster with the same shape as the labels
biomass_out = np.full(labels.shape, np.nan)
#Assign each tree segment its associated biomass estimate
#(indexing into `labels` avoids clashes between segment ids and biomass values)
for counter, segment in enumerate(segment_id):
    biomass_out[labels==segment] = pred_biomass[counter]
"""
Explanation: For outputting a raster, create a biomass raster the same shape as the labels, then cycle through the segments and assign the biomass estimate to each individual tree segment.
End of explanation
"""
#Get biomass stats for plotting
mean_biomass = np.mean(pred_biomass)
std_biomass = np.std(pred_biomass)
min_biomass = np.min(pred_biomass)
sum_biomass = np.sum(pred_biomass)
print('Sum of biomass is ',sum_biomass,' kg')
#Plot the biomass!
plt.figure(5)
plot_band_array(biomass_out,image_extent,'Biomass (kg)','Biomass (kg)','winter',[min_biomass+std_biomass, mean_biomass+std_biomass*3])
plt.savefig(just_chm_file_split[0]+'_'+just_chm_file_split[1]+'_'+just_chm_file_split[2]+'_'+just_chm_file_split[3]+'_'+just_chm_file_split[4]+'_'+just_chm_file_split[5]+'_'+'Biomass.png',dpi=300,orientation='landscape',bbox_inches='tight',pad_inches=0.1)
array2raster('biomass.tif',(xMin,yMax),1,-1,np.array(biomass_out,dtype=float),32611)
"""
Explanation: Collect some of the biomass statistics and then plot the results and save an output geotiff
End of explanation
"""
|
ccphillippi/train-a-smartcab | README.ipynb | mit | import numpy as np
import pandas as pd
import seaborn as sns
import pylab
%matplotlib inline
def expected_trials(total_states):
n_drawn = np.arange(1, total_states)
return pd.Series(
total_states * np.cumsum(1. / n_drawn[::-1]),
n_drawn
)
expected_trials(96).plot(
title='Expected number of trials until $k$ distinct states are seen',
figsize=(15, 10))
_ = pylab.xlabel('$k$ (# of states seen)')
"""
Explanation: Train A Smartcab to Drive
Christopher Phillippi
This project forks Udacity's Machine Learning Nanodegree Smartcab project with my solution, modifying/adding smartcab/agent.py and smartcab/notebookhelpers.py as well as this README.
Overall summary of the final agent learning algorithm:
In order to build a reinforcement learning agent to solve this problem, I ended up implementing $Q$ learning from the transitions. In class, we covered $\epsilon$-greedy exploration, where we selected the optimal action based on $Q$ with some probability $1 - \epsilon$ and randomly otherwise. This obviously puts more weight on the current optimal strategy, but I wanted to put more or less weight on more or less suboptimal strategies as well. I did this by sampling actions in a simulated annealing fashion, assigning actions softmax probabilities of being sampled using the current $Q$ value with a decaying temperature. Further, each $Q(s, a_i)$ value is updated based on its own exponentially decaying learning rate: $\alpha(s, a_i)$. The current temperature, $T(s)$, is defined as the mean of the decaying $\alpha(s, a)$ over all actions such that:
$$T(s) = \frac{1}{n}\sum_{j=1}^{n}{\alpha(s, a_j)}$$
$$P(a_i|Q,s) = \frac{e^{Q(s, a_i) / T(s)}}{\sum_{j=1}^{n}{e^{Q(s, a_j) / T(s)}}}$$
Once the action for exploration, $a_i$, is sampled, the algorithm realizes a reward, $R(s, a_i)$, and new state, $s'$. I then update $Q$ using the action that maximizes Q for the new state. The update equations for $Q$ and $\alpha(s, a_i)$ are below:
$$Q_{t+1}(s, a_i) = (1 - \alpha_t(s, a_i))Q_t(s, a_i) + \alpha_t(s, a_i)[R(s, a_i) + 0.05 \max_{a'}{Q_t(s', a')}]$$
$$\alpha_{t+1}(s, a_i) = 0.5(\alpha_t(s, a_i) - 0.05) + 0.05$$
and initially:
$$Q_{0}(s, a_i) = 0$$
$$\alpha_{0}(s, a_i) = 1.0$$
Note that while $\alpha(s, a_i)$ is decaying at each update, it hits a minimum of 0.05 (thus it never quits learning fully). Also, I chose a very low $\gamma=0.05$ here to discount the next maximum $Q$.
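A minimal sketch of the sampling and update rules above (the function names and array layout are my own, not taken from `smartcab/agent.py`):

```python
import numpy as np

def softmax_action_probs(q_values, temperature):
    """P(a_i) = exp(Q(s, a_i)/T) / sum_j exp(Q(s, a_j)/T)."""
    z = np.asarray(q_values, dtype=float) / temperature
    z -= z.max()  # shift for numerical stability; probabilities are unchanged
    p = np.exp(z)
    return p / p.sum()

def q_update(q_sa, alpha_sa, reward, max_next_q, gamma=0.05):
    """One Q-learning step: blend the old value with the discounted target."""
    return (1. - alpha_sa) * q_sa + alpha_sa * (reward + gamma * max_next_q)

def alpha_update(alpha_sa, min_alpha=0.05, decay=0.5):
    """Exponentially decay the per-(state, action) learning rate toward its floor."""
    return decay * (alpha_sa - min_alpha) + min_alpha
```

With the temperature taken as the mean of a state's $\alpha$ values, high learning rates early on imply near-uniform exploration, and the sampled policy sharpens toward $\arg\max_a Q$ as the rates decay.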
In terms of my state space, I use the following:
- waypoint: {left, right, forward}
- light: {green, red}
- oncoming: {None, left, right, forward}
- left: {True, False}
- right: {True, False}
Before implementing Q-Learning, did the smartcab eventually make it to the target location?
When randomly selecting actions, it is quite literally acting out a random walk. It's worth noting that on a 2D lattice, it has been proven that a random-walking agent will almost surely reach any point as the number of steps approaches infinity (McCrea and Whipple, 1940). In other words, it will almost surely make it to the target location, especially because this 2D grid also has a finite number of points.
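As an illustrative sketch of that recurrence claim (the grid size, wrapping, and seed are my own arbitrary choices, not the simulator's geometry), a uniform random walker on a small wrapped grid reaches any fixed target in finitely many steps:

```python
import random

def steps_to_reach(target, grid=(8, 6), seed=0):
    """Uniform random walk on a wrapped grid until `target` is hit (illustrative)."""
    rng = random.Random(seed)
    x, y = 0, 0
    steps = 0
    while (x, y) != target:
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x, y = (x + dx) % grid[0], (y + dy) % grid[1]
        steps += 1
    return steps

print(steps_to_reach((4, 3)))  # finite with probability 1 on a finite grid
```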
Justification behind the state space, and how it models the agent and environment.
I picked the state space mentioned above based on features I believed mattered to the optimal solution. The waypoint effectively proxies the shortest path, and the light generally signals whether None is the right action. These two features alone should be sufficient to get a fairly good accuracy, though I did not test it. Further, I added traffic because this information can help optimize certain actions. For example, you can turn right on red conditional on no traffic from the left. You can turn left on green conditional on no oncoming traffic.
I did not include the deadline here because we are incentivized to either follow the waypoint or stop to avoid something illegal. If we were learning our own waypoint based on the header, the deadline may be useful as a boolean feature once we’re close. Perhaps this would signal whether or not it would be efficient to take a right turn on red. Again, the deadline doesn’t help much given the game rewards.
I also compressed left/right which previously could be {None, left, right, forward} based on the other agents signals. Now they are True/False based on whether or not cars existed left/right. You could also likely compress the state space conditional on a red light, where only traffic on the left matters. I strayed from this approach as it involved too much hard coding for rules the Reinforcement Learner could learn with sufficient exploration.
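A quick check that these features multiply out to 96 distinct states (the lists below just enumerate the feature values given above):

```python
from itertools import product

waypoints = ['left', 'right', 'forward']
lights = ['green', 'red']
oncomings = [None, 'left', 'right', 'forward']
booleans = [True, False]  # traffic present on the left / right

states = list(product(waypoints, lights, oncomings, booleans, booleans))
print(len(states))  # 3 * 2 * 4 * 2 * 2 = 96
```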
There are only 96 unique states. Assuming each trial runs at least 5 steps, 100 trials views at least 500 states. Estimating the probability that each state will be seen here is tough since each state has a different probability of being picked based on the unknown true state distribution. Assuming the chance a state is picked is uniform, this becomes the Coupon Collector’s problem, where the expected number of trials, $T$, until $k$ coupons are collected out of a total of $n$ is:
$$E[T_{n,k}] = n \sum_{i=n-k}^{n}\frac{1}{i}$$
We can see below that assuming states are drawn uniformly, we’d expect to see all of the states after about 500 runs, and about 90% after only 250 runs:
End of explanation
"""
from smartcab.notebookhelpers import generated_sim_stats
def plot_cumulative_success_rate(stats, sim_type, ax=None):
columns = [
'always_reached_destination',
'reached_destination',
'missed_destination',
]
stats[columns].cumsum().plot(
ax=ax, kind='area', stacked=False,
title='%s Success Rate Over Trials: %.2f%%' % (
sim_type, stats.reached_destination.mean()*100
)
)
pylab.xlabel('Trial#')
def train_test_plots(train_stats, test_stats, plot):
_, (top, bottom) = pylab.subplots(
2, sharex=True, sharey=True, figsize=(15, 12))
plot(train_stats, 'Train', ax=top)
plot(test_stats, 'Test', ax=bottom)
# Generate training, and test simulations
learned_agent_env, train_stats = generated_sim_stats(
n_trials=100,
gamma=0.95,
alpha_span=100,
min_alpha=0.05,
initial_alpha=0.2,
)
_, test_stats = generated_sim_stats(
agent_env=learned_agent_env, n_trials=100)
train_test_plots(train_stats, test_stats,
plot_cumulative_success_rate)
"""
Explanation: Obviously, states are not drawn uniformly, but rather based on the simulated distribution with 3 dummy cars. Thus we’re more likely to have sampled the most likely states, and the missing states are less likely to be encountered later than if we had drawn states uniformly. In a production environment, I would make sure I run this until every possible state has been seen a sufficient number of times (potentially through stratification). For this project, I think seeing around 500 states is sufficient, and thus 100 trials should train a fairly reasonable agent.
Changes in agent behavior after implementing Q-Learning
Initially after training the agent, it would consistently approach the destination, but would take very odd paths. For example, on a red light, it would commonly take a right turn, perhaps optimistic the next intersection would allow for a left turn on green, despite the penalty for disobeying the waypoint. I found this was due to my gamma being extremely high (0.95). Overall I was ultimately weighting the future rewards much more than the current penalties and rewards for taking each correct turn. The resulting agent somewhat ignored the waypoint and took it’s own optimal course, likely based on the fact that right turns on red just tend to be optimal. I think it’s reasonable that over time, the agent would have learned to follow the waypoint, assuming it’s the most efficient way to the destination, and perhaps the high gamma was causing slow convergence. It’s also possible the agent, weighting the final outcomes higher, found a more optimal waypoint to the end goal (ignoring illegal and waypoint penalties), but I think this is unlikely.
During training, the agent would occasionally pick a suboptimal action (based on Q). Usually this was akin to taking a legal right turn on red, when the waypoint wanted to wait and go forward. This was done to ensure the agent sufficiently explored the state space. If I simply picked the action corresponding to the maximum $Q$ value, the agent would likely get stuck in a local optima. Instead the randomness allows it to eventually converge to a global optima.
To visualize, the success rate while training the initial $Q$-Learning model is shown below, followed by that same agent (now learned) using the optimal policy only:
End of explanation
"""
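The tabular Q-learning update at work in these experiments can be sketched as follows. This is a minimal illustration; the state tuple, action names, and reward value below are hypothetical placeholders, not the project's actual encoding:

```python
def q_update(Q, state, action, reward, next_state, actions, alpha, gamma):
    # Standard tabular update:
    # Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))
    best_next = max(Q.get((next_state, a), 0.0) for a in actions)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return Q[(state, action)]

actions = ['forward', 'left', 'right', None]
Q = {}
# With a high gamma (e.g. 0.95) the best_next term dominates; with a low
# gamma (e.g. 0.05) the immediate reward dominates.
q_update(Q, ('red', 'forward'), 'right', -0.5, ('green', 'forward'),
         actions, alpha=1.0, gamma=0.05)
print(Q[(('red', 'forward'), 'right')])  # -0.5
```

Note that with an empty Q table the update reduces to alpha times the immediate reward, which is why a large initial alpha picks up reward magnitudes quickly.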
def plot_cumulative_crimes(stats, sim_type, ax=None):
    (stats[['crimes']] > 0).cumsum().plot(
        ax=ax, kind='area', stacked=False, figsize=(15, 8),
        title='Cumulative %s Trials With Any Crimes: %.0f%%' % (
            sim_type, (stats.crimes > 0).mean()*100
        )
    )
    pylab.ylabel('# of Trials with Crimes')
    pylab.xlabel('Trial#')
train_test_plots(train_stats, test_stats,
plot_cumulative_crimes)
"""
Explanation: What I noticed here was that the train performance was very similar to the test performance in terms of the success rate. My intuition is that this is mainly due to the high gamma, which results in Q values that are slow to converge. Additionally, my $\alpha$ values were decaying fairly slowly due to a span of 100, which caused my temperatures to stay high and to randomly sample many suboptimal actions. Combined, these exacerbated bad estimates of the Q values, which caused the test run to fail to significantly improve the overall success rate.
However, I did find that the test run was much safer after taking a look at the cumulative trips with crimes, thus it was learning:
End of explanation
"""
# Generate training and test simulations
learned_agent_env, train_stats = generated_sim_stats(
n_trials=100,
gamma=0.05,
initial_alpha=1.0,
min_alpha=0.05,
alpha_span=2.0,
)
_, test_stats = generated_sim_stats(
agent_env=learned_agent_env, n_trials=100)
"""
Explanation: Updates to the final agent and final performance
End of explanation
"""
train_test_plots(train_stats, test_stats,
plot_cumulative_success_rate)
train_test_plots(train_stats, test_stats,
plot_cumulative_crimes)
"""
Explanation: I made two major changes (as shown in the code above), based on my observations of the initial agent. First, I reduced $\gamma$ all the way from 0.95 to 0.05. This caused my agent to pay much more attention to the current correct turn, and less to the final goal. This also means I can set a much larger initial $\alpha$ value, since a majority of the new value is now deterministic (the reward).
Another key observation I made was that optimal moves were deterministic based on the state. In order to exploit this in the learner, I considered the following cases:
Reward of 12:
This is assigned when the car makes the correct move to the destination (not illegal or suboptimal).
Reward of 9.5:
This is assigned when the car makes an incorrect move to the destination (perhaps teleporting from one side to the other)
I map this to -0.5
Reward of 9:
This is assigned when the car makes an illegal move to the destination
I map this to -1
Reward of 2:
This is assigned when the car legally follows the waypoint
Reward of 0:
This is assigned when the car stops
Reward of -0.5:
This is assigned when the car makes a suboptimal but legal move (doesn't follow waypoint)
Reward of -1:
This is assigned when the car makes an illegal move
Now, any action with a positive reward is an optimal action, and any action with a negative reward is suboptimal. Therefore, if I can get a positive reward, a good learner should not bother looking at any other actions, pruning the rest. If I encounter a negative reward, a good learner should never try that action again. The only uncertainty comes into play when the reward is 0 (stopping). In this case, we must try each action until we either find a positive rewarding action or rule them all out (as < 0). An optimal explorer, then, will assign zero probability to negative rewards, probability 1 to positive rewards, and a non-zero probability to 0 rewards. It follows that the initial value of Q should be 0 here. Naturally, the explorer will do best as my temperature, $T$, for the softmax (action sampling) probabilities approaches 0. Since the temperature is modeled as the average $\alpha$, I greatly reduced the span of $\alpha$ from 200 to 2, promoting quick convergence for $\alpha \to 0.05$ and thus $T \to 0.05$. I then increased the initial value of $\alpha$ to 1.0 in order to learn $Q$ values much quicker (with higher magnitudes), knowing the $\alpha$ values themselves will still decay to their minimum value of 0.05 quickly.
The final performance can be seen below:
End of explanation
"""
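The temperature-controlled softmax sampling described above can be sketched as follows. This is a minimal illustration with made-up Q values, not the project's actual explorer:

```python
import numpy as np

def softmax_probs(q_values, temperature):
    # Boltzmann exploration: as T -> 0 the distribution concentrates on
    # argmax Q, i.e. exploration gives way to exploitation.
    prefs = np.asarray(q_values, dtype=float) / temperature
    prefs -= prefs.max()  # subtract the max for numerical stability
    expd = np.exp(prefs)
    return expd / expd.sum()

q = [0.0, 2.0, -1.0]
print(softmax_probs(q, temperature=1.0))   # probabilities still fairly spread
print(softmax_probs(q, temperature=0.05))  # nearly all mass on index 1
```

Sampling an action index with `np.random.choice(len(q), p=softmax_probs(q, T))` then reproduces the occasional suboptimal choice at high temperatures and near-greedy behavior as the temperature decays.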
def plot_cumulative_optimality(stats, sim_type, ax=None):
(1. - (stats['suboptimals'] + stats['crimes']) /
stats['n_turns']).plot(
ax=ax, kind='area', stacked=False, figsize=(15, 8),
title='%s Optimality in Each Trial' % (
sim_type
)
)
pylab.ylabel('% of Optimal Moves')
pylab.xlabel('Trial#')
train_test_plots(train_stats, test_stats,
plot_cumulative_optimality)
"""
Explanation: Optimality of the final policy
The agent effectively either took the waypoint or sat if it was illegal. That, to me, is optimal. Something I also looked into was learning my own waypoint by giving relative headings to the destination [up-left, up, up-right, left, right, down-left, down, down-right]. Obviously the environment is rewarding the wrong rewards for this scenario (tuned to the given waypoint), and I did not want to tamper with the environment so I wasn’t able to test this sufficiently.
To get a formal measure of optimality, for each trial, I counted the number of steps, $t$, as well as the number of suboptimal steps (legal but not following waypoint) $t_s$ and crime (illegal) steps $t_c$. Optimality, $\theta$, on each trial is then 1 minus the ratio of non-optimal steps:
$$\theta = 1 - \frac{t_s + t_c}{t}$$
This is shown below for each trial:
End of explanation
"""
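A tiny worked example of this metric, with made-up counts for a 20-step trial containing 2 suboptimal moves and 1 crime:

```python
def optimality(n_turns, suboptimals, crimes):
    # theta = 1 - (t_s + t_c) / t, as defined above
    return 1. - float(suboptimals + crimes) / n_turns

print(optimality(20, 2, 1))  # 0.85
```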
import graphlab
"""
Explanation: Implementing logistic regression from scratch
The goal of this notebook is to implement your own logistic regression classifier. You will:
Extract features from Amazon product reviews.
Convert an SFrame into a NumPy array.
Implement the link function for logistic regression.
Write a function to compute the derivative of the log likelihood function with respect to a single coefficient.
Implement gradient ascent.
Given a set of coefficients, predict sentiments.
Compute classification accuracy for the logistic regression model.
Let's get started!
Fire up GraphLab Create
Make sure you have the latest version of GraphLab Create. Upgrade by
pip install graphlab-create --upgrade
See this page for detailed instructions on upgrading.
End of explanation
"""
products = graphlab.SFrame('amazon_baby_subset.gl/')
products.head()
"""
Explanation: Load review dataset
For this assignment, we will use a subset of the Amazon product review dataset. The subset was chosen to contain similar numbers of positive and negative reviews, as the original dataset consisted primarily of positive reviews.
End of explanation
"""
products['sentiment']
"""
Explanation: One column of this dataset is 'sentiment', corresponding to the class label with +1 indicating a review with positive sentiment and -1 indicating one with negative sentiment.
End of explanation
"""
products.head(10)['name']
print '# of positive reviews =', len(products[products['sentiment']==1])
print '# of negative reviews =', len(products[products['sentiment']==-1])
"""
Explanation: Let us quickly explore more of this dataset. The 'name' column indicates the name of the product. Here we list the first 10 products in the dataset. We then count the number of positive and negative reviews.
End of explanation
"""
import json
with open('important_words.json', 'r') as f: # Reads the list of most frequent words
important_words = json.load(f)
important_words = [str(s) for s in important_words]
print important_words
"""
Explanation: Note: For this assignment, we eliminated class imbalance by choosing
a subset of the data with a similar number of positive and negative reviews.
Apply text cleaning on the review data
In this section, we will perform some simple feature cleaning using SFrames. The last assignment used all words in building bag-of-words features, but here we limit ourselves to 193 words (for simplicity). We compiled a list of 193 most frequent words into a JSON file.
Now, we will load these words from this JSON file:
End of explanation
"""
def remove_punctuation(text):
import string
return text.translate(None, string.punctuation)
products['review_clean'] = products['review'].apply(remove_punctuation)
"""
Explanation: Now, we will perform 2 simple data transformations:
Remove punctuation using Python's built-in string functionality.
Compute word counts (only for important_words)
We start with Step 1 which can be done as follows:
End of explanation
"""
for word in important_words:
products[word] = products['review_clean'].apply(lambda s : s.split().count(word))
"""
Explanation: Now we proceed with Step 2. For each word in important_words, we compute a count for the number of times the word occurs in the review. We will store this count in a separate column (one for each word). The result of this feature processing is a single column for each word in important_words which keeps a count of the number of times the respective word occurs in the review text.
Note: There are several ways of doing this. In this assignment, we use the built-in count function for Python lists. Each review string is first split into individual words and the number of occurrences of a given word is counted.
End of explanation
"""
products['perfect']
"""
Explanation: The SFrame products now contains one column for each of the 193 important_words. As an example, the column perfect contains a count of the number of times the word perfect occurs in each of the reviews.
End of explanation
"""
def contains_important_word_count(word):
new_feature = "contains_" + word
products[new_feature] = products.apply(lambda x : x[word] >= 1)
word = 'perfect'
new_feature = "contains_" + word
contains_important_word_count(word)
print "Number of reviews containing word `" + word + "` = " + str(products[new_feature].sum())
"""
Explanation: Now, write some code to compute the number of product reviews that contain the word perfect.
Hint:
* First create a column called contains_perfect which is set to 1 if the count of the word perfect (stored in column perfect) is >= 1.
* Sum the number of 1s in the column contains_perfect.
End of explanation
"""
import numpy as np
"""
Explanation: Quiz Question. How many reviews contain the word perfect?
Convert SFrame to NumPy array
As you have seen previously, NumPy is a powerful library for doing matrix manipulation. Let us convert our data to matrices and then implement our algorithms with matrices.
First, make sure you can perform the following import.
End of explanation
"""
def get_numpy_data(data_sframe, features, label):
data_sframe['intercept'] = 1
features = ['intercept'] + features
features_sframe = data_sframe[features]
feature_matrix = features_sframe.to_numpy()
label_sarray = data_sframe[label]
label_array = label_sarray.to_numpy()
return(feature_matrix, label_array)
"""
Explanation: We now provide you with a function that extracts columns from an SFrame and converts them into a NumPy array. Two arrays are returned: one representing features and another representing class labels. Note that the feature matrix includes an additional column 'intercept' to take account of the intercept term.
End of explanation
"""
# Warning: This may take a few minutes...
feature_matrix, sentiment = get_numpy_data(products, important_words, 'sentiment')
"""
Explanation: Let us convert the data into NumPy arrays.
End of explanation
"""
feature_matrix.shape
"""
Explanation: Are you running this notebook on an Amazon EC2 t2.micro instance? (If you are using your own machine, please skip this section)
It has been reported that t2.micro instances do not provide sufficient power to complete the conversion in an acceptable amount of time. In the interest of time, please refrain from running the get_numpy_data function. Instead, download the binary file containing the four NumPy arrays you'll need for the assignment. To load the arrays, run the following commands:
arrays = np.load('module-3-assignment-numpy-arrays.npz')
feature_matrix, sentiment = arrays['feature_matrix'], arrays['sentiment']
End of explanation
"""
len(important_words)
print "#features in feature_matrix = " + str(len(important_words) + 1)
"""
Explanation: Quiz Question: How many features are there in the feature_matrix?
Quiz Question: Assuming that the intercept is present, how does the number of features in feature_matrix relate to the number of features in the logistic regression model?
End of explanation
"""
sentiment
"""
Explanation: Now, let us see what the sentiment column looks like:
End of explanation
"""
'''
produces probabilistic estimate for P(y_i = +1 | x_i, w).
estimate ranges between 0 and 1.
'''
def predict_probability(feature_matrix, coefficients):
    # Take dot product of feature_matrix and coefficients
    score = np.dot(feature_matrix, coefficients)
    # Compute P(y_i = +1 | x_i, w) using the link function
    predictions = 1. / (1. + np.exp(-score))
    # return predictions
    return predictions
"""
Explanation: Estimating conditional probability with link function
Recall from lecture that the link function is given by:
$$
P(y_i = +1 | \mathbf{x}_i,\mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))},
$$
where the feature vector $h(\mathbf{x}_i)$ represents the word counts of important_words in the review $\mathbf{x}_i$. Complete the following function that implements the link function:
End of explanation
"""
dummy_feature_matrix = np.array([[1.,2.,3.], [1.,-1.,-1]])
dummy_coefficients = np.array([1., 3., -1.])
correct_scores = np.array( [ 1.*1. + 2.*3. + 3.*(-1.), 1.*1. + (-1.)*3. + (-1.)*(-1.) ] )
correct_predictions = np.array( [ 1./(1+np.exp(-correct_scores[0])), 1./(1+np.exp(-correct_scores[1])) ] )
print 'The following outputs must match '
print '------------------------------------------------'
print 'correct_predictions =', correct_predictions
print 'output of predict_probability =', predict_probability(dummy_feature_matrix, dummy_coefficients)
"""
Explanation: Aside. How the link function works with matrix algebra
Since the word counts are stored as columns in feature_matrix, each $i$-th row of the matrix corresponds to the feature vector $h(\mathbf{x}_i)$:
$$
[\text{feature\_matrix}] =
\left[
\begin{array}{c}
h(\mathbf{x}_1)^T \\
h(\mathbf{x}_2)^T \\
\vdots \\
h(\mathbf{x}_N)^T
\end{array}
\right] =
\left[
\begin{array}{cccc}
h_0(\mathbf{x}_1) & h_1(\mathbf{x}_1) & \cdots & h_D(\mathbf{x}_1) \\
h_0(\mathbf{x}_2) & h_1(\mathbf{x}_2) & \cdots & h_D(\mathbf{x}_2) \\
\vdots & \vdots & \ddots & \vdots \\
h_0(\mathbf{x}_N) & h_1(\mathbf{x}_N) & \cdots & h_D(\mathbf{x}_N)
\end{array}
\right]
$$
By the rules of matrix multiplication, the score vector containing elements $\mathbf{w}^T h(\mathbf{x}_i)$ is obtained by multiplying feature_matrix and the coefficient vector $\mathbf{w}$.
$$
[\text{score}] =
[\text{feature\_matrix}]\mathbf{w} =
\left[
\begin{array}{c}
h(\mathbf{x}_1)^T \\
h(\mathbf{x}_2)^T \\
\vdots \\
h(\mathbf{x}_N)^T
\end{array}
\right]
\mathbf{w}
= \left[
\begin{array}{c}
h(\mathbf{x}_1)^T\mathbf{w} \\
h(\mathbf{x}_2)^T\mathbf{w} \\
\vdots \\
h(\mathbf{x}_N)^T\mathbf{w}
\end{array}
\right]
= \left[
\begin{array}{c}
\mathbf{w}^T h(\mathbf{x}_1) \\
\mathbf{w}^T h(\mathbf{x}_2) \\
\vdots \\
\mathbf{w}^T h(\mathbf{x}_N)
\end{array}
\right]
$$
Checkpoint
Just to make sure you are on the right track, we have provided a few examples. If your predict_probability function is implemented correctly, then the outputs will match:
End of explanation
"""
def feature_derivative(errors, feature):
# Compute the dot product of errors and feature
derivative = np.dot(errors, feature)
# Return the derivative
return derivative
"""
Explanation: Compute derivative of log likelihood with respect to a single coefficient
Recall from lecture:
$$
\frac{\partial\ell}{\partial w_j} = \sum_{i=1}^N h_j(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right)
$$
We will now write a function that computes the derivative of log likelihood with respect to a single coefficient $w_j$. The function accepts two arguments:
* errors vector containing $\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})$ for all $i$.
* feature vector containing $h_j(\mathbf{x}_i)$ for all $i$.
Complete the following code block:
End of explanation
"""
def compute_log_likelihood(feature_matrix, sentiment, coefficients):
indicator = (sentiment==+1)
scores = np.dot(feature_matrix, coefficients)
logexp = np.log(1. + np.exp(-scores))
# Simple check to prevent overflow
mask = np.isinf(logexp)
logexp[mask] = -scores[mask]
lp = np.sum((indicator-1)*scores - logexp)
return lp
"""
Explanation: In the main lecture, our focus was on the likelihood. In the advanced optional video, however, we introduced a transformation of this likelihood---called the log likelihood---that simplifies the derivation of the gradient and is more numerically stable. Due to its numerical stability, we will use the log likelihood instead of the likelihood to assess the algorithm.
The log likelihood is computed using the following formula (see the advanced optional video if you are curious about the derivation of this equation):
$$\ell\ell(\mathbf{w}) = \sum_{i=1}^N \Big( (\mathbf{1}[y_i = +1] - 1)\mathbf{w}^T h(\mathbf{x}_i) - \ln\left(1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))\right) \Big) $$
We provide a function to compute the log likelihood for the entire dataset.
End of explanation
"""
dummy_feature_matrix = np.array([[1.,2.,3.], [1.,-1.,-1]])
dummy_coefficients = np.array([1., 3., -1.])
dummy_sentiment = np.array([-1, 1])
correct_indicators = np.array( [ -1==+1, 1==+1 ] )
correct_scores = np.array( [ 1.*1. + 2.*3. + 3.*(-1.), 1.*1. + (-1.)*3. + (-1.)*(-1.) ] )
correct_first_term = np.array( [ (correct_indicators[0]-1)*correct_scores[0], (correct_indicators[1]-1)*correct_scores[1] ] )
correct_second_term = np.array( [ np.log(1. + np.exp(-correct_scores[0])), np.log(1. + np.exp(-correct_scores[1])) ] )
correct_ll = sum( [ correct_first_term[0]-correct_second_term[0], correct_first_term[1]-correct_second_term[1] ] )
print 'The following outputs must match '
print '------------------------------------------------'
print 'correct_log_likelihood =', correct_ll
print 'output of compute_log_likelihood =', compute_log_likelihood(dummy_feature_matrix, dummy_sentiment, dummy_coefficients)
"""
Explanation: Checkpoint
Just to make sure we are on the same page, run the following code block and check that the outputs match.
End of explanation
"""
from math import sqrt
def logistic_regression(feature_matrix, sentiment, initial_coefficients, step_size, max_iter):
coefficients = np.array(initial_coefficients) # make sure it's a numpy array
for itr in xrange(max_iter):
# Predict P(y_i = +1|x_i,w) using your predict_probability() function
predictions = predict_probability(feature_matrix, coefficients)
# Compute indicator value for (y_i = +1)
indicator = (sentiment==+1)
# Compute the errors as indicator - predictions
errors = indicator - predictions
for j in xrange(len(coefficients)): # loop over each coefficient
# Recall that feature_matrix[:,j] is the feature column associated with coefficients[j].
# Compute the derivative for coefficients[j]. Save it in a variable called derivative
derivative = feature_derivative(errors, feature_matrix[:,j])
# add the step size times the derivative to the current coefficient
coefficients[j] += step_size * derivative
# Checking whether log likelihood is increasing
if itr <= 15 or (itr <= 100 and itr % 10 == 0) or (itr <= 1000 and itr % 100 == 0) \
or (itr <= 10000 and itr % 1000 == 0) or itr % 10000 == 0:
lp = compute_log_likelihood(feature_matrix, sentiment, coefficients)
print 'iteration %*d: log likelihood of observed labels = %.8f' % \
(int(np.ceil(np.log10(max_iter))), itr, lp)
return coefficients
"""
Explanation: Taking gradient steps
Now we are ready to implement our own logistic regression. All we have to do is to write a gradient ascent function that takes gradient steps towards the optimum.
Complete the following function to solve the logistic regression model using gradient ascent:
End of explanation
"""
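As an aside, the per-coefficient inner loop above can be collapsed into a single matrix product. This sketch is an equivalent alternative formulation, not the assignment's required form:

```python
import numpy as np

def gradient_ascent_step(coefficients, feature_matrix, errors, step_size):
    # Updates every w_j at once: w <- w + eta * H^T (1[y = +1] - P(y = +1|x, w))
    return coefficients + step_size * np.dot(feature_matrix.T, errors)

H = np.array([[1., 2., 3.], [1., -1., -1.]])
errors = np.array([0.5, -0.5])
print(gradient_ascent_step(np.zeros(3), H, errors, step_size=0.1))
```

Each column dot product `np.dot(errors, feature_matrix[:, j])` from the loop becomes one row of `np.dot(feature_matrix.T, errors)`, so the results are identical.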
coefficients = logistic_regression(feature_matrix, sentiment, initial_coefficients=np.zeros(194),
step_size=1e-7, max_iter=301)
"""
Explanation: Now, let us run the logistic regression solver.
End of explanation
"""
print "increases"
"""
Explanation: Quiz question: As each iteration of gradient ascent passes, does the log likelihood increase or decrease?
End of explanation
"""
# Compute the scores as a dot product between feature_matrix and coefficients.
scores = np.dot(feature_matrix, coefficients)
"""
Explanation: Predicting sentiments
Recall from lecture that class predictions for a data point $\mathbf{x}$ can be computed from the coefficients $\mathbf{w}$ using the following formula:
$$
\hat{y}_i =
\left\{
\begin{array}{ll}
+1 & \mathbf{x}_i^T\mathbf{w} > 0 \\
-1 & \mathbf{x}_i^T\mathbf{w} \leq 0
\end{array}
\right.
$$
Now, we will write some code to compute class predictions. We will do this in two steps:
* Step 1: First compute the scores using feature_matrix and coefficients using a dot product.
* Step 2: Using the formula above, compute the class predictions from the scores.
Step 1 can be implemented as follows:
End of explanation
"""
class_predictions = []
class_predictor = lambda x : 1 if x > 0 else -1
for score in scores:
class_predictions.append(class_predictor(score))
class_predictions
"""
Explanation: Now, complete the following code block for Step 2 to compute the class predictions using the scores obtained above:
End of explanation
"""
print "#reviews with predicted positive sentiment = " + str(len([x for x in class_predictions if x == 1]))
"""
Explanation: Quiz question: How many reviews were predicted to have positive sentiment?
End of explanation
"""
num_mistakes = 0
for i in xrange(len(sentiment)):
if sentiment[i] != class_predictions[i]:
num_mistakes += 1
accuracy = 1 - float(num_mistakes) / len(sentiment)
print "-----------------------------------------------------"
print '# Reviews correctly classified =', len(products) - num_mistakes
print '# Reviews incorrectly classified =', num_mistakes
print '# Reviews total =', len(products)
print "-----------------------------------------------------"
print 'Accuracy = %.2f' % accuracy
"""
Explanation: Measuring accuracy
We will now measure the classification accuracy of the model. Recall from the lecture that the classification accuracy can be computed as follows:
$$
\mbox{accuracy} = \frac{\mbox{# correctly classified data points}}{\mbox{# total data points}}
$$
Complete the following code block to compute the accuracy of the model.
End of explanation
"""
coefficients = list(coefficients[1:]) # exclude intercept
word_coefficient_tuples = [(word, coefficient) for word, coefficient in zip(important_words, coefficients)]
word_coefficient_tuples = sorted(word_coefficient_tuples, key=lambda x:x[1], reverse=True)
"""
Explanation: Quiz question: What is the accuracy of the model on predictions made above? (round to 2 digits of accuracy)
Which words contribute most to positive & negative sentiments?
Recall that in Module 2 assignment, we were able to compute the "most positive words". These are words that correspond most strongly with positive reviews. In order to do this, we will first do the following:
* Treat each coefficient as a tuple, i.e. (word, coefficient_value).
* Sort all the (word, coefficient_value) tuples by coefficient_value in descending order.
End of explanation
"""
word_coefficient_tuples[:10]
"""
Explanation: Now, word_coefficient_tuples contains a sorted list of (word, coefficient_value) tuples. The first 10 elements in this list correspond to the words that are most positive.
Ten "most positive" words
Now, we compute the 10 words that have the most positive coefficient values. These words are associated with positive sentiment.
End of explanation
"""
word_coefficient_tuples[-10:]
"""
Explanation: Quiz question: Which word is not present in the top 10 "most positive" words?
Ten "most negative" words
Next, we repeat this exercise on the 10 most negative words. That is, we compute the 10 words that have the most negative coefficient values. These words are associated with negative sentiment.
End of explanation
"""
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
"""
import tensorflow as tf
import numpy as np
import IPython.display as display
"""
Explanation: TFRecord and tf.train.Example
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/load_data/tfrecord"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/load_data/tfrecord.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/load_data/tfrecord.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/load_data/tfrecord.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
The TFRecord format is a simple format for storing a sequence of binary records.
Protocol buffers are a cross-platform, cross-language library for efficient serialization of structured data.
Protocol messages are defined by .proto files; these are often the easiest way to understand a message type.
The tf.train.Example message (or protobuf) is a flexible message type that represents a {"string": value} mapping. It is designed for use with TensorFlow and is used throughout the higher-level APIs such as TFX.
This notebook demonstrates how to create, parse, and use the tf.train.Example message, and then serialize, write, and read tf.train.Example messages to and from .tfrecord files.
Note: While useful, these structures are optional. There is no need to convert existing code to use TFRecords, unless you are using tf.data and reading data is still the bottleneck to training. You can refer to Better performance with the tf.data API for dataset performance tips.
Note: In general, you should shard your data across multiple files so that you can parallelize I/O (within a single host or across multiple hosts). The rule of thumb is to have at least 10 times as many files as there will be hosts reading data. At the same time, each file should be large enough (at least 10 MB+ and ideally 100 MB+) so that you can benefit from I/O prefetching. For example, say you have X GB of data and you plan to train on up to N hosts. Ideally, you should shard the data to ~10*N files, as long as ~X/(10*N) is 10 MB+ (and ideally 100 MB+). If it is less than that, you might need to create fewer shards to trade off parallelism benefits and I/O prefetching benefits.
Setup
End of explanation
"""
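The sharding rule of thumb above can be turned into a quick back-of-the-envelope calculation. This is a rough sketch under the stated assumptions (about 10 files per reading host, each at least about 10 MB), not an official TensorFlow utility:

```python
def suggested_num_shards(total_bytes, num_hosts, min_shard_bytes=10 * 1024**2):
    # Start from ~10 files per reading host, then shrink the shard count
    # until each shard holds at least ~10 MB.
    shards = 10 * num_hosts
    while shards > 1 and total_bytes / float(shards) < min_shard_bytes:
        shards -= 1
    return shards

print(suggested_num_shards(2 * 1024**3, 4))   # 2 GB, 4 hosts -> 40 shards
print(suggested_num_shards(15 * 1024**2, 4))  # too little data -> 1 shard
```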
# The following functions can be used to convert a value to a type compatible
# with tf.train.Example.
def _bytes_feature(value):
"""Returns a bytes_list from a string / byte."""
if isinstance(value, type(tf.constant(0))):
value = value.numpy() # BytesList won't unpack a string from an EagerTensor.
return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))
def _float_feature(value):
"""Returns a float_list from a float / double."""
return tf.train.Feature(float_list=tf.train.FloatList(value=[value]))
def _int64_feature(value):
"""Returns an int64_list from a bool / enum / int / uint."""
return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))
"""
Explanation: tf.train.Example
Data types for tf.train.Example
Fundamentally, a tf.train.Example is a {"string": tf.train.Feature} mapping.
The tf.train.Feature message type can accept one of the following three types (See the .proto file for reference). Most other generic types can be coerced into one of these:
tf.train.BytesList (the following types can be coerced)
string
byte
tf.train.FloatList (the following types can be coerced)
float (float32)
double (float64)
tf.train.Int64List (the following types can be coerced)
bool
enum
int32
uint32
int64
uint64
In order to convert a standard TensorFlow type to a tf.train.Example-compatible tf.train.Feature, you can use the shortcut functions below. Note that each function takes a scalar input value and returns a tf.train.Feature containing one of the three list types above:
End of explanation
"""
print(_bytes_feature(b'test_string'))
print(_bytes_feature(u'test_bytes'.encode('utf-8')))
print(_float_feature(np.exp(1)))
print(_int64_feature(True))
print(_int64_feature(1))
"""
Explanation: Note: To stay simple, this example only uses scalar inputs. The simplest way to handle non-scalar features is to use tf.io.serialize_tensor to convert tensors to binary-strings. Strings are scalars in TensorFlow. Use tf.io.parse_tensor to convert the binary-string back to a tensor.
Below are some examples of how these functions work. Note the varying input types and the standardized output types. If the input type for a function does not match one of the coercible types stated above, the function will raise an exception (e.g. _int64_feature(1.0) will error out because 1.0 is a float—therefore, it should be used with the _float_feature function instead):
End of explanation
"""
feature = _float_feature(np.exp(1))
feature.SerializeToString()
"""
Explanation: All proto messages can be serialized to a binary-string using the .SerializeToString method:
End of explanation
"""
# The number of observations in the dataset.
n_observations = int(1e4)
# Boolean feature, encoded as False or True.
feature0 = np.random.choice([False, True], n_observations)
# Integer feature, random from 0 to 4.
feature1 = np.random.randint(0, 5, n_observations)
# String feature.
strings = np.array([b'cat', b'dog', b'chicken', b'horse', b'goat'])
feature2 = strings[feature1]
# Float feature, from a standard normal distribution.
feature3 = np.random.randn(n_observations)
"""
Explanation: Creating a tf.train.Example message
Suppose you want to create a tf.train.Example message from existing data. In practice, the dataset may come from anywhere, but the procedure of creating the tf.train.Example message from a single observation will be the same:
Within each observation, each value needs to be converted to a tf.train.Feature containing one of the 3 compatible types, using one of the functions above.
You create a map (dictionary) from the feature name string to the encoded feature value produced in #1.
The map produced in step 2 is converted to a Features message.
In this notebook, you will create a dataset using NumPy.
This dataset will have 4 features:
a boolean feature, False or True with equal probability
an integer feature uniformly randomly chosen from [0, 5)
a string feature generated from a string table by using the integer feature as an index
a float feature from a standard normal distribution
Consider a sample consisting of 10,000 independently and identically distributed observations from each of the above distributions:
End of explanation
"""
def serialize_example(feature0, feature1, feature2, feature3):
"""
Creates a tf.train.Example message ready to be written to a file.
"""
# Create a dictionary mapping the feature name to the tf.train.Example-compatible
# data type.
feature = {
'feature0': _int64_feature(feature0),
'feature1': _int64_feature(feature1),
'feature2': _bytes_feature(feature2),
'feature3': _float_feature(feature3),
}
# Create a Features message using tf.train.Example.
example_proto = tf.train.Example(features=tf.train.Features(feature=feature))
return example_proto.SerializeToString()
"""
Explanation: Each of these features can be coerced into a tf.train.Example-compatible type using one of _bytes_feature, _float_feature, _int64_feature. You can then create a tf.train.Example message from these encoded features:
End of explanation
"""
# This is an example observation from the dataset.
example_observation = []
serialized_example = serialize_example(False, 4, b'goat', 0.9876)
serialized_example
"""
Explanation: For example, suppose you have a single observation from the dataset, [False, 4, bytes('goat'), 0.9876]. You can create and print the tf.train.Example message for this observation using serialize_example(). Each single observation will be written as a Features message as per the above. Note that the tf.train.Example message is just a wrapper around the Features message:
End of explanation
"""
example_proto = tf.train.Example.FromString(serialized_example)
example_proto
"""
Explanation: To decode the message use the tf.train.Example.FromString method.
End of explanation
"""
tf.data.Dataset.from_tensor_slices(feature1)
"""
Explanation: TFRecords format details
A TFRecord file contains a sequence of records. The file can only be read sequentially.
Each record contains a byte-string, for the data-payload, plus the data-length, and CRC-32C (32-bit CRC using the Castagnoli polynomial) hashes for integrity checking.
Each record is stored in the following formats:
uint64 length
uint32 masked_crc32_of_length
byte data[length]
uint32 masked_crc32_of_data
The records are concatenated together to produce the file. CRCs are
described here, and
the mask of a CRC is:
masked_crc = ((crc >> 15) | (crc << 17)) + 0xa282ead8ul
Note: There is no requirement to use tf.train.Example in TFRecord files. tf.train.Example is just a method of serializing dictionaries to byte-strings. Any byte-string that can be decoded in TensorFlow could be stored in a TFRecord file. Examples include: lines of text, JSON (using tf.io.decode_json_example), encoded image data, or serialized tf.Tensors (using tf.io.serialize_tensor/tf.io.parse_tensor). See the tf.io module for more options.
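As an illustration of this layout, the records can be walked with nothing but the Python standard library. The sketch below (the function names are ours, not part of TensorFlow) reads the two masked CRC-32C fields but does not verify them, since CRC-32C (Castagnoli) is not in the standard library:

```python
import struct

def iter_tfrecord_payloads(path):
    """Yield the raw byte-string payload of each record in a TFRecord file.

    Sketch only: the two masked CRC-32C fields are read but not verified.
    """
    with open(path, 'rb') as f:
        while True:
            header = f.read(8)      # uint64 length (little-endian)
            if len(header) < 8:
                break
            length, = struct.unpack('<Q', header)
            f.read(4)               # uint32 masked_crc32_of_length (skipped)
            data = f.read(length)   # byte data[length]
            f.read(4)               # uint32 masked_crc32_of_data (skipped)
            yield data

def mask_crc(crc):
    # The CRC masking described above, truncated to 32 bits.
    return (((crc >> 15) | (crc << 17)) + 0xa282ead8) & 0xFFFFFFFF
```

Applied to a file of serialized tf.train.Example messages, each yielded payload can be handed to tf.train.Example.FromString.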
TFRecord files using tf.data
The tf.data module also provides tools for reading and writing data in TensorFlow.
Writing a TFRecord file
The easiest way to get the data into a dataset is to use the from_tensor_slices method.
Applied to an array, it returns a dataset of scalars:
End of explanation
"""
features_dataset = tf.data.Dataset.from_tensor_slices((feature0, feature1, feature2, feature3))
features_dataset
# Use `take(1)` to only pull one example from the dataset.
for f0,f1,f2,f3 in features_dataset.take(1):
print(f0)
print(f1)
print(f2)
print(f3)
"""
Explanation: Applied to a tuple of arrays, it returns a dataset of tuples:
End of explanation
"""
def tf_serialize_example(f0,f1,f2,f3):
tf_string = tf.py_function(
serialize_example,
(f0, f1, f2, f3), # Pass these args to the above function.
tf.string) # The return type is `tf.string`.
return tf.reshape(tf_string, ()) # The result is a scalar.
tf_serialize_example(f0, f1, f2, f3)
"""
Explanation: Use the tf.data.Dataset.map method to apply a function to each element of a Dataset.
The mapped function must operate in TensorFlow graph mode—it must operate on and return tf.Tensors. A non-tensor function, like serialize_example, can be wrapped with tf.py_function to make it compatible.
Using tf.py_function requires specifying the shape and type information that is otherwise unavailable:
End of explanation
"""
serialized_features_dataset = features_dataset.map(tf_serialize_example)
serialized_features_dataset
def generator():
for features in features_dataset:
yield serialize_example(*features)
serialized_features_dataset = tf.data.Dataset.from_generator(
generator, output_types=tf.string, output_shapes=())
serialized_features_dataset
"""
Explanation: Apply this function to each element in the dataset:
End of explanation
"""
filename = 'test.tfrecord'
writer = tf.data.experimental.TFRecordWriter(filename)
writer.write(serialized_features_dataset)
"""
Explanation: And write them to a TFRecord file:
End of explanation
"""
filenames = [filename]
raw_dataset = tf.data.TFRecordDataset(filenames)
raw_dataset
"""
Explanation: Reading a TFRecord file
You can also read the TFRecord file using the tf.data.TFRecordDataset class.
More information on consuming TFRecord files using tf.data can be found in the tf.data: Build TensorFlow input pipelines guide.
Using TFRecordDatasets can be useful for standardizing input data and optimizing performance.
End of explanation
"""
for raw_record in raw_dataset.take(10):
print(repr(raw_record))
"""
Explanation: At this point the dataset contains serialized tf.train.Example messages. When iterated over it returns these as scalar string tensors.
Use the .take method to only show the first 10 records.
Note: iterating over a tf.data.Dataset only works with eager execution enabled.
End of explanation
"""
# Create a description of the features.
feature_description = {
'feature0': tf.io.FixedLenFeature([], tf.int64, default_value=0),
'feature1': tf.io.FixedLenFeature([], tf.int64, default_value=0),
'feature2': tf.io.FixedLenFeature([], tf.string, default_value=''),
'feature3': tf.io.FixedLenFeature([], tf.float32, default_value=0.0),
}
def _parse_function(example_proto):
# Parse the input `tf.train.Example` proto using the dictionary above.
return tf.io.parse_single_example(example_proto, feature_description)
"""
Explanation: These tensors can be parsed using the function below. Note that the feature_description is necessary here because tf.data.Datasets use graph-execution, and need this description to build their shape and type signature:
End of explanation
"""
parsed_dataset = raw_dataset.map(_parse_function)
parsed_dataset
"""
Explanation: Alternatively, use tf.io.parse_example to parse the whole batch at once. Apply this function to each item in the dataset using the tf.data.Dataset.map method:
End of explanation
"""
for parsed_record in parsed_dataset.take(10):
print(repr(parsed_record))
"""
Explanation: Use eager execution to display the observations in the dataset. There are 10,000 observations in this dataset, but you will only display the first 10. The data is displayed as a dictionary of features. Each item is a tf.Tensor, and the numpy element of this tensor displays the value of the feature:
End of explanation
"""
# Write the `tf.train.Example` observations to the file.
with tf.io.TFRecordWriter(filename) as writer:
for i in range(n_observations):
example = serialize_example(feature0[i], feature1[i], feature2[i], feature3[i])
writer.write(example)
!du -sh {filename}
"""
Explanation: Here, the tf.io.parse_single_example function unpacks the tf.train.Example fields into standard tensors.
TFRecord files in Python
The tf.io module also contains pure-Python functions for reading and writing TFRecord files.
Writing a TFRecord file
Next, write the 10,000 observations to the file test.tfrecord. Each observation is converted to a tf.train.Example message, then written to file. You can then verify that the file test.tfrecord has been created:
End of explanation
"""
filenames = [filename]
raw_dataset = tf.data.TFRecordDataset(filenames)
raw_dataset
for raw_record in raw_dataset.take(1):
example = tf.train.Example()
example.ParseFromString(raw_record.numpy())
print(example)
"""
Explanation: Reading a TFRecord file
These serialized tensors can be easily parsed using tf.train.Example.ParseFromString:
End of explanation
"""
result = {}
# example.features.feature is the dictionary
for key, feature in example.features.feature.items():
# The values are the Feature objects which contain a `kind` which contains:
# one of three fields: bytes_list, float_list, int64_list
kind = feature.WhichOneof('kind')
result[key] = np.array(getattr(feature, kind).value)
result
"""
Explanation: That returns a tf.train.Example proto which is difficult to use as is, but it's fundamentally a representation of a:
Dict[str,
Union[List[float],
List[int],
List[str]]]
The following code manually converts the Example to a dictionary of NumPy arrays, without using TensorFlow Ops. Refer to the PROTO file for details.
End of explanation
"""
cat_in_snow = tf.keras.utils.get_file(
'320px-Felis_catus-cat_on_snow.jpg',
'https://storage.googleapis.com/download.tensorflow.org/example_images/320px-Felis_catus-cat_on_snow.jpg')
williamsburg_bridge = tf.keras.utils.get_file(
'194px-New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg',
'https://storage.googleapis.com/download.tensorflow.org/example_images/194px-New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg')
display.display(display.Image(filename=cat_in_snow))
display.display(display.HTML('Image cc-by: <a href="https://commons.wikimedia.org/wiki/File:Felis_catus-cat_on_snow.jpg">Von.grzanka</a>'))
display.display(display.Image(filename=williamsburg_bridge))
display.display(display.HTML('<a href="https://commons.wikimedia.org/wiki/File:New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg">From Wikimedia</a>'))
"""
Explanation: Walkthrough: Reading and writing image data
This is an end-to-end example of how to read and write image data using TFRecords. Using an image as input data, you will write the data as a TFRecord file, then read the file back and display the image.
This can be useful if, for example, you want to use several models on the same input dataset. Instead of storing the image data raw, it can be preprocessed into the TFRecords format, and that can be used in all further processing and modelling.
First, let's download this image of a cat in the snow and this photo of the Williamsburg Bridge, NYC under construction.
Fetch the images
End of explanation
"""
image_labels = {
cat_in_snow : 0,
williamsburg_bridge : 1,
}
# This is an example, just using the cat image.
image_string = open(cat_in_snow, 'rb').read()
label = image_labels[cat_in_snow]
# Create a dictionary with features that may be relevant.
def image_example(image_string, label):
image_shape = tf.io.decode_jpeg(image_string).shape
feature = {
'height': _int64_feature(image_shape[0]),
'width': _int64_feature(image_shape[1]),
'depth': _int64_feature(image_shape[2]),
'label': _int64_feature(label),
'image_raw': _bytes_feature(image_string),
}
return tf.train.Example(features=tf.train.Features(feature=feature))
for line in str(image_example(image_string, label)).split('\n')[:15]:
print(line)
print('...')
"""
Explanation: Write the TFRecord file
As before, encode the features as types compatible with tf.train.Example. This stores the raw image string feature, as well as the height, width, depth, and arbitrary label feature. The latter is used when you write the file to distinguish between the cat image and the bridge image. Use 0 for the cat image, and 1 for the bridge image:
End of explanation
"""
# Write the raw image files to `images.tfrecords`.
# First, process the two images into `tf.train.Example` messages.
# Then, write to a `.tfrecords` file.
record_file = 'images.tfrecords'
with tf.io.TFRecordWriter(record_file) as writer:
for filename, label in image_labels.items():
image_string = open(filename, 'rb').read()
tf_example = image_example(image_string, label)
writer.write(tf_example.SerializeToString())
!du -sh {record_file}
"""
Explanation: Notice that all of the features are now stored in the tf.train.Example message. Next, functionalize the code above and write the example messages to a file named images.tfrecords:
End of explanation
"""
raw_image_dataset = tf.data.TFRecordDataset('images.tfrecords')
# Create a dictionary describing the features.
image_feature_description = {
'height': tf.io.FixedLenFeature([], tf.int64),
'width': tf.io.FixedLenFeature([], tf.int64),
'depth': tf.io.FixedLenFeature([], tf.int64),
'label': tf.io.FixedLenFeature([], tf.int64),
'image_raw': tf.io.FixedLenFeature([], tf.string),
}
def _parse_image_function(example_proto):
# Parse the input tf.train.Example proto using the dictionary above.
return tf.io.parse_single_example(example_proto, image_feature_description)
parsed_image_dataset = raw_image_dataset.map(_parse_image_function)
parsed_image_dataset
"""
Explanation: Read the TFRecord file
You now have the file—images.tfrecords—and can now iterate over the records in it to read back what you wrote. Given that in this example you will only reproduce the image, the only feature you will need is the raw image string. Extract it using the getters described above, namely example.features.feature['image_raw'].bytes_list.value[0]. You can also use the labels to determine which record is the cat and which one is the bridge:
End of explanation
"""
for image_features in parsed_image_dataset:
image_raw = image_features['image_raw'].numpy()
display.display(display.Image(data=image_raw))
"""
Explanation: Recover the images from the TFRecord file:
End of explanation
"""
# sets the plots to be embedded in the notebook
%matplotlib inline
# Import useful python libraries
import numpy as np # library to work with arrays
import matplotlib.pyplot as plt # plotting library (the commands starting with plt., ax. or fig. are matplotlib;
                                # they are not essential here, they define the plots and set labels, axes...)
import random # (pseudo-)random numbers generation
import scipy.stats as stat # extra statistical functions (the basic are included in numpy)
import scipy.optimize as opt # optimization and root finding package
from scipy.special import factorial # scipy.misc.factorial was removed in recent SciPy versions
"""
Explanation: Statistics and Probability
First Block
We have the following p.d.f. $\frac{d\Gamma}{d\cos\theta}$ with a parameter $P_{\mu} \in [-1,1]$
$$
\frac{d\Gamma}{d\cos\theta}=\frac{1}{2}\big(1-\frac{1}{3}P_{\mu}\cos\theta\big)
$$
The variable $\theta$ represents the angle between the muon and electron polarizations, thus $\theta \in [-\pi,0]$ which means that $\cos \theta \in [-1,1]$. It can be checked that this function fulfills all the conditions in order to be a p.d.f.
1. Positive Semi-Defined Function
Both $P_{\mu}$ and $\cos \theta$ are constrained in the interval $[-1,1]$. Therefore:
\begin{eqnarray}
\frac{1}{2}\big(1-\frac{1}{3}\big) \leqslant & \frac{1}{2}\big(1-\frac{1}{3}P_{\mu}\cos\theta\big) & \leqslant \frac{1}{2}\big(1-\frac{-1}{3}\big) \\
\frac{1}{3} \leqslant & \frac{1}{2}\big(1-\frac{1}{3}P_{\mu}\cos\theta\big) & \leqslant \frac{2}{3}
\end{eqnarray}
2. Normalized Function
To simplify the calculations, we will use $\cos \theta$ as our variable.
\begin{eqnarray}
\int_{-1}^{1} \frac{1}{2}\big(1-\frac{1}{3}P_{\mu}\cos\theta\big) d\cos\theta = \frac{1}{2}\int_{-1}^{1} d\cos\theta -\frac{1}{6}P_{\mu}\int_{-1}^{1} \cos\theta\, d\cos\theta = \Big[\frac{1}{2}\cos\theta-\frac{1}{12}P_{\mu}\cos^2\theta \Big]_{-1}^{1}=\frac{1}{2}(1-(-1))-\frac{1}{12}P_{\mu}(1-(-1)^2)=1
\end{eqnarray}
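Both properties can also be checked numerically. A quick sketch (the trapezoidal rule is exact here because the integrand is linear in $\cos\theta$):

```python
import numpy as np

def dgamma(cost, Pmu):
    # the pdf defined above, as a function of cos(theta)
    return 0.5 * (1 - Pmu * cost / 3)

cost = np.linspace(-1, 1, 2001)
for Pmu in (-1.0, -0.3, 0.0, 0.5, 1.0):
    vals = dgamma(cost, Pmu)
    # 1. bounded between 1/3 and 2/3, hence positive semi-defined
    assert vals.min() >= 1/3 - 1e-12 and vals.max() <= 2/3 + 1e-12
    # 2. normalized: trapezoidal integration over [-1, 1] gives unit area
    area = ((vals[:-1] + vals[1:]) / 2 * np.diff(cost)).sum()
    assert abs(area - 1) < 1e-9
```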
3. Plot of the PDF
Let's now plot the normalized differential probability density function $\frac{d\Gamma}{d\cos\theta}$. First, we will initialize several Python libraries which will be useful during this problem set.
End of explanation
"""
def gamma(cost,Pmu):
dg=.5*(1-Pmu*cost/3)
return dg
"""
Explanation: Define the $\frac{d\Gamma}{d\cos\theta}$ function, using the angle $\theta$ and the muon polarization $P_{\mu}$ as input variables:
End of explanation
"""
# Define parameters
Pmu=0.3
# Define plot variables
cost=np.linspace(-1,1,201)
dG=gamma(cost,Pmu)
# Plot
fig=plt.figure(1)
ax=fig.add_subplot(111)
ax.plot(cost,dG)
ax.set_title('$d\Gamma$ probability density function')
ax.set_xlabel(r'$\cos\theta$')
ax.set_ylabel('$d\Gamma$')
"""
Explanation: We are now ready to plot the $d\Gamma$ probability density function.
End of explanation
"""
'''
plt.figure(1)
plt.plot(cost, dG)
plt.title('$d\Gamma$ probability density function')
plt.xlabel(r'$\cos\theta$')
plt.ylabel('$d\Gamma$')
'''
"""
Explanation: [Mariona]: I've never really used the ax. command because it increases the
complexity of the code quite significantly. I've written below how I'd personally plot the figure, which consists of using the regular plt. commands. If you try it out, you'll see we get the same results!:
End of explanation
"""
#%%timeit # returns the elapsed time when executing the cell, it executes the cell more than once,
# thus, it should only be uncommented when desired
# Montecarlo try-reject
N=10000000
# Define fmax
# The goal is to have the lowest fmax possible in order to increase the efficiency of the try-reject, knowing
# the shape of the p.d.f. this can be trivially done, because we know its maximum value must be either at 1 or -1
fmax=max(gamma(-1,Pmu),gamma(1,Pmu))
Xi1=np.empty(N) # Declaring an array instead of making it grow inside a loop speeds a lot the computation time
i=0
while i<N:
# 1st step of the try reject, choose a point inside the area [a,b]x[0,fmax]
r=random.random()
x=-1+2*r
y=random.random()*fmax
# 2nd step, check if it is inside the p.d.f. area and save x€[a,b] to Xi1, otherwise, reject it
if gamma(x,Pmu)>=y:
Xi1[i]=x
i+=1
"""
Explanation: 2. Build a Monte Carlo able to generate this PDF
[Oriol]: A note about Python and loops: Python is nice and simple, but loops slow it down considerably. Fortunately, numpy is written in C, which speeds up execution enormously. Both the try-reject and the inverse function methods include a timing comparison. You have to uncomment the %%timeit line to see the time comparison, and comment it out again to run the cell normally.
Try-Reject p.d.f generation
We will construct our Monte Carlo with the Try-Reject method.
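Before writing the loop, it is worth estimating the acceptance efficiency of try-reject: since the pdf is normalized to unit area, the accepted fraction is the ratio of that area to the area of the bounding box $[a,b]\times[0,f_{max}]$. A quick sketch:

```python
# The pdf is linear in cos(theta), so its maximum sits at one of the endpoints.
Pmu = 0.3
a, b = -1.0, 1.0
fmax = max(0.5 * (1 - Pmu * a / 3), 0.5 * (1 - Pmu * b / 3))
# Accepted fraction = (area under pdf) / (area of the box) = 1 / ((b - a) * fmax)
efficiency = 1.0 / ((b - a) * fmax)
print(efficiency)  # close to 1 because the pdf is nearly flat
```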
Without using Numpy explicitly:
End of explanation
"""
#%%timeit
N=10000000
# 1st step of the try reject, choose a point inside the area [a,b]x[0,fmax]
# now, all N desired number are generated at once with numpy, thus, as some of them will be rejected, 2*N
# random numbers are generated
x=-1+2*np.random.random(2*N)
y=fmax*np.random.random(2*N)
# 2nd step, check if it is inside the p.d.f. area and save x€[a,b] to Xi1, otherwise, reject it
# in addition, as 2*N were generated in order to be safe and less than N x values will have been rejected
# a sample of size N must be chosen in order to compare with the other methods.
Xi2=x[y<=gamma(x,Pmu)][:N]
"""
Explanation: Using Numpy:
End of explanation
"""
fig=plt.figure(2)
ax=fig.add_subplot(111)
ax.hist(Xi1,color='b',density=True,bins=50,label='Montecarlo p.d.f.')
ax.hist(Xi2,color='g',density=True,bins=50,label='Montecarlo p.d.f.',alpha=0.6)
ax.plot(cost,dG,'r--',linewidth=2,label='Theoretical')
ax.set_title('Try-reject Monte Carlo generated $d\Gamma$ p.d.f.')
ax.set_xlabel(r'$\cos\theta$')
ax.set_ylabel('$d\Gamma$')
ax.legend()
ax.set_ylim([0.4,0.6])
"""
Explanation: Plot the results of the Try-Reject Method:
End of explanation
"""
# Inverse function F^{-1}(r) implemented in Python
def Finv(r,Pmu):
# its arguments are:
# r : either int, float or np.array. Must be a value between 0 and 1
# Pmu : either int, float or np.array, its shape must be compatible with r in case of arrays
cost=(3.-6.*np.sqrt(.25-Pmu/3.*(r-.5-Pmu/12)))/Pmu
return cost
"""
Explanation: Inverse function p.d.f. generation
To generate a p.d.f via the inverse method we need the inverse of the cumulative function. The cumulative function was found in the first question while checking that the p.d.f. was normalized, thus:
$$
F(\cos\theta)=\Big[\frac{1}{2}z-\frac{1}{12}P_{\mu}z^2 \Big]_{-1}^{\cos\theta}=\frac{1}{2}(\cos\theta+1)-\frac{1}{12}P_{\mu}(\cos^2\theta-1)
$$
$$
\frac{P_{\mu}}{12}\cos^2\theta-\frac{\cos\theta}{2}+r-\frac{1}{2}-\frac{P_{\mu}}{12}=0 \quad \rightarrow \quad F^{-1}(r)=\cos\theta=\frac{3}{P_{\mu}}\pm \frac{6}{P_{\mu}}\sqrt{\frac{1}{4}-\frac{P_{\mu}}{3}(r-\frac{1}{2}-\frac{P_{\mu}}{12})}
$$
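A sketch checking the derivation numerically: applying $F$ after $F^{-1}$ (taking the minus root, which keeps $\cos\theta$ inside $[-1,1]$) should return the original uniform values.

```python
import numpy as np

Pmu = 0.3

def F(cost, Pmu):
    # cumulative distribution derived above
    return 0.5 * (cost + 1) - Pmu / 12 * (cost**2 - 1)

def F_inv(r, Pmu):
    # inverse CDF, taking the minus root of the quadratic
    return (3 - 6 * np.sqrt(0.25 - Pmu / 3 * (r - 0.5 - Pmu / 12))) / Pmu

r = np.linspace(0, 1, 1001)
cost = F_inv(r, Pmu)
assert cost.min() >= -1 - 1e-9 and cost.max() <= 1 + 1e-9
assert np.allclose(F(cost, Pmu), r)  # F(F^{-1}(r)) = r
```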
End of explanation
"""
#%%timeit
N=10000000
Xinv1=np.empty(N)
for i in range(N):
r=random.random()
Xinv1[i]=Finv(r,Pmu)
"""
Explanation: (without using numpy explicitly)
End of explanation
"""
# Montecarlo inverse function
def Montecarlo_inv_fun(Pmu,N=1000000):
# its arguments are:
# Pmu : np.array containing the value or values of Pmu for which the pdf will be generated
# Optional arguments:
# N : int, number of values in the pdf sample
m=len(Pmu)
#v.1
PmuV=np.array([Pmu]) #Transform Pmu into an 1xm matrix so that it matches the dimensions of r (Nxm)
r=np.random.random((N,m))
#v.2
#PmuV=np.empty((m,1))
#PmuV[:,0]=Pmu
#r=np.random.random((m,N))
Xinv = Finv(r,PmuV)
return Xinv
Xinv2=Montecarlo_inv_fun(np.array([Pmu]),10000000)[:,0]
# Now, in order to make the function compatible with arrays, Pmu must be reshaped into a np.array and
# afterwards, the obtained sample is converted from Nx1 matrix to vector of length N
"""
Explanation: (with numpy magic)
End of explanation
"""
#Plot for the inverse function method
fig=plt.figure(2)
ax=fig.add_subplot(111)
ax.hist(Xinv1,color='b',density=True,bins=50,label='Montecarlo p.d.f.')
ax.hist(Xinv2,color='g',density=True,bins=50,label='Montecarlo p.d.f.',alpha=0.6)
ax.plot(cost,dG,'r--',linewidth=2,label='Theoretical')
ax.set_title('Inverse function montecarlo generated $d\Gamma$ p.d.f.')
ax.set_xlabel(r'$\cos\theta$')
ax.set_ylabel('$d\Gamma$')
ax.legend()
ax.set_ylim([0.4,0.6])
"""
Explanation: [Oriol] The execution time of the code without using numpy own methods is 18 seconds, whereas knowing numpy reduces it to 0.45 seconds (results with my computer with many things opened)
End of explanation
"""
mu=np.mean(Xinv2)
sigma=np.std(Xinv2) # equivalent to np.sqrt(np.var(Xi1))
skewness=stat.skew(Xinv2)
kurtosis=stat.kurtosis(Xinv2)
print('The inverse-function Monte Carlo generated distribution has:\n\tmean = %.6f,\n\t'
      'sigma = %.6f,\n\tskewness = %.6f\n\tand kurtosis = %.6f' % (mu, sigma, skewness, kurtosis))
"""
Explanation: Estimate distribution parameters
In this section, the rellevant parameters of the p.d.f. will be estimated from the montecarlo generated sample. To be able to compare this values, the theoretical values will also be obtained:
$$
\int_{-1}^{1} \frac{1}{2}\big(1-\frac{1}{3}P_{\mu}\cos\theta\big) \cos\theta\, d\cos\theta = \Big[\frac{\cos^2\theta}{4}-\frac{P_{\mu}}{6}\frac{\cos^3\theta}{3}\Big]_{-1}^{1}=-\frac{P_{\mu}}{9}
$$
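The second moment follows the same way: $E[\cos^2\theta]=\int_{-1}^{1}\frac{\cos^2\theta}{2}\,d\cos\theta=\frac{1}{3}$ (the odd $P_{\mu}$ term vanishes), so $Var(\cos\theta)=\frac{1}{3}-\frac{P_{\mu}^2}{81}$. A sketch checking both moments against an inverse-CDF sample (fixed seed; the sampling formula repeats the $F^{-1}$ derived earlier):

```python
import numpy as np

Pmu = 0.3
rng = np.random.default_rng(0)
r = rng.random(200_000)
# inverse-CDF sampling, as derived above
x = (3 - 6 * np.sqrt(0.25 - Pmu / 3 * (r - 0.5 - Pmu / 12))) / Pmu

assert abs(x.mean() - (-Pmu / 9)) < 1e-2            # E[x] = -Pmu/9
assert abs(x.var() - (1/3 - Pmu**2 / 81)) < 1e-2    # Var = 1/3 - Pmu^2/81
```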
End of explanation
"""
N=int(1e7)
PmuVec=np.linspace(-1,1,30) # array of Pmu values for which the pdf sample will be generated
X=Montecarlo_inv_fun(PmuVec,N) # call Montecarlo_inv_fun, which returns a matrix, containing N x values
# following the pdf of each Pmu in PmuVec
mu=np.mean(X,axis=0) # estimate the mean for each Pmu, thus, the result is a vector of the same length as PmuVec
muTh=-PmuVec/9. # calculate the theoretical mean for each Pmu
fig=plt.figure(1)
ax=fig.add_subplot(111)
ax.plot(PmuVec,mu,'ro',PmuVec,muTh,'b--')
ax.set_title('$P_{\mu}$ p.d.f. dependency')
ax.set_xlabel(r'$P_{\mu}$')
ax.set_ylabel('$\mu$')
ax.legend(['Estimated mean','Theoretical mean'])
"""
Explanation: Part 3
The $\frac{d\Gamma}{d\cos\theta}$ mean depends on the polarisation $P_{\mu}$ in a simple manner: theoretical mean $=-P_{\mu}/9$.
3.1. Show that the Monte Carlo predicts this dependency by changing the value of $P_{\mu}$
End of explanation
"""
sample_size = int(1e5)
Pmu_chosen_value = 0.5 #Choose a value of Pmu for this exercise
vec_length = 50 # Choose the length of the array "get_PmuVec"
get_PmuVec = np.ones(vec_length)*Pmu_chosen_value # Create vector with length = 50 filled with a given Pmu value from which the pdf sample will be generated
get_Pmus = Montecarlo_inv_fun(get_PmuVec, sample_size) # call Montecarlo_inv_fun, which returns a matrix with dimensions (sample_size X vec_length), containing x values
# following the pdf of each Pmu_chosen_value in get_PmuVec
"""
Explanation: 3.2. What is the variance of the parameter $P_{\mu}$? Compute it numerically using Monte Carlo techniques for a given $P_{\mu}$ value.
Perform Monte Carlo simulation for a given Pmu value (called "Pmu_chosen_value") and sample size. We will obtain a vector $X = \{x_{1}, x_{2}, \ldots, x_{N}\}$
End of explanation
"""
estimated_mean = np.mean(get_Pmus, axis=0) #Get estimated mean
estimated_Pmu = -9*estimated_mean # estimate the mean for each Pmu. The result is a vector of the same length as PmuVec
variance1 = np.var(estimated_Pmu, ddof = 1) #Estimate the variance of the estimated Pmu. The "ddof" is used so that the division is not done as 1/N but as 1/N-1
variance2 = (sum((estimated_Pmu-Pmu_chosen_value)**2))/vec_length
print(variance1)
print(variance2)
"""
Explanation: Calculate the variance of the estimated Pmu both with the estimated mu and the theoretical mu and compare them.
End of explanation
"""
Pmu_chosen_value = 0.5 #Choose a value of Pmu for this exercise
vec_length = 50 # Choose the length of the array "get_PmuVec"
get_PmuVec = np.ones(vec_length)*Pmu_chosen_value # Create vector with length = 50 filled with a given Pmu value from which the pdf sample will be generated
num_samples = 20
sample_sizes = np.logspace(start = 1, stop = 6.5, num = num_samples, dtype=int) #num refers to the nº of points generated between start and stop
estimated_Pmus = np.empty(num_samples)
for i,sample_size in enumerate(sample_sizes):
get_Pmus = Montecarlo_inv_fun(get_PmuVec, sample_size)
estimated_mean = np.mean(get_Pmus, axis = 0) #Get estimated mean
estimated_Pmus[i] = -9*np.mean(estimated_mean) # estimate the mean for each Pmu. The result is a vector of the same length as PmuVec
fig = plt.figure()
ax = fig.add_subplot(111)
ax.semilogx(sample_sizes,estimated_Pmus,'ro',sample_sizes,Pmu_chosen_value*np.ones(num_samples),'b--')
ax.set_title('Proof of the Law of Large Numbers')
ax.set_xlabel('Sample size')
ax.set_ylabel('Estimated $P_{\mu}$')
ax.legend(['Estimated mean','Theoretical mean'],loc=3)
"""
Explanation: We define the variance as: $$Var(X)=E[(x-\mu)^{2}]$$
With the sum command, we are obtaining the variance with: $$\overline{Var}=\frac{1}{N}\sum(P_{i}-\mu)^{2}$$ where $\mu$ is the theoretical mean. In this case, we have chosen a specific value for Pmu (Pmu_chosen_value = 0.5), so the theoretical mean will simply correspond to this value.
With the np.var command, we are calculating the variance with the equation: $$\overline{Var}=\frac{1}{N-1}\sum(P_{i}-\overline{x})^{2}$$
where $\bar{x}$ is the estimated mean, i.e. $\bar{x}=\frac{1}{N}\sum x_{i}$. Note that np.var accepts "delta degrees of freedom" (ddof), which refers to the factor $1/(N-\text{ddof})$. We chose $ddof = 1$ so that the division would be by $N-1$ instead of $N$.
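A tiny sketch of the ddof convention on a toy array:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
xbar = x.mean()
# ddof=0 (the default): divide by N
assert np.isclose(np.var(x), ((x - xbar)**2).sum() / len(x))
# ddof=1: divide by N - 1 (the unbiased sample variance)
assert np.isclose(np.var(x, ddof=1), ((x - xbar)**2).sum() / (len(x) - 1))
```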
Part 4
Generate a continuous series of N events using the Monte Carlo and compute the mean of the distribution and the estimated $P_{\mu}$ as -9*mean.
4.1. Show that the estimated $P_{\mu}$ tends to the true value as predicted by the law of large numbers
End of explanation
"""
fig = plt.figure()
ax = fig.add_subplot(111)
ax.loglog(sample_sizes, np.abs(estimated_Pmus-Pmu_chosen_value),'ro') #Move the two axes to logarithmic space
ax.set_title('Residuals')
ax.set_xlabel('Sample Size')
ax.set_ylabel('Estimated - Theoretical $P_{\mu}$')
"""
Explanation: As we can see from the plot, the estimated mean resulting from the first iterations is not very accurate, but as the sample size increases, the estimated mean approaches the theoretical mean --as predicted by the Law of Large Numbers.
For each given number of trials, we will now plot the residuals between the estimated and the theoretical means.
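Since the standard error of the mean shrinks as $\sigma/\sqrt{N}$, the residuals should fall with slope $-1/2$ on this log-log plot. A sketch checking the scaling empirically (fixed seed; the sampling formula repeats the $F^{-1}$ derived earlier):

```python
import numpy as np

Pmu = 0.5
rng = np.random.default_rng(1)

def estimate_Pmu(n):
    r = rng.random(n)
    x = (3 - 6 * np.sqrt(0.25 - Pmu / 3 * (r - 0.5 - Pmu / 12))) / Pmu
    return -9 * x.mean()

def rms_error(n, reps=200):
    errs = np.array([estimate_Pmu(n) - Pmu for _ in range(reps)])
    return np.sqrt(np.mean(errs**2))

# 100x more events should shrink the RMS error by roughly sqrt(100) = 10
ratio = rms_error(100) / rms_error(10_000)
assert 7 < ratio < 14
```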
End of explanation
"""
#%%timeit
# Here we demonstrate the law of large numbers without reseting the sample
Pmu_41 = 0.5 #Choose a value of Pmu for this exercise
sample_size_41 = int(1e8) # Size of the full sample; to demonstrate the LLN we compute the mean of the
                          # first N/m events, then the first 2N/m events, and so on
m = int(5e6) # Number of chunks the sample is split into; each chunk holds sample_size_41 // m events
X_sample_41 = Montecarlo_inv_fun(np.array([Pmu_41]), sample_size_41)[:,0]
X_divided = np.array(np.split(X_sample_41,m))
partial_means = np.mean(X_divided,axis=1)
N_vec = np.arange(1,m+1)
cumulative_Pmus = -9*np.cumsum(partial_means)/N_vec
N_vec = sample_size_41 // m * N_vec
"""
Explanation: ATTENTION: try to demonstrate the LLN without regenerating the sample every time
[Oriol]: I have already changed the code so that it does not reset the sample each time. When I started implementing the steps I had in mind, I wrote this first cell, which I am keeping for now because I think it is easier to understand (or at least someone may find it clearer). What it does is:
1. Generate a vector of N components
2. Split it into m chunks (each of length N/m) and compute the mean of each chunk
3. Compute the cumulative sum and divide it by the chunk index to obtain the mean of all the values before it.
- That is, in the cumulative_Pmus vector, the value at position i is the mean of the first (i+1)*N/m values of the sample
This version takes about 10 seconds for this computation with $N=10^8$ and $m=5\times10^6$, i.e. computing the mean every 20 events.
End of explanation
"""
#%%timeit
# After I did the above cell, I realized that taking into account how we have defined the MC,
# it can be done in a simpler way thanks to the fact that the random numbers we generate are independent
Pmu_41 = 0.5 #Choose a value of Pmu for this exercise
sample_size_41 = int(1e8) # Size of the full sample; to demonstrate the LLN we compute the mean of the
                          # first N/m events, then the first 2N/m events, and so on
m = int(5e6) # Number of chunks the sample is split into; each chunk holds sample_size_41 // m events
X_sample_41 = Montecarlo_inv_fun(np.ones(sample_size_41 // m)*Pmu_41, m)
partial_means = np.mean(X_sample_41,axis=1)
N_vec = np.arange(1,m+1)
cumulative_Pmus = -9*np.cumsum(partial_means)/N_vec
N_vec = sample_size_41 // m * N_vec
fig = plt.figure()
ax = fig.add_subplot(111)
ax.semilogx(N_vec,cumulative_Pmus,'b-',alpha=0.7)
ax.semilogx(N_vec,np.ones(m)*Pmu_41,'k')
ax.set_title('Proof of the Law of Large Numbers')
ax.set_xlabel('Sample size')
ax.set_ylabel('Estimated $P_{\mu}$')
ax.legend(['Estimated mean','Theoretical mean'],loc='best')
"""
Explanation: Since the random numbers are generated independently and our MC works with two-dimensional arrays, we can pass the parameters in such a way that the Monte Carlo itself returns the N×m matrix from step 2 of the explanation.
This way, for the same N and m, the code takes 4 seconds.
End of explanation
"""
N = 15 # number of samples for each montecarlo experiment
Pmu_chosen_value = 0.5 #Choose a value of Pmu for this exercise
mu_t=-Pmu_chosen_value/9.
num_repetitions = 15 # number of times the N x num_MC_experiments MC_samples matrix is generated
num_MC_experiments = 500000 # Choose the number of MC experiments to generate (it will be the # of t values in our histogram)
PmuVec_t = np.ones(num_MC_experiments)*Pmu_chosen_value # Create vector with length = num_MC_experiments filled with a given Pmu value from which the pdf sample will be generated
t_values = np.empty(num_repetitions*num_MC_experiments)
for repeat in range(num_repetitions):
MC_samples = Montecarlo_inv_fun(PmuVec_t, N) # matrix N x num_MC_experiments
x_bar = np.mean(MC_samples, axis=0) #Get estimated mean
s_square = np.var(MC_samples, axis=0, ddof=1) #Get estimated variance (sigma^2)
t_values[repeat*num_MC_experiments:(repeat+1)*num_MC_experiments] = np.sqrt(N/s_square)*(x_bar-mu_t)
fig = plt.figure()
ax = fig.add_subplot(111)
lim_prob = 0.001
grid_num = 100
# the t statistic built from N events has N - 1 degrees of freedom
x = np.linspace(stat.t.ppf(lim_prob, N - 1), stat.t.ppf(1-lim_prob, N - 1), grid_num)
ax.plot(x, stat.t.pdf(x, N - 1), 'r-', label='t pdf (Theoretical)')
ax.hist(t_values,color='b',density=True, bins = 20,label='Montecarlo p.d.f.')
ax.legend(loc=2)
"""
Explanation: Part 5
Generate several Monte Carlo experiments, each with N events.
5.1. Build for each experiment the student's t variable for $P_{\mu}$ and show that it follows the Student's t distribution
End of explanation
"""
mu_t = np.mean(t_values)
sigma_t = np.std(t_values) # equivalent to np.sqrt(np.var(Xi1))
skewness_t =stat.skew(t_values)
kurtosis_t =stat.kurtosis(t_values)
print '\tmean = %.6f,\n\t\
sigma = %.6f,\n\tskewness = %.6f\n\tand kurtosis = %.6f' %(mu_t, sigma_t, skewness_t, kurtosis_t)
"""
Explanation: Calculate the moments of the distribution:
End of explanation
"""
N_clt = int(1e6)
Pmu_chosen_value = 0.5 #Choose a value of Pmu for this exercise
mu_t=-Pmu_chosen_value/9.
num_repetitions = 15 # number of times the N x num_MC_experiments MC_samples matrix is generated
num_MC_experiments = 200 # Choose the number of MC experiments to generate (it will be the # of t values in our histogram)
PmuVec_t = np.ones(num_MC_experiments)*Pmu_chosen_value # Create vector with length = num_MC_experiments filled with a given Pmu value from which the pdf sample will be generated
clt_values = np.empty(num_repetitions*num_MC_experiments)
for repeat in xrange(num_repetitions):
MC_samples = Montecarlo_inv_fun(PmuVec_t, N_clt) # matrix N x num_MC_experiments
x_bar = np.mean(MC_samples, axis=0) #Get estimated mean
s_square = np.var(MC_samples, axis=0, ddof=1) #Get estimated variance (sigma^2)
clt_values[repeat*num_MC_experiments:(repeat+1)*num_MC_experiments] = np.sqrt(N_clt/s_square)*(x_bar-mu_t)
# Expected gaussian pdf
mu = 0
sigma = 1
dist = stat.norm(mu, sigma)
x_pdf = np.linspace(-4, 4, 1000)
#Plot distributions
fig = plt.figure()
ax = fig.add_subplot(111)
lim_prob = 0.001
grid_num = 100
x = np.linspace(stat.t.ppf(lim_prob, N_clt), stat.t.ppf(1-lim_prob, N_clt), grid_num)
ax.plot(x, stat.t.pdf(x, N_clt), 'r-', lw=5, alpha=0.5, label='t pdf (Theoretical)')
ax.hist(clt_values,color='b',normed=1,bins=30,label='Montecarlo p.d.f.')
ax.plot(x_pdf, dist.pdf(x_pdf), '-k', label = 'Gaussian')
ax.legend()
"""
Explanation: For the values to follow a Gaussian distribution, the skewness factor would have to be 0 (skewness measures symmetry, and the Gaussian is perfectly symmetric), and the kurtosis factor would also have to be 0. Since neither of these parameters is 0, the distribution is not Gaussian.
5.2. Show the validity of the Central Limit Theorem
End of explanation
"""
mu_clt = np.mean(clt_values)
sigma_clt = np.std(clt_values)
skewness_clt =stat.skew(clt_values)
kurtosis_clt =stat.kurtosis(clt_values)
print '\tmean = %.6f,\n\t\
sigma = %.6f,\n\tskewness = %.6f\n\tand kurtosis = %.6f' %(mu_clt, sigma_clt, skewness_clt, kurtosis_clt)
"""
Explanation: Calculate the moments of the distribution:
End of explanation
"""
Pmu = 0.3
num_sample_values = 1000
Xinv6 = Montecarlo_inv_fun(np.array([Pmu]),num_sample_values)[:,0] #Run Montecarlo with Pmu as a fixed parameter
fig = plt.figure()
ax = fig.add_subplot(111)
# Create the histogram (note: it's NOT normalized)
N_bin, bins, lis = ax.hist(Xinv6,color='b',bins=10,label='Montecarlo p.d.f.')
# N_bin is an array containing the values of the histogram bins
# bins is an array with the edges of the bins.
# The last parameter will not be used here.
ax.set_title('Inverse function montecarlo generated $d\Gamma$ p.d.f.')
ax.set_xlabel(r'$\cos\theta$')
ax.set_ylabel('$d\Gamma$')
ax.legend()
"""
Explanation: Both the skewness and kurtosis factors are approximately zero, which is consistent with a Gaussian distribution.
Part 6
Generate N random events according to the probability density function $\frac{d\Gamma}{d\cos\theta}$ and fill the obtained $\cos\theta$ values in a histogram.
End of explanation
"""
num_sample = 100
binomial_sample = 10000
Pmu_chosen_value = -0.3
bin_number = 4
PmuVec_61 = np.ones(binomial_sample)*Pmu_chosen_value # Create vector with length = num_MC_experiments filled with a given Pmu value from which the pdf sample will be generated
X_61 = Montecarlo_inv_fun(PmuVec_61,num_sample) #Run Montecarlo with Pmu as a fixed parameter
N_61 = np.empty(binomial_sample)
for i in xrange(binomial_sample):
hist, bin_edges = np.histogram(X_61[:,i],range=(-1,1)) # fixing the range to -1, 1 avoids the bins to start at the minimum
# of the sample which would be min(X_61) (around -0.98 but not -1)
N_61[i] = hist[bin_number-1]
"""
Explanation: 6.1. What is the probability density function associated with the number of entries per bin?
Steps of algorithm 6.1:
- Generate num_sample samples of $\cos\theta$ following the pdf from the problem statement with MC
- Build the histogram
- Store the number of entries in the first bin (it could be any bin); we call it binomy
- Repeat the first three steps binomial_sample times
- Check that the samples stored in binomy follow a binomial distribution of parameter $N$
End of explanation
"""
bin_edges
"""
Explanation: The left and right positions of the bins (from the first to the last one) are contained in the "bin_edges" vector. In the cell above, we chose to study the bin indexed by bin_number, and to calculate the probability that a point falls within it we difference the cumulative function between bin_edges[bin_number-1] and bin_edges[bin_number] (using the same edge twice would yield a probability equal to 0).
End of explanation
"""
def Fcumulative(cost,Pmu): # Cumulative function
F=.5*(cost+1)-Pmu/12.*(cost**2-1)
return F
"""
Explanation: We can use the cumulative function to calculate the probability that a point falls within the bin.
End of explanation
"""
ki=np.arange(num_sample)
fig = plt.figure()
ax = fig.add_subplot(111)
ax.hist(N_61,color='b',bins=10,normed=1,label='Montecarlo p.d.f.')
Prob_bin = Fcumulative(bin_edges[bin_number],Pmu_chosen_value)-Fcumulative(bin_edges[bin_number-1],Pmu_chosen_value)
ax.plot(ki,stat.binom.pmf(ki,num_sample,Prob_bin),'r--')
ax.set_title('Binomial distribution and Histogram for "bin_number" bin')
ax.set_xlabel('Number of entries in bin "bin_number"')
ax.set_ylabel('Number of successful events (normalized)')
ax.legend()
"""
Explanation: Plot the binomial distribution on top of the histogram for bin bin_number
End of explanation
"""
num_sample = int(5e5)
binomial_sample = 1000
Pmu_chosen_value = -0.3
bin_number = 1
PmuVec_62 = np.ones(binomial_sample)*Pmu_chosen_value # Create vector with length = num_MC_experiments
#filled with a given Pmu value from which the pdf sample will be generated
X_62 = Montecarlo_inv_fun(PmuVec_62,num_sample) #Run Montecarlo with Pmu as a fixed parameter
N_62 = np.empty(binomial_sample)
for i in xrange(binomial_sample):
hist, bin_edges = np.histogram(X_62[:,i],range=(-1,1)) # fixing the range to -1, 1 avoids the bins to start at
#the minimum of the sample which would be min(X_62) (around -0.98 but not -1)
N_62[i] = hist[bin_number-1]
ki=np.arange(num_sample)
# Expected gaussian pdf
plot_center_value = Prob_bin*num_sample
sigma_gaussian = np.sqrt(Prob_bin*num_sample*(1-Prob_bin))
dist = stat.norm(0, 1)
# Shift and scale the histogram with respect to the estimated values. NOTE: Since we cannot generate infinite samples,
# we'll use np.mean and np.std instead of using the real values given by the theoretical distribution. This is likely
# to have an impact when shifting.
ki = (ki-np.mean(N_62))/np.std(N_62)
N_62 = (N_62-np.mean(N_62))/np.std(N_62)
x_pdf= np.linspace(-4,4,1000)
fig = plt.figure()
ax = fig.add_subplot(111)
ax.hist(N_62,color='b',bins=20,normed=1,label='Montecarlo p.d.f.')
Prob_bin = Fcumulative(bin_edges[bin_number],Pmu_chosen_value)-Fcumulative(bin_edges[bin_number-1],Pmu_chosen_value)
ax.plot(ki,stat.binom.pmf(ki,num_sample,Prob_bin),'r--')
ax.plot(x_pdf, dist.pdf(x_pdf), '-k', label = 'Gaussian')
ax.set_title('Binomial distribution tending to a Gaussian and Histogram for "bin_number" bin')
ax.set_xlabel('Number of entries in bin "bin_number"')
ax.set_ylabel('Number of successful events (normalized)')
ax.set_xlim([-4,4])
ax.legend()
"""
Explanation: As shown in this exercise, when the number of values in the sample num_sample is fixed, the entries in a given bin follow a binomial distribution. By extension, the joint distribution of the entries across all bins is multinomial.
If the number of entries per experiment is not fixed, the number of events in each bin follows a Poisson distribution.
In any case, when the number of entries is very large, the multinomial distribution will behave as a Poisson. What's more, by the Central Limit Theorem this Poisson will in turn tend to a Gaussian.
6.2. What is the expected pdf for the number of entries per bin when the number N is very large?
End of explanation
"""
mu_62 = np.mean(N_62)
sigma_62 = np.std(N_62)
skewness_62 =stat.skew(N_62)
kurtosis_62 =stat.kurtosis(N_62)
print '\tmean = %.6f,\n\t\
sigma = %.6f,\n\tskewness = %.6f\n\tand kurtosis = %.6f' %(mu_62,
sigma_62, skewness_62, kurtosis_62)
"""
Explanation: Calculate the moments of the N_62 vector
End of explanation
"""
num_sample_63 = 10000
chisquared_sample = 1000 # Number of times we will call the MC function -- the higher this value, the better the
#theoretical chi-squared distribution will describe the "observed" one
Pmu_chosen_value = -0.3
bin_number_63 = 5
PmuVec_63 = np.ones(chisquared_sample)*Pmu_chosen_value # Create vector with length = num_MC_experiments filled with a given Pmu value from which the pdf sample will be generated
X_63 = Montecarlo_inv_fun(PmuVec_63, num_sample_63) #Run Montecarlo with Pmu as a fixed parameter
"""
Explanation: Mean and sigma give 0 and 1 respectively since we have rescaled the N_62 vector. The skewness and kurtosis parameters are approximately 0, which is indicative of a Gaussian.
6.3. Show that the $\chi^{2}$ of the obtained numbers per bin follows a $\chi^{2}$ distribution. To simplify, you shall use the nominal value obtained from the pdf formula per bin as the central value in the bin
Use procedure explained in section 6.1
End of explanation
"""
mean_63 = np.empty(bin_number_63)
bin_edges_63 = np.linspace(-1,1,bin_number_63+1) # Note that the dimensions of this parameter are 1x(bin_number+1), since we have
# to take into account the left-most and right-most bin
Prob_bin_63 = Fcumulative(bin_edges_63[1:], Pmu_chosen_value)-Fcumulative(bin_edges_63[:-1],Pmu_chosen_value)
mean_63 = Prob_bin_63*num_sample_63 #This will be our mu for each bin.
"""
Explanation: We will calculate the mean by multiplying the probability that a value falls within a given bin times the sample number. To this end, we will create a vector mean where we will store the $\mu$ of each bin, and use the cumulative function within two bin edges to obtain the probability, Prob_bin_63.
End of explanation
"""
chi_squared_values = np.empty(chisquared_sample)
for i in xrange(chisquared_sample):
hist, bin_edges = np.histogram(X_63[:,i],range=(-1,1), bins = bin_number_63) # fixing the range to -1, 1 avoids the bins to start at the minimum
# of the sample which would be min(X_63) (around -0.98 but not -1)
chi_squared_values[i] = np.sum((hist-mean_63)**2/(mean_63*(1-Prob_bin_63)))
"""
Explanation: Compute the chi squared for each MC run.
End of explanation
"""
x_pdf_chisquared = np.linspace(min(chi_squared_values),max(chi_squared_values),1000)
fig = plt.figure()
ax = fig.add_subplot(111)
ax.hist(chi_squared_values,color='b', bins=15,normed=1,label = '$\chi^2$ values')
ax.plot(x_pdf_chisquared,stat.chi2.pdf(x_pdf_chisquared,bin_number_63),'r--',lw=2, label = '$\chi^2$ distribution (theoretical)')
ax.set_title('Expected and Observed Distribution')
ax.set_xlabel('Chi Squared values')
ax.legend()
"""
Explanation: Plot the $\chi^2$ distribution.
End of explanation
"""
'''STEP 1'''
pmu_7 = random.uniform(-1,1)
"""
Explanation: 6.4. How does the $\chi^{2}$ computed above change when you change the number of bins?
For this exercise, we can just modify the number of bins in 6.3 to obtain the new results.
As we increase the number of bins, we see that the observed distribution becomes more similar to a Gaussian (e.g. number of bins = 100 for a sample of 10000). Why? The chi-squared distribution depends ONLY on one parameter, k, which corresponds to the number of bins. If the size of the sample is large enough compared to the number of bins, then when we increase k and let it go to infinity, the Central Limit Theorem applies and the chi squared distribution tends to a Gaussian.
If we set the number of bins to a very small number --again, with a sample of 10000-- (e.g. 5), the resulting distribution will not look like a Gaussian at all.
Part 7
7.1.Take one of the histograms generated in the previous step and construct the conditional probability, P(histogram|$P_{\mu}$), of obtaining the entries in the obtained histogram for a given value of $P_{\mu}$.
Known parameters:
* pdf
Steps:
Generate a sample of random numbers with Monte Carlo, with a $P_{\mu}$ that will be "unknown" for the rest of the exercise.
Generate a histogram hist of the data (observed measurements). From now on, we assume we only know the theoretical pdf and that histogram.
We know that the bin counts of a histogram with a fixed sample size behave as a multinomial; thus, we know the function $P_{bin}(n_{i,obs};n_{bin},P_{\mu-try})$ where $n_{i,obs}$ is the number of observations in bin i, and it depends on 2 parameters, the number of bins $n_{bin}$ and $P_{\mu_{try}}$. Therefore, we will say that the likelihood of obtaining hist for a given $P_{\mu_{try}}$ is:
$$
P(\text{hist}|P_{\mu_{try}})=\prod_{i=1}^{n_{bin}} P_{bin}(\text{hist[i]};n_{bin},P_{\mu_{try}}) =n! \prod_{i=1}^{n_{bin}} \frac{p_i^{\text{hist[i]}}}{\text{hist[i]}!}=L(\text{hist}|P_{\mu_{try}})
$$
where $n$ is the sample size, hist is the array with our observed measurements, and $p_i$ is the probability of an observation falling in bin $i$ (obtained from the theoretical pdf with $P_{\mu}=P_{\mu_{try}}$). The terms to the right of the last equals sign are the expanded form of the multinomial pmf. Since this is a likelihood, we define:
$$
l(\text{hist}|P_{\mu_{try}})=-\log(L(\text{hist}|P_{\mu_{try}}))= -\log(n!) -\sum_{i=1}^{n_{bin}} \Big(\text{hist[i]}\cdot\log(p_i)-\log(\text{hist[i]}!) \Big)
$$
which is much better behaved numerically. In addition, it is preferable to differentiate by hand and reduce the problem to root finding instead of optimization, since root finders tend to converge much better. First we need to express $p_i$ as a function of $P_{\mu}$:
$$
p_i=F(b)-F(a)=\int_a^b f(x)dx=\frac{1}{2}(b-a)+\frac{1}{12}P_{\mu}(a^2-b^2) \Rightarrow \frac{\partial p_i}{\partial P_{\mu}}=\frac{1}{12}(a^2-b^2) \
\frac{\partial l(\text{hist}|P_{\mu_{try}})}{\partial P_{\mu_{try}}}=\sum_{i=1}^{n_{bin}}\frac{\partial l}{\partial p_i}\frac{\partial p_i}{\partial P_{\mu_{try}}}=-\sum_{i=1}^{n_{bin}} \frac{\text{hist[i]}}{p_i}\frac{1}{12}(a_i^2-b_i^2)
$$
where $a_i$ and $b_i$ are the limits of bin $i$, bin_edges[i-1] and bin_edges[i] respectively, with $i=1,\dots,n_{bin}$ (the overall sign of this derivative is irrelevant for root finding, since it only flips the sign of the function whose zero we seek).
* Find the maximum of $P(\text{hist}|P_{\mu_{try}})$ numerically.
End of explanation
"""
sample_size_n = int(1e3)
sample_7 = Montecarlo_inv_fun(np.array([pmu_7]), sample_size_n)[:,0]
bin_number_7 = 10
'''STEP 2'''
#Obtain the theoretical pdf
dG = gamma(cost,pmu_7)
#Plot the histogram and the theoretical distribution
fig = plt.figure()
ax = fig.add_subplot(111)
ax.hist(sample_7,color='b', bins = bin_number_7,normed=1, range=(-1,1), label = 'Sample values')
#ax.plot(cost, dG, label = 'Theoretical pdf')
ax.set_title('Histogram data')
''' STEP 3'''
hist_7, bin_edges_7 = np.histogram(sample_7,range=(-1,1), bins = bin_number_7)
#Define the likelihood function
def likelihood(pmu_try, hist_values):
n_bin = len(hist_values)
N = sum(hist_values)
bin_edges = np.linspace(-1, 1, n_bin+1)
Prob_bin = Fcumulative(bin_edges[1:], pmu_try) - Fcumulative(bin_edges[:-1], pmu_try)
# This is the theoretical likelihood as defined in the comment above. Since the values that we obtain are very
# small, we have decided to work with the logarithm of this function.
'''
nfact = factorial(len(hist_values))
num = np.power(Prob_bin_7,hist_values)
den = factorial(hist_values)
likelihood = nfact*np.prod(num/den)
'''
#term1 = -np.log(factorial(N)) # note that the log likelihood is defined as -log(L)
term2 = -np.sum(hist_values*np.log(Prob_bin))
#term3 = -np.sum(np.log(factorial(hist_values)))
#likelihood = term1 + term2 - term3
return term2
# define the derivative of the likelihood function
def d_likelihood(pmu_try, hist_values):
n_bin = len(hist_values)
N = sum(hist_values)
bin_edges = np.linspace(-1, 1, n_bin+1)
Prob_bin = Fcumulative(bin_edges[1:], pmu_try) - Fcumulative(bin_edges[:-1], pmu_try)
d_like = np.sum(hist_values/Prob_bin*(np.square(bin_edges[:-1])-np.square(bin_edges[1:])))/12.
return d_like
pmu_grid = np.linspace(-1,1,50)
like = np.empty(len(pmu_grid))
Dlike = np.empty(len(pmu_grid))
for i,pmu in enumerate(pmu_grid):
like[i] = likelihood(pmu, hist_7)
Dlike[i] = d_likelihood(pmu, hist_7)
fig=plt.figure()
ax1=fig.add_subplot(121)
ax1.plot(pmu_grid, like, '.')
ax2=fig.add_subplot(122)
ax2.plot(pmu_grid, Dlike, '.')
#res = opt.minimize(fun=likelihood, x0=0, args=(hist_7))
#Pmu_ML = res.x[0]
Pmu_ML = opt.fsolve(d_likelihood, 0, args=(hist_7))
print 'Real value of Pmu:%.4g' %pmu_7
L_ML = likelihood(Pmu_ML,hist_7)
fun_var = lambda Pmu : likelihood(Pmu,hist_7)-L_ML-0.5
err_plus = opt.fsolve(fun_var,Pmu_ML+1)[0]-Pmu_ML
err_minus = Pmu_ML-opt.fsolve(fun_var,Pmu_ML-1)[0]
print 'ML estimate of Pmu:\n\tPmu=%.4g (-%.3g,+%.3g)' %(Pmu_ML,err_minus,err_plus)
print 'Mean estimate of Pmu:\n\tPmu=%.4g (%.3g)' %(-9*np.mean(sample_7), 9*np.std(sample_7)/np.sqrt(sample_size_n)) # here the
# std of the mean-based estimator is the std of Pmu = a*x, where a is the constant -9 and x is the random variable mean;
# therefore, the std of Pmu is abs(a)*std(x)
"""
Explanation: [Oriol] I have separated the definition of $P_{\mu}$ from the rest of the parameters, so unless you print it you won't know the value of $P_{\mu}$ and you won't complain as much. You can now try to guess $P_{\mu}$ by looking at the histogram, with the plot of the theoretical pdf commented out. I have also moved the bin-number definition down in case anyone gets bored and wants to see how the result varies with the same sample but different bins.
End of explanation
"""
|
NEONScience/NEON-Data-Skills | tutorials/Python/Hyperspectral/hyperspectral-classification/Classification_PCA_py/Classification_PCA_py.ipynb | agpl-3.0 |
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from scipy import linalg
from scipy import io
from mpl_toolkits.mplot3d import Axes3D
def PlotSpectraAndMean(Spectra, Wv, fignum):
### Spectra is NBands x NSamps
mu = np.mean(Spectra, axis=1)
print(np.shape(mu))
plt.figure(fignum)
plt.plot(Wv, Spectra, 'c')
plt.plot(Wv, mu, 'r')
plt.show()
return mu
"""
Explanation: syncID: 9a1e6798f2e94a71bbcd1c6a1f5946d2
title: "Classification of Hyperspectral Data with Principal Components Analysis (PCA) in Python"
description: "Learn to classify spectral data using the Principal Components Analysis (PCA) method."
dateCreated: 2017-06-21
authors: Paul Gader
contributors: Donal O'Leary
estimatedTime: 1 hour
packagesLibraries: numpy, gdal, matplotlib, matplotlib.pyplot
topics: hyperspectral-remote-sensing, HDF5, remote-sensing
languagesTool: python
dataProduct: NEON.DP1.30006, NEON.DP3.30006, NEON.DP1.30008
code1: https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/tutorials/Python/Hyperspectral/hyperspectral-classification/Classification_PCA_py/Classification_PCA_py.ipynb
tutorialSeries: intro-hsi-py-series
urlTitle: classification-pca-python
In this tutorial, we will learn to classify spectral data using the
Principal Components Analysis (PCA) method.
<div id="ds-objectives" markdown="1">
### Objectives
After completing this tutorial, you will be able to:
* Classify spectral remote sensing data using Principal Components Analysis.
### Install Python Packages
* **numpy**
* **gdal**
* **matplotlib**
* **matplotlib.pyplot**
### Download Data
<a href="https://ndownloader.figshare.com/files/8730436">
Download the spectral classification teaching data subset</a>
<a href="https://ndownloader.figshare.com/files/8730436" class="link--button link--arrow">
Download Dataset</a>
### Additional Materials
This tutorial was prepared in conjunction with a presentation on spectral classification
that can be downloaded.
<a href="https://ndownloader.figshare.com/files/8730613">
Download Dr. Paul Gader's Classification 1 PPT</a>
<a href="https://ndownloader.figshare.com/files/8731960">
Download Dr. Paul Gader's Classification 2 PPT</a>
<a href="https://ndownloader.figshare.com/files/8731963">
Download Dr. Paul Gader's Classification 3 PPT</a>
</div>
Set up
First, we'll start by setting up the necessary environment.
End of explanation
"""
filename = '/Users/olearyd/Git/data/RSDI2017-Data-SpecClass/OSBSTinyIm.mat'
ImDict = io.loadmat(filename)
OSBSTinyIm = ImDict['OSBSTinyIm']
TinySize = np.shape(OSBSTinyIm)
NRows = TinySize[0]
NCols = TinySize[1]
NBands = TinySize[2]
print('{0:4d} {1:4d} {2:4d}'.format(NRows, NCols, NBands))
"""
Explanation: Now we can load the spectra.
Note that you will need to update the filepath below for your local file structure.
End of explanation
"""
### LOAD WAVELENGTHS WITH WATER BANDS ###
### AND BAD BEGINNING AND ENDING BANDS REMOVED ###
Wv = io.loadmat('/Users/olearyd/Git/data/RSDI2017-Data-SpecClass/NEONWvsNBB.mat')
Wv = Wv['NEONWvsNBB']
print(np.shape(Wv))
plt.figure(1)
plt.plot(range(346), Wv)
plt.show()
"""
Explanation: Now we can extract wavelengths.
End of explanation
"""
### HAVE TO SUBTRACT AN OFFSET BECAUSE OF BAD BAND ###
### REMOVAL AND 0-BASED Python vs 1-Based MATLAB ###
Offset = 7
### LOAD & PRINT THE INDICES FOR THE COLORS ###
### AND DIG THEM OUT OF MANY LAYERS OF ARRAYS ###
NEONColors = io.loadmat('/Users/olearyd/Git/data/RSDI2017-Data-SpecClass/NEONColors.mat')
NEONRed = NEONColors['NEONRed']
NEONGreen = NEONColors['NEONGreen']
NEONBlue = NEONColors['NEONBlue']
NEONNir = NEONColors['NEONNir']
NEONRed = NEONRed[0][0]-Offset
NEONGreen = NEONGreen[0][0]-Offset
NEONBlue = NEONBlue[0][0]-Offset
NEONNir = NEONNir[0][0]-Offset
print('Indices: {0:4d} {1:4d} {2:4d} {3:4d}'.format(NEONRed, NEONGreen, NEONBlue, NEONNir))
### CONVERT THE INDICES TO WAVELENGTHS ###
NEONRedWv = Wv[NEONRed][0]
NEONGreenWv = Wv[NEONGreen][0]
NEONBlueWv = Wv[NEONBlue][0]
NEONNirWv = Wv[NEONNir][0]
print('Wavelengths: {0:4d} {1:4d} {2:4d} {3:4d}'.format(NEONRedWv, NEONGreenWv, NEONBlueWv, NEONNirWv))
"""
Explanation: Let's load indices for Red, Green, and Blue for NEON hyperspectral data.
End of explanation
"""
RGBIm = OSBSTinyIm[:, :, [NEONRed, NEONGreen, NEONBlue]]
RGBIm = np.sqrt(RGBIm)
plt.figure(2)
plt.imshow(RGBIm)
plt.show()
"""
Explanation: Now we can make a color image and display it
End of explanation
"""
### HAVE TO TAKE INTO ACCOUNT DIFFERENCES BETWEEN Python AND Matlab ###
### Python USES THE C PROGRAMMING LANGUAGE ORDERING ###
### MATLAB USERS THE FORTRAN PROGRAMMING LANGUAGE ORDERING ###
### Python WOULD RESHAPE BY REFERENCE AND MATLAB BY VALUE ###
### THEREFORE, WE NEED TO COPY THE VALUES EXPLICITLY ###
TinyVecs = OSBSTinyIm.reshape(NRows*NCols, NBands, order='F').copy()
### MATLAB TREATS THE ROWS AS DATA SAMPLES ###
### np TREATS THE COLS AS DATA SAMPLES ###
TinyVecs = np.transpose(TinyVecs)
NSamps = np.shape(TinyVecs)[1]
np.shape(TinyVecs)
### EXERCISE
SpecIndices = range(1000, 2000, 100)
SomeSpectra = TinyVecs[:, range(1000, 2000, 100)]
mymu = PlotSpectraAndMean(SomeSpectra, Wv, 3)
np.shape(mymu)
"""
Explanation: Now let's turn the image into a sequence of vectors
so we can use matrix algebra
End of explanation
"""
### Indices of Spectra to Try ###
### SpecIndices = range(0, 1000, 100) ###
SpecIndices = range(1000, 2000, 100)
SomeSpectra = TinyVecs[:, range(1000, 2000, 100)]
plt.figure(3)
plt.plot(Wv, SomeSpectra)
plt.xlabel('Wavelengths in nm')
plt.ylabel('Reflectance')
plt.show()
"""
Explanation: Let's plot some spectra
End of explanation
"""
mu = np.mean(TinyVecs, axis=1)
plt.figure(4)
plt.plot(Wv, SomeSpectra, 'c')
plt.plot(Wv, mu, 'k')
plt.xlabel('Wavelengths in nm')
plt.ylabel('Reflectance')
plt.show()
"""
Explanation: Compute the Average Spectrum and plot it
End of explanation
"""
np.shape(mu)
### Broadcasting subtracts the mean spectrum from every sample at once
TinyVecsZ = TinyVecs - mu[:, np.newaxis]
muz = np.mean(TinyVecsZ, axis=1)
plt.figure(5)
plt.plot(Wv, muz, 'k')
#plt.ylim(-1,1)
plt.show()
"""
Explanation: Now we want to subtract the mean from every sample.
End of explanation
"""
C = np.cov(TinyVecs)
np.shape(C)
"""
Explanation: Let's calculate the covariance.
End of explanation
"""
plt.figure(6)
plt.imshow(C)
plt.show()
# PRINT OUT SOME "AMPLIFIED" COVARIANCE VALUES %%%
for cn in range(0, 50,5):
w = int(Wv[cn])
if cn==0:
print(" ", end=" ")
else:
print('{0:5d}'.format(w), end=" ")
print('\n')
for rn in range(5, 50, 5):
w = int(Wv[rn])
print('{0:5d}'.format(w), end=" ")
for cn in range(5,50,5):
CovVal = int(100000*C[rn, cn])  # index by (row, column), not (row, row)
print('{0:5d}'.format(CovVal), end=" ")
print('\n')
#print(round(100000*C[NEONBlue, NEONNir]))
#print(round(100000*C[NEONGreen, NEONNir]))
#print(round(100000*C[NEONRed, NEONNir]))
#print(round(100000*C[NEONGreen, NEONRed]))
"""
Explanation: We can look at some of the values but there are too many to look at them all.
We can also view C as an image.
End of explanation
"""
Norms = np.sqrt(np.sum(TinyVecs*TinyVecs, axis=0))
plt.figure(7)
plt.plot(Norms)
### Too many Norms. How do we fix?
plt.show()
np.shape(Norms)
np.shape(TinyVecs)
### Allocate Memory
TinyVecsNorm = np.zeros((NBands, NSamps))
for samp in range(NSamps):
NormSamp = Norms[samp]
for band in range(NBands):
TinyVecsNorm[band, samp] = TinyVecs[band,samp]/NormSamp
Norms1 = np.sqrt(np.sum(TinyVecsNorm*TinyVecsNorm, axis=0))
plt.figure(7)
plt.plot(Norms1)
plt.show()
BigNorm = np.max(Norms1)
LitNorm = np.min(Norms1)
print('{0:4f} {1:4f}'.format(BigNorm, LitNorm))
### Too many Norms. How do we fix?
"""
Explanation: Notice that there are no negative values. Why?
What if we normalize the vectors to have Norm 1 (a common strategy).
End of explanation
"""
### EXERCISE
SpecIndices = range(1000, 2000, 100)
SomeSpectraNorm = TinyVecsNorm[:, range(1000, 2000, 100)]
MuNorm = PlotSpectraAndMean(SomeSpectraNorm, Wv, 3)
CNorm = np.cov(TinyVecsNorm)
plt.figure()
plt.imshow(CNorm)
plt.show()
# PRINT OUT SOME "AMPLIFIED" COVARIANCE VALUES %%%
for cn in range(0, 50,5):
w = int(Wv[cn])
if cn==0:
print(" ", end=" ")
else:
print('{0:5d}'.format(w), end=" ")
print('\n')
for rn in range(5, 50, 5):
w = int(Wv[rn])
print('{0:5d}'.format(w), end=" ")
for cn in range(5,50,5):
CovVal = int(10000000*CNorm[rn, cn])  # index by (row, column), not (row, row)
print('{0:5d}'.format(CovVal), end=" ")
print('\n')
print(np.shape(TinyVecs))
print(NEONNir)
print(NEONRed)
NIRVals = TinyVecs[NEONNir, range(NSamps)]
RedVals = TinyVecs[NEONRed, range(NSamps)]
NDVIVals = (NIRVals-RedVals)/(NIRVals+RedVals)
np.shape(NDVIVals)
NDVIIm = np.reshape(NDVIVals,(NRows, NCols),order='F')
print(np.shape(NDVIIm))
plt.figure()
plt.hist(NDVIVals)
plt.show()
HiNDVI = NDVIIm*(NDVIIm>0.8)
plt.figure()
plt.imshow(HiNDVI)
plt.show()
# plt.figure()
# plt.plot(nonzero(NDVIVals>0.8))
# plt.show()
VegIndices = np.nonzero(NDVIVals>0.8)
# print(VegIndices[0])
print(np.shape(VegIndices))
# print(np.shape(TinyVecs))
VegSpectra = TinyVecs[:, VegIndices[0]]
print(np.shape(VegSpectra))
CVeg = np.cov(VegSpectra)
plt.figure(9)
plt.imshow(CVeg,extent=(np.amin(Wv), np.amax(Wv),np.amax(Wv), np.amin(Wv)))
plt.colorbar()
plt.show()
"""
Explanation: <div id="ds-challenge" markdown="1">
**Challenge: Plotting Spectra with Mean Function**
Turn the script for plotting spectra and their mean above into a function.
</div>
End of explanation
"""
C = np.cov(TinyVecs)
D,V = linalg.eig(C)
D = D.real
print(np.shape(D))
print(np.shape(V))
print(TinyVecs.shape)
print(V[0,0])
plt.figure(10)
print(D.shape)
plt.plot(D)  # D is already a 1-D array of eigenvalues; plot it directly
#Exercise
#plt.plot(D[range(10)])
#plt.plot(D[range(10, 30, 10)])
plt.show()
TinyVecsPCA = np.dot(V.T, TinyVecs)
PCACovar = np.cov(TinyVecsPCA)
D,V = linalg.eig(C)
D = D.real
print(D.shape)
print(PCACovar.shape)
for r in range(10):
print('{0:5f} {1:5f}'.format(D[r], PCACovar[r,r]))
print()
for r in range(10):
for c in range(10):
NextVal = int(10000*PCACovar[r,c])
print('{0:5d}'.format(NextVal), end=" ")
print('\n')
# #Delta = np.sum(np.sum((PCACovar-D), axis=0), axis=0)
# print(Delta)
# plt.figure(11)
# plt.plot(np.diag(PCACovar))
# plt.show()
"""
Explanation: PCA
OK, let's do PCA
Recall that TinyVecsZ is the mean-subtracted version of the original spectra; np.cov subtracts the mean internally, so passing TinyVecs yields the same covariance.
End of explanation
"""
%matplotlib notebook
fig = plt.figure(13)
ax = fig.add_subplot(111, projection='3d')
ax.scatter(TinyVecsPCA[0,range(NSamps)],TinyVecsPCA[1,range(NSamps)],TinyVecsPCA[2,range(NSamps)], marker='o')
plt.show()
"""
Explanation: Notice that the values on the diagonal are the variances of each coordinate in
the PCA transformed data. They drop off rapidly which is why one can reduce
dimensionality by discarding components that have low variance. Also, notice that
the diagonal matrix D produce by diagonalizing the covariance of x is the
covariance of y = PCA(x).
If the data are Gaussian, then the coordinates of y are uncorrelated and
independent. If not, then only uncorrelated.
Let's pull out the first 3 dimensions and plot them.
End of explanation
"""
for coord in range(3):
P1 = TinyVecsPCA[coord, :]
PCAIm = np.reshape(P1, (NRows, NCols), order='F')
plt.figure(14+coord)
plt.imshow(np.abs(PCAIm))
plt.colorbar()
plt.show()
"""
Explanation: We can also display principal components as images
End of explanation
"""
|