| repo_name | path | license | content |
|---|---|---|---|
EducationalTestingService/rsmtool | rsmtool/notebooks/comparison/feature_descriptives.ipynb | apache-2.0 | if not out_dfs['descriptives'].empty:
display(HTML(out_dfs['descriptives'].to_html(index=True, classes=['alternate_colors3_groups'], float_format=float_format_func)))
else:
display(Markdown(no_info_str))
"""
Explanation: Overall descriptive feature statistics
End of explanation
"""
if not out_dfs['outliers'].empty:
display(HTML(out_dfs['outliers'].to_html(index=True, classes=['alternate_colors3_groups'], float_format=float_format_func)))
else:
display(Markdown(no_info_str))
"""
Explanation: Prevalence of recoded cases
End of explanation
"""
if not out_dfs['percentiles'].empty:
display(HTML(out_dfs['percentiles'].to_html(index=True, classes=['alternate_colors3_groups'], float_format=float_format_func)))
else:
display(Markdown(no_info_str))
"""
Explanation: Feature value distribution
End of explanation
"""
missing_value_warnings = []
if not (outputs_old['df_train_features'].empty or outputs_new['df_train_features'].empty):
ids_in_both_sets = list(set(outputs_old['df_train_features']['spkitemid']).intersection(outputs_new['df_train_features']['spkitemid']))
if len(ids_in_both_sets) != 0:
if len(ids_in_both_sets) != len(outputs_old['df_train_features']):
missing_value_warnings.append("Some responses from the old data set were not present in the new data.")
if len(ids_in_both_sets) != len(outputs_new['df_train_features']):
missing_value_warnings.append("Some responses from the new data set were not present in the old data.")
# select matching data sets
df_selected_old = outputs_old['df_train_features'][outputs_old['df_train_features']['spkitemid'].isin(ids_in_both_sets)]
df_selected_new = outputs_new['df_train_features'][outputs_new['df_train_features']['spkitemid'].isin(ids_in_both_sets)]
df_correlations = comparer.compute_correlations_between_versions(df_selected_old,
df_selected_new)
if len(missing_value_warnings) > 0:
display(Markdown('*WARNING*: {} These responses were excluded from this analysis.'.format(' '.join(missing_value_warnings))))
display(HTML(df_correlations[['N', 'human_old', 'human_new', 'old_new']].to_html(index=True,
classes=['sortable'],
float_format=int_or_float_format_func)))
else:
display(Markdown("*WARNING: There were no matching response IDs in the training sets in old and new version*"))
else:
display(Markdown(no_info_str))
"""
Explanation: Correlations between feature values in the old and new models
The table shows the correlations between raw feature values and human scores for the old and new models, as well as the correlations between the feature values themselves across the two models.
End of explanation
"""
|
Yu-Group/scikit-learn-sandbox | jupyter/backup_deprecated_nbs/16_Combined_utils_RIT.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.datasets import load_breast_cancer
import numpy as np
from functools import reduce
# Import our custom utilities
from imp import reload
from utils import irf_jupyter_utils
from utils import irf_utils
reload(irf_jupyter_utils)
reload(irf_utils)
"""
Explanation: Key Requirements for the iRF scikit-learn implementation
The following documents the main requirements for the iRF implementation
Typical Setup
End of explanation
"""
%time X_train, X_test, y_train, y_test, rf = irf_jupyter_utils.generate_rf_example(sklearn_ds = load_breast_cancer())
"""
Explanation: Step 1: Fit the Initial Random Forest
Just fit every feature with equal weights per the usual random forest code, e.g. RandomForestClassifier in scikit-learn
End of explanation
"""
print("Training feature dimensions", X_train.shape, sep = ":\n")
print("\n")
print("Training outcome dimensions", y_train.shape, sep = ":\n")
print("\n")
print("Test feature dimensions", X_test.shape, sep = ":\n")
print("\n")
print("Test outcome dimensions", y_test.shape, sep = ":\n")
print("\n")
print("first 5 rows of the training set features", X_train[:5], sep = ":\n")
print("\n")
print("first 5 rows of the training set outcomes", y_train[:5], sep = ":\n")
X_train.shape[0]
breast_cancer = load_breast_cancer()
breast_cancer.data.shape[0]
"""
Explanation: Check out the data
End of explanation
"""
# Import our custom utilities
rf.n_estimators
estimator0 = rf.estimators_[0] # First tree
estimator1 = rf.estimators_[1] # Second tree
estimator2 = rf.estimators_[2] # Third tree
"""
Explanation: Step 2: For each Tree get core leaf node features
For each decision tree in the classifier, get:
The list of leaf nodes
Depth of the leaf node
Leaf node predicted class i.e. {0, 1}
Probability of predicting class in leaf node
Number of observations in the leaf node i.e. weight of node
Get the 2 Decision trees to use for testing
End of explanation
"""
tree_dat0 = irf_utils.get_tree_data(X_train = X_train, dtree = estimator0, root_node_id = 0)
tree_dat1 = irf_utils.get_tree_data(X_train = X_train, dtree = estimator1, root_node_id = 0)
tree_dat2 = irf_utils.get_tree_data(X_train = X_train, dtree = estimator2, root_node_id = 0)
"""
Explanation: Design the single function to get the key tree information
Get data from the first and second decision tree
End of explanation
"""
# Now plot the trees individually
irf_jupyter_utils.draw_tree(decision_tree = estimator0)
irf_jupyter_utils.pretty_print_dict(inp_dict = tree_dat0)
# Count the number of samples passing through the leaf nodes
sum(tree_dat0['tot_leaf_node_values'])
"""
Explanation: Decision Tree 0 (First) - Get output
Check the output against the decision tree graph
End of explanation
"""
feature_importances = rf.feature_importances_
std = np.std([dtree.feature_importances_ for dtree in rf.estimators_]
, axis=0)
feature_importances_rank_idx = np.argsort(feature_importances)[::-1]
# Check that the feature importances are standardized to 1
print(sum(feature_importances))
"""
Explanation: Step 3: Get the Gini Importance of Weights for the Random Forest
For the first random forest we just need to get the Gini Importance of Weights
Step 3.1 Get them numerically - most important
End of explanation
"""
# Print the feature ranking
print("Feature ranking:")
for f in range(X_train.shape[1]):
print("%d. feature %d (%f)" % (f + 1
, feature_importances_rank_idx[f]
, feature_importances[feature_importances_rank_idx[f]]))
# Plot the feature importances of the forest
plt.figure()
plt.title("Feature importances")
plt.bar(range(X_train.shape[1])
, feature_importances[feature_importances_rank_idx]
, color="r"
, yerr = std[feature_importances_rank_idx], align="center")
plt.xticks(range(X_train.shape[1]), feature_importances_rank_idx)
plt.xlim([-1, X_train.shape[1]])
plt.show()
"""
Explanation: Step 3.2 Display Feature Importances Graphically (just for interest)
End of explanation
"""
# Import our custom utilities
from imp import reload
from utils import irf_jupyter_utils
from utils import irf_utils
reload(irf_jupyter_utils)
reload(irf_utils)
rf.n_classes_
estimator0.n_classes_
type(rf).__name__
rf_metrics = irf_utils.get_validation_metrics(inp_class_reg_obj = rf, y_true = y_test, X_test = X_test)
rf_metrics['confusion_matrix']
dtree1 = rf.estimators_[1]
dtree_metrics = irf_utils.get_validation_metrics(inp_class_reg_obj = dtree1, y_true = y_test, X_test = X_test)
dtree_metrics['confusion_matrix']
rf_metrics = irf_utils.get_validation_metrics(inp_class_reg_obj = rf, y_true = y_test, X_test = X_test)
rf_metrics['confusion_matrix']
# CHECK: If the random forest objects are going to be really large in size
# we could just omit them and only return our custom summary outputs
rf_metrics = irf_utils.get_validation_metrics(inp_class_reg_obj = rf, y_true = y_test, X_test = X_test)
all_rf_outputs = {"rf_obj" : rf,
"feature_importances" : feature_importances,
"feature_importances_rank_idx" : feature_importances_rank_idx,
"rf_metrics" : rf_metrics}
# CHECK: The following should be paralellized!
# CHECK: Whether we can maintain X_train correctly as required
for idx, dtree in enumerate(rf.estimators_):
dtree_out = irf_utils.get_tree_data(X_train=X_train,
X_test=X_test,
y_test=y_test,
dtree=dtree, root_node_id = 0)
# Append output to dictionary
all_rf_outputs["dtree{}".format(idx)] = dtree_out
estimator0_out = irf_utils.get_tree_data(X_train=X_train,
X_test=X_test,
y_test=y_test,
dtree=estimator0,
root_node_id=0)
print(estimator0_out['all_leaf_nodes'])
"""
Explanation: Putting it all together
Create a dictionary object to include all of the random forest objects
End of explanation
"""
print(estimator0_out['all_leaf_nodes'])
print(sum(estimator0_out['tot_leaf_node_values']))
print(estimator0_out['tot_leaf_node_values'])
print(estimator0_out['all_leaf_node_samples'])
print(estimator0.tree_.n_node_samples[0])
print([round(i, 1) for i in estimator0_out['all_leaf_node_samples_percent']])
print(sum(estimator0_out['all_leaf_node_samples_percent']))
"""
Explanation: Examine Individual Decision Tree Output
End of explanation
"""
irf_jupyter_utils.pretty_print_dict(inp_dict = all_rf_outputs)
"""
Explanation: Check the final dictionary of outputs
End of explanation
"""
irf_jupyter_utils.pretty_print_dict(inp_dict = all_rf_outputs['rf_metrics'])
all_rf_outputs['dtree0']
estimator0.tree_.value[:]
X_train_n_samples = X_train.shape[0]
all_leaf_vals = all_rf_outputs['dtree0']['all_leaf_node_values']
scaled_values = [i/X_train_n_samples for i in all_leaf_vals]
scaled_values
[(i) for i in all_leaf_vals]
"""
Explanation: Now we can start setting up the RIT class
Overview
At its core, the RIT comprises 3 main modules:
* FILTERING: Subsetting to either the 1's or the 0's
* RANDOM SAMPLING: Drawing the path-nodes in a weighted manner, with/without replacement, within-tree/outside-tree
* INTERSECTION: Intersecting the selected node paths in a systematic manner
For now we will just work with a single decision tree outputs
End of explanation
"""
uniq_feature_paths = all_rf_outputs['dtree0']['all_uniq_leaf_paths_features']
leaf_node_classes = all_rf_outputs['dtree0']['all_leaf_node_classes']
ones_only = [i for i, j in zip(uniq_feature_paths, leaf_node_classes)
if j == 1]
ones_only
print("Number of leaf nodes", len(all_rf_outputs['dtree0']['all_uniq_leaf_paths_features']), sep = ":\n")
print("Number of leaf nodes with 1 class", len(ones_only), sep = ":\n")
# Just pick the last seven cases, we are going to manually construct
# binary RIT of depth 3 i.e. max 2**3 -1 = 7 intersecting nodes
ones_only_seven = ones_only[-7:]
ones_only_seven
# Construct a binary version of the RIT manually!
# This should come in useful for unit tests!
node0 = ones_only_seven[-1]
node1 = np.intersect1d(node0, ones_only_seven[-2])
node2 = np.intersect1d(node1, ones_only_seven[-3])
node3 = np.intersect1d(node1, ones_only_seven[-4])
node4 = np.intersect1d(node0, ones_only_seven[-5])
node5 = np.intersect1d(node4, ones_only_seven[-6])
node6 = np.intersect1d(node4, ones_only_seven[-7])
intersected_nodes_seven = [node0, node1, node2, node3, node4, node5, node6]
for idx, node in enumerate(intersected_nodes_seven):
print("node" + str(idx), node)
rit_output = reduce(np.union1d, (node2, node3, node5, node6))
rit_output
# Import our custom utilities
from imp import reload
from utils import irf_jupyter_utils
from utils import irf_utils
reload(irf_jupyter_utils)
reload(irf_utils)
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
raw_data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
raw_data.data, raw_data.target, train_size=0.9,
random_state=2017)
rf = RandomForestClassifier(
n_estimators=3, random_state=2018)
rf.fit(X=X_train, y=y_train)
estimator0 = rf.estimators_[0]
estimator0_out = irf_utils.get_tree_data(X_train=X_train,
X_test=X_test,
y_test=y_test,
dtree=estimator0,
root_node_id=0)
print(estimator0_out['all_leaf_nodes'])
estimator0_out['validation_metrics']
np.random.seed(12)
tree = irf_utils.build_tree(feature_paths=irf_utils.select_random_path(),
max_depth=3,
noisy_split=False,
num_splits=5)
print("Root:\n", tree._val)
#print("Some child:\n", tree.children[0].children[1]._val)
# If noisy split is False, this should pass
assert(len(tree) == 1 + 5 + 5**2)
list(tree.traverse_depth_first())
estimator0_out_fltr = irf_utils.filter_leaves_classifier(dtree_data=estimator0_out,bin_class_type=1)
estimator0_out_fltr
print("Total Number of classes", len(estimator0_out['all_leaf_node_classes']), sep=":\n")
print("Total Number of 1-value classes", sum(estimator0_out['all_leaf_node_classes']), sep=":\n")
print("Total Number of 1-value classes", len(estimator0_out_fltr['leaf_nodes_depths']), sep=":\n")
all_rf_outputs['dtree0']['all_uniq_leaf_paths_features']
irf_utils.filter_leaves_classifier(dtree_data=all_rf_outputs['dtree0'],bin_class_type=1)
all_rf_outputs['dtree0']
# Import our custom utilities
from imp import reload
from utils import irf_jupyter_utils
from utils import irf_utils
reload(irf_jupyter_utils)
reload(irf_utils)
filtered = irf_utils.filter_leaves_classifier(dtree_data=all_rf_outputs['dtree0'],bin_class_type=1)
filtered
filtered['validation_metrics']['accuracy_score']
filtered['uniq_feature_paths']
import random
filtered
random.choice([1,5,9,10,12])
random.choice(filtered['uniq_feature_paths'])
from scipy import stats
def weighted_choice(values, weights):
"""Discrete distribution, drawing values with the frequency specified in weights.
Weights do not need to be normalized.
"""
if len(weights) != len(values):
raise ValueError('Equal number of values and weights expected')
weights = np.array(weights)
weights = weights / weights.sum()
dist = stats.rv_discrete(values=(range(len(weights)), weights))
while True:
yield values[dist.rvs()]
g = weighted_choice(filtered['uniq_feature_paths'], filtered['tot_leaf_node_values'])
for i in range(100):
print(next(g))
filtered0 = irf_utils.filter_leaves_classifier(dtree_data=all_rf_outputs['dtree0'],bin_class_type=1)
filtered1 = irf_utils.filter_leaves_classifier(dtree_data=all_rf_outputs['dtree1'],bin_class_type=1)
all_weights = []
all_paths = []
for tree in range(2):
filtered = irf_utils.filter_leaves_classifier(dtree_data=all_rf_outputs['dtree{}'.format(tree)],
bin_class_type=1)
all_weights.extend(filtered['tot_leaf_node_values'])
all_paths.extend(filtered['uniq_feature_paths'])
g = weighted_choice(all_paths, all_weights)
all_weights
all_paths
for i in range(50):
print(next(g))
"""
Explanation: Get the leaf node 1's paths
Get the unique feature paths where the leaf node predicted class is just 1
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.23/_downloads/d5a59f5536154816047f788dc4573ab4/60_sleep.ipynb | bsd-3-clause | # Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr>
# Stanislas Chambon <stan.chambon@gmail.com>
# Joan Massich <mailsik@gmail.com>
#
# License: BSD Style.
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.datasets.sleep_physionet.age import fetch_data
from mne.time_frequency import psd_welch
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer
"""
Explanation: Sleep stage classification from polysomnography (PSG) data
<div class="alert alert-info"><h4>Note</h4><p>This code is taken from the analysis code used in
:footcite:`ChambonEtAl2018`. If you reuse this code please consider
citing this work.</p></div>
This tutorial explains how to perform a toy polysomnography analysis that
answers the following question:
.. important:: Given two subjects from the Sleep Physionet dataset
:footcite:`KempEtAl2000,GoldbergerEtAl2000`, namely
Alice and Bob, how well can we predict the sleep stages of
Bob from Alice's data?
This problem is tackled as a supervised multiclass classification task. The aim
is to predict the sleep stage from 5 possible stages for each chunk of 30
seconds of data.
End of explanation
"""
ALICE, BOB = 0, 1
[alice_files, bob_files] = fetch_data(subjects=[ALICE, BOB], recording=[1])
mapping = {'EOG horizontal': 'eog',
'Resp oro-nasal': 'resp',
'EMG submental': 'emg',
'Temp rectal': 'misc',
'Event marker': 'misc'}
raw_train = mne.io.read_raw_edf(alice_files[0])
annot_train = mne.read_annotations(alice_files[1])
raw_train.set_annotations(annot_train, emit_warning=False)
raw_train.set_channel_types(mapping)
# plot some data
# scalings were chosen manually to allow for simultaneous visualization of
# different channel types in this specific dataset
raw_train.plot(start=60, duration=60,
scalings=dict(eeg=1e-4, resp=1e3, eog=1e-4, emg=1e-7,
misc=1e-1))
"""
Explanation: Load the data
Here we download the data from two subjects and the end goal is to obtain
:term:`epochs` and its associated ground truth.
MNE-Python provides us with
:func:`mne.datasets.sleep_physionet.age.fetch_data` to conveniently download
data from the Sleep Physionet dataset
:footcite:`KempEtAl2000,GoldbergerEtAl2000`.
Given a list of subjects and records, the fetcher downloads the data and
provides us, for each subject, with a pair of files:
``-PSG.edf`` containing the polysomnography. The :term:`raw` data from the
EEG helmet,
``-Hypnogram.edf`` containing the :term:`annotations` recorded by an
expert.
Combining these two in a :class:`mne.io.Raw` object, we can then extract
:term:`events` based on the descriptions of the annotations to obtain the
:term:`epochs`.
Read the PSG data and Hypnograms to create a raw object
End of explanation
"""
annotation_desc_2_event_id = {'Sleep stage W': 1,
'Sleep stage 1': 2,
'Sleep stage 2': 3,
'Sleep stage 3': 4,
'Sleep stage 4': 4,
'Sleep stage R': 5}
# keep last 30-min wake events before sleep and first 30-min wake events after
# sleep and redefine annotations on raw data
annot_train.crop(annot_train[1]['onset'] - 30 * 60,
annot_train[-2]['onset'] + 30 * 60)
raw_train.set_annotations(annot_train, emit_warning=False)
events_train, _ = mne.events_from_annotations(
raw_train, event_id=annotation_desc_2_event_id, chunk_duration=30.)
# create a new event_id that unifies stages 3 and 4
event_id = {'Sleep stage W': 1,
'Sleep stage 1': 2,
'Sleep stage 2': 3,
'Sleep stage 3/4': 4,
'Sleep stage R': 5}
# plot events
fig = mne.viz.plot_events(events_train, event_id=event_id,
sfreq=raw_train.info['sfreq'],
first_samp=events_train[0, 0])
# keep the color-code for further plotting
stage_colors = plt.rcParams['axes.prop_cycle'].by_key()['color']
"""
Explanation: Extract 30s events from annotations
The Sleep Physionet dataset is annotated using
`8 labels <physionet_labels_>`_:
Wake (W), Stage 1, Stage 2, Stage 3, Stage 4 corresponding to the range from
light sleep to deep sleep, REM sleep (R) where REM is the abbreviation for
Rapid Eye Movement sleep, movement (M), and Stage (?) for any non-scored
segment.
We will work only with 5 stages: Wake (W), Stage 1, Stage 2, Stage 3/4, and
REM sleep (R). To do so, we use the ``event_id`` parameter in
:func:`mne.events_from_annotations` to select which events we are
interested in and we associate an event identifier to each of them.
Moreover, the recordings contain long awake (W) regions before and after each
night. To limit the impact of class imbalance, we trim each recording by only
keeping 30 minutes of wake time before the first occurrence and 30 minutes
after the last occurrence of sleep stages.
End of explanation
"""
tmax = 30. - 1. / raw_train.info['sfreq']  # tmax is included
epochs_train = mne.Epochs(raw=raw_train, events=events_train,
event_id=event_id, tmin=0., tmax=tmax, baseline=None)
print(epochs_train)
"""
Explanation: Create Epochs from the data based on the events found in the annotations
End of explanation
"""
raw_test = mne.io.read_raw_edf(bob_files[0])
annot_test = mne.read_annotations(bob_files[1])
annot_test.crop(annot_test[1]['onset'] - 30 * 60,
annot_test[-2]['onset'] + 30 * 60)
raw_test.set_annotations(annot_test, emit_warning=False)
raw_test.set_channel_types(mapping)
events_test, _ = mne.events_from_annotations(
raw_test, event_id=annotation_desc_2_event_id, chunk_duration=30.)
epochs_test = mne.Epochs(raw=raw_test, events=events_test, event_id=event_id,
tmin=0., tmax=tmax, baseline=None)
print(epochs_test)
"""
Explanation: Applying the same steps to the test data from Bob
End of explanation
"""
# visualize Alice vs. Bob PSD by sleep stage.
fig, (ax1, ax2) = plt.subplots(ncols=2)
# iterate over the subjects
stages = sorted(event_id.keys())
for ax, title, epochs in zip([ax1, ax2],
['Alice', 'Bob'],
[epochs_train, epochs_test]):
for stage, color in zip(stages, stage_colors):
epochs[stage].plot_psd(area_mode=None, color=color, ax=ax,
fmin=0.1, fmax=20., show=False,
average=True, spatial_colors=False)
ax.set(title=title, xlabel='Frequency (Hz)')
ax2.set(ylabel='µV^2/Hz (dB)')
ax2.legend(ax2.lines[2::3], stages)
plt.show()
"""
Explanation: Feature Engineering
Observing the power spectral density (PSD) plot of the :term:`epochs` grouped
by sleeping stage we can see that different sleep stages have different
signatures. These signatures remain similar between Alice and Bob's data.
In the rest of this section we will create EEG features based on relative
power in specific frequency bands to capture this difference between the
sleep stages in our data.
End of explanation
"""
def eeg_power_band(epochs):
"""EEG relative power band feature extraction.
This function takes an ``mne.Epochs`` object and creates EEG features based
on relative power in specific frequency bands that are compatible with
scikit-learn.
Parameters
----------
epochs : Epochs
The data.
Returns
-------
X : numpy array of shape [n_samples, 5]
Transformed data.
"""
# specific frequency bands
FREQ_BANDS = {"delta": [0.5, 4.5],
"theta": [4.5, 8.5],
"alpha": [8.5, 11.5],
"sigma": [11.5, 15.5],
"beta": [15.5, 30]}
psds, freqs = psd_welch(epochs, picks='eeg', fmin=0.5, fmax=30.)
# Normalize the PSDs
psds /= np.sum(psds, axis=-1, keepdims=True)
X = []
for fmin, fmax in FREQ_BANDS.values():
psds_band = psds[:, :, (freqs >= fmin) & (freqs < fmax)].mean(axis=-1)
X.append(psds_band.reshape(len(psds), -1))
return np.concatenate(X, axis=1)
"""
Explanation: Design a scikit-learn transformer from a Python function
We will now create a function to extract EEG features based on relative power
in specific frequency bands to be able to predict sleep stages from EEG
signals.
End of explanation
"""
pipe = make_pipeline(FunctionTransformer(eeg_power_band, validate=False),
RandomForestClassifier(n_estimators=100, random_state=42))
# Train
y_train = epochs_train.events[:, 2]
pipe.fit(epochs_train, y_train)
# Test
y_pred = pipe.predict(epochs_test)
# Assess the results
y_test = epochs_test.events[:, 2]
acc = accuracy_score(y_test, y_pred)
print("Accuracy score: {}".format(acc))
"""
Explanation: Multiclass classification workflow using scikit-learn
To answer the question of how well we can predict the sleep stages of Bob
from Alice's data and avoid as much boilerplate code as possible, we will
take advantage of two key features of scikit-learn:
Pipeline, and FunctionTransformer.
A scikit-learn pipeline composes an estimator as a sequence of transforms
and a final estimator, while the FunctionTransformer converts a Python
function into an estimator-compatible object. In this manner we can create a
scikit-learn estimator that takes :class:`mne.Epochs` thanks to the
eeg_power_band function we just created.
End of explanation
"""
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred, target_names=event_id.keys()))
"""
Explanation: In short, yes. We can predict Bob's sleeping stages based on Alice's data.
Further analysis of the data
We can check the confusion matrix or the classification report.
End of explanation
"""
|
TheAstroFactory/transit-network | code/scratch/beacon_scratch.ipynb | mit | %matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
import matplotlib.cm as cm
import matplotlib
matplotlib.rcParams.update({'font.size':18})
matplotlib.rcParams.update({'font.family':'serif'})
"""
Explanation: Transit-Network
Making plots for my SETI idea, dreamed up on the airplane home from AAS 227
SHELVED
I put this idea on the backburner and removed it from the paper because it has an extra layer of confusion:
if the ETI goal is simply to send a 1-bit message ("we're here!"), then we don't need an extra unknown relation (i.e. the relation between orbital period and separation). Instead, it is easier to find (and probably build) a beacon network where all the periods are the same within a region (or, if not periods, then transit depths, times of transit, etc.)
End of explanation
"""
Alpha = 3./2.
ab_dist = np.array([2., 3.,3.5, 6., 6.7, 7., 8.2, 14., 18., 20.])
ab_per = ab_dist ** (Alpha)
# a figure of the 1-d projection, the key for SETI.
plt.figure(figsize=(5,4))
plt.plot(ab_dist, ab_per, 'k')
plt.scatter(ab_dist, ab_per,color='k')
plt.xlabel('Home $-$ Beacon Separation (pc)')
plt.ylabel('P$_{orb}$ of Beacon (days)')
plt.ylim(0,100)
plt.savefig('dist_per.pdf', dpi=150, bbox_inches='tight', pad_inches=0.25)
# figure angle on sky to plot
# theta = np.random.random(len(ab_dist)) * 2. * np.pi
# freeze this random config, i like it
theta = np.array([ 4.52448995, 3.46489278, 0.33872438, 1.6891746 , 2.37611205,
2.72516744, 5.41764719, 4.01860732, 1.72938583, 0.60279578])
x = ab_dist * np.cos(theta)
y = ab_dist * np.sin(theta)
# a figure of the 2-d observed (sky) plane
plt.figure(figsize=(6,5))
plt.axes()
# the central red cicrle
circ = plt.Circle((0,0), radius=1.4, fc='r', zorder=0)
plt.gca().add_patch(circ)
# make the concentric circles
for k in range(5,29,3):
circ = plt.Circle((0,0), radius=k, fc='none', alpha=0.35, color='k')
plt.gca().add_patch(circ)
plt.scatter(x,y, c=ab_per, cmap=cm.viridis_r, s=90, alpha=0.7, edgecolors='k', zorder=2)
plt.xlim(-20,20)
plt.ylim(-20,20)
plt.xlabel('RA (deg)')
plt.ylabel('Dec (deg)')
cbar = plt.colorbar()
cbar.set_label('P$_{orb}$ of Beacon (days)')
plt.savefig('sky_per.pdf', dpi=150, bbox_inches='tight', pad_inches=0.25)
plt.show()
"""
Explanation: The idea is simple: ET's would place artificial satellites in orbit around stars surrounding their home star system. The orbit period of the remote artificial satellites (or beacons) would be proportional to the distance from the remote star to the home star system. Each remote beacon system would be part of a network map leading to the home system. If the orbital period vs beacon distance relationship was known, the exact location of the home system could be triangulated using only a subset of the beacons.
First toy model (2D)
End of explanation
"""
n_b = 1000 # number of beacon systems
# n_b = 1e6
i_max = 90.
# i_max = 10. # what if we forced beacons to be roughly aligned w/ Galactic plane?
d_min = 2 # min distance from ET home world to place beacons (pc)
d_max = 50 # max distance from ET home world to place beacons (pc)
d_home = 1000 # distance from Earth to ET Home world (in pc)
alpha = 3. / 2. # the coefficient to encode the period-distance relationship - in this case Kepler 3rd law
R_star = 6.955e10 # cm (R_Sun)
R_planet = 7149235 # cm (R_Jupiter)
AU = 1.49598e13 # cm (1 AU)
#__ the part to repeat __
rad = np.random.random(n_b) * (d_max - d_min) + d_min # dist from ET Home to Beacons (pc)
per = rad**alpha # the period, in days by arbitrary construction
a_AU = (per / 365.)**(2./3.) # the orbital semimajor axis (in AU), assuming solar scaling
incl = np.random.random(n_b) * i_max # orbit plane inclination (deg)
#__
# plt.scatter(a_AU, per, s=90, alpha=0.6)
# plt.xlabel('a (AU)')
# plt.ylabel('Period (days)')
# determine if beacon is "visible", i.e. does it Transit?
b = a_AU * AU * np.sin(incl / 180. * np.pi)
Transit = b < (R_star + R_planet)
no_Transit = b >= (R_star + R_planet)
print(sum(Transit), n_b, float(sum(Transit)) / n_b)
# plt.scatter(a_AU[no_Transit], per[no_Transit], s=20, alpha=0.6, c='blue', lw=0)
# plt.scatter(a_AU[Transit], per[Transit], s=100, alpha=0.6, c='red', lw=0)
# plt.xlabel('a (AU)')
# plt.ylabel('Period (days)')
# plt.xlim(0,2)
# make a plot of fraction of systems that transit as a function of orbital semimajor axis (a)
yy, aa = np.histogram(a_AU[Transit], bins=25, range=(min(a_AU),1))
nn, aa = np.histogram(a_AU[no_Transit], bins=25, range=(min(a_AU),1))
plt.plot((aa[1:] + aa[0:-1])/2., np.array(yy, dtype='float') / nn)
plt.xlabel('a (AU)')
plt.ylabel('Fraction that transit')
# now put beacons in random places in space to illustrate on the sky
theta = np.random.random(n_b) * 2 * np.pi
phi = np.random.random(n_b) * np.pi
x = rad * np.cos(theta)
y = rad * np.sin(theta)
plt.figure(figsize=(5,5))
plt.scatter(x[no_Transit], y[no_Transit], s=10, alpha=0.1)
plt.scatter(x[Transit], y[Transit], s=100, alpha=0.5, c='red')
plt.xlim(-60,60)
plt.ylim(-60,60)
plt.xlabel('RA (deg)')
plt.ylabel('Dec (deg)')
plt.savefig('3d_model.pdf', dpi=150, bbox_inches='tight', pad_inches=0.25)
'''
# repeat this 3d toy model 1000 times to get smooth recovery fraction
hist_smooth = np.zeros_like(yy)
num_smooth = np.zeros_like(nn)
num_transit = np.zeros(1000)
for k in range(1000):
rad = np.random.random(n_b) * (d_max - d_min) + d_min # dist from ET Home to Beacons (pc)
per = rad**alpha # the period, in days by arbitrary construction
a_AU = (per / 365.)**(2./3.) # the orbital semimajor axis (in AU), assuming solar scaling
incl = np.random.random(n_b) * i_max # orbit plane inclination (deg)
b = a_AU * AU * np.sin(incl / 180. * np.pi)
Transit = b < (R_star + R_planet)
no_Transit = b >= (R_star + R_planet)
yy, aa = np.histogram(a_AU[Transit], bins=25, range=(0,2))
nn, aa = np.histogram(a_AU[no_Transit], bins=25, range=(0,2))
hist_smooth = hist_smooth + np.array(yy, dtype='float')
num_smooth = num_smooth + np.array(nn, dtype='float')
# plt.plot((aa[1:] + aa[0:-1])/2., np.array(yy, dtype='float') / nn, alpha=0.1, c='k')
num_transit[k] = (float(sum(Transit)) / n_b)
plt.plot((aa[1:] + aa[0:-1])/2., hist_smooth / num_smooth, lw=2);
plt.xlabel('a (AU)');
plt.ylabel('Fraction that transit');
# plt.savefig('recov_fraction.pdf', dpi=150, bbox_inches='tight', pad_inches=0.25)
print(np.mean(num_transit), np.std(num_transit))
''';
"""
Explanation: a simple 3D model
End of explanation
"""
plt.figure(figsize=(5,5))
plt.scatter(x[no_Transit], y[no_Transit], s=10, alpha=0.1)
plt.scatter(x[Transit], y[Transit], s=100, alpha=0.5, c=per[Transit], edgecolors='k', cmap=cm.viridis_r)
plt.xlim(-60,60)
plt.ylim(-60,60)
plt.xlabel('RA (deg)')
plt.ylabel('Dec (deg)')
plt.savefig('3dcolor.pdf', dpi=150, bbox_inches='tight', pad_inches=0.25)
Nother = 200
plt.figure(figsize=(5,5))
# plt.scatter(x[no_Transit], y[no_Transit], s=10, alpha=0.1)
plt.scatter(x[Transit], y[Transit], alpha=0.5, c=per[Transit], cmap=cm.viridis_r)
plt.scatter(np.random.random(Nother)*100-50,np.random.random(Nother)*100-50,
c=np.random.random(Nother)*250+5, alpha=.5, s=10)
plt.xlim(-60,60)
plt.ylim(-60,60)
plt.xlabel('RA (deg)')
plt.ylabel('Dec (deg)')
plt.savefig('3dcolor_bkgd.pdf', dpi=150, bbox_inches='tight', pad_inches=0.25)
"""
Explanation: OK, so our toy model works... but how do we actually detect these beacons among the noise of naturally occuring exoplanets we've detected?
if we had the Kepler exoplanet database
and "injected" this signal (assuming the Kepler cadence was 100% effective in detecting transiting systems), would we see an "over density" or concentration of orbital periods as a funcrtion of location?
End of explanation
"""
|
canismarko/xanespy | tests/View_TXM_Data.ipynb | gpl-3.0 | %load_ext autoreload
%autoreload 2
%matplotlib inline
from matplotlib import pyplot as plt
plt.xkcd()
import pandas as pd
import os
import xanespy as xp
import numpy as np
from skimage import transform
# Set some directories
SSRL_DIR = 'txm-data-ssrl'
# APS_DIR = os.path.join(TEST_DIR, 'txm-data-aps')
# PTYCHO_DIR = os.path.join(TEST_DIR, 'ptycho-data-als/NS_160406074')
"""
Explanation: This notebook is part of the various tests for xanespy. It's intended to allow for visual evaluation of fits, etc.
End of explanation
"""
spectrum = pd.read_csv(os.path.join(SSRL_DIR, 'NCA_xanes.csv'),
index_col=0, sep=' ', names=['Absorbance'])
Es = np.array(spectrum.index)
As = np.array(spectrum.values)[:,0]
edge = xp.edges.NCANickelKEdge()
# Do the guessing
result = xp.xanes_math.guess_kedge(spectrum=As, energies=Es, edge=edge)
predicted_As = xp.xanes_math.predict_edge(Es, *result)
plt.plot(Es, As, marker='o', linestyle=":")
plt.plot(Es, predicted_As)
plt.legend(['Real Data', 'Predicted'], loc="lower right")
print(result)
grad = np.gradient(As)
grad = (grad - np.min(grad)) / (np.max(grad) - np.min(grad))
grad = grad * (np.max(As) - np.min(As)) + np.min(As)
plt.plot(Es, grad)
plt.plot(Es, As)
print("Best Edge: {}".format(Es[np.argmax(As)]))
"""
Explanation: test_math.XanesMathTest.test_guess_kedge_params
The code below will print two lines: one the actual data and one is the predicted k-edge. They should be more-or-less on top of each other.
End of explanation
"""
xp.import_ssrl_frameset(directory=SSRL_DIR, hdf_filename='imported-ssrl-data.h5')
%pdb off
fs = xp.XanesFrameset(filename='imported-ssrl-data.h5',
edge=xp.k_edges['Ni_NCA'],
groupname='ssrl-test-data')
fs.qt_viewer()
"""
Explanation: Importing SSRL Dataset
End of explanation
"""
xanes_spectrum = pd.Series.from_csv('testdata/NCA-cell2-soc1-fov1-xanesspectrum.tsv', sep='\t')
(peak, goodness) = fit_whiteline(xanes_spectrum, width=5)
peak.plot_fit()
fit = peak.fit_list[0]
print("Center:", peak.center())
print("Goodness of fit:", goodness)
residuals = peak.residuals(observations=xanes_spectrum[8347:8358])
xanes_spectrum.plot(ax=plt.gca(), marker='o', linestyle="None")
residuals.plot(ax=plt.gca(), marker='o')
plt.xlim(8340, 8360)
"""
Explanation: Particle Labeling and Segmentation
End of explanation
"""
spectrum = pd.read_csv(os.path.join(SSRL_DIR, 'NCA_xanes.csv'),
index_col=0, sep=' ', names=['Absorbance'])
spectrum.plot(linestyle="None", marker="o")
plt.ylabel("Optical Depth")
plt.xlabel("Energy (eV)")
xp.plots.remove_extra_spines(plt.gca())
plt.legend().remove()
# Guess the starting parameters so fitting is more accurate
p0 = xp.xanes_math.guess_kedge(spectrum.values[:,0],
energies=spectrum.index,
edge=xp.k_edges['Ni_NCA'])
Es = np.linspace(8250, 8650, num=200)
predicted = xp.xanes_math.predict_edge(Es, *p0)
plt.plot(Es, predicted)
# Get the data
As = np.array([spectrum.values])[:,:,0]
Es = np.array([spectrum.index])
# Do the fitting
fit = xp.xanes_math.fit_kedge(spectra=As,
energies=Es,
p0=p0)
# Predict what the edge is from the fit parameters
energies = np.linspace(8250, 8640, num=500)
predicted = xp.xanes_math.predict_edge(energies, *fit[0])
# Plot the results
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 4))
spectrum.plot(ax=ax1, linestyle="None", marker='o')
spectrum.plot(ax=ax2, linestyle="None", marker='o')
ax1.plot(energies, predicted)
ax2.plot(energies, predicted)
# Put a line for the predicted whiteline position
params = xp.xanes_math.KEdgeParams(*fit[0])
ax2.axvline(x=params.E0, linestyle="--", color="green")
ax2.axvline(x=params.E0 + params.gb, linestyle="--", color="green")
ax2.set_xlim(8333, 8360)
# Clean up the plots
ax1.legend().remove()
ax2.legend().remove()
ax1.set_xlabel("Energy (eV)")
ax2.set_xlabel("Energy (eV)")
ax1.set_ylabel("Optical Depth")
ax2.set_ylabel("Optical Depth")
xp.plots.remove_extra_spines(ax1)
xp.plots.remove_extra_spines(ax2)
"""
Explanation: Example Spectrum With K-edge Fitting
End of explanation
"""
|
KasperPRasmussen/bokeh | examples/howto/charts/scatter.ipynb | bsd-3-clause | df2 = df_from_json(data)
df2 = df2.sort('total', ascending=False)
df2 = df2.head(10)
df2 = pd.melt(df2, id_vars=['abbr', 'name'])
scatter5 = Scatter(
df2, x='value', y='name', color='variable', title="x='value', y='name', color='variable'",
xlabel="Medals", ylabel="Top 10 Countries", legend='bottom_right')
show(scatter5)
"""
Explanation: Example with nested json/dict like data, which has been pre-aggregated and pivoted
End of explanation
"""
scatter6 = Scatter(flowers, x=blend('petal_length', 'sepal_length', name='length'),
y=blend('petal_width', 'sepal_width', name='width'), color='species',
title='x=petal_length+sepal_length, y=petal_width+sepal_width, color=species',
legend='top_right')
show(scatter6)
"""
Explanation: Use blend operator to "stack" variables
End of explanation
"""
|
y2ee201/Deep-Learning-Nanodegree | gan_mnist/Intro_to_GANs_Exercises.ipynb | mit | %matplotlib inline
import pickle as pkl
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data')
"""
Explanation: Generative Adversarial Network
In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!
GANs were first reported in 2014 by Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out:
Pix2Pix
CycleGAN
A whole list
The idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes fake data to pass to the discriminator. The discriminator also sees real data and predicts if the data it's received is real or fake. The generator is trained to fool the discriminator: it wants to output data that looks as close as possible to real data. And the discriminator is trained to figure out which data is real and which is fake. What ends up happening is that the generator learns to make data that is indistinguishable from real data to the discriminator.
The general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector the generator uses to construct its fake images. As the generator learns through training, it figures out how to map these random vectors to recognizable images that can fool the discriminator.
The output of the discriminator is a sigmoid function, where 0 indicates a fake image and 1 indicates a real image. If you're interested only in generating new images, you can throw out the discriminator after training. Now, let's see how we build this thing in TensorFlow.
End of explanation
"""
def model_inputs(real_dim, z_dim):
print(z_dim)
inputs_real = tf.placeholder(tf.float32, shape=(None, real_dim), name='inputs_real')
inputs_z = tf.placeholder(tf.float32, shape=(None, z_dim), name='inputs_z')
return inputs_real, inputs_z
"""
Explanation: Model Inputs
First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.
Exercise: Finish the model_inputs function below. Create the placeholders for inputs_real and inputs_z using the input sizes real_dim and z_dim respectively.
End of explanation
"""
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
''' Build the generator network.
Arguments
---------
z : Input tensor for the generator
out_dim : Shape of the generator output
n_units : Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out: tanh output of the generator
'''
with tf.variable_scope('generator', reuse=reuse): # finish this
# Hidden layer
h1 = tf.layers.dense(inputs=z, units=n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(alpha * h1, h1)
# Logits and tanh output
logits = tf.layers.dense(units=out_dim, inputs=h1, activation=None)
out = tf.nn.tanh(logits)
return out
"""
Explanation: Generator network
Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
Variable Scope
Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.
We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.
To use tf.variable_scope, you use a with statement:
python
with tf.variable_scope('scope_name', reuse=False):
# code here
Here's more from the TensorFlow documentation to get another look at using tf.variable_scope.
Leaky ReLU
TensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one. For this you can just take the outputs from a linear fully connected layer and pass them to tf.maximum. Typically, a parameter alpha sets the magnitude of the output for negative values. So, the output for negative input (x) values is alpha*x, and the output for positive x is x:
$$
f(x) = max(\alpha * x, x)
$$
Tanh Output
The generator has been found to perform the best with $tanh$ for the generator output. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1.
Exercise: Implement the generator network in the function below. You'll need to return the tanh output. Make sure to wrap your code in a variable scope, with 'generator' as the scope name, and pass the reuse keyword argument from the function to tf.variable_scope.
End of explanation
"""
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
''' Build the discriminator network.
Arguments
---------
x : Input tensor for the discriminator
n_units: Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out, logits:
'''
with tf.variable_scope('discriminator',reuse=reuse): # finish this
# Hidden layer
h1 = tf.layers.dense(inputs=x, units=n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(alpha*h1, h1)
logits = tf.layers.dense(inputs=h1, units=1, activation=None)
out = tf.nn.sigmoid(logits)
return out, logits
"""
Explanation: Discriminator
The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.
Exercise: Implement the discriminator network in the function below. Same as above, you'll need to return both the logits and the sigmoid output. Make sure to wrap your code in a variable scope, with 'discriminator' as the scope name, and pass the reuse keyword argument from the function arguments to tf.variable_scope.
End of explanation
"""
# Size of input image to discriminator
input_size = 784 # 28x28 MNIST images flattened
# Size of latent vector to generator
z_size = 100
# Sizes of hidden layers in generator and discriminator
g_hidden_size = 128
d_hidden_size = 128
# Leak factor for leaky ReLU
alpha = 0.01
# Label smoothing
smooth = 0.1
"""
Explanation: Hyperparameters
End of explanation
"""
tf.reset_default_graph()
# Create our input placeholders
input_real, input_z = model_inputs(input_size, z_size)
print(input_real.shape)
print(input_z.shape)
# Generator network here
g_model = generator(input_z, input_size, n_units=g_hidden_size, reuse=False, alpha=alpha)
# g_model is the generator output
# Discriminator network here
d_model_real, d_logits_real = discriminator(input_real, n_units=d_hidden_size, reuse=False, alpha=alpha)
d_model_fake, d_logits_fake = discriminator(g_model, n_units=d_hidden_size, reuse=True, alpha=alpha)
"""
Explanation: Build network
Now we're building the network from the functions defined above.
First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.
Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.
Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).
Exercise: Build the network from the functions you defined earlier.
End of explanation
"""
# Calculate losses
d_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_logits_real) * (1 - smooth)))
d_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_logits_fake)))
d_loss = d_loss_real + d_loss_fake
g_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_logits_fake)))
"""
Explanation: Discriminator and Generator Losses
Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will be sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like
python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)
The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
Finally, the generator losses are using d_logits_fake, the fake image logits. But now the labels are all ones. The generator is trying to fool the discriminator, so it wants the discriminator to output ones for fake images.
Exercise: Calculate the losses for the discriminator and the generator. There are two discriminator losses, one for real images and one for fake images. For the real image loss, use the real logits and (smoothed) labels of ones. For the fake image loss, use the fake logits with labels of all zeros. The total discriminator loss is the sum of those two losses. Finally, the generator loss again uses the fake logits from the discriminator, but this time the labels are all ones because the generator wants to fool the discriminator.
End of explanation
"""
# Optimizers
learning_rate = 0.002
# Get the trainable_variables, split into G and D parts
t_vars = tf.trainable_variables()
g_vars = [var for var in t_vars if var.name.startswith('generator')]
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)
"""
Explanation: Optimizers
We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.
For the generator optimizer, we only want the generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance).
We can do something similar with the discriminator. All the variables in the discriminator start with discriminator.
Then, in the optimizer we pass the variable lists to the var_list keyword argument of the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.
Exercise: Below, implement the optimizers for the generator and discriminator. First you'll need to get a list of trainable variables, then split that list into two lists, one for the generator variables and another for the discriminator variables. Finally, using AdamOptimizer, create an optimizer for each network that updates the network variables separately.
End of explanation
"""
batch_size = 100
epochs = 100
samples = []
losses = []
saver = tf.train.Saver(var_list = g_vars)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images, reshape and rescale to pass to D
batch_images = batch[0].reshape((batch_size, 784))
batch_images = batch_images*2 - 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})
_ = sess.run(g_train_opt, feed_dict={input_z: batch_z})
# At the end of each epoch, get the losses and print them out
train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})
train_loss_g = g_loss.eval({input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
# "Discriminator Loss: {:.4f}...".format(train_loss_d),
# "Generator Loss: {:.4f}".format(train_loss_g)
)
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
# Sample from generator as we're training for viewing afterwards
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
samples.append(gen_samples)
saver.save(sess, './checkpoints/generator.ckpt')
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
"""
Explanation: Training
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator')
plt.plot(losses.T[1], label='Generator')
plt.title("Training Losses")
plt.legend()
"""
Explanation: Training loss
Here we'll check out the training losses for the generator and discriminator.
End of explanation
"""
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')
return fig, axes
# Load samples from generator taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
"""
Explanation: Generator samples from training
Here we can view samples of images from the generator. First we'll look at images taken while training.
End of explanation
"""
_ = view_samples(-1, samples)
"""
Explanation: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.
End of explanation
"""
rows, cols = 10, 6
fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)
for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):
for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):
ax.imshow(img.reshape((28,28)), cmap='Greys_r')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
"""
Explanation: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
End of explanation
"""
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
view_samples(0, [gen_samples])
"""
Explanation: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.
Sampling from the generator
We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples!
End of explanation
"""
|
ledrui/Regression | week2/.ipynb_checkpoints/week-2-multiple-regression-assignment-1-blank-checkpoint.ipynb | mit | import graphlab
"""
Explanation: Regression Week 2: Multiple Regression (Interpretation)
The goal of this first notebook is to explore multiple regression and feature engineering with existing graphlab functions.
In this notebook you will use data on house sales in King County to predict prices using multiple regression. You will:
* Use SFrames to do some feature engineering
* Use built-in graphlab functions to compute the regression weights (coefficients/parameters)
* Given the regression weights, predictors and outcome write a function to compute the Residual Sum of Squares
* Look at coefficients and interpret their meanings
* Evaluate multiple models via RSS
Fire up graphlab create
End of explanation
"""
sales = graphlab.SFrame('kc_house_data.gl/')
"""
Explanation: Load in house sales data
Dataset is from house sales in King County, the region where the city of Seattle, WA is located.
End of explanation
"""
train_data,test_data = sales.random_split(.8,seed=0)
"""
Explanation: Split data into training and testing.
We use seed=0 so that everyone running this notebook gets the same results. In practice, you may set a random seed (or let GraphLab Create pick a random seed for you).
End of explanation
"""
example_features = ['sqft_living', 'bedrooms', 'bathrooms']
example_model = graphlab.linear_regression.create(train_data, target = 'price', features = example_features,
validation_set = None)
"""
Explanation: Learning a multiple regression model
Recall we can use the following code to learn a multiple regression model predicting 'price' based on the following features:
example_features = ['sqft_living', 'bedrooms', 'bathrooms'] on training data with the following code:
(Aside: We set validation_set = None to ensure that the results are always the same)
End of explanation
"""
example_weight_summary = example_model.get("coefficients")
print example_weight_summary
"""
Explanation: Now that we have fitted the model we can extract the regression weights (coefficients) as an SFrame as follows:
End of explanation
"""
example_predictions = example_model.predict(train_data)
print example_predictions[0] # should be 271789.505878
"""
Explanation: Making Predictions
In the gradient descent notebook we use numpy to do our regression. In this notebook we will use existing graphlab create functions to analyze multiple regressions.
Recall that once a model is built we can use the .predict() function to find the predicted values for data we pass. For example using the example model above:
End of explanation
"""
def get_residual_sum_of_squares(model, data, outcome):
# First get the predictions
prediction = model.predict(data)
# Then compute the residuals/errors
residual = prediction - outcome
# Then square and add them up
RSS = 0
for error in residual:
RSS = RSS + (error**2)
return(RSS)
"""
Explanation: Compute RSS
Now that we can make predictions given the model, let's write a function to compute the RSS of the model. Complete the function below to calculate RSS given the model, data, and the outcome.
End of explanation
"""
rss_example_train = get_residual_sum_of_squares(example_model, test_data, test_data['price'])
print rss_example_train # should be 2.7376153833e+14
"""
Explanation: Test your function by computing the RSS on TEST data for the example model:
End of explanation
"""
from math import log
"""
Explanation: Create some new features
Although we often think of multiple regression as including multiple different features (e.g. # of bedrooms, squarefeet, and # of bathrooms), we can also consider transformations of existing features, e.g. the log of the squarefeet, or even "interaction" features such as the product of bedrooms and bathrooms.
You will use the logarithm function to create a new feature, so first you should import it from the math library.
End of explanation
"""
train_data['bedrooms_squared'] = train_data['bedrooms'].apply(lambda x: x**2)
test_data['bedrooms_squared'] = test_data['bedrooms'].apply(lambda x: x**2)
# create the remaining 3 features in both TEST and TRAIN data
# bed_bath
train_data['bed_bath_rooms'] = train_data['bedrooms']*train_data['bathrooms']
test_data['bed_bath_rooms'] = test_data['bedrooms']*test_data['bathrooms']
# log_sqft_living
train_data['log_sqft_living'] = train_data['sqft_living'].apply(lambda x: log(x))
test_data['log_sqft_living'] = test_data['sqft_living'].apply(lambda x: log(x))
#lat_plus_long
train_data['lat_plus_long'] = train_data['lat']+train_data['long']
test_data['lat_plus_long'] = test_data['lat']+test_data['long']
"""
Explanation: Next create the following 4 new features as columns in both TEST and TRAIN data:
* bedrooms_squared = bedrooms*bedrooms
* bed_bath_rooms = bedrooms*bathrooms
* log_sqft_living = log(sqft_living)
* lat_plus_long = lat + long
As an example here's the first one:
End of explanation
"""
print test_data['bedrooms_squared'].mean()
print test_data['bed_bath_rooms'].mean()
print test_data['log_sqft_living'].mean()
print test_data['lat_plus_long'].mean()
"""
Explanation: Squaring bedrooms will increase the separation between not many bedrooms (e.g. 1) and lots of bedrooms (e.g. 4) since 1^2 = 1 but 4^2 = 16. Consequently this feature will mostly affect houses with many bedrooms.
bedrooms times bathrooms gives what's called an "interaction" feature. It is large when both of them are large.
Taking the log of squarefeet has the effect of bringing large values closer together and spreading out small values.
Adding latitude to longitude is totally nonsensical but we will do it anyway (you'll see why)
Quiz Question: What is the mean (arithmetic average) value of your 4 new features on TEST data? (round to 2 digits)
End of explanation
"""
model_1_features = ['sqft_living', 'bedrooms', 'bathrooms', 'lat', 'long']
model_2_features = model_1_features + ['bed_bath_rooms']
model_3_features = model_2_features + ['bedrooms_squared', 'log_sqft_living', 'lat_plus_long']
"""
Explanation: Learning Multiple Models
Now we will learn the weights for three (nested) models for predicting house prices. The first model will have the fewest features the second model will add one more feature and the third will add a few more:
* Model 1: squarefeet, # bedrooms, # bathrooms, latitude & longitude
* Model 2: add bedrooms*bathrooms
* Model 3: Add log squarefeet, bedrooms squared, and the (nonsensical) latitude + longitude
End of explanation
"""
# Learn the three models: (don't forget to set validation_set = None)
model_1 = graphlab.linear_regression.create(train_data, target = 'price', features = model_1_features,
validation_set = None)
##
model_2 = graphlab.linear_regression.create(train_data, target = 'price', features = model_2_features,
validation_set = None)
##
model_3 = graphlab.linear_regression.create(train_data, target = 'price', features = model_3_features,
validation_set = None)
# Examine/extract each model's coefficients:
model_1_coefficients = model_1.get("coefficients")
print" Model_1", model_1_coefficients
##
model_2_coefficients = model_2.get("coefficients")
print " Model_2", model_2_coefficients
##
model_3_coefficients = model_3.get("coefficients")
print " Model_3", model_3_coefficients
"""
Explanation: Now that you have the features, learn the weights for the three different models for predicting target = 'price' using graphlab.linear_regression.create() and look at the value of the weights/coefficients:
End of explanation
"""
# Compute the RSS on TRAINING data for each of the three models and record the values:
## Model_1
model_1_rss = get_residual_sum_of_squares(model_1, train_data, train_data['price'])
print " model_1 rss",model_1_rss
## Model_2
model_2_rss = get_residual_sum_of_squares(model_2, train_data, train_data['price'])
print " model_2 rss",model_2_rss
##
model_3_rss = get_residual_sum_of_squares(model_3, train_data, train_data['price'])
print " model_3 rss",model_3_rss
"""
Explanation: Quiz Question: What is the sign (positive or negative) for the coefficient/weight for 'bathrooms' in model 1?
Quiz Question: What is the sign (positive or negative) for the coefficient/weight for 'bathrooms' in model 2?
Think about what this means.
Comparing multiple models
Now that you've learned three models and extracted the model weights, we want to evaluate which model is best.
First use your functions from earlier to compute the RSS on TRAINING Data for each of the three models.
End of explanation
"""
# Compute the RSS on TESTING data for each of the three models and record the values:
## Model_1
model_1_rss = get_residual_sum_of_squares(model_1, test_data, test_data['price'])
print " model_1 rss",model_1_rss
## Model_2
model_2_rss = get_residual_sum_of_squares(model_2, test_data, test_data['price'])
print " model_2 rss",model_2_rss
##
model_3_rss = get_residual_sum_of_squares(model_3, test_data, test_data['price'])
print " model_3 rss",model_3_rss
"""
Explanation: Quiz Question: Which model (1, 2 or 3) has lowest RSS on TRAINING Data? Is this what you expected?
Now compute the RSS on on TEST data for each of the three models.
End of explanation
"""
|
UWSEDS/LectureNotes | Autumn2017/02-Python-and-Data/Lecture-Python-And-Data-Completed.ipynb | bsd-2-clause | !ls
"""
Explanation: Software Engineering for Data Scientists
Manipulating Data with Python
CSE 583
Today's Objectives
1. Opening & Navigating the Jupyter Notebook
2. Simple Math in the Jupyter Notebook
3. Loading data with pandas
4. Cleaning and Manipulating data with pandas
5. Visualizing data with pandas & matplotlib
1. Opening and Navigating the IPython Notebook
We will start today with the interactive environment that we will be using often through the course: the Jupyter Notebook.
We will walk through the following steps together:
Download miniconda (be sure to get Version 3.6) and install it on your system (hopefully you have done this before coming to class)
Use the conda command-line tool to update your package listing and install the IPython notebook:
Update conda's listing of packages for your system:
$ conda update conda
Install IPython notebook and all its requirements
$ conda install jupyter notebook
Navigate to the directory containing the course material. For example:
$ cd ~/courses/CSE583/
You should see a number of files in the directory, including these:
$ ls
...
Breakout-Simple-Math.ipynb
Lecture-Python-And-Data.ipynb
...
Type jupyter notebook in the terminal to start the notebook
$ jupyter notebook
If everything has worked correctly, it should automatically launch your default browser
Click on Lecture-Python-And-Data.ipynb to open the notebook containing the content for this lecture.
With that, you're set up to use the Jupyter notebook!
2. Simple Math in the Jupyter Notebook
Now that we have the Jupyter notebook up and running, we're going to do a short breakout exploring some of the mathematical functionality that Python offers.
Please open Breakout-Simple-Math.ipynb, find a partner, and make your way through that notebook, typing and executing code along the way.
3. Loading data with pandas
With this simple Python computation experience under our belt, we can now move to doing some more interesting analysis.
Python's Data Science Ecosystem
In addition to Python's built-in modules like the math module we explored above, there are also many often-used third-party modules that are core tools for doing data science with Python.
Some of the most important ones are:
numpy: Numerical Python
Numpy is short for "Numerical Python", and contains tools for efficient manipulation of arrays of data.
If you have used other computational tools like IDL or MatLab, Numpy should feel very familiar.
scipy: Scientific Python
Scipy is short for "Scientific Python", and contains a wide range of functionality for accomplishing common scientific tasks, such as optimization/minimization, numerical integration, interpolation, and much more.
We will not look closely at Scipy today, but we will use its functionality later in the course.
pandas: Labeled Data Manipulation in Python
Pandas is short for "Panel Data", and contains tools for doing more advanced manipulation of labeled data in Python, in particular with a columnar data structure called a Data Frame.
If you've used the R statistical language (and in particular the so-called "Hadley Stack"), much of the functionality in Pandas should feel very familiar.
matplotlib: Visualization in Python
Matplotlib started out as a Matlab plotting clone in Python, and has grown from there in the 15 years since its creation. It is the most popular data visualization tool currently in the Python data world (though other recent packages are starting to encroach on its monopoly).
Installing Pandas & friends
Because the above packages are not included in Python itself, you need to install them separately. While it is possible to install these from source (compiling the C and/or Fortran code that does the heavy lifting under the hood) it is much easier to use a package manager like conda. All it takes is to run
$ conda install numpy scipy pandas matplotlib
and (so long as your conda setup is working) the packages will be downloaded and installed on your system.
Downloading the data
shell commands can be run from the notebook by preceding them with an exclamation point:
End of explanation
"""
# !curl -o pronto.csv https://data.seattle.gov/api/views/tw7j-dfaw/rows.csv?accessType=DOWNLOAD
"""
Explanation: uncomment this to download the data:
End of explanation
"""
import pandas
"""
Explanation: Loading Data with Pandas
End of explanation
"""
import pandas as pd
"""
Explanation: Because we'll use it so much, we often import under a shortened name using the import ... as ... pattern:
End of explanation
"""
data = pd.read_csv('pronto.csv')
"""
Explanation: Now we can use the read_csv command to read the comma-separated-value data:
End of explanation
"""
data.head()
data.tail()
"""
Explanation: Note: strings in Python can be defined either with double quotes or single quotes
Viewing Pandas Dataframes
The head() and tail() methods show us the first and last rows of the data
End of explanation
"""
data.shape
"""
Explanation: The shape attribute shows us the number of rows and columns:
End of explanation
"""
data.columns
"""
Explanation: The columns attribute gives us the column names
End of explanation
"""
data.index
"""
Explanation: The index attribute gives us the index names
End of explanation
"""
data.dtypes
"""
Explanation: The dtypes attribute gives the data types of each column:
End of explanation
"""
data.columns
data['tripduration']
"""
Explanation: 4. Manipulating data with pandas
Here we'll cover some key features of manipulating data with pandas
Access columns by name using square-bracket indexing:
End of explanation
"""
data['tripduration'] / 60
"""
Explanation: Mathematical operations on columns happen element-wise:
End of explanation
"""
data['tripminutes'] = data['tripduration'] / 60
data.head()
"""
Explanation: Columns can be created (or overwritten) with the assignment operator.
Let's create a tripminutes column with the number of minutes for each trip
End of explanation
"""
data.to_csv('pronto-new.csv')
!ls
"""
Explanation: Note that this manipulation only modifies the data frame in memory; if you want to save the modified dataframe to CSV, you can use the to_csv() method:
End of explanation
"""
import numpy as np
np.exp(data['tripminutes'])
"""
Explanation: More complicated mathematical operations can be done with tools in the numpy package:
End of explanation
"""
data['starttime'].head()
pd.to_datetime(data['starttime'].head())
times = pd.DatetimeIndex(data['starttime'].head())
times.dayofweek
data['starttime']
times = pd.DatetimeIndex(pd.to_datetime(data['starttime'], format="%m/%d/%Y %I:%M:%S %p"))
"""
Explanation: Working with Times
One trick to know when working with columns of times is that the Pandas DatetimeIndex provides a nice interface for working with them.
For a dataset of this size, using pd.to_datetime and specifying the date format can make things much faster (from the strftime reference, we see that the pronto data has format "%m/%d/%Y %I:%M:%S %p").
End of explanation
"""
times.dayofweek
times.month
times
"""
Explanation: (Note: you can also use infer_datetime_format=True in most cases to automatically infer the correct format, though due to a bug it doesn't work when AM/PM are present)
With it, we can extract, the hour of the day, the day of the week, the month, and a wide range of other views of the time:
End of explanation
"""
data.head()
"""
Explanation: Simple Grouping of Data
The real power of Pandas comes in its tools for grouping and aggregating data. Here we'll look at value counts and the basics of group-by operations.
End of explanation
"""
pd.value_counts(data['gender'])
"""
Explanation: Value Counts
Pandas includes an array of useful functionality for manipulating and analyzing tabular data.
We'll take a look at two of these here.
The pandas.value_counts function returns the counts of each unique value within a column.
We can use it, for example, to break down rides by gender:
End of explanation
"""
pd.value_counts(data['birthyear'])
"""
Explanation: Or to break down rides by age:
End of explanation
"""
pd.value_counts(data['birthyear'], sort=False)
"""
Explanation: By default, the values rather than the index are sorted. Use sort=False to turn this behavior off:
End of explanation
"""
pd.value_counts(times.dayofweek, sort=False)
pd.value_counts(times.month, sort=False)
pd.value_counts(data['gender'], dropna=False)
"""
Explanation: We can explore other things as well: day of week, hour of day, etc.
End of explanation
"""
from IPython.display import Image
Image('split_apply_combine.png')
"""
Explanation: Group-by Operation
One of the killer features of the Pandas dataframe is the ability to do group-by operations.
You can visualize the group-by like this (image borrowed from the Python Data Science Handbook)
End of explanation
"""
data.groupby('gender').count()
data.groupby('gender').mean()
"""
Explanation: The simplest version of a groupby looks like this, and you can use almost any aggregation function you wish (mean, median, sum, minimum, maximum, standard deviation, count, etc.)
<data object>.groupby(<grouping values>).<aggregate>()
for example, we can group by gender and find the average of all numerical columns:
End of explanation
"""
data.groupby('gender')['tripminutes'].mean()
"""
Explanation: It's also possible to index the grouped object as if it were a dataframe:
End of explanation
"""
data.groupby([times.hour, 'gender'])['tripminutes'].mean()
"""
Explanation: You can even group by multiple values: for example we can look at the trip duration by time of day and by gender:
End of explanation
"""
grouped = data.groupby([times.hour, 'gender'])['tripminutes'].mean().unstack()
grouped
"""
Explanation: The unstack() operation can help make sense of this type of multiply-grouped data. What this technically does is split a multiple-valued index into an index plus columns:
End of explanation
"""
%matplotlib inline
"""
Explanation: 5. Visualizing data with pandas
Of course, looking at tables of data is not very intuitive.
Fortunately Pandas has many useful plotting functions built-in, all of which make use of the matplotlib library to generate plots.
Whenever you do plotting in the IPython notebook, you will want to first run this magic command which configures the notebook to work well with plots:
End of explanation
"""
grouped.plot()
"""
Explanation: Now we can simply call the plot() method of any series or dataframe to get a reasonable view of the data:
End of explanation
"""
import matplotlib.pyplot as plt
plt.style.use('seaborn')
grouped.plot()
"""
Explanation: Adjusting the Plot Style
Matplotlib has a number of plot styles you can use. For example, if you like R you might use the 'ggplot' style.
I like the 'seaborn' style:
End of explanation
"""
grouped.plot.bar()
"""
Explanation: Other plot types
Pandas supports a range of other plotting types; you can find these by using the <TAB> autocomplete on the plot method:
For example, we can create a histogram of trip durations:
End of explanation
"""
ax = grouped.plot.bar()
ax.set_xlim(-1, 10)
"""
Explanation: If you'd like to adjust the x and y limits of the plot, you can use the set_xlim() and set_ylim() method of the resulting object:
End of explanation
"""
data['month'] = times.month
ax = data.groupby('month')['trip_id'].count().plot.bar();
"""
Explanation: Breakout: Exploring the Data
Make a plot of the total number of rides as a function of month of the year (You'll need to extract the month, use a groupby, and find the appropriate aggregation to count the number in each group).
End of explanation
"""
data.groupby(['month','gender'])['trip_id'].count().unstack().plot();
"""
Explanation: Split this plot by gender. Do you see any seasonal ridership patterns by gender?
End of explanation
"""
data.groupby(['month','usertype'])['trip_id'].count().unstack().plot();
"""
Explanation: Split this plot by user type. Do you see any seasonal ridership patterns by usertype?
End of explanation
"""
data['hour'] = times.hour
ax = data.groupby('hour')['trip_id'].count().plot.bar();
data.groupby(['hour','gender'])['trip_id'].count().unstack().plot();
data.groupby(['hour','usertype'])['trip_id'].count().unstack().plot();
"""
Explanation: Repeat the above three steps, counting the number of rides by time of day rather than by month.
End of explanation
"""
|
LSSTC-DSFP/LSSTC-DSFP-Sessions | Sessions/Session01/Day4/IntroToMachLearnSolutions.ipynb | mit | import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: Introduction to Machine Learning:
Examples of Unsupervised and Supervised Machine-Learning Algorithms
Version 0.1
Broadly speaking, machine-learning methods constitute a diverse collection of data-driven algorithms designed to classify/characterize/analyze sources in multi-dimensional spaces. The topics and studies that fall under the umbrella of machine learning are growing, and there is no good catch-all definition. The number (and variation) of algorithms is vast, and beyond the scope of these exercises. While we will discuss a few specific algorithms today, more importantly, we will explore the scope of the two general methods: unsupervised learning and supervised learning, and introduce the powerful (and dangerous?) Python package scikit-learn.
By AA Miller (Jet Propulsion Laboratory, California Institute of Technology.)
(c) 2016 California Institute of Technology. Government sponsorship acknowledged.
End of explanation
"""
# execute dummy code here
from sklearn import datasets
from sklearn.ensemble import RandomForestClassifier
iris = datasets.load_iris()
RFclf = RandomForestClassifier().fit(iris.data, iris.target)
"""
Explanation: Problem 1) Introduction to scikit-learn
At the most basic level, scikit-learn makes machine learning extremely easy within Python. By way of example, here is a short piece of code that builds a complex, non-linear model to classify sources in the Iris data set that we learned about yesterday:
from sklearn import datasets
from sklearn.ensemble import RandomForestClassifier
iris = datasets.load_iris()
RFclf = RandomForestClassifier().fit(iris.data, iris.target)
Those 4 lines of code have constructed a model that is superior to any system of hard cuts that we could have encoded while looking at the multidimensional space. This can be fast as well: execute the dummy code in the cell below to see how "easy" machine-learning is with scikit-learn.
End of explanation
"""
type(iris)
"""
Explanation: Generally speaking, the procedure for scikit-learn is uniform across all machine-learning algorithms. Models are accessed via the various modules (ensemble, SVM, neighbors, etc), with user-defined tuning parameters. The features (or data) for the models are stored in a 2D array, X, with rows representing individual sources and columns representing the corresponding feature values. [In a minority of cases, X, represents a similarity or distance matrix where each entry represents the distance to every other source in the data set.] In cases where there is a known classification or scalar value (typically supervised methods), this information is stored in a 1D array y.
Unsupervised models are fit by calling .fit(X) and supervised models are fit by calling .fit(X, y). In both cases, predictions for new observations, Xnew, can be obtained by calling .predict(Xnew). Those are the basics and beyond that, the details are algorithm specific, but the documentation for essentially everything within scikit-learn is excellent, so read the docs.
To further develop our intuition, we will now explore the Iris dataset a little further.
Problem 1a What is the pythonic type of iris?
End of explanation
"""
iris.keys()
"""
Explanation: You likely haven't encountered a scikit-learn Bunch before. Its functionality is essentially the same as a dictionary's.
Problem 1b What are the keys of iris?
End of explanation
"""
print(np.shape(iris.data))
print(iris.data)
"""
Explanation: Most importantly, iris contains data and target values. These are all you need for scikit-learn, though the feature and target names and description are useful.
Problem 1c What is the shape and content of the iris data?
End of explanation
"""
print(np.shape(iris.target))
print(iris.target)
"""
Explanation: Problem 1d What is the shape and content of the iris target?
End of explanation
"""
print(iris.feature_names) # shows that sepal length is first feature and sepal width is second feature
plt.scatter(iris.data[:,0], iris.data[:,1], c = iris.target, s = 30, edgecolor = "None", cmap = "viridis")
plt.xlabel('sepal length')
plt.ylabel('sepal width')
"""
Explanation: Finally, as a baseline for the exercises that follow, we will now make a simple 2D plot showing the separation of the 3 classes in the iris dataset. This plot will serve as the reference for examining the quality of the clustering algorithms.
Problem 1e Make a scatter plot showing sepal length vs. sepal width for the iris data set. Color the points according to their respective classes.
End of explanation
"""
from sklearn.cluster import KMeans
Kcluster = KMeans(n_clusters = 2)
Kcluster.fit(iris.data)
plt.figure()
plt.scatter(iris.data[:,0], iris.data[:,1], c = Kcluster.labels_, s = 30, edgecolor = "None", cmap = "viridis")
plt.xlabel('sepal length')
plt.ylabel('sepal width')
Kcluster = KMeans(n_clusters = 3)
Kcluster.fit(iris.data)
plt.figure()
plt.scatter(iris.data[:,0], iris.data[:,1], c = Kcluster.labels_, s = 30, edgecolor = "None", cmap = "viridis")
plt.xlabel('sepal length')
plt.ylabel('sepal width')
"""
Explanation: Problem 2) Unsupervised Machine Learning
Unsupervised machine learning, sometimes referred to as clustering or data mining, aims to group or classify sources in the multidimensional feature space. The "unsupervised" comes from the fact that there are no target labels provided to the algorithm, so the machine is asked to cluster the data "on its own." The lack of labels means there is no (simple) method for validating the accuracy of the solution provided by the machine (though sometimes simple examination can show the results are terrible).
For this reason [note - this is my (AAM) opinion and there may be many others who disagree], unsupervised methods are not particularly useful for astronomy. Supposing one did find some useful clustering structure, an adversarial researcher could always claim that the current feature space does not accurately capture the physics of the system and as such the clustering result is not interesting or, worse, erroneous. The one potentially powerful exception to this broad statement is outlier detection, which can be a branch of both unsupervised and supervised learning. Finding weirdo objects is an astronomical pastime, and there are unsupervised methods that may help in that regard in the LSST era.
To begin today we will examine one of the most famous, and simple, clustering algorithms: $k$-means. $k$-means clustering looks to identify $k$ convex clusters, where $k$ is a user-defined number. And herein lies the rub: if we truly knew the number of clusters in advance, we likely wouldn't need to perform any clustering in the first place. This is the major downside to $k$-means. Operationally, pseudocode for the algorithm can be summarized as the following:
initiate search by identifying k points (i.e. the cluster centers)
loop
assign each point in the data set to the closest cluster center
calculate new cluster centers based on mean position of all points within each cluster
if diff(new center - old center) < threshold:
stop (i.e. clusters are defined)
The threshold is defined by the user, though in some cases a maximum number of iterations is also imposed. An advantage of $k$-means is that the solution will always converge, though the solution may only be a local minimum. Disadvantages include the assumption of convexity, i.e., it is difficult to capture complex geometry, and the curse of dimensionality (though you can combat that with the dimensionality reduction techniques from yesterday).
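The pseudocode above can be sketched from scratch in a few lines of numpy. This is an illustration only, not the scikit-learn implementation (which defaults to the smarter k-means++ initialization and handles the edge cases):

```python
import numpy as np

def kmeans(X, k, n_iter=100, tol=1e-6, seed=0):
    """Minimal k-means sketch following the pseudocode above."""
    rng = np.random.RandomState(seed)
    # initiate search by using k random data points as the cluster centers
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        # assign each point in the data set to the closest cluster center
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # calculate new centers from the mean position of each cluster
        # (a production implementation would also guard against empty clusters)
        new_centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        # stop once diff(new center - old center) < threshold
        if np.linalg.norm(new_centers - centers) < tol:
            break
        centers = new_centers
    return labels, centers
```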
In scikit-learn the KMeans algorithm is implemented as part of the sklearn.cluster module.
Problem 2a Fit two different $k$-means models to the iris data, one with 2 clusters and one with 3 clusters. Plot the resulting clusters in the sepal length-sepal width plane (same plot as above). How do the results compare to the true classifications?
End of explanation
"""
rs = 14
Kcluster1 = KMeans(n_clusters = 3, n_init = 1, init = 'random', random_state = rs)
Kcluster1.fit(iris.data)
plt.figure()
plt.scatter(iris.data[:,0], iris.data[:,1], c = Kcluster1.labels_, s = 30, edgecolor = "None", cmap = "viridis")
plt.xlabel('sepal length')
plt.ylabel('sepal width')
"""
Explanation: With 3 clusters the algorithm does a good job of separating the three classes. However, without the a priori knowledge that there are 3 different types of iris, the 2 cluster solution would appear to be superior.
Problem 2b How do the results change if the 3 cluster model is called with n_init = 1 and init = 'random' options? Use rs for the random state [this allows me to cheat in service of making a point].
Note - the defaults for these two parameters are 10 and k-means++, respectively. Read the docs to see why these choices are likely better than those in 2b.
End of explanation
"""
print("feature\t\t\tmean\tstd\tmin\tmax")
for featnum, feat in enumerate(iris.feature_names):
print("{:s}\t{:.2f}\t{:.2f}\t{:.2f}\t{:.2f}".format(feat, np.mean(iris.data[:,featnum]),
np.std(iris.data[:,featnum]), np.min(iris.data[:,featnum]),
np.max(iris.data[:,featnum])))
"""
Explanation: A random aside that is not particularly relevant here
$k$-means evaluates the Euclidean distance between individual sources and cluster centers, thus, the magnitude of the individual features has a strong effect on the final clustering outcome.
Problem 2c Calculate the mean, standard deviation, min, and max of each feature in the iris data set. Based on these summaries, which feature is most important for clustering?
End of explanation
"""
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler().fit(iris.data)
Kcluster = KMeans(n_clusters = 3)
Kcluster.fit(scaler.transform(iris.data))
plt.figure()
plt.scatter(iris.data[:,0], iris.data[:,1], c = Kcluster.labels_, s = 30, edgecolor = "None", cmap = "viridis")
plt.xlabel('sepal length')
plt.ylabel('sepal width')
"""
Explanation: Petal length has the largest range and standard deviation, thus, it will have the most "weight" when determining the $k$ clusters.
The truth is that the iris data set is fairly small and straightforward. Nevertheless, we will now examine the clustering results after re-scaling the features. [Some algorithms, cough Support Vector Machines cough, are notoriously sensitive to the feature scaling, so it is important to know about this step.] Imagine you are classifying stellar light curves: the data set will include contact binaries with periods of $\sim 0.1 \; \mathrm{d}$ and Mira variables with periods of $\gg 100 \; \mathrm{d}$. Without re-scaling, a feature that covers 4 orders of magnitude may dominate all others in the final model projections.
The two most common forms of re-scaling are to rescale to a Gaussian with mean $= 0$ and variance $= 1$, or to rescale the min and max of the feature to $[0, 1]$. The best normalization is problem-dependent. The sklearn.preprocessing module makes it easy to re-scale the feature set. It is essential that the same scaling used for the training set be used for all other data run through the model. The testing, validation, and field observations cannot be re-scaled independently. This would result in meaningless final classifications/predictions.
Problem 2d Re-scale the features to normal distributions, and perform $k$-means clustering on the iris data. How do the results compare to those obtained earlier?
Hint - you may find 'StandardScaler()' within the sklearn.preprocessing module useful.
End of explanation
"""
from sklearn.cluster import DBSCAN
dbs = DBSCAN(eps = 0.7, min_samples = 7)
dbs.fit(scaler.transform(iris.data)) # best to use re-scaled data since eps is in absolute units
dbs_outliers = dbs.labels_ == -1
plt.figure()
plt.scatter(iris.data[:,0], iris.data[:,1], c = dbs.labels_, s = 30, edgecolor = "None", cmap = "viridis")
plt.scatter(iris.data[:,0][dbs_outliers], iris.data[:,1][dbs_outliers], s = 30, c = 'k')
plt.xlabel('sepal length')
plt.ylabel('sepal width')
"""
Explanation: These results are almost identical to those obtained without scaling. This is due to the simplicity of the iris data set.
How do I test the accuracy of my clusters?
Essentially - you don't. There are some methods available, but they mostly compare clusters to labeled samples, and if the samples are labeled it is likely that supervised learning is more useful anyway. If you are curious, scikit-learn does provide some built-in functions for analyzing clustering, but again, it is difficult to evaluate the validity of any newly discovered clusters.
What if I don't know how many clusters are present in the data?
An excellent question, as you will almost never know this a priori. Many algorithms, like $k$-means, do require the number of clusters to be specified, but some other methods do not. One example is DBSCAN. In brief, DBSCAN requires two parameters: minPts, the minimum number of points necessary for a cluster, and $\epsilon$, a distance measure. Clusters are grown by identifying core points, objects that have at least minPts points located within a distance $\epsilon$. Reachable points are those within a distance $\epsilon$ of at least one core point but fewer than minPts core points; effectively, these points define the outskirts of the clusters. Finally, there are also outliers, which are points that are $> \epsilon$ away from any core points. Thus, DBSCAN naturally identifies clusters, does not assume clusters are convex, and even provides a notion of outliers. The downsides to the algorithm are that the results are highly dependent on the two tuning parameters, and that clusters of very different densities can be difficult to recover (because a single $\epsilon$ and minPts are specified for all clusters).
In scikit-learn the DBSCAN algorithm is part of the sklearn.cluster module. $\epsilon$ and minPts are set by eps and min_samples, respectively.
Problem 2e Cluster the iris data using DBSCAN. Play around with the tuning parameters to see how they affect the final clustering results. How does the use of DBSCAN compare to $k$-means? Can you obtain 3 clusters with DBSCAN? If not, given the knowledge that the iris dataset has 3 classes - does this invalidate DBSCAN as a viable algorithm?
Note - DBSCAN labels outliers as $-1$, and thus, plt.scatter(), will plot all these points as the same color.
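A common heuristic for picking $\epsilon$ (standard practice, though not part of this notebook) is to sort each point's distance to its min_samples-th nearest neighbor and choose $\epsilon$ near the "knee" of that curve. A brute-force numpy sketch:

```python
import numpy as np

def kth_neighbor_distances(X, k):
    """Distance from each point to its k-th nearest neighbor."""
    # brute-force pairwise distances -- fine for a few thousand points
    diffs = X[:, None, :] - X[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=2))
    # column 0 of the sorted rows is the zero self-distance,
    # so column k is the k-th nearest *other* point
    return np.sort(dists, axis=1)[:, k]

# sorted k-distance curve: a sensible eps sits near the bend of this curve
X = np.random.RandomState(0).randn(200, 4)
k_dist = np.sort(kth_neighbor_distances(X, k=7))
```

Plotting k_dist against its index and reading off the bend gives a data-driven starting point for eps, rather than guessing blindly.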
End of explanation
"""
from astroquery.sdss import SDSS # enables direct queries to the SDSS database
GALquery = """SELECT TOP 10000
p.dered_u - p.dered_g as ug, p.dered_g - p.dered_r as gr,
p.dered_g - p.dered_i as gi, p.dered_g - p.dered_z as gz,
p.petroRad_i, p.petroR50_i, p.deVAB_i
FROM PhotoObjAll AS p JOIN specObjAll s ON s.bestobjid = p.objid
WHERE p.mode = 1 AND s.sciencePrimary = 1 AND p.clean = 1 AND p.type = 3
"""
SDSSgals = SDSS.query_sql(GALquery)
SDSSgals
"""
Explanation: I was unable to obtain 3 clusters with DBSCAN. While these results are, on the surface, worse than what we got with $k$-means, my suspicion is that the 4 features do not adequately separate the 3 classes. [See - a naysayer can always make that argument.] This is not a problem for DBSCAN as an algorithm, but rather, evidence that no single algorithm works well in all cases.
Challenge Problem) Cluster SDSS Galaxy Data
The following query will select 10k likely galaxies from the SDSS database and return the results of that query into an astropy.Table object. (For now, if you are not familiar with the SDSS DB schema, don't worry about this query, just know that it returns a bunch of photometric features.)
End of explanation
"""
Xgal = np.array(SDSSgals.to_pandas())
galScaler = StandardScaler().fit(Xgal)
dbs = DBSCAN(eps = 0.5, min_samples=100)
dbs.fit(galScaler.transform(Xgal))
cluster_members = dbs.labels_ != -1
outliers = dbs.labels_ == -1
plt.figure(figsize = (10,8))
plt.scatter(Xgal[:,0][outliers], Xgal[:,3][outliers],
c = "k",
s = 4, alpha = 0.1)
plt.scatter(Xgal[:,0][cluster_members], Xgal[:,3][cluster_members],
c = dbs.labels_[cluster_members],
alpha = 0.4, edgecolor = "None", cmap = "viridis")
plt.xlim(-1,5)
plt.ylim(-0,3.5)
"""
Explanation: I have used my own domain knowledge to specifically choose features that may be useful when clustering galaxies. If you know a bit about SDSS and can think of other features that may be useful, feel free to add them to the query.
One nice feature of astropy tables is that they can readily be turned into pandas DataFrames, which can in turn easily be turned into a sklearn X array with NumPy. For example:
X = np.array(SDSSgals.to_pandas())
And you are ready to go.
Challenge Problem Using the SDSS dataset above, identify interesting clusters within the data [this is intentionally very open ended, if you uncover anything especially exciting you'll have a chance to share it with the group]. Feel free to use the algorithms discussed above, or any other packages available via sklearn. Can you make sense of the clusters in the context of galaxy evolution?
Hint - don't fret if you know nothing about galaxy evolution (neither do I!). Just take a critical look at the clusters that are identified
End of explanation
"""
from sklearn.neighbors import KNeighborsClassifier
KNNclf = KNeighborsClassifier(n_neighbors = 3).fit(iris.data, iris.target)
preds = KNNclf.predict(iris.data)
plt.figure()
plt.scatter(iris.data[:,0], iris.data[:,1],
c = preds, cmap = "viridis", s = 30, edgecolor = "None")
KNNclf = KNeighborsClassifier(n_neighbors = 10).fit(iris.data, iris.target)
preds = KNNclf.predict(iris.data)
plt.figure()
plt.scatter(iris.data[:,0], iris.data[:,1],
c = preds, cmap = "viridis", s = 30, edgecolor = "None")
"""
Explanation: Note - the above solution seems to separate out elliptical galaxies from blue star forming galaxies, however, the results are highly, highly dependent upon the tuning parameters.
Problem 3) Supervised Machine Learning
Supervised machine learning, on the other hand, aims to predict a target class or produce a regression result based on the location of labelled sources (i.e. the training set) in the multidimensional feature space. The "supervised" comes from the fact that we are specifying the allowed outputs from the model. As there are labels available for the training set, it is possible to estimate the accuracy of the model (though there are generally important caveats about generalization, which we will explore in further detail later).
The details and machinations of supervised learning will be explored further during the following break-out session. Here, we will simply introduce some of the basics as a point of comparison to unsupervised machine learning.
We will begin with a simple, but nevertheless, elegant algorithm for classification and regression: $k$-nearest-neighbors ($k$NN). In brief, the classification or regression output is determined by examining the $k$ nearest neighbors in the training set, where $k$ is a user defined number. Typically, though not always, distances between sources are Euclidean, and the final classification is assigned to whichever class has a plurality within the $k$ nearest neighbors (in the case of regression, the average of the $k$ neighbors is the output from the model). We will experiment with the steps necessary to optimize $k$, and other tuning parameters, in the detailed break-out problem.
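The rule just described is simple enough to sketch from scratch in numpy. This is an illustration only; scikit-learn's version adds efficient tree-based neighbor searches and many more options:

```python
import numpy as np

def knn_predict(X_train, y_train, X_new, k=3):
    """Classify each row of X_new by a plurality vote among its
    k nearest training points (Euclidean distance)."""
    preds = []
    for x in np.atleast_2d(X_new):
        dists = np.linalg.norm(X_train - x, axis=1)
        nearest_labels = y_train[np.argsort(dists)[:k]]
        # plurality vote: the most common label among the k neighbors
        labels, counts = np.unique(nearest_labels, return_counts=True)
        preds.append(labels[counts.argmax()])
    return np.array(preds)
```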
In scikit-learn the KNeighborsClassifier algorithm is implemented as part of the sklearn.neighbors module.
Problem 3a Fit two different $k$NN models to the iris data, one with 3 neighbors and one with 10 neighbors. Plot the resulting class predictions in the sepal length-sepal width plane (same plot as above). How do the results compare to the true classifications? Is there any reason to be suspect of this procedure?
Hint - after you have constructed the model, it is possible to obtain model predictions using the .predict() method, which requires a feature array, same features and order as the training set, as input.
Hint that isn't essential, but is worth thinking about - should the features be re-scaled in any way?
End of explanation
"""
from sklearn.model_selection import cross_val_predict  # sklearn.cross_validation in older scikit-learn versions
CVpreds = cross_val_predict(KNeighborsClassifier(n_neighbors=5), iris.data, iris.target)
plt.figure()
plt.scatter(iris.data[:,0], iris.data[:,1],
            c = CVpreds, cmap = "viridis", s = 30, edgecolor = "None")
print("The accuracy of the kNN = 5 model is ~{:.4}".format( sum(CVpreds == iris.target)/len(CVpreds) ))
CVpreds50 = cross_val_predict(KNeighborsClassifier(n_neighbors=50), iris.data, iris.target)
print("The accuracy of the kNN = 50 model is ~{:.4}".format( sum(CVpreds50 == iris.target)/len(CVpreds50) ))
"""
Explanation: These results are almost identical to the training classifications. However, we have cheated! In this case we are evaluating the accuracy of the model (98% in this case) using the same data that defines the model. Thus, what we have really evaluated here is the training error. The relevant parameter, however, is the generalization error: how accurate are the model predictions on new data?
Without going into too much detail, we will test this using cross validation (CV), which will be explored in more detail later. In brief, CV provides predictions on the training set using a subset of the data to generate a model that predicts the class of the remaining sources. Using cross_val_predict, we can get a better sense of the model accuracy. Predictions from cross_val_predict are produced in the following manner:
from sklearn.model_selection import cross_val_predict
CVpreds = cross_val_predict(sklearn.model(), X, y)
where sklearn.model() is the desired model, X is the feature array, and y is the label array.
Problem 3b Produce cross-validation predictions for the iris dataset and a $k$NN with 5 neighbors. Plot the resulting classifications, as above, and estimate the accuracy of the model as applied to new data. How does this accuracy compare to a $k$NN with 50 neighbors?
End of explanation
"""
for iris_type in range(3):
iris_acc = sum( (CVpreds50 == iris_type) & (iris.target == iris_type)) / sum(iris.target == iris_type)
print("The accuracy for class {:s} is ~{:.4f}".format(iris.target_names[iris_type], iris_acc))
"""
Explanation: While it is useful to understand the overall accuracy of the model, it is even more useful to understand the nature of the misclassifications that occur.
Problem 3c Calculate the accuracy for each class in the iris set, as determined via CV for the $k$NN = 50 model.
End of explanation
"""
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(iris.target, CVpreds50)
print(cm)
"""
Explanation: We just found that the classifier does a much better job classifying setosa and versicolor than it does for virginica. The main reason for this is that some virginica flowers lie far outside the main virginica locus, and within predominantly versicolor "neighborhoods". In addition to knowing the accuracy for the individual classes, it is also useful to know the class predictions for the misclassified sources, or in other words where there is "confusion" for the classifier. The best way to summarize this information is with a confusion matrix. In a confusion matrix, one axis shows the true class and the other shows the predicted class. For a perfect classifier all of the power will be along the diagonal, while confusion is represented by off-diagonal signal.
Like almost everything else we have encountered during this exercise, scikit-learn makes it easy to compute a confusion matrix. This can be accomplished with the following:
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
Problem 3d Calculate the confusion matrix for the iris training set and the $k$NN = 50 model.
End of explanation
"""
normalized_cm = cm.astype('float')/cm.sum(axis = 1)[:,np.newaxis]
normalized_cm
"""
Explanation: From this representation, we see right away that most of the virginica that are being misclassified are being scattered into the versicolor class. However, this representation could still be improved: it'd be helpful to normalize each value relative to the total number of sources in each class, and better still, it'd be good to have a readily digestible visual representation of the confusion matrix. Now let's normalize the confusion matrix.
Problem 3e Calculate the normalized confusion matrix. Be careful, you have to sum along one axis, and then divide along the other.
Anti-hint: This operation is actually straightforward using some array manipulation that we have not covered up to this point. Thus, we have performed the necessary operations for you below. If you have extra time, you should try to develop an alternate way to arrive at the same normalization.
End of explanation
"""
plt.imshow(normalized_cm, interpolation = 'nearest', cmap = 'bone_r')
tick_marks = np.arange(len(iris.target_names))
plt.xticks(tick_marks, iris.target_names, rotation=45)
plt.yticks(tick_marks, iris.target_names)
plt.ylabel('True')
plt.xlabel('Predicted')
plt.colorbar()
plt.tight_layout()
"""
Explanation: The normalization makes it easier to compare the classes, since each class has a different number of sources. Now we can proceed with a visual representation of the confusion matrix. This is best done using imshow() within pyplot. You will also need to plot a colorbar, and labeling the axes will also be helpful.
Problem 3f Plot the confusion matrix. Be sure to label each of the axes.
Hint - you might find the sklearn confusion matrix tutorial helpful for making a nice plot.
End of explanation
"""
|
usantamaria/ipynb_para_docencia | 09_libreria_pandas/pandas.ipynb | mit | """
IPython Notebook v4.0 for Python 3.0
Additional libraries: numpy, scipy, matplotlib.
Content under CC-BY 4.0 license. Code under MIT license.
(c) Sebastian Flores, Christopher Cooper, Alberto Rubio, Pablo Bunout.
"""
# Configuration to dynamically reload modules and libraries
%reload_ext autoreload
%autoreload 2
# Configuration for inline plots
%matplotlib inline
# Style configuration
from IPython.core.display import HTML
HTML(open("./style/slides.css", "r").read())
"""
Explanation: <img src="images/utfsm.png" alt="" width="200px" align="right"/>
USM Numérica
The Pandas Library
Objectives
Learn the main commands of the pandas library.
Use pandas for data cleaning and manipulation.
About the author
Sebastián Flores
ICM UTFSM
sebastian.flores@usm.cl
About this presentation
Content created with ipython notebook (jupyter)
Slides version thanks to RISE by Damián Avila
Software:
* python 2.7 or python 3.1
* pandas 0.16.1
Optional:
* numpy 1.9.2
* matplotlib 1.3.1
0.1 Instructions
Instructions for installing and using an ipython notebook can be found at the following link.
After downloading and opening this notebook, remember to:
* Work through the problems sequentially.
* Save frequently with Ctrl-S to avoid surprises.
* Replace FIX_ME in the code cells with the appropriate code.
* Execute each code cell using Ctrl-Enter.
0.2 Licensing and Configuration
Execute the following cell with Ctrl-Enter.
End of explanation
"""
%%bash
cat data/data.csv
"""
Explanation: Learning by doing
We will consider the following file data.csv, which contains incomplete data:
End of explanation
"""
import numpy as np
df = np.loadtxt("data/data.csv", delimiter=";", dtype=str)
print( df )
import pandas as pd
df = pd.read_csv("data/data.csv", sep=";")
print( df )
#df
inch2m = 0.0254
feet2m = 0.3048
df.diametro = df.diametro * inch2m
df.altura = df.altura * feet2m
df.volumen = df.volumen * (feet2m**3)
df.tipo_de_arbol = "Cherry Tree"
df
print( df.columns )
print( df.index )
print( df["altura"]*2 )
print( df["diametro"]**2 * df["altura"] / df.volumen )
"""
Explanation: 1. Why use pandas?
Official reason:
Because numpy cannot mix data types, which complicates loading, using, cleaning and saving mixed data.
Personal reason:
Because some things were easier in R but not pythonic. The pandas library is an excellent compromise.
End of explanation
"""
import pandas as pd
s1 = pd.Series([False, 1, 2., "3", 4 + 0j])
print( s1 )
# Casting to other types
print( list(s1) )
print( set(s1) )
print( np.array(s1) )
# Example of Series arithmetic
s0 = pd.Series(range(6), index=range(6))
s1 = pd.Series([1,2,3], index=[1,2,3])
s2 = pd.Series([4,5,6], index=[4,5,6])
s3 = pd.Series([10,10,10], index=[1,4,6])
print( s0 )
print( s0 + s1 )
print( s0 + s1 + s2 )
print( s0.add(s1, fill_value=0) )
"""
Explanation: 2. Pandas basics
Pandas mimics R's dataframes, in python. Anything that seems odd is because it closely resembles R.
Pandas lets you work with data as in excel spreadsheets: the data in a column can be of mixed types.
The central idea is that indexing is tailored to the data: the columns and the rows (index) can be integers or floats, but they can also be strings. It depends on what makes sense.
The basic building blocks of pandas are:
Series: A collection of values with flexible indexing.
DataFrames: A collection of Series.
2.1 Series
A Series is a convenient collection of data, like a column of excel data, but with more general indexing.
python
pd.Series(self, data=None, index=None, dtype=None, name=None, copy=False, fastpath=False)
End of explanation
"""
# dict
df = pd.DataFrame({"col1":[1,2,3,4],
"col2":[1., 2., 3., 4.],
"col3":["uno", "dos", "tres", "cuatro"]})
df
"""
Explanation: 2.2 DataFrames
A DataFrame is a collection of Series with a common index, like an excel spreadsheet.
python
pd.DataFrame(self, data=None, index=None,
columns=None, dtype=None, copy=False)
End of explanation
"""
# csv
df = pd.read_csv("data/data.csv", sep=";")
df
df = pd.read_json("data/data.json")
df
"""
Explanation: 3.1 Getting data
csv file
json file
excel file: convert to csv carefully (choose an appropriate separator, and take care with strings).
End of explanation
"""
df = pd.read_csv("data/data.csv", sep=";")
df.columns
df['altura']
df.shape
df.head()
df.tail()
df.describe()
df.describe(include="all")
from matplotlib import pyplot as plt
df.hist(figsize=(10,10), layout=(3,1))
#df.hist(figsize=(8,8), layout=(3,1), by="tipo_de_arbol")
plt.show()
from matplotlib import pyplot as plt
pd.scatter_matrix(df, figsize=(10,10), range_padding=0.2)
plt.show()
df.tipo_de_arbol.value_counts()
"""
Explanation: 4. Inspecting data
Accessing the columns
shape
head, tail, describe
histogram
pd.scatter_matrix
value_counts
End of explanation
"""
df = pd.read_csv("data/data.csv", sep=";")
df["radio"] = .5 * df.diametro
df
df.area = np.pi * df.radio **2  # note: attribute assignment does NOT create a new column ('area' is missing below)
df.columns
"""
Explanation: 5. Manipulating DataFrames
Adding columns
Deleting columns
Adding rows
Deleting rows
Mask
Grouping
Data imputation
Apply
Merge (SQL-style)
Accessing
5.1 Adding columns
End of explanation
"""
df = pd.read_csv("data/data.csv", sep=";")
print( df.columns )
df.columns = ["RaDiO","AlTuRa","VoLuMeN","TiPo_De_ArBoL"]
print( df.columns )
df.columns = [col.lower() for col in df.columns]
print( df.columns )
"""
Explanation: 5.2 Renaming columns
End of explanation
"""
df = pd.read_csv("data/data.csv", sep=";")
print( df.columns )
df = df[["tipo_de_arbol","volumen", "diametro"]]
df
df = df.drop("tipo_de_arbol", axis=1)
df
df.drop("diametro", axis=1, inplace=True)
df
"""
Explanation: 5.3 Deleting columns
End of explanation
"""
df = pd.read_csv("data/data.csv", sep=";")
print( df.index )
df
df = df.reindex( range(20) )
df
# Using loc to access with traditional index notation
df.loc[20, :] = [10, 20, 30, "CT"]
df
"""
Explanation: 5.4 Adding rows (indices)
End of explanation
"""
df = pd.read_csv("data/data.csv", sep=";")
print(df.index)
df.index = df.index + 10
print(df.index)
df.index = ["i_%d"%idx for idx in df.index]
print(df.index)
"""
Explanation: 5.5 Renaming rows (indices)
End of explanation
"""
print(df.index)
df
df = df.drop(["i_11","i_13","i_19"], axis=0)
print( df.index )
df
df.drop(["i_24","i_25","i_26"], axis=0, inplace=True)
df
df = df[-5:]
df
"""
Explanation: 5.6 Deleting indices
End of explanation
"""
df = pd.read_csv("data/data.csv", sep=";")
vol_mean = df.volumen.mean()
vol_std = df.volumen.std()
mask_1 = df.altura < 80
mask_2 = df.volumen <= vol_mean + vol_std
df1 = df[ mask_1 & mask_2 ]
df1
# If built dynamically, use enough parentheses
#df2 = df[ ((vol_mean - vol_std) <= df.volumen) & (df.volumen <= (vol_mean + vol_std) ) ]
df2 = df[ (df.volumen >=(vol_mean - vol_std)) & (df.volumen <= (vol_mean + vol_std) ) ]
df2
# Sometimes numpy helps simplify things
mask_1 = df.volumen >= (vol_mean - vol_std)
mask_2 = df.volumen <= (vol_mean + vol_std)
mask = np.logical_and(mask_1, mask_2)
df3 = df[np.logical_not(mask)]
df3
"""
Explanation: Note
```python
# select the column col
# returns a Series
df[col]
# select the columns col1, col2, ..., coln
# returns a DataFrame
df[[col1, col2, ..., coln]]
# select only the index inicio
# returns a DataFrame
df[inicio:(inicio+1)]
# select the indices in slice notation
# returns a DataFrame
df[inicio:fin:salto]
# mixed selection
# returns a DataFrame
df.loc[inicio:fin:salto, col1:col2]
```
5.7 Masking
End of explanation
"""
df = pd.read_csv("data/data.csv", sep=";")
df.columns
g = df.groupby("tipo_de_arbol")
print( g )
print( g.count() )
print( g.sum() ) # .mean(), .std()
# Realistic example
df[["tipo_de_arbol","diametro", "altura"]].groupby("tipo_de_arbol").mean()
"""
Explanation: 5.8 Grouping
End of explanation
"""
# Before imputing data, always explore
df.describe(include="all")
# Manual data imputation (incorrect: chained assignment)
df["tipo_de_arbol"][df.tipo_de_arbol=="Cherrie Tree"] = "Cherry Tree"
df
# Manual data imputation (correct)
df = pd.read_csv("data/data.csv", sep=";")
index_mask = (df.tipo_de_arbol=="Cherrie Tree")
df.loc[index_mask, "tipo_de_arbol"] = "Cherry Tree" # .loc is essential
df
# Data imputation: fill NaNs with the mean
df = pd.read_csv("data/data.csv", sep=";")
df1 = df.fillna(df.mean())
df1
# Data imputation: fill NaNs with a fixed value
df2 = df.fillna(0)
df2
# Data imputation: drop rows containing NaN
df3 = df.dropna()
df3
"""
Explanation: 5.9 Data imputation
End of explanation
"""
df = pd.read_csv("data/data.csv", sep=";")
df1 = df.diametro.apply(lambda x: x*2)
df1
# Incorrect application
df2 = df["tipo_de_arbol"].apply(str.upper) # Error
df2
# Correct application
df2 = df["tipo_de_arbol"].apply(lambda s: str(s).upper() )
df2
# Error (or is it?)
df3 = df.apply(lambda x: x*2)
df3
"""
Explanation: 5.10 Apply
End of explanation
"""
df.tipo_de_arbol.str.upper()
df.tipo_de_arbol.str.len()
df.tipo_de_arbol.str[3:-3]
"""
Explanation: Shortcut
To apply string operations to a column of strings, the following notation can be used to save space.
End of explanation
"""
df1 = pd.read_csv("data/data.csv", sep=";")
df1
df2 = pd.DataFrame(data={"tipo_de_arbol":["Cherry Tree", "Apple Tree", "Pear Tree"],
"fruto":["guinda","manzana", "pera"],
"precio_pesos_por_kg":[500, 2000, np.nan]})
df2
df3 = df1.merge(df2, how="left", on="tipo_de_arbol")
df3
df3 = df1.merge(df2, how="right", on="tipo_de_arbol")
df3
df3 = df1.merge(df2, how="inner", on="tipo_de_arbol")
df3
df3 = df1.merge(df2, how="outer", on="tipo_de_arbol")
df3
"""
Explanation: 5.11 Merge
End of explanation
"""
# save a csv
df = pd.read_csv("data/data.csv", sep=";")
df = df[df.tipo_de_arbol=="Cherry Tree"]
df.to_csv("data/output.csv", sep="|", index=True) # header=True by default
df
# read back the csv above
df2 = pd.read_csv("data/output.csv", sep="|", index_col=0) # get index from first column
df2
%%bash
cat data/output.csv
# save a json
df = pd.read_csv("data/data.csv", sep=";")
df = df[df.tipo_de_arbol=="Cherry Tree"]
df.to_json("data/output.json")
df
# reading back the json above
df2 = pd.read_json("data/output.json")
df2
%%bash
cat data/output.json
"""
Explanation: Saving data
csv
json
excel
The most important thing is to be careful about how the column names (header) and the index are saved.
It depends on the use case, but my recommendation is to save the header explicitly and to save the index as a column.
End of explanation
"""
|
donaghhorgan/COMP9033 | labs/06 - Linear regression.ipynb | gpl-3.0 | %matplotlib inline
import numpy as np
import pandas as pd
from sklearn.dummy import DummyRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import GridSearchCV, KFold, cross_val_predict
"""
Explanation: Lab 06: Linear regression
Introduction
This week's lab focuses on data modelling using linear regression. At the end of the lab, you should be able to use scikit-learn to:
Create a linear regression model using the least squares technique.
Use the model to predict new values.
Measure the accuracy of the model.
Engineer new features to optimise model performance.
Getting started
Let's start by importing the packages we need. This week, we're going to use the linear_model subpackage from scikit-learn to build linear regression models using the least squares technique. We're also going to need the dummy subpackage to create a baseline regression model, to which we can compare.
End of explanation
"""
url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data'
df = pd.read_fwf(url, header=None, names=['mpg', 'cylinders', 'displacement', 'horsepower', 'weight',
'acceleration', 'model year', 'origin', 'car name'])
"""
Explanation: Next, let's load the data. This week, we're going to load the Auto MPG data set, which is available online at the UC Irvine Machine Learning Repository. The dataset is in fixed width format, but fortunately this is supported out of the box by pandas' read_fwf function:
End of explanation
"""
df.head()
"""
Explanation: Exploratory data analysis
According to its documentation, the Auto MPG dataset consists of eight explanatory variables (i.e. features), each describing a single car model, which are related to the given target variable: the number of miles per gallon (MPG) of fuel of the given car. The following attribute information is given:
mpg: continuous
cylinders: multi-valued discrete
displacement: continuous
horsepower: continuous
weight: continuous
acceleration: continuous
model year: multi-valued discrete
origin: multi-valued discrete
car name: string (unique for each instance)
Let's start by taking a quick peek at the data:
End of explanation
"""
df = df.set_index('car name')
df.head()
"""
Explanation: As the car name is unique for each instance (according to the dataset documentation), it cannot be used to predict the MPG by itself, so let's drop it as a feature and use it as the index instead:
Note: It seems plausible that MPG efficiency might vary from manufacturer to manufacturer, so we could generate a new feature by converting the car names into manufacturer names, but for simplicity let's just drop them here.
End of explanation
"""
df = df[df['horsepower'] != '?']
"""
Explanation: According to the documentation, the horsepower column contains a small number of missing values, each of which is denoted by the string '?'. Again, for simplicity, let's just drop these from the data set:
End of explanation
"""
df.dtypes
"""
Explanation: Usually, pandas is smart enough to recognise that a column is numeric and will convert it to the appropriate data type automatically. However, in this case, because there were strings present initially, the data type of the horsepower column isn't numeric:
End of explanation
"""
df['horsepower'] = pd.to_numeric(df['horsepower'])
# Check the data types again
df.dtypes
"""
Explanation: We can correct this by converting the column values to numbers manually, using pandas' to_numeric function:
End of explanation
"""
df = pd.get_dummies(df, columns=['origin'], drop_first=True)
df.head()
"""
Explanation: As can be seen, the data type of the horsepower column is now float64, i.e. a 64 bit floating point value.
According to the documentation, the origin variable is categoric (i.e. origin = 1 is not "less than" origin = 2) and so we should encode it via one hot encoding so that our model can make sense of it. This is easy with pandas: all we need to do is use the get_dummies method, as follows:
End of explanation
"""
df.describe()
"""
Explanation: As can be seen, one hot encoding converts the origin column into separate binary columns, each representing the presence or absence of the given category. Because we're going to use a linear regression model, to avoid introducing multicollinearity, we must also drop the first of the encoded columns by setting the drop_first keyword argument to True.
Next, let's take a look at the distribution of the variables in the data frame. We can start by computing some descriptive statistics:
End of explanation
"""
df.corr()
"""
Explanation: Print a matrix of pairwise Pearson correlation values:
End of explanation
"""
pd.plotting.scatter_matrix(df, s=50, hist_kwds={'bins': 10}, figsize=(16, 16));
"""
Explanation: Let's also create a scatter plot matrix:
End of explanation
"""
X = df.drop('mpg', axis='columns') # X = features
y = df['mpg'] # y = prediction target
model = DummyRegressor() # Predicts the target as the average of the features
outer_cv = KFold(n_splits=5, shuffle=True, random_state=0) # 5 fold cross validation
y_pred = cross_val_predict(model, X, y, cv=outer_cv) # Make predictions via cross validation
print('Mean absolute error: %f' % mean_absolute_error(y, y_pred))
print('Standard deviation of the error: %f' % (y - y_pred).std())
ax = (y - y_pred).hist()
ax.set(
title='Distribution of errors for the dummy model',
xlabel='Error'
);
"""
Explanation: Based on the above information, we can conclude the following:
Based on a quick visual inspection, there don't appear to be significant numbers of outliers in the data set. (We could make boxplots for each variable - but let's save time and skip it here.)
Most of the explanatory variables appear to have a non-linear relationship with the target.
There is a high degree of correlation ($r > 0.9$) between cylinders and displacement and, also, between weight and displacement.
The following variables appear to be left-skewed: mpg, displacement, horsepower, weight.
The acceleration variable appears to be normally distributed.
The model year follows a roughly uniform distribution.
The cylinders and origin variables have few unique values.
For now, we'll just note this information, but we'll come back to it later when improving our model.
Data Modelling
Dummy model
Let's start our analysis by building a dummy regression model that makes very naive (often incorrect) predictions about the target variable. This is a good first step as it gives us a benchmark to compare our later models to.
Creating a dummy regression model with scikit-learn is easy: first, we create an instance of DummyRegressor, and then we evaluate its performance on the data using cross validation, just like last week.
Note: Our dummy model has no hyperparameters, so we don't need to do an inner cross validation or grid search - just the outer cross validation to estimate the model accuracy.
End of explanation
"""
X = df.drop('mpg', axis='columns') # X = features
y = df['mpg'] # y = prediction target
model = LinearRegression(fit_intercept=True, normalize=False) # Use least squares linear regression
outer_cv = KFold(n_splits=5, shuffle=True, random_state=0) # 5-fold cross validation
y_pred = cross_val_predict(model, X, y, cv=outer_cv) # Make predictions via cross validation
print('Mean absolute error: %f' % mean_absolute_error(y, y_pred))
print('Standard deviation of the error: %f' % (y - y_pred).std())
ax = (y - y_pred).hist()
ax.set(
title='Distribution of errors for the linear regression model',
xlabel='Error'
);
"""
Explanation: The dummy model predicts the MPG with an average error of approximately $\pm 6.57$ (although, as can be seen from the distribution of errors the spread is much larger than this). Let's see if we can do better with a linear regression model.
Linear regression model
scikit-learn supports linear regression via its linear_model subpackage. This subpackage supports least squares regression, lasso regression and ridge regression, as well as many other varieties. Let's use least squares to build our model. We can do this using the LinearRegression class, which supports the following options:
fit_intercept: If True, prepend an all-ones predictor to the feature matrix before fitting the regression model; otherwise, use the feature matrix as is. By default, this is True if not specified.
normalize: If True, standardize the input features before fitting the regression model; otherwise use the unscaled features. By default, this is False if not specified.
Generally, it makes sense to fit an intercept term when building regression models, the exception being in cases where it is known that the target variable is zero when all the feature values are zero. In our case, it seems unlikely that an all-zero feature vector would correspond to a zero MPG target value (for instance, consider the meaning of model year = 0 and weight = 0 in the context of the analysis). Consequently, we can set fit_intercept=True below.
Whether to standardize the input features or not depends on a number of factors:
Standardization can mitigate against multicollinearity - but only in cases where supplemental new features have been generated based on a combination of one or more existing features, i.e. where both the new feature and the features it was derived from are all included as input features.
Standardizing the input data ensures that the resulting model coefficients indicate the relative importance of their corresponding feature - but only in cases where the features are all approximately normally distributed.
In our case, as we are not generating supplemental new features and several of the features are not normally distributed (see the scatter plot matrix above), we can choose not to standardize them (normalize=False) with no loss of advantage.
Note: In cases where there is uncertainty as to whether an intercept should be fit or not, or whether the input features should be standardized or not, or both, we can use a grid search with nested cross validation (i.e. model selection) to determine the correct answer.
End of explanation
"""
pd.plotting.scatter_matrix(df[['mpg', 'displacement', 'horsepower', 'weight']], s=50, figsize=(9, 9));
"""
Explanation: Our linear regression model predicts the MPG with an average error of approximately $\pm 2.59$ and a significantly smaller standard deviation too - this is a big improvement over the dummy model!
But we can do better! Earlier, we noted that several of the features had non-linear relationships with the target variable - if we could transform these variables, we might be able to make this relationship more linear. Let's consider the displacement, horsepower and weight variables:
End of explanation
"""
df['displacement'] = df['displacement'].map(np.log)
df['horsepower'] = df['horsepower'].map(np.log)
df['weight'] = df['weight'].map(np.log)
"""
Explanation: The relationship between the target and these predictors appears to be an exponentially decreasing one: as the predictors increase in value, there is an exponential decrease in the target value. Log-transforming the variables should help to remove this effect (logarithms are the inverse mathematical operation to exponentials):
End of explanation
"""
pd.plotting.scatter_matrix(df[['mpg', 'displacement', 'horsepower', 'weight']], s=50, figsize=(9, 9));
"""
Explanation: Now, the relationship between the predictors and the target is much more linear:
End of explanation
"""
X = df.drop('mpg', axis='columns')
y = df['mpg']
model = LinearRegression(fit_intercept=True, normalize=False)
outer_cv = KFold(n_splits=5, shuffle=True, random_state=0)
y_pred = cross_val_predict(model, X, y, cv=outer_cv)
print('Mean absolute error: %f' % mean_absolute_error(y, y_pred))
print('Standard deviation of the error: %f' % (y - y_pred).std())
ax = (y - y_pred).hist()
ax.set(
title='Distribution of errors for the linear regression model with transformed features',
xlabel='Error'
);
"""
Explanation: Let's run the analysis a second time and see the effect this has had:
End of explanation
"""
X = df.drop('mpg', axis='columns')
y = df['mpg']
model = LinearRegression(fit_intercept=True, normalize=False)
model.fit(X, y) # Fit the model using all of the data
"""
Explanation: As can be seen, the average error has now decreased to $\pm 2.33$ and the standard deviation of the error to 3.12. Further reductions in error might be achieved by experimenting with feature selection, given the high degree of correlation between some of the predictors, or with a more sophisticated model, such as ridge regression.
Building the final model
Once we have identified an approach that satisfies our requirements (e.g. accuracy), we should build a final model using all of the data.
End of explanation
"""
print(model.intercept_)
print(model.coef_) # Coefficients are printed in the same order as the columns in the feature matrix, X
"""
Explanation: We can examine the values of the intercept (if we chose to fit one) and coefficients of our final model by printing its intercept_ and coef_ attributes, as follows:
End of explanation
"""
|
vorth/ipython | heptagons/DrawingTheHeptagon.ipynb | apache-2.0 | # load the definitions from the previous notebook
%run HeptagonNumbers.py
# represent points or vertices as pairs of heptagon numbers
p0 = ( zero, zero )
p1 = ( sigma, zero )
p2 = ( sigma+1, rho )
p3 = ( sigma, rho*sigma )
p4 = ( zero, sigma*sigma )
p5 = ( -rho, rho*sigma )
p6 = ( -rho, rho )
heptagon = [ p0, p1, p2, p3, p4, p5, p6 ]
heptagram_rho = [ p0, p2, p4, p6, p1, p3, p5 ]
heptagram_sigma = [ p0, p3, p6, p2, p5, p1, p4 ]
"""
Explanation: Part 2: Drawing the Heptagon
The heptagon numbers we defined in Part 1 are obviously useful for defining lengths of lines in a figure like the one below, since all of those line lengths are related to the $\rho$ and $\sigma$ ratios. But will heptagon numbers suffice for the coordinates we must give to a graphics library, to draw line segments on the screen? The answer turns out to be yes, though we'll need one little trick to make it work.
<img src="heptagonSampler.png",width=1100,height=600 />
Finding the Heptagon Vertices
To start with, we will try to construct coordinates for the seven vertices of the heptagon. We will label the points as in the figure below, $P0$ through $P6$.
<img src="heptagonVertices.png" width=500 height=500 />
For convenience, we can say that $P0$ is the origin of our coordinate system, so it will have coordinates $(0,0)$. If the heptagon edge length is one, then the coordinates of point $P1$ are clearly $(1,0)$. But now what? We can use the Pythagorean theorem to find the coordinates of point $P4$, but we end up with $(1/2,a)$, where
$$a = \sqrt{\sigma^2 - \frac{1}{4}}
= \sqrt{\frac{3}{4} + \rho + \sigma} $$
This is annoying, since we have no way to take the square root of a heptagon number! Fortunately, there is an easier way.
What if we abandon our usual Cartesian coordinate frame, and use one that works a little more naturally for us? We can use a different frame of reference, one where the "y" axis is defined as the line passing through points $P0$ and $P4$. We can then model all the heptagon vertices quite naturally, and adjust for the modified frame of reference when we get to the point of drawing.
For the sake of further convenience, let us scale everything up by a factor of $\sigma$. This makes it quite easy to write down the coordinates of all the points marked on the diagram above. Notice the three points I have included in the interior of the heptagon. Those points divide the horizontal and vertical diagonals into $\rho:\sigma:1$ and $\rho:\sigma$ proportions. So now we have:
|point|coordinates|
|-----|-----------|
|$P0$|$(0,0)$|
|$P1$|$(\sigma,0)$|
|$P2$|$(1+\sigma,\rho)$|
|$P3$|$(\sigma,\rho+\sigma)$|
|$P4$|$(0,1+\rho+\sigma)$|
|$P5$|$(-\rho,\rho+\sigma)$|
|$P6$|$(-\rho,\rho)$|
If we render our heptagon and heptagrams with these coordinates, ignoring the fact that we used an unusual coordinate frame, we get a skewed heptagon:
<img src="skewHeptagon.png" width=500 height=500 />
This figure is certainly not regular in any usual sense, since
the edge lengths and angles vary, but we could refer to it as an affine regular heptagon (and associated heptagrams). In linear algebra, an affine transformation is one that preserves parallel lines and ratios along lines. Those are exactly the properties that we took advantage of in capturing our coordinates.
Although we have not yet discussed how we will accommodate the skew coordinate frame we've used, let's take a look at the Python code to capture the vertices used above. Note that a point is represented simply as a pair of heptagon numbers, using Python's tuple notation.
End of explanation
"""
print( "sigma = " + str( HeptagonNumber.sigma_real ) )
print( "rho = " + str( HeptagonNumber.rho_real ) )
# This function maps from a pair of heptagon numbers to
# a pair of floating point numbers (approximating real numbers)
def render( v ):
    x, y = v
    return [ float(x), float(y) ]
"""
Explanation: Evaluating a Heptagon Number
We are almost ready to look at Python code for drawing these figures. The last step is to provide a mapping from heptagon numbers to real numbers, since any drawing library requires us to provide points as $(x,y)$ pairs, where $x$ and $y$ are floating-point numbers. This is easy, of course: we just evaluate the expression $a+b\rho+c\sigma$, using predefined values for $\rho$ and $\sigma$. But what are those values?
We can easily derive them from a tiny bit of trigonometry, looking at our heptagon-plus-heptagrams figure again.
<img src="findingConstants.png" width=500 height=500 />
I have rotated the heptagon a bit, so we see angles presented in the traditional way, with zero radians corresponding to the X axis, to the right, and considering angles around the point $A$. We can find $\sigma$ using the right triangle $\triangle ABC$. If our heptagon has edge length of one, then line segment $AB$ has length $\sigma$, by definition, and line segment $BC$ has length $1/2$. Remembering the formula for the sine function (opposite over hypotenuse), we can see that
$$\sin \angle CAB = \frac{1/2}{\sigma} = \frac{1}{2\sigma}$$
which means that
$$\sigma = \frac{1}{2\sin \angle CAB}$$
Now we just need to know what angle $\angle CAB$ is. Here, you can have some fun by convincing yourself that $\angle CAB$ is equal to $\pi/14$ radians. As a hint, first use similar triangles to show that all those narrow angles at the heptagon vertices are equal to $\pi/7$. In any case, we have our value for $\sigma$:
$$\sigma = \frac{1}{2\sin{\frac{\pi}{14}}} $$
Computing $\rho$ is just as easy. It is convenient to use triangle $\triangle ADE$, with the following results:
$$\frac{\rho}{2} = \sin{\angle EAD} = \sin{\frac{5\pi}{14}}$$
$$\rho = 2\sin{\frac{5\pi}{14}}$$
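As a quick sanity check, we can compute both constants with plain floats and confirm that they satisfy the identity $\sigma^2 = 1 + \rho + \sigma$ that we used at the start of this section, as well as $\rho^2 = 1 + \sigma$. This check is independent of the HeptagonNumber class:

```python
import math

sigma = 1 / (2 * math.sin(math.pi / 14))  # ~2.2469796
rho = 2 * math.sin(5 * math.pi / 14)      # ~1.8019377

# The heptagon-number identities hold for these trigonometric values
assert abs(rho**2 - (1 + sigma)) < 1e-12
assert abs(sigma**2 - (1 + rho + sigma)) < 1e-12
```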
These values for $\rho$ and $\sigma$ were already captured as constants in Part 1. Here we will just print out the values, and define a rendering function that produces a pair of floats from a pair of heptagon numbers.
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.path as mpath
import matplotlib.patches as mpatches
Path = mpath.Path
def drawPolygon( polygonVerts, color, mapping=render ):
    n = len( polygonVerts )
    codes = [ Path.MOVETO ]
    verts = []
    verts.append( mapping( polygonVerts[ 0 ] ) )
    for i in range( 1, n + 1 ):
        codes.append( Path.LINETO )
        verts.append( mapping( polygonVerts[ i % n ] ) )
    path = mpath.Path( verts, codes )
    return mpatches.PathPatch( path, facecolor='none', edgecolor=color )
def drawHeptagrams( mapping=render ):
    fig = plt.figure( figsize=(8,8) )
    ax = fig.add_subplot( 111 )
    ax.add_patch( drawPolygon( heptagon, '#dd0000', mapping ) )
    ax.add_patch( drawPolygon( heptagram_rho, '#990099', mapping ) )
    ax.add_patch( drawPolygon( heptagram_sigma, '#0000dd', mapping ) )
    ax.set_xlim( -3, 4 )
    ax.set_ylim( -1, 6 )

drawHeptagrams()
"""
Explanation: Python Code for Drawing
Here is the Python code that renders the skewed heptagon shown earlier. I won't go over the details of the matplotlib drawing library, since you can explore the online documentation. The thing to note is the drawPolygon function, which takes an array of heptagon-number pairs as vertices, renders each one to floats, and draws a closed path connecting them.
End of explanation
"""
import math

def skewRender( v ):
    x = float( v[0] )
    y = float( v[1] )
    # map skew-frame coordinates into the standard Cartesian frame
    x = x + y / ( 2 * HeptagonNumber.sigma_real )
    y = math.sin( (3.0/7.0) * math.pi ) * y
    return [ x, y ]
drawHeptagrams( skewRender )
"""
Explanation: Correcting the Skew
Now all that remains is to straighten up our heptagon by applying a shear transformation. This is simpler than it sounds, using the most basic technique of linear algebra, a change of basis.
The trick is to represent points as column vectors:
$$(x,y) \Rightarrow \begin{bmatrix}x \\ y\end{bmatrix}$$
Now, consider the vectors corresponding to the natural basis vectors we used to construct the heptagon vertices:
$$ \begin{bmatrix}1 \\ 0\end{bmatrix}, \begin{bmatrix}0 \\ 1\end{bmatrix} $$
What should our transformation do to these two vectors? We don't need any change in the X-axis direction, so we'll leave that one alone.
$$ \begin{bmatrix}1 \\ 0\end{bmatrix} \Rightarrow \begin{bmatrix}1 \\ 0\end{bmatrix} $$
For the Y-axis, we must determine what point in a traditional Cartesian plane corresponds to our "vertical" basis vector. We can find that by applying a bit of trigonometry, as we did when deriving $\rho$ and $\sigma$ earlier. The result is:
$$ \begin{bmatrix}0 \\ 1\end{bmatrix} \Rightarrow \begin{bmatrix} \frac{1}{2\sigma} \\ \sin\frac{3}{7}\pi\end{bmatrix} $$
It turns out that specifying those two transformations is equivalent to specifying the transformation of any vector. We use our transformed basis vectors as the columns of a matrix, and write:
$$\begin{bmatrix}x' \\ y'\end{bmatrix} = \begin{bmatrix}1 & \frac{1}{2\sigma} \\ 0 & \sin\frac{3}{7}\pi\end{bmatrix}
\begin{bmatrix}x \\ y\end{bmatrix}$$
Again, this is simpler than it looks. It is just a way of writing one equation that represents two equations:
$$ x' = x + y \frac{1}{2\sigma} $$
$$ y' = y \sin\frac{3}{7}\pi $$
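Written as an explicit matrix multiply, the same change of basis looks like this. It is a plain-float sketch, equivalent to the skewRender function in the code cell above:

```python
import math

sigma = 1 / (2 * math.sin(math.pi / 14))

# Columns of M are the images of the basis vectors (1,0) and (0,1)
M = [[1.0, 1.0 / (2 * sigma)],
     [0.0, math.sin(3 * math.pi / 7)]]

def unskew(x, y):
    # Multiply the column vector (x, y) by M
    return (M[0][0] * x + M[0][1] * y,
            M[1][0] * x + M[1][1] * y)

# The x basis vector is left alone, exactly as required
print(unskew(1, 0))  # (1.0, 0.0)
```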
And now we finally have something that we can translate directly into Python, to render our heptagon and heptagrams with full symmetry. Since we defined drawHeptagrams to accept a rendering function already, we can just call it again with the new function.
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/bcc/cmip6/models/sandbox-1/atmoschem.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'bcc', 'sandbox-1', 'atmoschem')
"""
Explanation: ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era: CMIP6
Institute: BCC
Source ID: SANDBOX-1
Topic: Atmoschem
Sub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry.
Properties: 84 (39 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:39
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric chemistry model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmospheric chemistry model code.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Chemistry Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the atmospheric chemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the atmospheric chemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Form of prognostic variables in the atmospheric chemistry component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of advected tracers in the atmospheric chemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are atmospheric chemistry calculations (not advection) generalized into families of species?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 1.8. Coupling With Chemical Reactivity
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is atmospheric chemistry transport scheme turbulence coupled with chemical reactivity?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the evolution of a given variable
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemical species advection (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for physics (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.4. Split Operator Chemistry Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemistry (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3.5. Split Operator Alternate Order
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does the split operator scheme alternate the order of the operators between timesteps?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.6. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the atmospheric chemistry model (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3.7. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Timestep Framework --> Split Operator Order
4.1. Turbulence
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.2. Convection
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for convection scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Precipitation
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.4. Emissions
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.5. Deposition
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.6. Gas Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.9. Photo Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.10. Aerosols
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process-oriented metrics, and any conflicts with parameterization-level tuning. In particular, describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the atmospheric chemistry grid
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
*Does the atmospheric chemistry grid match the atmosphere grid?*
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, e.g. 50 km or 0.1 degrees.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 7.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 7.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 7.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview of transport implementation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.2. Use Atmospheric Transport
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is transport handled by the atmosphere, rather than within atmospheric chemistry?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Transport Details
Is Required: FALSE Type: STRING Cardinality: 0.1
If transport is handled within the atmospheric chemistry scheme, describe it.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric chemistry emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Emissions Concentrations --> Surface Emissions
10.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the chemical species emitted at the surface that are taken into account in the emissions scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed as spatially uniform
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via an interactive method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via any other method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed as spatially uniform
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an interactive method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an "other method"
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.2. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of gas phase atmospheric chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Species included in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.3. Number Of Bimolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of bi-molecular reactions in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.4. Number Of Termolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of ter-molecular reactions in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.7. Number Of Advected Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of advected species in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.8. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.9. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.10. Wet Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.11. Wet Oxidation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry stratospheric heterogeneous chemistry
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of stratospheric heterogeneous atmospheric chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
"""
Explanation: 14.2. Gas Phase Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Gas phase species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
"""
Explanation: 14.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 14.5. Sedimentation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sedimentation included in the stratospheric heterogeneous chemistry scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 14.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the stratospheric heterogeneous chemistry scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tropospheric heterogeneous atmospheric chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Gas Phase Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of gas phase species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
"""
Explanation: 15.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.5. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the tropospheric heterogeneous chemistry scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric photo chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 16.2. Number Of Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the photo-chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
"""
Explanation: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Photolysis scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.2. Environmental Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)
End of explanation
"""
|
machinelearningnanodegree/stanford-cs231 | solutions/pranay/assignment1/knn.ipynb | mit | # Run some setup code for this notebook.
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
# This is a bit of magic to make matplotlib figures appear inline in the notebook
# rather than in a new window.
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# Some more magic so that the notebook will reload external python modules;
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
# Load the raw CIFAR-10 data.
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# As a sanity check, we print out the size of the training and test data.
print 'Training data shape: ', X_train.shape
print 'Training labels shape: ', y_train.shape
print 'Test data shape: ', X_test.shape
print 'Test labels shape: ', y_test.shape
# Visualize some examples from the dataset.
# We show a few examples of training images from each class.
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
num_classes = len(classes)
samples_per_class = 7
for y, cls in enumerate(classes):
idxs = np.flatnonzero(y_train == y)
idxs = np.random.choice(idxs, samples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt_idx = i * num_classes + y + 1
plt.subplot(samples_per_class, num_classes, plt_idx)
plt.imshow(X_train[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls)
plt.show()
# Subsample the data for more efficient code execution in this exercise
num_training = 5000
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
num_test = 500
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
# Reshape the image data into rows
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))
print X_train.shape, X_test.shape
from cs231n.classifiers import KNearestNeighbor
# Create a kNN classifier instance.
# Remember that training a kNN classifier is a noop:
# the Classifier simply remembers the data and does no further processing
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
"""
Explanation: k-Nearest Neighbor (kNN) exercise
Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website.
The kNN classifier consists of two stages:
During training, the classifier takes the training data and simply remembers it
During testing, kNN classifies every test image by comparing to all training images and transferring the labels of the k most similar training examples
The value of k is cross-validated
In this exercise you will implement these steps and understand the basic Image Classification pipeline, cross-validation, and gain proficiency in writing efficient, vectorized code.
End of explanation
"""
# Open cs231n/classifiers/k_nearest_neighbor.py and implement
# compute_distances_two_loops.
# Test your implementation:
dists = classifier.compute_distances_two_loops(X_test)
print dists.shape
# We can visualize the distance matrix: each row is a single test example and
# its distances to training examples
plt.imshow(dists, interpolation='none')
plt.show()
"""
Explanation: We would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps:
First we must compute the distances between all test examples and all train examples.
Given these distances, for each test example we find the k nearest examples and have them vote for the label
Let's begin by computing the distance matrix between all training and test examples. For example, if there are Ntr training examples and Nte test examples, this stage should result in an Nte x Ntr matrix where each element (i,j) is the distance between the i-th test and j-th training example.
First, open cs231n/classifiers/k_nearest_neighbor.py and implement the function compute_distances_two_loops that uses a (very inefficient) double loop over all pairs of (test, train) examples and computes the distance matrix one element at a time.
End of explanation
"""
# Now implement the function predict_labels and run the code below:
# We use k = 1 (which is Nearest Neighbor).
y_test_pred = classifier.predict_labels(dists, k=1)
# Compute and print the fraction of correctly predicted examples
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)
"""
Explanation: Inline Question #1: Notice the structured patterns in the distance matrix, where some rows or columns are visibly brighter. (Note that with the default color scheme black indicates low distances while white indicates high distances.)
What in the data is the cause behind the distinctly bright rows?
What causes the columns?
Your Answer: fill this in.
End of explanation
"""
y_test_pred = classifier.predict_labels(dists, k=5)
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)
"""
Explanation: You should expect to see approximately 27% accuracy. Now let's try out a larger k, say k = 5:
End of explanation
"""
# Now let's speed up distance matrix computation by using partial vectorization
# with one loop. Implement the function compute_distances_one_loop and run the
# code below:
dists_one = classifier.compute_distances_one_loop(X_test)
# To ensure that our vectorized implementation is correct, we make sure that it
# agrees with the naive implementation. There are many ways to decide whether
# two matrices are similar; one of the simplest is the Frobenius norm. In case
# you haven't seen it before, the Frobenius norm of the difference of two
# matrices is the square root of the sum of squared differences of all elements;
# in other words, reshape the matrices into vectors and compute the Euclidean
# distance between them.
difference = np.linalg.norm(dists - dists_one, ord='fro')
print 'Difference was: %f' % (difference, )
if difference < 0.001:
print 'Good! The distance matrices are the same'
else:
print 'Uh-oh! The distance matrices are different'
# Now implement the fully vectorized version inside compute_distances_no_loops
# and run the code
dists_two = classifier.compute_distances_no_loops(X_test)
# check that the distance matrix agrees with the one we computed before:
difference = np.linalg.norm(dists - dists_two, ord='fro')
print 'Difference was: %f' % (difference, )
if difference < 0.001:
print 'Good! The distance matrices are the same'
else:
print 'Uh-oh! The distance matrices are different'
# Let's compare how fast the implementations are
def time_function(f, *args):
"""
Call a function f with args and return the time (in seconds) that it took to execute.
"""
import time
tic = time.time()
f(*args)
toc = time.time()
return toc - tic
two_loop_time = time_function(classifier.compute_distances_two_loops, X_test)
print 'Two loop version took %f seconds' % two_loop_time
one_loop_time = time_function(classifier.compute_distances_one_loop, X_test)
print 'One loop version took %f seconds' % one_loop_time
no_loop_time = time_function(classifier.compute_distances_no_loops, X_test)
print 'No loop version took %f seconds' % no_loop_time
# you should see significantly faster performance with the fully vectorized implementation
"""
Explanation: You should expect to see a slightly better performance than with k = 1.
End of explanation
"""
num_folds = 5
k_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100]
X_train_folds = []
y_train_folds = []
################################################################################
# TODO: #
# Split up the training data into folds. After splitting, X_train_folds and #
# y_train_folds should each be lists of length num_folds, where #
# y_train_folds[i] is the label vector for the points in X_train_folds[i]. #
# Hint: Look up the numpy array_split function. #
################################################################################
pass
################################################################################
# END OF YOUR CODE #
################################################################################
# A dictionary holding the accuracies for different values of k that we find
# when running cross-validation. After running cross-validation,
# k_to_accuracies[k] should be a list of length num_folds giving the different
# accuracy values that we found when using that value of k.
k_to_accuracies = {}
################################################################################
# TODO: #
# Perform k-fold cross validation to find the best value of k. For each #
# possible value of k, run the k-nearest-neighbor algorithm num_folds times, #
# where in each case you use all but one of the folds as training data and the #
# last fold as a validation set. Store the accuracies for all fold and all #
# values of k in the k_to_accuracies dictionary. #
################################################################################
pass
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out the computed accuracies
for k in sorted(k_to_accuracies):
for accuracy in k_to_accuracies[k]:
print 'k = %d, accuracy = %f' % (k, accuracy)
# plot the raw observations
for k in k_choices:
accuracies = k_to_accuracies[k]
plt.scatter([k] * len(accuracies), accuracies)
# plot the trend line with error bars that correspond to standard deviation
accuracies_mean = np.array([np.mean(v) for k,v in sorted(k_to_accuracies.items())])
accuracies_std = np.array([np.std(v) for k,v in sorted(k_to_accuracies.items())])
plt.errorbar(k_choices, accuracies_mean, yerr=accuracies_std)
plt.title('Cross-validation on k')
plt.xlabel('k')
plt.ylabel('Cross-validation accuracy')
plt.show()
# Based on the cross-validation results above, choose the best value for k,
# retrain the classifier using all the training data, and test it on the test
# data. You should be able to get above 28% accuracy on the test data.
best_k = 1
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
y_test_pred = classifier.predict(X_test, k=best_k)
# Compute and display the accuracy
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)
"""
Explanation: Cross-validation
We have implemented the k-Nearest Neighbor classifier but we set the value k = 5 arbitrarily. We will now determine the best value of this hyperparameter with cross-validation.
End of explanation
"""
|
shareactorIO/pipeline | source.ml/jupyterhub.ml/notebooks/spark/Deploy_SparkML_Airbnb_LinearRegression.ipynb | apache-2.0 | # You may need to Reconnect (more than Restart) the Kernel to pick up changes to these sett
import os
master = '--master spark://127.0.0.1:47077'
conf = '--conf spark.cores.max=1 --conf spark.executor.memory=512m'
packages = '--packages com.amazonaws:aws-java-sdk:1.7.4,org.apache.hadoop:hadoop-aws:2.7.1'
jars = '--jars /root/lib/jpmml-sparkml-package-1.0-SNAPSHOT.jar'
py_files = '--py-files /root/lib/jpmml.py'
os.environ['PYSPARK_SUBMIT_ARGS'] = master \
+ ' ' + conf \
+ ' ' + packages \
+ ' ' + jars \
+ ' ' + py_files \
+ ' ' + 'pyspark-shell'
print(os.environ['PYSPARK_SUBMIT_ARGS'])
from pyspark.ml.linalg import Vectors
from pyspark.ml.feature import VectorAssembler, StandardScaler
from pyspark.ml.feature import OneHotEncoder, StringIndexer
from pyspark.ml import Pipeline, PipelineModel
from pyspark.ml.regression import LinearRegression
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
df = spark.read.format("csv") \
.option("inferSchema", "true").option("header", "true") \
.load("hdfs://127.0.0.1:39000/datasets/airbnb/airbnb.csv.bz2")
df.registerTempTable("df")
print(df.head())
print(df.count())
"""
Explanation: Step 0: Load Libraries, Data, and SparkSession
End of explanation
"""
df_filtered = df.filter("price >= 50 AND price <= 750 AND bathrooms > 0.0 AND bedrooms is not null")
df_filtered.registerTempTable("df_filtered")
df_final = spark.sql("""
select
id,
city,
case when state in('NY', 'CA', 'London', 'Berlin', 'TX' ,'IL', 'OR', 'DC', 'WA')
then state
else 'Other'
end as state,
space,
cast(price as double) as price,
cast(bathrooms as double) as bathrooms,
cast(bedrooms as double) as bedrooms,
room_type,
host_is_super_host,
cancellation_policy,
cast(case when security_deposit is null
then 0.0
else security_deposit
end as double) as security_deposit,
price_per_bedroom,
cast(case when number_of_reviews is null
then 0.0
else number_of_reviews
end as double) as number_of_reviews,
cast(case when extra_people is null
then 0.0
else extra_people
end as double) as extra_people,
instant_bookable,
cast(case when cleaning_fee is null
then 0.0
else cleaning_fee
end as double) as cleaning_fee,
cast(case when review_scores_rating is null
then 80.0
else review_scores_rating
end as double) as review_scores_rating,
cast(case when square_feet is not null and square_feet > 100
then square_feet
when (square_feet is null or square_feet <=100) and (bedrooms is null or bedrooms = 0)
then 350.0
else 380 * bedrooms
end as double) as square_feet
from df_filtered
""").persist()
df_final.registerTempTable("df_final")
df_final.select("square_feet", "price", "bedrooms", "bathrooms", "cleaning_fee").describe().show()
print(df_final.count())
print(df_final.schema)
# Most popular cities
spark.sql("""
select
state,
count(*) as ct,
avg(price) as avg_price,
max(price) as max_price
from df_final
group by state
order by count(*) desc
""").show()
# Most expensive popular cities
spark.sql("""
select
city,
count(*) as ct,
avg(price) as avg_price,
max(price) as max_price
from df_final
group by city
order by avg(price) desc
""").filter("ct > 25").show()
"""
Explanation: Step 1: Clean, Filter, and Summarize the Data
End of explanation
"""
continuous_features = ["bathrooms", \
"bedrooms", \
"security_deposit", \
"cleaning_fee", \
"extra_people", \
"number_of_reviews", \
"square_feet", \
"review_scores_rating"]
categorical_features = ["room_type", \
"host_is_super_host", \
"cancellation_policy", \
"instant_bookable", \
"state"]
"""
Explanation: Step 2: Define Continous and Categorical Features
End of explanation
"""
[training_dataset, validation_dataset] = df_final.randomSplit([0.8, 0.2])
"""
Explanation: Step 3: Split Data into Training and Validation
End of explanation
"""
continuous_feature_assembler = VectorAssembler(inputCols=continuous_features, outputCol="unscaled_continuous_features")
continuous_feature_scaler = StandardScaler(inputCol="unscaled_continuous_features", outputCol="scaled_continuous_features", \
withStd=True, withMean=False)
"""
Explanation: Step 4: Continuous Feature Pipeline
End of explanation
"""
categorical_feature_indexers = [StringIndexer(inputCol=x, \
outputCol="{}_index".format(x)) \
for x in categorical_features]
categorical_feature_one_hot_encoders = [OneHotEncoder(inputCol=x.getOutputCol(), \
outputCol="oh_encoder_{}".format(x.getOutputCol() )) \
for x in categorical_feature_indexers]
"""
Explanation: Step 5: Categorical Feature Pipeline
End of explanation
"""
feature_cols_lr = [x.getOutputCol() \
for x in categorical_feature_one_hot_encoders]
feature_cols_lr.append("scaled_continuous_features")
feature_assembler_lr = VectorAssembler(inputCols=feature_cols_lr, \
outputCol="features_lr")
"""
Explanation: Step 6: Assemble our features and feature pipeline
End of explanation
"""
linear_regression = LinearRegression(featuresCol="features_lr", \
labelCol="price", \
predictionCol="price_prediction", \
maxIter=10, \
regParam=0.3, \
elasticNetParam=0.8)
estimators_lr = \
[continuous_feature_assembler, continuous_feature_scaler] \
+ categorical_feature_indexers + categorical_feature_one_hot_encoders \
+ [feature_assembler_lr] + [linear_regression]
pipeline = Pipeline(stages=estimators_lr)
pipeline_model = pipeline.fit(training_dataset)
print(pipeline_model)
"""
Explanation: Step 7: Train a Linear Regression Model
End of explanation
"""
from jpmml import toPMMLBytes
pmmlBytes = toPMMLBytes(spark, training_dataset, pipeline_model)
print(pmmlBytes.decode("utf-8"))
"""
Explanation: TODO: Step 8: Validate Linear Regression Model
Step 9: Convert PipelineModel to PMML
End of explanation
"""
import urllib.request
update_url = 'http://127.0.0.1:39040/update-pmml/pmml_airbnb'
update_headers = {}
update_headers['Content-type'] = 'application/xml'
req = urllib.request.Request(update_url, \
headers=update_headers, \
data=pmmlBytes)
resp = urllib.request.urlopen(req)
print(resp.status) # Should return Http Status 200
import urllib.parse
import json
# Note: You will need to run this twice.
# A fallback will trigger the first time. (bug)
evaluate_url = 'http://<your-ip>:39040/evaluate-pmml/pmml_airbnb'
evaluate_headers = {}
evaluate_headers['Content-type'] = 'application/json'
input_params = '{"bathrooms":2.0, \
"bedrooms":2.0, \
"security_deposit":175.00, \
"cleaning_fee":25.0, \
"extra_people":1.0, \
"number_of_reviews": 2.0, \
"square_feet": 250.0, \
"review_scores_rating": 2.0, \
"room_type": "Entire home/apt", \
"host_is_super_host": "0.0", \
"cancellation_policy": "flexible", \
"instant_bookable": "1.0", \
"state": "CA"}'
encoded_input_params = input_params.encode('utf-8')
req = urllib.request.Request(evaluate_url, \
headers=evaluate_headers, \
data=encoded_input_params)
resp = urllib.request.urlopen(req)
print(resp.read())
"""
Explanation: Deployment Option 1: Mutable Model Deployment
Deploy New Model to Live, Running Model Server
End of explanation
"""
!mkdir -p /root/src/pmml/airbnb/
with open('/root/src/pmml/airbnb/pmml_airbnb.pmml', 'wb') as f:
f.write(pmmlBytes)
!ls /root/src/pmml/airbnb/pmml_airbnb.pmml
"""
Explanation: Model Server Dashboard
Fill in <your-ip> below, then copy/paste to your browser
http://<your-ip>:47979/hystrix-dashboard/monitor/monitor.html?streams=%5B%7B%22name%22%3A%22%22%2C%22stream%22%3A%22http%3A%2F%2F<your-ip>%3A39043%2Fhystrix.stream%22%2C%22auth%22%3A%22%22%2C%22delay%22%3A%22%22%7D%2C%7B%22name%22%3A%22%22%2C%22stream%22%3A%22http%3A%2F%2F<your-ip>%3A39042%2Fhystrix.stream%22%2C%22auth%22%3A%22%22%2C%22delay%22%3A%22%22%7D%2C%7B%22name%22%3A%22%22%2C%22stream%22%3A%22http%3A%2F%2F<your-ip>%3A39041%2Fhystrix.stream%22%2C%22auth%22%3A%22%22%2C%22delay%22%3A%22%22%7D%2C%7B%22name%22%3A%22%22%2C%22stream%22%3A%22http%3A%2F%2F<your-ip>%3A39040%2Fhystrix.stream%22%2C%22auth%22%3A%22%22%2C%22delay%22%3A%22%22%7D%5D
Deployment Option 2: Immutable Model Deployment
Save Model to Disk
End of explanation
"""
!start-loadtest.sh $SOURCE_HOME/loadtest/RecommendationServiceStressTest-local-airbnb.jmx
"""
Explanation: TODO: Trigger Airflow to Build New Docker Image (i.e., via GitHub commit)
Load Test
End of explanation
"""
liganega/Gongsu-DataSci | previous/notes2017/old/NB-11-Printing_techniques.ipynb | gpl-3.0
a = "string"
b = "string1"
print a, b
print "The return value is", a
"""
Explanation: Various ways to print strings
In Python 2.x, the arguments to the print statement do not need to be enclosed in parentheses, and several values can be printed at once, separated by commas.
End of explanation
"""
print(a, b)
print("The return value is", a)
"""
Explanation: Caution: written as below, the output does not look the way you might expect (under Python 2 the values print as a tuple).
End of explanation
"""
print(a+' '+b)
print("The return value is" + " " + a)
"""
Explanation: The parenthesis-free style above, however, is not supported in Python 3.x,
so using the form below is recommended as the default.
End of explanation
"""
print(a),; print(b)
"""
Explanation: It can also be done as follows.
End of explanation
"""
print(a+b)
print("{}{}".format(a,b))
"""
Explanation: In the case above, a space is automatically inserted when a and b are printed.
To print them without a space, as stringstring1, there are two options:
Use the string concatenation (+) operator.
Use formatted printing.
End of explanation
"""
print("%s%s%d" % (a, b, 10))
from math import pi
"""
Explanation: Formatted printing lets you print data in many different forms. There are two main ways to specify a format:
Using %
Almost the same as the style used in C, Java, and similar languages.
Using the format keyword
A Python-specific style that supports more features than %.
In Java, the MessageFormat class's format method offers similar
functionality, but it is more complicated to use.
Printing strings with format specifiers (%)
%s: string format specifier
%d: int format specifier
End of explanation
"""
print("원주율값은 대략 %f이다." % pi)
print("원주율값은 대략 %e이다." % pi)
print("원주율값은 대략 %g이다." % pi)
"""
Explanation: Several format specifiers are available for floating-point numbers.
%f: shows 6 digits after the decimal point (rounding at the 7th digit).
%e: shows the value in exponential notation.
%g: shows whichever of the %f and %e forms is more compact.
End of explanation
"""
print("원주율값은 대략 %.10f이다." % pi)
print("원주율값은 대략 %.10e이다." % pi)
print("원주율값은 대략 %.10g이다." % pi)
"""
Explanation: The number of digits after the decimal point can be set arbitrarily.
End of explanation
"""
print("지금 사용하는 컴퓨터가 계산할 수 있는 원주율값은 대략 '%.50f'이다." % pi)
"""
Explanation: You can see that the value of pi is printed to about 50 digits after the decimal point. This reflects the limits of the computer in use; the available precision depends on the machine.
End of explanation
"""
print("%f"% pi)
print("%f"% pi**3)
print("%f"% pi**10)
"""
Explanation: When several values are shown, they are left-aligned by default, as in the example below.
End of explanation
"""
print("%12f"% pi)
print("%12f"% pi**3)
print("%12f"% pi**10)
print("%12e"% pi)
print("%12e"% pi**3)
print("%12e"% pi**10)
print("%12g"% pi)
print("%12g"% pi**3)
print("%12g"% pi**10)
print("%16.10f"% pi)
print("%16.10f"% pi**3)
print("%16.10f"% pi**10)
print("%16.10e"% pi)
print("%16.10e"% pi**3)
print("%16.10e"% pi**10)
print("%16.10g"% pi)
print("%16.10g"% pi**3)
print("%16.10g"% pi**10)
"""
Explanation: To right-align the values, use the style below.
The width 12 was chosen because the longest value, pi**10 = 93648.047476, is 12 characters wide including the decimal point (.).
If the representation changes, a different width must be chosen.
End of explanation
"""
print("%012f"% pi)
print("%012f"% pi**3)
print("%012f"% pi**10)
print("%012e"% pi)
print("%012e"% pi**3)
print("%012e"% pi**10)
print("%012g"% pi)
print("%012g"% pi**3)
print("%012g"% pi**10)
print("%016.10f"% pi)
print("%016.10f"% pi**3)
print("%016.10f"% pi**10)
print("%016.10e"% pi)
print("%016.10e"% pi**3)
print("%016.10e"% pi**10)
print("%016.10g"% pi)
print("%016.10g"% pi**3)
print("%016.10g"% pi**10)
"""
Explanation: Empty positions can also be filled with the digit 0.
End of explanation
"""
print("%12.20f" % pi**19)
"""
Explanation: The field width must be chosen with the expected result in mind.
In the case below, the width is too small and is simply ignored.
End of explanation
"""
print("{}{}{}".format(a, b, 10))
print("{:s}{:s}{:d}".format(a, b, 10))
print("{:f}".format(pi))
print("{:f}".format(pi**3))
print("{:f}".format(pi**10))
print("{:12f}".format(pi))
print("{:12f}".format(pi**3))
print("{:12f}".format(pi**10))
print("{:012f}".format(pi))
print("{:012f}".format(pi**3))
print("{:012f}".format(pi**10))
"""
Explanation: Printing strings with the format function
The format function can reproduce the same results as the % format specifiers.
End of explanation
"""
print("{2}{1}{0}".format(a, b, 10))
"""
Explanation: The format function also supports positional indexing.
End of explanation
"""
print("{s1}{s2}{s1}".format(s1=a, s2=b, i1=10))
print("{i1}{s2}{s1}".format(s1=a, s2=b, i1=10))
"""
Explanation: Keywords can also be used for indexing.
End of explanation
"""
print("{1:12f}, {0:12f}".format(pi, pi**3))
print("{p1:12f}, {p0:12f}".format(p0=pi, p1=pi**3))
"""
Explanation: Indexing and format specifiers can be used together.
End of explanation
"""
a = 3.141592
print(a)
a.__str__()
str(a)
b = [2, 3.5, ['school', 'bus'], (1,2)]
str(b)
"""
Explanation: Whether to use the % operator style or the format function style depends on the situation.
The % style is the one generally supported in C, Java, and similar languages,
while the format style offers a wider range of uses.
Advanced techniques
The str and repr functions are worth knowing about.
The str function
Every object in Python has a __str__ method, which is used to display that object. For example, the print command always invokes the object's __str__ method.
There is also a str function that performs the same role.
End of explanation
"""
type(str(b))
"""
Explanation: The str function returns a string value.
End of explanation
"""
print(b)
"%s" % b
"{}".format(b)
"""
Explanation: As mentioned above, the __str__ method is used by default not only by the print function but also when formatting.
End of explanation
"""
c = str(pi)
pi1 = eval(c)
pi1
"""
Explanation: The repr function
Alongside __str__, every Python object also has a __repr__ method.
It performs a similar role to __str__, but uses a more precise representation of the value.
Combined with the eval function, it can restore the original value.
Calling the repr function invokes the object's __repr__ method.
End of explanation
"""
pi1 - pi
"""
Explanation: However, pi1 cannot be used to restore the original value of pi.
End of explanation
"""
pi2 = repr(pi)
type(pi2)
eval(pi2) - pi
"""
Explanation: With repr, the eval function can restore the value of pi.
End of explanation
"""
"%s" % pi
"%r" % pi
"%d" % pi
"""
Explanation: The format specifier that uses the repr function is %r.
End of explanation
"""
tensorflow/docs | tools/templates/notebook.ipynb | apache-2.0
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
"""
import tensorflow as tf
import numpy as np
"""
Explanation: Title
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/not_a_real_link"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/tools/templates/notebook.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/tools/templates/notebook.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/tools/templates/notebook.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
<td>
<a href="https://tfhub.dev/"><img src="https://www.tensorflow.org/images/hub_logo_32px.png" />See TF Hub model</a>
</td>
</table>
[Update button links]
See model on TFHub is only required if the notebook uses a model from tfhub.dev
Overview
[Include a paragraph or two explaining what this example demonstrates, who should be interested in it, and what you need to know before you get started.]
Setup
[Put all your imports and installs up into a setup section.]
End of explanation
"""
# Build the model
model = tf.keras.Sequential([
tf.keras.layers.Dense(10, activation='relu', input_shape=(None, 5)),
tf.keras.layers.Dense(3)
])
"""
Explanation: Resources
TensorFlow documentation contributor guide
TensorFlow documentation style guide
Google developer documentation style guide
Notebook style
Include the collapsed license at the top (uses the Colab "Form" mode to hide the cells).
Save the notebook with the table of contents open.
Use one H1 header for the title.
Include the button-bar immediately after the H1.
Headers that are H4 and below are not visible in the navigation
bar of tensorflow.org.
Include an overview section before any code.
Put all your installs and imports in a setup section.
Keep code and text cells as brief as possible.
Break text cells at headings
Break code cells between "building" and "running", and between "printing one result" and "printing another result".
Necessary but uninteresting code (like plotting logic) should be hidden in a toggleable code cell by putting #@title as the first line.
Code style
Notebooks are for people. Write code optimized for clarity.
Use the Google Python Style Guide, where applicable.
tensorflow.org doesn't support interactive plots.
Keep examples quick. Use small datasets, or small slices of datasets. Don't train to convergence, train until it's obvious it's making progress.
If you define a function, run it and show us what it does before using it in another function.
Demonstrate small parts before combining them into something more complex, like this:
End of explanation
"""
result = model(tf.constant(np.random.randn(10,5), dtype = tf.float32)).numpy()
print("min:", result.min())
print("max:", result.max())
print("mean:", result.mean())
print("shape:", result.shape)
"""
Explanation: Run the model on a single batch of data, and inspect the output:
End of explanation
"""
model.compile(optimizer=tf.keras.optimizers.Adam(),
loss=tf.keras.losses.categorical_crossentropy)
"""
Explanation: Compile the model for training:
End of explanation
"""
robertoalotufo/ia898 | master/tutorial_python_1_1.ipynb | mit
a = 3
print(type(a))
b = 3.14
print(type(b))
c = 3 + 4j
print(type(c))
d = False
print(type(d))
print(a + b)
print(b * c)
print(c / a)
"""
Explanation: # Introduction to Python in the Adessowiki environment
## Variable types in Python, with emphasis on sequence types
Python is a high-level, interpreted, imperative, object-oriented programming language
with dynamic, strong typing, which also has the following characteristics:
- There is no pre-declaration of variables, and variable types are determined dynamically.
- Blocks are controlled by indentation alone; there are no delimiters such as BEGIN and END or { and }.
- It offers high-level data types: strings, lists, tuples, dictionaries, files, classes.
- It is object-oriented.
It is a modern language, well suited to developing both general-purpose and scientific applications. For scientific
applications, Python has a very important and efficient package for processing multidimensional arrays: *Numpy*.
In its native form, Python supports the following variable types:
| Variable type | Description                                 | Example syntax          |
|---------------|---------------------------------------------|-------------------------|
| *int*         | Integer variable                            | a = 103458              |
| *float*       | Floating-point variable                     | pi = 3.14159265         |
| *bool*        | *Boolean* variable - *True* or *False*      | a = False               |
| *complex*     | Complex-number variable                     | c = 2+3j                |
| *str*         | ASCII character-string variable             | a = "Exemplo"           |
| *list*        | Heterogeneous list whose values can change  | lista = [4,'eu',1]      |
| *tuple*       | Heterogeneous, immutable tuple              | tupla = (1,'eu',2)      |
| *dict*        | Associative set of values                   | dic = {1:'eu',2:'você'} |
## Numeric types:
- Declaring integer, boolean, floating-point and complex variables and performing some
simple operations
End of explanation
"""
nome1 = 'Faraday'
nome2 = "Maxwell"
print('string of type:', type(nome1), 'nome1:', nome1, "length:", len(nome1))
"""
Explanation: Note that in operations involving elements of different types, the language
converts the elements to the appropriate type, following the hierarchy: complex > floating point > integer.
Sequence types:
Python has three main sequence types: lists, tuples and character strings (strings).
Strings:
Strings can be declared using single, double or triple quotes.
Strings are immutable vectors of characters. The length of a string can be
computed using len.
End of explanation
"""
print('First character of', nome1, 'is:', nome1[0])
print('Last character of', nome1, 'is:', nome1[-1])
print('Repeating the string 3 times:', 3 * nome1)
"""
Explanation: A string is an immutable vector of characters. It is possible to index a single character and to apply Python's
consistent rules for handling sequences, such as slicing and the various forms of indexing.
In Python the first element is always indexed as zero, so a string of 5 characters is
indexed from 0 to 4. It is also possible to index elements from right to left using negative
indices; thus the last element of the vector can be indexed with -1.
End of explanation
"""
lista1 = [1, 1.1, 'um'] # Lists can contain elements of different types.
lista2 = [3+4j, lista1] # A list can even contain other lists as elements!
print('type of lista1=', type(lista1))
print('lista2=', lista2)
lista2[1] = 'Faraday' # Unlike strings, new values can be assigned to list elements.
print('lista2=', lista2)
lista3 = lista1 + lista2 # Concatenating 2 lists
print('lista3=',lista3)
print('concatenating 2 times:',2*lista3)
"""
Explanation: Lists:
A list is a sequence of elements of different types that can be indexed, modified and operated on.
Lists are defined as comma-separated elements enclosed in square brackets.
End of explanation
"""
# Declaring tuples
tupla1 = () # empty tuple
tupla2 = ('Gauss',) # tuple with a single element. Note the comma.
tupla3 = (1.1, 'Ohm', 3+4j)
tupla4 = 3, 'aqui', True
print('tupla1=', tupla1)
print('tupla2=', tupla2)
print('tupla3=', tupla3)
print('tupla4=', tupla4)
print('type of tupla3=', type(tupla3))
"""
Explanation: Tuples:
A tuple is similar to a list, but its values are immutable. A tuple is a sequence of objects separated
by commas which can, optionally, be enclosed in parentheses. A tuple containing
a single element must be followed by a comma.
Attention:
Understanding tuples is very important, and they will be used heavily in this
course, since many parameters of NumPy's ndarray use tuples.
End of explanation
"""
s = 'abcdefg'
print('s=',s)
print('s[0:2] =', s[0:2]) # characters from position 0 (inclusive) up to 2 (exclusive)
print('s[2:5] =', s[2:5]) # characters from position 2 (inclusive) up to 5 (exclusive)
"""
Explanation: Slicing in sequence types
Besides being indexable, sequence types such as lists, tuples and strings also allow
subsets to be selected through the concept of slicing.
For example:
End of explanation
"""
s = 'abcdefg'
print('s=',s)
print('s[:2] =', s[:2]) # characters from the start up to 2 (exclusive)
print('s[2:] =', s[2:]) # characters from position 2 (inclusive) to the end of the string
print('s[-2:] =', s[-2:]) # last 2 characters
"""
Explanation: When the start is zero and the end is the length of the string, they can be omitted. See the
examples:
End of explanation
"""
s = 'abcdefg'
print('s=',s)
print('s[2:5]=', s[2:5])
print('s[0:5:2]=',s[0:5:2])
print('s[::2]=', s[::2])
print('s[:5]=', s[:5])
print('s[3:]=', s[3:])
print('s[::-1]=', s[::-1])
"""
Explanation: Note that the start position is always inclusive and the end position is always exclusive.
This is done so that the concatenation of s[:i] and s[i:] is equal to s.
Slicing also allows a third, optional value: step.
For those familiar with the C language, the 3 slicing parameters are similar to the for loop:
|for statement                              | slicing               |
|-------------------------------------------|-----------------------|
|for (i=start; i < end; i += step) a[i]     | a[start:end:step]     |
See examples of indexing with slicing on a string of 7 characters, indexed 0 to 6:
|slice    | indices     | explanation                                       |
|---------|-------------|---------------------------------------------------|
| 0:5     |0,1,2,3,4    |goes from 0 up to 4, which is less than 5          |
| 2:5     |2,3,4        |goes from 2 up to 4                                |
| 0:5:2   |0,2,4        |goes from 0 up to 4, stepping by 2                 |
| ::2     |0,2,4,6      |goes from the start to the end, stepping by 2      |
| :5      |0,1,2,3,4    |goes from the start up to 4, which is less than 5  |
| 3:      |3,4,5,6      |goes from 3 to the end                             |
| ::-1    |6,5,4,3,2,1,0|goes from the end (6) back to the start            |
See these examples applied to the string 'abcdefg':
End of explanation
"""
s = "abc"
s1,s2,s3 = s
print('s1:',s1)
print('s2:',s2)
print('s3:',s3)
list = [1,2,3]
t = 8,9,True
print('list=',list)
list = t
print('list=',list)
(_,a,_) = t
print('a=',a)
"""
Explanation: Attention:
This concept of *slicing* will be essential in this course. It can be applied to strings,
tuples, lists and, above all, to NumPy's ``ndarray``. Make sure you understand it fully.
Assignment in sequence types
Strings, tuples and lists can all be unpacked through assignment.
What matters is that the mapping is consistent. Remember that the only sequence
that is mutable, i.e. that can be modified by assignment, is the list.
Study the examples below:
End of explanation
"""
s = 'formatting integer:%d, float:%f, string:%s' % (5, 3.2, 'alo')
print(s)
"""
Explanation: String formatting for printing
A string can be formatted with a syntax similar to sprintf in C/C++, in the form:
string % tuple. We will use this model extensively to put captions on images. Examples:
End of explanation
"""
dict1 = {'blue':135,'green':12.34,'red':'ocean'} # defining a dictionary
print(type(dict1))
print(dict1)
print(dict1['blue'])
print(dict1.keys()) # Show the dictionary's keys
del dict1['blue'] # Delete the element with the key 'blue'
print(dict1.keys()) # Show the keys after the element with key 'blue' is removed
"""
Explanation: Other types
There are also Dictionaries and Sets; however, they will not be used during this course.
Dictionaries:
Dictionaries can be defined as associative lists which, instead of associating their elements
with numeric indices, associate them with keywords.
Declaring dictionaries and performing some simple operations
End of explanation
"""
lista1 = ['apple', 'orange', 'apple', 'pear', 'orange', 'banana']
lista2 = ['red', 'blue', 'green','red','red']
conjunto1 = set(lista1) # Defining a set
conjunto2 = set(lista2)
print(conjunto1) # Note that repeated elements are counted only once
print((type(conjunto1)))
print((conjunto1 | conjunto2)) # Union of 2 sets
"""
Explanation: Sets
Sets are collections of elements that have no ordering and contain no
repeated elements.
Declaring sets and performing some simple operations
End of explanation
"""
|
steinam/teacher | jup_notebooks/datenbanken/.ipynb_checkpoints/12FI1_Abschlusstest-checkpoint.ipynb | mit | %load_ext sql
"""
Explanation: Class material for the chamber exam (Kammerprüfung)
End of explanation
"""
%sql mysql://steinam:steinam@localhost/sommer_2014
"""
Explanation: Sommer_2014
End of explanation
"""
%%sql
select * from artikel
where Art_Bezeichnung like '%Schmerzmittel%' or
Art_Bezeichnung like '%schmerzmittel%';
"""
Explanation: Question 1
Write an SQL query that lists all articles whose descriptions contain the character strings "Schmerzmittel" or "schmerzmittel". All attributes should be output for each article.
Solution
End of explanation
"""
%%sql
select k.Kd_firma, sum(rp.RgPos_Menge * rp.RgPos_Preis) as Umsatz
from Kunde k left join Rechnung r
on k.Kd_Id = r.Rg_Kd_ID
inner join Rechnungsposition rp
on r.Rg_ID = rp.RgPos_RgID
group by k.`Kd_Firma`
order by Umsatz desc;
%%sql
-- Original solution yields the same result
select k.`Kd_Firma`,
(select sum(RgPos_menge * RgPos_Preis)
from `rechnungsposition` rp, rechnung r
where r.`Rg_ID` = `rp`.`RgPos_RgID` and r.`Rg_Kd_ID` = k.`Kd_ID`) as Umsatz
from kunde k order by Umsatz desc
"""
Explanation: Question 2
Write a query that lists all customers and their revenues. All attributes should be output for each customer. The list should be sorted by revenue in descending order.
Solution
End of explanation
"""
%%sql
-- my solution
select artikel.*, sum(RgPos_Menge) as Menge, count(RgPos_ID) as Anzahl
from artikel inner join `rechnungsposition`
where `rechnungsposition`.`RgPos_ArtID` = `artikel`.`Art_ID`
group by artikel.`Art_ID`
%%sql
-- reference solution
select artikel.* ,
(select sum(RgPOS_Menge) from Rechnungsposition rp
where rp.RgPos_ArtID = artikel.Art_ID) as Menge,
(select count(RgPOS_menge) from Rechnungsposition rp
where rp.RgPos_ArtID = artikel.Art_ID) as Anzahl
from Artikel
"""
Explanation: Question 3
Write an SQL query that determines the following for each article:
- The total quantity sold
- The number of invoice line items
Solution
End of explanation
"""
%%sql
-- Original
select left(kunde.`Kd_PLZ`,1) as Region,
sum(`rechnungsposition`.`RgPos_Menge` * `rechnungsposition`.`RgPos_Preis`) as Summe
from kunde left join rechnung
on kunde.`Kd_ID` = rechnung.`Rg_Kd_ID`
left join rechnungsposition
on `rechnung`.`Rg_ID` = `rechnungsposition`.`RgPos_RgID`
group by Region
order by Summe;
%%sql
-- Inner join changes nothing
select left(kunde.`Kd_PLZ`,1) as Region,
sum(`rechnungsposition`.`RgPos_Menge` * `rechnungsposition`.`RgPos_Preis`) as Summe
from kunde inner join rechnung
on kunde.`Kd_ID` = rechnung.`Rg_Kd_ID`
inner join rechnungsposition
on `rechnung`.`Rg_ID` = `rechnungsposition`.`RgPos_RgID`
group by Region
order by Summe;
"""
Explanation: Question 4
Germany is divided into 10 postal-code regions (0-9, the first digit of the PLZ).
Write an SQL query for a list that shows the total revenue for each PLZ region (0-9).
The list should be sorted by total revenue in descending order.
Solution
End of explanation
"""
%%sql
select kunde.*, umsatz from kunde
inner join (
select (RgPos_menge * RgPos_Preis) as Umsatz, kd_id
from `rechnungsposition`
inner join rechnung on `rechnungsposition`.`RgPos_ID` = `rechnung`.`Rg_ID`
inner join kunde on `rechnung`.`Rg_Kd_ID` = Kunde.`Kd_ID`
group by `Kd_ID`
) a
on Kunde.`Kd_ID` = a.Kd_ID
order by umsatz desc;
"""
Explanation: Heiko Mader
Direct quote: "I believe it is correct :-)"
Task 2
The syntax runs, but the result is wrong
End of explanation
"""
%%sql
select a.*, mengeGesamt,anzahlRechPos
from artikel a
Inner join (
select SUM(RgPos_menge) as mengeGesamt, art_id
from `rechnungsposition` inner join artikel
on `rechnungsposition`.`RgPos_ArtID` = artikel.`Art_ID`
group by art_id
) b on a.`Art_ID` = b.art_id
Inner join
(select count(*) as anzahlRechPos, art_id
from `rechnungsposition` inner join artikel
on `rechnungsposition`.`RgPos_ArtID` = artikel.`Art_ID`
group by art_id
) c on a.`Art_ID` = c.art_id
"""
Explanation: Task 3
End of explanation
"""
%%sql
select gebiet, umsatz from `kunde`
inner join (
select kd_plz as gebiet, kd_id from `kunde`
where kd_plz in
(0%,1%,2%,3%,4%,5%,6%,7%,8%,9%)
group by kd_id
) a on kunde.`Kd_ID` = b.kd_id
inner join (
select rgPos_Menge * rgPos_Preis as Umsatz2, kd_id
from `rechnungsposition` inner join
rechnung on `rechnungsposition`.`RgPos_RgID` = rechnung.`Rg_ID`
inner join kunde on `rechnung`.`Rg_Kd_ID` = kunde.`Kd_ID`
group by kd_id
) b on `kunde`.`Kd_ID` = b.kd_id
order by umsatz desc;
"""
Explanation: Task 4
The original from H.M. produces an error
End of explanation
"""
%%sql
select gebiet, umsatz from `kunde`
inner join (
select kd_plz as gebiet, kd_id from `kunde`
where left(kd_plz,1) in
(0,1,2,3,4,5,6,7,8,9)
group by kd_id
) a on kunde.`Kd_ID` = a.kd_id
inner join (
select sum(rgPos_Menge * rgPos_Preis) as Umsatz, kd_id
from `rechnungsposition` inner join
rechnung on `rechnungsposition`.`RgPos_RgID` = rechnung.`Rg_ID`
inner join kunde on `rechnung`.`Rg_Kd_ID` = kunde.`Kd_ID`
group by kd_id
) b on `kunde`.`Kd_ID` = b.kd_id
order by umsatz desc;
"""
Explanation: Slight changes lead to an "almost correct" result.
However, it multiplies only the first row from the Rechnungsposition table in each group (see 2527,2 for PLZ 9).
The same mistake is probably present in task 3 as well, but there it may not be noticed.
End of explanation
"""
gojomo/gensim | docs/notebooks/WordRank_wrapper_quickstart.ipynb | lgpl-2.1
from gensim.models.wrappers import Wordrank
wr_path = 'wordrank' # path to Wordrank directory
out_dir = 'model' # name of output directory to save data to
data = '../../gensim/test/test_data/lee.cor' # sample corpus
model = Wordrank.train(wr_path, data, out_dir, iter=11, dump_period=5)
"""
Explanation: WordRank wrapper tutorial on Lee Corpus
WordRank is a new word embedding algorithm which captures the semantic similarities in a text data well. See this notebook for it's comparisons to other popular embedding models. This tutorial will serve as a guide to use the WordRank wrapper in gensim. You need to install WordRank before proceeding with this tutorial.
Train model
We'll use the Lee corpus for training, which is already available in gensim. For Wordrank, the two parameters dump_period and iter need to be in sync, as the embedding file is dumped at the start of the next iteration. For example, if you want results after 10 iterations, you need to use iter=11, and dump_period can be any value that evenly divides the resulting iteration, in this case 2 or 5.
End of explanation
"""
model.most_similar('President')
model.similarity('President', 'military')
"""
Explanation: Now, you can use any of the Keyed Vector function in gensim, on this model for further tasks. For example,
End of explanation
"""
wr_word_embedding = 'wordrank.words'
vocab_file = 'vocab.txt'
model = Wordrank.load_wordrank_model(wr_word_embedding, vocab_file, sorted_vocab=1)
"""
Explanation: As Wordrank provides two sets of embeddings, the word and context embedding, you can obtain their addition by setting ensemble parameter to 1 in the train method.
Save and Load models
In case you have trained the model yourself using the demo scripts in Wordrank, you can simply load the embedding files in gensim.
Also, Wordrank doesn't return the embeddings sorted according to word frequency in the corpus, so you can use the sorted_vocab parameter in the load method. For that, you need to provide the vocabulary file generated in the 'matrix.toy' directory (if you used the default names in the demo), where all the metadata is stored.
End of explanation
"""
wr_context_file = 'wordrank.contexts'
model = Wordrank.load_wordrank_model(wr_word_embedding, vocab_file, wr_context_file, sorted_vocab=1, ensemble=1)
"""
Explanation: If you want to load the ensemble embedding, you similarly need to provide the context embedding file and set ensemble to 1 in load_wordrank_model method.
End of explanation
"""
from tempfile import mkstemp
fs, temp_path = mkstemp("gensim_temp") # creates a temp file
model.save(temp_path) # save the model
"""
Explanation: You can save these sorted embeddings using the standard gensim methods.
End of explanation
"""
word_analogies_file = 'datasets/questions-words.txt'
model.accuracy(word_analogies_file)
word_similarity_file = 'datasets/ws-353.txt'
model.evaluate_word_pairs(word_similarity_file)
"""
Explanation: Evaluating models
Now that the embeddings are loaded in Word2Vec format and sorted according to the word frequencies in corpus, you can use the evaluations provided by gensim on this model.
For example, it can be evaluated on following Word Analogies and Word Similarity benchmarks.
End of explanation
"""
peakrisk/peakrisk | posts/comparing-pressure-data-from-two-sensors.ipynb | gpl-3.0
# Tell matplotlib to plot in line
%matplotlib inline
import datetime
# import pandas
import pandas
# seaborn magically adds a layer of goodness on top of Matplotlib
# mostly this is just changing matplotlib defaults, but it does also
# provide some higher level plotting methods.
import seaborn
# Tell seaborn to set things up
seaborn.set()
# input files: the data from the two sensors
infiles = ["../files/kittycam_weather.csv", "../files/pijessie_weather.csv"]
# Read the data
data = []
for infile in infiles:
data.append(pandas.read_csv(infile, index_col='date', parse_dates=['date']))
# take a look at what we got
data[0].describe()
# plots are always good
data[0].plot(subplots=True)
"""
Explanation: I have had two simple raspberry pi weather stations running for a while now.
Both have pressure, temperature and humidity sensors.
One I have in the carefully controlled environment of my study, the
other is hanging out of the window.
The study one is known as pijessie as it started life as a Raspberry
Pi running the Jessie version of Raspbian.
The outside station is known as kittycam as I intend at some point
to attach a camera so I can watch our cat come and go.
For a while I have been noticing that the pressure values have been
quite a way apart. The software I am running includes a conversion
to altitude, and I find these numbers more natural for me to think about.
The altitude conversion assumes the pressure at sea level is 1013.25
hPa, which is the mean pressure at sea level.
When the pressure is higher than this the altitude comes out below sea
level, when pressure is lower than this above sea level.
As always, Wikipedia has good information on this:
https://en.wikipedia.org/wiki/Atmospheric_pressure
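The conversion itself is essentially the international barometric formula; here is a rough sketch (the exact constants and reference pressure used by the station software are an assumption on my part):

```python
# Rough sketch of a pressure-to-altitude conversion using the
# international barometric formula; the station software's exact
# constants may differ.
def pressure_to_altitude(p_hpa, p0_hpa=1013.25):
    """Approximate altitude in metres from pressure in hPa.

    p0_hpa is the assumed sea-level pressure; the default is the
    standard 1013.25 hPa.
    """
    return 44330.0 * (1.0 - (p_hpa / p0_hpa) ** (1.0 / 5.255))
```

With this formula, a pressure above the assumed sea-level value gives a negative altitude, exactly as described above.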
For a while I had been noticing the two sensors giving values
differing by about 10 metres altitude.
I had put this down to the sensors not being calibrated accurately,
but also noticed that kittycam was more prone to weird glitches.
Now the glitches I put down to the fact I have one process collecting
data every minute and another process creating a display on my laptop
so I can glance over and see what the weather is doing. The latter
was just polling the sensor every 10 minutes.
The code does not do anything smart like get a lock and my guess was
that the two processes were occasionally trampling on each other's feet.
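For what it is worth, a minimal sketch of how the two processes could serialise their sensor access with an advisory file lock (this is hypothetical, not what the station code actually does):

```python
# Hypothetical sketch: serialising sensor reads between two processes
# with an advisory file lock (POSIX only).
import fcntl

def read_sensor_locked(read_fn, lockfile='/tmp/sensor.lock'):
    """Call read_fn() while holding an exclusive advisory lock."""
    with open(lockfile, 'w') as fh:
        fcntl.flock(fh, fcntl.LOCK_EX)  # blocks until the other process releases
        try:
            return read_fn()
        finally:
            fcntl.flock(fh, fcntl.LOCK_UN)
```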
Long story short, I decided to take a closer look.
End of explanation
"""
# align returns two new dataframes, now aligned
d1, d2 = data[0].align(data[1])
# have a look, note the count is just the valid data.
# Things have been aligned, but missing values are set to NaN
d1.describe()
# Use interpolation to fill in the missing values
d1 = d1.interpolate(method='time')
d2 = d2.interpolate(method='time')
# Now plot
d1.altitude.plot()
print(len(d1))
# For convenience, add a new series to d1 with the altitude data from d2
d1['altitude2'] = d2.altitude
# Now plot the two
d1[['altitude', 'altitude2']][10000:30000].clip(-60,60).plot()
(d1.altitude - d1.altitude2)[10000:30000].clip(-20,15).plot()
"""
Explanation: Now the two sets of data have different indices since the processes
collecting the data are not in sync.
So we need to align the data and then fill in the missing values.
End of explanation
"""
|
DJCordhose/speed-limit-signs | notebooks/cnn-train-augmented.ipynb | apache-2.0 | import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
%pylab inline
import matplotlib.pylab as plt
import numpy as np
from distutils.version import StrictVersion
import sklearn
print(sklearn.__version__)
assert StrictVersion(sklearn.__version__ ) >= StrictVersion('0.18.1')
import tensorflow as tf
tf.logging.set_verbosity(tf.logging.ERROR)
print(tf.__version__)
assert StrictVersion(tf.__version__) >= StrictVersion('1.1.0')
import keras
print(keras.__version__)
assert StrictVersion(keras.__version__) >= StrictVersion('2.0.0')
import pandas as pd
print(pd.__version__)
assert StrictVersion(pd.__version__) >= StrictVersion('0.19.0')
"""
Explanation: CNN trained with the augmented data
End of explanation
"""
!curl -O https://raw.githubusercontent.com/DJCordhose/speed-limit-signs/master/data/speed-limit-signs.zip
!curl -O https://raw.githubusercontent.com/DJCordhose/speed-limit-signs/master/data/augmented-signs.zip
# https://docs.python.org/3/library/zipfile.html
from zipfile import ZipFile
zip = ZipFile('speed-limit-signs.zip')
zip.extractall('.')
zip = ZipFile('augmented-signs.zip')
zip.extractall('.')
!ls -lh
import os
import skimage.data
import skimage.transform
from keras.utils.np_utils import to_categorical
import numpy as np
def load_data(data_dir, type=".ppm"):
num_categories = 6
# Get all subdirectories of data_dir. Each represents a label.
directories = [d for d in os.listdir(data_dir)
if os.path.isdir(os.path.join(data_dir, d))]
# Loop through the label directories and collect the data in
# two lists, labels and images.
labels = []
images = []
for d in directories:
label_dir = os.path.join(data_dir, d)
file_names = [os.path.join(label_dir, f) for f in os.listdir(label_dir) if f.endswith(type)]
    # For each label, load its images and add them to the images list.
# And add the label number (i.e. directory name) to the labels list.
for f in file_names:
images.append(skimage.data.imread(f))
labels.append(int(d))
images64 = [skimage.transform.resize(image, (64, 64)) for image in images]
y = np.array(labels)
y = to_categorical(y, num_categories)
X = np.array(images64)
return X, y
# Load datasets.
ROOT_PATH = "./"
original_dir = os.path.join(ROOT_PATH, "speed-limit-signs")
original_images, original_labels = load_data(original_dir, type=".ppm")
data_dir = os.path.join(ROOT_PATH, "augmented-signs")
augmented_images, augmented_labels = load_data(data_dir, type=".png")
all_images = np.vstack((original_images, augmented_images))
all_labels = np.vstack((original_labels, augmented_labels))
# https://stackoverflow.com/a/4602224
p = np.random.permutation(len(all_labels))
shuffled_images = all_images[p]
shuffled_labels = all_labels[p]
# Turn this around if you want the large training set using augmented data or the original one
# X, y = original_images, original_labels
# X, y = augmented_images, augmented_labels
X, y = shuffled_images, shuffled_labels
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42, stratify=y)
X_train.shape, y_train.shape
from keras.models import Model
from keras.layers import Dense, Dropout, Activation, Flatten, Input
from keras.layers import Convolution2D, MaxPooling2D
# this is important, try and vary between .4 and .75
drop_out = 0.7
# input tensor for a 3-channel 64x64 image
inputs = Input(shape=(64, 64, 3))
# one block of convolutional layers
x = Convolution2D(64, 3, activation='relu', padding='same')(inputs)
x = Convolution2D(64, 3, activation='relu', padding='same')(x)
x = Convolution2D(64, 3, activation='relu', padding='same')(x)
x = MaxPooling2D(pool_size=(2, 2))(x)
x = Dropout(drop_out)(x)
# one more block
x = Convolution2D(128, 3, activation='relu', padding='same')(x)
x = Convolution2D(128, 3, activation='relu', padding='same')(x)
x = MaxPooling2D(pool_size=(2, 2))(x)
x = Dropout(drop_out)(x)
# one more block
x = Convolution2D(256, 3, activation='relu', padding='same')(x)
x = MaxPooling2D(pool_size=(2, 2))(x)
x = Dropout(drop_out)(x)
x = Flatten()(x)
x = Dense(256, activation='relu')(x)
x = Dropout(drop_out)(x)
# softmax activation, 6 categories
predictions = Dense(6, activation='softmax')(x)
model = Model(inputs=inputs, outputs=predictions)
model.summary()
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
# https://keras.io/callbacks/#tensorboard
tb_callback = keras.callbacks.TensorBoard(log_dir='./tf_log')
# To start tensorboard
# tensorboard --logdir=/mnt/c/Users/olive/Development/ml/tf_log
# open http://localhost:6006
early_stopping_callback = keras.callbacks.EarlyStopping(monitor='val_loss', patience=100, verbose=1)
checkpoint_callback = keras.callbacks.ModelCheckpoint('./model-checkpoints/weights.epoch-{epoch:02d}-val_loss-{val_loss:.2f}.hdf5');
!rm -r tf_log
!rm -r model-checkpoints
!mkdir model-checkpoints
# Depends on hardware GPU architecture, set as high as possible (this works well on K80)
BATCH_SIZE = 1000
# %time model.fit(X_train, y_train, epochs=500, batch_size=BATCH_SIZE, validation_split=0.2, callbacks=[tb_callback, early_stopping_callback])
%time model.fit(X_train, y_train, epochs=1500, batch_size=BATCH_SIZE, validation_split=0.3, callbacks=[tb_callback])
# %time model.fit(X_train, y_train, epochs=500, batch_size=BATCH_SIZE, validation_split=0.2)
"""
Explanation: Loading and shuffling the image data
We use a mix of the original and the transformed image data
End of explanation
"""
model.save('conv-vgg-augmented.hdf5')
!ls -lh
# https://transfer.sh/
# Keeps your upload for 14 days
!curl --upload-file conv-vgg-augmented.hdf5 https://transfer.sh/conv-vgg-augmented.hdf5
# Pretrained model
# https://transfer.sh/Cvcar/conv-vgg-augmented.hdf5
"""
Explanation: Saving the model
Our model is 55 MB, which is a really large model
End of explanation
"""
train_loss, train_accuracy = model.evaluate(X_train, y_train, batch_size=BATCH_SIZE)
train_loss, train_accuracy
test_loss, test_accuracy = model.evaluate(X_test, y_test, batch_size=BATCH_SIZE)
test_loss, test_accuracy
"""
Explanation: Evaluation
End of explanation
"""
import random
# Pick 10 random images for test data set
random.seed(4) # to make this deterministic
sample_indexes = random.sample(range(len(X_test)), 10)
sample_images = [X_test[i] for i in sample_indexes]
sample_labels = [y_test[i] for i in sample_indexes]
ground_truth = np.argmax(sample_labels, axis=1)
X_sample = np.array(sample_images)
prediction = model.predict(X_sample)
predicted_categories = np.argmax(prediction, axis=1)
predicted_categories
# Display the predictions and the ground truth visually.
def display_prediction (images, true_labels, predicted_labels):
fig = plt.figure(figsize=(10, 10))
for i in range(len(true_labels)):
truth = true_labels[i]
prediction = predicted_labels[i]
plt.subplot(5, 2,1+i)
plt.axis('off')
color='green' if truth == prediction else 'red'
plt.text(80, 10, "Truth: {0}\nPrediction: {1}".format(truth, prediction),
fontsize=12, color=color)
plt.imshow(images[i])
display_prediction(sample_images, ground_truth, predicted_categories)
"""
Explanation: Trying it out on a few test samples
End of explanation
"""
|
RTHMaK/RPGOne | scipy-2017-sklearn-master/notebooks/23 Out-of-core Learning Large Scale Text Classification.ipynb | apache-2.0 | from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer(min_df=1)
vectorizer.fit([
"The cat sat on the mat.",
])
vectorizer.vocabulary_
"""
Explanation: SciPy 2016 Scikit-learn Tutorial
Out-of-core Learning - Large Scale Text Classification for Sentiment Analysis
Scalability Issues
The sklearn.feature_extraction.text.CountVectorizer and sklearn.feature_extraction.text.TfidfVectorizer classes suffer from a number of scalability issues that all stem from the internal usage of the vocabulary_ attribute (a Python dictionary) used to map the unicode string feature names to the integer feature indices.
The main scalability issues are:
Memory usage of the text vectorizer: all the string representations of the features are loaded in memory
Parallelization problems for text feature extraction: the vocabulary_ would be a shared state: complex synchronization and overhead
Impossibility to do online or out-of-core / streaming learning: the vocabulary_ needs to be learned from the data: its size cannot be known before making one pass over the full dataset
To better understand the issue let's have a look at how the vocabulary_ attribute works. At fit time the tokens of the corpus are uniquely identified by an integer index, and this mapping is stored in the vocabulary:
End of explanation
"""
X = vectorizer.transform([
"The cat sat on the mat.",
"This cat is a nice cat.",
]).toarray()
print(len(vectorizer.vocabulary_))
print(vectorizer.get_feature_names())
print(X)
"""
Explanation: The vocabulary is used at transform time to build the occurrence matrix:
End of explanation
"""
vectorizer = CountVectorizer(min_df=1)
vectorizer.fit([
"The cat sat on the mat.",
"The quick brown fox jumps over the lazy dog.",
])
vectorizer.vocabulary_
"""
Explanation: Let's refit with a slightly larger corpus:
End of explanation
"""
X = vectorizer.transform([
"The cat sat on the mat.",
"This cat is a nice cat.",
]).toarray()
print(len(vectorizer.vocabulary_))
print(vectorizer.get_feature_names())
print(X)
"""
Explanation: The vocabulary_ grows (logarithmically) with the size of the training corpus. Note that we could not have built the vocabularies in parallel on the 2 text documents as they share some words, hence this would require some kind of shared data structure or synchronization barrier, which is complicated to set up, especially if we want to distribute the processing on a cluster.
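To see why, here is a toy illustration (not scikit-learn code) of what merging per-worker vocabularies would involve: every worker's local indices have to be remapped against a shared state.

```python
# Toy illustration: two workers each build a local vocabulary, and the
# indices must be remapped when the vocabularies are merged.
vocab_worker1 = {'the': 0, 'cat': 1, 'sat': 2}
vocab_worker2 = {'the': 0, 'quick': 1, 'brown': 2}

merged = dict(vocab_worker1)
remap_worker2 = {}  # old index in worker 2 -> index in merged vocabulary
for word, old_idx in vocab_worker2.items():
    if word not in merged:
        merged[word] = len(merged)
    remap_worker2[old_idx] = merged[word]
```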
With this new vocabulary, the dimensionality of the output space is now larger:
End of explanation
"""
import os
train_path = os.path.join('datasets', 'IMDb', 'aclImdb', 'train')
test_path = os.path.join('datasets', 'IMDb', 'aclImdb', 'test')
"""
Explanation: The IMDb movie dataset
To illustrate the scalability issues of the vocabulary-based vectorizers, let's load a more realistic dataset for a classical text classification task: sentiment analysis on text documents. The goal is to tell apart negative from positive movie reviews from the Internet Movie Database (IMDb).
In the following sections, we will work with a large subset of movie reviews from the IMDb that has been collected by Maas et al.
A. L. Maas, R. E. Daly, P. T. Pham, D. Huang, A. Y. Ng, and C. Potts. Learning Word Vectors for Sentiment Analysis. In the proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142–150, Portland, Oregon, USA, June 2011. Association for Computational Linguistics.
This dataset contains 50,000 movie reviews, which were split into 25,000 training samples and 25,000 test samples. The reviews are labeled as either negative (neg) or positive (pos). Here, positive means that a movie received >6 stars on IMDb, and negative means that it received <5 stars.
Assuming that the ../fetch_data.py script was run successfully the following files should be available:
End of explanation
"""
from sklearn.datasets import load_files
train = load_files(container_path=(train_path),
categories=['pos', 'neg'])
test = load_files(container_path=(test_path),
categories=['pos', 'neg'])
"""
Explanation: Now, let's load them into our active session via scikit-learn's load_files function
End of explanation
"""
train.keys()
"""
Explanation: Note
Since the movie dataset consists of 50,000 individual text files, executing the code snippet above may take ~20 sec or longer.
The load_files function loaded the datasets into sklearn.datasets.base.Bunch objects, which are Python dictionaries:
End of explanation
"""
import numpy as np
for label, data in zip(('TRAINING', 'TEST'), (train, test)):
print('\n\n%s' % label)
print('Number of documents:', len(data['data']))
print('\n1st document:\n', data['data'][0])
print('\n1st label:', data['target'][0])
print('\nClass names:', data['target_names'])
print('Class count:',
np.unique(data['target']), ' -> ',
np.bincount(data['target']))
"""
Explanation: In particular, we are only interested in the data and target arrays.
End of explanation
"""
from sklearn.utils.murmurhash import murmurhash3_bytes_u32
# encode for python 3 compatibility
for word in "the cat sat on the mat".encode("utf-8").split():
print("{0} => {1}".format(
word, murmurhash3_bytes_u32(word, 0) % 2 ** 20))
"""
Explanation: As we can see above the 'target' array consists of integers 0 and 1, where 0 stands for negative and 1 stands for positive.
The Hashing Trick
Remember the bag-of-words representation using a vocabulary-based vectorizer:
<img src="figures/bag_of_words.svg" width="100%">
To work around the limitations of the vocabulary-based vectorizers, one can use the hashing trick. Instead of building and storing an explicit mapping from the feature names to the feature indices in a Python dict, we can just use a hash function and a modulus operation:
<img src="figures/hashing_vectorizer.svg" width="100%">
More info and reference for the original papers on the Hashing Trick in the following site as well as a description specific to language here.
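A toy version of the trick in plain Python, to make it concrete (scikit-learn uses MurmurHash3 with a signed variant, so the actual indices differ):

```python
# Toy hashing vectorizer: stateless bag-of-words with no stored
# vocabulary; bucket indices come straight from a hash function.
import numpy as np

def toy_hash_vectorize(doc, n_features=2 ** 10):
    """Map a document to a fixed-size count vector via hashing."""
    x = np.zeros(n_features)
    for token in doc.lower().split():
        x[hash(token) % n_features] += 1
    return x

x = toy_hash_vectorize("the cat sat on the mat")
```

No fit step is needed, and the output dimensionality is fixed in advance regardless of the corpus.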
End of explanation
"""
from sklearn.feature_extraction.text import HashingVectorizer
h_vectorizer = HashingVectorizer(encoding='latin-1')
h_vectorizer
"""
Explanation: This mapping is completely stateless and the dimensionality of the output space is explicitly fixed in advance (here we use a modulo 2 ** 20 which means roughly 1M dimensions). This makes it possible to work around the limitations of the vocabulary-based vectorizer both for parallelizability and online / out-of-core learning.
The HashingVectorizer class is an alternative to the CountVectorizer (or TfidfVectorizer class with use_idf=False) that internally uses the murmurhash hash function:
End of explanation
"""
analyzer = h_vectorizer.build_analyzer()
analyzer('This is a test sentence.')
"""
Explanation: It shares the same "preprocessor", "tokenizer" and "analyzer" infrastructure:
End of explanation
"""
docs_train, y_train = train['data'], train['target']
docs_valid, y_valid = test['data'][:12500], test['target'][:12500]
docs_test, y_test = test['data'][12500:], test['target'][12500:]
"""
Explanation: We can vectorize our datasets into a scipy sparse matrix exactly as we would have done with the CountVectorizer or TfidfVectorizer, except that we can directly call the transform method: there is no need to fit as HashingVectorizer is a stateless transformer:
End of explanation
"""
h_vectorizer.transform(docs_train)
"""
Explanation: The dimension of the output is fixed ahead of time to n_features=2 ** 20 by default (nearly 1M features) to minimize the rate of collisions on most classification problems while keeping reasonably sized linear models (1M weights in the coef_ attribute):
End of explanation
"""
h_vec = HashingVectorizer(encoding='latin-1')
%timeit -n 1 -r 3 h_vec.fit(docs_train, y_train)
count_vec = CountVectorizer(encoding='latin-1')
%timeit -n 1 -r 3 count_vec.fit(docs_train, y_train)
"""
Explanation: Now, let's compare the computational efficiency of the HashingVectorizer to the CountVectorizer:
End of explanation
"""
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
h_pipeline = Pipeline((
('vec', HashingVectorizer(encoding='latin-1')),
('clf', LogisticRegression(random_state=1)),
))
h_pipeline.fit(docs_train, y_train)
print('Train accuracy', h_pipeline.score(docs_train, y_train))
print('Validation accuracy', h_pipeline.score(docs_valid, y_valid))
import gc
del count_vec
del h_pipeline
gc.collect()
"""
Explanation: As we can see, the HashingVectorizer is much faster than the CountVectorizer in this case.
Finally, let us train a LogisticRegression classifier on the IMDb training subset:
End of explanation
"""
train_path = os.path.join('datasets', 'IMDb', 'aclImdb', 'train')
train_pos = os.path.join(train_path, 'pos')
train_neg = os.path.join(train_path, 'neg')
fnames = [os.path.join(train_pos, f) for f in os.listdir(train_pos)] +\
[os.path.join(train_neg, f) for f in os.listdir(train_neg)]
fnames[:3]
"""
Explanation: Out-of-Core learning
Out-of-Core learning is the task of training a machine learning model on a dataset that does not fit into memory or RAM. This requires the following conditions:
a feature extraction layer with fixed output dimensionality
knowing the list of all classes in advance (in this case we only have positive and negative reviews)
a machine learning algorithm that supports incremental learning (the partial_fit method in scikit-learn).
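The third condition can be sketched on toy data (not the IMDb set) as follows; note that the full class list must be passed to partial_fit, since an individual batch may not contain every class:

```python
# Minimal partial_fit sketch on toy 1-D data: the classes argument
# declares all labels up front because later batches may miss some.
import numpy as np
from sklearn.linear_model import SGDClassifier

clf = SGDClassifier(random_state=0)
batches = [
    (np.array([[0.0], [1.0]]), np.array([0, 1])),
    (np.array([[0.2], [0.8]]), np.array([0, 1])),
]
for X_batch, y_batch in batches:
    clf.partial_fit(X_batch, y_batch, classes=[0, 1])
```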
In the following sections, we will set up a simple batch-training function to train an SGDClassifier iteratively.
But first, let us load the file names into a Python list:
End of explanation
"""
y_train = np.zeros((len(fnames), ), dtype=int)
y_train[:12500] = 1
np.bincount(y_train)
"""
Explanation: Next, let us create the target label array:
End of explanation
"""
from sklearn.base import clone
def batch_train(clf, fnames, labels, iterations=25, batchsize=1000, random_seed=1):
vec = HashingVectorizer(encoding='latin-1')
idx = np.arange(labels.shape[0])
c_clf = clone(clf)
rng = np.random.RandomState(seed=random_seed)
for i in range(iterations):
rnd_idx = rng.choice(idx, size=batchsize)
documents = []
for i in rnd_idx:
with open(fnames[i], 'r') as f:
documents.append(f.read())
X_batch = vec.transform(documents)
batch_labels = labels[rnd_idx]
c_clf.partial_fit(X=X_batch,
y=batch_labels,
classes=[0, 1])
return c_clf
"""
Explanation: Now, we implement the batch_train function as follows:
End of explanation
"""
from sklearn.linear_model import SGDClassifier
sgd = SGDClassifier(loss='log', random_state=1)
sgd = batch_train(clf=sgd,
fnames=fnames,
labels=y_train)
"""
Explanation: Note that we are not using LogisticRegression as in the previous section, but we will use an SGDClassifier with a logistic cost function instead. SGD stands for stochastic gradient descent, an optimization algorithm that optimizes the weight coefficients iteratively sample by sample, which allows us to feed the data to the classifier chunk by chunk.
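To make the sample-by-sample update concrete, here is a toy NumPy sketch of a single SGD step on the logistic loss (an illustration, not scikit-learn's implementation):

```python
# One stochastic gradient step on the logistic loss for a single
# sample x with label y in {0, 1}.
import numpy as np

def sgd_logistic_step(w, x, y, lr=0.1):
    """Return the updated weight vector after one SGD step."""
    p = 1.0 / (1.0 + np.exp(-w @ x))   # predicted probability of class 1
    return w - lr * (p - y) * x        # gradient of the log-loss w.r.t. w

w = np.zeros(2)
w = sgd_logistic_step(w, np.array([1.0, 1.0]), y=1)
```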
And we train the SGDClassifier; using the default settings of the batch_train function, it will train the classifier on 25*1000=25000 documents. (Depending on your machine, this may take >2 min)
End of explanation
"""
vec = HashingVectorizer(encoding='latin-1')
sgd.score(vec.transform(docs_test), y_test)
"""
Explanation: Eventually, let us evaluate its performance:
End of explanation
"""
# %load solutions/27_B-batchtrain.py
"""
Explanation: Limitations of the Hashing Vectorizer
Using the Hashing Vectorizer makes it possible to implement streaming and parallel text classification but can also introduce some issues:
The collisions can introduce too much noise in the data and degrade prediction quality,
The HashingVectorizer does not provide "Inverse Document Frequency" reweighting (lack of a use_idf=True option).
There is no easy way to invert the mapping and find the feature names from the feature index.
The collision issues can be controlled by increasing the n_features parameter.
The IDF weighting might be reintroduced by appending a TfidfTransformer instance on the output of the vectorizer. However, computing the idf_ statistic used for the feature reweighting requires at least one additional pass over the training set before training the classifier can start: this breaks the online learning scheme.
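Such a two-pass setup might look like the following sketch; the parameter values are illustrative, and the parameter names follow current scikit-learn releases (older versions spelled alternate_sign as non_negative):

```python
# Hedged sketch: IDF reweighting on top of hashed features. The fit
# call performs the extra pass that computes the idf_ statistics.
from sklearn.feature_extraction.text import HashingVectorizer, TfidfTransformer
from sklearn.pipeline import make_pipeline

tfidf_hashing = make_pipeline(
    HashingVectorizer(n_features=2 ** 18, alternate_sign=False),
    TfidfTransformer(),
)
X = tfidf_hashing.fit_transform(["the cat sat", "the dog sat"])
```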
The lack of inverse mapping (the get_feature_names() method of TfidfVectorizer) is even harder to workaround. That would require extending the HashingVectorizer class to add a "trace" mode to record the mapping of the most important features to provide statistical debugging information.
In the meantime, to debug feature extraction issues, it is recommended to use TfidfVectorizer(use_idf=False) on a smallish subset of the dataset to simulate a HashingVectorizer() instance that has the get_feature_names() method and no collision issues.
Exercise
In our implementation of the batch_train function above, we randomly draw k training samples as a batch in each iteration, which can be considered as a random subsampling with replacement. Can you modify the batch_train function so that it iterates over the documents without replacement, i.e., that it uses each document exactly once per iteration?
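One possible shape for such an epoch-based sampler, as a sketch (not the provided solution file): shuffle the indices once, then walk through them in consecutive slices so every document is seen exactly once per epoch.

```python
# Sketch: yield index batches that cover each sample exactly once
# per epoch, instead of sampling with replacement.
import numpy as np

def epoch_batches(n_samples, batchsize, rng):
    """Yield disjoint index batches covering all samples once."""
    idx = rng.permutation(n_samples)
    for start in range(0, n_samples, batchsize):
        yield idx[start:start + batchsize]

rng = np.random.RandomState(1)
batches = list(epoch_batches(10, batchsize=4, rng=rng))
```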
End of explanation
"""
|
theavey/ParaTemp | examples/paratemp_setup_example.ipynb | apache-2.0 | import re, os, sys, shutil
import shlex, subprocess
import glob
import pandas as pd
import panedr
import numpy as np
import MDAnalysis as mda
import nglview
import matplotlib.pyplot as plt
import parmed as pmd
import py
import scipy
from scipy import stats
from importlib import reload
from thtools import cd
from paratemp import copy_no_overwrite
from paratemp import geometries as gm
from paratemp import coordinate_analysis as ca
import paratemp.para_temp_setup as pts
import paratemp as pt
from gautools import submit_gaussian as subg
from gautools.tools import use_gen_template as ugt
"""
Explanation: Imports and setup
Imports
End of explanation
"""
def plot_prop_PT(edict, prop):
fig, axes = plt.subplots(4, 4, figsize=(16,16))
for i in range(16):
ax = axes.flat[i]
edict[i][prop].plot(ax=ax)
fig.tight_layout()
return fig, axes
def plot_e_props(df, labels, nrows=2, ncols=2):
fig, axes = plt.subplots(nrows, ncols, sharex=True)
for label, ax in zip(labels, axes.flat):
df[label].plot(ax=ax)
ax.set_title(label)
fig.tight_layout()
return fig, axes
def plot_rd(univ): # rd = reaction distance
univ.calculate_distances(rd=(20,39))
fig, axes = plt.subplots(1, 2, figsize=(12, 4))
univ.data.rd.plot(ax=axes[0])
univ.data.rd.hist(ax=axes[1], grid=False)
print(f'reaction distance mean: {univ.data.rd.mean():.2f} and sd: {univ.data.rd.std():.2f}')
return fig, axes
def plot_hist_dist(univ, name, indexes=None):
if indexes is not None:
kwargs = {name: indexes}
univ.calculate_distances(**kwargs)
fig, axes = plt.subplots(1, 2, figsize=(12, 4))
univ.data[name].plot(ax=axes[0])
univ.data[name].hist(ax=axes[1], grid=False)
print(f'{name} distance mean: {univ.data[name].mean():.2f} and sd: {univ.data[name].std():.2f}')
def get_solvent_count_solvate(proc):
for line in proc.stdout.split('\n'):
m = re.search(r'(?:atoms\):\s+)(\d+)(?:\s+residues)', line)
if m:
return int(m.group(1))
else:
raise ValueError('Solvent count not found.')
def set_solv_count(n_gro, s_count,
res_name='DCM', prepend='unequal-'):
"""
Remove solvent residues from the end of a gro file to match s_count
This assumes all non-solvent molecules are listed in the input gro
file before the solvent residues.
"""
bak_name = os.path.join(os.path.dirname(n_gro),
prepend+os.path.basename(n_gro))
copy_no_overwrite(n_gro, bak_name)
with open(n_gro, 'r') as in_gro:
lines = in_gro.readlines()
for line in lines[2:]:
if res_name in line:
non_s_res_count = resid
break
else:
resid = int(line[:5])
res_count = s_count + non_s_res_count
# TODO check reasonability of this number
box = lines.pop()
while True:
line = lines.pop()
if int(line[:5]) > res_count:
continue
elif int(line[:5]) == res_count:
atom_count = line[15:20]
lines.append(line)
break
elif int(line[:5]) < res_count:
raise ValueError("Desired res "
"count is larger than "
"line's resid.\n" +
"res_count: {}\n".format(res_count) +
"line: {}".format(line))
lines[1] = atom_count + '\n'
lines.append(box)
with open(n_gro, 'w') as out_gro:
for line in lines:
out_gro.write(line)
def get_solv_count_top(n_top, res_name='DCM'):
"""
Return residue count of specified residue from n_top"""
with open(n_top, 'r') as in_top:
mol_section = False
for line in in_top:
if line.strip().startswith(';'):
pass
elif not mol_section:
if re.search(r'\[\s*molecules\s*\]', line,
flags=re.IGNORECASE):
mol_section = True
else:
if res_name.lower() in line.lower():
return int(line.split()[1])
def set_solv_count_top(n_top, s_count,
res_name='DCM', prepend='unequal-'):
"""
Set count of res_name residues in n_top
This will make a backup copy of the top file with `prepend`
prepended to the name of the file."""
bak_name = os.path.join(os.path.dirname(n_top),
prepend+os.path.basename(n_top))
copy_no_overwrite(n_top, bak_name)
with open(n_top, 'r') as in_top:
lines = in_top.readlines()
with open(n_top, 'w') as out_top:
mol_section = False
for line in lines:
if line.strip().startswith(';'):
pass
elif not mol_section:
if re.search(r'\[\s*molecules\s*\]', line,
flags=re.IGNORECASE):
mol_section = True
else:
if res_name.lower() in line.lower():
line = re.sub(r'\d+', str(s_count), line)
out_top.write(line)
"""
Explanation: Common functions
End of explanation
"""
d_charge_params = dict(opt='SCF=tight Test Pop=MK iop(6/33=2) iop(6/42=6) iop(6/50=1)',
func='HF',
basis='6-31G*',
footer='\ng16.gesp\n\ng16.gesp\n\n')
l_scripts = []
s = subg.write_sub_script('01-charges/TS2.com',
executable='g16',
make_xyz='../TS2.pdb',
make_input=True,
ugt_dict={'job_name':'GPX TS2 charges',
'charg_mult':'+1 1',
**d_charge_params})
l_scripts.append(s)
s = subg.write_sub_script('01-charges/R-NO2-CPA.com',
executable='g16',
make_xyz='../R-NO2-CPA.pdb',
make_input=True,
ugt_dict={'job_name':'GPX R-NO2-CPA charges',
'charg_mult':'-1 1',
**d_charge_params})
l_scripts.append(s)
l_scripts
subg.submit_scripts(l_scripts, batch=True, submit=True)
"""
Explanation: Get charges
Calculate RESP charges using Gaussian through submit_gaussian for use with GAFF.
End of explanation
"""
gpx = pmd.gromacs.GromacsTopologyFile('01-charges/GPX-ts.acpype/GPX-ts_GMX.top', xyz='01-charges/GPX-ts.acpype/GPX-ts_GMX.gro')
cpa = pmd.gromacs.GromacsTopologyFile('01-charges/CPA-gesp.acpype/CPA-gesp_GMX.top', xyz='01-charges/CPA-gesp.acpype/CPA-gesp_GMX.gro')
for res in gpx.residues:
if res.name == 'MOL':
res.name = 'GPX'
for res in cpa.residues:
if res.name == 'MOL':
res.name = 'CPA'
struc_comb = gpx + cpa
struc_comb
struc_comb.write('gpx-cpa-dry.top')
struc_comb.save('gpx-cpa-dry.gro')
"""
Explanation: Parameterize molecule in GAFF with ANTECHAMBER and ACPYPE
Note, ACPYPE was installed from this repository, which seems to be from the original author, though maybe not the one who put it onto pypi.
For the catalyst:
Use antechamber to create mol2 file with Gaussian ESP charges (though wrong atom types and such, for now):
antechamber -i R-NO2-CPA.gesp -fi gesp -o R-NO2-CPA.mol2 -fo mol2
Use ACPYPE to use this mol2 file (and it's GESP charges) to generate GROMACS input files:
acpype.py -i R-NO2-CPA.mol2 -b CPA-gesp --net_charge=-1 -o gmx -d -c user
For the reactant:
antechamber -i TS2.gesp -fi gesp -o TS2.mol2 -fo mol2
acpype.py -i TS2.mol2 -b GPX-ts --net_charge=1 -o gmx -c user
Then the different molecules can be combined using ParmEd.
End of explanation
"""
f_dcm = py.path.local('~/GROMACS-basics/DCM-GAFF/')
f_solvate = py.path.local('02-solvate/')
sep_gro = py.path.local('gpx-cpa-sep.gro')
boxed_gro = f_solvate.join('gpx-cpa-boxed.gro')
box = '3.5 3.5 3.5'
solvent_source = f_dcm.join('dichloromethane-T293.15.gro')
solvent_top = f_dcm.join('dichloromethane.top')
solv_gro = f_solvate.join('gpx-cpa-dcm.gro')
top = py.path.local('../params/gpxTS-cpa-dcm.top')
verbose = True
solvent_counts, outputs, key = dict(), dict(), 'GPX'  # outputs collects subprocess logs
with f_solvate.as_cwd():
## Make box
cl = shlex.split(f'gmx_mpi editconf -f {sep_gro} ' +
f'-o {boxed_gro} -box {box}')
proc = subprocess.run(cl, universal_newlines=True,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT)
outputs[key+'_editconf'] = proc.stdout
proc.check_returncode()
## Solvate
cl = shlex.split(f'gmx_mpi solvate -cp {boxed_gro} ' +
f'-cs {solvent_source} -o {solv_gro}')
proc = subprocess.run(cl, universal_newlines=True,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT)
outputs[key+'_solvate'] = proc.stdout
proc.check_returncode()
solvent_counts[key] = get_solvent_count_solvate(proc)
if verbose:
print(f'Solvated system into {solv_gro}')
struc_g_c = pmd.load_file('gpx-cpa-dry.top')
struc_dcm = pmd.load_file(str(f_dcm.join('dichloromethane.top')))
struc_g_c_d = struc_g_c + solvent_counts['GPX'] * struc_dcm
struc_g_c_d.save(str(top))
"""
Explanation: Move molecules
In VMD, the molecules were moved so that they were not sitting on top of each other.
Solvate
As before, using DCM parameters and solvent box from virtualchemistry.org.
End of explanation
"""
ppl = py.path.local
f_min = ppl('03-minimize/')
f_g_basics = py.path.local('~/GROMACS-basics/')
mdp_min = f_g_basics.join('minim.mdp')
tpr_min = f_min.join('min.tpr')
deffnm_min = f_min.join('min-out')
gro_min = deffnm_min + '.gro'
with f_min.as_cwd():
## Compile tpr
if not tpr_min.exists():
cl = shlex.split(f'gmx_mpi grompp -f {mdp_min} '
f'-c {solv_gro} '
f'-p {top} '
f'-o {tpr_min}')
proc = subprocess.run(cl, universal_newlines=True,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT)
outputs[key+'_grompp_em'] = proc.stdout
proc.check_returncode()
if verbose:
print(f'Compiled em tpr to {tpr_min}')
elif verbose:
print(f'em tpr file already exists ({tpr_min})')
## Run minimization
if not gro_min.exists():
cl = shlex.split('gmx_mpi mdrun '
f'-s {tpr_min} '
f'-deffnm {deffnm_min} ')
proc = subprocess.run(cl, universal_newlines=True,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT)
outputs[key+'_mdrun_em'] = proc.stdout
# TODO Get the potential energy from this output
proc.check_returncode()
if verbose:
print(f'Ran {key} em to make {gro_min}')
elif verbose:
        print(f'em output gro already exists ({gro_min})')
"""
Explanation: Minimize
End of explanation
"""
f_equil = ppl('04-equilibrate/')
plumed = f_equil.join('plumed.dat')
mdp_equil = f_g_basics.join('npt-298.mdp')
tpr_equil = f_equil.join('equil.tpr')
deffnm_equil = f_equil.join('equil-out')
gro_equil = deffnm_equil + '.gro'
gro_input = gro_min
with f_equil.as_cwd():
## Compile equilibration
if not tpr_equil.exists():
cl = shlex.split(f'gmx_mpi grompp -f {mdp_equil} '
f'-c {gro_input} '
f'-p {top} '
f'-o {tpr_equil}')
proc = subprocess.run(cl, universal_newlines=True,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT)
outputs[key+'_grompp_equil'] = proc.stdout
proc.check_returncode()
if verbose:
print(f'Compiled equil tpr to {tpr_equil}')
elif verbose:
print(f'equil tpr file already exists ({tpr_equil})')
## Run equilibration
if not gro_equil.exists():
cl = shlex.split('gmx_mpi mdrun '
f'-s {tpr_equil} '
f'-deffnm {deffnm_equil} '
f'-plumed {plumed}')
proc = subprocess.run(cl, universal_newlines=True,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT)
outputs[key+'_mdrun_equil'] = proc.stdout
proc.check_returncode()
if verbose:
print(f'Ran {key} equil to make {gro_equil}')
elif verbose:
print(f'equil output gro already exists ({gro_equil})')
"""
Explanation: Equilibrate
End of explanation
"""
f_pt = ppl('05-PT/')
template = f_pt.join('template-mdp.txt')
index = ppl('index.ndx')
sub_templ = f_g_basics.join('sub-template-128.sub')
d_sub_templ = dict(tpr_base = 'TOPO/npt',
deffnm = 'PT-out',
name = 'GPX-PT',
plumed = plumed,
)
scaling_exponent = 0.025
maxwarn = 0
start_temp = 298.
verbose = True
skip_existing = True
jobs = []
failed_procs = []
for key in ['GPX']:
kwargs = {'template': str(template),
'topology': str(top),
'structure': str(gro_equil),
'index': str(index),
'scaling_exponent': scaling_exponent,
'start_temp': start_temp,
'maxwarn': maxwarn}
with f_pt.as_cwd():
try:
os.mkdir('TOPO')
except FileExistsError:
if skip_existing:
print(f'Skipping {key} because it seems to '
'already be done.\nMoving on...')
continue
with cd('TOPO'):
print(f'Now in {os.getcwd()}\nAttempting to compile TPRs...')
pts.compile_tprs(**kwargs)
print('Done compiling. Moving on...')
print(f'Now in {os.getcwd()}\nWriting submission script...')
with sub_templ.open(mode='r') as templ_f, \
open('gromacs-start-job.sub', 'w') as sub_s:
[sub_s.write(l.format(**d_sub_templ)) for l in templ_f]
print('Done.\nNow submitting job...')
cl = ['qsub', 'gromacs-start-job.sub']
proc = subprocess.run(cl,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
universal_newlines=True)
if proc.returncode == 0:
output = proc.stdout
jobs.append(re.search(r'[0-9].+\)', output).group(0))
print(output, '\nDone.\nMoving to next...')
else:
print('\n\n'+5*'!!!---'+'\n')
print(f'Error with calling qsub on {key}')
print('Command line input was', cl)
print('Check input and try again manually.'
'\nMoving to next anyway...')
failed_procs.append(proc)
print('-----Done-----\nSummary of jobs submitted:')
for job in jobs:
print(job)
"""
Explanation: Setup and submit parallel tempering (PT)
End of explanation
"""
e_05s = dict()
for i in range(16):
e_05s[i] = panedr.edr_to_df(f'05-PT/PT-out{i}.edr')
fig, axes = plot_prop_PT(e_05s, 'Pressure')
"""
Explanation: The energies from the simulations can be read in as a pandas DataFrame using panedr and then analyzed or plotted to check on equilibration, convergence, etc.
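For example, one rough convergence check is to compare block averages of a property across the run. This is only a sketch: it assumes one of the per-replica DataFrames (e.g. e_05s[0]) and that the column name used (here 'Pressure') actually exists in the EDR file.

```python
import pandas as pd

def block_means(series, nblocks=2):
    # Mean of each of `nblocks` equal, consecutive blocks of a series.
    # If the block means agree to within the natural scatter of the data,
    # the property has plausibly converged over the sampled window.
    n = len(series) // nblocks
    return [series.iloc[i * n:(i + 1) * n].mean() for i in range(nblocks)]

# Hypothetical usage on one replica (column name assumed):
# print(block_means(e_05s[0]['Pressure'], nblocks=4))
```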
End of explanation
"""
l_scripts = []
s = subg.write_sub_script('01-charges/TS1.com',
executable='g16',
make_xyz='../TS1protonated.mol2',
make_input=True,
ugt_dict={'job_name':'GPX TS1 charges',
'charg_mult':'+1 1',
**d_charge_params})
l_scripts.append(s)
s = subg.write_sub_script('01-charges/TS3.com',
executable='g16',
make_xyz='../TS3protonated.mol2',
make_input=True,
ugt_dict={'job_name':'GPX TS3 charges',
'charg_mult':'+1 1',
**d_charge_params})
l_scripts.append(s)
s = subg.write_sub_script('01-charges/anti-cat-yamamoto.com',
executable='g16',
make_xyz='../R-Yamamoto-Cat.pdb',
make_input=True,
ugt_dict={'job_name':
'yamamoto catalyst charges',
'charg_mult':'-1 1',
**d_charge_params})
l_scripts.append(s)
l_scripts
subg.submit_scripts(l_scripts, batch=True, submit=True)
"""
Explanation: Setup for several systems/molecules at once
Working based on what was done above (using some things that were defined up there as well).
Get charges
End of explanation
"""
ts1 = pmd.gromacs.GromacsTopologyFile(
'01-charges/TS1-gesp.acpype/TS1-gesp_GMX.top',
xyz='01-charges/TS1-gesp.acpype/TS1-gesp_GMX.gro')
ts3 = pmd.gromacs.GromacsTopologyFile(
'01-charges/TS3-gesp.acpype/TS3-gesp_GMX.top',
xyz='01-charges/TS3-gesp.acpype/TS3-gesp_GMX.gro')
ycp = pmd.gromacs.GromacsTopologyFile(
'01-charges/YCP-gesp.acpype/YCP-gesp_GMX.top',
xyz='01-charges/YCP-gesp.acpype/YCP-gesp_GMX.gro')
for res in ts1.residues:
if res.name == 'MOL':
res.name = 'TS1'
for res in ts3.residues:
if res.name == 'MOL':
res.name = 'TS3'
for res in ycp.residues:
if res.name == 'MOL':
res.name = 'YCP'
ts1_en = ts1.copy(pmd.gromacs.GromacsTopologyFile)
ts3_en = ts3.copy(pmd.gromacs.GromacsTopologyFile)
ts1_en.coordinates = - ts1.coordinates
ts3_en.coordinates = - ts3.coordinates
sys_ts1 = ts1 + ycp
sys_ts1_en = ts1_en + ycp
sys_ts3 = ts3 + ycp
sys_ts3_en = ts3_en + ycp
sys_ts1.write('ts1-ycp-dry.top')
sys_ts3.write('ts3-ycp-dry.top')
sys_ts1.save('ts1-ycp-dry.gro')
sys_ts1_en.save('ts1_en-ycp-dry.gro')
sys_ts3.save('ts3-ycp-dry.gro')
sys_ts3_en.save('ts3_en-ycp-dry.gro')
"""
Explanation: Copied over the g16.gesp files and renamed them for each molecule.
Make input files
Loaded amber/2016 module (and its dependencies).
antechamber -i TS1.gesp -fi gesp -o TS1.mol2 -fo mol2
acpype.py -i TS1.mol2 -b TS1-gesp --net_charge=1 -o gmx -d -c user
There was a warning for assigning bond types.
antechamber -i TS3.gesp -fi gesp -o TS3.mol2 -fo mol2
acpype.py -i TS3.mol2 -b TS3-gesp --net_charge=1 -o gmx -d -c user
Similar warning.
antechamber -i YCP.gesp -fi gesp -o YCP.mol2 -fo mol2
acpype.py -i YCP.mol2 -b YCP-gesp --net_charge=-1 -o gmx -d -c user
No similar warning here.
End of explanation
"""
f_dcm = py.path.local('~/GROMACS-basics/DCM-GAFF/')
f_solvate = py.path.local('37-solvate-anti/')
box = '3.7 3.7 3.7'
solvent_source = f_dcm.join('dichloromethane-T293.15.gro')
solvent_top = f_dcm.join('dichloromethane.top')
solv_gro = f_solvate.join('gpx-cpa-dcm.gro')
ts1_top = ppl('../params/ts1-ycp-dcm.top')
ts3_top = ppl('../params/ts3-ycp-dcm.top')
l_syss = ['TS1', 'TS1_en', 'TS3', 'TS3_en']
verbose = True
solvent_counts = dict()
for key in l_syss:
sep_gro = ppl(f'{key.lower()}-ycp-dry.gro')
if not sep_gro.exists():
raise FileNotFoundError(f'{sep_gro} does not exist')
boxed_gro = f'{key.lower()}-ycp-box.gro'
solv_gro = f'{key.lower()}-ycp-dcm.gro'
with f_solvate.ensure_dir().as_cwd():
## Make box
cl = shlex.split(f'gmx_mpi editconf -f {sep_gro} ' +
f'-o {boxed_gro} -box {box}')
proc = subprocess.run(cl, universal_newlines=True,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT)
outputs[key+'_editconf'] = proc.stdout
proc.check_returncode()
## Solvate
cl = shlex.split(f'gmx_mpi solvate -cp {boxed_gro} ' +
f'-cs {solvent_source} -o {solv_gro}')
proc = subprocess.run(cl, universal_newlines=True,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT)
outputs[key+'_solvate'] = proc.stdout
proc.check_returncode()
solvent_counts[key] = get_solvent_count_solvate(proc)
if verbose:
print(f'Solvated system into {solv_gro}')
# min_solv_count = min(solvent_counts.values())
min_solv_count = 328 # want to match with syn calculations
if min(solvent_counts.values()) < min_solv_count:
raise ValueError('At least one of the structures has <328 DCMs.\n'
'Check and/or make the box larger')
for key in l_syss:
solv_gro = f'{key.lower()}-ycp-dcm.gro'
with f_solvate.as_cwd():
set_solv_count(solv_gro, min_solv_count)
struc_ts1 = pmd.load_file('ts1-ycp-dry.top')
struc_ts3 = pmd.load_file('ts3-ycp-dry.top')
struc_dcm = pmd.load_file(str(f_dcm.join('dichloromethane.top')))
struc_ts1_d = struc_ts1 + min_solv_count * struc_dcm
struc_ts1_d.save(str(ts1_top))
struc_ts3_d = struc_ts3 + min_solv_count * struc_dcm
struc_ts3_d.save(str(ts3_top))
"""
Explanation: Move molecules
I presume I will again need to make the molecules non-overlapping, and that will be done manually in VMD.
Box and solvate
End of explanation
"""
f_min = ppl('38-relax-anti/')
f_min.ensure_dir()
f_g_basics = py.path.local('~/GROMACS-basics/')
mdp_min = f_g_basics.join('minim.mdp')
d_tops = dict(TS1=ts1_top, TS1_en=ts1_top, TS3=ts3_top, TS3_en=ts3_top)
for key in l_syss:
solv_gro = ppl(f'37-solvate-anti/{key.lower()}-ycp-dcm.gro')
tpr_min = f_min.join(f'{key.lower()}-min.tpr')
deffnm_min = f_min.join(f'{key.lower()}-min-out')
gro_min = deffnm_min + '.gro'
top = d_tops[key]
with f_min.as_cwd():
## Compile tpr
if not tpr_min.exists():
cl = shlex.split(f'gmx_mpi grompp -f {mdp_min} '
f'-c {solv_gro} '
f'-p {top} '
f'-o {tpr_min}')
proc = subprocess.run(cl, universal_newlines=True,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT)
outputs[key+'_grompp_em'] = proc.stdout
proc.check_returncode()
if verbose:
print(f'Compiled em tpr to {tpr_min}')
elif verbose:
print(f'em tpr file already exists ({tpr_min})')
## Run minimization
if not gro_min.exists():
cl = shlex.split('gmx_mpi mdrun '
f'-s {tpr_min} '
f'-deffnm {deffnm_min} ')
proc = subprocess.run(cl, universal_newlines=True,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT)
outputs[key+'_mdrun_em'] = proc.stdout
# TODO Get the potential energy from this output
proc.check_returncode()
if verbose:
print(f'Ran {key} em to make {gro_min}')
elif verbose:
print(f'em output gro already exists ({gro_min})')
"""
Explanation: Minimize
End of explanation
"""
f_pt = ppl('38-relax-anti/')
template = ppl('33-SA-NPT-rest-no-LINCS/template-mdp.txt')
index = ppl('../params/index-ycp.ndx')
scaling_exponent = 0.025
maxwarn = 0
start_temp = 298.
nsims = 16
verbose = True
skip_existing = True
jobs = []
failed_procs = []
for key in l_syss:
d_sub_templ = dict(
tpr = f'{key.lower()}-TOPO/npt',
deffnm = f'{key.lower()}-SA-out',
name = f'{key.lower()}-SA',
nsims = nsims,
tpn = 16,
cores = 128,
multi = True,
)
gro_equil = f_min.join(f'{key.lower()}-min-out.gro')
top = d_tops[key]
kwargs = {'template': str(template),
'topology': str(top),
'structure': str(gro_equil),
'index': str(index),
'scaling_exponent': scaling_exponent,
'start_temp': start_temp,
'maxwarn': maxwarn,
'number': nsims,
'grompp_exe': 'gmx_mpi grompp'}
with f_pt.as_cwd():
try:
os.mkdir(f'{key.lower()}-TOPO/')
except FileExistsError:
if (os.path.exists(f'{key.lower()}-TOPO/temperatures.dat') and
skip_existing):
print(f'Skipping {key} because it seems to '
'already be done.\nMoving on...')
continue
with cd(f'{key.lower()}-TOPO/'):
print(f'Now in {os.getcwd()}\nAttempting to compile TPRs...')
pts.compile_tprs(**kwargs)
print('Done compiling. Moving on...')
print(f'Now in {os.getcwd()}\nWriting submission script...')
lp_sub = pt.sim_setup.make_gromacs_sub_script(
f'gromacs-start-{key}-job.sub', **d_sub_templ)
print('Done.\nNow submitting job...')
cl = shlex.split(f'qsub {lp_sub}')
proc = subprocess.run(cl,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
universal_newlines=True)
if proc.returncode == 0:
output = proc.stdout
jobs.append(re.search(r'[0-9].+\)', output).group(0))
print(output, '\nDone.\nMoving to next...')
else:
print('\n\n'+5*'!!!---'+'\n')
print(f'Error with calling qsub on {key}')
print('Command line input was', cl)
print('Check input and try again manually.'
'\nMoving to next anyway...')
failed_procs.append(proc)
print('-----Done-----\nSummary of jobs submitted:')
for job in jobs:
print(job)
"""
Explanation: Made index file (called index-ycp.ndx) with solutes and solvent groups.
SA equilibration
End of explanation
"""
e_38s = dict()
for key in l_syss:
deffnm = f'{key.lower()}-SA-out'
e_38s[key] = dict()
d = e_38s[key]
for i in range(16):
d[i] = panedr.edr_to_df(f'38-relax-anti/{deffnm}{i}.edr')
for key in l_syss:
d = e_38s[key]
fig, axes = plot_prop_PT(d, 'Volume')
"""
Explanation: !!! Need to check distance on restraint !!!
Check equilibration
End of explanation
"""
for key in l_syss:
d = e_38s[key]
fig, ax = plt.subplots()
# use a separate loop variable so the outer `key` is not shadowed
for fold in list(d.keys()):
ax.hist(d[fold]['Total Energy'], bins=100)
del d[fold]
"""
Explanation: The volumes seem to look okay.
Started high (I did remove some solvents and it hadn't relaxed much), dropped quickly, then seemed to grow appropriately as the temperatures rose.
None seems to have boiled.
End of explanation
"""
|
opencb/opencga | opencga-client/src/main/python/notebooks/user-training/pyopencga_first_steps.ipynb | apache-2.0 | from pyopencga.opencga_config import ClientConfiguration # import configuration module
from pyopencga.opencga_client import OpencgaClient # import client module
from pprint import pprint
from IPython.display import JSON
import matplotlib.pyplot as plt
import datetime
"""
Explanation: First Steps with pyopencga; the Python client of OpenCGA
This notebook provides guidance for getting started with the pyopencga library, which is the Python client of OpenCGA. pyopencga is a REST client that fully implements OpenCGA REST API.
These notebooks use a demo installation available at the University of Cambridge; feel free to change the OpenCGA host and credentials to use any other OpenCGA server.
We assume that your workstation (Linux, Mac, Windows) is connected to the internet and you have Python 3 and the pip package manager installed. We then show you how to:
Install pyopencga.
Connect to an OpenCGA instance.
Execute OpenCGA calls and work with responses.
Launch asynchronous jobs and retrieve results.
Walk-through guides of some common use cases are provided in these further notebooks:<BR>
- pyopencga_catalog.ipynb
- pyopencga_variant_query.ipynb
- pyopencga_variant_analysis.ipynb
You can check OpenCGA REST Web Service API with the following public OpenCGA installation:
- https://ws.opencb.org/opencga-prod/webservices
Installing and importing the pyopencga library
1. Install pyopencga with pip
pyopencga is the OpenCGA Python client available at PyPI (https://pypi.org/project/pyopencga/). You can easily install it by executing:
$ pip install pyopencga
pyopencga uses some other dependencies; make sure you have Pandas, IPython, and Matplotlib installed.
2. Import pyopencga library
Here is the import section with all the dependencies required to use pyopencga:
End of explanation
"""
# Server host
host = 'http://bioinfo.hpc.cam.ac.uk/opencga-prod'
# User credentials
user = 'demouser'
passwd = 'demouser' ## You can skip this, see below.
"""
Explanation: 3. Setup OpenCGA Client
HOST: You need to provide at least an OpenCGA host server URL, in the standard OpenCGA configuration format, as a Python dictionary or in a JSON file.
CREDENTIALS: Regarding credentials, you can set both user and password as two variables in the script. If you prefer not to expose the password, it will be requested interactively without echo.
Set variables for server host, user credentials and project owner
End of explanation
"""
# Creating ClientConfiguration dict
config_dict = {'rest': {
'host': host
}
}
print('Config information:\n',config_dict)
"""
Explanation: Creating ConfigClient dictionary for server connection configuration
End of explanation
"""
## Create the configuration
config = ClientConfiguration(config_dict)
## Define the client
oc = OpencgaClient(config)
"""
Explanation: 4. Initialize the Client
Now we need to pass the config_dict dictionary to the ClientConfiguration method.<br>
Once we have the configuration defined as config (see below), we can initiate the client. This is the most important step.
OpencgaClient: what is and why is so important?
The OpencgaClient (see oc variable below) implements all the methods to query the REST API of OpenCGA. All the webservices available, can be directly accesed through the client.
End of explanation
"""
# ## Option 1: here we put only the user in order to be asked for the password interactively
# oc.login(user)
# print('Logged successfully to {}, your token is: {}. Well done!'.format(host, oc.token))
"""
Explanation: Once we have defined a variable with the client configuration and credentials, we can access all the methods defined for the client.
These methods implement calls to the OpenCGA web service endpoints used by pyopencga.
5. Import the credentials and Login into OpenCGA
Option 1: pass the user and be asked for the password interactively. This option is more secure if you don't want your password hardcoded or if you will run the notebook in public. Uncomment the cell below to try it:
End of explanation
"""
# Option 2: you can pass the user and passwd
oc.login(user, passwd)
print('Logged successfully to {}, your token is: {}. Well done!'.format(host, oc.token))
"""
Explanation: Option 2: pass the user and the password as variables. Be careful with this option, as the password can be publicly exposed.
End of explanation
"""
## Let's fetch the available projects.
## First, let's get the project client and execute the search() function
projects = oc.projects.search(include='id,name')
## Loop through all the different projects
for project in projects.responses[0]['results']:
print(project['id'], project['name'])
"""
Explanation: ✅ Congrats! You should now be connected to your OpenCGA installation.
Understanding REST Response
pyopencga queries web services that return a RESTResponse object, which might be difficult to interpret. The RESTResponse type provides the data in a manner that is not as intuitive as a Python list or dictionary. Because of this, we have developed useful functionality that retrieves the data in a simpler format.
OpenCGA client libraries, including pyopencga, implement a RESTResponse wrapper to make it even easier to work with REST web service responses. <br>REST responses include metadata, and OpenCGA 2.0.1 has been designed to work in federation mode (more information about OpenCGA federations can be found here).
All this can make a first-time user struggle when starting to work with the responses. Please read this brief documentation about OpenCGA RESTful Web Services.
Let's see a quick example of how to use RESTResponse wrapper in pyopencga.
You can get some extra information here. Let's execute a first simple query to fetch all projects for the user demouser, logged in above in Installing and importing the pyopencga library.
Example of a for loop:
Although you can iterate through all the different projects provided by the response by executing the next chunk of code, this is not the recommended way. The next query iterates over all the projects retrieved from projects.search()
End of explanation
"""
## Let's fetch the available projects.
projects = oc.projects.search()
## Uncomment next line to display an interactive JSON viewer
# JSON(projects.get_results())
"""
Explanation: RestResponse API
Note: table with API functions and their descriptions
1. Using the get_results() function
Using the functions that pyopencga implements for the RestResponse object makes things much easier! <br> Let's dig into an example using the same query as above:
End of explanation
"""
## Let's fetch the available projects.
projects = oc.projects.search()
## Iterate through all the different projects
for project in projects.result_iterator():
print(project['id'], project['name'], project['creationDate'], project['organism'])
"""
Explanation: 2. Using the result_iterator() function to iterate over the Rest results
You can also iterate over the results; this is especially interesting when fetching many results from the server.
End of explanation
"""
## This function iterates over all the results; it can be configured to exclude metadata, change the separator, or even select the fields!
## Set a title to display the results
user_defined_title = 'These are the projects you can access with your user'
projects.print_results(title=user_defined_title, separator=',', fields='id,name,creationDate,organism')
"""
Explanation: 3. Using print_results() function to iterate over the Rest results
IMPORTANT: This function implements a configuration to exclude metadata, change the separator, or even select the fields! It then fetches all the user-desired results and prints them directly in the terminal.<br>In this way, the RESTResponse object implements a very powerful custom function to print results 😎
[NOTE]: From pyopencga 2.0.1.2 you can use the title parameter in the function to add a header to the results printed.
End of explanation
"""
## Let's exclude metadata and print only a few fields; use dot notation for nested fields
user_defined_title = 'Display selected fields from the projects data model'
selected_fields='id,name,organism.scientificName,organism.assembly'
projects.print_results(fields=selected_fields, metadata=False, title=user_defined_title)
"""
Explanation: Exercise:
Let's try to costumize the results so we can get printed only the portion of the data that we might be interested in.
The metadata=False parameter allows you to skip the header with the rest response information in the printed results.
End of explanation
"""
## You can change separator
print('Print the projects with a header and a different separator:\n')
projects.print_results(fields='id,name,organism.scientificName,organism.assembly', separator=',', metadata=False)
"""
Explanation: Exercise:
A very useful parameter is the separator. It allows the user to decide the format in which the data is printed. For example, it's possible to print in a CSV-like style:
End of explanation
"""
## Convert REST response object 'projects' to Pandas dataframe
df = projects.to_data_frame()
## Select some specific columns from the data frame
formatted_df = df[['id', 'name', 'fqn', 'creationDate', 'studies', 'organism.scientificName']]
print('The results can be stored and printed as a pandas DF:\n\n', formatted_df)
"""
Explanation: 4. Using Pandas DataFrame: to_data_frame()
Pandas provides very useful functionality for data science. You can convert RestResponse objects to Pandas DataFrames using the following function:
rest_response.to_data_frame()
End of explanation
"""
## Execute GWAS analysis
#rest_response = oc.variant().gwas()
## wait for the job to finish
#oc.wait_for_job(rest_response)
#rest_response.print_results()
"""
Explanation: Working with Jobs [UNDER ACTIVE DEVELOPMENT]
NOTE: this section is under construction.
Please check the latest version of this notebook at https://github.com/opencb/opencga/blob/develop/opencga-client/src/main/python/notebooks/user-training/pyopencga_first_steps.ipynb
OpenCGA implements a powerful interactive API to query data, but also allows users to run more demanding analyses by executing jobs. There are a number of analyses and operations that are executed as jobs, such as Variant Export or GWAS analysis.
1. Job Info
The job data model contains all the information about a single execution: date, id, tool, status, files, ... You can filter jobs by any of these parameters and even get some stats.
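For instance, a filter query might look like the sketch below. The helper just assembles keyword arguments; the oc.jobs.search call and the filter names (toolId, internalStatus) are illustrative assumptions — check the REST API of your OpenCGA version for the exact parameters.

```python
def job_filters(tool_id=None, status=None, creation_date=None):
    # Assemble keyword filters for a job search, dropping unset ones.
    candidates = {'toolId': tool_id,
                  'internalStatus': status,
                  'creationDate': creation_date}
    return {k: v for k, v in candidates.items() if v is not None}

# Hypothetical usage against a live server (parameter names assumed):
# jobs = oc.jobs.search(study='demo@project:study', **job_filters(status='DONE'))
# jobs.print_results(fields='id,tool.id,internal.status.name')
```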
2. Executing Jobs
Job execution involves different lifecycle stages: pending, queued, running, done, rejected.
OpenCGA takes care of executing jobs and notifying status changes.
Executing a job is as simple as the following code:
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.13/_downloads/plot_read_and_write_raw_data.ipynb | bsd-3-clause | # Author: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#
# License: BSD (3-clause)
import mne
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
raw = mne.io.read_raw_fif(fname)
# Set up pick list: MEG + STI 014 - bad channels
want_meg = True
want_eeg = False
want_stim = False
include = ['STI 014']
raw.info['bads'] += ['MEG 2443', 'EEG 053'] # bad channels + 2 more
picks = mne.pick_types(raw.info, meg=want_meg, eeg=want_eeg, stim=want_stim,
include=include, exclude='bads')
some_picks = picks[:5] # take 5 first
start, stop = raw.time_as_index([0, 15]) # read the first 15s of data
data, times = raw[some_picks, start:(stop + 1)]
# save 150s of MEG data in FIF file
raw.save('sample_audvis_meg_raw.fif', tmin=0, tmax=150, picks=picks,
overwrite=True)
"""
Explanation: Reading and writing raw files
In this example, we read a raw file. Plot a segment of MEG data
restricted to MEG channels. And save these data in a new
raw file.
End of explanation
"""
raw.plot()
"""
Explanation: Show MEG data
End of explanation
"""
|
ComputationalModeling/spring-2017-danielak | past-semesters/fall_2016/day-by-day/day12-exploratory-data-analysis-day1/Data_Exploration_Plotting.ipynb | agpl-3.0 | # put your code here, and add additional cells as necessary.
"""
Explanation: Exploring data
Names of group members
// Put your names here!
Goals of this assignment
The purpose of this assignment is to explore data using visualization and statistics.
Section 1
The file datafile_1.csv contains a three-dimensional dataset and associated uncertainty in the data. Read the data file into numpy arrays and visualize it using two new types of plots:
2D plots of the various combinations of dimensions (x-y, x-z, y-z), including error bars (using the pyplot errorbar() method). Try plotting using symbols instead of lines, and make the error bars a different color than the points themselves.
3D plots of all three dimensions at the same time using the mplot3d toolkit - in particular, look at the scatter() method.
Hints:
Look at the documentation for numpy's loadtxt() method - in particular, what do the parameters skiprows, comments, and unpack do?
If you set up the 3D plot as described above, you can adjust the viewing angle with the command ax.view_init(elev=ANGLE1,azim=ANGLE2), where ANGLE1 and ANGLE2 are in degrees.
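As a sketch of the reading hint above (assuming the file is comma-separated with four columns — x, y, z, uncertainty — and #-prefixed header lines; adjust to the real layout):

```python
import numpy as np

def load_xyz(path_or_buffer, delimiter=','):
    # unpack=True transposes the result so each column comes back as its
    # own array; comments='#' skips commented header lines (alternatively,
    # skiprows=N would skip the first N lines outright).
    return np.loadtxt(path_or_buffer, delimiter=delimiter,
                      comments='#', unpack=True)

# Hypothetical usage for one of the three 2D panels:
# x, y, z, err = load_xyz('datafile_1.csv')
# plt.errorbar(x, y, yerr=err, fmt='o', ecolor='red')
```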
End of explanation
"""
# put your code here, and add additional cells as necessary.
"""
Explanation: Section 2
Now, we're going to experiment with data exploration. You have two data files to examine:
GLB.Ts.csv, which contains mean global air temperature from 1880 through the present day (retrieved from the NASA GISS surface temperature website, "Global-mean monthly, seasonal, and annual means, 1880-present"). Each row in the data file contains the year, monthly global average, yearly global average, and seasonal global average. See this file for clues as to what the columns mean.
bintanja2008.txt, which is a reconstruction of the global surface temperature, deep-sea temperature, ice volume, and relative sea level for the last 3 million years. This data comes from the National Oceanic and Atmospheric Administration's National Climatic Data Center website, and can be found here.
Some important notes:
These data files are slightly modified versions of those on the website - they have been altered to remove some characters that don't play nicely with numpy (letters with accents), and symbols for missing data have been replaced with 'NaN', or "Not a Number", which numpy knows to ignore. No actual data has been changed.
In the file GLB.Ts.csv, the temperature units are in 0.01 degrees Celsius difference from the reference period 1950-1980 - in other words, the number 40 corresponds to a difference of +0.4 degrees C compared to the average temperature between 1950 and 1980. (This means you'll have to renormalize your values by a factor of 100.)
In the file bintanja2008.txt, column 9, "Global sea level relative to present," is in confusing units - more positive values actually correspond to lower sea levels than less positive values. You may want to multiply column 9 by -1 in order to get more sensible values.
There are many possible ways to examine this data. First, read both data files into numpy arrays - it's fine to load them into a single combined multi-dimensional array if you want, or split the data into multiple arrays. We'll then try a few things:
For both datasets, make some plots of the raw data, particularly as a function of time. What do you see? How is the data "shaped"? Is there periodicity?
Do some simple data analysis. What are the minimum, maximum, and mean values of the various quantities? (You may have problems with NaN - see nanmin and similar methods)
If you calculate some sort of average for annual temperature in GLB.Ts.csv (say, the average temperature smoothed over 10 years), how might you characterize the yearly variability? Try plotting the smoothed value along with the raw data and show how they differ.
There are several variables in the file bintanja2008.txt - try plotting multiple variables as a function of time together using the pyplot subplot functionality (and some more complicated subplot examples for further help). Do they seem to be related in some way? (Hint: plot surface temperature, deep sea temperature, ice volume, and sea level, and zoom in from 3 Myr to ~100,000 years)
What about plotting the non-time quantities in bintanja2008.txt versus each other (i.e., surface temperature vs. ice volume or sea level) - do you see correlations?
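As one hedged sketch of the 10-year smoothing idea mentioned above (a simple boxcar average via np.convolve; it assumes the yearly means have already been loaded into a 1-D array and rescaled from hundredths of a degree):

```python
import numpy as np

def running_mean(values, window=10):
    # Boxcar-smooth a 1-D series; 'valid' mode returns
    # len(values) - window + 1 points, so align the result against the
    # matching slice of the year axis when plotting.
    kernel = np.ones(window) / window
    return np.convolve(values, kernel, mode='valid')

# Hypothetical usage (array names are placeholders):
# smooth = running_mean(annual_means, window=10)
# plt.plot(years, annual_means, alpha=0.4)
# plt.plot(years[9:], smooth)  # a 10-point window drops the first 9 years
```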
End of explanation
"""
from IPython.display import HTML
HTML(
"""
<iframe
src="https://goo.gl/forms/Jg6Mxb0ZTvwiSe4R2?embedded=true"
width="80%"
height="1200px"
frameborder="0"
marginheight="0"
marginwidth="0">
Loading...
</iframe>
"""
)
"""
Explanation: In the cell below, describe some of the conclusions that you've drawn from the data you have just explored!
// put your thoughts here.
Assignment wrapup
Please fill out the form that appears when you run the code below. You must completely fill this out in order to receive credit for the assignment!
End of explanation
"""
|
hongguangguo/shogun | doc/ipython-notebooks/evaluation/xval_modelselection.ipynb | gpl-3.0 | %pylab inline
%matplotlib inline
# include all Shogun classes
from modshogun import *
# generate some ultra easy training data
gray()
n=20
title('Toy data for binary classification')
X=hstack((randn(2,n), randn(2,n)+1))
Y=hstack((-ones(n), ones(n)))
_=scatter(X[0], X[1], c=Y , s=100)
p1 = Rectangle((0, 0), 1, 1, fc="w")
p2 = Rectangle((0, 0), 1, 1, fc="k")
legend((p1, p2), ["Class 1", "Class 2"], loc=2)
# training data in Shogun representation
features=RealFeatures(X)
labels=BinaryLabels(Y)
"""
Explanation: Evaluation, Cross-Validation, and Model Selection
By Heiko Strathmann - <a href="mailto:heiko.strathmann@gmail.com">heiko.strathmann@gmail.com</a> - <a href="github.com/karlnapf">github.com/karlnapf</a> - <a href="herrstrathmann.de">herrstrathmann.de</a>. Based on the model selection framework of his <a href="http://www.google-melange.com/gsoc/project/google/gsoc2011/XXX">Google summer of code 2011 project</a> | Saurabh Mahindre - <a href="https://github.com/Saurabh7">github.com/Saurabh7</a> as a part of <a href="http://www.google-melange.com/gsoc/project/details/google/gsoc2014/saurabh7/5750085036015616">Google Summer of Code 2014 project</a> mentored by - Heiko Strathmann
This notebook illustrates the evaluation of prediction algorithms in Shogun using <a href="http://en.wikipedia.org/wiki/Cross-validation_(statistics)">cross-validation</a>, and selecting their parameters using <a href="http://en.wikipedia.org/wiki/Hyperparameter_optimization">grid-search</a>. We demonstrate this for a toy example on <a href="http://en.wikipedia.org/wiki/Binary_classification">Binary Classification</a> using <a href="http://en.wikipedia.org/wiki/Support_vector_machine">Support Vector Machines</a> and also a regression problem on a real world dataset.
General Idea
Splitting Strategies
K-fold cross-validation
Stratified cross-validation
Example: Binary classification
Example: Regression
Model Selection: Grid Search
General Idea
Cross-validation aims to estimate an algorithm's performance on unseen data. For example, one might be interested in the average classification accuracy of a Support Vector Machine when applied to new data that it was not trained on. This is important in order to compare the performance of different algorithms on the same target. The crucial point is that the data used for running/training the algorithm is not used for testing. Different algorithms here can also mean different parameters of the same algorithm. Thus, cross-validation can be used to tune parameters of learning algorithms, as well as to compare different families of algorithms against each other. Cross-validation estimates are related to the marginal likelihood in Bayesian statistics in the sense that using them for selecting models avoids overfitting.
Evaluating an algorithm's performance on training data should be avoided since the learner may adjust to very specific random features of the training data which are not very important to the general relation. This is called overfitting. Maximising performance on the training examples usually results in algorithms explaining the noise in data (rather than actual patterns), which leads to bad performance on unseen data. This is one of the reasons behind splitting the data and using different splits for training and testing, which can be done using cross-validation.
Let us generate some toy data for binary classification to try cross validation on.
End of explanation
"""
k=5
normal_split=CrossValidationSplitting(labels, k)
"""
Explanation: Types of splitting strategies
As mentioned earlier, cross-validation is based upon splitting the data into multiple partitions. Shogun has various strategies for this; the base class for them is CSplittingStrategy.
K-fold cross-validation
Formally, this is achieved via partitioning a dataset $X$ of size $|X|=n$ into $k \leq n$ disjoint partitions $X_i\subseteq X$ such that $X_1 \cup X_2 \cup \dots \cup X_k = X$ and $X_i\cap X_j=\emptyset$ for all $i\neq j$. Then, the algorithm is executed on all $k$ possibilities of merging $k-1$ partitions and subsequently tested on the remaining partition. This results in $k$ performances which are evaluated with some metric of choice (Shogun supports multiple ones). The procedure can be repeated (on different splits) in order to obtain less variance in the estimate. See [1] for a nice review on cross-validation using different performance measures.
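The partitioning step itself can be sketched in a few lines of plain Python/NumPy (a minimal illustration only; in Shogun this is handled by the splitting-strategy classes):

```python
import numpy as np

def kfold_indices(n, k, seed=0):
    """Split the indices 0..n-1 into k disjoint, randomly assigned folds."""
    rng = np.random.RandomState(seed)
    indices = rng.permutation(n)              # shuffle so the folds are random
    return [indices[i::k] for i in range(k)]  # every k-th index goes to fold i

folds = kfold_indices(n=10, k=5)
print([sorted(f.tolist()) for f in folds])
```

Training on the union of $k-1$ folds and testing on the held-out fold, for each fold in turn, then yields the $k$ performance values described above.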
End of explanation
"""
stratified_split=StratifiedCrossValidationSplitting(labels, k)
"""
Explanation: Stratified cross-validation
On classification data, the best choice is stratified cross-validation. This divides the data in such a way that the fraction of labels in each partition is roughly the same, which reduces the variance of the performance estimate quite a bit, in particular for data with more than two classes. In Shogun this is implemented by the CStratifiedCrossValidationSplitting class.
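The idea can be sketched by dealing the samples of each class round-robin over the folds (an illustration only, not Shogun's implementation):

```python
import random
from collections import defaultdict

def stratified_folds(labels, k, seed=0):
    """Assign sample indices to k folds so each fold has a similar label mix."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, lab in enumerate(labels):
        by_class[lab].append(idx)
    folds = [[] for _ in range(k)]
    for lab, idxs in by_class.items():
        rng.shuffle(idxs)                 # shuffle within each class
        for j, idx in enumerate(idxs):
            folds[j % k].append(idx)      # deal round-robin over the folds
    return folds

labels = [0] * 6 + [1] * 6
for fold in stratified_folds(labels, k=3):
    print(sorted(labels[i] for i in fold))
```

Each fold ends up with the same 2:2 label mix as the full data, which is exactly the property that reduces the variance of the estimate.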
End of explanation
"""
split_strategies=[stratified_split, normal_split]
#code to visualize splitting
def get_folds(split, num):
split.build_subsets()
x=[]
y=[]
lab=[]
for j in range(num):
indices=split.generate_subset_indices(j)
x_=[]
y_=[]
lab_=[]
for i in range(len(indices)):
x_.append(X[0][indices[i]])
y_.append(X[1][indices[i]])
lab_.append(Y[indices[i]])
x.append(x_)
y.append(y_)
lab.append(lab_)
return x, y, lab
def plot_folds(split_strategies, num):
for i in range(len(split_strategies)):
x, y, lab=get_folds(split_strategies[i], num)
figure(figsize=(18,4))
gray()
suptitle(split_strategies[i].get_name(), fontsize=12)
for j in range(0, num):
subplot(1, num, (j+1), title='Fold %s' %(j+1))
scatter(x[j], y[j], c=lab[j], s=100)
_=plot_folds(split_strategies, 4)
"""
Explanation: Leave One Out cross-validation
Leave-one-out cross-validation holds out one sample as the validation set. It is thus a special case of k-fold cross-validation with $k=n$, where $n$ is the number of samples. It is implemented in the LOOCrossValidationSplitting class.
Let us visualize the generated folds on the toy data.
End of explanation
"""
# define SVM with a small rbf kernel (always normalise the kernel!)
C=1
kernel=GaussianKernel(2, 0.001)
kernel.init(features, features)
kernel.set_normalizer(SqrtDiagKernelNormalizer())
classifier=LibSVM(C, kernel, labels)
# train
_=classifier.train()
"""
Explanation: Stratified splitting takes care that each fold has almost the same number of samples from each class. This is not the case with normal splitting which usually leads to imbalanced folds.
Toy example: Binary Support Vector Classification
Following the example from above, we will tune the performance of an SVM on the binary classification problem. We will
demonstrate how to evaluate a loss function or metric on a given algorithm
then learn how to estimate this metric for the algorithm performing on unseen data
and finally use those techniques to tune the parameters to obtain the best possible results.
The involved methods are
LibSVM as the binary classification algorithms
the area under the ROC curve (AUC) as performance metric
three different kernels to compare
End of explanation
"""
# instantiate a number of Shogun performance measures
metrics=[ROCEvaluation(), AccuracyMeasure(), ErrorRateMeasure(), F1Measure(), PrecisionMeasure(), RecallMeasure(), SpecificityMeasure()]
for metric in metrics:
print(metric.get_name(), metric.evaluate(classifier.apply(features), labels))
"""
Explanation: OK, we have now performed classification on the training data. How well did this work? We can easily check this with many different performance measures.
End of explanation
"""
metric=AccuracyMeasure()
cross=CrossValidation(classifier, features, labels, stratified_split, metric)
# perform the cross-validation; note that this call involves a lot of computation
result=cross.evaluate()
# the result needs to be cast to CrossValidationResult
result=CrossValidationResult.obtain_from_generic(result)
# this class contains a field "mean" which contains the mean performance metric
print("Testing", metric.get_name(), result.mean)
"""
Explanation: Note how, for example, the error rate is 1 - accuracy. All of these numbers represent the training error, i.e. the ability of the classifier to explain the given data.
Now, the training error is zero. This seems good at first. But is this setting of the parameters a good idea? No! Good performance on the training data alone does not mean anything. A simple look-up table can produce zero error on training data. What we want is a method that generalises from the input data so as to perform well on unseen data. We will now use cross-validation to estimate the performance on such data.
We will use CStratifiedCrossValidationSplitting, which accepts a reference to the labels and the number of partitions as parameters. This instance is then passed to the class CCrossValidation, which does the estimation using the desired splitting strategy. The latter class can take all algorithms that are implemented against the CMachine interface.
End of explanation
"""
print("Testing", metric.get_name(), [CrossValidationResult.obtain_from_generic(cross.evaluate()).mean for _ in range(10)])
"""
Explanation: Now this is incredibly bad compared to the training error. In fact, it is very close to random performance (0.5). The lesson: Never judge your algorithms based on the performance on training data!
Note that for small data sizes, the cross-validation estimates are quite noisy. If we run it multiple times, we get different results.
End of explanation
"""
# 25 runs and 95% confidence intervals
cross.set_num_runs(25)
# perform x-validation (now even more expensive)
result=cross.evaluate()
result=CrossValidationResult.obtain_from_generic(result)
print("Testing cross-validation mean %.2f" % result.mean)
"""
Explanation: It is better to average a number of different runs of cross-validation in this case. A nice side effect of this is that the results can be used to estimate error intervals for a given confidence rate.
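Averaging and the interval computation can be sketched as follows (a normal approximation of the 95% interval; Shogun computes its own intervals internally):

```python
import numpy as np

def mean_and_ci(results, z=1.96):
    """Mean and approximate 95% confidence interval over repeated CV runs."""
    results = np.asarray(results, dtype=float)
    mean = results.mean()
    sem = results.std(ddof=1) / np.sqrt(len(results))  # standard error of the mean
    return mean, (mean - z * sem, mean + z * sem)

runs = [0.61, 0.58, 0.64, 0.60, 0.59]   # e.g. accuracies from 5 repetitions
mean, (lo, hi) = mean_and_ci(runs)
print("mean %.3f, 95%% CI [%.3f, %.3f]" % (mean, lo, hi))
```

The more runs we average, the smaller the standard error and hence the tighter the interval becomes.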
End of explanation
"""
widths=2**linspace(-5,25,10)
results=zeros(len(widths))
for i in range(len(results)):
kernel.set_width(widths[i])
result=CrossValidationResult.obtain_from_generic(cross.evaluate())
results[i]=result.mean
plot(log2(widths), results, 'blue')
xlabel("log2 Kernel width")
ylabel(metric.get_name())
_=title("Accuracy for different kernel widths")
print("Best Gaussian kernel width %.2f" % widths[results.argmax()], "gives", results.max())
# compare this with a linear kernel
classifier.set_kernel(LinearKernel())
lin_k=CrossValidationResult.obtain_from_generic(cross.evaluate())
plot([log2(widths[0]), log2(widths[len(widths)-1])], [lin_k.mean,lin_k.mean], 'r')
# please excuse this horrible code :)
print("Linear kernel gives", lin_k.mean)
_=legend(["Gaussian", "Linear"], loc="lower center")
"""
Explanation: Using this machinery, it is very easy to compare multiple kernel parameters against each other to find the best one. It is even possible to compare different kernels.
End of explanation
"""
feats=RealFeatures(CSVFile('../../../data/uci/housing/fm_housing.dat'))
labels=RegressionLabels(CSVFile('../../../data/uci/housing/housing_label.dat'))
preproc=RescaleFeatures()
preproc.init(feats)
feats.add_preprocessor(preproc)
feats.apply_preprocessor(True)
#Regression models
ls=LeastSquaresRegression(feats, labels)
tau=1
rr=LinearRidgeRegression(tau, feats, labels)
width=1
tau=1
kernel=GaussianKernel(feats, feats, width)
kernel.set_normalizer(SqrtDiagKernelNormalizer())
krr=KernelRidgeRegression(tau, kernel, labels)
regression_models=[ls, rr, krr]
"""
Explanation: This gives a brute-force way to select parameters of any algorithm implemented under the CMachine interface. The cool thing about this is that it is also possible to compare different model families against each other. Below, we compare a number of regression models in Shogun on the Boston Housing dataset.
Regression problem and cross-validation
Various regression models in Shogun are now used to predict house prices using the Boston Housing dataset. Cross-validation is used to find the best parameters and also to test the performance of the models.
End of explanation
"""
n=30
taus = logspace(-4, 1, n)
#5-fold cross-validation
k=5
split=CrossValidationSplitting(labels, k)
metric=MeanSquaredError()
cross=CrossValidation(rr, feats, labels, split, metric)
cross.set_num_runs(50)
errors=[]
for tau in taus:
#set necessary parameter
rr.set_tau(tau)
result=cross.evaluate()
result=CrossValidationResult.obtain_from_generic(result)
#Enlist mean error for all runs
errors.append(result.mean)
figure(figsize=(20,6))
suptitle("Finding best (tau) parameter using cross-validation", fontsize=12)
p=subplot(121)
title("Ridge Regression")
plot(taus, errors, linewidth=3)
p.set_xscale('log')
p.set_ylim([0, 80])
xlabel("Taus")
ylabel("Mean Squared Error")
cross=CrossValidation(krr, feats, labels, split, metric)
cross.set_num_runs(50)
errors=[]
for tau in taus:
krr.set_tau(tau)
result=cross.evaluate()
result=CrossValidationResult.obtain_from_generic(result)
#print tau, "error", result.mean
errors.append(result.mean)
p2=subplot(122)
title("Kernel Ridge regression")
plot(taus, errors, linewidth=3)
p2.set_xscale('log')
xlabel("Taus")
_=ylabel("Mean Squared Error")
"""
Explanation: Let us use cross-validation to compare various values of the tau parameter for ridge regression (Regression notebook). We will use MeanSquaredError as the performance metric. Note that normal splitting is used, since it might be impossible to generate "good" splits using stratified splitting in the case of regression, where the labels take continuous values.
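As a reminder of what the metric computes, a hand-rolled equivalent of mean squared error is just:

```python
import numpy as np

def mean_squared_error(y_true, y_pred):
    """Average squared difference between targets and predictions."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean((y_true - y_pred) ** 2))

print(mean_squared_error([3.0, -0.5, 2.0], [2.5, 0.0, 2.0]))
```

Lower values are better, so below we look for the tau that minimises this quantity.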
End of explanation
"""
n=50
widths=logspace(-2, 3, n)
krr.set_tau(0.1)
metric=MeanSquaredError()
k=5
split=CrossValidationSplitting(labels, k)
cross=CrossValidation(krr, feats, labels, split, metric)
cross.set_num_runs(10)
errors=[]
for width in widths:
kernel.set_width(width)
result=cross.evaluate()
result=CrossValidationResult.obtain_from_generic(result)
#print width, "error", result.mean
errors.append(result.mean)
figure(figsize=(15,5))
p=subplot(121)
title("Finding best width using cross-validation")
plot(widths, errors, linewidth=3)
p.set_xscale('log')
xlabel("Widths")
_=ylabel("Mean Squared Error")
"""
Explanation: A low value of the error certifies a good pick for the tau parameter, which should be easy to conclude from the plots. In the case of ridge regression the value of tau, i.e. the amount of regularization, doesn't seem to matter much, but it does in the case of kernel ridge regression. One interpretation of this could be the lack of overfitting in the feature space for ridge regression and the occurrence of overfitting in the new kernel space in which kernel ridge regression operates. Next we will compare a range of values for the width of the Gaussian kernel used in kernel ridge regression.
End of explanation
"""
n=40
taus = logspace(-3, 0, n)
widths=logspace(-1, 4, n)
cross=CrossValidation(krr, feats, labels, split, metric)
cross.set_num_runs(1)
x, y=meshgrid(taus, widths)
grid=array((ravel(x), ravel(y)))
print(grid.shape)
errors=[]
for i in range(0, n*n):
krr.set_tau(grid[:,i][0])
kernel.set_width(grid[:,i][1])
result=cross.evaluate()
result=CrossValidationResult.obtain_from_generic(result)
errors.append(result.mean)
errors=array(errors).reshape((n, n))
from mpl_toolkits.mplot3d import Axes3D
#taus = logspace(0.5, 1, n)
jet()
fig=figure(figsize(15,7))
ax=subplot(121)
c=pcolor(x, y, errors)
_=contour(x, y, errors, linewidths=1, colors='black')
_=colorbar(c)
xlabel('Taus')
ylabel('Widths')
ax.set_xscale('log')
ax.set_yscale('log')
ax1=fig.add_subplot(122, projection='3d')
ax1.plot_wireframe(log10(y),log10(x), errors, linewidths=2, alpha=0.6)
ax1.view_init(30,-40)
xlabel('Taus')
ylabel('Widths')
_=ax1.set_zlabel('Error')
"""
Explanation: The values for the kernel parameter and tau may not be independent of each other, so the values we have may not be optimal. A brute force way to do this would be to try all the pairs of these values but it is only feasible for a low number of parameters.
End of explanation
"""
#use the best parameters
rr.set_tau(1)
krr.set_tau(0.05)
kernel.set_width(2)
title_='Performance on Boston Housing dataset'
print("%50s" % title_)
for machine in regression_models:
metric=MeanSquaredError()
cross=CrossValidation(machine, feats, labels, split, metric)
cross.set_num_runs(25)
result=cross.evaluate()
result=CrossValidationResult.obtain_from_generic(result)
print("-"*80)
print("|", "%30s" % machine.get_name(), "|", "%20s" % metric.get_name(), "|", "%20s" % result.mean, "|")
print("-"*80)
"""
Explanation: Let us approximately pick the good parameters using the plots. Now that we have the best parameters, let us compare the various regression models on the data set.
End of explanation
"""
#Root
param_tree_root=ModelSelectionParameters()
#Parameter tau
tau=ModelSelectionParameters("tau")
param_tree_root.append_child(tau)
# also R_LINEAR/R_LOG is available as type
min_value=0.01
max_value=1
type_=R_LINEAR
step=0.05
base=2
tau.build_values(min_value, max_value, type_, step, base)
"""
Explanation: Model selection using Grid Search
A standard way of selecting the best parameters of a learning algorithm is grid search. This is done by an exhaustive search of a specified parameter space. CModelSelectionParameters is used to select the various parameters and their ranges to be used for model selection. A tree-like structure is used, where the nodes can be CSGObjects or the parameters to the object. The range of values to be searched for the parameters is set using the build_values() method.
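Conceptually, grid search is nothing more than an exhaustive loop over the Cartesian product of the candidate values. A sketch with a made-up scoring function (the names here are illustrative, not Shogun's API):

```python
import itertools

def grid_search(score_fn, grid):
    """Return the parameter dict from `grid` that maximizes score_fn."""
    names = sorted(grid)
    best_params, best_score = None, float('-inf')
    for values in itertools.product(*(grid[name] for name in names)):
        params = dict(zip(names, values))
        score = score_fn(**params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Hypothetical score surface peaking at tau=0.1, width=2
score = lambda tau, width: -((tau - 0.1) ** 2 + (width - 2) ** 2)
best, _ = grid_search(score, {'tau': [0.01, 0.1, 1.0], 'width': [1, 2, 4]})
print(best)
```

In practice the score function is a full cross-validation run per candidate, which is why the cost grows quickly with the number of parameters.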
End of explanation
"""
#kernel object
param_gaussian_kernel=ModelSelectionParameters("kernel", kernel)
gaussian_kernel_width=ModelSelectionParameters("log_width")
gaussian_kernel_width.build_values(0.1, 6.0, R_LINEAR, 0.5, 2.0)
#kernel parameter
param_gaussian_kernel.append_child(gaussian_kernel_width)
param_tree_root.append_child(param_gaussian_kernel)
# cross validation instance used
cross_validation=CrossValidation(krr, feats, labels, split, metric)
cross_validation.set_num_runs(1)
# model selection instance
model_selection=GridSearchModelSelection(cross_validation, param_tree_root)
print_state=False
# TODO: enable it once crossval has been fixed
#best_parameters=model_selection.select_model(print_state)
#best_parameters.apply_to_machine(krr)
#best_parameters.print_tree()
result=cross_validation.evaluate()
result=CrossValidationResult.obtain_from_generic(result)
print('Error with best parameters:', result.mean)
"""
Explanation: Next we will create a CModelSelectionParameters instance with a kernel object, which has to be appended to the root node. The kernel object itself will be appended with a kernel width parameter, which is the parameter we wish to search.
End of explanation
"""
|
flowersteam/naminggamesal | notebooks/2_Intro_Strategy.ipynb | agpl-3.0 | import naminggamesal.ngstrat as ngstrat
import naminggamesal.ngvoc as ngvoc
"""
Explanation: Strategies
The strategy object describes the behaviour of an agent, given its vocabulary. The main algorithms that vary among strategies are:
* how to choose a link (meaning-word) to enact,
* how to guess a meaning from a word
* how to update the vocabulary
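A minimal, self-contained sketch of such a strategy, using a plain dictionary of association scores as the vocabulary (the names below are illustrative, not the actual API, which lives in naminggamesal.ngstrat and naminggamesal.ngvoc):

```python
import random

class NaiveStrategy:
    """Toy strategy: random picks, reinforce a meaning-word link on success."""

    def pick_mw(self, voc, meanings, words):
        # choose a meaning at random, then its best-scored word (or a random new one)
        m = random.choice(meanings)
        known = voc.get(m, {})
        w = max(known, key=known.get) if known else random.choice(words)
        return m, w

    def guess_m(self, w, voc, meanings):
        # interpret the word as any meaning it is linked to, else guess blindly
        candidates = [m for m in voc if w in voc[m]]
        return random.choice(candidates) if candidates else random.choice(meanings)

    def update(self, voc, m, w, success):
        voc.setdefault(m, {})
        voc[m][w] = voc[m].get(w, 0) + (1 if success else -1)

strat = NaiveStrategy()
voc = {}
m, w = strat.pick_mw(voc, meanings=[0, 1, 2], words=['a', 'b'])
strat.update(voc, m, w, success=True)
print(voc)
```

The real strategy objects below follow the same three-part structure, but operate on matrix vocabularies and keep extra per-agent memory.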
End of explanation
"""
import random  # used below to create random vocabulary links

M = 5
W = 10
voc_cfg = {
'voc_type':'matrix',
'M':M,
'W':W
}
nlink = 0
voctest_speaker = ngvoc.Vocabulary(**voc_cfg)
for i in range(0, nlink):
voctest_speaker.add(random.randint(0,M-1), random.randint(0,W-1), 0.2)
voctest_hearer=ngvoc.Vocabulary(**voc_cfg)
for i in range(0, nlink):
voctest_hearer.add(random.randint(0,M-1), random.randint(0,W-1), 0.2)
print("Speaker:")
print(voctest_speaker)
print(" ")
print("Hearer:")
print(voctest_hearer)
strat_cfg={"strat_type":"success_threshold_epirob",'vu_cfg':{'vu_type':'BLIS'},}
teststrat=ngstrat.Strategy(**strat_cfg)
teststrat
"""
Explanation: Let's create a strategy. We will also need two vocabularies to work on (speaker and hearer). (more info on other strategy types: Design_newStrategy.ipynb)
End of explanation
"""
memory_s=teststrat.init_memory(voctest_speaker) #Not important for the naive strategy, here it simply is {}
memory_h=teststrat.init_memory(voctest_hearer)
print("Initial vocabulary of the speaker:")
print(voctest_speaker)
print(" ")
print("Initial vocabulary of the hearer:")
print(voctest_hearer)
print(" ")
ms=teststrat.pick_m(voctest_speaker,memory_s,context=[])
print("Meaning chosen by speaker:")
print(ms)
print (" ")
w=teststrat.pick_w(voc=voctest_speaker,mem=memory_s,m=ms,context=[])
print("Word uttered by speaker:")
print(w)
print (" ")
mh=teststrat.guess_m(w,voctest_hearer,memory_h,context=[])
print("Meaning interpreted by hearer:")
print(mh)
print (" ")
if (ms==mh):
print("Success!")
bool_succ = 1
else:
bool_succ = 0
print("Failure!")
print(" ")
teststrat.update_speaker(ms,w,mh,voctest_speaker,memory_s,bool_succ)
teststrat.update_hearer(ms,w,mh,voctest_hearer,memory_h,bool_succ)
print("Updated vocabulary of the speaker:")
print(voctest_speaker)
print(" ")
print("Updated vocabulary of the hearer:")
print(voctest_hearer)
"""
Explanation: Now that we have a strategy, we can test the different functions. Execute the following cell several times:
!! The vocabularies are modified each time, but this way you can observe the progressive growth of the number of links !!
End of explanation
"""
#voctest_speaker.add(0,0,1)
#voctest_speaker.add(0,0,0)
#voctest_hearer.add(0,0,1)
#voctest_hearer.add(0,0,0)
print("Speaker:")
print(voctest_speaker)
print(" ")
print("Hearer:")
print(voctest_hearer)
"""
Explanation: Here you can modify by hand the 2 vocabularies before re-executing the code:
End of explanation
"""
voctest_speaker.visual()
teststrat.visual(voc=voctest_speaker,mem=memory_s,vtype="pick_m",iterr=500)
voctest_speaker.visual()
teststrat.visual(voc=voctest_speaker,mem=memory_s,vtype="pick_mw",iterr=500)
voctest_speaker.visual()
teststrat.visual(voc=voctest_speaker,mem=memory_s,vtype="pick_w",iterr=500)
voctest_hearer.visual()
teststrat.visual(voc=voctest_hearer,mem=memory_h,vtype="guess_m",iterr=500)
dir(teststrat)
teststrat.voc_update
"""
Explanation: Approximation of the probability density of the different procedures of the strategy:
End of explanation
"""
|
jinntrance/MOOC | coursera/ml-foundations/week6/Deep Features for Image Retrieval.ipynb | cc0-1.0 | import graphlab
"""
Explanation: Building an image retrieval system with deep features
Fire up GraphLab Create
End of explanation
"""
image_train = graphlab.SFrame('image_train_data/')
image_test = graphlab.SFrame('image_test_data/')
"""
Explanation: Load the CIFAR-10 dataset
We will use a popular benchmark dataset in computer vision called CIFAR-10.
(We've reduced the data to just 4 categories = {'cat','bird','automobile','dog'}.)
This dataset is already split into a training set and test set. In this simple retrieval example, there is no notion of "testing", so we will only use the training data.
End of explanation
"""
#deep_learning_model = graphlab.load_model('http://s3.amazonaws.com/GraphLab-Datasets/deeplearning/imagenet_model_iter45')
#image_train['deep_features'] = deep_learning_model.extract_features(image_train)
image_train.head()
"""
Explanation: Computing deep features for our images
The two lines below allow us to compute deep features. This computation takes a little while, so we have already computed them and saved the results as a column in the data you loaded.
(Note that if you would like to compute such deep features and have a GPU on your machine, you should use the GPU enabled GraphLab Create, which will be significantly faster for this task.)
End of explanation
"""
knn_model = graphlab.nearest_neighbors.create(image_train[image_train['label']=='cat'],features=['deep_features'],
label='id')
"""
Explanation: Train a nearest-neighbors model for retrieving images using deep features
We will now build a simple image retrieval system that finds the nearest neighbors for any image.
End of explanation
"""
graphlab.canvas.set_target('ipynb')
cat = image_train[18:19]
cat['image'].show()
knn_model.query(image_test[0:1])
"""
Explanation: Use image retrieval model with deep features to find similar images
Let's find similar images to this cat picture.
End of explanation
"""
def get_images_from_ids(query_result):
return image_train.filter_by(query_result['reference_label'],'id')
cat_neighbors = get_images_from_ids(knn_model.query(cat))
cat_neighbors['image'].show()
"""
Explanation: We are going to create a simple function to view the nearest neighbors to save typing:
End of explanation
"""
car = image_train[8:9]
car['image'].show()
get_images_from_ids(knn_model.query(car))['image'].show()
"""
Explanation: Very cool results showing similar cats.
Finding similar images to a car
End of explanation
"""
show_neighbors = lambda i: get_images_from_ids(knn_model.query(image_train[i:i+1]))['image'].show()
show_neighbors(8)
show_neighbors(26)
"""
Explanation: Just for fun, let's create a lambda to find and show nearest neighbor images
End of explanation
"""
|
marcus-nystrom/python_course | Week2_lecture.ipynb | gpl-3.0 | # This is a sentence
sentence = 'This is a rather long sentence. I want to find the number of words with two letters'
# This is the code you need to find the number of words of length 2 (e.g., is, to, and of)
words = sentence.split(' ') # Split the sentence string into a list of words, the space between the
# quotations marks means that the sentence is
# split at every space character.
# Using sentence.split(',') would instead divide the sentence at commas.
print(words)
# Now count the number of words of length 2
nWords_len2 = 0 # We need this variable for word counting
for word in words: # Go through all the words in the list, one by one
if len(word) == 2: # If the length of the word is equal to 2
nWords_len2 += 1 # This means increasing the value of nWords_len2 by +1.
# Same as nWords_len2 = nWords_len2 + 1
print(nWords_len2) # There are three words with two letters (is, to, of)
"""
Explanation: Lecture notes from second week
Programming for the Behavioral Sciences
Last week, basic concepts in Python were introduced. This week, functions will be introduced. A function is a piece of code that helps you to organize, re-use, and share your code. We have already used many functions in this course, for instance, print() and range(). More information and examples of functions can be found here:
https://www.tutorialspoint.com/python/python_functions.htm
Let's start with an example. This time, enter the code in the 'editor' window in Spyder, and press the green, triangle-shaped button to run your code. Any output of your code is shown in the IPython console.
Functions
Example: Assume that you have a long text, and you want to count the number of words with two letters. Further assume that this is a really important part of your research; you do it every day from many large bodies of text. This is how it can be done.
End of explanation
"""
def find_len2_words(input_text):
'''Returns the number of words containing two letters
Args:
input_text (str): input text
Returns:
nWords (int): number of words
''' # This is called a doc-string and is shown as help when you try to use the function
words = input_text.split(' ')
nWords = 0
for word in words:
if len(word) == 2:
nWords += 1
return nWords
"""
Explanation: Imagine this is a task we do every day, wouldn't it be nice to have a way to perform this without re-typing all this code every time? Something like:
Now you can easily and quickly repeat the task.
find_len2_words() is in fact a function. Let's look at how it is defined.
End of explanation
"""
nWords_with_two_letters = find_len2_words(sentence)
print(nWords_with_two_letters)
"""
Explanation: The function (and every function) starts with def, followed by the name of the function and the input variables within parentheses. In this case there is one input variable, but there could be two or more.
The next line contains a so-called 'doc [documentation] string', which tells the user what the function does, and what the inputs (arguments) and outputs (what the function returns) are.
At the end of the function, there is a 'return' statement. This is what the function outputs, or returns. One could see a function as a 'black box' which, provided an input, returns an output. Many times, the programmer does not see the content of the box (the function), but can use it anyway. This is the case with the print() function, for instance; we have used it many times, but we don't know what's 'under the hood'.
Now let's repeat the task to find the number of two-letter words in the 'sentence' string, but this time by using the function.
End of explanation
"""
def find_lenn_words(input_text, n=2):
'''Returns the number of words containing n letters.
If no input n is given, n = 2.
Args:
input_text (str): input text.
n (int, optional): Defaults to 2.
Returns:
nWords (int): number of words.
'''
words = input_text.split(' ')
nWords = 0
for word in words:
if len(word) == n:
nWords += 1
return nWords
# Test the function
my_input_text = sentence # borrow the sentence defined above as input
nWords = find_lenn_words(my_input_text) # What happens if I don't specify the length n?
nWords2 = find_lenn_words(my_input_text, n = 2)
nWords6 = find_lenn_words(my_input_text, n = 6)
# Print the results
print(nWords, nWords2, nWords6)
"""
Explanation: We completed the task in just one line of code and got the same result! The function can easily be re-used and even shared with colleagues in your community.
Function arguments
There may also be an interest in finding the number of words with other lengths, e.g., three or four letters. Let's see how the above function can be generalized.
End of explanation
"""
variable_definded_outside_function = 12
def myfun1(a, b):
''' Adds two number and returns the result
Args:
a (int)
b (int)
Returns:
c (int)
'''
c = a + b
# Test to print a variable defined outside of the function
print(variable_definded_outside_function) # This works! The function can 'see'
# variables defined outside of the function
return c
# Try the function
a1 = 5
a2 = 4
myres = myfun1(a1, a2)
print(myres)
"""
Explanation: Local and global variables
Some variable are local to the function, i.e., they exist and can only be used inside that function 'box'. Global variables can be used both inside and outside the function.
End of explanation
"""
print(a) # a (and b and c) only exist inside the function (local scope), so trying to print a here raises a NameError
"""
Explanation: What if I try to print the local variable 'a' outside of the function?
End of explanation
"""
x = 5 # Define a global variable x = 5
def return_global():
'''Returns global variable'''
return x
def modify_global():
'''Modifies global variable. '''
# A global variable can only be modified if it is defined as global within the function
global x
x = 'global'
return x
def create_local():
'''Creates a local variable x'''
# The local variable 'x' knows nothing about the global variable 'x'.
x = 'local'
return x
# Test the functions
y0 = return_global() # Returns 5
y1 = modify_global() # Returns 'global'
y2 = create_local() # Returns 'local'
y3 = return_global() # Now returns 'global', why?
print(y0, y1, y2, y3)
"""
Explanation: Consider the code below and try to understand why the output looks the way it does.
End of explanation
"""
import matplotlib.pyplot as plt # a library for plotting
import pandas as pd # a library for reading csv files
import numpy as np # A library to work with numbers
# Read data from a csv file and plot them. Data are in a numpy array.
# The data are stored in a folder 'img' in the
# same directory as the script containing this code.
eye_velocity = np.array(pd.read_csv("img/eye_velocity.csv")).flatten()
# Plot the data and annotate the plot
plt.plot(eye_velocity)
plt.xlabel('Time (ms)')
plt.ylabel('Velocity (deg/s)')
plt.show()
# Zoom in on a smaller part of the data
eye_velocity_short = eye_velocity[100:800]
plt.plot(eye_velocity_short)
plt.xlabel('Time (ms)')
plt.ylabel('Velocity (deg/s)')
# Indicate fixations with interval shaded
plt.axvspan(90, 200, color='y', alpha=0.5, lw=0)
plt.axvspan(270, 360, color='y', alpha=0.5, lw=0)
plt.axvspan(415, 700, color='y', alpha=0.5, lw=0)
# Show the results
plt.show()
# Fixations are located where velocity < 30 deg/s
threshold = 30
fixation_samples = (eye_velocity_short < threshold) * 1 # *1 to convert True / False to 1 / 0
plt.plot(fixation_samples)
plt.show()
"""
Explanation: Introduction to Lab 2
Implementing an event detector for eye tracking data, which consist of eye movements recorded at 1000 Hz from a person reading a text. An event detector finds prototypical patterns in the data known as fixations (when the eye is still) and saccades (when the eye moves fast). We will use the velocity of the eye movements to find the number and durations of the fixations.
End of explanation
"""
import numpy as np # Now we need numpy, so let's import it
plt.plot(np.diff(fixation_samples)) # diff means to the the difference between two consecutive
# samples i - (i-1). [0, 1, 0] -> [1, -1]
plt.show()
"""
Explanation: Now we have simplified the problem. Let's find the onsets (positions where the plot changes from 0->1) and offsets (positions where the plot changes from 1->0) of the fixations.
End of explanation
"""
fixation_samples_0 = np.hstack((0, fixation_samples, 0))
plt.plot(np.diff(fixation_samples_0)) # diff means to the the difference between two consecutive
# samples i - (i-1). [0, 1, 0] -> [1, -1]
plt.show()
"""
Explanation: One problem remains: the offset of the last fixation was not found. We can solve this as follows.
End of explanation
"""
# Find the fixation onsets
fix_onsets = np.where(np.diff(fixation_samples_0) == 1)
print(fix_onsets)
"""
Explanation: Now the problem is made even simpler. To find the number of fixations, we need to find the number of 1s (representing the onsets) or the number of -1s (representing the offsets). The duration of a fixation can be computed by taking the difference between offset and onset locations. To find out where in an array something happens, the 'where' function in numpy can be used. Note that the output is a tuple!
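Putting the pieces together, onsets, offsets, and fixation durations (in samples; at 1000 Hz one sample equals one millisecond) can be computed in one small function. A self-contained sketch:

```python
import numpy as np

def fixation_durations(fixation_samples):
    """Return (onsets, offsets, durations) of the runs of 1s in a 0/1 array."""
    padded = np.hstack((0, fixation_samples, 0))  # guarantees clean edges
    d = np.diff(padded)
    onsets = np.where(d == 1)[0]     # np.where returns a tuple, hence the [0]
    offsets = np.where(d == -1)[0]
    return onsets, offsets, offsets - onsets

fix = np.array([0, 1, 1, 1, 0, 0, 1, 1, 0])
onsets, offsets, durations = fixation_durations(fix)
print(onsets, offsets, durations)
```

This is the same padding-and-diff trick as above, wrapped up so it can be applied directly to the thresholded velocity data.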
End of explanation
"""
|
google/eng-edu | ml/cc/prework/zh-CN/creating_and_manipulating_tensors.ipynb | apache-2.0 | # Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2017 Google LLC.
End of explanation
"""
from __future__ import print_function
import tensorflow as tf
try:
tf.contrib.eager.enable_eager_execution()
print("TF imported with eager execution!")
except ValueError:
print("TF already imported with eager execution!")
"""
Explanation: Creating and Manipulating Tensors
Learning objectives:
* Initialize and assign values to TensorFlow Variables
* Create and manipulate tensors
* Refresh your memory of addition and multiplication in linear algebra (if these topics are new to you, see an introduction to matrix addition and multiplication)
* Become familiar with basic TensorFlow math and array operations
End of explanation
"""
primes = tf.constant([2, 3, 5, 7, 11, 13], dtype=tf.int32)
print("primes:", primes)
ones = tf.ones([6], dtype=tf.int32)
print("ones:", ones)
just_beyond_primes = tf.add(primes, ones)
print("just_beyond_primes:", just_beyond_primes)
twos = tf.constant([2, 2, 2, 2, 2, 2], dtype=tf.int32)
primes_doubled = primes * twos
print("primes_doubled:", primes_doubled)
"""
Explanation: Vector addition
You can perform many typical mathematical operations on tensors (TF API). The code below creates the following vectors (1-D tensors), all having exactly six elements:
A primes vector containing prime numbers.
A ones vector containing all 1 values.
A vector created by performing element-wise addition over the first two vectors.
A vector created by doubling the elements in the primes vector.
End of explanation
"""
some_matrix = tf.constant([[1, 2, 3], [4, 5, 6]], dtype=tf.int32)
print(some_matrix)
print("\nvalue of some_matrix is:\n", some_matrix.numpy())
"""
Explanation: Printing a tensor returns not only its value, but also its shape (discussed in the next section) and the type of value stored in the tensor. Calling the numpy method of a tensor returns the value of the tensor as a NumPy array:
End of explanation
"""
# A scalar (0-D tensor).
scalar = tf.zeros([])
# A vector with 3 elements.
vector = tf.zeros([3])
# A matrix with 2 rows and 3 columns.
matrix = tf.zeros([2, 3])
print('scalar has shape', scalar.get_shape(), 'and value:\n', scalar.numpy())
print('vector has shape', vector.get_shape(), 'and value:\n', vector.numpy())
print('matrix has shape', matrix.get_shape(), 'and value:\n', matrix.numpy())
"""
Explanation: Tensor shapes
Shapes are used to characterize the size and number of dimensions of a tensor. The shape of a tensor is expressed as a list, with the ith element representing the size along dimension i. The length of the list then indicates the rank of the tensor (i.e., the number of dimensions).
For more information, see the TensorFlow documentation.
A few basic examples:
End of explanation
"""
primes = tf.constant([2, 3, 5, 7, 11, 13], dtype=tf.int32)
print("primes:", primes)
one = tf.constant(1, dtype=tf.int32)
print("one:", one)
just_beyond_primes = tf.add(primes, one)
print("just_beyond_primes:", just_beyond_primes)
two = tf.constant(2, dtype=tf.int32)
primes_doubled = primes * two
print("primes_doubled:", primes_doubled)
"""
Explanation: Broadcasting
In mathematics, you can only perform element-wise operations (e.g. add and equals) on tensors of the same shape. In TensorFlow, however, you may perform operations on tensors that would traditionally have been incompatible. TensorFlow supports broadcasting (a concept borrowed from numpy), in which the smaller array in an element-wise operation is enlarged to have the same shape as the larger array. For example, via broadcasting:
If an operand requires a size [6] tensor, a size [1] or a size [] tensor can serve as an operand.
If an operation requires a size [4, 6] tensor, any of the following sizes can serve as an operand:
[1, 6]
[6]
[]
If an operation requires a size [3, 5, 6] tensor, any of the following sizes can serve as an operand:
[1, 5, 6]
[3, 1, 6]
[3, 5, 1]
[1, 1, 1]
[5, 6]
[1, 6]
[6]
[1]
[]
NOTE: When a tensor is broadcast, its entries are conceptually copied. (They are not actually copied, for performance reasons; broadcasting was invented as a performance optimization.)
The full broadcasting ruleset is well described in the easy-to-read numpy broadcasting documentation.
The following code performs the same tensor arithmetic as before, but instead uses scalar values (instead of vectors containing all 1s or all 2s) and broadcasting.
End of explanation
"""
# Write your code for Task 1 here.
"""
Explanation: Exercise #1: Arithmetic over vectors.
Perform vector arithmetic to create a "just_under_primes_squared" vector, where the ith element is equal to the ith element in primes squared, minus 1. For example, the second element would be equal to 3 * 3 - 1 = 8.
Use either the tf.multiply or tf.pow ops to square the value of each element in the primes vector.
End of explanation
"""
# Task: Square each element in the primes vector, then subtract 1.
def solution(primes):
primes_squared = tf.multiply(primes, primes)
neg_one = tf.constant(-1, dtype=tf.int32)
just_under_primes_squared = tf.add(primes_squared, neg_one)
return just_under_primes_squared
def alternative_solution(primes):
primes_squared = tf.pow(primes, 2)
one = tf.constant(1, dtype=tf.int32)
just_under_primes_squared = tf.subtract(primes_squared, one)
return just_under_primes_squared
primes = tf.constant([2, 3, 5, 7, 11, 13], dtype=tf.int32)
just_under_primes_squared = solution(primes)
print("just_under_primes_squared:", just_under_primes_squared)
"""
Explanation: Solution
Click below for a solution.
End of explanation
"""
# A 3x4 matrix (2-d tensor).
x = tf.constant([[5, 2, 4, 3], [5, 1, 6, -2], [-1, 3, -1, -2]],
dtype=tf.int32)
# A 4x2 matrix (2-d tensor).
y = tf.constant([[2, 2], [3, 5], [4, 5], [1, 6]], dtype=tf.int32)
# Multiply `x` by `y`; result is 3x2 matrix.
matrix_multiply_result = tf.matmul(x, y)
print(matrix_multiply_result)
"""
Explanation: Matrix multiplication
In linear algebra, when multiplying two matrices, the number of columns of the first matrix must equal the number of rows of the second matrix.
It is valid to multiply a 3x4 matrix by a 4x2 matrix; this will result in a 3x2 matrix.
It is invalid to multiply a 4x2 matrix by a 3x4 matrix.
End of explanation
"""
# Create an 8x2 matrix (2-D tensor).
matrix = tf.constant(
[[1, 2], [3, 4], [5, 6], [7, 8], [9, 10], [11, 12], [13, 14], [15, 16]],
dtype=tf.int32)
reshaped_2x8_matrix = tf.reshape(matrix, [2, 8])
reshaped_4x4_matrix = tf.reshape(matrix, [4, 4])
print("Original matrix (8x2):")
print(matrix.numpy())
print("Reshaped matrix (2x8):")
print(reshaped_2x8_matrix.numpy())
print("Reshaped matrix (4x4):")
print(reshaped_4x4_matrix.numpy())
"""
Explanation: Tensor reshaping
With tensor addition and matrix multiplication each imposing constraints on operands, TensorFlow programmers must frequently reshape tensors.
You can use the tf.reshape method to reshape a tensor.
For example, you can reshape an 8x2 tensor into a 2x8 tensor or a 4x4 tensor:
End of explanation
"""
# Create an 8x2 matrix (2-D tensor).
matrix = tf.constant(
[[1, 2], [3, 4], [5, 6], [7, 8], [9, 10], [11, 12], [13, 14], [15, 16]],
dtype=tf.int32)
reshaped_2x2x4_tensor = tf.reshape(matrix, [2, 2, 4])
one_dimensional_vector = tf.reshape(matrix, [16])
print("Original matrix (8x2):")
print(matrix.numpy())
print("Reshaped 3-D tensor (2x2x4):")
print(reshaped_2x2x4_tensor.numpy())
print("1-D vector:")
print(one_dimensional_vector.numpy())
"""
Explanation: You can also use tf.reshape to change the number of dimensions (the "rank") of a tensor.
For example, you can reshape that 8x2 tensor into a 3-D 2x2x4 tensor or a 1-D 16-element tensor.
End of explanation
"""
# Write your code for Task 2 here.
"""
Explanation: Exercise #2: Reshape two tensors in order to multiply them.
The following two vectors are incompatible for matrix multiplication:
a = tf.constant([5, 3, 2, 7, 1, 4])
b = tf.constant([4, 6, 3])
Reshape these vectors into compatible operands for matrix multiplication.
Then, invoke a matrix multiplication operation on the reshaped tensors.
End of explanation
"""
# Task: Reshape two tensors in order to multiply them
a = tf.constant([5, 3, 2, 7, 1, 4])
b = tf.constant([4, 6, 3])
reshaped_a = tf.reshape(a, [2, 3])
reshaped_b = tf.reshape(b, [3, 1])
c = tf.matmul(reshaped_a, reshaped_b)
print("reshaped_a (2x3):")
print(reshaped_a.numpy())
print("reshaped_b (3x1):")
print(reshaped_b.numpy())
print("reshaped_a x reshaped_b (2x1):")
print(c.numpy())
"""
Explanation: Solution
Click below for a solution.
Remember, when multiplying two matrices, the number of columns of the first matrix must equal the number of rows of the second matrix.
One possible solution is to reshape a into a 2x3 matrix and reshape b into a 3x1 matrix, resulting in a 2x1 matrix after multiplication:
End of explanation
"""
# Create a scalar variable with the initial value 3.
v = tf.contrib.eager.Variable([3])
# Create a vector variable of shape [1, 4], with random initial values,
# sampled from a normal distribution with mean 1 and standard deviation 0.35.
w = tf.contrib.eager.Variable(tf.random_normal([1, 4], mean=1.0, stddev=0.35))
print("v:", v.numpy())
print("w:", w.numpy())
"""
Explanation: An alternative solution would have been to reshape a into a 6x1 matrix and b into a 1x3 matrix, resulting in a 6x3 matrix after multiplication.
Variables, Initialization and Assignment
So far, all the operations we performed were on static values (tf.constant); calling numpy() always returned the same result. TensorFlow allows you to define Variable objects, whose values can be changed.
When creating a variable, you can set an initial value explicitly, or you can use an initializer (like a distribution):
End of explanation
"""
v = tf.contrib.eager.Variable([3])
print(v.numpy())
tf.assign(v, [7])
print(v.numpy())
v.assign([5])
print(v.numpy())
"""
Explanation: To change the value of a variable, use the assign op:
End of explanation
"""
v = tf.contrib.eager.Variable([[1, 2, 3], [4, 5, 6]])
print(v.numpy())
try:
print("Assigning [7, 8, 9] to v")
v.assign([7, 8, 9])
except ValueError as e:
print("Exception:", e)
"""
Explanation: When assigning a new value to a variable, its shape must be equal to its previous shape:
End of explanation
"""
# Write your code for Task 3 here.
"""
Explanation: There are many more topics about variables that we did not cover here, such as loading and storing. To learn more, see the TensorFlow docs.
Exercise #3: Simulate 10 rolls of two dice.
Create a dice simulation, which generates a 10x3 2-D tensor in which:
Columns 1 and 2 each hold one throw of one six-sided die (with values 1-6).
Column 3 holds the sum of Columns 1 and 2 on the same row.
For example, the first row might hold the following values:
Column 1 holds 4
Column 2 holds 3
Column 3 holds 7
You will need to explore the TensorFlow documentation to solve this task.
End of explanation
"""
# Task: Simulate 10 throws of two dice. Store the results in a 10x3 matrix.
die1 = tf.contrib.eager.Variable(
tf.random_uniform([10, 1], minval=1, maxval=7, dtype=tf.int32))
die2 = tf.contrib.eager.Variable(
tf.random_uniform([10, 1], minval=1, maxval=7, dtype=tf.int32))
dice_sum = tf.add(die1, die2)
resulting_matrix = tf.concat(values=[die1, die2, dice_sum], axis=1)
print(resulting_matrix.numpy())
"""
Explanation: Solution
Click below for a solution.
We will place the dice throws inside two separate 10x1 matrices, die1 and die2. The sum of the two dice rolls is stored in dice_sum, and the three 10x1 matrices are then concatenated into a single matrix, creating a 10x3 matrix.
Alternatively, we could have placed the dice throws inside a single 10x2 matrix, but adding different columns of the same matrix would have been more complicated. We also could have placed the dice throws inside two 1-D tensors (vectors), but doing so would have required transposing the result.
End of explanation
"""
|
mne-tools/mne-tools.github.io | stable/_downloads/499a81f33500445fc2e1eac0be346d47/temporal_whitening.ipynb | bsd-3-clause | # Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr>
#
# License: BSD-3-Clause
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt
import mne
from mne.time_frequency import fit_iir_model_raw
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
meg_path = data_path / 'MEG' / 'sample'
raw_fname = meg_path / 'sample_audvis_raw.fif'
proj_fname = meg_path / 'sample_audvis_ecg-proj.fif'
raw = mne.io.read_raw_fif(raw_fname)
proj = mne.read_proj(proj_fname)
raw.add_proj(proj)
raw.info['bads'] = ['MEG 2443', 'EEG 053'] # mark bad channels
# Set up pick list: Gradiometers - bad channels
picks = mne.pick_types(raw.info, meg='grad', exclude='bads')
order = 5 # define model order
picks = picks[:1]
# Estimate AR models on raw data
b, a = fit_iir_model_raw(raw, order=order, picks=picks, tmin=60, tmax=180)
d, times = raw[0, 10000:20000] # look at one channel from now on
d = d.ravel() # make flat vector
innovation = signal.convolve(d, a, 'valid')
d_ = signal.lfilter(b, a, innovation) # regenerate the signal
d_ = np.r_[d_[0] * np.ones(order), d_] # dummy samples to keep signal length
"""
Explanation: Temporal whitening with AR model
Here we fit an AR model to the data and use it
to temporally whiten the signals.
End of explanation
"""
plt.close('all')
plt.figure()
plt.plot(d[:100], label='signal')
plt.plot(d_[:100], label='regenerated signal')
plt.legend()
plt.figure()
plt.psd(d, Fs=raw.info['sfreq'], NFFT=2048)
plt.psd(innovation, Fs=raw.info['sfreq'], NFFT=2048)
plt.psd(d_, Fs=raw.info['sfreq'], NFFT=2048, linestyle='--')
plt.legend(('Signal', 'Innovation', 'Regenerated signal'))
plt.show()
"""
Explanation: Plot the different time series and PSDs
End of explanation
"""
|
pombredanne/https-gitlab.lrde.epita.fr-vcsn-vcsn | doc/notebooks/automaton.push_weights.ipynb | gpl-3.0 | import vcsn
"""
Explanation: automaton.push_weights
Push the weights towards the initial states.
Preconditions:
- None
Postconditions:
- The result is equivalent to the input automaton.
Examples
End of explanation
"""
%%automaton --strip a
context = "lal_char, zmin"
$ -> 0
0 -> 1 <0>a, <1>b, <5>c
0 -> 2 <0>d, <1>e
1 -> 3 <0>e, <1>f
2 -> 3 <4>e, <5>f
3 -> $
a.push_weights()
"""
Explanation: In a Tropical Semiring
The following example is taken from mohri.2009.hwa, Figure 12.
End of explanation
"""
a.minimize()
a.push_weights().minimize()
"""
Explanation: Note that weight pushing improves the "minimizability" of weighted automata:
End of explanation
"""
%%automaton --strip a
context = "lal_char, q"
$ -> 0
0 -> 1 <0>a, <1>b, <5>c
0 -> 2 <0>d, <1>e
1 -> 3 <0>e, <1>f
2 -> 3 <4>e, <5>f
3 -> $
a.push_weights()
"""
Explanation: In $\mathbb{Q}$
Again, the following example is taken from mohri.2009.hwa, Figure 12 (subfigure 12.d lacks two transitions), but computed in $\mathbb{Q}$ rather than $\mathbb{R}$ to render more readable results.
End of explanation
"""
|
mrustl/flopy | examples/Notebooks/flopy3_LoadSWRBinaryData.ipynb | bsd-3-clause | %matplotlib inline
from IPython.display import Image
import os
import numpy as np
import matplotlib.pyplot as plt
import flopy
#Set the paths
datapth = os.path.join('..', 'data', 'swr_test')
# SWR Process binary files
files = ('SWR004.obs', 'SWR004.vel', 'SWR004.str', 'SWR004.stg', 'SWR004.flow')
"""
Explanation: FloPy
Plotting SWR Process Results
This notebook demonstrates the use of the SwrObs, SwrStage, SwrBudget, SwrFlow, SwrExchange, and SwrStructure classes to read binary SWR Process observation, stage, budget, reach-to-reach flow, reach-aquifer exchange, and structure files. It demonstrates these capabilities by loading these binary file types and showing examples of plotting SWR Process data. An example showing how the simulated water surface profile at a selected time along a selection of reaches can be plotted is also presented.
End of explanation
"""
sobj = flopy.utils.SwrObs(os.path.join(datapth, files[0]))
ts = sobj.get_data()
"""
Explanation: Load SWR Process observations
Create an instance of the SwrObs class and load the observation data.
End of explanation
"""
fig = plt.figure(figsize=(6, 12))
ax1 = fig.add_subplot(3, 1, 1)
ax1.semilogx(ts['totim']/3600., -ts['OBS1'], label='OBS1')
ax1.semilogx(ts['totim']/3600., -ts['OBS2'], label='OBS2')
ax1.semilogx(ts['totim']/3600., -ts['OBS9'], label='OBS3')
ax1.set_ylabel('Flow, in cubic meters per second')
ax1.legend()
ax = fig.add_subplot(3, 1, 2, sharex=ax1)
ax.semilogx(ts['totim']/3600., -ts['OBS4'], label='OBS4')
ax.semilogx(ts['totim']/3600., -ts['OBS5'], label='OBS5')
ax.set_ylabel('Flow, in cubic meters per second')
ax.legend()
ax = fig.add_subplot(3, 1, 3, sharex=ax1)
ax.semilogx(ts['totim']/3600., ts['OBS6'], label='OBS6')
ax.semilogx(ts['totim']/3600., ts['OBS7'], label='OBS7')
ax.set_xlim(1, 100)
ax.set_ylabel('Stage, in meters')
ax.set_xlabel('Time, in hours')
ax.legend();
"""
Explanation: Plot the data from the binary SWR Process observation file
End of explanation
"""
sobj = flopy.utils.SwrFlow(os.path.join(datapth, files[1]))
times = np.array(sobj.get_times())/3600.
obs1 = sobj.get_ts(irec=1, iconn=0)
obs2 = sobj.get_ts(irec=14, iconn=13)
obs4 = sobj.get_ts(irec=4, iconn=3)
obs5 = sobj.get_ts(irec=5, iconn=4)
"""
Explanation: Load the same data from the individual binary SWR Process files
Load discharge data from the flow file. The flow file contains the simulated flow between connected reaches for each connection in the model.
End of explanation
"""
sobj = flopy.utils.SwrStructure(os.path.join(datapth, files[2]))
obs3 = sobj.get_ts(irec=17, istr=0)
"""
Explanation: Load discharge data from the structure file. The structure file contains the simulated structure flow for each reach with a structure.
End of explanation
"""
sobj = flopy.utils.SwrStage(os.path.join(datapth, files[3]))
obs6 = sobj.get_ts(irec=13)
"""
Explanation: Load stage data from the stage file. The stage file contains the simulated stage for each reach in the model.
End of explanation
"""
sobj = flopy.utils.SwrBudget(os.path.join(datapth, files[4]))
obs7 = sobj.get_ts(irec=17)
"""
Explanation: Load budget data from the budget file. The budget file contains the simulated budget for each reach group in the model. The budget file also contains the stage data for each reach group. In this case the number of reach groups equals the number of reaches in the model.
End of explanation
"""
fig = plt.figure(figsize=(6, 12))
ax1 = fig.add_subplot(3, 1, 1)
ax1.semilogx(times, obs1['flow'], label='OBS1')
ax1.semilogx(times, obs2['flow'], label='OBS2')
ax1.semilogx(times, -obs3['strflow'], label='OBS3')
ax1.set_ylabel('Flow, in cubic meters per second')
ax1.legend()
ax = fig.add_subplot(3, 1, 2, sharex=ax1)
ax.semilogx(times, obs4['flow'], label='OBS4')
ax.semilogx(times, obs5['flow'], label='OBS5')
ax.set_ylabel('Flow, in cubic meters per second')
ax.legend()
ax = fig.add_subplot(3, 1, 3, sharex=ax1)
ax.semilogx(times, obs6['stage'], label='OBS6')
ax.semilogx(times, obs7['stage'], label='OBS7')
ax.set_xlim(1, 100)
ax.set_ylabel('Stage, in meters')
ax.set_xlabel('Time, in hours')
ax.legend();
"""
Explanation: Plot the data loaded from the individual binary SWR Process files.
Note that the plots are identical to the plots generated from the binary SWR observation data.
End of explanation
"""
sd = np.genfromtxt(os.path.join(datapth, 'SWR004.dis.ref'), names=True)
"""
Explanation: Plot simulated water surface profiles
Simulated water surface profiles can be created using the ModelCrossSection class.
Several things that we need in addition to the stage data include reach lengths and bottom elevations. We load these data from an existing file.
End of explanation
"""
fc = open(os.path.join(datapth, 'SWR004.dis.ref')).readlines()
fc
"""
Explanation: The contents of the file are shown in the cell below.
End of explanation
"""
sobj = flopy.utils.SwrStage(os.path.join(datapth, files[3]))
"""
Explanation: Create an instance of the SwrStage class for SWR Process stage data.
End of explanation
"""
iprof = sd['IRCH'] > 0
iprof[2:8] = False
dx = np.extract(iprof, sd['RLEN'])
belev = np.extract(iprof, sd['BELEV'])
"""
Explanation: Create a selection condition (iprof) that can be used to extract data for the reaches of interest (reaches 0, 1, and 8 through 17). Use this selection condition to extract reach lengths (from sd['RLEN']) and the bottom elevation (from sd['BELEV']) for the reaches of interest. The selection condition will also be used to extract the stage data for reaches of interest.
End of explanation
"""
ml = flopy.modflow.Modflow()
dis = flopy.modflow.ModflowDis(ml, nrow=1, ncol=dx.shape[0], delr=dx, top=4.5, botm=belev.reshape(1,1,12))
"""
Explanation: Create a fake model instance so that the ModelCrossSection class can be used.
End of explanation
"""
x = np.cumsum(dx)
"""
Explanation: Create an array with the x position at the downstream end of each reach, which will be used to color the plots below each reach.
End of explanation
"""
fig = plt.figure(figsize=(12, 12))
for idx, v in enumerate([19, 29, 34, 39, 44, 49, 54, 59]):
ax = fig.add_subplot(4, 2, idx+1)
s = sobj.get_data(idx=v)
stage = np.extract(iprof, s['stage'])
xs = flopy.plot.ModelCrossSection(model=ml, line={'Row': 0})
xs.plot_fill_between(stage.reshape(1,1,12), colors=['none', 'blue'], ax=ax, edgecolors='none')
linecollection = xs.plot_grid(ax=ax, zorder=10)
ax.fill_between(np.append(0., x), y1=np.append(belev[0], belev), y2=-0.5,
facecolor='0.5', edgecolor='none', step='pre')
ax.set_title('{} hours'.format(times[v]))
ax.set_ylim(-0.5, 4.5)
"""
Explanation: Plot simulated water surface profiles for 8 times.
End of explanation
"""
|
GPflow/GPflowOpt | doc/source/notebooks/structure.ipynb | apache-2.0 | import numpy as np
def fx(X):
X = np.atleast_2d(X)
# Return objective & gradient
return np.sum(np.square(X), axis=1)[:,None], 2*X
"""
Explanation: The structure of GPflowOpt
Joachim van der Herten
In this document, the structure of the GPflowOpt library is explained, including some small examples. First the Domain and Optimizer interfaces are shortly illustrated, followed by a description of the BayesianOptimizer. At the end, a step-by-step walkthrough of the BayesianOptimizer is given.
Optimization
The underlying design principles of GPflowOpt were chosen to address the following task: optimizing problems of the form
$$\underset{\boldsymbol{x} \in \mathcal{X}}{\operatorname{argmin}} f(\boldsymbol{x}).$$ The objective function $f: \mathcal{X} \rightarrow \mathbb{R}^p$ maps a candidate optimum to a score (or multiple). Here $\mathcal{X}$ represents the input domain. This domain encloses all candidate solutions to the optimization problem and can be entirely continuous (i.e., a $d$-dimensional hypercube) but may also consist of discrete and categorical parameters.
In GPflowOpt, the Domain and Optimizer interfaces and corresponding subclasses are used to explicitly represent the optimization problem.
Objective function
The objective function itself is provided and must be implemented as any python callable (function or object with implemented call operator), accepting a two dimensional numpy array with shape $(n, d)$ as an input, with $n$ the batch size. It returns a tuple of numpy arrays: the first element of the tuple has shape $(n, p)$ and returns the objective scores for each point to evaluate. The second element is the gradient in every point, shaped either $(n, d)$ if we have a single-objective optimization problem, or $(n, d, p)$ in case of a multi-objective function. If the objective function is passed on to a gradient-free optimization method, the gradient is automatically discarded. GPflowOpt provides decorators which handle batch application of a function along the n points of the input matrix, or dealing with functions which accept each feature as function argument.
Here, we define a simple quadratic objective function:
End of explanation
"""
from gpflowopt.domain import ContinuousParameter
domain = ContinuousParameter('x1', -2, 2) + ContinuousParameter('x2', -1, 2)
domain
"""
Explanation: Domain
Then, we represent $\mathcal{X}$ by composing parameters. This is how a simple continuous square domain is defined:
End of explanation
"""
from gpflowopt.optim import SciPyOptimizer
optimizer = SciPyOptimizer(domain, method='SLSQP')
optimizer.set_initial([-1,-1])
optimizer.optimize(fx)
"""
Explanation: Optimize
Based on the domain and a valid objective function, we can now easily apply one of the included optimizers to optimize objective functions. GPflowOpt defines an intuitive Optimizer interface which can be used to specify the domain, the initial point(s), constraints (to be implemented) etc. Some popular optimization approaches are provided.
Here is how our function is optimized using one of the available methods of SciPy's minimize:
End of explanation
"""
from gpflowopt.optim import MCOptimizer
optimizer = MCOptimizer(domain, 200)
optimizer.optimize(fx)
"""
Explanation: And here is how we optimize it Monte-Carlo. We can pass the same function as the gradients are automatically discarded.
End of explanation
"""
from gpflowopt.bo import BayesianOptimizer
from gpflowopt.design import FactorialDesign
from gpflowopt.acquisition import ExpectedImprovement
import gpflow
# The Bayesian Optimizer does not expect gradients to be returned
def fx(X):
X = np.atleast_2d(X)
# Return objective & gradient
return np.sum(np.square(X), axis=1)[:,None]
X = FactorialDesign(2, domain).generate()
Y = fx(X)
# initializing a standard BO model, Gaussian Process Regression with
# Matern52 ARD Kernel
model = gpflow.gpr.GPR(X, Y, gpflow.kernels.Matern52(2, ARD=True))
alpha = ExpectedImprovement(model)
# Now we must specify an optimization algorithm to optimize the acquisition
# function, each iteration.
acqopt = SciPyOptimizer(domain)
# Now create the Bayesian Optimizer
optimizer = BayesianOptimizer(domain, alpha, optimizer=acqopt)
with optimizer.silent():
r = optimizer.optimize(fx, n_iter=15)
print(r)
"""
Explanation: Bayesian Optimization
In Bayesian Optimization (BO), the typical assumption is that $f$ is expensive to evaluate and no gradients are available. The typical approach is to sequentially select a limited set of decisions $\boldsymbol{x}_0, \boldsymbol{x}_1, \ldots, \boldsymbol{x}_{n-1}$ using a sampling policy. Hence each decision $\boldsymbol{x}_i \in \mathcal{X}$ itself is the result of an optimization problem
$$\boldsymbol{x}_i = \underset{\boldsymbol{x}}{\operatorname{argmax}} \alpha_i(\boldsymbol{x})$$
Each iteration, a function $\alpha_i$ which is cheap-to-evaluate acts as a surrogate for the expensive function. It is typically a mapping of the predictive distribution of a (Bayesian) model built on all decisions and their corresponding (noisy) evaluations. The mapping introduces an order in $\mathcal{X}$ to obtain a certain goal. The typical goal within the context of BO is the search for optimality or feasibility while keeping the amount of required evaluations ($n$) a small number. As we can have several functions $f$ representing objectives and constraints, BO may invoke several models and mappings $\alpha$. These mappings are typically referred to as acquisition functions (or infill criteria). GPflowOpt defines an Acquisition interface to implement these mappings and provides implementations of some default choices. In combination with a special Optimizer implementation for BO, following steps are required for a typical workflow:
1) Define the problem domain. Its dimensionality matches the input to the objective and constraint functions. (like normal optimization)
2) Specify the (GP) models for the constraints and objectives. This involves choice of kernels, priors, fixes, transforms... this step follows the standard way of setting up GPflow models. GPflowOpt does not further wrap models hence it is possible to implement custom models in GPflow and use them directly in GPflowOpt
3) Set up the acquisition function(s) using the available built-in implementations in GPflowOpt, or design your own by implementing the Acquisition interface.
4) Set up an optimizer for the acquisition function.
5) Run the high-level BayesianOptimizer which implements a typical BO flow. BayesianOptimizer in itself is compliant with the Optimizer interface. Exceptionally, the BayesianOptimizer requires that the objective function returns no gradients.
Alternatively, advanced users requiring finer control can easily implement their own flow based on the low-level interfaces of GPflowOpt, as the coupling between these objects was intentionally kept loose.
As illustration of the described flow, the previous example is optimized using Bayesian optimization instead, with the well-known Expected Improvement acquisition function:
End of explanation
"""
|
rice-solar-physics/hot_plasma_single_nanoflares | notebooks/plot_nei_results.ipynb | bsd-2-clause | import os
import sys
import pickle
import numpy as np
import seaborn.apionly as sns
import matplotlib.pyplot as plt
from matplotlib import ticker
sys.path.append(os.path.join(os.environ['EXP_DIR'],'EBTEL_analysis/src'))
import em_binner as emb
%matplotlib inline
plt.rcParams.update({'figure.figsize' : [16,5]})
"""
Explanation: Plot Non-equilibrium Ionization Results
Compare the NEI results to the ionization equilibrium results for two pulse durations, $\tau=20$ s and $\tau=500$ s using both the temperature and density profiles as well as the emission measure distributions, $\mathrm{EM}(T)$.
End of explanation
"""
colors = {'20':sns.color_palette('deep')[0],'500':sns.color_palette('deep')[3]}
linestyles = {'single':'solid','electron':'dashed','ion':'-.'}
"""
Explanation: Set some colors and linestyles.
End of explanation
"""
loop_length = 40.e+8
"""
Explanation: Hardcode the loop length. In this paper, we only use a single loop half-length of $L=40$ Mm.
End of explanation
"""
nei_results = {'single':{},'electron':{},'ion':{}}
file_template = "../results/tau%d.%s.sol.txt"
species = ['single','electron','ion']
tau = [20,500]
for s in species:
for t in tau:
_tmp_ = np.loadtxt(file_template%(t,s))
nei_results[s]['tau%d'%t] = {'t':_tmp_[:,0],'T':_tmp_[:,1],'Teff':_tmp_[:,2],'n':_tmp_[:,3]}
"""
Explanation: First, we'll load in the NEI results.
End of explanation
"""
fig,ax = plt.subplots(1,2,sharey=True)
plt.subplots_adjust(hspace=0.0,wspace=0.0)
for t in tau:
for s in species:
ax[tau.index(t)].plot(nei_results[s]['tau%d'%t]['t'],nei_results[s]['tau%d'%t]['T']/1.e+6,
color=colors[str(t)],linestyle=linestyles[s],label=r'$\mathrm{%s}$'%s)
ax[tau.index(t)].plot(nei_results[s]['tau%d'%t]['t'],nei_results[s]['tau%d'%t]['Teff']/1.e+6,
color=sns.color_palette('bright')[2],linestyle=linestyles[s])
#scale
ax[0].set_xscale('log')
ax[1].set_xscale('log')
#limits
ax[0].set_xlim([0.5,5000])
ax[1].set_xlim([0.5,5000])
ax[0].set_ylim([0,25])
ax[1].set_ylim([0,25])
#tick labels
ax[0].yaxis.set_major_locator(ticker.MaxNLocator(nbins=6,prune='lower'))
ax[1].yaxis.set_major_locator(ticker.MaxNLocator(nbins=6))
#axes labels
ax[0].set_ylabel(r'$T$ $\mathrm{(MK)}$')
global_xlab = fig.text(0.5, 0.015, r'$t$ $\mathrm{(s)}$', ha='center', va='center',fontsize=22)
#legend
ax[0].legend(loc='best')
plt.savefig(__dest__[0],bbox_extra_artists=[global_xlab], bbox_inches='tight')
plt.show()
"""
Explanation: Now, build the temperature profiles.
End of explanation
"""
fig,ax = plt.subplots(1,2,sharey=True)
plt.subplots_adjust(wspace=0.0)
for t in tau:
for s in species:
#IEQ
binner = emb.EM_Binner(2.*loop_length,time=nei_results[s]['tau%d'%t]['t'],
temp=nei_results[s]['tau%d'%t]['T'],density=nei_results[s]['tau%d'%t]['n'])
binner.build_em_dist()
hist,bin_edges = np.histogram(binner.T_em_flat,bins=binner.T_em_histo_bins,weights=np.array(binner.em_flat))
ax[tau.index(t)].plot((bin_edges[:-1]+bin_edges[1:])/2,hist/10,
color=colors[str(t)],linestyle=linestyles[s],label=r'$\mathrm{%s}$'%s)
#NEI
binner = emb.EM_Binner(2.*loop_length,time=nei_results[s]['tau%d'%t]['t'],
temp=nei_results[s]['tau%d'%t]['Teff'],density=nei_results[s]['tau%d'%t]['n'])
binner.build_em_dist()
hist,bin_edges = np.histogram(binner.T_em_flat,bins=binner.T_em_histo_bins,weights=np.array(binner.em_flat))
ax[tau.index(t)].plot((bin_edges[:-1]+bin_edges[1:])/2,hist/10,
color=sns.color_palette('bright')[2],linestyle=linestyles[s])
#scale
ax[0].set_yscale('log')
ax[1].set_yscale('log')
ax[0].set_xscale('log')
ax[1].set_xscale('log')
#limits
ax[0].set_xlim([10**6.5,10**7.5])
ax[0].set_ylim([1e+23,1e+28])
ax[1].set_xlim([10**6.5,10**7.5])
ax[1].set_ylim([1e+23,1e+28])
#labels
global_xlab = fig.text(0.5, 0.015, r'${T}\,\,\mathrm{(K)}$', ha='center', va='center',fontsize=22)
ax[0].set_ylabel(r'$\mathrm{EM}\,\,(\mathrm{cm}^{-5})$')
#legend
ax[0].legend(loc='best')
#save
plt.savefig(__dest__[1],bbox_extra_artists=[global_xlab], bbox_inches='tight')
plt.show()
"""
Explanation: Now, plot the emission measure distributions with their NEI counterparts.
End of explanation
"""
|
baifan-wang/structural-bioinformatics_in_python | docs/Introduction.ipynb | gpl-3.0 | from SBio import *
mol = create_molecule('test.pdb')
mol
"""
Explanation: Overall layout of a Molecule object.
The Molecule object has the following architecture:
* A Molecule is composed of Models (conformations)
* A Model is composed of Residues
* A Residue is composed of Atoms
The Atom object is the basic component in SBio python; it is a container for one atomic coordinate line in a 3D-coordinates file. For example, when reading a PDB file, each line starting with 'ATOM' or 'HETATM' is used to create an Atom object.
The Residue object is used to represent a residue (an amino acid or nucleic acid residue, or sometimes a small molecule) in a macromolecule. A Residue object is composed of the Atom objects of the atoms belonging to that residue.
The Model object is composed of Residue objects and is used to represent a model (or conformation) of a macromolecule.
The Molecule object is the top-level container for Atom objects. A Molecule has at least one Model object.
Access
Atom, Residue and Model objects are stored in a Python dict inside their parent containers, and can be accessed as attributes of a Molecule object. A Model object is normally assigned a key of 'm' plus its number, such as 'm1' or 'm100'. Given a Molecule object named 'mol', the first Model object of 'mol' is therefore 'mol.m1'. Residue objects within a model are keyed by chain ID plus residue serial, so the first residue of chain A is named 'A1' and can be accessed as 'mol.m1.A1'. The atom name from the 3D coordinates is used as the key for each atom, so an Atom object named 'CA' in residue 'A1' of the first model is accessed as 'mol.m1.A1.CA'. However, some atom names end with a quote character; in this case the quote is replaced by '_' (underscore), e.g. the key for "C5'" becomes 'C5_'.
Create a Molecule object
Molecule objects can be created from PDB files or other formats (to be implemented), for example:
End of explanation
"""
for m in mol.get_model():
for r in m.get_residue():
print(r)
"""
Explanation: navigate through a Molecule object:
End of explanation
"""
atoms=mol.m1.get_atom()
residue=mol.m1.get_residue()
for r in residue:
print(r)
"""
Explanation: The "get_model", "get_atom" and "get_residue" are python generators, can be more conveniently used like this:
End of explanation
"""
mol.write_pdb('mol_new.pdb') # write all conformation into a single pdb file
i = 1
for m in mol.get_model():
name = 'mol_m'+str(i)+'.pdb'
m.write_pdb(name) #write one conformation to a single pdb file
i+=1
"""
Explanation: write coordinates to pdb
Both Molecule and Model objects can be written to a PDB file. This also provides a way to split a PDB file containing multiple conformations.
End of explanation
"""
m1 = mol.m1
print(m1.get_atom_num())
print(m1.get_residue_list())
print(m1.get_sequence('A')) #the sequence of chain A
m1.write_fasta('A', 'test.fasta', comment='test')
m1.get_mw()
"""
Explanation: get information of a molecule
The 'Model' module provides several methods for extracting information from a molecule:
get_atom_num: return the number of atoms in a molecule
get_residue_list: return a list of residue of a molecule
get_sequence: return the sequence information of a molecule
write_fasta: write sequence information into a fasta file
get_mw: return the molecular weight of a molecule
get_dist_matrix: compute the complete inter-atomic distance matrix
usage:
End of explanation
"""
a1 = mol.m1.A2.O4_ # the actual name for this atom is "O4'"
a2 = mol.m1.A2.C1_
a3 = mol.m1.A2.N9
a4 = mol.m1.A2.C4
get_distance(a1, a2)
get_angle(a1, a2, a3)
chi = get_torsion(a1,a2,a3,a4)
print('the CHI torsion angle is {}'.format(chi))
"""
Explanation: compute geometry information
The 'Geometry' module contains several methods for measuring distances, angles and torsion angles among atoms:
End of explanation
"""
a5 = m1.A2.N6
a6 = m1.A2.H61
a7 = m1.A1.O2
print(get_hydrogen_bond(a5, a7, a6)) #arguments order: donor, acceptor, donor_H=None
print(get_polar_interaction(a5, a7))
"""
Explanation: compute the interaction between atoms
The 'Interaction' module provides several methods to check interactions between atoms:
* get_hydrogen_bond: check whether a hydrogen bond forms between the given atoms
* get_polar_interaction: compute the polar interaction between given atoms
* get_pi_pi_interaction: compute the aromatic pi-pi interaction between given atom groups
End of explanation
"""
m1 = mol.m1
m2 = mol.m2
molecule_list = [m1,m2]
residue_range = [[1,2],[1,2]]
sa = Structural_alignment(molecule_list, residue_range, update_coord=False)
sa.run()
print(sa.rmsd)
"""
Explanation: Structure alignment
The 'Structural_alignment' module provides a function to align a set of molecules. The RMSD values for the structural superposition can be calculated, and the coordinates of the aligned structures can also be updated.
End of explanation
"""
seq = 'D:\\python\\structural bioinformatics_in_python\\PPO-crystal.clustal'
alignment=Seq_align_parser(seq)
alignment.run()
con_res = []
con_res.append(alignment.align_res_mask['O24164|PPOM_TOBAC '])
con_res.append(alignment.align_res_mask['P56601|PPOX_MYXXA '])
print(con_res[0]) #conserved residue in 'PPOM_TOBAC'
print(con_res[1])
"""
Explanation: Sequence alignment
The 'Seq_align_parser' is used to handle a multiple sequence alignment and map residues between different molecules, i.e., to get the residue serials of the conserved residues among the different molecules. The conserved residue serials can then be used in the structure alignment.
End of explanation
"""
|
Olsthoorn/TransientGroundwaterFlow | Syllabus_in_notebooks/Sec5_4_2_Questions_starting_with_A_canal_in_a_dune_area.ipynb | gpl-3.0 | # import required modules / functionality
import numpy as np # for numerical stuff and arrays
import matplotlib.pyplot as plt # for visualization
import scipy.special as sp # scipy.special holds the less common mathematical functions
def newfig(title='?', xlabel='?', ylabel='?', xlim=None, ylim=None,
xscale='linear', yscale='linear', size_inches=(14, 8)):
'''Setup a new axis for plotting'''
fig, ax = plt.subplots()
fig.set_size_inches(size_inches)
ax.set_title(title)
ax.set_xlabel(xlabel)
ax.set_ylabel(ylabel)
ax.set_xscale(xscale)
ax.set_yscale(yscale)
if xlim is not None: ax.set_xlim(xlim)
if ylim is not None: ax.set_ylim(ylim)
ax.grid(True)
return ax
# Aquifer properties
kD = 100 # m2/d
S = 0.2 # [-]
A = 5 # m, the sudden drop of head in the canal
"""
Explanation: Section 5.4.2, questions 1, 2 and 4
Context:
A canal in a dune area along the Dutch coast serves to provide storage for drinking water in the case of emergencies. During an emergency, the water level in the canal was suddenly lowered by 5 m.
End of explanation
"""
times = [1, 7, 30, 42] # days
print('The total volume drained after t days (from one side) per m of shore is:')
for t in times:
V = A * np.sqrt(4 * kD * S * t / np.pi)
#print('t =', t, 'd, V =', V, 'm3/m')
print(f't = {t:4.0f} d, V = {V:6.1f} m3/m')
# Show the result graphically
times = np.linspace(0, 42, 100)
ax = newfig(title='Total volume drained [m3/m]', xlabel='t [d]', ylabel='m3/m')
ax.plot(times, A * np.sqrt(4 * kD * S * times / np.pi))
"""
Explanation: Question 1
How much water will flow into the canal from its sides during the first day, the first week, the first month and during 6 weeks after the sudden lowering of its stage by 5 m?
The transient flow solution that applies to this situation is
$$ s = A \, \mbox{erfc} ( u) = A \frac {2} {\sqrt \pi} \intop_u^\infty e^{-y^2} dt $$
$$ u = \sqrt{\frac {x^2 S} {4 kD t}} $$
The discharge anywhere in the aquifer then is
$$ Q = -kD \frac {\partial s} {\partial x} $$
so that, applying the derivative of $s$ with respect to $x$ and multiplying by $-kD$ yields
$$ Q = A \sqrt{ \frac {kD S} {\pi t}} \exp \left( -u^2 \right)$$
Then, at $x=0$, we find
$$ Q = A \sqrt{ \frac {kD S} {\pi t}}$$
The extracted volume from one side since $t=0$ is, therefore,
$$ V = \intop_0^t Q dt = A \sqrt {\frac {4 kD S t} {\pi} } $$
End of explanation
"""
# We'll use the same aquifer parameters as before, so they don't have to be repeated here.
x = [10, 100, 300, 1000] # [m] distances at which drawdown is desired
times = np.linspace(0, 7 * 6, 7 * 6 * 12 + 1) # d (6 weeks, one point every 2 hours)
times[0] = 1e-6 # prevent division by zero
ax = newfig(title=f'Drawdown over time for several x values; kD={kD:.0f} m2/d, S={S:.3f} [-], A={A:.1f} m',
xlabel='time [d]', ylabel='drawdown s [m]')
for xi in x:
ax.plot(times, A * sp.erfc(np.sqrt(xi ** 2 * S / (4 * kD * times))), label=f'x = {xi:.0f} m')
ax.legend()
"""
Explanation: Question 2
How large is the drawdown at different distances after 6 weeks?
Let's answer this question by showing the drawdown development over 6 weeks at several distances from the shore.
End of explanation
"""
w = 20 # m, width of canal
t = 42 # d
# V_c over V is the ratio of the water from the canal over that from the ground
V_c_over_V = w * np.sqrt(np.pi) / (2 * np.sqrt(4 * kD * S * t))
print(f'V_c over V after {t:.0f} days = {V_c_over_V:.0%}')
"""
Explanation: Question 3
What is the ratio between the amount of groundwater extracted in 6 weeks and the amount of water stored in the 20 m wide canal?
From the ground we have from two sides
$$ V = 2 \, A \sqrt {\frac {4 kD S t} {\pi} } $$
The volume in the canal is
$$ V_{canal} = A w $$
with $w$ the canal width.
Hence
$$ \frac {V_{canal}} {V_{grw} } = \frac {w \sqrt{\pi}} {2 \sqrt{4 kD S t}}$$
End of explanation
"""
|
kevroy314/msl-iposition-pipeline | examples/iTouch Analyses.ipynb | gpl-3.0 | import os
data_directory = r'C:\Users\Kevin\Documents\GitHub\msl-iposition-pipeline\examples'
touch_tbt_false_path = os.path.join(data_directory, '2018-04-24_11-35-39_touch_tbt_false.csv')
touch_tbt_true_path = os.path.join(data_directory, '2018-04-24_11-35-03_touch_tbt_true.csv')
desktop_tbt_false_path = os.path.join(data_directory, '2018-04-24_11-32-11_desktop_tbt_false.csv')
desktop_tbt_true_path = os.path.join(data_directory, '2018-04-24_11-33-40_desktop_tbt_true.csv')
"""
Explanation: iTouch Analyses
This notebook contains the basic analyses for the iTouch data.
First, we identify the paths to the data tables created via the iPosition pipeline.
End of explanation
"""
import pandas as pd
touch_tbt_false_with_practice = pd.read_csv(touch_tbt_false_path, skiprows=[0])
touch_tbt_true_with_practice = pd.read_csv(touch_tbt_true_path, skiprows=[0])
desktop_tbt_false_with_practice = pd.read_csv(desktop_tbt_false_path, skiprows=[0])
desktop_tbt_true_with_practice = pd.read_csv(desktop_tbt_true_path, skiprows=[0])
"""
Explanation: Next, we open the data files.
End of explanation
"""
touch_tbt_false = touch_tbt_false_with_practice[['practice' not in x for x in touch_tbt_false_with_practice['subID']]].reset_index()
touch_tbt_true = touch_tbt_true_with_practice[['practice' not in x for x in touch_tbt_true_with_practice['subID']]].reset_index()
desktop_tbt_false = desktop_tbt_false_with_practice[['practice' not in x for x in desktop_tbt_false_with_practice['subID']]].reset_index()
desktop_tbt_true = desktop_tbt_true_with_practice[['practice' not in x for x in desktop_tbt_true_with_practice['subID']]].reset_index()
data = [touch_tbt_false, touch_tbt_true, desktop_tbt_false, desktop_tbt_true]
labels = ['Touch, Collapsed Accuracy', 'Touch, Trial-by-Trial Accuracy', 'Desktop, Collapsed Accuracy', 'Desktop, Trial-by-Trial Accuracy']
"""
Explanation: Next, we remove the practice trials and reset the indices of the data.
End of explanation
"""
data[0].columns
"""
Explanation: We can list our columns for convenience to get the names right.
End of explanation
"""
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as stats
%matplotlib inline
metric_column_name = 'Original Misplacement'
means = [x[metric_column_name].mean() for x in data]
errors = [x[metric_column_name].std()/np.sqrt(len(x)/25.0) for x in data]
ind = np.arange(len(data))
plt.title(metric_column_name)
plt.bar(ind, means, yerr=errors)
plt.ylabel('Mean ' + metric_column_name)
plt.xticks(ind, labels, rotation=20)
print(stats.ttest_ind(data[0][metric_column_name], data[2][metric_column_name], equal_var=False))
plt.show()
"""
Explanation: The first thing we want to check is whether there is a significant difference in overall misplacement between the conditions. This difference does not depend on the accuracy evaluation method, so we're really just comparing two of the 4 files, but we'll plot all of them to make that clear.
End of explanation
"""
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as stats
%matplotlib inline
metric_column_names = ['Accurate Single-Item Placements', 'Accurate Misassignment',
'Rotation Theta', 'Scaling', 'Translation Magnitude',
'TranslationX', 'TranslationY', 'True Swaps', 'Cycle Swaps']
for metric_column_name in metric_column_names:
means = [x[metric_column_name].mean() for x in data]
errors = [x[metric_column_name].std()/np.sqrt(len(x)/25.0) for x in data]
ind = np.arange(len(data))
plt.title(metric_column_name)
plt.bar(ind, means, yerr=errors)
plt.ylabel('Mean ' + metric_column_name)
plt.xticks(ind, labels, rotation=20)
print('Analysis for ' + metric_column_name)
print('One-Way ANOVA for Analysis/Group Differences')
print(stats.f_oneway(*[list(x[metric_column_name].values) for x in data]))
    print('T-Tests (don\'t bother looking if the previous test isn\'t significant)')
print('______________')
print('T-Test for Analysis Type Difference in Touch Group')
print(stats.ttest_ind(data[0][metric_column_name], data[1][metric_column_name], equal_var=False))
print('T-Test for Analysis Type Difference in Desktop Group')
print(stats.ttest_ind(data[2][metric_column_name], data[3][metric_column_name], equal_var=False))
print('T-Test for Group Difference in Collapsed Analysis')
print(stats.ttest_ind(data[0][metric_column_name], data[2][metric_column_name], equal_var=False))
    print('T-Test for Group Difference in Trial-by-Trial Analysis')
print(stats.ttest_ind(data[1][metric_column_name], data[3][metric_column_name], equal_var=False))
plt.show()
"""
Explanation: Finally, we can check for group/analysis type differences in our accuracy measures. There are better ways to do this, but the expected result is that there are going to be differences in these values based on analysis type (because we're drawing the accuracy circle two different ways), but not based on group. The ANOVA, therefore, should be significant, but only the t-tests for analysis type should be significant, not group.
End of explanation
"""
|
Mithrillion/pokemon-go-simulator-solver | pokemon_location_simulator.ipynb | mit | import numpy as np
import matplotlib.pyplot as plt
import random
import matplotlib.patches as patches
from scipy.stats import gennorm
from scipy.stats import gamma
%matplotlib inline
def generate_initial_coordinates(side_length=2000, n_pokemon=9):
pokemons = {}
for i in range(n_pokemon):
pokemons[i] = (random.uniform(-side_length/2, side_length/2), random.uniform(-side_length/2, side_length/2))
return pokemons
pokemons = generate_initial_coordinates()
pokemons
"""
Explanation: footprints:
in game- 0: <10m, 1: 10-25m, 2: 25-100m, 3: 100-1000m
for simplicity, now assume pokemons do not disappear from radar (i.e. 3 footprints represent 50-inf)
Also assume there are always 9 pokemons on display initially, and their initial locations are random in a 100*100 square
Detection: when a pokemon enters a 10m radius around the player, it is detected (tracking game success)
The game will report to the player the rough distance of each pokemon and their relative distance ranking
The goal of the tracking game is to find a particular pokemon
End of explanation
"""
plt.figure(figsize=(15,15))
# non-target pokemons
plt.scatter([x for x, y in [coord for coord in pokemons.values()]][1:],
[y for x, y in [coord for coord in pokemons.values()]][1:])
# target pokemon
plt.scatter([x for x, y in [coord for coord in pokemons.values()]][0],
[y for x, y in [coord for coord in pokemons.values()]][0],
marker="*", color='red', s=15)
plt.axes().set_aspect(1)
plt.axes().set_xlim((-1100, 1100))
plt.axes().set_ylim((-1100, 1100))
# player
plt.scatter(0, 0, color='purple', s=15)
# detection radii
dists = {10:'green', 25:'blue', 100:'yellow', 1000:'red'}
for r in dists:
plt.axes().add_patch(plt.Circle((0,0), r, fill=False, color=dists[r]))
plt.show()
def distance(coord1, coord2):
return np.sqrt((coord1[0] - coord2[0])**2 + (coord1[1] - coord2[1])**2)
# this is not visible to players
def pokemon_distances(player_coord, pokemons):
return {i: distance(player_coord, coord) for i, coord in pokemons.items()}
pokemon_distances((0, 0), pokemons)
def rank(input):
output = [0] * len(input)
for i, x in enumerate(sorted(range(len(input)), key=lambda y: input[y])):
output[x] = i
return output
# player will be able to see this
def pokemon_rankings(player_coord, pokemons):
dists = pokemon_distances(player_coord, pokemons)
rankings = {}
for i, x in enumerate(sorted(range(len(dists)), key=lambda y: dists[y])):
rankings[x] = i
return rankings
pokemon_rankings((0, 0), pokemons)
def plot_pokemon(player_coord, pokemons):
plt.figure(figsize=(15,15))
# non-target pokemons
plt.scatter([x - player_coord[0] for x, y in [coord for coord in pokemons.values()]][1:],
[y - player_coord[1] for x, y in [coord for coord in pokemons.values()]][1:])
# target pokemon
plt.scatter([x - player_coord[0] for x, y in [coord for coord in pokemons.values()]][0],
[y - player_coord[1] for x, y in [coord for coord in pokemons.values()]][0],
marker="*", color='red', s=15)
plt.axes().set_aspect(1)
plt.axes().set_xlim((-1100, 1100))
plt.axes().set_ylim((-1100, 1100))
# player
plt.scatter(0, 0 , color='purple', s=15)
# detection radii
dists = {10:'green', 25:'blue', 100:'yellow', 1000:'red'}
for r in dists:
plt.axes().add_patch(plt.Circle((0,0), r, fill=False, color=dists[r]))
plt.show()
plot_pokemon((0, 600), pokemons)
def footprint(distance):
if distance < 10:
return 0
elif distance < 25:
return 1
elif distance < 100:
return 2
elif distance < 1000:
return 3
else:
return np.nan
def footprint_counts(player_coord, pokemons):
dists = pokemon_distances(player_coord, pokemons)
return {i: footprint(v) for i,v in dists.items()}
footprint_counts((0, 0), pokemons)
"""
Explanation: Now we can visualise the relationship between player location, various detection radii and initial pokemon locations
End of explanation
"""
fig, ax = plt.subplots(4, 1)
fig.set_figwidth(10)
fig.set_figheight(15)
beta = 3
x = np.linspace(-25, 25, 100)
ax[0].plot(x, gennorm.pdf(x / 10, beta), 'r-', lw=5, alpha=0.6, label='gennorm pdf')
ax[0].set_title("no footprints")
beta0 = 3
x = np.linspace(-25, 50, 100)
ax[1].plot(x, gennorm.pdf((x - 17.5) / 7.5, beta0), 'r-', lw=5, alpha=0.6, label='gennorm pdf')
ax[1].set_title("one footprint")
beta1 = 4
# x = np.linspace(gennorm.ppf(0.01, beta), gennorm.ppf(0.99, beta), 100)
x = np.linspace(0, 150, 100)
ax[2].plot(x, gennorm.pdf((x - 62.5) / 37.5, beta1), 'r-', lw=5, alpha=0.6, label='gennorm pdf')
ax[2].set_title("two footprints")
beta2 = 6
x = np.linspace(-250, 1500, 100)
ax[3].plot(x, gennorm.pdf((x - 550) / 430, beta2), 'r-', lw=5, alpha=0.6, label='gennorm pdf')
ax[3].set_title("three footprints")
plt.show()
"""
Explanation: movesets:
move up/down/left/right x (default 5) m
rewards:
target estimated distance increased: moderate penalty
target estimated distance decreased: moderate reward
target ranking increased: slight reward
target ranking decreased: slight penalty
target within catch distance: huge reward (game won)
(optional) target lost (outside of detection range): large penalty
currently new pokemon spawns / pokemons outside of the initial detection range are omitted
known information:
distance ranking and distance estimate for all pokemons within detection range
current player location relative to starting point
player moves so far (potentially equivalent to 2)
learning objective:
efficient algorithm to reach the target pokemon
End of explanation
"""
fig, ax = plt.subplots(3, 1)
fig.set_figwidth(10)
fig.set_figheight(15)
a = 2.5
x = np.linspace(0, 1500, 100)
ax[0].plot(x, gamma.pdf((x - 75) / (450/3), a), 'r-', lw=5, alpha=0.6, label='gamma pdf')
ax[0].set_title("inner")
beta2 = 6
x = np.linspace(-250, 1500, 100)
ax[1].plot(x, gennorm.pdf((x - 550) / 450, beta2), 'r-', lw=5, alpha=0.6, label='gennorm pdf')
ax[1].set_title("middle")
a0 = 2.5
x = np.linspace(0, 1500, 100)
ax[2].plot(x, gamma.pdf((-x + 1100) / (450/3), a0), 'r-', lw=5, alpha=0.6, label='gamma pdf')
ax[2].set_title("outer")
plt.show()
"""
Explanation: Assuming no knowledge of player movement history, the above graphs give us a rough probability distribution of actual distance of a pokemon given the estimated distance.
We may establish a relationship between the player location plus the n_footprints of a pokemon and the probable locations of that pokemon. Combining this with previous estimates of the location, we can improve our estimation step by step. This can be done with a particle filter algorithm.
However, the footprints only offer us very limited information (especially because the "three footprints" range is significantly longer than the other two). We must be able to infer information from the pokemon distance rankings.
Suppose we have pokemon A and B. Let ">" denote "ranked before". If A>B, we know that pokemon A is closer to the player than pokemon B. Suppose after some player movement, A and B swapped rankings. Now B>A and B is closer to the player than A. We may infer that:
at the moment of swap, A and B are roughly the same distance from the player
(following 1) in the joint A and B distribution, values where abs(distance(A) - distance(B)) is small should be more probable than values where the difference is large
Now consider pokemon C and D. Both have three footprints but C>D. It is reasonable to believe that C is more likely to be in the inner range of the three-footprint radius and D is more likely to be in the outer range. If there are multiple pokemons in the three-footprint range, we may use a skewed probability distribution to estimate the locations of the highest and lowest ranking pokemons.
End of explanation
"""
fig, ax = plt.subplots(1, 1)
fig.set_figwidth(10)
fig.set_figheight(10)
a = 1
x = np.linspace(0, 50, 100)
ax.plot(x, gamma.pdf(x / 10, a), 'r-', lw=5, alpha=0.6, label='gennorm pdf')
ax.set_title("distribution of distance difference")
"""
Explanation: Now for pokemons with three footprints, we apply these skewed distributions to estimate their distance if they are ranked first or last (or first k / last k, k adjustable).
The other question remains: how do we exploit the information from rank changes?
Suppose we have m particles (estimated locations) for pokemon A and B respectively. The total number of combinations is m*m. For each of the combinations, we calculate the distance difference of A and B. The combinations we select from this population should follow a distribution that is highest at zero and decays as the variable increases.
End of explanation
"""
fig, ax = plt.subplots(2, 1)
fig.set_figwidth(10)
fig.set_figheight(10)
a0 = 1.5
x = np.linspace(0, 1500, 100)
ax[0].plot(x, gamma.pdf((-x + 1100) / (450/6), a0), 'r-', lw=5, alpha=0.6, label='gamma pdf')
ax[0].set_title("appearing in radar")
a = 1.5
x = np.linspace(800, 2000, 100)
ax[1].plot(x, gamma.pdf((x - 900) / (450/6), a), 'r-', lw=5, alpha=0.6, label='gamma pdf')
ax[1].set_title("disappearing from radar")
plt.show()
"""
Explanation: Also we might have to consider situations where a pokemon pops in / disappears from radar. This means they are almost certainly at that point on the edge of the detection radius. Their distance should follow a more skewed distribution.
End of explanation
"""
def random_particle_generation(side_length=2000, n=1000):
particles = [0] * n
for i in range(n):
particles[i] = (random.uniform(-side_length/2, side_length/2), random.uniform(-side_length/2, side_length/2))
return particles
def plot_particles(player_coord, particles):
plt.figure(figsize=(15,15))
plt.scatter([p[0] - player_coord[0] for p in particles],
[p[1] - player_coord[1] for p in particles])
plt.axes().set_aspect(1)
plt.axes().set_xlim((-1100, 1100))
plt.axes().set_ylim((-1100, 1100))
# player
plt.scatter(0, 0 , color='purple', s=15)
# detection radii
dists = {10:'green', 25:'blue', 100:'yellow', 1000:'red'}
for r in dists:
plt.axes().add_patch(plt.Circle((0,0), r, fill=False, color=dists[r]))
plt.show()
particles = random_particle_generation(n=3000)
plot_particles((0, 0), particles)
# sample according to distance distribution
particle_dists = list(map(lambda c: distance(c, (0, 0)), particles))
plt.hist(particle_dists)
def three_middle(x):
beta = 6
return gennorm.pdf((x - 550) / 450, beta)
particle_probs = list(map(three_middle, particle_dists))
def two_fp(x):
beta = 4
return gennorm.pdf((x - 62.5) / 37.5, beta)
particle_probs = list(map(three_middle, particle_dists))
plt.hist(particle_probs)
new_particles = [particles[np.random.choice(range(len(particles)), p=particle_probs / sum(particle_probs))]
for i in range(len(particles))]
plot_particles((0, 0), new_particles)
# now suppose player moved to (200, 200) and the footprint count reduced to 2
player_coord = (200, 200)
particle_dists = list(map(lambda c: distance(c, player_coord), particles))
particle_probs = list(map(two_fp, particle_dists))
new_particles = [particles[np.random.choice(range(len(particles)), p=particle_probs / sum(particle_probs))]
for i in range(len(particles))]
plot_particles(player_coord, new_particles)
"""
Explanation: Situations where we need to re-estimate the distance:
initially, when we first receive the footprint counts and rankings
when the footprint count of a pokemon changes
when a swap in ranking happens (with multiple swaps at the same time, treat it as pairwise swaps)
when the highest / lowest ranking pokemon changes
when a pokemon enters / exits radar radius
To help learn an optimal policy, we need a reward / fitness function that estimates how close we are to locating / reaching the target pokemon.
base_fitness = weighted average distance from estimated locations of the target pokemon
small bonus rewards could be given to triggering new weight-updating events as that offers us more information
reward of a move = fitness change + bonus rewards for extra information
A "step" or "move" can be restricted to traveling up/down/left/right 5m for simplicity.
We can know the base fitness change of a step before taking the move, but we do not know the bonus information until after we take the move.
End of explanation
"""
|
probml/pyprobml | deprecated/flow_2d_mlp.ipynb | mit | from typing import Sequence
import distrax
import haiku as hk
import jax
import jax.numpy as jnp
import matplotlib.pyplot as plt
import optax
Array = jnp.ndarray
PRNGKey = Array
prng = hk.PRNGSequence(42)
"""
Explanation: <a href="https://colab.research.google.com/github/probml/probml-notebooks/blob/main/flow_2d_mlp.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Mapping a 2d standard Gaussian to a more complex distribution using an invertible MLP
Author: George Papamakarios
Based on the example by Eric Jang from
https://blog.evjang.com/2018/01/nf1.html
Reproduces Figure 23.1 of the book Probabilistic Machine Learning: Advanced Topics by Kevin P. Murphy
Imports and definitions
End of explanation
"""
class Parameter(hk.Module):
"""Helper Haiku module for defining model parameters."""
def __init__(self, module_name: str, param_name: str, shape: Sequence[int], init: hk.initializers.Initializer):
"""Initializer.
Args:
module_name: name of the module.
param_name: name of the parameter.
shape: shape of the parameter.
init: initializer of the parameter value.
"""
super().__init__(name=module_name)
self._param = hk.get_parameter(param_name, shape=shape, init=init)
def __call__(self) -> Array:
return self._param
class LeakyRelu(distrax.Lambda):
"""Leaky ReLU elementwise bijector."""
def __init__(self, slope: Array):
"""Initializer.
Args:
slope: the slope for x < 0. Must be positive.
"""
forward = lambda x: jnp.where(x >= 0.0, x, x * slope)
inverse = lambda y: jnp.where(y >= 0.0, y, y / slope)
forward_log_det_jacobian = lambda x: jnp.where(x >= 0.0, 0.0, jnp.log(slope))
inverse_log_det_jacobian = lambda y: jnp.where(y >= 0.0, 0.0, -jnp.log(slope))
super().__init__(
forward=forward,
inverse=inverse,
forward_log_det_jacobian=forward_log_det_jacobian,
inverse_log_det_jacobian=inverse_log_det_jacobian,
event_ndims_in=0,
)
def make_model() -> distrax.Transformed:
"""Creates the flow model."""
num_layers = 6
layers = []
for _ in range(num_layers - 1):
# Each intermediate layer is an affine transformation followed by a leaky
# ReLU nonlinearity.
matrix = Parameter("affine", "matrix", shape=[2, 2], init=hk.initializers.Identity())()
bias = Parameter("affine", "bias", shape=[2], init=hk.initializers.TruncatedNormal(2.0))()
affine = distrax.UnconstrainedAffine(matrix, bias)
slope = Parameter("nonlinearity", "slope", shape=[2], init=jnp.ones)()
nonlinearity = distrax.Block(LeakyRelu(slope), 1)
layers.append(distrax.Chain([nonlinearity, affine]))
# The final layer is just an affine transformation.
matrix = Parameter("affine", "matrix", shape=[2, 2], init=hk.initializers.Identity())()
bias = Parameter("affine", "bias", shape=[2], init=jnp.zeros)()
affine = distrax.UnconstrainedAffine(matrix, bias)
layers.append(affine)
flow = distrax.Chain(layers[::-1])
base = distrax.MultivariateNormalDiag(loc=jnp.zeros(2), scale_diag=jnp.ones(2))
return distrax.Transformed(base, flow)
@hk.without_apply_rng
@hk.transform
def model_log_prob(x: Array) -> Array:
model = make_model()
return model.log_prob(x)
@hk.without_apply_rng
@hk.transform
def model_sample(key: PRNGKey, num_samples: int) -> Array:
model = make_model()
return model.sample(seed=key, sample_shape=[num_samples])
"""
Explanation: Create flow model
End of explanation
"""
def target_sample(key: PRNGKey, num_samples: int) -> Array:
"""Generates samples from target distribution.
Args:
key: a PRNG key.
num_samples: number of samples to generate.
Returns:
An array of shape [num_samples, 2] containing the samples.
"""
key1, key2 = jax.random.split(key)
x = 0.6 * jax.random.normal(key1, [num_samples])
y = 0.8 * x**2 + 0.2 * jax.random.normal(key2, [num_samples])
return jnp.concatenate([y[:, None], x[:, None]], axis=-1)
# Plot samples from target distribution.
data = target_sample(next(prng), num_samples=1000)
plt.plot(data[:, 0], data[:, 1], ".", color="red", label="Target")
plt.axis("equal")
plt.title("Samples from target distribution")
plt.legend();
"""
Explanation: Define target distribution
End of explanation
"""
# Initialize model parameters.
params = model_sample.init(next(prng), next(prng), num_samples=1)
# Plot samples from the untrained model.
x = target_sample(next(prng), num_samples=1000)
y = model_sample.apply(params, next(prng), num_samples=1000)
plt.plot(x[:, 0], x[:, 1], ".", color="red", label="Target")
plt.plot(y[:, 0], y[:, 1], ".", color="green", label="Model")
plt.axis("equal")
plt.title("Samples from untrained model")
plt.legend();
# Loss function is negative log likelihood.
loss_fn = jax.jit(lambda params, x: -jnp.mean(model_log_prob.apply(params, x)))
# Optimizer.
optimizer = optax.adam(1e-3)
opt_state = optimizer.init(params)
# Training loop.
for i in range(5000):
data = target_sample(next(prng), num_samples=100)
loss, g = jax.value_and_grad(loss_fn)(params, data)
updates, opt_state = optimizer.update(g, opt_state)
params = optax.apply_updates(params, updates)
if i % 100 == 0:
print(f"Step {i}, loss = {loss:.3f}")
# Plot samples from the trained model.
x = target_sample(next(prng), num_samples=1000)
y = model_sample.apply(params, next(prng), num_samples=1000)
plt.plot(x[:, 0], x[:, 1], ".", color="red", label="Target")
plt.plot(y[:, 0], y[:, 1], ".", color="green", label="Model")
plt.axis("equal")
plt.title("Samples from trained model")
plt.legend();
"""
Explanation: Train model
End of explanation
"""
@hk.without_apply_rng
@hk.transform
def model_sample_intermediate(key: PRNGKey, num_samples: int) -> Array:
model = make_model()
samples = []
x = model.distribution.sample(seed=key, sample_shape=[num_samples])
samples.append(x)
for layer in model.bijector.bijectors[::-1]:
x = layer.forward(x)
samples.append(x)
return samples
xs = model_sample_intermediate.apply(params, next(prng), num_samples=2000)
plt.rcParams["figure.figsize"] = [2 * len(xs), 3]
fig, axs = plt.subplots(1, len(xs))
fig.tight_layout()
color = xs[0][:, 1]
cm = plt.cm.get_cmap("gnuplot2")
for i, (x, ax) in enumerate(zip(xs, axs)):
ax.scatter(x[:, 0], x[:, 1], s=10, cmap=cm, c=color)
ax.axis("equal")
if i == 0:
title = "Base distribution"
else:
title = f"Layer {i}"
ax.set_title(title)
"""
Explanation: Create plot with intermediate distributions
End of explanation
"""
|
cjdrake/pyeda | ipynb/SAT_Demo.ipynb | bsd-2-clause | %dotobjs S_rca[2].simplify(), S_ksa[2].simplify()
"""
Explanation: The expression tree is very different:
End of explanation
"""
f = Xor(S_rca[9], S_ksa[9])
%timeit f.satisfy_one()
"""
Explanation: If XOR(f, g) is UNSAT, functions f and g are equivalent.
But sum bit 9 is a deep expression.
Converting to CNF is impossible, so use backtracking:
End of explanation
"""
g = f.tseitin()
%timeit g.satisfy_one()
"""
Explanation: Let's see if we can do better using the Tseitin transformation,
and PicoSAT extension:
End of explanation
"""
assert f.satisfy_one() is None and g.satisfy_one() is None
"""
Explanation: Success!
Verify that both functions returned UNSAT:
End of explanation
"""
|
barjacks/foundations-homework | 07/.ipynb_checkpoints/07 - Introduction to Pandas-checkpoint.ipynb | mit | # import pandas, but call it pd. Why? Because that's What People Do.
import pandas as pd
"""
Explanation: An Introduction to pandas
Pandas! They are adorable animals. You might think they are the worst animal ever but that is not true. You might sometimes think pandas is the worst library ever, and that is only kind of true.
The important thing is use the right tool for the job. pandas is good for some stuff, SQL is good for some stuff, writing raw Python is good for some stuff. You'll figure it out as you go along.
Now let's start coding. Hopefully you did pip install pandas before you started up this notebook.
End of explanation
"""
# We're going to call this df, which means "data frame"
# It isn't in UTF-8 (I saved it from my mac!) so we need to set the encoding
df = pd.read_csv("NBA-Census-10.14.2013.csv", encoding ="mac_roman")
#this is a data frame (df)
"""
Explanation: When you import pandas, you use import pandas as pd. That means instead of typing pandas in your code you'll type pd.
You don't have to, but every other person on the planet will be doing it, so you might as well.
Now we're going to read in a file. Our file is called NBA-Census-10.14.2013.csv because we're sports moguls. pandas can read_ different types of files, so try to figure it out by typing pd.read_ and hitting tab for autocomplete.
End of explanation
"""
# Let's look at all of it
df
"""
Explanation: A dataframe is basically a spreadsheet, except it lives in the world of Python or the statistical programming language R. They can't call it a spreadsheet because then people would think those programmers used Excel, which would make them boring and normal and they'd have to wear a tie every day.
Selecting rows
Now let's look at our data, since that's what data is for
End of explanation
"""
# Look at the first few rows
df.head() #shows first 5 rows
"""
Explanation: If we scroll we can see all of it. But maybe we don't want to see all of it. Maybe we hate scrolling?
End of explanation
"""
# Let's look at MORE of the first few rows
df.head(10)
"""
Explanation: ...but maybe we want to see more than a measly five results?
End of explanation
"""
# Let's look at the final few rows
df.tail(4)
"""
Explanation: But maybe we want to make a basketball joke and see the final four?
End of explanation
"""
# Show the 6th through the 8th rows
df[5:8]
"""
Explanation: So yes, head and tail work kind of like the terminal commands. That's nice, I guess.
But maybe we're incredibly demanding (which we are) and we want, say, the 6th through the 8th row (which we do). Don't worry (which I know you were), we can do that, too.
End of explanation
"""
# Get the names of the columns, just because
#columns_we_want = ['Name', 'Age']
#df[columns_we_want]
# If we want to be "correct" we add .values on the end of it
df.columns
# Select only name and age
# Combing that with .head() to see not-so-many rows
columns_we_want = ['Name', 'Age']
df[columns_we_want].head()
# We can also do this all in one line, even though it starts looking ugly
# (unlike the cute bears pandas looks ugly pretty often)
df[['Name', 'Age',]].head()
"""
Explanation: It's kind of like an array, right? Except where in an array we'd say df[0] this time we need to give it two numbers, the start and the end.
Selecting columns
But jeez, my eyes don't want to go that far over the data. I only want to see, uh, name and age.
End of explanation
"""
df.head()
"""
Explanation: NOTE: That was not df['Name', 'Age'], it was df[['Name', 'Age']]. You'll definitely type it wrong all of the time. When things break with pandas it's probably because you forgot to put in a million brackets.
Describing your data
A powerful tool of pandas is being able to select a portion of your data, because who ordered all that data anyway.
End of explanation
"""
# Grab the POS column, and count the different values in it.
df['POS'].value_counts()
"""
Explanation: I want to know how many people are in each position. Luckily, pandas can tell me!
End of explanation
"""
#race
race_counts = df['Race'].value_counts()
race_counts
# Summary statistics for Age
df['Age'].describe()
df.describe()
# That's pretty good. Does it work for everything? How about the money?
df['2013 $'].describe()
#The result isn't very useful, because the money column is stored as a string.
"""
Explanation: Now that was a little weird, yes - we used df['POS'] instead of df[['POS']] when viewing the data's details.
But now I'm curious about numbers: how old is everyone? Maybe we could, I don't know, get some statistics about age? Some statistics to describe age?
End of explanation
"""
# Doing more describing
df['Ht (In.)'].describe()
"""
Explanation: Unfortunately because that has dollar signs and commas it's thought of as a string. We'll fix it in a second, but let's try describing one more thing.
End of explanation
"""
# Take another look at our inches, but only the first few
df['Ht (In.)'].head()
# Divide those inches by 12
#number_of_inches = 300
#number_of_inches / 12
df['Ht (In.)'].head() / 12
# Let's divide ALL of them by 12
df['Ht (In.)'] / 12
# Can we get statistics on those?
height_in_feet = df['Ht (In.)'] / 12
height_in_feet.describe()
# Let's look at our original data again
df.head(3)
"""
Explanation: That's stupid, though, what's an inch even look like? What's 80 inches? I don't have a clue. If only there were some wa to manipulate our data.
Manipulating data
Oh wait there is, HA HA HA.
End of explanation
"""
# Store a new column
df['feet'] = df['Ht (In.)'] / 12
df.head()
"""
Explanation: Okay that was nice but unfortunately we can't do anything with it. It's just sitting there, separate from our data. If this were normal code we could do blahblah['feet'] = blahblah['Ht (In.)'] / 12, but since this is pandas, we can't. Right? Right?
End of explanation
"""
# Can't just use .replace
# Need to use this weird .str thing
df['2013 $'].str.replace("$", "", regex=False).head()
# Can't just immediately replace the , either
# Need to use the .str thing before EVERY string method
df['2013 $'].str.replace("$", "", regex=False).str.replace(",", "", regex=False).head()
# Describe still doesn't work.
# Let's convert it to an integer using .astype(int) before we describe it
# Unfortunately one is "n/a" which is going to break our code, so we can make n/a be 0
df['2013 $'].str.replace("$", "", regex=False).str.replace(",", "", regex=False).replace("n/a", "0").astype(int).describe()
# Maybe we can just make them millions?
# Remove the .head() piece and save it back into the dataframe
# (the new column name 'millions' is just an illustration)
df['millions'] = df['2013 $'].str.replace("$", "", regex=False).str.replace(",", "", regex=False).replace("n/a", "0").astype(int) / 1000000
"""
Explanation: That's cool, maybe we could do the same thing with their salary? Take out the $ and the , and convert it to an integer?
End of explanation
"""
# This is just the first few guys in the dataset. Can we order it?
# Let's try to sort them, ascending value
df.sort_values('feet')
"""
Explanation: The average basketball player makes 3.8 million dollars and is a little over six and a half feet tall.
But who cares about those guys? I don't care about those guys. They're boring. I want the real rich guys!
Sorting and sub-selecting
End of explanation
"""
# It isn't descending = True, unfortunately
df.sort_values('feet', ascending=False).head()
# We can use this to find the oldest guys in the league
df.sort_values('Age', ascending=False).head()
# Or the youngest, by taking out 'ascending=False'
df.sort_values('feet').head()
"""
Explanation: Those guys are making nothing! If only there were a way to sort from high to low, a.k.a. descending instead of ascending.
End of explanation
"""
# Get a big long list of True and False for every single row.
df['feet'] > 6.5
# We could use value counts if we wanted
above_or_below_six_five = df['feet'] > 6.5
above_or_below_six_five.value_counts()
# But we can also apply this to every single row to say whether YES we want it or NO we don't
# Instead of putting column names inside of the brackets, we instead
# put the True/False statements. It will only return the players above
# six and a half feet tall
df[df['feet'] > 6.5]
df['Race'] == 'Asian'
# Wrap the True/False series in df[...] to get the matching rows
df[df['Race'] == 'Asian']
# Or only the guards
df[df['POS'] == 'G'].head()
#People below 6.5 feet
df['feet'] < 6.5
#Every condition you want to query needs parentheses around it
#Guards that are shorter than 6.5 feet
#this is a combination of both conditions
df[(df['POS'] == 'G') & (df['feet'] < 6.5)].head()
#We can save stuff
centers = df[df['POS'] == 'C']
guards = df[df['POS'] == 'G']
centers['feet'].describe()
guards['feet'].describe()
# It might be easier to break down the booleans into separate variables
# We can save this stuff
# Maybe we can compare them to taller players?
"""
Explanation: But sometimes instead of just looking at them, I want to do stuff with them. Play some games with them! Dunk on them~ describe them! And we don't want to dunk on everyone, only the players above 7 feet tall.
First, we need to check out boolean things.
End of explanation
"""
# This will scream we don't have matplotlib (if it isn't installed).
# (the original cell was blank; a plot call like this is the likely intent)
df['Age'].hist()
"""
Explanation: Drawing pictures
Okay okay enough code and enough stupid numbers. I'm visual. I want graphics. Okay????? Okay.
End of explanation
"""
# plotting right away will open up a weird window that won't do anything
# So instead you run this code to draw the plots inside the notebook
%matplotlib inline
"""
Explanation: matplotlib is a graphing library. It's the Python way to make graphs!
End of explanation
"""
# Import matplotlib
import matplotlib.pyplot as plt
# What's available?
plt.style.available
# Use ggplot
plt.style.use('ggplot')
# Make a histogram
df['Age'].hist()
# Try some other styles, e.g. plt.style.use('fivethirtyeight')
"""
Explanation: But that's ugly. There's a thing called ggplot for R that looks nice. We want to look nice. We want to look like ggplot.
End of explanation
"""
# Pass in all sorts of stuff!
# Most from http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.hist.html
# range is a matplotlib keyword that pandas passes through
# (the bin count and range values here are just illustrative)
df['Age'].hist(bins=20, range=(18, 45))
"""
Explanation: That might look better with a little more customization. So let's customize it.
End of explanation
"""
# How does experience relate with the amount of money they're making?
# At least we can assume height and weight are related
# (the weight column name 'WT' below is a guess about this CSV)
# http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.plot.html
df.plot(kind='scatter', x='Ht (In.)', y='WT')
# We can also use plt separately
# It's SIMILAR but TOTALLY DIFFERENT
plt.scatter(df['Ht (In.)'], df['WT'])
"""
Explanation: I want more graphics! Do tall people make more money?!?!
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/nims-kma/cmip6/models/sandbox-2/seaice.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nims-kma', 'sandbox-2', 'seaice')
"""
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: NIMS-KMA
Source ID: SANDBOX-2
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:29
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
*Which simulations had tuning applied, e.g. all, not historical, only pi-control?*
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specified for the following parameters, if used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration, and provide full details where this affects the CMIP6 diagnostic sea ice variables.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets, as a comma separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involve flux correction?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
Grid on which sea ice is horizontally discretised?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model thermodynamic component in seconds.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model dynamic component in seconds.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories specify any additional details. For example, models that parameterise the ice thickness distribution (ITD), i.e. there is no explicit ITD, but there is an assumed distribution and fluxes are computed accordingly.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Rheology, what is the ice deformation formulation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation
"""
|
diging/tethne-notebooks | .ipynb_checkpoints/1. Working with data from the Web of Science-checkpoint.ipynb | gpl-3.0 | print "This is a code cell!"
"""
Explanation: Introduction to Tethne: Loading Data, part 1
In this notebook we will take our first steps with the Tethne Python package. We'll parse some bibliographic records from the ISI Web of Science, and take a look at the Corpus class and its various features. We'll then use some of the functions in tethne.networks to generate some simple networks from our bibliographic dataset.
Methods in Digital & Computational Humanities
This notebook is part of a cluster of learning resources developed by the Laubichler Lab and the Digital Innovation Group at Arizona State University as part of an initiative for digital and computational humanities (d+cH). For more information, see our evolving online methods course at https://diging.atlassian.net/wiki/display/DCH.
Getting Help
Development of the Tethne project is led by Erick Peirson. To get help, first check our issue tracking system on GitHub. There, you can search for questions and problems reported by other users, or ask a question of your own. You can also reach Erick via e-mail at erick.peirson@asu.edu.
Documentation & Tutorials
Additional documentation and tutorials for the Tethne Python package are available at ....
Using this notebook
This is an interactive Python notebook. Most of the content is just marked-down text, like this paragraph, that provides exposition on some aspect of the Tethne package. Some of the cells are "code" cells, which look like this:
End of explanation
"""
from tethne.readers import wos
"""
Explanation: You can execute the code in a code cell by clicking on it and pressing Shift-Enter on your keyboard, or by clicking the right-arrow "Run" button in the toolbar at the top of the page. The cell below will automatically be selected, so you can run many cells in quick succession by repeatedly pressing Shift-Enter (or the "Run" button). It's a good idea to run all of the code cells in order, from the top of the tutorial, since many commands later in the tutorial will depend on earlier ones.
Play!!
As we work through the notebook, you'll need to modify certain values depending on where your data is located. You should also experiment! Try changing the parameters in the functions demonstrated below, and re-run the code-cell to see the result. That's what's great about iPython notebooks: you can play around with specific chunks of code without having to re-run the entire script.
Getting Bibliographic Data from the ISI Web of Science
The ISI Web of Science is a proprietary database owned by Thompson Reuters. It is one of the oldest and most comprehensive scientific bibliographic databases in existance. If you are affiliated with an academic institution, you may have access to this database via an institutional license.
For the purpose of this tutorial, you can download a practice dataset from (insert link to dataset). Move the downloaded zip to a place where you can find it, and uncompress its contents. You'll need the full path to the uncompressed dataset.
Perform a search for literature of interest using the interface provided.
Your search criteria will be informed by the objectives of your research
project. If you are attempting to characterize the development of a research
field, for example, you should choose terms that pick out that field as uniquely
as possible (consider using the Publication Name search field). You can also
pick out literatures originating from particular institutions, by using the
Organization-Enhanced search field.
Note also that you can restrict your research to one of three indexes in the Web
of Science Core Collection:
Science Citation Index Expanded is the largest index, containing scientific
publications from 1900 onward.
Social Sciences Citation Index covers 1956 onward.
Arts & Humanities Citation Index is the smallest index, containing
publications from 1975 onward.
Once you have found the papers that you are interested in, find the Send to:
menu at the top of the list of results. Click the small orange down-arrow, and
select Other File Formats.
A small in-browser window should open in the foreground. Specify the range of
records that you wish to download. Note that you can only download 500 records
at a time, so you may have to make multiple download requests. Be sure to
specify Full Record and Cited References in the Record Content field, and
Plain Text in the File Format field. Then click Send.
After a few moments, a download should begin. WoS usually returns a field-tagged
data file called savedrecs.txt. Put this in a location on your filesystem
where you can find it later; this is the input for Tethne's WoS reader methods.
Structure of the WoS Field-Tagged Data File
If you open the text file returned by the WoS database (usually named
'savedrecs.txt'), you should see a whole bunch of field-tagged data.
"Field-tagged" means that each metadata field is denoted by a "tag" (a
two-letter code), followed by values for that field. A complete list of WoS
field tags can be found here. For best results, you should avoid making changes
to the contents of WoS data files.
The metadata record for each paper in your data file should begin with:
PT J
...and end with:
ER
There are two author fields: the AU field is always provided, and values take
the form "Last, FI". AF is provided if author full-names are available, and
values take the form "Last, First Middle". For example:
AU Dauvin, JC
Grimes, S
Bakalem, A
AF Dauvin, Jean-Claude
Grimes, Samir
Bakalem, Ali
Citations are listed in the CR block. For example:
CR Airoldi L, 2007, OCEANOGR MAR BIOL, V45, P345
Alexander Vera, 2011, Marine Biodiversity, V41, P545, DOI 10.1007/s12526-011-0084-1
Arvanitidis C, 2002, MAR ECOL PROG SER, V244, P139, DOI 10.3354/meps244139
Bakalem A, 2009, ECOL INDIC, V9, P395, DOI 10.1016/j.ecolind.2008.05.008
Bakalem Ali, 1995, Mesogee, V54, P49
…
Zenetos A, 2005, MEDITERR MAR SCI, V6, P63
Zenetos A, 2004, CIESM ATLAS EXOTIC S, V3
More recent records also include the institutional affiliations of authors in the C1
block.
C1 [Wang, Changlin; Washida, Haruhiko; Crofts, Andrew J.; Hamada, Shigeki;
Katsube-Tanaka, Tomoyuki; Kim, Dongwook; Choi, Sang-Bong; Modi, Mahendra; Singh,
Salvinder; Okita, Thomas W.] Washington State Univ, Inst Biol Chem, Pullman, WA 99164
USA.
For more information about WoS field tags, see a list on the Thomson Reuters website,
here.
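To get a feel for the layout, here is a toy sketch (not Tethne's actual parser) that splits a few field-tagged lines into tag/value pairs; real WoS files also contain continuation lines and multi-value fields, which this ignores:

```python
# Toy field-tag splitter over invented lines -- for intuition only.
sample_lines = ["PT J", "TI On the Origin of Species", "PY 1859", "ER"]

record = {}
for line in sample_lines:
    tag, _, value = line.partition(" ")
    if value:                 # "ER" closes the record and carries no value
        record[tag] = value
print(record)
```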
Parsing Web of Science Field-Tagged Data
The modules in the tethne.readers subpackage allow you to parse data from a few different databases. The readers for Web of Science, JSTOR DfR, and Zotero RDF datasets are the most rigorously tested. Request support for a new dataset on our GitHub project site.
| Database | module |
| ----------------------- |---------------------------|
| Web of Science | tethne.readers.wos |
| JSTOR Data-for-Research | tethne.readers.dfr |
| Zotero | tethne.readers.zotero |
You can load the tethne.readers.wos module by importing it from the tethne.readers subpackage:
End of explanation
"""
corpus = wos.read('/Users/erickpeirson/Dropbox/HSS ThatCamp Workshop/sample_data/wos/savedrecs.txt')
"""
Explanation: To parse data from a WoS dataset, use the read method. Each module in the tethne.readers subpackage should have a read method.
read can parse a single data file, or a directory full of data files, and returns a Corpus object. Just pass it a string containing the path to your data. First, try parsing a single WoS field-tagged data file.
End of explanation
"""
print 'Loaded %i records!' % len(corpus)
"""
Explanation: You can see how many records were loaded from your data file by evaluating the len of the Corpus.
End of explanation
"""
corpus = wos.read('/Users/erickpeirson/Dropbox/HSS ThatCamp Workshop/sample_data/wos/')
"""
Explanation: Reading more than one data file at a time
Often you'll be working with datasets comprised of multiple data files. The Web of Science database only allows you to download 500 records at a time (because they're dirty capitalists). You can use the read function to load a list of Papers from a directory containing multiple data files.
Instead of providing the path to a single data file, just provide the path to a directory containing several WoS field-tagged data files. The read function knows that your path is a directory and not a data file; it looks inside of that directory for WoS data files.
End of explanation
"""
print 'Loaded %i records!' % len(corpus)
"""
Explanation: We should have quite a few more records this time:
End of explanation
"""
corpus[500].__dict__ # [500] gets the 501st Paper, and __dict__ generates a
# key-value representation of the data in the Paper.
"""
Explanation: Corpus objects
A Corpus is a collection of Papers with superpowers. Each Paper represents one bibliographic record. Most importantly, the Corpus provides a consistent way of indexing bibliographic records. Indexing is important, because it sets the stage for all of the subsequent analyses that we may wish to do with our bibliographic data.
A Corpus behaves like a list of Papers. We can select a single Paper like this:
End of explanation
"""
corpus[500].title
"""
Explanation: There are several things to notice in the output above. First, each Paper should (generally) have a title:
End of explanation
"""
# corpus[500] gets a Paper, and ``.date`` gets the date attribute.
print 'Date:'.ljust(20), corpus[500].date
print 'Journal:'.ljust(20), corpus[500].journal
print 'WoS accession ID:'.ljust(20), corpus[500].wosid
print 'DOI:'.ljust(20), corpus[500].doi
"""
Explanation: Each Paper should also have a date, journal, and wosid (WoS accession ID). Many will also have dois. Note that we can access the attributes of each Paper using . notation:
End of explanation
"""
corpus[500].authors
"""
Explanation: Each Paper will also have authors. Tethne represents author names as "tuples" of the form (last, first). Depending on the record, first might be first and middle initials, or first and middle names.
End of explanation
"""
corpus[2].citedReferences
"""
Explanation: Unlike other bibliographic datasets, WoS data contain the cited references of each Paper. Each cited reference is represented as a Paper:
End of explanation
"""
corpus[2].citations
"""
Explanation: A "prettier" representation of the cited references is available in the citations attribute.
End of explanation
"""
corpus[2].ayjid
"""
Explanation: Each cited reference is represented by what we call an 'ayjid': it contains the author name, year of publication, and the journal in which it was published. Every Paper has an 'ayjid'.
End of explanation
"""
corpus.index_by
"""
Explanation: Indexing
The most important functionality of the Corpus is indexing. Indexing provides a way of looking up Papers by specific attributes, e.g. by the year in which they were published, or by author.
Each Corpus has a single "primary" index. For WoS data, the wosid field (WoS accession ID) is used by default, since every WoS record has one. You can see which field was used as the primary index by accessing the .index_by attribute of the Corpus.
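Conceptually, an index is just a mapping from field values to the papers that carry them. A toy version of the idea (the WoS IDs here are invented, and Tethne's internal layout may differ):

```python
# Building a miniature "date" index by hand -- IDs are made up.
papers = [("WOS:A1", 1985), ("WOS:A2", 1986), ("WOS:A3", 1985)]

index = {}
for wosid, date in papers:
    index.setdefault(date, []).append(wosid)
print(index[1985])
```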
End of explanation
"""
corpus.indexed_papers.items()[:10]
"""
Explanation: All of the Papers in the Corpus are stored by wosid in the indexed_papers attribute. The code cell below shows the first ten Papers with their indexing keys.
End of explanation
"""
corpus.indices.keys()
"""
Explanation: Additional indexes are located in the indices attribute. The code-cell below shows which fields are already indexed.
End of explanation
"""
for paper in corpus[('authors', ('MAIENSCHEIN', 'J'))]:
print paper.date, paper.title
"""
Explanation: We can look up Papers using the name of an indexed field and some value. For example, to see all of the Papers in which ('MAIENSCHEIN', 'J') is an author, we could do:
End of explanation
"""
corpus.index('date')
"""
Explanation: We can create a new index using the index() method. For example, to index Papers by date, we could do:
End of explanation
"""
corpus.indices.keys()
"""
Explanation: 'date' should now show up in the available indices...
End of explanation
"""
for paper in corpus[('date', 1985)]:
print paper.date, paper.title
"""
Explanation: ...and we can now look up all of the Papers published in 1985:
End of explanation
"""
from tethne import networks
"""
Explanation: Simple Networks
One of the core features of Tethne is a set of functions for building networks from bibliographic datasets. These functions are located in the tethne.networks subpackage. In this section, we'll build a coauthorship network.
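The idea behind a coauthorship network is simple: every pair of authors who appear on the same paper gets an edge, and the edge weight counts how many papers they share. A toy version of that pairing logic (author names invented):

```python
from itertools import combinations

# Author lists for two fictional papers.
paper_authors = [("SMITH, J", "JONES, A", "LEE, K"),
                 ("SMITH, J", "JONES, A")]

edges = {}
for authors in paper_authors:
    for pair in combinations(sorted(authors), 2):
        edges[pair] = edges.get(pair, 0) + 1   # weight = shared papers
print(edges[("JONES, A", "SMITH, J")])  # 2
```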
The first step is to import the networks subpackage.
End of explanation
"""
coauthor_graph = networks.coauthors(corpus)
"""
Explanation: Now use the coauthors function to create the network. We need provide it only our Corpus:
End of explanation
"""
print coauthor_graph.order() # Number of nodes.
print coauthor_graph.size() # Number of edges.
"""
Explanation: Tethne uses a package called NetworkX to build networks. All of the network-building functions return NetworkX Graph objects. We can see how large our network is using the order() and size() methods:
End of explanation
"""
coauthor_graph.nodes()[:10] # [:10] just shows the first ten.
"""
Explanation: As you can see, historians of science don't collaborate much.
To see a list of nodes, use the nodes() method:
End of explanation
"""
coauthor_graph.edges(data=True)[:10] # [:10] just shows the first ten.
# data=True tells edges() to return details about each edge.
"""
Explanation: ...and edges() for edges:
End of explanation
"""
from tethne.writers.graph import to_graphml
"""
Explanation: For networks with anything more than a few nodes, it's hard to visualize what's going on in the iPython environment. So we'll export the coauthor_graph and visualize it in a network analysis package called Cytoscape.
Cytoscape understands several network file formats. GraphML ((link here)) is probably the most versatile, so we'll use it to export our coauthor graph.
The tethne.writers.graph module has several functions for writing graphs to disk. We'll use to_graphml():
End of explanation
"""
to_graphml(coauthor_graph, '/Users/erickpeirson/Desktop/coauthors_graph.graphml')
"""
Explanation: to_graphml() accepts two arguments: the graph itself, and a string with the path to the output file (that will be created). In the example below, I just put the graph on my desktop.
End of explanation
"""
from tethne import bibliographic_coupling
"""
Explanation: If you were to open that file, the first few lines would look something like this:
```
<?xml version='1.0' encoding='utf-8'?>
<graphml xmlns="http://graphml.graphdrawing.org/xmlns" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://graphml.graphdrawing.org/xmlns http://graphml.graphdrawing.org/xmlns/1.0/graphml.xsd">
<key attr.name="weight" attr.type="int" for="edge" id="weight" />
<key attr.name="documentCount" attr.type="int" for="node" id="documentCount" />
<key attr.name="count" attr.type="double" for="node" id="count" />
<graph edgedefault="undirected">
<node id="WEBER, BH">
<data key="count">1.0</data>
<data key="documentCount">1</data>
</node>
<node id="CAPELOTTI, PJ">
<data key="count">1.0</data>
<data key="documentCount">1</data>
</node>
<node id="GREENE, MOTT T">
<data key="count">3.0</data>
<data key="documentCount">3</data>
</node>
<node id="PRETE, FR">
<data key="count">3.0</data>
<data key="documentCount">3</data>
</node>
```
Everything in the graph is enclosed between the <graphml ...></graphml> tags. Each author is represented by a <node></node> element. Further down, relationships between authors are represented by <edge></edge> elements.
<edge source="TAUBER, AI" target="BALABAN, M">
<data key="weight">1</data>
</edge>
<edge source="TAUBER, AI" target="PODOLSKY, SH">
<data key="weight">1</data>
</edge>
<edge source="TAUBER, AI" target="CRIST, E">
<data key="weight">1</data>
</edge>
<edge source="RUPKE, N" target="HOSSFELD, U">
<data key="weight">1</data>
</edge>
<edge source="GAWNE, RICHARD" target="NICHOLSON, DANIEL J">
<data key="weight">1</data>
</edge>
Visualization in Cytoscape
Go ahead and load Cytoscape. After the application loads, you should see a splash screen like the one below. Click on "From network file", then select your graphml file and click OK.
Once the network loads, you'll see a jumble of nodes and edges. Click on the "Apply Preferred Layout" button (it looks like nodes with arrows pointing in various directions) at the top of the screen.
By default, this should apply a force-directed layout. After a few moments, your network should look something like the image below.
We can visualize attributes of the graph in the "Styles" menu. Click on "Styles" in the upper left. In the example below, I set node width & height to be equal, and set node size as a continuous function of "count" (this is the number of papers written by each author).
We can set edge attributes, too. In the example below, I set edge width to be a function of "weight", which is the number of papers that the two connected authors wrote together.
You can zoom in and out to take a closer look at parts of the graph. If you click on the "network" tab in the upper left, you'll see a mini version of your network in the lower left, with a blue box showing which area you're currently viewing.
Bibliographic Coupling
Bibliographic coupling can be a useful and computationally cheap way to explore the
thematic topology of a large scientific literature.
Bibliographic coupling was first
proposed as a method for detecting latent topical affinities among research publications
by Myer M. Kessler at MIT in 1958. In 1972, J.C. Donohue suggested that bibliographic
coupling could be used to the map "research fronts" in science, and this method, along
with co-citation analysis and other citation-based clustering techniques, became a core
methodology of the science-mapping craze of the 1970s. Bibliographic coupling is still
employed in the context of both information retrieval and science studies.
Two papers are bibliographically coupled if they both cite at least some of the same
papers. The core assumption of bibliographic coupling analysis is that if two papers
cite similar literatures, then they must be topically related in some way. That is, they
are more likely to be related to each other than to papers with which they share no cited
references.
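The core computation is just set intersection. As a rough illustrative sketch (independent of Tethne's own implementation), the coupling weight of two papers is the number of cited references they share:

```python
def coupling_weight(refs_a, refs_b):
    ''' Number of cited references shared by two papers. '''
    return len(set(refs_a) & set(refs_b))
```

An edge is drawn between two papers when this weight meets some threshold; Tethne's min_weight parameter, used below, plays that role.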
What we are aiming for is a graph model of our bibliographic
data that reveals thematically coherent and informative clusters of documents. We will use
Tethne's bibliographic_coupling() function to generate such a network.
First we import the function:
End of explanation
"""
coupling_graph = bibliographic_coupling(corpus, min_weight=3, node_attrs=['date', 'title'])
"""
Explanation: We use this function just like the coauthors() function -- passing the Corpus as our first argument -- but we can also pass additional arguments. min_weight indicates that two Papers must share at least three cited references to be coupled. node_attrs tells the function to add additional information to each node; in this case, 'date' and 'title'.
End of explanation
"""
coupling_graph.order(), coupling_graph.size()
"""
Explanation: We can "tune" this function by increasing or decreasing min_weight to yield more or less dense graphs. order() (the number of nodes) and size() (the number of edges) give us a sense of the density of the graph.
End of explanation
"""
to_graphml(coupling_graph, '/Users/erickpeirson/Desktop/coupling_graph.graphml')
"""
Explanation: We can use the to_graphml() function once again to write the graph to disk, so that we can visualize it in Cytoscape.
End of explanation
"""
|
eds-uga/csci1360-fa16 | assignments/A9/A9_Q1.ipynb | mit | def read_book(f):
return open(f, "r").read()
try:
read_book
except:
assert False
else:
assert True
assert read_book("complete_shakspeare.txt") is None
assert read_book("queen_jean_bible.txt") is None
book1 = read_book("moby_dick.txt")
assert len(book1) == 1238567
book2 = read_book("war_and_peace.txt")
assert len(book2) == 3224780
"""
Explanation: Q1
In this question, you'll be doing some basic processing on four books: Moby Dick, War and Peace, The King James Bible, and The Complete Works of William Shakespeare. Each of these texts are available for free on the Project Gutenberg website.
A
Write a function, read_book, which takes the name of the text file containing the book's content, and returns a single string containing the content.
input: a single string indicating the name of the text file with the book
output: a single string of the entire book
Your function should be able to handle file-related errors gracefully; in this case, just return None.
End of explanation
"""
try:
word_counts
except:
assert False
else:
assert True
assert 0 == len(word_counts("").keys())
assert 1 == word_counts("hi there")["there"]
kj = word_counts(open("king_james_bible.txt", "r").read())
assert 23 == kj["devil"]
assert 4 == kj["leviathan"]
wp = word_counts(open("war_and_peace.txt", "r").read())
assert 30 == wp["devil"]
assert 86 == wp["soul"]
"""
Explanation: B
Write a function, word_counts, which takes a single string as input (containing an entire book), and returns as output a dictionary of word counts.
Don't worry about handling punctuation, but definitely handle whitespace (spaces, tabs, newlines). Also make sure to handle capitalization, and throw out any words with a length of 2 or less. No other "preprocessing" requirements outside these.
You are welcome to use the collections.defaultdict dictionary for tracking word counts, but no other built-in Python packages or functions for counting.
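One possible sketch that satisfies these requirements (lowercasing, whitespace splitting, dropping words of length 2 or less); your own solution may differ:

```python
from collections import defaultdict

def word_counts(text):
    ''' Returns a dict mapping each word (lowercased, length > 2) to its count. '''
    counts = defaultdict(int)
    # split() with no arguments splits on any run of whitespace
    # (spaces, tabs, newlines)
    for word in text.lower().split():
        if len(word) > 2:
            counts[word] += 1
    return counts
```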
End of explanation
"""
try:
total_words
except:
assert False
else:
assert True
try:
words = total_words("")
except:
assert False
else:
assert words == 0
assert 11 == total_words("The brown fox jumped over the lazy cat.\nTwice.\nMMyep. Twice.")
assert 681216 == total_words(open("king_james_bible.txt", "r").read())
assert 729531 == total_words(open("complete_shakespeare.txt", "r").read())
"""
Explanation: C
Write a function, total_words, which takes as input a string containing the contents of a book, and returns as output the integer count of the total number of words (this is NOT unique words, but total words).
Same rules apply as in Part B with respect to what constitutes a "word" (capitalization, punctuation, splitting, etc), but you are welcome to use your Part B solution in answering this question!
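One possible sketch, shown as a self-contained version that inlines the Part B rules (equivalently, you could return sum(word_counts(text).values())):

```python
def total_words(text):
    ''' Returns the total (non-unique) word count, using the Part B rules:
        lowercase, split on whitespace, drop words of length 2 or less. '''
    return sum(1 for word in text.lower().split() if len(word) > 2)
```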
End of explanation
"""
try:
unique_words
except:
assert False
else:
assert True
try:
    words = unique_words("")
except:
assert False
else:
assert words == 0
assert 9 == unique_words("The brown fox jumped over the lazy cat.\nTwice.\nMMyep. Twice.")
assert 31586 == unique_words(open("moby_dick.txt", "r").read())
assert 40021 == unique_words(open("war_and_peace.txt", "r").read())
"""
Explanation: D
Write a function, unique_words, which takes as input a string containing the full contents of a book, and returns an integer count of the number of unique words in the book.
Same rules apply as in Part B with respect to what constitutes a "word" (capitalization, punctuation, splitting, etc), but you are welcome to use your Part B solution in answering this question!
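A possible sketch using a set comprehension with the same preprocessing rules as Part B:

```python
def unique_words(text):
    ''' Returns the number of distinct words, using the same rules as Part B. '''
    return len({word for word in text.lower().split() if len(word) > 2})
```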
End of explanation
"""
try:
global_vocabulary
except:
assert False
else:
assert True
doc1 = "This is a sentence."
doc2 = "This is another sentence."
doc3 = "What is this?"
assert set(["another", "sentence.", "this", "this?", "what"]) == set(global_vocabulary(doc1, doc2, doc3))
assert 31586 == len(global_vocabulary(open("moby_dick.txt", "r").read()))
assert 40021 == len(global_vocabulary(open("war_and_peace.txt", "r").read()))
kj = open("king_james_bible.txt", "r").read()
wp = open("war_and_peace.txt", "r").read()
md = open("moby_dick.txt", "r").read()
cs = open("complete_shakespeare.txt", "r").read()
assert 118503 == len(global_vocabulary(kj, wp, md, cs))
"""
Explanation: E
Write a function, global_vocabulary, which takes a variable number of arguments: each argument is a string containing the contents of a book. The output of the function should be a list or set of unique words that comprise the full vocabulary of terms present across all the books that are passed to the function.
For example, if I have the following code:
```
doc1 = "This is a sentence."
doc2 = "This is another sentence."
doc3 = "What is this?"
vocabulary = global_vocabulary(doc1, doc2, doc3)
```
this should return a list or set containing the words ["another", "sentence.", "this", "this?", "what"] (note that "a" and "is" are dropped by the length filter from Part B). The words should be in increasing lexicographic order (aka, standard alphabetical order), and all the preprocessing steps required in previous sections should be used. As such, you are welcome to use your word_counts function from Part B.
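One possible sketch that builds the union of per-document vocabularies and returns it sorted:

```python
def global_vocabulary(*docs):
    ''' Returns the sorted union of words (Part B rules) across all documents. '''
    vocab = set()
    for doc in docs:
        vocab.update(word for word in doc.lower().split() if len(word) > 2)
    return sorted(vocab)
```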
End of explanation
"""
try:
featurize
except:
assert False
else:
assert True
kj = open("king_james_bible.txt", "r").read()
wp = open("war_and_peace.txt", "r").read()
matrix = featurize(kj, wp)
assert 2 == matrix.shape[0]
assert 63889 == matrix.shape[1]
assert 2 == int(matrix[:, 836].sum())
assert 16 == int(matrix[:, 62655].sum())
kj = open("king_james_bible.txt", "r").read()
wp = open("war_and_peace.txt", "r").read()
md = open("moby_dick.txt", "r").read()
cs = open("complete_shakespeare.txt", "r").read()
matrix = featurize(kj, wp, md, cs)
assert 4 == matrix.shape[0]
assert 118503 == matrix.shape[1]
assert 3 == int(matrix[:, 103817].sum())
assert 1 == int(matrix[:, 71100].sum())
"""
Explanation: F
Write a function, featurize, which takes a variable number of arguments: each argument is a string with the contents of an entire book. The output of this function is a 2D NumPy array of counts, where the rows are the documents and the columns are the counts for all the words.
For instance, if I pass two input strings to featurize that collectively have 50 unique words between them, the output matrix should have shape (2, 50): the first row will be the respective counts of the words in that document, and same with the second row.
The rows (documents) should be in the same ordering as they're given in the function's argument list, and the columns (words) should be in increasing lexicographic order (aka alphabetic order). You are welcome to use your function from Part B, and from Part E.
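A possible sketch: build the sorted global vocabulary first, map each word to its column, then fill in the counts row by row:

```python
import numpy as np

def featurize(*docs):
    ''' Returns a (num_docs, vocab_size) count matrix; columns are the
        global vocabulary in increasing lexicographic order. '''
    vocab = sorted({word for doc in docs
                    for word in doc.lower().split() if len(word) > 2})
    column = {word: j for j, word in enumerate(vocab)}
    matrix = np.zeros((len(docs), len(vocab)))
    for i, doc in enumerate(docs):
        for word in doc.lower().split():
            if len(word) > 2:
                matrix[i, column[word]] += 1
    return matrix
```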
End of explanation
"""
try:
probability
except:
assert False
else:
assert True
import numpy as np
matrix = np.load("lut.npy")
np.testing.assert_allclose(0.068569417725812987, probability(104088, matrix))
np.testing.assert_allclose(0.012485067486917144, probability(54096, matrix))
np.testing.assert_allclose(0.0073786475907416712, probability(21668, matrix))
np.testing.assert_allclose(0.0, probability(66535, matrix), rtol = 1e-6)
import numpy as np
matrix = np.load("lut.npy")
np.testing.assert_allclose(0.012404288801202555, probability(54096, matrix, 0))
np.testing.assert_allclose(0.0077914081371666744, probability(21668, matrix, 1))
np.testing.assert_allclose(0.0094279749592546449, probability(117297, matrix, 3))
"""
Explanation: G
Write a function, probability, which takes three arguments:
- an integer, indicating the index of the word we're interested in computing the probability for
- a 2D NumPy matrix of word counts, where the rows are the documents, and the columns are the words
- an optional integer, indicating the specific document in which we want to compute the probability of the word (default is all documents)
This function is the implementation of $P(w)$ for some word $w$. By default, this is probability of word $w$ over our entire dataset. However, by specifying an optional integer, we can specify a conditional probability $P(w | d)$. In this case, we're asking for the probability of word $w$ given some specific document $d$.
Your function should return the probability, a floating-point value between 0 and 1. It should be able to handle the case where the specified word index is out of bounds (resulting probability of 0), as well as the case where the document index is out of bounds (also a probability of 0).
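One possible sketch: P(w) is the word's column sum over the grand total, and P(w | d) is the word's count in that document over the document's row sum, with bounds checks returning 0:

```python
import numpy as np

def probability(word_index, matrix, doc_index=None):
    ''' P(w) over all documents, or P(w | d) if doc_index is given.
        Out-of-bounds word or document indices yield probability 0. '''
    if not 0 <= word_index < matrix.shape[1]:
        return 0.0
    if doc_index is None:
        return matrix[:, word_index].sum() / matrix.sum()
    if not 0 <= doc_index < matrix.shape[0]:
        return 0.0
    return matrix[doc_index, word_index] / matrix[doc_index].sum()
```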
End of explanation
"""
|
dmytroKarataiev/MachineLearning | student_intervention/student_intervention.ipynb | mit | # Import libraries
import numpy as np
import pandas as pd
from time import time
from sklearn.metrics import f1_score
# Read student data
student_data = pd.read_csv("student-data.csv")
print "Student data read successfully!"
"""
Explanation: Machine Learning Engineer Nanodegree
Supervised Learning
Project 2: Building a Student Intervention System
Welcome to the second project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and it will be your job to implement the additional functionality necessary to successfully complete this project. Sections that begin with 'Implementation' in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!
In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question X' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.
Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode.
Question 1 - Classification vs. Regression
Your goal for this project is to identify students who might need early intervention before they fail to graduate. Which type of supervised learning problem is this, classification or regression? Why?
Answer:
It is clearly a classification problem, because we need a discrete answer: yes or no.
Classification answers the question "do we need to intervene: yes or no?", while regression gives us a continuous answer, which clearly isn't required here.
Exploring the Data
Run the code cell below to load necessary Python libraries and load the student data. Note that the last column from this dataset, 'passed', will be our target label (whether the student graduated or didn't graduate). All other columns are features about each student.
End of explanation
"""
# Calculate number of students
n_students = len(student_data.index)
# Calculate number of features (minus 1, because "passed" is not a feature, but a target column)
n_features = len(student_data.columns) - 1
# Calculate passing students
n_passed = len(student_data[student_data['passed'] == 'yes'])
# Calculate failing students
n_failed = len(student_data[student_data['passed'] == 'no'])
# Calculate graduation rate
grad_rate = (n_passed / float(n_students)) * 100
# Print the results
print "Total number of students: {}".format(n_students)
print "Number of features: {}".format(n_features)
print "Number of students who passed: {}".format(n_passed)
print "Number of students who failed: {}".format(n_failed)
print "Graduation rate of the class: {:.2f}%".format(grad_rate)
"""
Explanation: Implementation: Data Exploration
Let's begin by investigating the dataset to determine how many students we have information on, and learn about the graduation rate among these students. In the code cell below, you will need to compute the following:
- The total number of students, n_students.
- The total number of features for each student, n_features.
- The number of those students who passed, n_passed.
- The number of those students who failed, n_failed.
- The graduation rate of the class, grad_rate, in percent (%).
End of explanation
"""
# Extract feature columns
feature_cols = list(student_data.columns[:-1])
# Extract target column 'passed'
target_col = student_data.columns[-1]
# Show the list of columns
print "Feature columns:\n{}".format(feature_cols)
print "\nTarget column: {}".format(target_col)
# Separate the data into feature data and target data (X_all and y_all, respectively)
X_all = student_data[feature_cols]
y_all = student_data[target_col]
# Show the feature information by printing the first five rows
print "\nFeature values:"
print X_all.head()
"""
Explanation: Preparing the Data
In this section, we will prepare the data for modeling, training and testing.
Identify feature and target columns
It is often the case that the data you obtain contains non-numeric features. This can be a problem, as most machine learning algorithms expect numeric data to perform computations with.
Run the code cell below to separate the student data into feature and target columns to see if any features are non-numeric.
End of explanation
"""
def preprocess_features(X):
''' Preprocesses the student data and converts non-numeric binary variables into
binary (0/1) variables. Converts categorical variables into dummy variables. '''
# Initialize new output DataFrame
output = pd.DataFrame(index = X.index)
# Investigate each feature column for the data
for col, col_data in X.iteritems():
# If data type is non-numeric, replace all yes/no values with 1/0
if col_data.dtype == object:
col_data = col_data.replace(['yes', 'no'], [1, 0])
# If data type is categorical, convert to dummy variables
if col_data.dtype == object:
# Example: 'school' => 'school_GP' and 'school_MS'
col_data = pd.get_dummies(col_data, prefix = col)
# Collect the revised columns
output = output.join(col_data)
return output
X_all = preprocess_features(X_all)
print "Processed feature columns ({} total features):\n{}".format(len(X_all.columns), list(X_all.columns))
"""
Explanation: Preprocess Feature Columns
As you can see, there are several non-numeric columns that need to be converted! Many of them are simply yes/no, e.g. internet. These can be reasonably converted into 1/0 (binary) values.
Other columns, like Mjob and Fjob, have more than two values, and are known as categorical variables. The recommended way to handle such a column is to create as many columns as possible values (e.g. Fjob_teacher, Fjob_other, Fjob_services, etc.), and assign a 1 to one of them and 0 to all others.
These generated columns are sometimes called dummy variables, and we will use the pandas.get_dummies() function to perform this transformation. Run the code cell below to perform the preprocessing routine discussed in this section.
End of explanation
"""
# Import Cross Validation functionality to perform splitting the data
from sklearn import cross_validation
# Set the number of training points
num_train = 300
# Set the number of testing points
num_test = X_all.shape[0] - num_train
# Calculate split percentage
splitPercentage = float(num_test) / (num_train + num_test)
print "Split percentage: {0:.2f}% ".format(splitPercentage * 100)
# Shuffle and split the dataset into the number of training and testing points above
X_train, X_test, y_train, y_test = cross_validation.train_test_split(
X_all, y_all, test_size = splitPercentage, stratify = y_all, random_state = 0)
# Show the results of the split
print "Training set has {} samples.".format(X_train.shape[0])
print "Testing set has {} samples.".format(X_test.shape[0])
"""
Explanation: Implementation: Training and Testing Data Split
So far, we have converted all categorical features into numeric values. For the next step, we split the data (both features and corresponding labels) into training and test sets. In the following code cell below, you will need to implement the following:
- Randomly shuffle and split the data (X_all, y_all) into training and testing subsets.
- Use 300 training points (approximately 75%) and 95 testing points (approximately 25%).
- Set a random_state for the function(s) you use, if provided.
- Store the results in X_train, X_test, y_train, and y_test.
End of explanation
"""
def train_classifier(clf, X_train, y_train):
''' Fits a classifier to the training data. '''
# Start the clock, train the classifier, then stop the clock
start = time()
clf.fit(X_train, y_train)
end = time()
# Print the results
print "Trained model in {:.4f} seconds".format(end - start)
def predict_labels(clf, features, target):
''' Makes predictions using a fit classifier based on F1 score. '''
# Start the clock, make predictions, then stop the clock
start = time()
y_pred = clf.predict(features)
end = time()
# Print and return results
print "Made predictions in {:.4f} seconds.".format(end - start)
return f1_score(target.values, y_pred, pos_label='yes')
def train_predict(clf, X_train, y_train, X_test, y_test):
''' Train and predict using a classifer based on F1 score. '''
# Indicate the classifier and the training set size
print "Training a {} using a training set size of {}. . .".format(clf.__class__.__name__, len(X_train))
# Train the classifier
train_classifier(clf, X_train, y_train)
# Print the results of prediction for both training and testing
print "F1 score for training set: {:.4f}.".format(predict_labels(clf, X_train, y_train))
print "F1 score for test set: {:.4f}.".format(predict_labels(clf, X_test, y_test))
"""
Explanation: Training and Evaluating Models
In this section, you will choose 3 supervised learning models that are appropriate for this problem and available in scikit-learn. You will first discuss the reasoning behind choosing these three models by considering what you know about the data and each model's strengths and weaknesses. You will then fit the model to varying sizes of training data (100 data points, 200 data points, and 300 data points) and measure the F<sub>1</sub> score. You will need to produce three tables (one for each model) that shows the training set size, training time, prediction time, F<sub>1</sub> score on the training set, and F<sub>1</sub> score on the testing set.
Question 2 - Model Application
List three supervised learning models that are appropriate for this problem. What are the general applications of each model? What are their strengths and weaknesses? Given what you know about the data, why did you choose these models to be applied?
Answer:
Since we need a binary answer, it is best to choose among methods commonly used for binary classification. I've chosen three learning models: K Nearest Neighbors, Logistic Regression and Support Vector Machine.
K Nearest Neighbors is one of the simplest ML algorithms:
Real-World Applications:
- Optical Character Recognition.
- One of the most popular algorithms for text categorization or text mining.
- Often used for simulating daily precipitation and other weather variables.
- Stock market forecasting.
Advantages:
- The cost of the learning process is zero.
- No assumptions about the characteristics of the concepts to learn have to be made.
- Complex concepts can be learned by local approximation using simple procedures.
Disadvantages:
- The model can not be interpreted (there is no description of the learned concepts).
- It is computationally expensive to find the k nearest neighbours when the dataset is very large.
- Performance depends on the number of dimensions that we have (curse of dimensionality).
Logistic Regression:
Real-World Applications:
- It is used widely in many fields, including predicting exam outcomes, voting preferences, and consumer likelihood of purchasing a product.
Advantages:
- It is very efficient in terms of time and memory requirement.
- It is not heavily affected by noise in the data.
Disadvantages:
- If we have lots of features and a small dataset, it doesn't perform well.
- It has trouble dealing with categorical features.
Support Vector Machine:
Real-World Applications:
- Text (and hypertext) categorization.
- Image classification.
- Bioinformatics (Protein classification, Cancer classification).
- Hand-written character recognition.
Advantages:
- It is a great algorithm for our dataset, as it relies heavily on boundary cases to build the separating curve, which means it can handle missing data for some "obvious" cases.
- It can efficiently handle large feature space.
- Do not rely on entire data.
Disadvantages:
- Not really our case, but it is not very efficient with a large number of observations.
- It is hard to find an appropriate kernel sometimes.
Reasons to choose these algorithms:
1. KNN: has the lowest computational cost, which suits our small dataset and the requirement to save money.
2. Logistic Regression: often used to perform exactly the same type of classification we need in our case. Also very efficient in time and memory.
3. SVM: it is great on a small dataset, because it relies only on boundary cases and can handle a big feature space efficiently (our case).
Setup
Run the code cell below to initialize three helper functions which you can use for training and testing the three supervised learning models you've chosen above. The functions are as follows:
- train_classifier - takes as input a classifier and training data and fits the classifier to the data.
- predict_labels - takes as input a fit classifier, features, and a target labeling and makes predictions using the F<sub>1</sub> score.
- train_predict - takes as input a classifier, and the training and testing data, and performs train_clasifier and predict_labels.
- This function will report the F<sub>1</sub> score for both the training and testing data separately.
End of explanation
"""
# Import the three supervised learning models from sklearn
# Logistic Regression
from sklearn.linear_model import LogisticRegression
# Support Vector Machine
from sklearn.svm import SVC
# KNN classifier
from sklearn.neighbors import KNeighborsClassifier
# Random State for each model consistent with splitting Random State = 0
randState = 0
# Initialize the three models
clf_A = LogisticRegression(random_state = randState)
clf_B = SVC(random_state = randState)
clf_C = KNeighborsClassifier()
classifiers = [clf_A, clf_B, clf_C]
# Set up the training set sizes
X_train_100 = X_train[:100]
y_train_100 = y_train[:100]
X_train_200 = X_train[:200]
y_train_200 = y_train[:200]
X_train_300 = X_train
y_train_300 = y_train
trainingData = [(X_train_100, y_train_100), (X_train_200, y_train_200), (X_train_300, y_train_300)]
# Execute the 'train_predict' function for each classifier and each training set size
for each in range(len(classifiers)):
print classifiers[each]
print "-------------------"
for data in trainingData:
train_predict(classifiers[each], data[0], data[1], X_test, y_test)
print
"""
Explanation: Implementation: Model Performance Metrics
With the predefined functions above, you will now import the three supervised learning models of your choice and run the train_predict function for each one. Remember that you will need to train and predict on each classifier for three different training set sizes: 100, 200, and 300. Hence, you should expect to have 9 different outputs below — 3 for each model using the varying training set sizes. In the following code cell, you will need to implement the following:
- Import the three supervised learning models you've discussed in the previous section.
- Initialize the three models and store them in clf_A, clf_B, and clf_C.
- Use a random_state for each model you use, if provided.
- Note: Use the default settings for each model — you will tune one specific model in a later section.
- Create the different training set sizes to be used to train each model.
- Do not reshuffle and resplit the data! The new training points should be drawn from X_train and y_train.
- Fit each model with each training set size and make predictions on the test set (9 in total).
Note: Three tables are provided after the following code cell which can be used to store your results.
End of explanation
"""
# Import 'GridSearchCV' and 'make_scorer'
from sklearn.metrics import make_scorer
from sklearn.grid_search import GridSearchCV
def gridSearch(clf, parameters):
# Make an f1 scoring function using 'make_scorer'
f1_scorer = make_scorer(f1_score, pos_label="yes")
# Perform grid search on the classifier using the f1_scorer as the scoring method
grid_obj = GridSearchCV(clf, parameters, scoring=f1_scorer)
# Fit the grid search object to the training data and find the optimal parameters
grid_obj = grid_obj.fit(X_train, y_train)
# Get the estimator
clf = grid_obj.best_estimator_
return clf
# Create the parameters list you wish to tune
svcParameters = [
{'C': [1, 10, 100], 'kernel': ['rbf'], 'gamma': ['auto']},
{'C': [1, 10, 100, 1000], 'kernel': ['linear', 'rbf', 'poly']}
]
knnParams = {'n_neighbors': [2, 3, 4, 5],
'weights': ['uniform', 'distance']}
regresParams = {'C': [0.5, 1.0, 10.0, 100.0],
'max_iter': [100, 1000],
'solver': ['sag', 'liblinear']}
randState = 0
classifiers = [gridSearch(SVC(random_state = randState), svcParameters),
gridSearch(KNeighborsClassifier(), knnParams),
gridSearch(LogisticRegression(random_state = randState), regresParams)]
# Report the final F1 score for training and testing after parameter tuning
# I've tested all three of them just out of curiosity, I've chosen Logistic Regression initially.
for clf in classifiers:
print clf
print "Tuned model has a training F1 score of {:.4f}.".format(predict_labels(clf, X_train, y_train))
print "Tuned model has a testing F1 score of {:.4f}.".format(predict_labels(clf, X_test, y_test))
print "-----------------"
"""
Explanation: Tabular Results
Edit the cell below to see how a table can be designed in Markdown. You can record your results from above in the tables provided.
Classifer 1 - Logistic Regression
| Training Set Size | Training Time | Prediction Time (test) | F1 Score (train) | F1 Score (test) |
| :---------------: | :---------------------: | :--------------------: | :--------------: | :-------------: |
| 100 | 0.0077 | 0.0012 | 0.8759 | 0.7500 |
| 200 | 0.0041 | 0.0005 | 0.8532 | 0.8085 |
| 300 | 0.0050 | 0.0002 | 0.8402 | 0.7770 |
Classifer 2 - Support Vector Machine
| Training Set Size | Training Time | Prediction Time (test) | F1 Score (train) | F1 Score (test) |
| :---------------: | :---------------------: | :--------------------: | :--------------: | :-------------: |
| 100 | 0.0016 | 0.0008 | 0.8919 | 0.7737 |
| 200 | 0.0036 | 0.0019 | 0.8795 | 0.8182 |
| 300 | 0.0080 | 0.0029 | 0.8664 | 0.8289 |
Classifer 3 - K Nearest Neighbors
| Training Set Size | Training Time | Prediction Time (test) | F1 Score (train) | F1 Score (test) |
| :---------------: | :---------------------: | :--------------------: | :--------------: | :-------------: |
| 100 | 0.0019 | 0.0024 | 0.8592 | 0.7910 |
| 200 | 0.0013 | 0.0016 | 0.8425 | 0.7941 |
| 300 | 0.0011 | 0.0039 | 0.8688 | 0.7714 |
Choosing the Best Model
In this final section, you will choose from the three supervised learning models the best model to use on the student data. You will then perform a grid search optimization for the model over the entire training set (X_train and y_train) by tuning at least one parameter to improve upon the untuned model's F<sub>1</sub> score.
Question 3 - Chosing the Best Model
Based on the experiments you performed earlier, in one to two paragraphs, explain to the board of supervisors what single model you chose as the best model. Which model is generally the most appropriate based on the available data, limited resources, cost, and performance?
Answer:
After the data stratification we immediately get better results from the SVM, but it takes much longer to predict.
The best model in terms of cost and efficiency for this data is probably Logistic Regression. Scores are consistent and the classifier works blazingly fast, while the difference in scores with the SVM is not huge.
Question 4 - Model in Layman's Terms
In one to two paragraphs, explain to the board of directors in layman's terms how the final model chosen is supposed to work. For example if you've chosen to use a decision tree or a support vector machine, how does the model go about making a prediction?
Answer:
Logistic Regression takes the data and tries to calculate the probability of occurrence of an event by fitting the data to a logistic curve. It takes available information about previous students and whether they passed or not, and then tries to calculate the probability that another student will pass the exams given the data.
Training:
As an example, if we wanted to predict the success of students in their lives based on their scores, we would develop something called a model using training data.
We have scores (the independent variable), and we also know whether a person succeeded or not (the dependent variable). We come up with some predictions and then look at how close our predictions were to the recorded data.
If we predicted 0.9 for someone who succeeded in life, and we are similarly close in all our other predictions, then we have a good model. If we missed the correct answers, we would try various models, minimizing the total prediction error (for logistic regression, this error is a quantity called the log loss), until we find the model that fits most closely with our recorded data.
Predicting:
Then we can plug other people's scores into this model and it returns a number between 0 and 1. If it is greater than 0.5, the person is predicted to be successful; if it is lower than 0.5, unsuccessful.
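The "logistic curve" mentioned above is the sigmoid function. As a small illustrative sketch (not part of the scikit-learn pipeline used in this notebook), it squashes any real-valued score into a probability between 0 and 1:

```python
import numpy as np

def sigmoid(z):
    # Maps any real-valued score z into the open interval (0, 1).
    return 1.0 / (1.0 + np.exp(-z))
```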
Implementation: Model Tuning
Fine tune the chosen model. Use grid search (GridSearchCV) with at least one important parameter tuned with at least 3 different values. You will need to use the entire training set for this. In the code cell below, you will need to implement the following:
- Import sklearn.grid_search.gridSearchCV and sklearn.metrics.make_scorer.
- Create a dictionary of parameters you wish to tune for the chosen model.
- Example: parameters = {'parameter' : [list of values]}.
- Initialize the classifier you've chosen and store it in clf.
- Create the F<sub>1</sub> scoring function using make_scorer and store it in f1_scorer.
- Set the pos_label parameter to the correct value!
- Perform grid search on the classifier clf using f1_scorer as the scoring method, and store it in grid_obj.
- Fit the grid search object to the training data (X_train, y_train), and store it in grid_obj.
End of explanation
"""
!pip install git+https://github.com/openai/baselines >/dev/null
!pip install gym >/dev/null
"""
Explanation: <a href="https://colab.research.google.com/github/DJCordhose/ai/blob/master/notebooks/rl/berater-v11.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Berater Environment v11
Changes from v10
configure a custom network that allows training to almost perfection
score method for Baseline
Installation (required for colab)
End of explanation
"""
import numpy as np
import random
import gym
from gym.utils import seeding
from gym import spaces
def state_name_to_int(state):
state_name_map = {
'S': 0,
'A': 1,
'B': 2,
'C': 3,
'D': 4,
'E': 5,
'F': 6,
'G': 7,
'H': 8,
'K': 9,
'L': 10,
'M': 11,
'N': 12,
'O': 13
}
return state_name_map[state]
def int_to_state_name(state_as_int):
state_map = {
0: 'S',
1: 'A',
2: 'B',
3: 'C',
4: 'D',
5: 'E',
6: 'F',
7: 'G',
8: 'H',
9: 'K',
10: 'L',
11: 'M',
12: 'N',
13: 'O'
}
return state_map[state_as_int]
class BeraterEnv(gym.Env):
"""
The Berater Problem
Actions:
There are 4 discrete deterministic actions, each choosing one direction
"""
metadata = {'render.modes': ['ansi']}
showStep = False
showDone = True
envEpisodeModulo = 100
def __init__(self):
# self.map = {
# 'S': [('A', 100), ('B', 400), ('C', 200 )],
# 'A': [('B', 250), ('C', 400), ('S', 100 )],
# 'B': [('A', 250), ('C', 250), ('S', 400 )],
# 'C': [('A', 400), ('B', 250), ('S', 200 )]
# }
self.map = {
'S': [('A', 300), ('B', 100), ('C', 200 )],
'A': [('S', 300), ('B', 100), ('E', 100 ), ('D', 100 )],
'B': [('S', 100), ('A', 100), ('C', 50 ), ('K', 200 )],
'C': [('S', 200), ('B', 50), ('M', 100 ), ('L', 200 )],
'D': [('A', 100), ('F', 50)],
'E': [('A', 100), ('F', 100), ('H', 100)],
'F': [('D', 50), ('E', 100), ('G', 200)],
'G': [('F', 200), ('O', 300)],
'H': [('E', 100), ('K', 300)],
'K': [('B', 200), ('H', 300)],
'L': [('C', 200), ('M', 50)],
'M': [('C', 100), ('L', 50), ('N', 100)],
'N': [('M', 100), ('O', 100)],
'O': [('N', 100), ('G', 300)]
}
max_paths = 4
self.action_space = spaces.Discrete(max_paths)
positions = len(self.map)
# observations: position, reward of all 4 local paths, rest reward of all locations
# non existing path is -1000 and no position change
# look at what #getObservation returns if you are confused
low = np.append(np.append([0], np.full(max_paths, -1000)), np.full(positions, 0))
high = np.append(np.append([positions - 1], np.full(max_paths, 1000)), np.full(positions, 1000))
self.observation_space = spaces.Box(low=low,
high=high,
dtype=np.float32)
self.reward_range = (-1, 1)
self.totalReward = 0
self.stepCount = 0
self.isDone = False
self.envReward = 0
self.envEpisodeCount = 0
self.envStepCount = 0
self.reset()
self.optimum = self.calculate_customers_reward()
def seed(self, seed=None):
self.np_random, seed = seeding.np_random(seed)
return [seed]
def iterate_path(self, state, action):
paths = self.map[state]
if action < len(paths):
return paths[action]
else:
# sorry, no such action, stay where you are and pay a high penalty
return (state, 1000)
def step(self, action):
destination, cost = self.iterate_path(self.state, action)
lastState = self.state
customerReward = self.customer_reward[destination]
reward = (customerReward - cost) / self.optimum
self.state = destination
self.customer_visited(destination)
done = destination == 'S' and self.all_customers_visited()
stateAsInt = state_name_to_int(self.state)
self.totalReward += reward
self.stepCount += 1
self.envReward += reward
self.envStepCount += 1
if self.showStep:
print( "Episode: " + ("%4.0f " % self.envEpisodeCount) +
" Step: " + ("%4.0f " % self.stepCount) +
lastState + ' --' + str(action) + '-> ' + self.state +
' R=' + ("% 2.2f" % reward) + ' totalR=' + ("% 3.2f" % self.totalReward) +
' cost=' + ("%4.0f" % cost) + ' customerR=' + ("%4.0f" % customerReward) + ' optimum=' + ("%4.0f" % self.optimum)
)
if done and not self.isDone:
self.envEpisodeCount += 1
if BeraterEnv.showDone:
episodes = BeraterEnv.envEpisodeModulo
if (self.envEpisodeCount % BeraterEnv.envEpisodeModulo != 0):
episodes = self.envEpisodeCount % BeraterEnv.envEpisodeModulo
print( "Done: " +
("episodes=%6.0f " % self.envEpisodeCount) +
("avgSteps=%6.2f " % (self.envStepCount/episodes)) +
("avgTotalReward=% 3.2f" % (self.envReward/episodes) )
)
if (self.envEpisodeCount%BeraterEnv.envEpisodeModulo) == 0:
self.envReward = 0
self.envStepCount = 0
self.isDone = done
observation = self.getObservation(stateAsInt)
info = {"from": self.state, "to": destination}
return observation, reward, done, info
def getObservation(self, position):
result = np.array([ position,
self.getPathObservation(position, 0),
self.getPathObservation(position, 1),
self.getPathObservation(position, 2),
self.getPathObservation(position, 3)
],
dtype=np.float32)
all_rest_rewards = list(self.customer_reward.values())
result = np.append(result, all_rest_rewards)
return result
def getPathObservation(self, position, path):
source = int_to_state_name(position)
paths = self.map[self.state]
if path < len(paths):
target, cost = paths[path]
reward = self.customer_reward[target]
result = reward - cost
else:
result = -1000
return result
def customer_visited(self, customer):
self.customer_reward[customer] = 0
def all_customers_visited(self):
return self.calculate_customers_reward() == 0
def calculate_customers_reward(self):
sum = 0
for value in self.customer_reward.values():
sum += value
return sum
def modulate_reward(self):
number_of_customers = len(self.map) - 1
number_per_consultant = int(number_of_customers/2)
# number_per_consultant = int(number_of_customers/1.5)
self.customer_reward = {
'S': 0
}
for customer_nr in range(1, number_of_customers + 1):
self.customer_reward[int_to_state_name(customer_nr)] = 0
# every consultant only visits a few random customers
samples = random.sample(range(1, number_of_customers + 1), k=number_per_consultant)
key_list = list(self.customer_reward.keys())
for sample in samples:
self.customer_reward[key_list[sample]] = 1000
def reset(self):
self.totalReward = 0
self.stepCount = 0
self.isDone = False
self.modulate_reward()
self.state = 'S'
return self.getObservation(state_name_to_int(self.state))
def render(self):
print(self.customer_reward)
env = BeraterEnv()
print(env.reset())
print(env.customer_reward)
"""
Explanation: Environment
End of explanation
"""
BeraterEnv.showStep = True
BeraterEnv.showDone = True
env = BeraterEnv()
print(env)
observation = env.reset()
print(observation)
for t in range(1000):
action = env.action_space.sample()
observation, reward, done, info = env.step(action)
if done:
print("Episode finished after {} timesteps".format(t+1))
break
env.close()
print(observation)
"""
Explanation: Try out Environment
End of explanation
"""
from copy import deepcopy
import json
class Baseline():
def __init__(self, env, max_reward, verbose=1):
self.env = env
self.max_reward = max_reward
self.verbose = verbose
self.reset()
def reset(self):
self.map = self.env.map
self.rewards = self.env.customer_reward.copy()
def as_string(self, state):
        # reward/cost does not hurt, but is useless; path obscures same state
new_state = {
'rewards': state['rewards'],
'position': state['position']
}
return json.dumps(new_state, sort_keys=True)
def is_goal(self, state):
if state['position'] != 'S': return False
for reward in state['rewards'].values():
if reward != 0: return False
return True
def expand(self, state):
states = []
for position, cost in self.map[state['position']]:
new_state = deepcopy(state)
new_state['position'] = position
new_state['rewards'][position] = 0
reward = state['rewards'][position]
new_state['reward'] += reward
new_state['cost'] += cost
new_state['path'].append(position)
states.append(new_state)
return states
def search(self, root, max_depth = 25):
closed = set()
open = [root]
while open:
state = open.pop(0)
if self.as_string(state) in closed: continue
closed.add(self.as_string(state))
depth = len(state['path'])
if depth > max_depth:
if self.verbose > 0:
print("Visited:", len(closed))
print("Reached max depth, without reaching goal")
return None
if self.is_goal(state):
scaled_reward = (state['reward'] - state['cost']) / self.max_reward
state['scaled_reward'] = scaled_reward
if self.verbose > 0:
print("Scaled reward:", scaled_reward)
print("Perfect path", state['path'])
return state
expanded = self.expand(state)
open += expanded
# make this best first
open.sort(key=lambda state: state['cost'])
def find_optimum(self):
initial_state = {
'rewards': self.rewards.copy(),
'position': 'S',
'reward': 0,
'cost': 0,
'path': ['S']
}
return self.search(initial_state)
def benchmark(self, model, sample_runs=100):
self.verbose = 0
BeraterEnv.showStep = False
BeraterEnv.showDone = False
perfect_rewards = []
model_rewards = []
for run in range(sample_runs):
observation = self.env.reset()
self.reset()
optimum_state = self.find_optimum()
perfect_rewards.append(optimum_state['scaled_reward'])
state = np.zeros((1, 2*128))
dones = np.zeros((1))
for t in range(1000):
actions, _, state, _ = model.step(observation, S=state, M=dones)
observation, reward, done, info = self.env.step(actions[0])
if done:
break
            model_rewards.append(self.env.totalReward)
return perfect_rewards, model_rewards
def score(self, model, sample_runs=100):
        perfect_rewards, model_rewards = self.benchmark(model, sample_runs=sample_runs)
perfect_score_mean, perfect_score_std = np.array(perfect_rewards).mean(), np.array(perfect_rewards).std()
test_score_mean, test_score_std = np.array(model_rewards).mean(), np.array(model_rewards).std()
return perfect_score_mean, perfect_score_std, test_score_mean, test_score_std
"""
Explanation: Baseline
End of explanation
"""
!rm -r logs
!mkdir logs
!mkdir logs/berater
import tensorflow as tf
tf.logging.set_verbosity(tf.logging.ERROR)
print(tf.__version__)
"""
Explanation: Train model
Estimation
* total cost when travelling all paths (back and forth): 2500
* all rewards: 6000
* but: rewards are much more sparse while routes stay the same, maybe expect less
* estimate: no illegal moves and somewhere between
  * half the travel cost: (6000 - 1250) / 6000 = .79
  * and full travel cost: (6000 - 2500) / 6000 = 0.58
* additionally: the agent only sees very little of the whole scenario
  * which changes with every episode
  * this was ok when the network could learn a fixed scenario
End of explanation
"""
# copied from https://github.com/openai/baselines/blob/master/baselines/a2c/utils.py
def ortho_init(scale=1.0):
def _ortho_init(shape, dtype, partition_info=None):
#lasagne ortho init for tf
shape = tuple(shape)
if len(shape) == 2:
flat_shape = shape
elif len(shape) == 4: # assumes NHWC
flat_shape = (np.prod(shape[:-1]), shape[-1])
else:
raise NotImplementedError
a = np.random.normal(0.0, 1.0, flat_shape)
u, _, v = np.linalg.svd(a, full_matrices=False)
q = u if u.shape == flat_shape else v # pick the one with the correct shape
q = q.reshape(shape)
return (scale * q[:shape[0], :shape[1]]).astype(np.float32)
return _ortho_init
def fc(x, scope, nh, *, init_scale=1.0, init_bias=0.0):
with tf.variable_scope(scope):
nin = x.get_shape()[1].value
w = tf.get_variable("w", [nin, nh], initializer=ortho_init(init_scale))
b = tf.get_variable("b", [nh], initializer=tf.constant_initializer(init_bias))
return tf.matmul(x, w)+b
# copied from https://github.com/openai/baselines/blob/master/baselines/common/models.py#L31
def mlp(num_layers=2, num_hidden=64, activation=tf.tanh, layer_norm=False):
"""
Stack of fully-connected layers to be used in a policy / q-function approximator
Parameters:
----------
num_layers: int number of fully-connected layers (default: 2)
num_hidden: int size of fully-connected layers (default: 64)
activation: activation function (default: tf.tanh)
Returns:
-------
function that builds fully connected network with a given input tensor / placeholder
"""
def network_fn(X):
# print('network_fn called')
# Tensor("ppo2_model_4/Ob:0", shape=(1, 19), dtype=float32)
# Tensor("ppo2_model_4/Ob_1:0", shape=(512, 19), dtype=float32)
# print (X)
h = tf.layers.flatten(X)
for i in range(num_layers):
h = fc(h, 'mlp_fc{}'.format(i), nh=num_hidden, init_scale=np.sqrt(2))
if layer_norm:
h = tf.contrib.layers.layer_norm(h, center=True, scale=True)
h = activation(h)
# Tensor("ppo2_model_4/pi/Tanh_2:0", shape=(1, 500), dtype=float32)
# Tensor("ppo2_model_4/pi_2/Tanh_2:0", shape=(512, 500), dtype=float32)
# print(h)
return h
return network_fn
"""
Explanation: Step 1: Extract MLP builder from openai sources
End of explanation
"""
# first the dense layer
def mlp(num_layers=2, num_hidden=64, activation=tf.tanh, layer_norm=False):
def network_fn(X):
h = tf.layers.flatten(X)
for i in range(num_layers):
h = tf.layers.dense(h, units=num_hidden, kernel_initializer=ortho_init(np.sqrt(2)))
# h = fc(h, 'mlp_fc{}'.format(i), nh=num_hidden, init_scale=np.sqrt(2))
if layer_norm:
h = tf.contrib.layers.layer_norm(h, center=True, scale=True)
h = activation(h)
return h
return network_fn
# then initializer, relu activations
def mlp(num_layers=2, num_hidden=64, activation=tf.nn.relu, layer_norm=False):
def network_fn(X):
h = tf.layers.flatten(X)
for i in range(num_layers):
h = tf.layers.dense(h, units=num_hidden, kernel_initializer=tf.initializers.glorot_uniform(seed=17))
if layer_norm:
# h = tf.layers.batch_normalization(h, center=True, scale=True)
h = tf.contrib.layers.layer_norm(h, center=True, scale=True)
h = activation(h)
return h
return network_fn
%%time
# https://github.com/openai/baselines/blob/master/baselines/deepq/experiments/train_pong.py
# log_dir = logger.get_dir()
log_dir = '/content/logs/berater/'
import gym
from baselines import bench
from baselines import logger
from baselines.common.vec_env.dummy_vec_env import DummyVecEnv
from baselines.common.vec_env.vec_monitor import VecMonitor
from baselines.ppo2 import ppo2
BeraterEnv.showStep = False
BeraterEnv.showDone = False
env = BeraterEnv()
wrapped_env = DummyVecEnv([lambda: BeraterEnv()])
monitored_env = VecMonitor(wrapped_env, log_dir)
# https://github.com/openai/baselines/blob/master/baselines/ppo2/ppo2.py
# https://github.com/openai/baselines/blob/master/baselines/common/models.py#L30
# https://arxiv.org/abs/1607.06450 for layer_norm
# lr linear from lr=1e-2 to lr=1e-4 (default lr=3e-4)
def lr_range(frac):
# we get the remaining updates between 1 and 0
start_lr = 1e-2
end_lr = 1e-4
diff_lr = start_lr - end_lr
lr = end_lr + diff_lr * frac
return lr
network = mlp(num_hidden=500, num_layers=3, layer_norm=True)
model = ppo2.learn(
env=monitored_env,
network=network,
lr=lr_range,
gamma=1.0,
ent_coef=0.05,
total_timesteps=1000000)
# model = ppo2.learn(
# env=monitored_env,
# network='mlp',
# num_hidden=500,
# num_layers=3,
# layer_norm=True,
# lr=lr_range,
# gamma=1.0,
# ent_coef=0.05,
# total_timesteps=500000)
# model.save('berater-ppo-v11.pkl')
monitored_env.close()
"""
Explanation: Step 2: Replace exotic parts
Steps:
1. Low level matmul replaced with dense layer (no need for custom code here)
 * https://www.tensorflow.org/api_docs/python/tf/layers
 * https://www.tensorflow.org/api_docs/python/tf/layers/Dense
2. Initializer changed to best-practice Glorot uniform, but it does not give reliable results, so use a seed
3. Use relu activations (should train faster)
4. Standard batch normalization does not train with any configuration (no idea why), so we need to keep layer normalization
5. Dropout and L2 would also be nice, but are not easy to do within the boundaries of the OpenAI framework: https://stackoverflow.com/questions/38292760/tensorflow-introducing-both-l2-regularization-and-dropout-into-the-network-do
Alternative: Using Keras API
Not done here, as no big benefit expected and would need to be integrated into surrounding low level tensorflow model. Need to reuse session. If you want to do this, be sure to check at least the first link
using Keras within TensorFlow model: https://blog.keras.io/keras-as-a-simplified-interface-to-tensorflow-tutorial.html
https://stackoverflow.com/questions/46790506/calling-a-keras-model-on-a-tensorflow-tensor-but-keep-weights
https://www.tensorflow.org/api_docs/python/tf/get_default_session
https://www.tensorflow.org/api_docs/python/tf/keras/backend/set_session
End of explanation
"""
# !ls -l $log_dir
from baselines.common import plot_util as pu
results = pu.load_results(log_dir)
import matplotlib.pyplot as plt
import numpy as np
r = results[0]
plt.ylim(0, .75)
# plt.plot(np.cumsum(r.monitor.l), r.monitor.r)
plt.plot(np.cumsum(r.monitor.l), pu.smooth(r.monitor.r, radius=100))
"""
Explanation: Visualizing Results
https://github.com/openai/baselines/blob/master/docs/viz/viz.ipynb
End of explanation
"""
import numpy as np
observation = env.reset()
env.render()
baseline = Baseline(env, max_reward=6000)
state = np.zeros((1, 2*128))
dones = np.zeros((1))
BeraterEnv.showStep = True
BeraterEnv.showDone = False
for t in range(1000):
actions, _, state, _ = model.step(observation, S=state, M=dones)
observation, reward, done, info = env.step(actions[0])
if done:
print("Episode finished after {} timesteps, reward={}".format(t+1, env.totalReward))
break
env.close()
%time baseline.find_optimum()
"""
Explanation: Enjoy model
End of explanation
"""
baseline = Baseline(env, max_reward=6000)
perfect_score_mean, perfect_score_std, test_score_mean, test_score_std = baseline.score(model, sample_runs=100)
# perfect scores
perfect_score_mean, perfect_score_std
# test scores for our model
test_score_mean, test_score_std
"""
Explanation: Evaluation
End of explanation
"""
import numpy as np
import MDAnalysis
import sympy as sp
"""
Explanation:
End of explanation
"""
def gk_heat_current(kb, volume, temperature, integrated_heat_current):
gk_hc = 1 / (3 * volume * kb * temperature**2) * integrated_heat_current
return gk_hc
"""
Explanation: Micro to macroscopic heat transfer
Fourier's law relates the macroscopic heat current to the temperature gradient:
$J=-\lambda\nabla T$
where we find $\lambda$ using the Green-Kubo formula:
$\lambda= \frac{1}{3VK_{B}T^2} \int_{0}^{\infty} \langle j(0) \cdot j(t) \rangle \, \mathrm{d}t$
For the integration component, we're going to preprocess our trajectory into the time-stamped microscopic heat currents, $j$, and use a numerical method to perform the integration.
End of explanation
"""
def site_energy(mass, velocities, lj_potentials):
se = 0.5 * (mass * abs(velocities)**2 + np.sum(lj_potentials, axis=0))
return se
"""
Explanation: I'm using the lowercase $j$ to signify the microscopic heat current:
$j(t) = \frac{d}{dt} \sum_{i=1}^N r_i e_i $
$r_i$ is the atomic position and $e_i$ is the microscopic site energy. The microscopic site energy is a combination of kinetic and potential energies:
$e_i = \frac{1}{2} [ m_i |v_i|^2 + \sum_{j}{\phi(r_{ij})}]$
End of explanation
"""
def lj_potential(distance, epsilon, sigma):
lj = 4 * epsilon * ((sigma/distance)**12 - (sigma/distance)**6)
np.nan_to_num(lj, copy=False)
return lj
"""
Explanation: where $v_i$ is the velocity of particle $i$ and $\phi$ is the interatomic, Lennard Jones potential:
$\phi(r_{ij}) = 4 \epsilon [( \frac{\sigma}{r_{ij}} )^{12} - ( \frac{\sigma}{r_{ij}} )^6]$
End of explanation
"""
def microscopic_heat_current(site_energies, velocities, distances, forces):
micro_hc = np.sum(velocities * site_energies) + 0.5 * np.sum(np.sum(velocities *
np.nan_to_num(forces)) * distances)
return micro_hc
"""
Explanation: where $r_{ij}$ are the interatomic distances. This simplifies to the final experession:
$j(t) = \sum_{i}{v_{i} e_{i}} + \frac{1}{2} \sum_{i}\sum_{j>i}[{(v_{j}+v_{i}) \cdot F_{ij}]r_{ij}}$
where $F_{ij}$ is the force exerted on atom $i$ from its LJ interaction with atom $j$: $F_{ij} = -\frac{\partial \phi}{\partial r_{ij}}$
End of explanation
"""
# before we start, let's define some variables
kb = 0.0083144621 #KJ/mol K
sigma = .34 #nm
epsilon = 0.99774 #KJ/mol
mass = 39.948 #AU
timestep = 0.002 # in ps
# should the timestep be the md timestep
# or should the xtc write frequency be
# taken into account?
u = MDAnalysis.Universe("argon.pdb", "traj.trr")
u.atoms[u.atoms.types == 'A'].masses = mass
pdbtrj = "argon.pdb"
heat_current = []
with MDAnalysis.Writer(pdbtrj, multiframe=True, bonds=None, n_atoms=u.atoms.n_atoms) as PDB:
i = 0
for ts in u.trajectory[:10]:
pairwise_distances = np.zeros((u.atoms.n_atoms,u.atoms.n_atoms))
pairwise_velocities = np.zeros((u.atoms.n_atoms,u.atoms.n_atoms))
for atom_a in range(u.atoms.n_atoms):
for atom_b in range(atom_a + 1 , u.atoms.n_atoms):
# calculate distances
distance = np.sqrt(np.sum((u.atoms[atom_a].position -
u.atoms[atom_b].position)**2))/10 #A to nm
pairwise_distances[atom_a,atom_b] = distance
# calculate d(r_ij) / dt
relative_velocity = np.sqrt(np.sum((u.atoms[atom_a].velocity -
u.atoms[atom_b].velocity)**2))
pairwise_velocities[atom_a,atom_b] = relative_velocity
#compute velocities
velocities = np.sqrt(np.sum(u.atoms.velocities**2, axis=1))
#compute forces
forces = np.sqrt(np.sum(u.atoms.forces**2, axis=1))
# 1) compute LJ
lj_potentials = lj_potential(pairwise_distances, epsilon, sigma)
# 2) compute site energy
site_energies = site_energy(mass, velocities, lj_potentials)
if i > 0:
pairwise_forces = -(lj_potentials -
previous_lj_potentials) / (pairwise_distances -
previous_pairwise_distances)
# 3) compute micro heat current
micro_hc = microscopic_heat_current(site_energies, pairwise_velocities,
pairwise_distances, pairwise_forces)
heat_current.append(micro_hc)
previous_lj_potentials = lj_potentials
previous_pairwise_distances = pairwise_distances
PDB.write(u.atoms)
i += 1
if i < 10:
print("Frame {}: "
"min/max velocities: {:.1f}...{:.1f} nm/ps".format(ts.frame,
velocities.min(),
velocities.max()))
print("Frame {}: "
"min/max lj_potentials: {:.1f}...{:.1f} KJ/mol".format(ts.frame,
np.nanmin(lj_potentials),
np.nanmax(lj_potentials)))
if 10 > i > 1:
print("Frame {}: "
"micro heat current: {:.1f} KJ".format(ts.frame, micro_hc))
print("Wrote PDB trajectory for Argon with distances in bfactor field")
"""
Explanation: Calculating in MDAnalysis
There are $\sum_{i=1}^{N-1}{i}$ pairwise distances and $N$ velocities we need to calculate to compute the microscopic energy of every atom in our system
End of explanation
"""
u.atoms[100].position
print(pairwise_velocities[:3])
print(pairwise_forces[:3])
print(pairwise_distances[:3])
# np.save("argon_heat_currents_2.npy", heat_current)
hc = np.load("argon_heat_currents_2.npy")
# hc = heat_current
import matplotlib.pyplot as plt
plt.plot(hc)
"""
Explanation: Unit Conversions
The following are from the GROMACS manual:
| Quanitity | Symbol | Units |
|---|---|---|
| velocity | v | $nm \space ps^{-1} \space or \space 1000 \space m s^{-1}$ |
| length | r | $10^{-9} m$ |
| mass | m | $ 1.660 538 921 x 10^{-27} kg $ |
| time | t | $ 10^{-12} s $ |
| temperature | K | $K$ |
| force | F | $KJ \space mol^{-1} \space nm^{-1}$ |
End of explanation
"""
# create timestep array (ps)
# timestep is dt * nstxout
# 0.002 * 100
dt_array = np.arange(0,1000,.2)
from scipy.integrate import simps
integrated_hc = simps(np.dot(hc[0], hc))#, dt_array)
box_length = 29.130 # A
volume = box_length**3 # A^3
temperature = 100 # K
# compute the thermal conductivity
# need to convert to W/M K
gk_heat_current(kb, volume, temperature, integrated_hc)
"""
Explanation: We have the heat currents, now we need to compute the integral
$\lambda= \frac{1}{3VK_{B}T^2} \int_{0}^{\infty} \langle j(0) \cdot j(t) \rangle \, \mathrm{d}t$
End of explanation
"""
from phidl import Device, quickplot as qp
import phidl.geometry as pg
import phidl.routing as pr
# Use pg.compass() to make 2 boxes with North/South/East/West ports
D = Device()
c1 = D << pg.compass()
c2 = D << pg.compass().move([10,5]).rotate(15)
# Connect the East port of one box to the West port of the other
R = pr.route_quad(c1.ports['E'], c2.ports['W'],
width1 = None, width2 = None, # width = None means use Port width
layer = 2)
qp([R,D])
"""
Explanation: Routing
Often when creating a design, you need to connect geometries together with wires or waveguides. To help with that, PHIDL has the phidl.routing (pr) module, which can flexibly and quickly create routes between ports.
Simple quadrilateral routes
In general, a route is a polygon used to connect two ports. Simple routes are easy to create using pr.route_quad(). This function returns a quadrilateral route that directly connects two ports, as shown in this example:
End of explanation
"""
from phidl import Device, quickplot as qp
import phidl.geometry as pg
import phidl.routing as pr
# Use pg.compass() to make 4 boxes with North/South/East/West ports
D = Device()
smooth1 = D << pg.compass([4,15])
smooth2 = D << pg.compass([15,4]).move([35,35])
sharp1 = D << pg.compass([4,15]).movex(50)
sharp2 = D << pg.compass([15,4]).move([35,35]).movex(50)
# Connect the South port of one box to the West port of the other
R1 = pr.route_smooth(smooth1.ports['S'], smooth2.ports['W'], radius=8, layer = 2)
R2 = pr.route_sharp( sharp1.ports['S'], sharp2.ports['W'], layer = 2)
qp([D, R1, R2])
"""
Explanation: Automatic manhattan routing
In many cases, we need to draw wires or waveguides between two objects, and we'd prefer not to have to hand-calculate all the points. In these instances, we can use the automatic routing functions pr.route_smooth() and pr.route_sharp().
These functions allow you to route along a Path, and come with built-in options that let you control the shape of the path and how to extrude it. If you don't need detailed control over the path your route takes, you can let these functions create an automatic manhattan route by leaving the default path_type='manhattan'. Just make sure the ports you're routing between face parallel or orthogonal directions.
End of explanation
"""
from phidl import CrossSection
import phidl.routing as pr
# Create input ports
port1 = D.add_port(name='smooth1', midpoint=(40, 0), width=5, orientation=180)
port2 = D.add_port(name='smooth2', midpoint=(0, -40), width=5, orientation=270)
# (left) Setting width to a constant
D1 = pr.route_smooth(port1, port2, width = 2, radius=10, layer = 0)
# (middle) Setting width to a 2-element list to linearly vary the width
D2 = pr.route_smooth(port1, port2, width = [7, 1.5], radius=10, layer = 1)
# (right) Setting width to a CrossSection
X = CrossSection()
X.add(width=1, layer=4)
X.add(width=2.5, offset = 3, layer = 5)
X.add(width=2.5, offset = -3, layer = 5)
D3 = pr.route_smooth(port1, port2, width = X, radius=10)
qp([D1, D2.movex(50), D3.movex(100)])
"""
Explanation: Customized widths / cross-sections
By default, route functions such as route_sharp() and route_smooth() will connect one port to another with polygonal paths that are as wide as the ports are. However, you can override this by setting the width parameter in the same way as extrude():
If set to a single number (e.g. width=1.7): makes a fixed-width extrusion
If set to a 2-element array (e.g. width=[1.8,2.5]): makes an extrusion whose width varies linearly from width[0] to width[1]
If set to a CrossSection: uses the CrossSection parameters for extrusion
End of explanation
"""
from phidl import CrossSection
import phidl.path as pp
D = Device()
port1 = D.add_port(name=1, midpoint=(40,0), width=5, orientation=180)
port2 = D.add_port(name=2, midpoint=(0, -40), width=5, orientation=270)
# Step 1: Calculate waypoint path
route_path = pr.path_manhattan(port1, port2, radius=10)
# Step 2: Smooth waypoint path
smoothed_path = pp.smooth(route_path, radius=10, use_eff=True)
# Step 3: Extrude path
D.add_ref(smoothed_path.extrude(width=5, layer=0))
qp([route_path,D])
"""
Explanation: Details of operation
The route_smooth() function works in three steps:
It calculates a waypoint Path using a waypoint path function -- such as pr.path_manhattan() -- set by the path_type.
It smooths out the waypoint Path using pp.smooth().
It extrudes the Path to create the route geometry.
The route_sharp() function works similarly, but it omits step 2 to create sharp bends. The extra smoothing makes route_smooth() particularly useful for photonic / microwave waveguides, whereas route_sharp() is typically more useful for electrical wiring.
To illustrate how these functions work, let's look at how you could manually implement a similar behaviour to route_smooth(path_type='manhattan'):
End of explanation
"""
D = Device()
port1 = D.add_port(name='smooth1', midpoint=(40,0), width=3, orientation=180)
port2 = D.add_port(name='smooth2', midpoint=(0, -40), width=3, orientation=270)
D.add_ref(pr.route_smooth(port1, port2, radius=10, smooth_options={'corner_fun': pp.arc, 'num_pts': 16}))
qp(D)
"""
Explanation: We can even customize the bends produced by pr.route_smooth() using the smooth_options, which are passed to pp.smooth(), such as controlling the corner-smoothing function or changing the number of points that will be rendered:
End of explanation
"""
import phidl.geometry as pg
D = Device()
Obstacle = D.add_ref(pg.rectangle(size=(30,15), layer=1)).move((10, -25))
port1 = D.add_port(name=1, midpoint=(40,0), width=5, orientation=180)
port2 = D.add_port(name=2, midpoint=(0, -40), width=5, orientation=270)
D.add_ref(pr.route_smooth(port1, port2, radius=10, path_type='manhattan'))
qp(D)
"""
Explanation: Customizing route paths
Avoiding obstacles
Sometimes, automatic routes will run into obstacles in your layout, like this:
End of explanation
"""
D = Device()
Obstacle = D.add_ref(pg.rectangle(size=(30,15), layer=1))
Obstacle.move((10, -25))
port1 = D.add_port(name=1, midpoint=(40,0), width=5, orientation=180)
port2 = D.add_port(name=2, midpoint=(0, -40), width=5, orientation=270)
D.add_ref(pr.route_smooth(port1, port2, radius=10, path_type='J', length1=60, length2=20))
qp(D)
"""
Explanation: Example 1: Custom J paths. To avoid the other device, we need to customize the path our route takes. Luckily, PHIDL provides several waypoint path functions to help us do that quickly. Each of these waypoint path functions has a name of the form pr.path_*** (e.g. pr.path_L()), and generates a particular path type with its own shape. All the available path types are described in detail below and in the Geometry Reference. In this case, we want to connect two orthogonal ports, but the ports are positioned such that we can't connect them with a single 90-degree turn. A J-shaped path with four line segments and three turns is perfect for this problem. We can tell route_smooth to use pr.path_J() as its waypoint path function via the argument path_type='J'.
End of explanation
"""
D = Device()
Obstacle = D.add_ref(pg.rectangle(size=(20,30),layer=1))
Obstacle.move((10, -25))
port1 = D.add_port(name=1, midpoint=(0,0), width=5, orientation=180)
port2 = D.add_port(name=2, midpoint=(40, -5), width=5, orientation=0)
D.add_ref(pr.route_sharp(port1, port2, path_type='manhattan'))
qp(D)
"""
Explanation: For ease of use, the waypoint path functions are parameterized in terms of relative distances from ports. Above, we had to define the keyword arguments length1 and length2, which are passed to pr.path_J() for the waypoint path calculation. These arguments length1 and length2 define the lengths of the line segments that exit port1 and port2 respectively (i.e. the first and last segments in the path). Once those first and last segments are set, path_J() completes the waypoint path with two more 90-degree turns. Note that just knowing length1 and length2, along with the port positions and orientations, is enough to completely determine the waypoint path.
Example 2: Custom C paths. Now consider this routing problem:
End of explanation
"""
D = Device()
Obstacle = D.add_ref(pg.rectangle(size=(20,30),layer=1))
Obstacle.move((10, -25))
port1 = D.add_port(name=1, midpoint=(0,0), width=5, orientation=180)
port2 = D.add_port(name=2, midpoint=(40, -5), width=5, orientation=0)
D.add_ref(pr.route_sharp(port1, port2, path_type='C', length1=10, length2=10, left1=-10))
qp(D)
"""
Explanation: In this case, we want a C path. C paths have three parameters we need to define:
+ length1 and length2, which are the lengths of the segments that exit port1 and port2 (similar to the J path), as well as
+ left1, which is the length of the segment that turns left from port1.
In this case, we would actually prefer that the path turns right after it comes out of port1, so that our route avoids the other device. To make that happen, we can just set left1<0:
End of explanation
"""
D = Device()
Obstacle = D.add_ref(pg.rectangle(size=(30,15),layer=1)).move((0, -20))
Obstacle2 = D.add_ref(pg.rectangle(size=(15,20), layer=1))
Obstacle2.xmax,Obstacle2.ymin = Obstacle.xmax, Obstacle.ymax
port1 = D.add_port(name=1, midpoint=(5, 0), width=5, orientation=0)
port2 = D.add_port(name=2, midpoint=(50, 0), width=5, orientation=270)
manual_path = [ port1.midpoint,
(Obstacle2.xmin-5, port1.y),
(Obstacle2.xmin-5, Obstacle2.ymax+5),
(Obstacle2.xmax+5, Obstacle2.ymax+5),
(Obstacle2.xmax+5, port2.y-5),
(port2.x, port2.y-5),
port2.midpoint ]
D.add_ref(pr.route_sharp(port1, port2, path_type='manual', manual_path=manual_path))
qp(D)
"""
Explanation: Example 3: Custom manual paths. For even more complex route problems, we can use path_type='manual' to create a route along an arbitrary path. In the example below, we use a manual path to route our way out of a sticky situation:
End of explanation
"""
D = Device()
#straight path
port1 = D.add_port(name='S1', midpoint=(-50, 0), width=4, orientation=90)
port2 = D.add_port(name='S2', midpoint=(-50, 50), width=4, orientation=270)
D.add_ref(pr.route_smooth(port1, port2, path_type='straight'))
#L path
port1 = D.add_port(name='L1', midpoint=(30,0), width=4, orientation=180)
port2 = D.add_port(name='L2', midpoint=(0, 50), width=4, orientation=270)
D.add_ref(pr.route_smooth(port1, port2, path_type='L'))
#U path
port1 = D.add_port(name='U1', midpoint=(50, 50), width=2, orientation=270)
port2 = D.add_port(name='U2', midpoint=(80,50), width=4, orientation=270)
D.add_ref(pr.route_smooth(port1, port2, radius=10, path_type='U', length1=50))
port1 = D.add_port(name='U3', midpoint=(50, 80), width=4, orientation=10)
port2 = D.add_port(name='U4', midpoint=(80, 130), width=4, orientation=190)
D.add_ref(pr.route_smooth(port1, port2, path_type='U', length1=20))
#J path
port1 = D.add_port(name='J1', midpoint=(100, 25), width=4, orientation=270)
port2 = D.add_port(name='J2', midpoint=(130, 50), width=4, orientation=180)
D.add_ref(pr.route_smooth(port1, port2, path_type='J', length1=25, length2=10))
port1 = D.add_port(name='J3', midpoint=(115, 105), width=5, orientation=270)
port2 = D.add_port(name='J4', midpoint=(131, 130), width=5, orientation=180)
D.add_ref(pr.route_smooth(port1, port2, path_type='J', length1=25, length2=30))
#C path
port1 = D.add_port(name='C1', midpoint=(180, 35), width=4, orientation=90)
port2 = D.add_port(name='C2', midpoint=(178, 15), width=4, orientation=270)
D.add_ref(pr.route_smooth(port1, port2, path_type='C', length1=15, left1=30, length2=15))
port1 = D.add_port(name='C3', midpoint=(150, 105), width=4, orientation=90)
port2 = D.add_port(name='C4', midpoint=(180, 105), width=4, orientation=270)
D.add_ref(pr.route_smooth(port1, port2, path_type='C', length1=25, left1=-15, length2=25))
port1 = D.add_port(name='C5', midpoint=(150, 170), width=4, orientation=0)
port2 = D.add_port(name='C6', midpoint=(175, 170), width=4, orientation=0)
D.add_ref(pr.route_smooth(port1, port2, path_type='C', length1=10, left1=10, length2=10, radius=4))
#V path
port1 = D.add_port(name='V1', midpoint=(200,50), width=5, orientation=284)
port2 = D.add_port(name='V2', midpoint=(230, 50), width=5, orientation=270-14)
D.add_ref(pr.route_smooth(port1, port2, path_type='V'))
#Z path
port1 = D.add_port(name='Z1', midpoint=(280,0), width=4, orientation=190)
port2 = D.add_port(name='Z2', midpoint=(250, 50), width=3, orientation=-10)
D.add_ref(pr.route_smooth(port1, port2, path_type='Z', length1=30, length2=40))
qp(D)
"""
Explanation: Note that to manually route between two ports, the first and last points in the manual_path should be the midpoints of the ports.
List of routing path types
PHIDL provides the following waypoint path types for routing:
| Path type | Routing style| Segments | Useful for ... | Parameters |
| ----------- | ----------- | ----------- | ----------- | ----------- |
| manhattan | Manhattan | 1-5 | parallel or orthogonal ports.| radius |
| straight | Manhattan | 1 | ports that point directly at each other.| -- |
| L | Manhattan | 2 | orthogonal ports that can be connected with one turn.| -- |
| U | Manhattan | 3 | parallel ports that face each other or same direction.| length1 |
| J | Manhattan | 4 | orthogonal ports that can't be connected with just one turn.| length1, length2 |
| C | Manhattan | 5 | parallel ports that face apart.| length1, length2, left1 |
| V | Free | 2 | ports at odd angles that face a common intersection point.| -- |
| Z | Free | 3 | ports at odd angles.| length1, length2 |
| manual | Free | -- | fully custom paths. | manual_path |
For more details on each path type, you can also look at the API Documentation or the Geometry Reference.
The path types can be classified by their routing style. Manhattan-style routing uses only 90-degree turns, and thus requires that you route between ports that are orthogonal or parallel (note that the ports don't necessarily have to point horizontally or vertically, though). For routing between ports at odd angles, you can use path types with a free routing style instead.
Most path types are named after letters that they resemble to help you remember them. However, as you'll see in the examples below, some of the more complicated paths can take a variety of shapes. One good way to identify which manhattan-style route type you need is to count the number of line segments and consult the above table.
End of explanation
"""
import numpy as np
set_quickplot_options(show_ports=False, show_subports=False)
D = Device()
pitch = 40
test_range=20
x_centers = np.arange(5)*pitch
y_centers = np.arange(3)*pitch
xoffset = np.linspace(-1*test_range, test_range, 5)
yoffset = np.linspace(-1*test_range, test_range, 3)
for xidx, x0 in enumerate(x_centers):
for yidx, y0 in enumerate(y_centers):
name = '{}{}'.format(xidx, yidx)
port1 = D.add_port(name=name+'1', midpoint=(x0, y0), width=5, orientation=0)
port2 = D.add_port(name=name+'2', midpoint=(x0+xoffset[xidx], y0+yoffset[yidx]),
width=5, orientation=90)
        D.add_ref(pr.route_smooth(port1, port2, path_type='manhattan'))
qp(D)
"""
Explanation: The manhattan path type is bending-radius aware and can produce any route necessary to connect two ports, as long as they are orthogonal or parallel.
End of explanation
"""
from phidl import Device, quickplot as qp
import phidl.routing as pr
import phidl.geometry as pg
# Create boxes with multiple North/South ports
D = Device()
c1 = D.add_ref( pg.compass_multi(ports={'N':3}) )
c2 = D.add_ref( pg.compass_multi(ports={'S':3}) ).move([6,6])
qp(D)
"""
Explanation: Simple XY wiring
Often one requires simple wiring between two existing objects with Ports. For this purpose, you can use the route_xy() function. It allows a simple string to specify the path the wiring will take. The argument directions = 'yxy' means "go 1 part in the Y direction, 1 part in the X direction, then 1 more part in the Y direction", with the understanding that 1 part in X = (total X distance from port1 to port2)/(total number of 'x' characters in directions), and likewise for Y.
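To make the "parts" arithmetic concrete, here is a hedged pure-Python sketch (a hypothetical helper, not part of PHIDL) that turns a directions string into waypoints:

```python
# Hypothetical helper: split the displacement from p1 to p2 into equal
# parts per axis, one part per character in the directions string.
def xy_waypoints(p1, p2, directions):
    dx = p2[0] - p1[0]
    dy = p2[1] - p1[1]
    step_x = dx / directions.count('x') if 'x' in directions else 0
    step_y = dy / directions.count('y') if 'y' in directions else 0
    x, y = p1
    points = [(x, y)]
    for d in directions:
        if d == 'x':
            x += step_x
        else:
            y += step_y
        points.append((x, y))
    return points

xy_waypoints((0, 0), (3, 4), 'yxy')
# [(0, 0), (0, 2.0), (3.0, 2.0), (3.0, 4.0)]
```

Each 'y' covers half of the total Y distance (two 'y' characters), while the single 'x' covers the full X distance.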
As an example, say we have two objects with multiple ports, and we want to route multiple wires between them without overlapping
End of explanation
"""
D.add_ref( pr.route_xy(port1 = c1.ports['N1'], port2 = c2.ports['S1'],
directions = 'yyyxy', width = 0.1, layer = 2) )
D.add_ref( pr.route_xy(port1 = c1.ports['N2'], port2 = c2.ports['S2'],
directions = 'yyxy', width = 0.2, layer = 2) )
D.add_ref( pr.route_xy(port1 = c1.ports['N3'], port2 = c2.ports['S3'],
directions = 'yxy', width = 0.4, layer = 2) )
qp(D)
"""
Explanation: We can then route_xy() between them and use different directions arguments to prevent them from overlapping:
End of explanation
"""
%load_ext autoreload
%autoreload 2
%matplotlib inline
#export
from exp.nb_08 import *
"""
Explanation: Optimizer tweaks
End of explanation
"""
path = datasets.untar_data(datasets.URLs.IMAGENETTE_160)
tfms = [make_rgb, ResizeFixed(128), to_byte_tensor, to_float_tensor]
bs=128
il = ImageList.from_files(path, tfms=tfms)
sd = SplitData.split_by_func(il, partial(grandparent_splitter, valid_name='val'))
ll = label_by_func(sd, parent_labeler, proc_y=CategoryProcessor())
data = ll.to_databunch(bs, c_in=3, c_out=10, num_workers=4)
"""
Explanation: Imagenette data
We grab the data from the previous notebook.
Jump_to lesson 11 video
End of explanation
"""
nfs = [32,64,128,256]
cbfs = [partial(AvgStatsCallback,accuracy), CudaCallback,
partial(BatchTransformXCallback, norm_imagenette)]
"""
Explanation: Then a model:
End of explanation
"""
learn,run = get_learn_run(nfs, data, 0.4, conv_layer, cbs=cbfs)
run.fit(1, learn)
"""
Explanation: This is the baseline of training with vanilla SGD.
End of explanation
"""
class Optimizer():
def __init__(self, params, steppers, **defaults):
# might be a generator
self.param_groups = list(params)
# ensure params is a list of lists
if not isinstance(self.param_groups[0], list): self.param_groups = [self.param_groups]
self.hypers = [{**defaults} for p in self.param_groups]
self.steppers = listify(steppers)
def grad_params(self):
return [(p,hyper) for pg,hyper in zip(self.param_groups,self.hypers)
for p in pg if p.grad is not None]
def zero_grad(self):
for p,hyper in self.grad_params():
p.grad.detach_()
p.grad.zero_()
def step(self):
for p,hyper in self.grad_params(): compose(p, self.steppers, **hyper)
"""
Explanation: Refining the optimizer
In PyTorch, the base optimizer in torch.optim is just a dictionary that stores the hyper-parameters and references to the parameters of the model we want to train in parameter groups (different groups can have different learning rates/momentum/weight decay... which is what lets us do discriminative learning rates).
It contains a method step that will update our parameters with the gradients and a method zero_grad to detach and zero the gradients of all our parameters.
We build the equivalent from scratch, only ours will be more flexible. In our implementation, the step function loops over all the parameters to execute the step using stepper functions that we have to provide when initializing the optimizer.
Jump_to lesson 11 video
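The chaining idea behind step can be sketched with toy names (hypothetical functions, not the exp.nb_08 compose): each stepper receives the parameter plus all hyper-parameters as keyword arguments and returns the updated parameter for the next stepper.

```python
# Toy illustration of chaining stepper functions.
def compose_steppers(p, steppers, **hyper):
    for stepper in steppers:
        p = stepper(p, **hyper)
    return p

def scale(p, lr, **kwargs):
    return p * (1 - lr)    # a weight-decay-like multiplicative step

def shift(p, delta, **kwargs):
    return p - delta       # a gradient-like subtractive step

compose_steppers(10.0, [scale, shift], lr=0.1, delta=1.0)  # 8.0
```

Because every stepper accepts **kwargs, each one can pick out only the hyper-parameters it cares about.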
End of explanation
"""
#export
def sgd_step(p, lr, **kwargs):
p.data.add_(-lr, p.grad.data)
return p
opt_func = partial(Optimizer, steppers=[sgd_step])
"""
Explanation: To do basic SGD, this what a step looks like:
End of explanation
"""
#export
class Recorder(Callback):
def begin_fit(self): self.lrs,self.losses = [],[]
def after_batch(self):
if not self.in_train: return
self.lrs.append(self.opt.hypers[-1]['lr'])
self.losses.append(self.loss.detach().cpu())
def plot_lr (self): plt.plot(self.lrs)
def plot_loss(self): plt.plot(self.losses)
def plot(self, skip_last=0):
losses = [o.item() for o in self.losses]
n = len(losses)-skip_last
plt.xscale('log')
plt.plot(self.lrs[:n], losses[:n])
class ParamScheduler(Callback):
_order=1
def __init__(self, pname, sched_funcs):
self.pname,self.sched_funcs = pname,listify(sched_funcs)
def begin_batch(self):
if not self.in_train: return
fs = self.sched_funcs
if len(fs)==1: fs = fs*len(self.opt.param_groups)
pos = self.n_epochs/self.epochs
for f,h in zip(fs,self.opt.hypers): h[self.pname] = f(pos)
class LR_Find(Callback):
_order=1
def __init__(self, max_iter=100, min_lr=1e-6, max_lr=10):
self.max_iter,self.min_lr,self.max_lr = max_iter,min_lr,max_lr
self.best_loss = 1e9
def begin_batch(self):
if not self.in_train: return
pos = self.n_iter/self.max_iter
lr = self.min_lr * (self.max_lr/self.min_lr) ** pos
for pg in self.opt.hypers: pg['lr'] = lr
def after_step(self):
if self.n_iter>=self.max_iter or self.loss>self.best_loss*10:
raise CancelTrainException()
if self.loss < self.best_loss: self.best_loss = self.loss
"""
Explanation: Now that we have changed the optimizer, we will need to adjust the callbacks that were using properties from the PyTorch optimizer: in particular the hyper-parameters are in the list of dictionaries opt.hypers (PyTorch has everything in the list of param groups).
End of explanation
"""
sched = combine_scheds([0.3, 0.7], [sched_cos(0.3, 0.6), sched_cos(0.6, 0.2)])
cbfs = [partial(AvgStatsCallback,accuracy),
CudaCallback, Recorder,
partial(ParamScheduler, 'lr', sched)]
learn,run = get_learn_run(nfs, data, 0.4, conv_layer, cbs=cbfs, opt_func=opt_func)
%time run.fit(1, learn)
run.recorder.plot_loss()
run.recorder.plot_lr()
"""
Explanation: So let's check we didn't break anything and that recorder and param scheduler work properly.
End of explanation
"""
#export
def weight_decay(p, lr, wd, **kwargs):
p.data.mul_(1 - lr*wd)
return p
weight_decay._defaults = dict(wd=0.)
"""
Explanation: Weight decay
Jump_to lesson 11 video
If we let our model learn arbitrarily large parameter values, it might fit all the data points in the training set with an over-complex function that has very sharp changes, which will lead to overfitting.
<img src="images/overfit.png" alt="Fitting vs over-fitting" width="600">
Weight decay comes from the idea of L2 regularization, which consists in adding to your loss function the sum of all the weights squared. Why do that? Because when we compute the gradients, it will add a contribution to them that will encourage the weights to be as small as possible.
Limiting our weights from growing too much is going to hinder the training of the model, but it will yield to a state where it generalizes better. Going back to the theory a little bit, weight decay (or just wd) is a parameter that controls that sum of squares we add to our loss:
python
loss_with_wd = loss + (wd/2) * (weights**2).sum()
In practice though, it would be very inefficient (and maybe numerically unstable) to compute that big sum and add it to the loss. If you remember a little bit of high school math, the derivative of p**2 with respect to p is 2*p. So adding that big sum to our loss is exactly the same as doing:
python
weight.grad += wd * weight
for every weight in our model, which in the case of vanilla SGD is equivalent to updating the parameters with:
python
weight = weight - lr*(weight.grad + wd*weight)
This technique is called "weight decay", as each weight is decayed by a factor lr * wd, as it's shown in this last formula.
This only works for standard SGD, as we have seen that with momentum, RMSProp and Adam, the update has some additional formulas around the gradient. In those cases, the formula that comes from L2 regularization:
python
weight.grad += wd * weight
is different than weight decay
python
new_weight = weight - lr * weight.grad - lr * wd * weight
Most libraries use the first one, but as it was pointed out in Decoupled Weight Decay Regularization by Ilya Loshchilov and Frank Hutter, it is better to use the second one with the Adam optimizer, which is why fastai made it its default.
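As a quick sanity check of why the two formulations coincide for vanilla SGD, here is a tiny numeric demo (plain Python, values chosen arbitrarily):

```python
# For vanilla SGD, L2 regularization (fold wd*w into the gradient) and
# weight decay (shrink the weight by lr*wd) give the same update.
lr, wd = 0.1, 0.01
w, grad = 2.0, 0.5

w_l2 = w - lr * (grad + wd * w)        # L2 regularization step
w_wd = w * (1 - lr * wd) - lr * grad   # weight decay step

assert abs(w_l2 - w_wd) < 1e-12
```

With momentum or Adam, the gradient is transformed before the step, so folding wd*w into the gradient no longer matches decaying the weight directly.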
Weight decay is subtracting lr*wd*weight from the weights. We need this function to have an attribute _defaults so that we are sure there is an hyper-parameter of the same name in our Optimizer.
End of explanation
"""
#export
def l2_reg(p, lr, wd, **kwargs):
p.grad.data.add_(wd, p.data)
return p
l2_reg._defaults = dict(wd=0.)
"""
Explanation: L2 regularization is adding wd*weight to the gradients.
End of explanation
"""
#export
def maybe_update(os, dest, f):
for o in os:
for k,v in f(o).items():
if k not in dest: dest[k] = v
def get_defaults(d): return getattr(d,'_defaults',{})
"""
Explanation: Let's allow steppers to add to our defaults (which are the default values of all the hyper-parameters). This helper function adds to dest the key/value pairs it finds while going through os and applying f, whenever there is no key of the same name already present.
End of explanation
"""
#export
class Optimizer():
def __init__(self, params, steppers, **defaults):
self.steppers = listify(steppers)
maybe_update(self.steppers, defaults, get_defaults)
# might be a generator
self.param_groups = list(params)
# ensure params is a list of lists
if not isinstance(self.param_groups[0], list): self.param_groups = [self.param_groups]
self.hypers = [{**defaults} for p in self.param_groups]
def grad_params(self):
return [(p,hyper) for pg,hyper in zip(self.param_groups,self.hypers)
for p in pg if p.grad is not None]
def zero_grad(self):
for p,hyper in self.grad_params():
p.grad.detach_()
p.grad.zero_()
def step(self):
for p,hyper in self.grad_params(): compose(p, self.steppers, **hyper)
#export
sgd_opt = partial(Optimizer, steppers=[weight_decay, sgd_step])
learn,run = get_learn_run(nfs, data, 0.4, conv_layer, cbs=cbfs, opt_func=sgd_opt)
"""
Explanation: This is the same as before, we just take the default values of the steppers when none are provided in the kwargs.
End of explanation
"""
model = learn.model
opt = sgd_opt(model.parameters(), lr=0.1)
test_eq(opt.hypers[0]['wd'], 0.)
test_eq(opt.hypers[0]['lr'], 0.1)
"""
Explanation: Before trying to train, let's check the behavior works as intended: when we don't provide a value for wd, we pull the corresponding default from weight_decay.
End of explanation
"""
opt = sgd_opt(model.parameters(), lr=0.1, wd=1e-4)
test_eq(opt.hypers[0]['wd'], 1e-4)
test_eq(opt.hypers[0]['lr'], 0.1)
"""
Explanation: But if we provide a value, it overrides the default.
End of explanation
"""
cbfs = [partial(AvgStatsCallback,accuracy), CudaCallback]
learn,run = get_learn_run(nfs, data, 0.3, conv_layer, cbs=cbfs, opt_func=partial(sgd_opt, wd=0.01))
run.fit(1, learn)
"""
Explanation: Now let's fit.
End of explanation
"""
#export
class StatefulOptimizer(Optimizer):
def __init__(self, params, steppers, stats=None, **defaults):
self.stats = listify(stats)
maybe_update(self.stats, defaults, get_defaults)
super().__init__(params, steppers, **defaults)
self.state = {}
def step(self):
for p,hyper in self.grad_params():
if p not in self.state:
#Create a state for p and call all the statistics to initialize it.
self.state[p] = {}
maybe_update(self.stats, self.state[p], lambda o: o.init_state(p))
state = self.state[p]
for stat in self.stats: state = stat.update(p, state, **hyper)
compose(p, self.steppers, **state, **hyper)
self.state[p] = state
#export
class Stat():
_defaults = {}
def init_state(self, p): raise NotImplementedError
def update(self, p, state, **kwargs): raise NotImplementedError
"""
Explanation: This is already better than the baseline!
With momentum
Jump_to lesson 11 video
Momentum requires adding some state. We need to save the moving average of the gradients to be able to do the step and store this inside the optimizer state. To do this, we introduce statistics. Statistics are objects with two methods:
- init_state, that returns the initial state (a tensor of 0. for the moving average of gradients)
- update, that updates the state with the new gradient value
We also read the _defaults values of those objects, to allow them to provide default values to hyper-parameters.
End of explanation
"""
class AverageGrad(Stat):
_defaults = dict(mom=0.9)
def init_state(self, p): return {'grad_avg': torch.zeros_like(p.grad.data)}
def update(self, p, state, mom, **kwargs):
state['grad_avg'].mul_(mom).add_(p.grad.data)
return state
"""
Explanation: Here is an example of Stat:
End of explanation
"""
#export
def momentum_step(p, lr, grad_avg, **kwargs):
p.data.add_(-lr, grad_avg)
return p
sgd_mom_opt = partial(StatefulOptimizer, steppers=[momentum_step,weight_decay],
stats=AverageGrad(), wd=0.01)
learn,run = get_learn_run(nfs, data, 0.3, conv_layer, cbs=cbfs, opt_func=sgd_mom_opt)
run.fit(1, learn)
"""
Explanation: Then we add the momentum step (instead of using the gradients to perform the step, we use the average).
End of explanation
"""
x = torch.linspace(-4, 4, 200)
y = torch.randn(200) + 0.3
betas = [0.5, 0.7, 0.9, 0.99]
def plot_mom(f):
_,axs = plt.subplots(2,2, figsize=(12,8))
for beta,ax in zip(betas, axs.flatten()):
ax.plot(y, linestyle='None', marker='.')
avg,res = None,[]
for i,yi in enumerate(y):
avg,p = f(avg, beta, yi, i)
res.append(p)
ax.plot(res, color='red')
ax.set_title(f'beta={beta}')
"""
Explanation: Jump_to lesson 11 video for discussion about weight decay interaction with batch normalisation
Momentum experiments
What does momentum do to the gradients exactly? Let's do some plots to find out!
Jump_to lesson 11 video
End of explanation
"""
def mom1(avg, beta, yi, i):
if avg is None: avg=yi
res = beta*avg + yi
return res,res
plot_mom(mom1)
"""
Explanation: This is the regular momentum.
End of explanation
"""
#export
def lin_comb(v1, v2, beta): return beta*v1 + (1-beta)*v2
def mom2(avg, beta, yi, i):
if avg is None: avg=yi
avg = lin_comb(avg, yi, beta)
return avg, avg
plot_mom(mom2)
"""
Explanation: As we can see, with too high a value, it may go way too high with no way to change its course.
Another way to smooth noisy data is to do an exponentially weighted moving average. In this case, there is a dampening of (1-beta) in front of the new value, which is less trusted than the current average. We'll define lin_comb (linear combination) to make this easier (note that in the lesson this was named ewma).
End of explanation
"""
y = 1 - (x/3) ** 2 + torch.randn(200) * 0.1
y[0]=0.5
plot_mom(mom2)
"""
Explanation: We can see it gets to a zero-constant when the data is purely random. If the data has a certain shape, it will get that shape (with some delay for high beta).
End of explanation
"""
def mom3(avg, beta, yi, i):
if avg is None: avg=0
avg = lin_comb(avg, yi, beta)
return avg, avg/(1-beta**(i+1))
plot_mom(mom3)
"""
Explanation: Debiasing is here to correct the wrong information we may have in the very first batch. The debias term corresponds to the sum of the coefficients in our moving average. At the time step i, our average is:
$\begin{align}
avg_{i} &= \beta\ avg_{i-1} + (1-\beta)\ v_{i} = \beta\ (\beta\ avg_{i-2} + (1-\beta)\ v_{i-1}) + (1-\beta)\ v_{i} \
&= \beta^{2}\ avg_{i-2} + (1-\beta)\ \beta\ v_{i-1} + (1-\beta)\ v_{i} \
&= \beta^{3}\ avg_{i-3} + (1-\beta)\ \beta^{2}\ v_{i-2} + (1-\beta)\ \beta\ v_{i-1} + (1-\beta)\ v_{i} \
&\vdots \
&= (1-\beta)\ \beta^{i}\ v_{0} + (1-\beta)\ \beta^{i-1}\ v_{1} + \cdots + (1-\beta)\ \beta^{2}\ v_{i-2} + (1-\beta)\ \beta\ v_{i-1} + (1-\beta)\ v_{i}
\end{align}$
and so the sum of the coefficients is
$\begin{align}
S &=(1-\beta)\ \beta^{i} + (1-\beta)\ \beta^{i-1} + \cdots + (1-\beta)\ \beta^{2} + (1-\beta)\ \beta + (1-\beta) \
&= (\beta^{i} - \beta^{i+1}) + (\beta^{i-1} - \beta^{i}) + \cdots + (\beta^{2} - \beta^{3}) + (\beta - \beta^{2}) + (1-\beta) \
&= 1 - \beta^{i+1}
\end{align}$
since all the other terms cancel out each other.
By dividing by this term, we make our moving average a true average (in the sense that all the coefficients we used for the average sum up to 1).
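The geometric-sum result above is easy to verify numerically (a quick check, not part of the notebook's exports):

```python
# The EWMA coefficients at step i sum to 1 - beta**(i+1), so dividing by
# that term turns the moving average into a true (debiased) average.
beta, i = 0.9, 10
coeff_sum = sum((1 - beta) * beta ** k for k in range(i + 1))
assert abs(coeff_sum - (1 - beta ** (i + 1))) < 1e-12
```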
End of explanation
"""
#export
class AverageGrad(Stat):
_defaults = dict(mom=0.9)
def __init__(self, dampening:bool=False): self.dampening=dampening
def init_state(self, p): return {'grad_avg': torch.zeros_like(p.grad.data)}
def update(self, p, state, mom, **kwargs):
state['mom_damp'] = 1-mom if self.dampening else 1.
state['grad_avg'].mul_(mom).add_(state['mom_damp'], p.grad.data)
return state
"""
Explanation: Adam and friends
In Adam, we use the gradient averages but with dampening (not like in SGD with momentum), so let's add this to the AverageGrad class.
Jump_to lesson 11 video
End of explanation
"""
#export
class AverageSqrGrad(Stat):
_defaults = dict(sqr_mom=0.99)
def __init__(self, dampening:bool=True): self.dampening=dampening
def init_state(self, p): return {'sqr_avg': torch.zeros_like(p.grad.data)}
def update(self, p, state, sqr_mom, **kwargs):
state['sqr_damp'] = 1-sqr_mom if self.dampening else 1.
state['sqr_avg'].mul_(sqr_mom).addcmul_(state['sqr_damp'], p.grad.data, p.grad.data)
return state
"""
Explanation: We also need to track the moving average of the gradients squared.
End of explanation
"""
#export
class StepCount(Stat):
def init_state(self, p): return {'step': 0}
def update(self, p, state, **kwargs):
state['step'] += 1
return state
"""
Explanation: We will also need the number of steps done during training for the debiasing.
End of explanation
"""
#export
def debias(mom, damp, step): return damp * (1 - mom**step) / (1-mom)
"""
Explanation: This helper function computes the debias term. If we use dampening, damp = 1 - mom and we get the same result as before. If we don't use dampening (damp = 1), we will need to divide by 1 - mom because that term is missing everywhere.
End of explanation
"""
#export
def adam_step(p, lr, mom, mom_damp, step, sqr_mom, sqr_damp, grad_avg, sqr_avg, eps, **kwargs):
debias1 = debias(mom, mom_damp, step)
debias2 = debias(sqr_mom, sqr_damp, step)
p.data.addcdiv_(-lr / debias1, grad_avg, (sqr_avg/debias2).sqrt() + eps)
return p
adam_step._defaults = dict(eps=1e-5)
#export
def adam_opt(xtra_step=None, **kwargs):
return partial(StatefulOptimizer, steppers=[adam_step,weight_decay]+listify(xtra_step),
stats=[AverageGrad(dampening=True), AverageSqrGrad(), StepCount()], **kwargs)
learn,run = get_learn_run(nfs, data, 0.001, conv_layer, cbs=cbfs, opt_func=adam_opt())
run.fit(3, learn)
"""
Explanation: Then the Adam step is just the following:
End of explanation
"""
def lamb_step(p, lr, mom, mom_damp, step, sqr_mom, sqr_damp, grad_avg, sqr_avg, eps, wd, **kwargs):
debias1 = debias(mom, mom_damp, step)
debias2 = debias(sqr_mom, sqr_damp, step)
r1 = p.data.pow(2).mean().sqrt()
step = (grad_avg/debias1) / ((sqr_avg/debias2).sqrt()+eps) + wd*p.data
r2 = step.pow(2).mean().sqrt()
p.data.add_(-lr * min(r1/r2,10), step)
return p
lamb_step._defaults = dict(eps=1e-6, wd=0.)
lamb = partial(StatefulOptimizer, steppers=lamb_step, stats=[AverageGrad(dampening=True), AverageSqrGrad(), StepCount()])
learn,run = get_learn_run(nfs, data, 0.003, conv_layer, cbs=cbfs, opt_func=lamb)
run.fit(3, learn)
"""
Explanation: LAMB
Jump_to lesson 11 video
It's then super easy to implement a new optimizer. This is LAMB from a very recent paper:
$\begin{align}
g_{t}^{l} &= \nabla L(w_{t-1}^{l}, x_{t}) \
m_{t}^{l} &= \beta_{1} m_{t-1}^{l} + (1-\beta_{1}) g_{t}^{l} \
v_{t}^{l} &= \beta_{2} v_{t-1}^{l} + (1-\beta_{2}) g_{t}^{l} \odot g_{t}^{l} \
m_{t}^{l} &= m_{t}^{l} / (1 - \beta_{1}^{t}) \
v_{t}^{l} &= v_{t}^{l} / (1 - \beta_{2}^{t}) \
r_{1} &= \|w_{t-1}^{l}\|_{2} \
s_{t}^{l} &= \frac{m_{t}^{l}}{\sqrt{v_{t}^{l}} + \epsilon} + \lambda w_{t-1}^{l} \
r_{2} &= \|s_{t}^{l}\|_{2} \
\eta^{l} &= \eta \, r_{1}/r_{2} \
w_{t}^{l} &= w_{t-1}^{l} - \eta^{l}\, s_{t}^{l}
\end{align}$
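One detail worth noticing in lamb_step above is the clipping of the layerwise trust ratio; a toy numeric illustration (made-up values):

```python
# LAMB scales the learning rate per layer by r1/r2, and lamb_step above
# clips that ratio at 10 so an unusually small step norm can't explode it.
r1, r2 = 4.0, 0.2
trust = min(r1 / r2, 10)   # raw ratio is 20, clipped to 10
```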
End of explanation
"""
!python notebook2script.py 09_optimizers.ipynb
"""
Explanation: Other recent variants of optimizers:
- Large Batch Training of Convolutional Networks (LARS also uses weight statistics, not just gradient statistics. Can you add that to this class?)
- Adafactor: Adaptive Learning Rates with Sublinear Memory Cost (Adafactor combines stats over multiple sets of axes)
- Adaptive Gradient Methods with Dynamic Bound of Learning Rate
Export
End of explanation
"""
G = nx.Graph()
G.add_nodes_from(['a', 'b', 'c'])
G.add_edges_from([('a','b'), ('b', 'c')])
nx.draw(G, with_labels=True)
"""
Explanation: Cliques, Triangles and Squares
Let's pose a problem: If A knows B and B knows C, would it be probable that A knows C as well? In a graph involving just these three individuals, it may look as such:
End of explanation
"""
G.add_node('d')
G.add_edge('c', 'd')
G.add_edge('d', 'a')
nx.draw(G, with_labels=True)
"""
Explanation: Let's think of another problem: If A knows B, B knows C, C knows D and D knows A, is it likely that A knows C and B knows D? How would this look like?
End of explanation
"""
# Load the network.
G = nx.read_gpickle('Synthetic Social Network.pkl')
nx.draw(G, with_labels=True)
"""
Explanation: The set of relationships involving A, B and C, if closed, involves a triangle in the graph. The set of relationships that also include D form a square.
You may have observed that social networks (LinkedIn, Facebook, Twitter etc.) have friend recommendation systems. How exactly do they work? Apart from analyzing other variables, closing triangles is one of the core ideas behind the system. A knows B and B knows C, then A probably knows C as well.
If all of the triangles in the two small-scale networks were closed, then the graph would have represented cliques, in which everybody within that subgraph knows one another.
In this section, we will attempt to answer the following questions:
Can we identify cliques?
Can we identify potential cliques that aren't captured by the network?
Can we model the probability that two unconnected individuals know one another?
As usual, let's start by loading the synthetic network.
End of explanation
"""
# Example code that shouldn't be too hard to follow.
def in_triangle(G, node):
    neighbors1 = G.neighbors(node)  # first-degree neighbors of node
    neighbors2 = []
    for n in neighbors1:
        neighbors2.extend(G.neighbors(n))  # second-degree neighbors
    # drop the trivial two-hop walks that lead straight back to node
    while node in neighbors2:
        neighbors2.remove(node)
    neighbors3 = []
    for n in neighbors2:
        neighbors3.extend(G.neighbors(n))  # third-degree neighbors
    # node is in a triangle iff a three-hop walk can return to it
    if node in neighbors3:
        return True
    else:
        return False
in_triangle(G, 3)
"""
Explanation: Cliques
In a social network, cliques are groups of people in which everybody knows everybody. Triangles are a simple example of cliques. Let's try implementing a simple algorithm that finds out whether a node is present in a triangle or not.
The core idea is that if a node (say "A") is present in a triangle, then its neighbors' neighbors' neighbors should include "A" itself.
End of explanation
"""
nx.triangles(G, 3)
"""
Explanation: In reality, NetworkX already has a function that counts the number of triangles that any given node is involved in. This is probably more useful than knowing whether a node is present in a triangle or not, but the above code was simply for practice.
End of explanation
"""
# Possible answer
def get_triangles(G, node):
neighbors = set(G.neighbors(node))
triangle_nodes = set()
"""
Fill in the rest of the code below.
"""
# Verify your answer with the following function call. Should return:
# {1, 2, 3, 6, 23}
get_triangles(G, 3)
# Then, draw out those nodes.
nx.draw(G.subgraph(get_triangles(G, 3)), with_labels=True)
neighbors3 = G.neighbors(3)
neighbors3.append(3)
nx.draw(G.subgraph(neighbors3), with_labels=True)
"""
Explanation: Exercise
Can you write a function that takes in one node and its associated graph as an input, and returns a list or set of itself + all other nodes that it is in a triangle relationship with?
Hint: The neighbor of my neighbor should also be my neighbor, then the three of us are in a triangle relationship.
Hint: Python Sets may be of great use for this problem. https://docs.python.org/2/library/stdtypes.html#set
Verify your answer by drawing out the subgraph composed of those nodes.
End of explanation
"""
# Possible Answer, credit Justin Zabilansky (MIT) for help on this.
def get_open_triangles(G, node):
"""
There are many ways to represent this. One may choose to represent only the nodes involved
in an open triangle; this is not the approach taken here.
    Rather, we have code that explicitly enumerates every open triangle present.
"""
open_triangle_nodes = []
neighbors = set(G.neighbors(node))
# Fill in code below.
return open_triangle_nodes
# # Uncomment the following code if you want to draw out each of the triplets.
# nodes = get_open_triangles(G, 2)
# for i, triplet in enumerate(nodes):
# fig = plt.figure(i)
# nx.draw(G.subgraph(triplet), with_labels=True)
print(get_open_triangles(G, 2))
"""
Explanation: Friend Recommendation: Open Triangles
Now that we have some code that identifies closed triangles, we might want to see if we can do some friend recommendations by looking for open triangles.
Open triangles are like those that we described earlier on - A knows B and B knows C, but C's relationship with A isn't captured in the graph.
Exercise
Can you write a function that identifies, for a given node, the other two nodes that it is involved with in an open triangle, if there is one?
Hint: You may still want to stick with set operations. Suppose we have the A-B-C triangle. If there are neighbors of C that are also neighbors of B, then those neighbors are in a triangle with B and C; consequently, if there are nodes for which C's neighbors do not overlap with B's neighbors, then those nodes are in an open triangle. The final implementation should include some conditions, and probably won't be as simple as described above.
End of explanation
"""
|
shashank14/Asterix | 1-Python Crash course/Python-Crash-Course/Python Crash Course Exercises .ipynb | apache-2.0 | 7**4
"""
Explanation: Python Crash Course Exercises
This is an optional exercise to test your understanding of Python Basics. If you find this extremely challenging, then you probably are not ready for the rest of this course yet and don't have enough programming experience to continue. I would suggest you take another course more geared towards complete beginners, such as Complete Python Bootcamp
Exercises
Answer the questions or complete the tasks outlined in bold below, use the specific method described if applicable.
What is 7 to the power of 4?
End of explanation
"""
s = "Hi there Sam!"
s.split()
"""
Explanation: Split this string:
s = "Hi there Sam!"
into a list.
End of explanation
"""
planet = "Earth"
diameter = 12742
print("The diameter of {} is {} kilometers.".format(planet,diameter))
"""
Explanation: Given the variables:
planet = "Earth"
diameter = 12742
Use .format() to print the following string:
The diameter of Earth is 12742 kilometers.
End of explanation
"""
lst = [1,2,[3,4],[5,[100,200,['hello']],23,11],1,7]
lst[3][1][2][0]
"""
Explanation: Given this nested list, use indexing to grab the word "hello"
End of explanation
"""
d = {'k1':[1,2,3,{'tricky':['oh','man','inception',{'target':[1,2,3,'hello']}]}]}
"""
Explanation: Given this nested dictionary grab the word "hello". Be prepared, this will be annoying/tricky
End of explanation
"""
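One possible indexing chain for the nested-dictionary exercise above (redefining `d` so the cell is self-contained):

```python
d = {'k1': [1, 2, 3, {'tricky': ['oh', 'man', 'inception', {'target': [1, 2, 3, 'hello']}]}]}

# Walk down one level at a time: list index 3 -> dict key -> list index 3 -> dict key -> list index 3.
d['k1'][3]['tricky'][3]['target'][3]
```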
# Tuple is immutable
na = "user@domain.com"
na.split("@")[1]
"""
Explanation: What is the main difference between a tuple and a list?
End of explanation
"""
def domainGet(name):
return name.split("@")[1]
domainGet('user@domain.com')
"""
Explanation: Create a function that grabs the email website domain from a string in the form:
user@domain.com
So for example, passing "user@domain.com" would return: domain.com
End of explanation
"""
def findDog(sentence):
    # lowercase first so that 'Dog', 'DOG', etc. are also matched
    for item in sentence.lower().split():
        if item == "dog":
            return True
    return False

findDog('Is there a dog here?')
"""
Explanation: Create a basic function that returns True if the word 'dog' is contained in the input string. Don't worry about edge cases like a punctuation being attached to the word dog, but do account for capitalization.
End of explanation
"""
def countDog(sentence):
    return sentence.lower().split().count("dog")

countDog('This dog runs faster than the other dog dude!')
"""
Explanation: Create a function that counts the number of times the word "dog" occurs in a string. Again ignore edge cases.
End of explanation
"""
seq = ['soup','dog','salad','cat','great']
"""
Explanation: Use lambda expressions and the filter() function to filter out words from a list that don't start with the letter 's'. For example:
seq = ['soup','dog','salad','cat','great']
should be filtered down to:
['soup','salad']
End of explanation
"""
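One way to solve the filter exercise above with a lambda expression:

```python
seq = ['soup', 'dog', 'salad', 'cat', 'great']
s_words = list(filter(lambda word: word.startswith('s'), seq))
print(s_words)  # ['soup', 'salad']
```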
def caught_speeding(speed, is_birthday):
    # On your birthday you are allowed 5 mph more in every case.
    if is_birthday:
        speed -= 5
    if speed <= 60:
        return "No Ticket"
    elif speed <= 80:
        return "Small Ticket"
    else:
        return "Big Ticket"

caught_speeding(81, False)
caught_speeding(81, True)
lst = ["7:00","7:30"]
"""
Explanation: Final Problem
You are driving a little too fast, and a police officer stops you. Write a function
to return one of 3 possible results: "No ticket", "Small ticket", or "Big Ticket".
If your speed is 60 or less, the result is "No Ticket". If speed is between 61
and 80 inclusive, the result is "Small Ticket". If speed is 81 or more, the result is "Big Ticket". Unless it is your birthday (encoded as a boolean value in the parameters of the function) -- on your birthday, your speed can be 5 higher in all
cases.
End of explanation
"""
lst
type(lst)
type(lst[1])
"""
Explanation: Great job!
End of explanation
"""
|
GoogleCloudPlatform/dataflow-sample-applications | timeseries-streaming/timeseries-python-applications/notebooks/Comparing_metrics_with_Pandas.ipynb | apache-2.0 | !conda install -c conda-forge google-cloud-bigquery google-cloud-bigquery-storage pyarrow pandas numpy matplotlib bokeh -y
"""
Explanation: Copyright 2020 Google Inc.
Licensed under the Apache License, Version 2.0 (the "License").
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
-->
Introduction
This notebook compares the following time series metrics calculated using two different frameworks:
- 60-second Moving Average
- Standard Deviation over a 60-second rolling window
Using one day's worth of forex tick data for the EUR/USD currency pair, extracted from a public API and resampled to 5-second periods.
The two frameworks compared are:
1. The new Apache Beam time series framework, which includes
the metrics library available out of the box in Java SDK
2. The Pandas and Numpy libraries available in the Python SDK, as the de facto tools for manipulating time series data in vectorised form
Prerequisites
GCP project with the BigQuery API enabled to store results, and Dataflow API enabled to compute time series metrics at scale
Google Cloud SDK installed and configured
Setup
Assuming you are reading this from Github:
- Clone this repository locally with the correct tag:
git clone -b tsflow-0.3.2-sample https://github.com/GoogleCloudPlatform/dataflow-sample-applications
Install Java 1.8, Gradle and Python 3.x
Create Python Virtual or Conda Environment, e.g. for Conda: conda create --name timeseriesanalysis-is-cool python=3.7 jupyterlab
Run Jupyter notebook or lab to open this notebook
Execute the below cell to install the required dependencies
End of explanation
"""
!gradle -p ../../timeseries-java-applications forex_example --args='--resampleSec=5 --windowSec=60 --runner=DataflowRunner --workerMachineType=n1-standard-4 --project=<GCPPROJECT> --region=<REGION> --bigQueryTableForTSAccumOutputLocation=<GCPPROJECT>:<DATASET>.<TABLE> --gcpTempLocation=<GCPTMPLOCATION> --tempLocation=<TMPLOCATION> --inputPath=<INPUTDATASETLOCATION>'
"""
Explanation: Using the new Apache Beam time series java framework in Java to compute metrics at scale in GCP Dataflow
In this case we call a gradle command to execute the Java pipeline that reads the data and computes the metrics in scope at scale in Dataflow. Please replace the <GCPPROJECT>, <REGION>, <DATASET>, <TABLE>, <GCPTMPLOCATION> and <TMPLOCATION> parameters accordingly:
End of explanation
"""
import google.auth
from google.cloud import bigquery
from google.cloud import bigquery_storage
import pandas as pd
# Explicitly create a credentials object. This allows you to use the same
# credentials for both the BigQuery and BigQuery Storage clients, avoiding
# unnecessary API calls to fetch duplicate authentication tokens.
credentials, your_project_id = google.auth.default(
scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
# Make clients.
bqclient = bigquery.Client(credentials=credentials, project=your_project_id,)
bqstorageclient = bigquery_storage.BigQueryReadClient(credentials=credentials)
# Download results from BigQuery after the Beam pipeline finishes processing the time series data
query_string_60m = """
WITH BID_MA60_T AS (
SELECT lower_window_boundary, upper_window_boundary,
(SELECT dbl_data from UNNEST(data) WHERE metric = 'SIMPLE_MOVING_AVERAGE') AS BID_MA60_BQ
FROM `<GCPPROJECT>.<DATASET>.<TABLE>`
WHERE timeseries_minor_key = 'BID'
)
SELECT * from BID_MA60_T
"""
bid_ma60_df = (
bqclient.query(query_string_60m)
.result()
.to_dataframe(bqstorage_client=bqstorageclient)
).sort_values(by=['upper_window_boundary'])
bid_ma60_df.index = pd.to_datetime(bid_ma60_df['upper_window_boundary'])
bid_ma60_df.drop(columns=['upper_window_boundary', 'lower_window_boundary'],inplace=True)
query_string_stddev = """
WITH BID_STDDEV_T AS (
SELECT lower_window_boundary, upper_window_boundary,
(SELECT dbl_data from UNNEST(data) WHERE metric = 'STANDARD_DEVIATION') AS BID_STDDEV_BQ
FROM `<GCPPROJECT>.<DATASET>.<TABLE>`
WHERE timeseries_minor_key = 'BID'
)
SELECT * from BID_STDDEV_T
"""
bid_std_dev_df = (
bqclient.query(query_string_stddev)
.result()
.to_dataframe(bqstorage_client=bqstorageclient)
).sort_values(by=['upper_window_boundary'])
bid_std_dev_df.index = pd.to_datetime(bid_std_dev_df['upper_window_boundary'])
bid_std_dev_df.drop(columns=['upper_window_boundary', 'lower_window_boundary'],inplace=True)
# Showing Moving Average sample
bid_ma60_df
# Showing Moving Average sample
bid_std_dev_df
"""
Explanation: The --resampleSec and --windowSec parameters indicate the sample period in seconds (e.g. 5) and the rolling window period, in seconds, used to compute the metrics (e.g. 60).
Reading results from BigQuery and loading in Pandas Dataframes
With the above command we submitted a Beam Java pipeline to Dataflow to calculate all the metrics currently implemented, including the ones in scope (Moving Average and Standard Deviation). In the next steps we will query BigQuery and load the results into a Pandas DataFrame for comparison.
We first need to be able to authenticate in GCP to read the data from BigQuery (Please ensure you have Google Cloud SDK installed and configured with the correct project)
End of explanation
"""
import pandas as pd
eurusd = pd.read_csv("../../timeseries-java-applications/Examples/src/main/resources/EURUSD-2020-05-11_2020-05-11.csv",
index_col=1, names=["Pair", "Timestamp", "Ask", "Bid", "Ask Volume", "Bid Volume"], header=None)
eurusd.index = pd.to_datetime(eurusd.index)
"""
Explanation: Using the Pandas Dataframe and Numpy APIs to compute the metrics
In this case we will leverage the data processing Python libraries directly, we first read the original dataset.
End of explanation
"""
import numpy as np

eurusd_bid_ma60 = eurusd.rename(columns={"Bid": "Bid MA60 Pandas"})\
.resample("5S")\
.fillna(method='ffill')['Bid MA60 Pandas']\
.rolling(window=12, min_periods=1)\
.apply(lambda x: np.mean(x))
eurusd_bid_stddev = eurusd.rename(columns={"Bid": "Bid StdDev Pandas"})\
.resample("5S")\
.fillna(method='ffill')['Bid StdDev Pandas']\
.rolling(window=12, min_periods=1)\
.apply(lambda x: np.std(x))
"""
Explanation: In order to calculate the metrics in Pandas using the same constraints we need to:
1. resample() the time series datapoints to every 5 seconds
2. Do forward filling with the fillna(method='ffill') method to fill the gaps accordingly
3. Apply rolling() window
4. Aggregate by applying the Numpy mean() and std() methods respectively
End of explanation
"""
bid_ma60_df_utc = bid_ma60_df.tz_convert(None)
bid_ma60_df_utc_join = bid_ma60_df_utc.join(eurusd_bid_ma60)
bid_ma60_df_utc_join['Delta'] = bid_ma60_df_utc_join["Bid MA60 Pandas"] - bid_ma60_df_utc_join["BID_MA60_BQ"]
bid_ma60_df_utc_join[['BID_MA60_BQ','Bid MA60 Pandas']].plot(figsize=(15,6))
bid_std_dev_df_utc = bid_std_dev_df.tz_convert(None)
bid_std_dev_df_utc_join = bid_std_dev_df_utc.join(eurusd_bid_stddev)
bid_std_dev_df_utc_join['Delta'] = bid_std_dev_df_utc_join["Bid StdDev Pandas"] - bid_std_dev_df_utc_join["BID_STDDEV_BQ"]
bid_std_dev_df_utc_join[['BID_STDDEV_BQ','Bid StdDev Pandas']].plot(figsize=(15,6))
"""
Explanation: Once we have the moving average and standard deviation calculated in both frameworks we can do:
1. Convert the dataframes generated from BigQuery to UTC
2. Join the results from the two frameworks for comparison
3. Create a Delta metric to visualise how close the values are
4. Quick visualization to compare the two metrics
Comparing results
End of explanation
"""
bid_ma60_df_utc_join
bid_std_dev_df_utc_join
"""
Explanation: We can also sample the joined dataframes to compare the two
End of explanation
"""
import numpy as np
from bokeh.io import show, output_notebook
from bokeh.layouts import column
from bokeh.models import RangeTool
from bokeh.plotting import figure
dates = np.array(bid_ma60_df_utc_join.index, dtype=np.datetime64)
p1 = figure(plot_height=300, plot_width=800, tools="xpan", toolbar_location=None,
x_axis_type="datetime", x_axis_location="above",
background_fill_color="#efefef", x_range=(dates[1500], dates[2500]))
p1.line(dates, bid_ma60_df_utc_join['BID_MA60_BQ'], color='#A6CEE3', legend_label='Bid MA60 BQ')
p1.line(dates, bid_ma60_df_utc_join['Bid MA60 Pandas'], color='#FB9A99', legend_label='Bid MA60 Pandas')
p1.yaxis.axis_label = 'Price MA60'
p2 = figure(plot_height=300, plot_width=800, tools="xpan", toolbar_location=None,
x_axis_type="datetime", x_axis_location="above",
background_fill_color="#efefef", x_range=p1.x_range)
p2.line(dates, bid_ma60_df_utc_join['Delta'], color='#A6CEE3', legend_label='Delta')
p2.yaxis.axis_label = 'Delta'
select = figure(title="Drag the middle and edges of the selection box to change the range above",
plot_height=130, plot_width=800, y_range=p1.y_range,
x_axis_type="datetime", y_axis_type=None,
tools="", toolbar_location=None, background_fill_color="#efefef")
range_tool = RangeTool(x_range=p1.x_range)
range_tool.overlay.fill_color = "navy"
range_tool.overlay.fill_alpha = 0.2
select.line(dates, bid_ma60_df_utc_join['BID_MA60_BQ'], color='#A6CEE3')
select.line(dates, bid_ma60_df_utc_join['Bid MA60 Pandas'], color='#FB9A99')
select.ygrid.grid_line_color = None
select.add_tools(range_tool)
select.toolbar.active_multi = range_tool
output_notebook()
show(column(p1, p2, select))
"""
Explanation: We can see at first glance that the Standard Deviation metric is almost identical between the two frameworks, while the Moving Average deviates slightly more.
This is caused by round-off error between the Java BigDecimal and Python float numerical data types. In the case of the Moving Average the difference is larger because there are significant digits on both sides of the decimal point; for more details see The Perils of Floating Point.
We can also use the Bokeh library to enable interactive graphs, so we can interact with the results to find areas where the delta is bigger.
End of explanation
"""
import numpy as np
from bokeh.io import show, output_notebook
from bokeh.layouts import column
from bokeh.models import RangeTool
from bokeh.plotting import figure
dates = np.array(bid_std_dev_df_utc_join.index, dtype=np.datetime64)
p1 = figure(plot_height=300, plot_width=800, tools="xpan", toolbar_location=None,
x_axis_type="datetime", x_axis_location="above",
background_fill_color="#efefef", x_range=(dates[1500], dates[2500]))
p1.line(dates, bid_std_dev_df_utc_join['BID_STDDEV_BQ'], color='#A6CEE3', legend_label='Bid STDDEV BQ')
p1.line(dates, bid_std_dev_df_utc_join['Bid StdDev Pandas'], color='#FB9A99', legend_label='Bid StdDev Pandas')
p1.yaxis.axis_label = 'Standard Deviation'
p2 = figure(plot_height=300, plot_width=800, tools="xpan", toolbar_location=None,
x_axis_type="datetime", x_axis_location="above",
background_fill_color="#efefef", x_range=p1.x_range)
p2.line(dates, bid_std_dev_df_utc_join['Delta'], color='#A6CEE3', legend_label='Delta')
p2.yaxis.axis_label = 'Delta'
select = figure(title="Drag the middle and edges of the selection box to change the range above",
plot_height=130, plot_width=800, y_range=p1.y_range,
x_axis_type="datetime", y_axis_type=None,
tools="", toolbar_location=None, background_fill_color="#efefef")
range_tool = RangeTool(x_range=p1.x_range)
range_tool.overlay.fill_color = "navy"
range_tool.overlay.fill_alpha = 0.2
select.line(dates, bid_std_dev_df_utc_join['BID_STDDEV_BQ'], color='#A6CEE3')
select.line(dates, bid_std_dev_df_utc_join['Bid StdDev Pandas'], color='#FB9A99')
select.ygrid.grid_line_color = None
select.add_tools(range_tool)
select.toolbar.active_multi = range_tool
output_notebook()
show(column(p1, p2, select))
"""
Explanation:
End of explanation
"""
query_string_last = """
WITH LAST AS (
SELECT lower_window_boundary, upper_window_boundary,
(SELECT dbl_data from UNNEST(data) WHERE metric = 'LAST') AS LAST
FROM `<GCPPROJECT>.<DATASET>.<TABLE>`
WHERE timeseries_minor_key = 'BID'
)
SELECT * from LAST
"""
last_bq = (
bqclient.query(query_string_last)
.result()
.to_dataframe(bqstorage_client=bqstorageclient)
).sort_values(by=['upper_window_boundary'])
last_bq.index = pd.to_datetime(last_bq['upper_window_boundary'])
last_bq.drop(columns=['upper_window_boundary', 'lower_window_boundary'],inplace=True)
last_bq
import numpy as np
from bokeh.io import show, output_notebook
from bokeh.layouts import column
from bokeh.models import RangeTool
from bokeh.plotting import figure
dates_unsampled = np.array(eurusd.index, dtype=np.datetime64)
dates_resampled = np.array(last_bq.index, dtype=np.datetime64)
# We position the graph towards the end of day to show a quiet period when forward filling is necessary
p1 = figure(plot_height=300, plot_width=800, tools="xpan", toolbar_location=None,
x_axis_type="datetime", x_axis_location="above",
background_fill_color="#efefef", x_range=(dates_unsampled[dates_unsampled.size-100], dates_unsampled[dates_unsampled.size-1]))
p1.circle(dates_unsampled, eurusd['Bid'], color='#A6CEE3', legend_label='Bid unsampled')
p1.circle(dates_resampled, last_bq['LAST'], color='#FB9A99', legend_label='Bid downsampled')
p1.yaxis.axis_label = 'Downsampling'
select = figure(title="Drag the middle and edges of the selection box to change the range above",
plot_height=130, plot_width=800, y_range=p1.y_range,
x_axis_type="datetime", y_axis_type=None,
tools="", toolbar_location=None, background_fill_color="#efefef")
range_tool = RangeTool(x_range=p1.x_range)
range_tool.overlay.fill_color = "navy"
range_tool.overlay.fill_alpha = 0.2
select.line(dates_unsampled, eurusd['Bid'], color='#A6CEE3')
select.ygrid.grid_line_color = None
select.add_tools(range_tool)
select.toolbar.active_multi = range_tool
output_notebook()
show(column(p1, select))
# from bokeh.io import export_png
# export_png(column(p1, select), filename="img/FILLINGBQ.png")
"""
Explanation: Filling the gaps
To wrap up, let's check how the gap forward-filling feature is applied in both frameworks before the metrics are calculated:
We start first with the results from the Time Series Beam framework stored in BigQuery already
End of explanation
"""
last_pandas = eurusd.resample("5S").fillna(method='ffill')['Bid']
import numpy as np
from bokeh.io import show, output_notebook
from bokeh.layouts import column
from bokeh.models import RangeTool
from bokeh.plotting import figure
dates_unsampled = np.array(eurusd.index, dtype=np.datetime64)
dates_resampled = np.array(last_pandas.index, dtype=np.datetime64)
# We position the graph towards the end of day to show a quiet period when forward filling is necessary
p1 = figure(plot_height=300, plot_width=800, tools="xpan", toolbar_location=None,
x_axis_type="datetime", x_axis_location="above",
background_fill_color="#efefef", x_range=(dates_unsampled[dates_unsampled.size-100], dates_unsampled[dates_unsampled.size-1]))
p1.circle(dates_unsampled, eurusd['Bid'], color='#A6CEE3', legend_label='Bid unsampled')
p1.circle(dates_resampled, last_pandas, color='#FB9A99', legend_label='Bid downsampled')
p1.yaxis.axis_label = 'Downsampling'
select = figure(title="Drag the middle and edges of the selection box to change the range above",
plot_height=130, plot_width=800, y_range=p1.y_range,
x_axis_type="datetime", y_axis_type=None,
tools="", toolbar_location=None, background_fill_color="#efefef")
range_tool = RangeTool(x_range=p1.x_range)
range_tool.overlay.fill_color = "navy"
range_tool.overlay.fill_alpha = 0.2
select.line(dates_unsampled, eurusd['Bid'], color='#A6CEE3')
select.ygrid.grid_line_color = None
select.add_tools(range_tool)
select.toolbar.active_multi = range_tool
output_notebook()
show(column(p1, select))
# from bokeh.io import export_png
# export_png(column(p1, select), filename="img/FILLINGPANDAS.png")
"""
Explanation: We can see that the unsampled data points become less frequent during the quiet period, and that the resampled data points are forward-filled correctly, in this case using the LAST measure from the samples.
Let's repeat the exercise with the Pandas resample this time
End of explanation
"""
|
uber/pyro | tutorial/source/gplvm.ipynb | apache-2.0 | import os
import matplotlib.pyplot as plt
import pandas as pd
import torch
from torch.nn import Parameter
import pyro
import pyro.contrib.gp as gp
import pyro.distributions as dist
import pyro.ops.stats as stats
smoke_test = ('CI' in os.environ) # ignore; used to check code integrity in the Pyro repo
assert pyro.__version__.startswith('1.7.0')
pyro.set_rng_seed(1)
"""
Explanation: Gaussian Process Latent Variable Model
The Gaussian Process Latent Variable Model (GPLVM) is a dimensionality reduction method that uses a Gaussian process to learn a low-dimensional representation of (potentially) high-dimensional data. In the typical setting of Gaussian process regression, where we are given inputs $X$ and outputs $y$, we choose a kernel and learn hyperparameters that best describe the mapping from $X$ to $y$. In the GPLVM, we are not given $X$: we are only given $y$. So we need to learn $X$ along with the kernel hyperparameters.
We do not do maximum likelihood inference on $X$. Instead, we set a Gaussian prior for $X$ and learn the mean and variance of the approximate (gaussian) posterior $q(X|y)$. In this notebook, we show how this can be done using the pyro.contrib.gp module. In particular we reproduce a result described in [2].
End of explanation
"""
# license: Copyright (c) 2014, the Open Data Science Initiative
# license: https://www.elsevier.com/legal/elsevier-website-terms-and-conditions
URL = "https://raw.githubusercontent.com/sods/ods/master/datasets/guo_qpcr.csv"
df = pd.read_csv(URL, index_col=0)
print("Data shape: {}\n{}\n".format(df.shape, "-" * 21))
print("Data labels: {}\n{}\n".format(df.index.unique().tolist(), "-" * 86))
print("Show a small subset of the data:")
df.head()
"""
Explanation: Dataset
The data we are going to use consists of single-cell qPCR data for 48 genes obtained from mice (Guo et al., [1]). This data is available at the Open Data Science repository. The data contains 48 columns, with each column corresponding to (normalized) measurements of each gene. Cells differentiate during their development and these data were obtained at various stages of development. The various stages are labelled from the 1-cell stage to the 64-cell stage. For the 32-cell stage, the data is further differentiated into 'trophectoderm' (TE) and 'inner cell mass' (ICM). ICM further differentiates into 'epiblast' (EPI) and 'primitive endoderm' (PE) at the 64-cell stage. Each of the rows in the dataset is labelled with one of these stages.
End of explanation
"""
data = torch.tensor(df.values, dtype=torch.get_default_dtype())
# we need to transpose data to correct its shape
y = data.t()
"""
Explanation: Modelling
First, we need to define the output tensor $y$. To predict values for all $48$ genes, we need $48$ Gaussian processes. So the required shape for $y$ is num_GPs x num_data = 48 x 437.
End of explanation
"""
capture_time = y.new_tensor([int(cell_name.split(" ")[0]) for cell_name in df.index.values])
# we scale the time into the interval [0, 1]
time = capture_time.log2() / 6
# we setup the mean of our prior over X
X_prior_mean = torch.zeros(y.size(1), 2) # shape: 437 x 2
X_prior_mean[:, 0] = time
"""
Explanation: Now comes the most interesting part. We know that the observed data $y$ has latent structure: in particular different datapoints correspond to different cell stages. We would like our GPLVM to learn this structure in an unsupervised manner. In principle, if we do a good job of inference then we should be able to discover this structure---at least if we choose reasonable priors. First, we have to choose the dimension of our latent space $X$. We choose $dim(X)=2$, since we would like our model to disentangle 'capture time' ($1$, $2$, $4$, $8$, $16$, $32$, and $64$) from cell branching types (TE, ICM, PE, EPI). Next, when we set the mean of our prior over $X$, we set the first dimension to be equal to the observed capture time. This will help the GPLVM discover the structure we are interested in and will make it more likely that that structure will be axis-aligned in a way that is easier for us to interpret.
End of explanation
"""
kernel = gp.kernels.RBF(input_dim=2, lengthscale=torch.ones(2))
# we clone here so that we don't change our prior during the course of training
X = Parameter(X_prior_mean.clone())
# we will use SparseGPRegression model with num_inducing=32;
# initial values for Xu are sampled randomly from X_prior_mean
Xu = stats.resample(X_prior_mean.clone(), 32)
gplvm = gp.models.SparseGPRegression(X, y, kernel, Xu, noise=torch.tensor(0.01), jitter=1e-5)
"""
Explanation: We will use a sparse version of Gaussian process inference to make training faster. Remember that we also need to define $X$ as a Parameter so that we can set a prior and guide (variational distribution) for it.
End of explanation
"""
# we use `.to_event()` to tell Pyro that the prior distribution for X has no batch_shape
gplvm.X = pyro.nn.PyroSample(dist.Normal(X_prior_mean, 0.1).to_event())
gplvm.autoguide("X", dist.Normal)
"""
Explanation: We will use the autoguide() method from the Parameterized class to set an auto Normal guide for $X$.
End of explanation
"""
# note that training is expected to take a minute or so
losses = gp.util.train(gplvm, num_steps=4000)
# let's plot the loss curve after 4000 steps of training
plt.plot(losses)
plt.show()
"""
Explanation: Inference
As mentioned in the Gaussian Processes tutorial, we can use the helper function gp.util.train to train a Pyro GP module. By default, this helper function uses the Adam optimizer with a learning rate of 0.01.
End of explanation
"""
gplvm.mode = "guide"
X = gplvm.X # draw a sample from the guide of the variable X
"""
Explanation: After inference, the mean and standard deviation of the approximated posterior $q(X) \sim p(X | y)$ will be stored in the parameters X_loc and X_scale. To get a sample from $q(X)$, we need to set the mode of gplvm to "guide".
End of explanation
"""
plt.figure(figsize=(8, 6))
colors = plt.get_cmap("tab10").colors[::-1]
labels = df.index.unique()
X = gplvm.X_loc.detach().numpy()
for i, label in enumerate(labels):
X_i = X[df.index == label]
plt.scatter(X_i[:, 0], X_i[:, 1], c=[colors[i]], label=label)
plt.legend()
plt.xlabel("pseudotime", fontsize=14)
plt.ylabel("branching", fontsize=14)
plt.title("GPLVM on Single-Cell qPCR data", fontsize=16)
plt.show()
"""
Explanation: Visualizing the result
Let’s see what we got by applying GPLVM to our dataset.
End of explanation
"""
|
RaoUmer/lightning-example-notebooks | plots/histogram.ipynb | mit | import os
from lightning import Lightning
from numpy import random
"""
Explanation: <img style='float: left' src="http://lightning-viz.github.io/images/logo.png"> <br> <br> Histogram plots in <a href='http://lightning-viz.github.io/'><font color='#9175f0'>Lightning</font></a>
<hr> Setup
End of explanation
"""
lgn = Lightning(ipython=True, host='http://public.lightning-viz.org')
"""
Explanation: Connect to server
End of explanation
"""
values = random.randn(100)
lgn.histogram(values, 10, zoom=False)
"""
Explanation: <hr> Make a histogram
We'll generate some normally distributed values and make a histogram
End of explanation
"""
lgn.histogram(values, 50, zoom=False)
"""
Explanation: Change the number of bins
End of explanation
"""
lgn.histogram(values, 25, zoom=True)
"""
Explanation: <hr> Zooming for dynamic bins
If we turn on zooming, we can change the number of bins dynamically by zooming in and out -- try it!
End of explanation
"""
|
eds-uga/cbio4835-sp17 | lectures/Lecture19.ipynb | mit | # Preliminary imports
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import scipy.integrate as sig # Here's the critical module!
import seaborn as sns
"""
Explanation: Lecture 19: Computational Modeling
CBIO (CSCI) 4835/6835: Introduction to Computational Biology
Overview and Objectives
So far, we've discussed Hidden Markov Models as way to encapsulate and represent something with a "hidden state" component. There are countless other computational and statistical models, a few of which we'll touch on here. By the end of this lecture, you should be able to:
Understand compartment models and how to design them
Relate ordinary differential equations (ODEs) to compartment models
Implement basic compartment models for population growth, disease spread, and competition
Part 1: Compartment Models
A compartment model is one of the simplest mechanistic representations of real-world phenomena.
All compartment models look something like this:
<img src="https://upload.wikimedia.org/wikipedia/commons/0/0e/Singlecell.PNG" />
The node(s) represent specific compartments in the model
The edge(s) represent flow of material from one compartment to another, as well as dependencies between compartments
Because of the fact that things are constantly moving around in compartment models, they are sometimes also referred to as dynamic models
There are lots of variations on this theme, including:
Closed models: Total amount of material within the model remains constant, simply shifting from one compartment to another
Open models: Total material can flow in and out of the model's compartments. This is referred to as the model having a source (an external contributor of additional material) or a sink (a compartment where material effectively disappears from the model when it enters)
Cyclic models: Material can flow back and forth between mutually connected compartments
Or combinations of the above!
Compartment models can be discrete or continuous.
Discrete models consider the passage of time in discrete steps, e.g. integers.
<img src="https://upload.wikimedia.org/wikipedia/commons/0/0e/Singlecell.PNG" />
In this example, the input of the compartment $u(t)$ is dependent on time, where time is a discrete quantity.
Continuous models, on the other hand, shrink the change in time between events ($\delta t$) to 0.
<img src="Lecture19/continuous.png" />
We'll see some examples where this formulation may make more sense. Unfortunately, this is often much more difficult to derive for certain systems.
Compartment models can also be deterministic or stochastic.
Deterministic models give you the exact same outputs for any given input. This is what we'll see with models that use differential equations: for given initial values to the system, we always get the same final values.
Stochastic models introduce randomness into systems, simulating probabilities instead of explicit differential equations. These provide much more realistic looks into real-world systems, but are often much more difficult to analyze (e.g. for steady states), since a given input will not always (or ever!) give the same output.
An offshoot of stochastic models is the agent-based model, in which individual "agents" are allowed to act independently according to certain probabilities. This is a very powerful, but very compute-intensive, model.
Part 2: Common Dynamic Models
Enough vocabulary; let's look at a couple common dynamic models.
Population Growth
We'd like to model the growth of some population (humans, animals, bacteria, etc). There are two generally-accepted ways of doing this:
Exponential growth assumes that the population grows, well, exponentially. There are implicit assumptions here as well, most importantly that resources also grow with population to sustain its growth.
Logistic growth assumes a little more explicitly that the amount of some critical resource (e.g. food) is fixed, effectively providing an upper bound on the ultimate size of the population.
Let's take a look!
Exponential growth sounds a little misleading, since the equation doesn't, on initial inspection, look exponential.
Let's say your population can grow through birth, and shrink through death. At any given time $t$, the population is offset by the number added (birth) and removed (death).
With this information, can we build an equation for population as a function of time?
$n(t + 1) = n(t) + b - d$
Or perhaps, put another way, the change in population at any given time?
$\frac{dn}{dt} = bn(t) - dn(t)$
$b$ is the birth rate
$d$ is the death rate
$n(t)$ is the population at time $t$
You may notice both terms in the above equation have a common element that can be factored out.
$\frac{dn}{dt} = n(t) (b - d)$
The $(b - d)$ term even has a special name: the per capita rate of change. It essentially governs whether the population is increasing or decreasing at any given time, depending on whether the birth or death term dominates. It is typically represented as $r_c = b - d$, so we can rewrite the equation as simply:
$\frac{dn}{dt} = r_c n(t)$
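This ODE has the well-known closed-form solution $n(t) = n_0 e^{r_c t}$, which is why the growth is "exponential." As a quick sanity check (our own sketch, with arbitrary values), a simple forward-Euler integration of $\frac{dn}{dt} = r_c n$ should land very close to that analytic answer for small step sizes:

```python
import math

# dn/dt = rc * n has the closed-form solution n(t) = n0 * exp(rc * t).
# Forward-Euler check of that claim (values here are arbitrary):
n0, rc = 10.0, 0.1
dt, T = 0.001, 1.0

n = n0
steps = int(T / dt)
for _ in range(steps):
    n += rc * n * dt  # Euler step: n(t + dt) ~ n(t) + n'(t) * dt

analytic = n0 * math.exp(rc * T)
print(n, analytic)  # the two should agree closely for small dt
```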
Now that we've gone through the derivation of the differential equations, how about some nice pretty pictures?
<img src="Lecture19/growth-exp.png" width="40%" />
Compartment models lend themselves to these sorts of diagrams, which make setting up equations (and, eventually, transition matrices) a lot simpler.
So we have these equations; how do we run them and obtain some results?
Turns out, Python (specifically, SciPy) has a module for solving ordinary differential equations (ODEs).
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
import scipy.integrate as sig  # provides odeint

n0 = 10
rc1 = 0.01
rc2 = 0.1
rc3 = -0.2
"""
Explanation: Now, let's set an initial population $n_0$, pick a couple of different per capita rates of change $r_c$, and run them to see what happens.
End of explanation
"""
# Differential equation functions take two arguments: the variable that's changing, and time.
def diffeq(n, t):
return n * rc
"""
Explanation: The one critical part of the whole thing: you have to define the differential equations as Python functions, so the SciPy module knows what to solve. Let's do that here:
End of explanation
"""
t = np.linspace(0, 15, 1000) # time
rc = rc1
n1, oded = sig.odeint(diffeq, n0, t, full_output = True)
print(oded['message'])
rc = rc2
n2, oded = sig.odeint(diffeq, n0, t, full_output = True)
print(oded['message'])
rc = rc3
n3, oded = sig.odeint(diffeq, n0, t, full_output = True)
print(oded['message'])
plt.xlabel('time')
plt.ylabel('population')
plt.title('Exponential Growth')
plt.plot(t, n1, label = '$r_c = 0.01$')
plt.plot(t, n2, label = '$r_c = 0.1$')
plt.plot(t, n3, label = '$r_c = -0.2$')
plt.legend(loc = 0)
"""
Explanation: Now, let's create a bunch of time points and evaluate the ODE for different values of rc!
End of explanation
"""
# Same as before
n0 = 10
rc1 = 0.01
rc2 = 0.1
rc3 = -0.2
K = 100 # The new term introduced by this method--known as "Carrying Capacity"
"""
Explanation: Logistic growth is a slightly different approach. It takes into account the fact that populations usually can't just keep growing without bound. In fact, their growth rate is directly related to their current size.
The model looks something like this:
<img src="Lecture19/growth-log.png" width="40%" />
You still see some of the usual suspects--population $n(t)$ as a function of time, and birth and death rates, but notice the latter two are also now functions of the current population instead of simply constants.
To come up with a bounded model of population growth, we need to add a couple of things to our original equation.
Think of it this way: when the population is small, we want it to behave more or less like it did before--exponential growth. But when the population is large, we want it to slow down or even stop growing.
$\frac{dn}{dt} = r_c n(t) (1 - \frac{n(t)}{K})$
Let's look at this more closely:
- We still see the same exponential growth equation as before in the first part
- There's a second part, though: $(1 - \frac{n(t)}{K})$
- Consider the equation when $n(t)$ is small: the $\frac{n(t)}{K}$ number is close to 0, which means $1 -$ that number is pretty much 1, so the equation reduces to $r_c n(t)$, exactly what we had before!
- When $n(t)$ is large--say, very close to whatever $K$ is--the fraction $\frac{n(t)}{K}$ is very close to 1, and $1 - 1 = 0$, which sets the entire equation to 0. In other words, growth stops completely!
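Those two limiting cases are easy to verify numerically. The sketch below is our own check (not from the lecture) of the logistic growth rate $f(n) = r_c\, n\, (1 - n/K)$ at small $n$, at $n = K$, and above $K$:

```python
# Numeric check of the limiting behavior of the logistic growth rate.
rc, K = 0.1, 100.0

def logistic_rate(n):
    return rc * n * (1.0 - n / K)

print(logistic_rate(1.0), rc * 1.0)  # nearly exponential when n << K
print(logistic_rate(K))              # growth stops entirely at n = K
print(logistic_rate(1.5 * K))        # negative: the population shrinks above K
```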
So that's cool. Let's plot it out with Python! Remember to first set up the variables and rates:
End of explanation
"""
def logistic_growth(n, t):
exp_term = n * rc # same as before
limit_term = 1 - (n / K) # the limiting term
return exp_term * limit_term
"""
Explanation: Now we need to write the function that implements the differential equation.
End of explanation
"""
t = np.linspace(0, 100, 2000) # time
rc = rc1
n1, oded = sig.odeint(logistic_growth, n0, t, full_output = True)
print(oded['message'])
rc = rc2
n2, oded = sig.odeint(logistic_growth, n0, t, full_output = True)
print(oded['message'])
rc = rc3
n3, oded = sig.odeint(logistic_growth, n0, t, full_output = True)
print(oded['message'])
plt.xlabel('time')
plt.ylabel('population')
plt.title('Logistic Growth with $K = 100$')
plt.plot(t, n1, label = '$r_c = 0.01$')
plt.plot(t, n2, label = '$r_c = 0.1$')
plt.plot(t, n3, label = '$r_c = -0.2$')
plt.legend(loc = 0)
"""
Explanation: Now we simulate it! The only difference is, this time, we feed the function name logistic_growth to the odeint() solver:
End of explanation
"""
a = 1.0 # prey growth rate
b = 0.1 # predation rate (prey death rate)
c = 0.075 # predator growth rate
d = 1.0 # predator death rate
"""
Explanation: Models of Competition
The population growth models we looked at are great, but they're unrealistic for many reasons, not the least of which is: populations don't exist in a vacuum!
Populations have to coexist with restrictions such as food, water, resources, mating and fertility rates, environmental factors, and numerous others.
Lotka-Volterra models build on the idea of logistic population growth, but with the added twist of a second species that preys on the first.
Consider a model of 2 species with the following parameters:
Populations $n_1$ and $n_2$
Intrinsic growth rates $r_1$ and $r_2$
Carrying capacities $K_1$ and $K_2$
Assumptions
(always important to list these out!)
The prey population finds ample food at all times, whereas the predator population's food depends solely on the prey population
Related: the predator has an unlimited appetite (i.e. the amount of food consumed by predators is dependent only on the population size of the prey)
The rate of change in both populations is a function of the sizes of the populations
The environment the two populations reside in doesn't change, and genetics / adaptation don't play a role
How do we set up the competing differential equations?
Start with the exponential growth from before!
Prey growth: $\frac{dx}{dt} = \alpha x$
But we want to include a negative dependence on the predator population, too.
This negative dependence has its own rate, $\beta$.
Predation rate is not only dependent on the predator population $y$, but also the prey population $x$.
So the negative term is composed of three elements.
Prey: $\frac{dx}{dt} = \alpha x - \beta x y$
How about the predator equations?
(Hint: the part of the prey equation that kills off prey is what contributes to predator growth)
Predator growth: $\frac{dy}{dt} = \gamma x y$
That's the growth term for predators. How about its own negative term?
Predator: $\frac{dy}{dt} = \gamma x y - \delta y$
Let's model these equations in Python!
First, we have parameter values we need to set up:
End of explanation
"""
def pred_prey(X, t):
# Remember: X is a two-element NumPy array
ax = a * X[0]
bxy = b * X[0] * X[1]
cxy = c * X[0] * X[1]
dy = d * X[1]
# Return value is also a two-element array
retval = np.array([ax - bxy, cxy - dy])
return retval
"""
Explanation: Next, we need to code up one step of the differential equation, in the form of a Python function:
End of explanation
"""
t = np.linspace(0, 15, 1000) # time
X0 = np.array([10, 5]) # initial conditions: 10 prey, 5 predators
X, oded = sig.odeint(pred_prey, X0, t, full_output = True)
print(oded['message'])
prey, pred = X.T
plt.xlabel('time')
plt.ylabel('population')
plt.title('Lotka-Volterra Model')
plt.plot(t, prey, 'r-', label = 'Prey')
plt.plot(t, pred , 'b-', label = 'Predators')
plt.legend(loc = 0)
"""
Explanation: How does it look?
End of explanation
"""
beta = 0.3 # infection rate
theta = 10.0 # birth rate
sigma = 0.5 # de-immunization rate
rho = 0.9 # recovery rate
delta = 0.5 # death rate from infection
mu = 0.05 # death rate from susceptibility or recovery
# Initial populations.
S0 = 100
I0 = 5
R0 = 0
X0 = np.array([S0, I0, R0])
"""
Explanation: Epidemiological Models
There is an entire class of compartment models dedicated to capturing the characteristics of epidemiological systems, the most popular of which is easily the SIR model.
SIR models, or Susceptible-Infected-Recovered models, represent three distinct populations and how people move from one of these populations to another in response to infectious diseases.
<img src="Lecture19/sir-overview.png" width="50%" />
Let's create a diagram of the process, just as before, showing the relevant variables, parameters, constraints, and interactions between variables.
To start, we need to list out our background knowledge of the problem, encoded as assumptions:
Infection can be transmitted from infected to susceptible individuals
Recovered individuals become immune for a period of time
Probability of death is increased in infected patients
Can we sketch out the diagram?
<img src="Lecture19/sir-diagram.png" width="70%" />
Next step: convert the diagram into equations or rules (we've used differential equations so far), one for each population.
Susceptible population:
$\frac{dS}{dt} = \theta + \sigma R(t) - \beta S(t) I(t) - \mu S(t)$
Infected population:
$\frac{dI}{dt} = \beta S(t) I(t) - \rho I(t) - \delta I(t)$
Recovered population:
$\frac{dR}{dt} = \rho I(t) - \sigma R(t) - \mu R(t)$
Aside
We're leaving out for the moment how exactly to come up with values for all these parameters; the issue is especially apparent with SIR models, since they have so many parameters.
Research papers using the model will detail out the values used and how they were determined (often through simulation or experiment).
Let's see if we can simulate this model!
End of explanation
"""
def diff_sir(X, t):
s = X[0]
i = X[1]
r = X[2]
# Now, compute each equation.
ds = theta + (sigma * r) - (beta * s * i) - (mu * s)
di = (beta * s * i) - (rho * i) - (delta * i)
dr = (rho * i) - (sigma * r) - (mu * r)
# Return the numbers as an array, in the same order as the input.
return np.array([ds, di, dr])
"""
Explanation: Now we need to code up the differential equations in terms of Python functions.
End of explanation
"""
t = np.linspace(0, 50, 1000) # time
Y, oded = sig.odeint(diff_sir, X0, t, full_output = True)
print(oded['message'])
S, I, R = Y.T
plt.xlabel('time')
plt.ylabel('population')
plt.title('SIR Model')
plt.plot(t, S, 'b-', label = 'S')
plt.plot(t, I, 'r-', label = 'I')
plt.plot(t, R, 'g-', label = 'R')
plt.legend(loc = 0)
"""
Explanation: Finally, we'll solve the equation.
End of explanation
"""
compsocialscience/summer-institute | 2018/materials/boulder/day5-causal-inference/Day 5 - Case Study.ipynb | mit |
sb.factorplot(x='HOUR',y='ST_CASE',hue='WEEKDAY',data=counts_df,
aspect=2,order=range(24),palette='nipy_spectral',dodge=.5)
sb.factorplot(x='MONTH',y='ST_CASE',hue='WEEKDAY',data=counts_df,
aspect=2,order=range(1,13),palette='nipy_spectral',dodge=.5)
"""
Explanation: Exploratory data analysis
End of explanation
"""
annual_state_counts_df = counts_df.groupby(['STATE','YEAR']).agg({'ST_CASE':np.sum,'DRUNK_DR':np.sum,'FATALS':np.sum}).reset_index()
annual_state_counts_df = pd.merge(annual_state_counts_df,population_estimates,
left_on=['STATE','YEAR'],right_on=['State','Year']
)
annual_state_counts_df = annual_state_counts_df[['STATE','YEAR','ST_CASE','DRUNK_DR','FATALS','Population']]
annual_state_counts_df.head()
"""
Explanation: Changes in fatal accidents following legalization?
One interesting source of exogenous variation is Colorado and Washington's legalization of cannabis in 2014. If cannabis usage increased following legalization and this translated into more impaired driving, then there should be an increase in the number of fatal auto accidents in these states after 2014.
End of explanation
"""
_cols = ['ST_CASE','DRUNK_DR','FATALS']
annual_co_counts = annual_state_counts_df[(annual_state_counts_df['STATE'] == "Colorado") & (annual_state_counts_df['YEAR'] > 2010)].set_index('YEAR')[_cols]
annual_wa_counts = annual_state_counts_df[(annual_state_counts_df['STATE'] == "Washington") & (annual_state_counts_df['YEAR'] > 2010)].set_index('YEAR')[_cols]
annual_nm_counts = annual_state_counts_df[(annual_state_counts_df['STATE'] == "New Mexico") & (annual_state_counts_df['YEAR'] > 2010)].set_index('YEAR')[_cols]
annual_or_counts = annual_state_counts_df[(annual_state_counts_df['STATE'] == "Oregon") & (annual_state_counts_df['YEAR'] > 2010)].set_index('YEAR')[_cols]
# Make the figures
f,axs = plt.subplots(3,1,figsize=(10,6),sharex=True)
# Plot the cases
annual_co_counts.plot.line(y='ST_CASE',c='blue',ax=axs[0],legend=False,lw=3)
annual_wa_counts.plot.line(y='ST_CASE',c='green',ax=axs[0],legend=False,lw=3)
annual_nm_counts.plot.line(y='ST_CASE',c='red',ls='--',ax=axs[0],legend=False,lw=3)
annual_or_counts.plot.line(y='ST_CASE',c='orange',ls='--',ax=axs[0],legend=False,lw=3)
axs[0].set_ylabel('Fatal Incidents')
# Plot the drunk driving cases
annual_co_counts.plot.line(y='DRUNK_DR',c='blue',ax=axs[1],legend=False,lw=3)
annual_wa_counts.plot.line(y='DRUNK_DR',c='green',ax=axs[1],legend=False,lw=3)
annual_nm_counts.plot.line(y='DRUNK_DR',c='red',ls='--',ax=axs[1],legend=False,lw=3)
annual_or_counts.plot.line(y='DRUNK_DR',c='orange',ls='--',ax=axs[1],legend=False,lw=3)
axs[1].set_ylabel('Drunk Driving')
# Plot the fatalities
annual_co_counts.plot.line(y='FATALS',c='blue',ax=axs[2],legend=False,lw=3)
annual_wa_counts.plot.line(y='FATALS',c='green',ax=axs[2],legend=False,lw=3)
annual_nm_counts.plot.line(y='FATALS',c='red',ls='--',ax=axs[2],legend=False,lw=3)
annual_or_counts.plot.line(y='FATALS',c='orange',ls='--',ax=axs[2],legend=False,lw=3)
axs[2].set_ylabel('Total Fatalities')
# Plot 2014 legalization
for ax in axs:
ax.axvline(x=2014,c='r')
# Stuff for legend
b = mlines.Line2D([],[],color='blue',label='Colorado',linewidth=3)
g = mlines.Line2D([],[],color='green',label='Washington',linewidth=3)
r = mlines.Line2D([],[],color='red',linestyle='--',label='New Mexico',linewidth=3)
o = mlines.Line2D([],[],color='orange',linestyle='--',label='Oregon',linewidth=3)
axs[2].legend(loc='lower center',ncol=4,handles=[b,g,r,o],fontsize=12,bbox_to_anchor=(.5,-.75))
f.tight_layout()
annual_state_counts_df['Treated'] = np.where(annual_state_counts_df['STATE'].isin(['Colorado','Washington']),1,0)
annual_state_counts_df['Time'] = np.where(annual_state_counts_df['YEAR'] >= 2014,1,0)
annual_state_counts_df = annual_state_counts_df[annual_state_counts_df['YEAR'] >= 2010]
annual_state_counts_df.query('STATE == "Colorado"')
"""
Explanation: Visualize accident statistics for Colorado, Washington, New Mexico (similar to Colorado), and Oregon (similar to Washington).
End of explanation
"""
m_cases = smf.ols(formula = 'ST_CASE ~ Treated*Time + Population',
data = annual_state_counts_df).fit()
print(m_cases.summary())
m_cases = smf.ols(formula = 'FATALS ~ Treated*Time + Population',
data = annual_state_counts_df).fit()
print(m_cases.summary())
m_cases = smf.ols(formula = 'DRUNK_DR ~ Treated*Time + Population',
data = annual_state_counts_df).fit()
print(m_cases.summary())
"""
Explanation: We'll specify a difference-in-differences design with "Treated" states (who legalized) and "Time" (when they legalized), while controlling for differences in population. The Treated:Time interaction is the Difference-in-Differences estimate, which is not statistically significant. This suggests legalization did not increase the risk of fatal auto accidents in these states.
End of explanation
"""
counts_df.head()
population_estimates.head()
# Select only data after 2004 in the month of April
april_df = counts_df.query('MONTH == 4 & YEAR > 2004').set_index(['STATE','HOUR','MONTH','DAY','YEAR'])
# Re-index the data to fill in missing dates
ix = pd.MultiIndex.from_product([state_codings.values(),range(0,24),[4],range(1,31),range(2005,2017)],
names = ['STATE','HOUR','MONTH','DAY','YEAR'])
april_df = april_df.reindex(ix).fillna(0)
april_df.reset_index(inplace=True)
# Add in population data
april_df = pd.merge(april_df,population_estimates,
left_on=['STATE','YEAR'],right_on=['State','Year'])
april_df = april_df[[i for i in april_df.columns if i not in ['Year','State']]]
# Inspect
april_df.head()
# Calculate whether day is a Friday, Saturday, or Sunday
april_df['Weekday'] = pd.to_datetime(april_df[['YEAR','MONTH','DAY']]).apply(lambda x:x.weekday())
april_df['Weekend'] = np.where(april_df['Weekday'] >= 4,1,0)
# Treated days are on April 20
april_df['Fourtwenty'] = np.where(april_df['DAY'] == 20,1,0)
april_df['Legal'] = np.where((april_df['STATE'].isin(['Colorado','Washington'])) & (april_df['YEAR'] >= 2014),1,0)
# Examine data for a week before and after April 20
april_df = april_df[april_df['DAY'].isin([13,20,27])]
# Inspect Colorado data
april_df.query('STATE == "Colorado"').sort_values(['YEAR','DAY'])
"""
Explanation: Changes in fatal accidents on 4/20?
There's another exogenous source of variation in this car crash data we can exploit: the unofficial cannabis enthusiast holiday of April 20. If consumption increases on this day compared to a week before or after (April 13 and 27), does this explain changes in fatal auto accidents?
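One wrinkle in this design is that April 20 falls on a different weekday each year, which is why the models below include a weekend control. A quick check (our own sketch, not part of the analysis code) using the standard library:

```python
from datetime import date

# April 20 moves around the week from year to year; flag the dates that fall
# on Friday/Saturday/Sunday (weekday() is Monday=0 ... Sunday=6).
for year in range(2010, 2017):
    d = date(year, 4, 20)
    is_weekend = d.weekday() >= 4
    print(year, d.strftime('%A'), is_weekend)
```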
End of explanation
"""
sb.factorplot(x='DAY',y='ST_CASE',data=april_df,kind='bar',palette=['grey','green','grey'])
"""
Explanation: Inspect the main effect: how cases on April 20 compare to the week before and after.
End of explanation
"""
m_cases_420 = smf.ols(formula = 'ST_CASE ~ Fourtwenty*Legal + YEAR + Weekend + Population',
data = april_df).fit()
print(m_cases_420.summary())
m_drunk_420 = smf.ols(formula = 'DRUNK_DR ~ Fourtwenty*Legal + YEAR + Weekend + Population',
data = april_df).fit()
print(m_drunk_420.summary())
m_fatal_420 = smf.ols(formula = 'FATALS ~ Fourtwenty*Legal + YEAR + Weekend + Population',
data = april_df).fit()
print(m_fatal_420.summary())
"""
Explanation: Estimate the models. The Fourtwenty and Legal dummy variables (and their interaction) capture whether fatal accidents increased on April 20 compared to the week beforehand and afterwards, while controlling for state legality, year, whether it's a weekend, and state population. We do not observe a statistically significant increase in the number of incidents, alcohol-involved incidents, or total fatalities on April 20.
End of explanation
"""
ca2016 = pd.read_csv('wikipv_ca_2016.csv',encoding='utf8',parse_dates=['timestamp']).set_index('timestamp')
ca2018 = pd.read_csv('wikipv_ca_2018.csv',encoding='utf8',parse_dates=['timestamp']).set_index('timestamp')
ca2016.head()
"""
Explanation: Case study: Wikipedia pageview dynamics
On November 8, 2016, California voters passed Proposition 64 legalizing recreational use of cannabis. On January 1, 2018, recreational sales began. The following two files capture the daily pageview data for the article Cannabis in California as well as the daily pageview for all the other pages it links to.
End of explanation
"""
f,axs = plt.subplots(2,1,figsize=(10,5))
ca2016['Cannabis in California'].plot(ax=axs[0],color='red',lw=3)
ca2018['Cannabis in California'].plot(ax=axs[1],color='blue',lw=3)
axs[0].axvline(pd.Timestamp('2016-11-08'),c='k',ls='--')
axs[1].axvline(pd.Timestamp('2018-01-01'),c='k',ls='--')
for ax in axs:
ax.set_ylabel('Pageviews')
f.tight_layout()
"""
Explanation: Here are two plots of the pageview dynamics for the seed "Cannabis in California" article.
End of explanation
"""
g = nx.read_gexf('wikilinks_cannabis_in_california.gexf')
f,ax = plt.subplots(1,1,figsize=(10,10))
g_pos = nx.layout.kamada_kawai_layout(g)
nx.draw(G = g,
ax = ax,
pos = g_pos,
with_labels = True,
node_size = [dc*(len(g) - 1)*10 for dc in nx.degree_centrality(g).values()],
font_size = 7.5,
font_weight = 'bold',
node_color = 'tomato',
edge_color = 'grey'
)
"""
Explanation: Here is a visualization of the local hyperlink network around "Cannabis in California."
End of explanation
"""
all_accident_df = pd.read_csv('accidents.csv',encoding='utf8',index_col=0)
all_accident_df.head()
"""
Explanation: What kinds of causal arguments could you make from these pageview data and the hyperlink networks?
Appendix 1: Cleaning NHTSA FARS Data
"accidents.csv" is a ~450MB file after concatenating the raw annual data from NHTSA FARS.
End of explanation
"""
population_estimates = pd.read_csv('census_pop_estimates.csv')
_cols = [i for i in population_estimates.columns if "POPESTIMATE" in i] + ['NAME']
population_estimates_stacked = population_estimates[_cols].set_index('NAME').unstack().reset_index()
population_estimates_stacked.rename(columns={'level_0':'Year','NAME':'State',0:'Population'},inplace=True)
population_estimates_stacked['Year'] = population_estimates_stacked['Year'].str.slice(-4).astype(int)
population_estimates_stacked = population_estimates_stacked[population_estimates_stacked['State'].isin(state_codings.values())]
population_estimates_stacked.dropna(subset=['Population'],inplace=True)
population_estimates_stacked.to_csv('population_estimates.csv',encoding='utf8')
"""
Explanation: The state population estimates for 2010-2017 from the U.S. Census Bureau.
End of explanation
"""
gb_vars = ['STATE','HOUR','DAY','MONTH','YEAR']
agg_dict = {'ST_CASE':len,'DRUNK_DR':np.sum,'FATALS':np.sum}
counts_df = all_accident_df.groupby(gb_vars).agg(agg_dict).reset_index()
counts_df['STATE'] = counts_df['STATE'].map(state_codings)
counts_df = counts_df.query('YEAR > 1999')
counts_df.to_csv('accident_counts.csv',encoding='utf8',index=False)
counts_df.head()
"""
Explanation: Groupby-aggregate the data by state, month, day, and year counting the number of cases, alcohol-involved deaths, and total fatalities. Save the data as "accident_counts.csv".
End of explanation
"""
from datetime import datetime
import requests, json
from bs4 import BeautifulSoup
from urllib.parse import urlparse, quote, unquote
import networkx as nx
def get_page_outlinks(page_title,lang='en',redirects=1):
"""Takes a page title and returns a list of wiki-links on the page. The
list may contain duplicates and the position in the list is approximately
where the links occurred.
page_title - a string with the title of the page on Wikipedia
lang - a string (typically two letter ISO 639-1 code) for the language
edition, defaults to "en"
redirects - 1 or 0 for whether to follow page redirects, defaults to 1
Returns:
outlinks_list - a list of article titles linked from the page body
"""
# Replace spaces with underscores
page_title = page_title.replace(' ','_')
bad_titles = ['Special:','Wikipedia:','Help:','Template:','Category:','International Standard','Portal:','s:','File:','Digital object identifier','(page does not exist)']
# Get the response from the API for a query
# After passing a page title, the API returns the HTML markup of the current article version within a JSON payload
req = requests.get('https://{2}.wikipedia.org/w/api.php?action=parse&format=json&page={0}&redirects={1}&prop=text&disableeditsection=1&disabletoc=1'.format(page_title,redirects,lang))
# Read the response into JSON to parse and extract the HTML
json_string = json.loads(req.text)
# Initialize an empty list to store the links
outlinks_list = []
if 'parse' in json_string.keys():
page_html = json_string['parse']['text']['*']
# Parse the HTML into Beautiful Soup
soup = BeautifulSoup(page_html,'lxml')
# Remove sections at end
bad_sections = ['See_also','Notes','References','Bibliography','External_links']
sections = soup.find_all('h2')
for section in sections:
if section.span['id'] in bad_sections:
# Clean out the divs
div_siblings = section.find_next_siblings('div')
for sibling in div_siblings:
sibling.clear()
# Clean out the ULs
ul_siblings = section.find_next_siblings('ul')
for sibling in ul_siblings:
sibling.clear()
# Delete tags associated with templates
for tag in soup.find_all('tr'):
tag.replace_with('')
# For each paragraph tag, extract the titles within the links
for para in soup.find_all('p'):
for link in para.find_all('a'):
if link.has_attr('title'):
title = link['title']
# Ignore links that aren't interesting or are redlinks
if all(bad not in title for bad in bad_titles) and 'redlink' not in link['href']:
outlinks_list.append(title)
# For each unordered list, extract the titles within the child links
for unordered_list in soup.find_all('ul'):
for item in unordered_list.find_all('li'):
for link in item.find_all('a'):
if link.has_attr('title'):
title = link['title']
# Ignore links that aren't interesting or are redlinks
if all(bad not in title for bad in bad_titles) and 'redlink' not in link['href']:
outlinks_list.append(title)
return outlinks_list
def get_pageviews(page_title,lang='en',date_from='20150701',date_to=str(datetime.today().date()).replace('-','')):
"""Takes Wikipedia page title and returns a all the various pageview records
page_title - a string with the title of the page on Wikipedia
lang - a string (typically two letter ISO 639-1 code) for the language edition,
defaults to "en"
datefrom - a date string in a YYYYMMDD format, defaults to 20150701
dateto - a date string in a YYYYMMDD format, defaults to today
Returns:
concat_df - a DataFrame indexed by date and multi-columned by agent and access type
"""
quoted_page_title = quote(page_title, safe='')
df_list = []
#for access in ['all-access','desktop','mobile-app','mobile-web']:
#for agent in ['all-agents','user','spider','bot']:
s = "https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article/{1}.wikipedia.org/{2}/{3}/{0}/daily/{4}/{5}".format(quoted_page_title,lang,'all-access','user',date_from,date_to)
json_response = requests.get(s).json()
if 'items' in json_response:
df = pd.DataFrame(json_response['items'])
df_list.append(df)
concat_df = pd.concat(df_list)
concat_df['timestamp'] = pd.to_datetime(concat_df['timestamp'],format='%Y%m%d%H')
concat_df = concat_df.set_index(['timestamp','agent','access'])['views'].unstack([1,2]).sort_index(axis=1)
concat_df[('page','page')] = page_title
return concat_df
else:
print("Error on {0}".format(page_title))
pass
"""
Explanation: Appendix 2: Get page outlinks
Load libraries and define two functions:
get_page_outlinks - Get all of the outlinks from the current version of the page.
get_pageviews - Get all of the pageviews for an article over a time range
End of explanation
"""
ca_links = get_page_outlinks('Cannabis in California') + ['Cannabis in California']
link_d = {}
for l in list(set(ca_links)):
link_d[l] = get_page_outlinks(l)
"""
Explanation: Get all of the links from "Cannabis in California", and add the seed article itself.
End of explanation
"""
g = nx.DiGraph()
seed_edges = [('Cannabis in California',l) for l in link_d['Cannabis in California']]
#g.add_edges_from(seed_edges)
for page,links in link_d.items():
for link in links:
if link in link_d['Cannabis in California']:
g.add_edge(page,link)
print("There are {0:,} nodes and {1:,} edges in the network.".format(g.number_of_nodes(),g.number_of_edges()))
nx.write_gexf(g,'wikilinks_cannabis_in_california.gexf')
"""
Explanation: Make a network object of these hyperlinks among articles.
End of explanation
"""
pvs_2016 = {}
pvs_2018 = {}
pvs_2016['Cannabis in California'] = get_pageviews('Cannabis in California',
date_from='20160801',date_to='20170201')
pvs_2018['Cannabis in California'] = get_pageviews('Cannabis in California',
date_from='20171001',date_to='20180301')
for page in list(set(ca_links)):
pvs_2016[page] = get_pageviews(page,date_from='20160801',date_to='20170201')
pvs_2018[page] = get_pageviews(page,date_from='20171001',date_to='20180301')
"""
Explanation: Get the pageviews for the articles linking from "Cannabis in California".
End of explanation
"""
def cleanup_pageviews(pv_dict):
_df = pd.concat(pv_dict.values())
_df.reset_index(inplace=True)
_df.columns = _df.columns.droplevel(0)
_df.columns = ['timestamp','pageviews','page']
_df = _df.set_index(['timestamp','page']).unstack('page')['pageviews']
return _df
cleanup_pageviews(pvs_2016).to_csv('wikipv_ca_2016.csv',encoding='utf8')
cleanup_pageviews(pvs_2018).to_csv('wikipv_ca_2018.csv',encoding='utf8')
"""
Explanation: Define a function cleanup_pageviews to make a rectangular DataFrame with dates as index, pages as columns, and pageviews as values.
End of explanation
"""
googledatalab/notebooks | samples/Conversion Analysis with Google Analytics Data.ipynb | apache-2.0 |
import google.datalab.bigquery as bq
"""
Explanation: Conversion Analysis with Google Analytics Data
This sample notebook demonstrates working with Google Analytics page views and session data exported to Google BigQuery.
Google Analytics offers BigQuery export as part of its premium offering. If you're a premium user, you have the ability to export any of your analytics views to a BigQuery dataset that you own. If you're not, you can use the Analytics API to retrieve and import the data used to generate the default Analytics dashboards.
The sample data used in this notebook shares the same schema as the Google Analytics BigQuery export, but it is from a sample, publicly available account. It is also small in size. This notebook demonstrates one possible custom analytics scenario, and is not based upon actual data.
Related Links:
BigQuery
Google Analytics
Google Charting API for data visualization
End of explanation
"""
%%bq tables describe -n "google.com:analytics-bigquery.LondonCycleHelmet.ga_sessions_20130910"
"""
Explanation: Understanding the Hits Data
It's helpful to inspect the schema and a sample of the data we're working with.
End of explanation
"""
%%bq query -n sessions
SELECT fullVisitorId, visitId, hit.hitNumber as hitNumber, hit.page.pagePath as path
FROM `google.com:analytics-bigquery.LondonCycleHelmet.ga_sessions_20130910`
CROSS JOIN UNNEST(hits) as hit
ORDER BY visitStartTime, hitNumber
%bq execute --query sessions
"""
Explanation: The Google Analytics dataset has a large schema. It should be interesting to inspect some of the data in important columns.
End of explanation
"""
%%bq query -n hits
SELECT hit.page.pagePath as path, COUNT(visitId) as hitCount
FROM `google.com:analytics-bigquery.LondonCycleHelmet.ga_sessions_20130910`
CROSS JOIN UNNEST(hits) as hit
GROUP BY path
ORDER BY hitCount DESC
%%bq execute -q hits
"""
Explanation: The data is organized as a set of visits (or sessions), with each visit containing a set of hits (or page views), in succession. Each hit has a URL path associated with it. Here is another query that shows paths and the number of hits across sessions.
End of explanation
"""
%%bq query -n conversions
WITH
AnnotatedVisits AS (
SELECT
visitId,
hit.page.pagePath AS path,
hit.hitNumber AS hitNumber,
'/confirm.html' IN (SELECT page.pagePath FROM UNNEST(hits)) AS transacted
FROM `google.com:analytics-bigquery.LondonCycleHelmet.ga_sessions_20130910`
CROSS JOIN UNNEST(hits) AS hit
ORDER BY visitStartTime, hitNumber)
SELECT
IF (path = '/', 'home', 'product') AS start,
IF (transacted, 'completed', 'abandoned') AS outcome,
COUNT(*) AS count
FROM AnnotatedVisits
WHERE hitNumber = 1
GROUP BY start, outcome
ORDER BY outcome, start
%bq execute -q conversions
"""
Explanation: Producing Conversion Data
For the purposes of this sample, the question to be answered is: which path leads to a higher conversion rate, users landing on the home page (path = '/') or users landing on a product page (e.g. '/vests/yellow.html')? "Conversion" is defined as the user loading the '/confirm.html' page within a single session.
End of explanation
"""
%%chart sankey --data conversions
{
"sankey": {
"node": {
"colors": [ "black", "red", "black", "green" ]
}
}
}
"""
Explanation: Visualizing the Conversion Path
The table above tells us a bit about completed vs. abandoned visits, depending on the starting point. However, this is more easily seen in a Sankey diagram, which is provided by the Google Charting API.
End of explanation
"""
|
bakanchevn/DBCourseMirea2017 | Неделя 3/Работа в классе/Лабораторная 3-1-Решение.ipynb | gpl-3.0 | def task1():
cursor = db.cursor()
cursor.execute('''
select distinct ar.Name
from tracks t
inner join albums al
on t.albumid = al.albumid
inner join artists ar
on al.artistid = ar.artistid
inner join genres g
on t.genreid = g.genreid
where g.name = 'Rock'
''')
ar = cursor.fetchall()
return [x[0] for x in ar]
task1()
"""
Explanation: Task 1
Write a Python function that builds a list of all artists with tracks in the Rock genre. The list should be sorted in descending order.
End of explanation
"""
def task2():
cursor=db.cursor()
cursor.execute('''
DROP TABLE IF EXISTS students''')
cursor.execute('''
    CREATE TABLE Students(id INTEGER PRIMARY KEY, name TEXT, gpa NUMBER(10,2))''')
db.commit()
task2()
"""
Explanation: Task 2
Write a Python function that creates a Students(id, name, gpa) table. The primary key is id.
End of explanation
"""
%%sql
select *
from students
"""
Explanation: Let's verify that the table was created
End of explanation
"""
%%sql
select coalesce(max(id)+1, 1) as new_id from students
def task3(l_students):
cursor = db.cursor()
cursor.execute( '''
SELECT COALESCE(MAX(ID)+1, 1) AS new_id FROM students''')
new_id = cursor.fetchone()[0]
for i, student in enumerate(l_students):
cursor.execute('''
INSERT INTO Students(id, name, gpa) VALUES(?,?,?)''', (new_id + i, student[0], student[1]))
db.commit()
task3([['Ivanov', 3.2], ['Petrov', 4.2]])
%%sql
SELECT *
FROM Students
"""
Explanation: Task 3
Extend the function above to support adding a list of students of the form [['Ivanov', 1.2], ['Petrov', 2.3]].
The IDs of the new students must start from the maximum ID in the table plus 1. (For example, if the maximum ID in the table is 10, then Petrov should get 11 and Ivanov 12.) The function must support inserting a list of any finite length.
First, get max(id) + 1
End of explanation
"""
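As a design note: the same batch insert can be done with a single `executemany` call instead of looping over `cursor.execute`. A self-contained sketch (using an in-memory database rather than the notebook's `db` connection, which is an assumption for illustration):

```python
import sqlite3

# Illustrative sketch: executemany inserts the whole batch in one call.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE Students(id INTEGER PRIMARY KEY, name TEXT, gpa REAL)")

students = [["Ivanov", 3.2], ["Petrov", 4.2]]
# IDs start from the current maximum ID + 1 (or 1 for an empty table)
cur.execute("SELECT COALESCE(MAX(id) + 1, 1) FROM Students")
new_id = cur.fetchone()[0]
rows = [(new_id + i, name, gpa) for i, (name, gpa) in enumerate(students)]
cur.executemany("INSERT INTO Students(id, name, gpa) VALUES (?, ?, ?)", rows)
conn.commit()

print(cur.execute("SELECT id, name FROM Students ORDER BY id").fetchall())
# [(1, 'Ivanov'), (2, 'Petrov')]
```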
def task4():
cursor = db.cursor()
cursor.execute('''DROP TABLE IF EXISTS faculties''')
cursor.execute('''CREATE TABLE faculties(fac_id INTEGER PRIMARY KEY, name TEXT)''')
cursor.execute('''ALTER TABLE students ADD fac_id INTEGER REFERENCES faculties(fac_id)''')
db.commit()
task4()
%%sql
select *
from faculties
%%sql
select *
from Students
"""
Explanation: Task 4
Add a Faculties(fac_id, name) table. Add a new fac_id column to the Students table with a foreign key referencing the faculties table.
End of explanation
"""
%%sql
INSERT INTO faculties(fac_id, name)
VALUES (1, 'IT'), (2, 'KIB'), (3, 'Math')
%%sql
select *
from faculties
def task5():
cursor = db.cursor()
cursor.execute('Select id, name, gpa from Students')
a = cursor.fetchall()
for x in a:
        print("Enter the faculty for student {} with id = {} and gpa = {}".format(x[1], x[0], x[2]))
fac_name = input()
cursor.execute("SELECT fac_id from faculties where name = ?", (fac_name, ))
        # Check whether such a record exists
try:
fac_id = cursor.fetchone()[0]
except TypeError:
continue
cursor.execute("Update students set fac_id = ? where id = ?", (fac_id, x[0],))
db.commit()
task5()
%%sql
SELECT *
FROM students
task5()
%%sql
SELECT *
FROM Students
"""
Explanation: Task 5
Write a function that updates the faculty of every student. For each student, the function should print the student's info, prompt for a faculty name, and perform the update. If an error occurs, the function must handle the exception and continue.
First, let's add a couple of records to the faculties table.
End of explanation
"""
def task6(fac_name, l_id):
cursor = db.cursor()
cursor.execute( '''
SELECT COALESCE(MAX(fac_id)+1, 1) AS new_fac_id FROM faculties''')
new_id = cursor.fetchone()[0]
cursor.execute('''
INSERT INTO faculties(fac_id, name) VALUES(?,?)''', (new_id, fac_name,))
for x in l_id:
cursor.execute('''
Update students set fac_id = ? where id = ?''', (new_id, x, ))
db.commit()
task6('Hist', [1])
%%sql
select *
from students
"""
Explanation: Task 6
Write a function that transfers some of the students to a new faculty. Input: the faculty name and the list of student IDs to transfer. Output: a new record is added to the faculties table and the corresponding records are updated in the students table.
End of explanation
"""
|
analysiscenter/dataset | examples/experiments/squeeze_and_excitation/squeeze_and_excitation.ipynb | apache-2.0 | import sys
import numpy as np
import tensorflow as tf
from tqdm import tqdm_notebook as tqn
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('seaborn-poster')
plt.style.use('ggplot')
sys.path.append('../../..')
from batchflow import B, V
from batchflow.opensets import MNIST
from batchflow.models.tf import ResNet
sys.path.append('../../utils')
import utils
"""
Explanation: Squeeze and Excitation
Today’s experiment is based on the paper Jie Hu, Li Shen, Gang Sun, "Squeeze-and-Excitation Networks".
We will compare a plain ResNet to a ResNet with squeeze-and-excitation blocks.
To begin with, let's figure out what an SE block is.
It models channel interdependencies at almost no computational cost. The main idea is to add parameters to each channel of a convolutional block so that the network can adaptively adjust the weighting of each feature map.
<img src='_images/se_block.png'>
Let’s take a closer look at the structure of the block:
Squeeze block
Firstly, each channel's feature maps are squeezed into a single numeric value. This results in a vector of size C, where C is the number of channels.
<img src='_images/squeeze.png'>
Excitation block
Afterwards, this vector is fed through a tiny two-layer neural network, which outputs a vector of the same size C. These values can now be used as weights for the original feature maps, scaling each channel based on its importance.
<img src='_images/exictation.png'>
A more detailed explanation is presented in the blog post on Medium.
End of explanation
"""
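The squeeze, excitation, and scale steps above can be sketched in NumPy. This is an illustrative toy, not the BatchFlow implementation; the weight shapes and reduction ratio are assumptions, and biases are omitted for brevity:

```python
import numpy as np

def se_block(feature_maps, w1, w2):
    """Minimal squeeze-and-excitation sketch on a (H, W, C) array."""
    # Squeeze: global average pooling -> one scalar per channel, shape (C,)
    z = feature_maps.mean(axis=(0, 1))
    # Excitation: ReLU bottleneck followed by a sigmoid gate, shape (C,)
    s = 1.0 / (1.0 + np.exp(-(np.maximum(z @ w1, 0.0) @ w2)))
    # Scale: reweight each channel of the original feature maps
    return feature_maps * s

rng = np.random.default_rng(0)
x = rng.normal(size=(7, 7, 8))   # H=7, W=7, C=8
w1 = rng.normal(size=(8, 2))     # bottleneck with reduction ratio r=4
w2 = rng.normal(size=(2, 8))
out = se_block(x, w1, w2)
print(out.shape)                 # (7, 7, 8): same shape, rescaled channels
```

The output has the same shape as the input; each channel is multiplied by a single gate value in (0, 1), which is exactly the self-gating behavior examined later in this notebook.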
dset = MNIST()
"""
Explanation: As always, let's create a dataset with MNIST data
End of explanation
"""
ResNet_config = {
'inputs':{'images': {'shape': (28, 28, 1)},
'labels': {'classes': 10,
'transform': 'ohe',
'dtype': 'int32',
'name': 'targets'}},
'input_block/inputs': 'images',
'body/num_blocks': [1, 1, 1, 1],
'body/filters': [64, 128, 256, 512],
'body/block/bottleneck': True,
'body/block/post_activation': tf.nn.relu,
'body/block/layout': 'cna cna cn',
'loss': 'ce',
'optimizer': 'Adam',
}
SE_config = {
**ResNet_config,
'body/block/se_block': True,
}
"""
Explanation: We will use the standard ResNet from the BatchFlow models.
For comparison, we will create classic ResNet and ResNet with SE blocks. Both models have the same number of blocks. SE block is added to the model by specifying True value for the key 'body/block/se_block' in the config as shown below.
End of explanation
"""
res_train_ppl = (dset.train.p
.init_model('dynamic',
ResNet,
'resnet',
config=ResNet_config)
.train_model('resnet',
feed_dict={'images': B('images'),
'labels': B('labels')}))
res_test_ppl = (dset.test.p
.init_variable('resloss', init_on_each_run=list)
.import_model('resnet', res_train_ppl)
.predict_model('resnet',
fetches='loss',
feed_dict={'images': B('images'),
'labels': B('labels')},
save_to=V('resloss'),
mode='a'))
"""
Explanation: Now create pipelines with the given configurations for a simple ResNet model
End of explanation
"""
se_train_ppl = (dset.train.p
.init_model('dynamic',
ResNet,
'se_block',
config=SE_config)
.train_model('se_block',
feed_dict={'images': B('images'),
'labels': B('labels')}))
se_test_ppl = (dset.test.p
.init_variable('seloss', init_on_each_run=list)
.import_model('se_block', se_train_ppl)
.predict_model('se_block',
fetches='loss',
feed_dict={'images': B('images'),
'labels': B('labels')},
save_to=V('seloss'),
mode='a'))
"""
Explanation: And now the model with SE blocks
End of explanation
"""
for i in tqn(range(500)):
res_train_ppl.next_batch(300, n_epochs=None, shuffle=2)
res_test_ppl.next_batch(300, n_epochs=None, shuffle=2)
se_train_ppl.next_batch(300, n_epochs=None, shuffle=2)
se_test_ppl.next_batch(300, n_epochs=None, shuffle=2)
"""
Explanation: After that, train our models
End of explanation
"""
ResNet_loss = res_test_ppl.get_variable('resloss')
SE_loss = se_test_ppl.get_variable('seloss')
utils.draw(ResNet_loss, 'ResNet', SE_loss, 'Squeeze and excitation')
"""
Explanation: It’s time to show the entire learning process
End of explanation
"""
utils.draw(ResNet_loss, 'ResNet', SE_loss, 'Squeeze and excitation', bound=[300, 500, 0, 0.3])
"""
Explanation: On this plot, it is very difficult to see the difference between them. Let's zoom in on the last 200 iterations.
End of explanation
"""
utils.draw(ResNet_loss, 'ResNet', SE_loss, 'Squeeze and excitation', window=50, bound=[300, 500, 0, 0.3])
"""
Explanation: Because of the large variance, it is again impossible to tell which model is better. We can smooth the curves and see how the loss behaves.
End of explanation
"""
def get_maps(graph, ppl, sess):
operations = graph.get_operations()
head_operations = [oper for oper in operations if 'head' in oper.name]
oper_name = head_operations[1].name + ':0'
next_batch = ppl.next_batch()
maps = sess.run(oper_name,
feed_dict={
'ResNet/inputs/images:0': next_batch.images,
'ResNet/inputs/labels:0': next_batch.labels,
'ResNet/globals/is_training:0': False
})
return maps, next_batch.labels
"""
Explanation: It's clearer now that the squeeze-and-excitation block on average gives better quality than plain ResNet, while SE ResNet has approximately the same number of parameters:
* SE ResNet - 23994378
* classic ResNet - 23495690.
While SE blocks have been empirically shown to improve network performance, let's understand how the self-gating excitation mechanism operates in practice. To provide a clearer picture of the behavior of SE blocks, we will plot the activation values from our SE ResNet and examine how their distribution differs between classes.
End of explanation
"""
res_sess = res_test_ppl.get_model_by_name("resnet").session
res_graph = res_sess.graph
se_sess = se_test_ppl.get_model_by_name('se_block').session
se_graph = se_sess.graph
res_maps, res_answers = get_maps(res_graph, res_test_ppl, res_sess)
se_maps, se_answers = get_maps(se_graph, se_test_ppl, se_sess)
"""
Explanation: Loading our maps and answers
End of explanation
"""
def draw_avgpooling(maps, answers, model=True):
import seaborn as sns
    import pandas as pd
col = sns.color_palette("Set2", 8) + sns.color_palette(["#9b59b6", "#3498db"])
indices = np.array([np.where(answers == i)[0] for i in range(10)])
filters = np.array([np.mean(maps[indices[i]], axis=0).reshape(-1) for i in range(10)])
for i in range(10):
        plt.plot(pd.Series(filters[i]).ewm(span=350, adjust=False).mean(), color=col[i], label=str(i))
plt.title("Distribution of average pooling in "+("SE ResNet" if model else 'simple ResNet'))
plt.legend(fontsize=16, bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.ylabel('Activation value', fontsize=18)
    plt.xlabel('Feature map index', fontsize=18)
plt.axis([0, 2060, 0., 1.])
plt.show()
draw_avgpooling(se_maps, se_answers)
draw_avgpooling(res_maps, res_answers, False)
"""
Explanation: Draw a plot of the distribution of feature map activations after global average pooling for individual classes. Each line is the distribution for one class.
End of explanation
"""
|
Kaggle/learntools | notebooks/feature_engineering/raw/tut1.ipynb | apache-2.0 | #$HIDE_INPUT$
import pandas as pd
ks = pd.read_csv('../input/kickstarter-projects/ks-projects-201801.csv',
parse_dates=['deadline', 'launched'])
ks.head(6)
"""
Explanation: Introduction
In this course, you will learn a practical approach to feature engineering. You'll be able to apply what you learn to Kaggle competitions and other machine learning applications.
Load the data
We'll work with data from Kickstarter projects. The first few rows of the data looks like this:
End of explanation
"""
print('Unique values in `state` column:', list(ks.state.unique()))
"""
Explanation: The state column shows the outcome of the project.
End of explanation
"""
# Drop live projects
ks = ks.query('state != "live"')
# Add outcome column, "successful" == 1, others are 0
ks = ks.assign(outcome=(ks['state'] == 'successful').astype(int))
"""
Explanation: Using this data, how can we use features such as project category, currency, funding goal, and country to predict if a Kickstarter project will succeed?
Prepare the target column
First we'll convert the state column into a target we can use in a model. Data cleaning isn't the current focus, so we'll simplify this example by:
Dropping projects that are "live"
Counting "successful" states as outcome = 1
Combining every other state as outcome = 0
End of explanation
"""
ks = ks.assign(hour=ks.launched.dt.hour,
day=ks.launched.dt.day,
month=ks.launched.dt.month,
year=ks.launched.dt.year)
"""
Explanation: Convert timestamps
Next, we convert the launched feature into categorical features we can use in a model. Since we loaded the columns as timestamp data, we access date and time values through the .dt attribute on the timestamp column.
Note: If you're not familiar with categorical features and label encoding, please check out this lesson from the Intermediate Machine Learning course.
End of explanation
"""
from sklearn.preprocessing import LabelEncoder
cat_features = ['category', 'currency', 'country']
encoder = LabelEncoder()
# Apply the label encoder to each column
encoded = ks[cat_features].apply(encoder.fit_transform)
"""
Explanation: Prep categorical variables
Now for the categorical variables -- category, currency, and country -- we'll need to convert them into integers so our model can use the data. For this we'll use scikit-learn's LabelEncoder. This assigns an integer to each value of the categorical feature.
End of explanation
"""
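As a sketch of what label encoding does, here is a pure-Python approximation (not scikit-learn itself) of `LabelEncoder`'s behavior, which assigns integer codes in sorted order of the unique values:

```python
def label_encode(values):
    # Map each unique value to its index in sorted order,
    # mimicking sklearn.preprocessing.LabelEncoder.fit_transform
    mapping = {v: i for i, v in enumerate(sorted(set(values)))}
    return [mapping[v] for v in values]

print(label_encode(["USD", "GBP", "USD", "EUR"]))  # [2, 1, 2, 0]
```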
# Since ks and encoded have the same index, we can easily join them
data = ks[['goal', 'hour', 'day', 'month', 'year', 'outcome']].join(encoded)
data.head()
"""
Explanation: We collect all of these features in a new dataframe that we can use to train a model.
End of explanation
"""
valid_fraction = 0.1
valid_size = int(len(data) * valid_fraction)
train = data[:-2 * valid_size]
valid = data[-2 * valid_size:-valid_size]
test = data[-valid_size:]
"""
Explanation: Create training, validation, and test splits
We need to create data sets for training, validation, and testing. We'll use a fairly simple approach and split the data using slices. We'll use 10% of the data as a validation set, 10% for testing, and the other 80% for training.
End of explanation
"""
import lightgbm as lgb
feature_cols = train.columns.drop('outcome')
dtrain = lgb.Dataset(train[feature_cols], label=train['outcome'])
dvalid = lgb.Dataset(valid[feature_cols], label=valid['outcome'])
param = {'num_leaves': 64, 'objective': 'binary'}
param['metric'] = 'auc'
num_round = 1000
bst = lgb.train(param, dtrain, num_round, valid_sets=[dvalid], early_stopping_rounds=10, verbose_eval=False)
"""
Explanation: Train a model
For this course we'll be using a LightGBM model. This is a tree-based model that typically provides the best performance, even compared to XGBoost. It's also relatively fast to train.
We won't do hyperparameter optimization because that isn't the goal of this course. So our models won't achieve the absolute best performance you can get. But you'll still see model performance improve as we do feature engineering.
End of explanation
"""
from sklearn import metrics
ypred = bst.predict(test[feature_cols])
score = metrics.roc_auc_score(test['outcome'], ypred)
print(f"Test AUC score: {score}")
"""
Explanation: Make predictions & evaluate the model
Finally, let's make predictions on the test set with the model and see how well it performs. An important thing to remember is that you can overfit to the validation data. This is why we need a test set that the model never sees until the final evaluation.
End of explanation
"""
|
dalek7/umbrella | Python/randomtest.ipynb | mit | np.random.seed(0)
p = np.array([0.1, 0.0, 0.6, 0.3])
print(p)
print(p.ravel())
v =[0,0,0,0]
ntest = 1000
for i in range(ntest):
idx = np.random.choice([0, 1, 2, 3], p = p.ravel())
v[idx] += 1
#print(i, idx)
print(v)
v = np.array(v)
print(v / float(ntest))
"""
Explanation: np.random.choice
python
np.random.seed(0)
p = np.array([0.1, 0.0, 0.7, 0.2])
index = np.random.choice([0, 1, 2, 3], p = p.ravel())
This means that you will pick the index according to the distribution:
$P(index = 0) = 0.1, P(index = 1) = 0.0, P(index = 2) = 0.7, P(index = 3) = 0.2$.
End of explanation
"""
m1 = np.random.permutation(5)
print(m1)
X = np.array([1,2,3,4,5])
m = X.shape[0]
permutation = list(np.random.permutation(m))
shuffled_X = X[permutation]
print(X)
print(shuffled_X)
print(permutation)
"""
Explanation: np.random.permutation
End of explanation
"""
x1 = [2, 4, 8, 10, 20]
x2 = [0.2, 0.4, -0.8, 1.0, -2.0]
X = np.transpose(np.vstack((x1, x2)))
print(X)
print(X.shape)
permutation = np.random.permutation(X.shape[0])
X_shuffle= X[permutation]
print(X_shuffle)
"""
Explanation: np.random.permutation #2
End of explanation
"""
x1 = [2, 4, 8, 10, 20]
x2 = [0.2, 0.4, -0.8, 1.0, -2.0]
X = np.transpose(np.vstack((x1, x2)))
print(X)
np.take(X, np.random.permutation(X.shape[0]), axis=0, out=X)
"""
Explanation: Another approach
End of explanation
"""
y1 = [2, 4, 8, 10, 20]
y2 = [0.2, 0.4, -0.8, 1.0, -2.0]
Y = np.transpose(np.vstack((y1, y2)))
print(Y, Y.shape)
np.take(Y, np.random.permutation(Y.shape[0]), axis=0, out=Y)
# https://stackoverflow.com/a/35647011
print(Y)
"""
Explanation: Another approach
End of explanation
"""
def tolist(a):
try:
return list(tolist(i) for i in a)
except TypeError:
return a
z1 = [2, 4, 8, 10, 20]
z2 = [0.2, 0.4, -0.8, 1.0, -2.0]
Z = np.transpose(np.vstack((z1, z2)))
print(Z)
Z1 = tolist(Z)
print(Z1)
import random
random.shuffle(Z1)
print(Z1)
print(np.array(Z1))
"""
Explanation: Yet another approach
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/nuist/cmip6/models/sandbox-1/ocnbgchem.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nuist', 'sandbox-1', 'ocnbgchem')
"""
Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era: CMIP6
Institute: NUIST
Source ID: SANDBOX-1
Topic: Ocnbgchem
Sub-Topics: Tracers.
Properties: 65 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:34
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean biogeochemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean biogeochemistry model code (PISCES 2.0,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean biogeochemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
"""
Explanation: 1.4. Elemental Stoichiometry
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe elemental stoichiometry (fixed, variable, mix of the two)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.5. Elemental Stoichiometry Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe which elements have fixed/variable stoichiometry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all prognostic tracer variables in the ocean biogeochemistry component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.7. Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all diagnostic tracer variables in the ocean biogeochemistry component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.8. Damping
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any tracer damping used (such as artificial correction or relaxation to climatology,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for passive tracers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for passive tracers (if different from ocean)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for biology sources and sinks
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for biology sources and sinks (if different from ocean)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transport scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 4.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Transport scheme used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Use Different Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the transport scheme if different from that of the ocean model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how atmospheric deposition is modeled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
"""
Explanation: 5.2. River Input
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how river input is modeled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Sediments From Boundary Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from boundary conditions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.4. Sediments From Explicit Model
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from an explicit sediment model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry*
6.1. CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CO2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.2. CO2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe CO2 gas exchange
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.3. O2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is O2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.4. O2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe O2 gas exchange
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.5. DMS Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is DMS gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.6. DMS Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify DMS gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.7. N2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.8. N2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.9. N2O Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2O gas exchange modeled?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.10. N2O Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2O gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.11. CFC11 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC11 gas exchange modeled?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.12. CFC11 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC11 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.13. CFC12 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC12 gas exchange modeled?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.14. CFC12 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC12 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.15. SF6 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is SF6 gas exchange modeled?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.16. SF6 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify SF6 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.17. 13CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 13CO2 gas exchange modeled?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.18. 13CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 13CO2 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.19. 14CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 14CO2 gas exchange modeled?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.20. 14CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 14CO2 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.21. Other Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any other gas exchange
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how carbon chemistry is modeled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.2. PH Scale
Is Required: FALSE Type: ENUM Cardinality: 0.1
If NOT OMIP protocol, describe pH scale.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Constants If Not OMIP
Is Required: FALSE Type: STRING Cardinality: 0.1
If NOT OMIP protocol, list carbon chemistry constants.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tracers in ocean biogeochemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.2. Sulfur Cycle Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sulfur cycle modeled?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Nutrients Present
Is Required: TRUE Type: ENUM Cardinality: 1.N
List nutrient species present in ocean biogeochemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.4. Nitrous Species If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous species.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.5. Nitrous Processes If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous processes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required: TRUE Type: STRING Cardinality: 1.1
Definition of upper trophic level (e.g., based on size)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Upper Trophic Levels Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Define how upper trophic levels are treated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
"""
Explanation: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of phytoplankton
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.2. Pft
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton functional types (PFT) (if applicable)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.3. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton size classes (if applicable)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of zooplankton
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Zooplankton size classes (if applicable)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a bacteria representation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Lability
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe treatment of lability in dissolved organic matter
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is particulate carbon represented in ocean biogeochemistry?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Types If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, type(s) of particulate matter taken into account
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
"""
Explanation: 13.3. Size If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe whether a particle size spectrum is used to represent the distribution of particles in the water volume
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13.4. Size If Discrete
Is Required: FALSE Type: STRING Cardinality: 0.1
If prognostic and discrete size, describe which size classes are used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Sinking Speed If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, method for calculation of the sinking speed of particles
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
"""
Explanation: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which carbon isotopes are modelled (C13, C14)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 14.2. Abiotic Carbon
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is abiotic carbon modelled?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
"""
Explanation: 14.3. Alkalinity
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is alkalinity modelled?
End of explanation
"""
|
GEMScienceTools/rmtk | notebooks/vulnerability/model_generator/DBELA_approach/DBELA.ipynb | agpl-3.0 | from rmtk.vulnerability.model_generator.DBELA_approach import DBELA
from rmtk.vulnerability.common import utils
%matplotlib inline
"""
Explanation: Generation of capacity curves using DBELA
This notebook enables the user to generate capacity curves (in terms of spectral acceleration vs. spectral displacement) using the Displacement-based Earthquake Loss Assessment (DBELA) approach. The DBELA methodology permits the calculation of the displacement capacity of a collection of structures at a number of limit states (which could be structural or non-structural). These displacements are derived based on the capacity of an equivalent SDOF structure, following the principles of structural mechanics (Crowley et al., 2004; Bal et al., 2010; Silva et al., 2013). The figure below illustrates various capacity curves generated using this approach.
<img src="../../../../figures/synthethic_capacity_curves.png" width="350" align="middle">
Note: To run the code in a cell:
Click on the cell to select it.
Press SHIFT+ENTER on your keyboard or press the play button (<button class='fa fa-play icon-play btn btn-xs btn-default'></button>) in the toolbar above.
End of explanation
"""
building_model_file = "../../../../../rmtk_data/DBELA/bare_frames.csv"
damage_model_file = "../../../../../rmtk_data/DBELA/damage_model_dbela_low_code.csv"
"""
Explanation: Load geometric and material properties
In order to use this methodology it is necessary to define a building model, which specifies the probabilistic distribution of the geometrical and material properties. These models need to be defined according to the format described in the RMTK manual. Please specify below the paths for the input files containing the building model and damage model:
End of explanation
"""
no_assets = 100
"""
Explanation: Number of samples
The parameter no_assets below controls the number of synthetic structural models or assets (each one with unique geometrical and material properties) that will be generated using a Monte Carlo sampling process:
End of explanation
"""
building_class_model = DBELA.read_building_class_model(building_model_file)
assets = DBELA.generate_assets(building_class_model, no_assets)
damage_model = utils.read_damage_model(damage_model_file)
capacity_curves = DBELA.generate_capacity_curves(assets, damage_model)
"""
Explanation: Generate the capacity curves
End of explanation
"""
utils.plot_capacity_curves(capacity_curves)
"""
Explanation: Plot the capacity curves
End of explanation
"""
gamma = 1.2
yielding_point_index = 1.0
capacity_curves = utils.add_information(capacity_curves, "gamma", "value", gamma)
capacity_curves = utils.add_information(capacity_curves, "yielding point", "point", yielding_point_index)
"""
Explanation: Include additional information
Additional information can be added to the capacity curves generated using the above method. For instance, by setting appropriate values for the parameters gamma and yielding_point_index in the cell below, the add_information function can be used to include this data in the previously generated capacity curves.
End of explanation
"""
output_file = "../../../../../rmtk_data/capacity_curves_dbela.csv"
utils.save_SdSa_capacity_curves(capacity_curves, output_file)
"""
Explanation: Save capacity curves
Please specify below the path for the output file to save the capacity curves:
End of explanation
"""
|
cosmostatschool/MACSS2017 | Projects/mcmc/first_day.ipynb | mit | import numpy as np
import scipy.integrate as integrate
"""
Explanation: MCMC from scratch
Here we will write a simple Python program that performs the Metropolis algorithm in order to sample the posterior probability of the cosmological parameters given supernova data.
Loglike computation
First we need to be able to compute the likelihood of the data given some parameters. We will actually compute the logarithm of the likelihood, since it is numerically easier for the computer to handle.
First let's import some basic libraries:
End of explanation
"""
def E(z,OmDE):
"""
This function computes the integrand for the computation of the luminosity distance for a flat universe
z -> float
OmDE -> float
    returns
E -> float
"""
return 1/np.sqrt((1-OmDE)*(1+z)**3+OmDE)
def dl(z,OmDE,h=0.7):
"""
This function computes the luminosity distance
z -> float
OmDE -> float
    h -> float
returns
dl -> float
"""
inte=integrate.quad(E,0,z,args=(OmDE))
    # Speed of light in km/s
    c = 299792.458
    # Hubble constant in km/s/Mpc
    Ho = 100*h
return c*(1+z)/Ho * inte[0]
"""
Explanation: Now let's use a scipy integrator in order to obtain the luminosity distance
End of explanation
"""
zandmu = np.loadtxt('../data/SCPUnion2.1_mu_vs_z.txt', skiprows=5,usecols=(1,2))
covariance = np.loadtxt('../data/SCPUnion2.1_covmat_sys.txt')
type(zandmu)
np.shape(zandmu[1])
"""
Explanation: We now have to load the supernovae data in order to compare it with the theoretical data
End of explanation
"""
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sb
"""
Explanation: What does this data look like?
End of explanation
"""
plt.plot(zandmu[:,0],zandmu[:,1],'o')
heat = np.log(abs(covariance))
sb.set()
sb.heatmap(heat)
yerr = np.loadtxt('../data/SCPUnion2.1_mu_vs_z.txt', skiprows=5,usecols=[3])
plt.errorbar(zandmu[:,0],zandmu[:,1], yerr=yerr, fmt='o')
"""
Explanation: The supernovae
End of explanation
"""
dl = np.vectorize(dl)
def loglike(params,h=0.7):
"""
    This function computes the logarithm of the likelihood. It receives a vector
    params -> vector with one component (Omega Dark Energy)
"""
OmDE = params[0]
    # Compute the difference between the observed and the model distance modulus
delta = 5.*np.log10(dl(zandmu[:,0],OmDE,h))+25-zandmu[:,1]
chisquare=np.dot(delta,np.dot(np.linalg.inv(covariance),delta))
return -chisquare/2
table_omega = np.arange(0.,1.,0.01)
tableprob = [loglike([om]) for om in table_omega]
plt.plot(table_omega,tableprob)
loglike([0.6])
"""
Explanation: Now we can compute the log like.
End of explanation
"""
jla_dataset = np.genfromtxt('../data/jla/data/jla_lcparams.txt', names=True, dtype=None)
"""
Explanation: All for today!
You can try jla data
From http://supernovae.in2p3.fr/sdss_snls_jla/ReadMe.html download jla_likelihood_v6.tgz and covmat_v6.tgz.
The supernova info is contained in the file jla_lcparams.txt; go to that file and remove the "#" from the first line
Load it using genfromtxt
End of explanation
"""
jla_dataset['zcmb']
"""
Explanation: This function is very clever: you can use the names of the variables instead of the number of the corresponding column, like this:
End of explanation
"""
jla_dataset.dtype.names
plt.plot(jla_dataset['zcmb'],jla_dataset['mb'],'o')
"""
Explanation: You can ask which columns or fields are available in that file:
End of explanation
"""
import pyfits
import glob
def mu_cov(alpha, beta):
""" Assemble the full covariance matrix of distance modulus
See Betoule et al. (2014), Eq. 11-13 for reference
"""
Ceta = sum([pyfits.getdata(mat) for mat in glob.glob('../data/jla/covmat/C*.fits')])
Cmu = np.zeros_like(Ceta[::3,::3])
for i, coef1 in enumerate([1., alpha, -beta]):
for j, coef2 in enumerate([1., alpha, -beta]):
Cmu += (coef1 * coef2) * Ceta[i::3,j::3]
# Add diagonal term from Eq. 13
sigma = np.loadtxt('../data/jla/covmat/sigma_mu.txt')
sigma_pecvel = (5 * 150 / 3e5) / (np.log(10.) * sigma[:, 2])
Cmu[np.diag_indices_from(Cmu)] += sigma[:, 0] ** 2 + sigma[:, 1] ** 2 + sigma_pecvel ** 2
return Cmu
"""
Explanation: From http://supernovae.in2p3.fr/sdss_snls_jla/ReadMe.html download covmat_v6.tgz and use the example.py to compute the covariance matrix for a particular alpha, beta.
- Copy the contents of example.py here
- Change the names of the files according to your own file order
- Change all the "numpy" references to "np"
End of explanation
"""
Cmu = mu_cov(0.13, 3.1)
np.shape(Cmu)
"""
Explanation: Now you can compute the covariance matrix, but it will depend on the nuisance parameters alpha and beta
End of explanation
"""
def loglike_jla(params,h=0.7):
"""
    This function computes the logarithm of the likelihood. It receives a vector
    params -> vector with four components (Omega Dark Energy, alpha, beta and M_b)
"""
OmDE = params[0]
alpha = params[1]
beta = params[2]
MB = params[3]
covariance = mu_cov(alpha, beta)
inv_covariance=np.linalg.inv(covariance)
    # Compute the difference between the observed and the model distance modulus
mu_obs = jla_dataset['mb']-(MB-alpha*jla_dataset['x1']+beta*jla_dataset['color'])
mu_teo = 5.*np.log10(dl(jla_dataset['zcmb'],OmDE,h))+25
delta = mu_teo - mu_obs
chisquare = np.dot(delta,np.dot(inv_covariance,delta))
return -chisquare/2
param = [0.65,0.13,3.1,-20]
loglike_jla(param)
param = [0.7,0.13,3.1,-19]
loglike_jla(param)
"""
Explanation: Now I can compute the loglike in the same way as for the union2.1 data. Just copy the loglike function from before, add as params
- OmDE
- alpha
- beta
- M_b
For the covariance matrix use the function from before, and instead of the "zandmu" variable use jla_dataset
End of explanation
"""
|
sdpython/ensae_teaching_cs | _doc/notebooks/notebook_eleves/2018-2019/2018-10-09_ensemble_gradient_boosting.ipynb | mit | from jyquickhelper import add_notebook_menu
add_notebook_menu()
%matplotlib inline
"""
Explanation: 2018-10-09 Ensemble, Gradient, Boosting...
This notebook explores a few particularities of learning algorithms in order to explain certain numerical results. The AdaBoost algorithm overweights the examples on which a model makes errors.
End of explanation
"""
import numpy, numpy.random
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.metrics import confusion_matrix
N = 1000
res = []
for n in [1, 2, 5, 10, 20, 50, 80, 90, 100, 110]:
print("n=", n)
for k in range(10):
X = numpy.zeros((N, 2))
X[:, 0] = numpy.random.randint(0, 2, (N,))
X[:, 1] = numpy.random.randint(0, n+1, (N,))
Y = X[:, 0] + X[:, 1] + numpy.random.normal(size=(N,)) / 2
Y[Y < 1.5] = 0
Y[Y >= 1.5] = 1
X_train, X_test, y_train, y_test = train_test_split(X, Y)
stat = dict(N=N, n=n, ratio_train=y_train.sum()/y_train.shape[0],
k=k, ratio_test=y_test.sum()/y_test.shape[0])
for model in [LogisticRegression(solver="liblinear"),
MLPClassifier(max_iter=500),
RandomForestClassifier(n_estimators=10),
AdaBoostClassifier(DecisionTreeClassifier(), n_estimators=10)]:
obs = stat.copy()
obs["model"] = model.__class__.__name__
if obs["model"] == "AdaBoostClassifier":
obs["model"] = "AdaB-" + model.base_estimator.__class__.__name__
try:
model.fit(X_train, y_train)
except ValueError as e:
obs["erreur"] = str(e)
res.append(obs)
continue
sc = model.score(X_test, y_test)
obs["accuracy"] = sc
conf = confusion_matrix(y_test, model.predict(X_test))
try:
obs["Error-0|1"] = conf[0, 1] / conf[0, :].sum()
obs["Error-1|0"] = conf[1, 0] / conf[1, :].sum()
except Exception:
pass
res.append(obs)
from pandas import DataFrame
df = DataFrame(res)
df = df.sort_values(['n', 'model', 'model', "k"]).reset_index(drop=True)
df["diff_ratio"] = (df["ratio_test"] - df["ratio_train"]).abs()
df.head(n=5)
df.tail(n=5)
"""
Explanation: Skewed split train test
When one class is under-represented, it is difficult to predict the results of a machine learning model.
End of explanation
"""
df[df.n==100][["n", "ratio_test", "ratio_train"]].head(n=10)
#df.to_excel("data.xlsx")
columns = ["n", "N", "model"]
agg = df.groupby(columns, as_index=False).mean().sort_values(["n", "model"]).reset_index(drop=True)
agg.tail()
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 2, figsize=(10,4))
agg.plot(x="n", y="diff_ratio", ax=ax[0])
agg.plot(x="n", y="ratio_train", ax=ax[1])
agg.plot(x="n", y="ratio_test", ax=ax[1])
ax[0].set_title("Maximum difference between\nratio of first class on train and test")
ax[1].set_title("Ratio of first class on train and test")
ax[0].legend();
"""
Explanation: The train/test split is far from satisfactory when one class is under-represented.
End of explanation
"""
agg2 = agg.copy()
agg2["ratio_test2"] = agg2["ratio_test"] + agg2["n"] / 100000
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 3, figsize=(14,4))
agg2.pivot("ratio_test2", "model", "accuracy").plot(ax=ax[0])
agg2.pivot("ratio_test2", "model", "Error-0|1").plot(ax=ax[1])
agg2.pivot("ratio_test2", "model", "Error-1|0").plot(ax=ax[2])
ax[0].plot([0.5, 1.0], [0.5, 1.0], '--', label="constant")
ax[0].set_title("Accuracy")
ax[1].set_title("Error-0|1")
ax[2].set_title("Error-1|0")
ax[0].legend();
agg2.pivot("ratio_test2", "model", "Error-0|1")
"""
Explanation: A trick to avoid duplicates before performing a pivot.
End of explanation
"""
from sklearn.datasets import load_diabetes
diabetes = load_diabetes()
X, y = diabetes.data, diabetes.target
X_train, X_test, y_train, y_test = train_test_split(X, y)
from sklearn.ensemble import RandomForestRegressor
model = None
res = []
for i in range(0, 20):
if model is None:
model = RandomForestRegressor(n_estimators=1, warm_start=True)
else:
model.set_params(**dict(n_estimators=model.n_estimators+1))
model.fit(X_train, y_train)
score = model.score(X_test, y_test)
res.append(dict(n_estimators=model.n_estimators, score=score))
df = DataFrame(res)
df.head()
ax = df.plot(x="n_estimators", y="score")
ax.set_title("Apprentissage continu\nmesure de la performance à chaque itération");
"""
Explanation: The AdaBoost model builds 10 trees, just like the random forest, except that the weight associated with each tree is different rather than uniform.
Continuous training
Train a random forest, then add one tree, and another, while keeping the results of the previous training rounds.
End of explanation
"""
|
tritemio/multispot_paper | out_notebooks/usALEX-5samples-PR-raw-dir_ex_aa-fit-out-all-ph-12d.ipynb | mit | ph_sel_name = "all-ph"
data_id = "12d"
# ph_sel_name = "all-ph"
# data_id = "7d"
"""
Explanation: Executed: Mon Mar 27 11:37:14 2017
Duration: 9 seconds.
usALEX-5samples - Template
This notebook is executed through 8-spots paper analysis.
For a direct execution, uncomment the cell below.
End of explanation
"""
from fretbursts import *
init_notebook()
from IPython.display import display
"""
Explanation: Load software and filenames definitions
End of explanation
"""
data_dir = './data/singlespot/'
"""
Explanation: Data folder:
End of explanation
"""
import os
data_dir = os.path.abspath(data_dir) + '/'
assert os.path.exists(data_dir), "Path '%s' does not exist." % data_dir
"""
Explanation: Check that the folder exists:
End of explanation
"""
from glob import glob
file_list = sorted(f for f in glob(data_dir + '*.hdf5') if '_BKG' not in f)
file_list
## Selection for POLIMI 2012-12-6 dataset
# file_list.pop(2)
# file_list = file_list[1:-2]
# display(file_list)
# labels = ['22d', '27d', '17d', '12d', '7d']
## Selection for P.E. 2012-12-6 dataset
# file_list.pop(1)
# file_list = file_list[:-1]
# display(file_list)
# labels = ['22d', '27d', '17d', '12d', '7d']
## Selection for POLIMI 2012-11-26 dataset
labels = ['17d', '27d', '7d', '12d', '22d']
files_dict = {lab: fname for lab, fname in zip(labels, file_list)}
files_dict
ph_sel_map = {'all-ph': Ph_sel('all'), 'AexAem': Ph_sel(Aex='Aem')}
ph_sel = ph_sel_map[ph_sel_name]
data_id, ph_sel_name
"""
Explanation: List of data files in data_dir:
End of explanation
"""
d = loader.photon_hdf5(filename=files_dict[data_id])
"""
Explanation: Data load
Initial loading of the data:
End of explanation
"""
d.ph_times_t, d.det_t
"""
Explanation: Laser alternation selection
At this point we have only the timestamps and the detector numbers:
End of explanation
"""
d.add(det_donor_accept=(0, 1), alex_period=4000, D_ON=(2850, 580), A_ON=(900, 2580), offset=0)
"""
Explanation: We need to define some parameters: the donor and acceptor channels, the excitation period, and the donor and acceptor excitation windows:
End of explanation
"""
plot_alternation_hist(d)
"""
Explanation: We should check if everything is OK with an alternation histogram:
End of explanation
"""
loader.alex_apply_period(d)
"""
Explanation: If the plot looks good we can apply the parameters with:
End of explanation
"""
d
"""
Explanation: Measurement info
All the measurement data is in the d variable. We can print it:
End of explanation
"""
d.time_max
"""
Explanation: Or check the measurement duration:
End of explanation
"""
d.calc_bg(bg.exp_fit, time_s=60, tail_min_us='auto', F_bg=1.7)
dplot(d, timetrace_bg)
d.rate_m, d.rate_dd, d.rate_ad, d.rate_aa
"""
Explanation: Compute background
Compute the background using automatic threshold:
End of explanation
"""
from mpl_toolkits.axes_grid1 import AxesGrid
import lmfit
print('lmfit version:', lmfit.__version__)
assert d.dir_ex == 0
assert d.leakage == 0
d.burst_search(m=10, F=6, ph_sel=ph_sel)
print(d.ph_sel, d.num_bursts)
ds_sa = d.select_bursts(select_bursts.naa, th1=30)
ds_sa.num_bursts
"""
Explanation: Burst search and selection
End of explanation
"""
mask = (d.naa[0] - np.abs(d.na[0] + d.nd[0])) > 30
ds_saw = d.select_bursts_mask_apply([mask])
ds_sas0 = ds_sa.select_bursts(select_bursts.S, S2=0.10)
ds_sas = ds_sa.select_bursts(select_bursts.S, S2=0.15)
ds_sas2 = ds_sa.select_bursts(select_bursts.S, S2=0.20)
ds_sas3 = ds_sa.select_bursts(select_bursts.S, S2=0.25)
ds_st = d.select_bursts(select_bursts.size, add_naa=True, th1=30)
ds_sas.num_bursts
dx = ds_sas0
size = dx.na[0] + dx.nd[0]
s_hist, s_bins = np.histogram(size, bins=np.r_[-15 : 25 : 1], density=True)
s_ax = s_bins[:-1] + 0.5*(s_bins[1] - s_bins[0])
plot(s_ax, s_hist, '-o', alpha=0.5)
dx = ds_sas
size = dx.na[0] + dx.nd[0]
s_hist, s_bins = np.histogram(size, bins=np.r_[-15 : 25 : 1], density=True)
s_ax = s_bins[:-1] + 0.5*(s_bins[1] - s_bins[0])
plot(s_ax, s_hist, '-o', alpha=0.5)
dx = ds_sas2
size = dx.na[0] + dx.nd[0]
s_hist, s_bins = np.histogram(size, bins=np.r_[-15 : 25 : 1], density=True)
s_ax = s_bins[:-1] + 0.5*(s_bins[1] - s_bins[0])
plot(s_ax, s_hist, '-o', alpha=0.5)
dx = ds_sas3
size = dx.na[0] + dx.nd[0]
s_hist, s_bins = np.histogram(size, bins=np.r_[-15 : 25 : 1], density=True)
s_ax = s_bins[:-1] + 0.5*(s_bins[1] - s_bins[0])
plot(s_ax, s_hist, '-o', alpha=0.5)
plt.title('(nd + na) for A-only population using different S cutoff');
dx = ds_sa
alex_jointplot(dx);
dplot(ds_sa, hist_S)
"""
Explanation: Preliminary selection and plots
End of explanation
"""
dx = ds_sa
bin_width = 0.03
bandwidth = 0.03
bins = np.r_[-0.2 : 1 : bin_width]
x_kde = np.arange(bins.min(), bins.max(), 0.0002)
## Weights
weights = None
## Histogram fit
fitter_g = mfit.MultiFitter(dx.S)
fitter_g.histogram(bins=np.r_[-0.2 : 1.2 : bandwidth])
fitter_g.fit_histogram(model = mfit.factory_two_gaussians(p1_center=0.1, p2_center=0.4))
S_hist_orig = fitter_g.hist_pdf
S_2peaks = fitter_g.params.loc[0, 'p1_center']
dir_ex_S2p = S_2peaks/(1 - S_2peaks)
print('Fitted direct excitation (na/naa) [2-Gauss]:', dir_ex_S2p)
## KDE
fitter_g.calc_kde(bandwidth=bandwidth)
fitter_g.find_kde_max(x_kde, xmin=0, xmax=0.15)
S_peak = fitter_g.kde_max_pos[0]
dir_ex_S_kde = S_peak/(1 - S_peak)
print('Fitted direct excitation (na/naa) [KDE]: ', dir_ex_S_kde)
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(fitter_g, ax=ax[0])
ax[0].set_title('2-Gaussians fit (S_fit = %.2f %%)' % (S_2peaks*100))
mfit.plot_mfit(fitter_g, ax=ax[1], plot_model=False, plot_kde=True)
ax[1].set_title('KDE fit (S_fit = %.2f %%)' % (S_peak*100));
## 2-Asym-Gaussian
fitter_ag = mfit.MultiFitter(dx.S)
fitter_ag.histogram(bins=np.r_[-0.2 : 1.2 : bandwidth])
fitter_ag.fit_histogram(model = mfit.factory_two_asym_gaussians(p1_center=0.1, p2_center=0.4))
#print(fitter_ag.fit_obj[0].model.fit_report())
S_2peaks_a = fitter_ag.params.loc[0, 'p1_center']
dir_ex_S2pa = S_2peaks_a/(1 - S_2peaks_a)
print('Fitted direct excitation (na/naa) [2-Gauss]:', dir_ex_S2pa)
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(fitter_g, ax=ax[0])
ax[0].set_title('2-Gaussians fit (S_fit = %.2f %%)' % (S_2peaks*100))
mfit.plot_mfit(fitter_ag, ax=ax[1])
ax[1].set_title('2-Asym-Gaussians fit (S_fit = %.2f %%)' % (S_2peaks_a*100));
"""
Explanation: A-direct excitation fitting
To extract the A-direct excitation coefficient we need to fit the
S values for the A-only population.
The S value for the A-only population is fitted with different methods:
- Histogram fit with 2 Gaussians or with 2 asymmetric Gaussians
(an asymmetric Gaussian has the right and left sides of the peak
decreasing according to different sigmas).
- KDE maximum
In the following we apply these methods using different selection
or weighting schemes to reduce the contribution of the FRET population
and make fitting of the A-only population easier.
Even selection
Here the A-only and FRET populations are selected evenly.
End of explanation
"""
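As a hedged illustration (not the FRETBursts implementation; the helper function and the numeric values here are hypothetical), an asymmetric Gaussian simply uses separate widths on each side of the peak, and the direct-excitation coefficient follows from the fitted S peak position:

```python
import numpy as np

def asym_gaussian(x, center, sigma1, sigma2):
    # Separate widths: sigma1 governs the left side of the peak,
    # sigma2 the right side.
    left = np.exp(-0.5 * ((x - center) / sigma1) ** 2)
    right = np.exp(-0.5 * ((x - center) / sigma2) ** 2)
    return np.where(x < center, left, right)

# Direct-excitation coefficient from a fitted S peak (hypothetical value):
S_peak = 0.1
dir_ex = S_peak / (1 - S_peak)
print(round(dir_ex, 4))  # 0.1111
```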
dx = ds_sa.select_bursts(select_bursts.nd, th1=-100, th2=0)
fitter = bext.bursts_fitter(dx, 'S')
fitter.fit_histogram(model = mfit.factory_gaussian(center=0.1))
S_1peaks_th = fitter.params.loc[0, 'center']
dir_ex_S1p = S_1peaks_th/(1 - S_1peaks_th)
print('Fitted direct excitation (na/naa) [1-Gauss]:', dir_ex_S1p)
mfit.plot_mfit(fitter)
plt.xlim(-0.1, 0.6)
"""
Explanation: Zero threshold on nd
Select bursts with:
$$n_d < 0$$.
End of explanation
"""
dx = ds_sa
## Weights
weights = 1 - mfit.gaussian(dx.S[0], fitter_g.params.loc[0, 'p2_center'], fitter_g.params.loc[0, 'p2_sigma'])
weights[dx.S[0] >= fitter_g.params.loc[0, 'p2_center']] = 0
## Histogram fit
fitter_w1 = mfit.MultiFitter(dx.S)
fitter_w1.weights = [weights]
fitter_w1.histogram(bins=np.r_[-0.2 : 1.2 : bandwidth])
fitter_w1.fit_histogram(model = mfit.factory_two_gaussians(p1_center=0.1, p2_center=0.4))
S_2peaks_w1 = fitter_w1.params.loc[0, 'p1_center']
dir_ex_S2p_w1 = S_2peaks_w1/(1 - S_2peaks_w1)
print('Fitted direct excitation (na/naa) [2-Gauss]:', dir_ex_S2p_w1)
## KDE
fitter_w1.calc_kde(bandwidth=bandwidth)
fitter_w1.find_kde_max(x_kde, xmin=0, xmax=0.15)
S_peak_w1 = fitter_w1.kde_max_pos[0]
dir_ex_S_kde_w1 = S_peak_w1/(1 - S_peak_w1)
print('Fitted direct excitation (na/naa) [KDE]: ', dir_ex_S_kde_w1)
def plot_weights(x, weights, ax):
ax2 = ax.twinx()
x_sort = x.argsort()
ax2.plot(x[x_sort], weights[x_sort], color='k', lw=4, alpha=0.4)
ax2.set_ylabel('Weights');
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(fitter_w1, ax=ax[0])
mfit.plot_mfit(fitter_g, ax=ax[0], plot_model=False, plot_kde=False)
plot_weights(dx.S[0], weights, ax=ax[0])
ax[0].set_title('2-Gaussians fit (S_fit = %.2f %%)' % (S_2peaks_w1*100))
mfit.plot_mfit(fitter_w1, ax=ax[1], plot_model=False, plot_kde=True)
mfit.plot_mfit(fitter_g, ax=ax[1], plot_model=False, plot_kde=False)
plot_weights(dx.S[0], weights, ax=ax[1])
ax[1].set_title('KDE fit (S_fit = %.2f %%)' % (S_peak_w1*100));
"""
Explanation: Selection 1
Bursts are weighted using $w = f(S)$, where the function $f(S)$ is a
Gaussian fitted to the $S$ histogram of the FRET population.
End of explanation
"""
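A minimal numpy sketch of this weighting scheme, with hypothetical S values and fit parameters (the simple `gaussian` helper below is illustrative, not FRETBursts' `mfit.gaussian`):

```python
import numpy as np

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

S = np.array([0.05, 0.40, 0.90])   # hypothetical per-burst S values
mu_fret, sigma_fret = 0.40, 0.10   # hypothetical FRET-peak fit parameters

weights = 1 - gaussian(S, mu_fret, sigma_fret)
weights[S >= mu_fret] = 0          # fully suppress bursts at or above the FRET peak
print(weights)
```

Bursts near the FRET peak get weight close to 0, while bursts far below it (the A-only population) keep weight close to 1.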
## Weights
sizes = dx.nd[0] + dx.na[0] #- dir_ex_S_kde_w3*dx.naa[0]
weights = dx.naa[0] - abs(sizes)
weights[weights < 0] = 0
## Histogram
fitter_w4 = mfit.MultiFitter(dx.S)
fitter_w4.weights = [weights]
fitter_w4.histogram(bins=np.r_[-0.2 : 1.2 : bandwidth])
fitter_w4.fit_histogram(model = mfit.factory_two_gaussians(p1_center=0.1, p2_center=0.4))
S_2peaks_w4 = fitter_w4.params.loc[0, 'p1_center']
dir_ex_S2p_w4 = S_2peaks_w4/(1 - S_2peaks_w4)
print('Fitted direct excitation (na/naa) [2-Gauss]:', dir_ex_S2p_w4)
## KDE
fitter_w4.calc_kde(bandwidth=bandwidth)
fitter_w4.find_kde_max(x_kde, xmin=0, xmax=0.15)
S_peak_w4 = fitter_w4.kde_max_pos[0]
dir_ex_S_kde_w4 = S_peak_w4/(1 - S_peak_w4)
print('Fitted direct excitation (na/naa) [KDE]: ', dir_ex_S_kde_w4)
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(fitter_w4, ax=ax[0])
mfit.plot_mfit(fitter_g, ax=ax[0], plot_model=False, plot_kde=False)
#plot_weights(dx.S[0], weights, ax=ax[0])
ax[0].set_title('2-Gaussians fit (S_fit = %.2f %%)' % (S_2peaks_w4*100))
mfit.plot_mfit(fitter_w4, ax=ax[1], plot_model=False, plot_kde=True)
mfit.plot_mfit(fitter_g, ax=ax[1], plot_model=False, plot_kde=False)
#plot_weights(dx.S[0], weights, ax=ax[1])
ax[1].set_title('KDE fit (S_fit = %.2f %%)' % (S_peak_w4*100));
"""
Explanation: Selection 2
Bursts are here weighted using weights $w$:
$$w = n_{aa} - |n_a + n_d|$$
End of explanation
"""
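A small sketch of this weight computation with hypothetical counts, showing that negative weights are clipped to zero:

```python
import numpy as np

naa = np.array([40.0, 10.0])   # hypothetical A-excitation counts
na = np.array([5.0, 20.0])
nd = np.array([3.0, 15.0])

weights = naa - np.abs(na + nd)
weights[weights < 0] = 0       # clip negative weights to zero
print(weights)  # [32.  0.]
```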
mask = (d.naa[0] - np.abs(d.na[0] + d.nd[0])) > 30
ds_saw = d.select_bursts_mask_apply([mask])
print(ds_saw.num_bursts)
dx = ds_saw
## Weights
weights = None
## 2-Gaussians
fitter_w5 = mfit.MultiFitter(dx.S)
fitter_w5.histogram(bins=np.r_[-0.2 : 1.2 : bandwidth])
fitter_w5.fit_histogram(model = mfit.factory_two_gaussians(p1_center=0.1, p2_center=0.4))
S_2peaks_w5 = fitter_w5.params.loc[0, 'p1_center']
dir_ex_S2p_w5 = S_2peaks_w5/(1 - S_2peaks_w5)
print('Fitted direct excitation (na/naa) [2-Gauss]:', dir_ex_S2p_w5)
## KDE
fitter_w5.calc_kde(bandwidth=bandwidth)
fitter_w5.find_kde_max(x_kde, xmin=0, xmax=0.15)
S_peak_w5 = fitter_w5.kde_max_pos[0]
S_2peaks_w5_fiterr = fitter_w5.fit_res[0].params['p1_center'].stderr
dir_ex_S_kde_w5 = S_peak_w5/(1 - S_peak_w5)
print('Fitted direct excitation (na/naa) [KDE]: ', dir_ex_S_kde_w5)
## 2-Asym-Gaussians
fitter_w5a = mfit.MultiFitter(dx.S)
fitter_w5a.histogram(bins=np.r_[-0.2 : 1.2 : bandwidth])
fitter_w5a.fit_histogram(model = mfit.factory_two_asym_gaussians(p1_center=0.05, p2_center=0.3))
S_2peaks_w5a = fitter_w5a.params.loc[0, 'p1_center']
dir_ex_S2p_w5a = S_2peaks_w5a/(1 - S_2peaks_w5a)
#print(fitter_w5a.fit_obj[0].model.fit_report(min_correl=0.5))
print('Fitted direct excitation (na/naa) [2-Asym-Gauss]:', dir_ex_S2p_w5a)
fig, ax = plt.subplots(1, 3, figsize=(19, 4.5))
mfit.plot_mfit(fitter_w5, ax=ax[0])
mfit.plot_mfit(fitter_g, ax=ax[0], plot_model=False, plot_kde=False)
ax[0].set_title('2-Gaussians fit (S_fit = %.2f %%)' % (S_2peaks_w5*100))
mfit.plot_mfit(fitter_w5, ax=ax[1], plot_model=False, plot_kde=True)
mfit.plot_mfit(fitter_g, ax=ax[1], plot_model=False, plot_kde=False)
ax[1].set_title('KDE fit (S_fit = %.2f %%)' % (S_peak_w5*100));
mfit.plot_mfit(fitter_w5a, ax=ax[2])
mfit.plot_mfit(fitter_g, ax=ax[2], plot_model=False, plot_kde=False)
ax[2].set_title('2-Asym-Gaussians fit (S_fit = %.2f %%)' % (S_2peaks_w5a*100));
"""
Explanation: Selection 3
Bursts are here selected according to:
$$n_{aa} - |n_a + n_d| > 30$$
End of explanation
"""
sample = data_id
n_bursts_aa = ds_sas.num_bursts[0]
"""
Explanation: Save data to file
End of explanation
"""
variables = ('sample n_bursts_aa dir_ex_S1p dir_ex_S_kde dir_ex_S2p dir_ex_S2pa '
'dir_ex_S2p_w1 dir_ex_S_kde_w1 dir_ex_S_kde_w4 dir_ex_S_kde_w5 dir_ex_S2p_w5 dir_ex_S2p_w5a '
'S_2peaks_w5 S_2peaks_w5_fiterr\n')
"""
Explanation: The following string contains the list of variables to be saved. When saving, the order of the variables is preserved.
End of explanation
"""
variables_csv = variables.replace(' ', ',')
fmt_float = '{%s:.6f}'
fmt_int = '{%s:d}'
fmt_str = '{%s}'
fmt_dict = {**{'sample': fmt_str},
**{k: fmt_int for k in variables.split() if k.startswith('n_bursts')}}
var_dict = {name: eval(name) for name in variables.split()}
var_fmt = ', '.join([fmt_dict.get(name, fmt_float) % name for name in variables.split()]) + '\n'
data_str = var_fmt.format(**var_dict)
print(variables_csv)
print(data_str)
# NOTE: The file name should be the notebook name but with .csv extension
with open('results/usALEX-5samples-PR-raw-dir_ex_aa-fit-%s.csv' % ph_sel_name, 'a') as f:
f.seek(0, 2)
if f.tell() == 0:
f.write(variables_csv)
f.write(data_str)
"""
Explanation: This is just a trick to format the different variables:
End of explanation
"""
|
zizouvb/deeplearning | tv-script-generation/dlnd_tv_script_generation.ipynb | mit | """
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
"""
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
"""
view_sentence_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
"""
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
"""
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
# Dictionary to go from the words to an id, we'll call vocab_to_int
vocab_to_int = {word: i for i,word in enumerate(set(text))}
# Dictionary to go from the id to word, we'll call int_to_vocab
int_to_vocab = {i:word for i, word in enumerate(set(text))}
return vocab_to_int, int_to_vocab
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
"""
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
"""
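A quick sketch with a toy word list, showing that the two lookup dictionaries built this way are inverses of each other:

```python
words = ['moe', 'homer', 'moe', 'bart']
vocab = set(words)

vocab_to_int = {word: i for i, word in enumerate(vocab)}
int_to_vocab = {i: word for i, word in enumerate(vocab)}

# Round-tripping any word through both dicts returns the original word.
assert all(int_to_vocab[vocab_to_int[w]] == w for w in words)
print(len(vocab_to_int))  # 3
```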
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
return {'.':'||Period||',
',':'||Comma||',
'"':'||Quotation_Mark||',
';':'||Semicolon||',
'?':'||Question_mark||',
'!':'||Exclamation_mark||',
'(':'||Left_Parentheses||',
')':'||Right_Parentheses||',
'--':'||Dash||',
'\n':'||Return||'}
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
"""
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to tokenize the symbols and add a delimiter (space) around each one. This separates each symbol as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
"""
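To see how such a dictionary is used, here is a minimal sketch (the `tokenize_text` helper name is hypothetical) of replacing each symbol with its token padded by spaces, so that splitting on whitespace treats punctuation as its own word:

```python
token_dict = {'!': '||Exclamation_mark||', '.': '||Period||'}

def tokenize_text(text, token_dict):
    # Pad each token with spaces so it becomes a separate "word".
    for symbol, token in token_dict.items():
        text = text.replace(symbol, ' {} '.format(token))
    return text

print(tokenize_text('bye!', token_dict).split())
# ['bye', '||Exclamation_mark||']
```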
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
"""
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
"""
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
"""
def get_inputs():
"""
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
"""
# TODO: Implement Function
# Input text placeholder named "input" using the TF Placeholder name parameter.
Input = tf.placeholder(tf.int32,shape=[None, None],name="input")
# Targets placeholder
Targets = tf.placeholder(tf.int32,shape=[None, None],name="targets")
# Learning Rate placeholder
LearningRate = tf.placeholder(tf.float32,name="learning_rate")
return Input, Targets, LearningRate
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_inputs(get_inputs)
"""
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
"""
def get_init_cell(batch_size, rnn_size):
"""
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
"""
# TODO: Implement Function
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
#drop = tf.contrib.rnn.DropoutWrapper(lstm)
Cell = tf.contrib.rnn.MultiRNNCell([lstm]*1)
InitialState = Cell.zero_state(batch_size,tf.float32)
InitialState = tf.identity(InitialState, name = "initial_state")
return Cell, InitialState
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_init_cell(get_init_cell)
"""
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The RNN size should be set using rnn_size
- Initialize the cell state using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
"""
def get_embed(input_data, vocab_size, embed_dim):
"""
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
"""
# TODO: Implement Function
embedding = tf.Variable(tf.random_uniform([vocab_size, embed_dim]))
embed = tf.nn.embedding_lookup(embedding,input_data)
return embed
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_embed(get_embed)
"""
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
"""
def build_rnn(cell, inputs):
"""
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
"""
# TODO: Implement Function
# Build the RNN using the tf.nn.dynamic_rnn()
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype = tf.float32)
#Apply the name "final_state" to the final state using tf.identity()
final_state = tf.identity(final_state, name="final_state")
return outputs, final_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_rnn(build_rnn)
"""
Explanation: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
End of explanation
"""
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
"""
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
"""
# TODO: Implement Function
# Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
embed = get_embed(input_data, vocab_size, embed_dim)
# Build RNN using cell and your build_rnn(cell, inputs) function.
rnn, FinalState = build_rnn(cell, embed)
# Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Logits = tf.contrib.layers.fully_connected(
inputs = rnn,
num_outputs = vocab_size,
activation_fn = None)
return Logits, FinalState
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_nn(build_nn)
"""
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
"""
def get_batches(int_text, batch_size, seq_length):
"""
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
"""
# TODO: Implement Function
    # Compute the number of batches
num_batches = int(len(int_text) / batch_size / seq_length)
#print(len(int_text), batch_size, seq_length,num_batches)
# Extract input_data and target_data
input_vector = np.array(int_text[:num_batches * batch_size * seq_length])
target_vector = np.array(int_text[1:(num_batches * batch_size * seq_length)+1])
#print(len(input_vector))
# Notice that the last target value in the last batch is the first input value of the first batch.
target_vector[-1] = input_vector[0]
# reshape to batch size
inputs = input_vector.reshape(batch_size, -1)
targets = target_vector.reshape(batch_size, -1)
#print(inputs.shape)
    # Split into sequences
batch_inputs = np.array(np.split(inputs, num_batches, 1))
batch_targets = np.array(np.split(targets, num_batches, 1))
#print(batch_inputs[0].shape)
# concatenate inputs and targets batches
Batches = np.array(list(zip(batch_inputs, batch_targets)))
return Batches
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_batches(get_batches)
"""
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2], [ 7 8], [13 14]]
# Batch of targets
[[ 2 3], [ 8 9], [14 15]]
]
# Second Batch
[
# Batch of Input
[[ 3 4], [ 9 10], [15 16]]
# Batch of targets
[[ 4 5], [10 11], [16 17]]
]
# Third Batch
[
# Batch of Input
[[ 5 6], [11 12], [17 18]]
# Batch of targets
[[ 6 7], [12 13], [18 1]]
]
]
```
Notice that the last target value in the last batch is the first input value of the first batch. In this case, 1. This is a common technique used when creating sequence batches, although it is rather unintuitive.
End of explanation
"""
# Number of Epochs
num_epochs = 100
# Batch Size
batch_size = 512
# RNN Size
rnn_size = 512
# Embedding Dimension Size
embed_dim = 300
# Sequence Length
seq_length = 5
# Learning Rate
learning_rate = 0.005
# Show stats for every n number of batches
show_every_n_batches = 100
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
save_dir = './save'
"""
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set embed_dim to the size of the embedding.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches after which the neural network should print progress.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
"""
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
"""
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
"""
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
"""
Explanation: Checkpoint
End of explanation
"""
def get_tensors(loaded_graph):
"""
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
"""
# TODO: Implement Function
InputTensor = loaded_graph.get_tensor_by_name(name = "input:0")
InitialStateTensor = loaded_graph.get_tensor_by_name(name = "initial_state:0")
FinalStateTensor = loaded_graph.get_tensor_by_name(name = "final_state:0")
ProbsTensor = loaded_graph.get_tensor_by_name(name = "probs:0")
return InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_tensors(get_tensors)
"""
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
"""
def pick_word(probabilities, int_to_vocab):
"""
Pick the next word in the generated text
    :param probabilities: Probabilities of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
"""
# TODO: Implement Function
# First "bad" idea
#return int_to_vocab[np.argmax(probabilities)]
    # A slight bit of randomness is helpful when predicting the next word; otherwise, the predictions might fall into a loop of the same words.
word_idx = np.random.choice(len(probabilities), size=1, p=probabilities)[0]
pred_word = int_to_vocab[word_idx]
return pred_word
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_pick_word(pick_word)
"""
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
End of explanation
"""
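A small standalone sketch of why sampling beats argmax here, using toy probabilities (and numpy's newer Generator API rather than the legacy np.random.choice call above):

```python
import numpy as np

rng = np.random.default_rng(0)
probabilities = np.array([0.1, 0.7, 0.2])
int_to_vocab = {0: 'moe', 1: 'homer', 2: 'bart'}

# Sampling proportionally to the predicted probabilities keeps variety;
# argmax would always return 'homer' and can trap the generator in loops.
counts = np.zeros(3, dtype=int)
for _ in range(1000):
    counts[rng.choice(len(probabilities), p=probabilities)] += 1
print(int_to_vocab[int(np.argmax(counts))])  # 'homer' is sampled most often
```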
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
"""
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.15/_downloads/plot_brainstorm_phantom_elekta.ipynb | bsd-3-clause | # Authors: Eric Larson <larson.eric.d@gmail.com>
#
# License: BSD (3-clause)
import os.path as op
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne import find_events, fit_dipole
from mne.datasets.brainstorm import bst_phantom_elekta
from mne.io import read_raw_fif
from mayavi import mlab
print(__doc__)
"""
Explanation: Brainstorm Elekta phantom tutorial dataset
Here we compute the evoked from raw for the Brainstorm Elekta phantom
tutorial dataset. For comparison, see [1]_ and:
http://neuroimage.usc.edu/brainstorm/Tutorials/PhantomElekta
References
.. [1] Tadel F, Baillet S, Mosher JC, Pantazis D, Leahy RM.
Brainstorm: A User-Friendly Application for MEG/EEG Analysis.
Computational Intelligence and Neuroscience, vol. 2011, Article ID
879716, 13 pages, 2011. doi:10.1155/2011/879716
End of explanation
"""
data_path = bst_phantom_elekta.data_path()
raw_fname = op.join(data_path, 'kojak_all_200nAm_pp_no_chpi_no_ms_raw.fif')
raw = read_raw_fif(raw_fname)
"""
Explanation: The data were collected with an Elekta Neuromag VectorView system at 1000 Hz
and low-pass filtered at 330 Hz. Here the medium-amplitude (200 nAm) data
are read to construct instances of :class:mne.io.Raw.
End of explanation
"""
events = find_events(raw, 'STI201')
raw.plot(events=events)
raw.info['bads'] = ['MEG2421']
"""
Explanation: The data channel array consisted of 204 MEG planar gradiometers,
102 axial magnetometers, and 3 stimulus channels. Let's get the events
for the phantom, where each dipole (1-32) gets its own event:
End of explanation
"""
raw.plot_psd(tmax=60., average=False)
"""
Explanation: The data have strong line frequency (60 Hz and harmonics) and cHPI coil
noise (five peaks around 300 Hz). Here we plot only out to 60 seconds
to save memory:
End of explanation
"""
raw.fix_mag_coil_types()
raw = mne.preprocessing.maxwell_filter(raw, origin=(0., 0., 0.))
"""
Explanation: Let's use Maxwell filtering to clean the data a bit.
Ideally we would have the fine calibration and cross-talk information
for the site of interest, but we don't, so we just do:
End of explanation
"""
raw.filter(None, 40., fir_design='firwin')
raw.plot(events=events)
"""
Explanation: We know our phantom produces sinusoidal bursts below 25 Hz, so let's filter.
End of explanation
"""
tmin, tmax = -0.1, 0.1
event_id = list(range(1, 33))
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, baseline=(None, -0.01),
decim=3, preload=True)
epochs['1'].average().plot()
"""
Explanation: Now we epoch our data, average it, and look at the first dipole response.
The first peak appears around 3 ms. Because we low-passed at 40 Hz,
we can also decimate our data to save memory.
End of explanation
"""
sphere = mne.make_sphere_model(r0=(0., 0., 0.), head_radius=None)
mne.viz.plot_alignment(raw.info, subject='sample',
meg='helmet', bem=sphere, dig=True,
surfaces=['brain'])
"""
Explanation: Let's use a spherical head geometry model and check the coordinate
alignment and the sphere location. The phantom is properly modeled by
a single-shell sphere with origin (0., 0., 0.).
End of explanation
"""
cov = mne.compute_covariance(epochs, tmax=0)
data = []
for ii in event_id:
evoked = epochs[str(ii)].average()
idx_peak = np.argmax(evoked.copy().pick_types(meg='grad').data.std(axis=0))
t_peak = evoked.times[idx_peak]
evoked.crop(t_peak, t_peak)
data.append(evoked.data[:, 0])
evoked = mne.EvokedArray(np.array(data).T, evoked.info, tmin=0.)
del epochs, raw
dip = fit_dipole(evoked, cov, sphere, n_jobs=1)[0]
"""
Explanation: Let's do some dipole fits. We first compute the noise covariance,
then do the fits for each event_id taking the time instant that maximizes
the global field power.
End of explanation
"""
actual_pos, actual_ori = mne.dipole.get_phantom_dipoles()
actual_amp = 100. # nAm
fig, (ax1, ax2, ax3) = plt.subplots(nrows=3, ncols=1, figsize=(6, 7))
diffs = 1000 * np.sqrt(np.sum((dip.pos - actual_pos) ** 2, axis=-1))
print('mean(position error) = %s' % (np.mean(diffs),))
ax1.bar(event_id, diffs)
ax1.set_xlabel('Dipole index')
ax1.set_ylabel('Loc. error (mm)')
angles = np.arccos(np.abs(np.sum(dip.ori * actual_ori, axis=1)))
print('mean(angle error) = %s' % (np.mean(angles),))
ax2.bar(event_id, angles)
ax2.set_xlabel('Dipole index')
ax2.set_ylabel('Angle error (rad)')
amps = actual_amp - dip.amplitude / 1e-9
print('mean(abs amplitude error) = %s' % (np.mean(np.abs(amps)),))
ax3.bar(event_id, amps)
ax3.set_xlabel('Dipole index')
ax3.set_ylabel('Amplitude error (nAm)')
fig.tight_layout()
plt.show()
"""
Explanation: Now we can compare to the actual locations, taking the difference in mm:
End of explanation
"""
def plot_pos_ori(pos, ori, color=(0., 0., 0.)):
mlab.points3d(pos[:, 0], pos[:, 1], pos[:, 2], scale_factor=0.005,
color=color)
mlab.quiver3d(pos[:, 0], pos[:, 1], pos[:, 2],
ori[:, 0], ori[:, 1], ori[:, 2],
scale_factor=0.03,
color=color)
mne.viz.plot_alignment(evoked.info, bem=sphere, surfaces=[])
# Plot the position and the orientation of the actual dipole
plot_pos_ori(actual_pos, actual_ori, color=(1., 0., 0.))
# Plot the position and the orientation of the estimated dipole
plot_pos_ori(dip.pos, dip.ori, color=(0., 0., 1.))
"""
Explanation: Let's plot the positions and the orientations of the actual and the estimated
dipoles
End of explanation
"""
|
hparik11/Deep-Learning-Nanodegree-Foundation-Repository | Project1/first-neural-network/dlnd-your-first-neural-network.ipynb | mit | %matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
"""
Explanation: Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
End of explanation
"""
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
"""
Explanation: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
End of explanation
"""
rides[:24*10].plot(x='dteday', y='cnt')
"""
Explanation: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership, and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of which likely affect the number of riders. You'll be trying to capture all this with your model.
End of explanation
"""
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
"""
Explanation: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
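A tiny standalone illustration of what get_dummies() produces (the column values below are made up, not from the bike data):

```python
import pandas as pd

df = pd.DataFrame({'season': [1, 2, 2, 4]})

# One binary indicator column per category value: season_1, season_2, season_4
dummies = pd.get_dummies(df['season'], prefix='season')
```

Each row has exactly one indicator set, so the network sees a separate input per category instead of an arbitrary numeric code.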
End of explanation
"""
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
"""
Explanation: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
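As a quick sanity check, standardizing and then applying the saved factors in reverse recovers the original values (toy numbers below):

```python
import numpy as np

x = np.array([10.0, 12.0, 8.0, 14.0])
mean, std = x.mean(), x.std()

z = (x - mean) / std         # standardized: zero mean, unit standard deviation
restored = z * std + mean    # invert using the saved scaling factors
```

This round trip is exactly what the prediction step does later, turning scaled network outputs back into rider counts.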
End of explanation
"""
# Save data for approximately the last 21 days
test_data = data[-21*24:]
# Now remove the test data from the data set
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
"""
Explanation: Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
End of explanation
"""
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
"""
Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
End of explanation
"""
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5,
(self.input_nodes, self.hidden_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.lr = learning_rate
#### TODO: Set self.activation_function to your implemented sigmoid function ####
#
# Note: in Python, you can define a function with a lambda expression,
# as shown below.
self.activation_function = lambda x : 1 / (1 + np.exp(-x)) # sigmoid
### If the lambda code above is not something you're familiar with,
# You can uncomment out the following three lines and put your
# implementation there instead.
#
#def sigmoid(x):
# return 0 # Replace 0 with your sigmoid calculation here
#self.activation_function = sigmoid
def train(self, features, targets):
''' Train the network on batch of features and targets.
Arguments
---------
features: 2D array, each row is one data record, each column is a feature
targets: 1D array of target values
'''
n_records = features.shape[0]
delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)
delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)
for X, y in zip(features, targets):
#### Implement the forward pass here ####
### Forward pass ###
# TODO: Hidden layer - Replace these values with your calculations.
hidden_inputs = np.dot(X, self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer - Replace these values with your calculations.
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error - Replace this value with your calculations.
error = y - final_outputs # Output layer error is the difference between desired target and actual output.
output_error_term = error * 1.0 # derivative of the output activation f(x) = x is 1
## propagate errors to hidden layer
# TODO: Calculate the hidden layer's contribution to the error
hidden_error = np.dot(self.weights_hidden_to_output, output_error_term)
# TODO: Calculate the error term for the hidden layer
hidden_error_term = hidden_error * (hidden_outputs * (1.0 - hidden_outputs))
# Weight step (input to hidden)
delta_weights_i_h += hidden_error_term * X[:,None]
# Weight step (hidden to output)
#print(hidden_outputs.shape)
#print(output_error_term.shape)
delta_weights_h_o += output_error_term * hidden_outputs[:,None]
# TODO: Update the weights - Replace these values with your calculations.
self.weights_hidden_to_output += self.lr * delta_weights_h_o / n_records # update hidden-to-output weights with gradient descent step
self.weights_input_to_hidden += self.lr * delta_weights_i_h / n_records # update input-to-hidden weights with gradient descent step
def run(self, features):
''' Run a forward pass through the network with input features
Arguments
---------
features: 1D array of feature values
'''
#### Implement the forward pass here ####
# TODO: Hidden layer - replace these values with the appropriate calculations.
hidden_inputs = np.dot(features, self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer - Replace these values with the appropriate calculations.
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
"""
Explanation: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
<img src="assets/neural_network.png" width=300px>
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression; the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.
Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.
2. Implement the forward pass in the train method.
3. Implement the backpropagation algorithm in the train method, including calculating the output error.
4. Implement the forward pass in the run method.
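For the hint above, the sigmoid and its derivative can be sketched as follows (a minimal standalone illustration, not the graded implementation):

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_prime(x):
    # Derivative of the sigmoid in terms of its own output
    s = sigmoid(x)
    return s * (1 - s)

# The output activation is f(x) = x, whose slope is 1 everywhere,
# so the output error term reduces to the raw error times 1.
```

These two derivatives are the only calculus needed for the backward pass in this project.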
End of explanation
"""
import unittest
inputs = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
test_w_i_h = np.array([[0.1, -0.2],
[0.4, 0.5],
[-0.3, 0.2]])
test_w_h_o = np.array([[0.3],
[-0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328],
[-0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, -0.20185996],
[0.39775194, 0.50074398],
[-0.29887597, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
"""
Explanation: Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.
End of explanation
"""
import sys
### Set the hyperparameters here ###
iterations = 50000
learning_rate = 0.05
hidden_nodes = 10
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for ii in range(iterations):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
X, y = train_features.loc[batch].values, train_targets.loc[batch, 'cnt']  # .ix is deprecated; use .loc
network.train(X, y)
# Printing out the training progress
train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
sys.stdout.flush()
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
_ = plt.ylim()
"""
Explanation: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more complex the patterns the model can fit. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, the model won't have enough capacity to learn; if it is too high, there are too many options for the direction that the learning can take. The trick here is to find the right balance in the number of hidden units you choose.
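The batching step at the heart of SGD can be sketched in isolation (array shapes below are illustrative, not the bike data's):

```python
import numpy as np

rng = np.random.default_rng(0)
features = rng.random((1000, 5))  # toy training set
batch_size = 128

# Each SGD pass trains on a random batch instead of the full data set
batch_idx = rng.choice(len(features), size=batch_size)
batch = features[batch_idx]
```

This mirrors the `np.random.choice` call in the training loop below: one cheap gradient step per random batch, repeated many times.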
End of explanation
"""
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features).T*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions[0]))
ax.legend()
dates = pd.to_datetime(rides.loc[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
"""
Explanation: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
End of explanation
"""
|
rusucosmin/courses | ml/ex01/template/taskB.ipynb | mit | np.random.seed(10)
p, q = (np.random.rand(i, 2) for i in (4, 5))
p_big, q_big = (np.random.rand(i, 80) for i in (100, 120))
print(p, "\n\n", q)
"""
Explanation: Data Generation
End of explanation
"""
def naive(p, q):
    # Explicit double loop over all (row of p, row of q) pairs
    x = np.zeros((len(p), len(q)))
    for i in range(len(p)):
        for j in range(len(q)):
            x[i, j] = np.sqrt((p[i][0] - q[j][0])**2 + (p[i][1] - q[j][1])**2)
    return x
"""
Explanation: Solution
End of explanation
"""
rows, cols = np.indices((p.shape[0], q.shape[0]))
print(rows, end='\n\n')
print(cols)
print(p[rows.ravel()], end='\n\n')
print(q[cols.ravel()])
def with_indices(p, q):
    rows, cols = np.indices((p.shape[0], q.shape[0]))
    diff = p[rows.ravel()] - q[cols.ravel()]
    return np.sqrt(np.sum(diff**2, axis=1)).reshape(p.shape[0], q.shape[0])
"""
Explanation: Use matching indices
Instead of iterating through indices, one can use them directly to parallelize the operations with Numpy.
End of explanation
"""
from scipy.spatial.distance import cdist
def scipy_version(p, q):
return cdist(p, q)
"""
Explanation: Use a library
scipy is the equivalent of matlab toolboxes and has a lot to offer. Actually the pairwise computation is part of the library through the spatial module.
End of explanation
"""
def tensor_broadcasting(p, q):
return np.sqrt(np.sum((p[:,np.newaxis,:]-q[np.newaxis,:,:])**2, axis=2))
"""
Explanation: Numpy Magic
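The broadcasting trick above can be verified with a quick shape check (the sizes below are illustrative):

```python
import numpy as np

p = np.random.rand(4, 2)
q = np.random.rand(5, 2)

# p[:, np.newaxis, :] has shape (4, 1, 2); q[np.newaxis, :, :] has shape (1, 5, 2).
# Subtraction broadcasts both to (4, 5, 2); summing over axis 2 leaves (4, 5),
# one squared distance per (row of p, row of q) pair.
d = np.sqrt(np.sum((p[:, np.newaxis, :] - q[np.newaxis, :, :])**2, axis=2))
```

No index bookkeeping is needed: the inserted axes make NumPy pair every row of p with every row of q automatically.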
End of explanation
"""
methods = [naive, with_indices, scipy_version, tensor_broadcasting]
timers = []
for f in methods:
r = %timeit -o f(p_big, q_big)
timers.append(r)
plt.figure(figsize=(10,6))
plt.bar(np.arange(len(methods)), [r.best*1000 for r in timers], log=False) # Set log to True for logarithmic scale
plt.xticks(np.arange(len(methods))+0.2, [f.__name__ for f in methods], rotation=30)
plt.xlabel('Method')
plt.ylabel('Time (ms)')
plt.show()
"""
Explanation: Compare methods
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/inm/cmip6/models/sandbox-3/toplevel.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'inm', 'sandbox-3', 'toplevel')
"""
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: INM
Source ID: SANDBOX-3
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:05
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process-oriented metrics/diagnostics, and the possible conflicts with parameterization-level tuning. In particular, describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of the mean state (e.g. THC, AABW, regional means, etc.) used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the component coupling stage?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh water balance was obtained in the full system: in the various components independently or at the component coupling stage?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved globally
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/ocean coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endorheic basins (no ocean access) are treated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are the radiative effects of aerosols on ice clouds represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are the radiative effects of aerosols on ice clouds represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the radiative forcing from aerosol-cloud interactions computed from sulfate aerosol only?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
|
astroumd/GradMap | notebooks/Lectures2018/Lecture4/.ipynb_checkpoints/Lecture4-2BodyProblem-Student-NEW-checkpoint.ipynb | gpl-3.0 | #Physical Constants (SI units)
G=6.67e-11
AU=1.5e11 #meters. Distance between sun and earth.
daysec=24.0*60*60 #seconds in a day
"""
Explanation: Welcome to your first numerical simulation! The 2 Body Problem
Many problems in statistical physics and astrophysics require solving problems consisting of many particles at once (sometimes on the order of thousands or more!). This can't be done by the traditional pen and paper techniques you are all learning in your physics classes. Instead, we must implement numerical solutions to these problems.
Today, you will create your first of many numerical simulations, for a simple problem that is already solvable by pen and paper: the 2 body problem in 2D. In this problem, we will describe the motion between two particles that share a force between them (such as gravity). We'll design the simulation from an astronomer's mindset with their astronomical units in mind. This simulation will be used to confirm the general motion of the Earth around the Sun, and later will be used to predict the motion between two stars within relatively close range.
<br>
<br>
<br>
We will guide you through the physics and math required to create this simulation. The problem here is designed to use the knowledge of scientific python you have been developing this week.
Like any code in python, the first thing we need to do is import the libraries we need. Go ahead and import Numpy and Pyplot below as np and plt respectively. Don't forget to put matplotlib inline to get everything within the notebook.
Now we will define the physical constants of our system, which will also establish the unit system we have chosen. We'll use SI units here. Below, I've already created the constants. Make sure you understand what they are before moving on.
End of explanation
"""
#####run specfic constants. Change as needed#####
#Masses in kg
Ma=6.0e24 #always set as smaller mass
Mb=2.0e30 #always set as larger mass
#Time settings
t=0.0 #Starting time
dt=.01*daysec #Time set for simulation
tend=300*daysec #Time where simulation ends
#Intial conditions (posistion [m] and velocities [m/s] in x,y,z coorindates)
#For Ma
xa=1.0*AU
ya=0.0
vxa=0.0
vya=30000.0
#For Mb
xb=0.0
yb=0.0
vxb=0.0
vyb=0.0
"""
Explanation: Next, we will need parameters for the simulation. These are known as initial conditions. For a 2 body gravitation problem, we'll need to know the masses of the two objects, the starting positions of the two objects, and the starting velocities of the two objects.
Below, I've included the initial conditions for the Earth (a) and the Sun (b) at the average distance from the Sun and the average velocity around the Sun. We also need a starting time, an ending time for the simulation, and a "time-step" for the system. Feel free to adjust all of these as you see fit once you have built the system!
<br>
<br>
<br>
<br>
a note on dt:
As already stated, numerical simulations are approximations. In our case, we are approximating how time flows. We know it flows continuously, but the computer cannot work with this. So instead, we break up our time into equal chunks called "dt". The smaller the chunks, the more accurate you will become, but at the cost of computer time.
End of explanation
"""
#Function to compute the force between the two objects
def Fg(xa,xb,ya,yb):
#Compute rx and ry between Ma and Mb
rx=xb-xa
ry=#Write it in
#compute r^3
r3=#Write in r^3 using the equation above. Make use of np.sqrt()
#Compute the force in Newtons. Use the equations above as a Guide!
fx=#Write it in
fy=-#Write it in
return #What do we return?
"""
Explanation: It will be nice to create a function for the force between Ma and Mb. Below is the physics for the force of Ma on Mb. How the physics works here is not important for the moment. Right now, I want to make sure you can transfer the math shown into a python function. I'll show a picture on the board the physics behind this math for those interested.
$$\vec{F_g}=\frac{-GM_aM_b}{r^3}\vec{r}$$
and
- $$\vec{r}=(x_b-x_a)\hat{x}+ (y_b-y_a)\hat{y}$$
- $$r^3=((x_b-x_a)^2+(y_b-y_a)^2)^{3/2}$$
If we break Fg into the x and y componets we get:
$$Fx=\frac{-GM_aM_b}{r^3}x$$
$$Fy=\frac{-GM_aM_b}{r^3}y$$
<br><br>So, $Fg$ will only need to be a function of xa, xb, ya, and yb. The velocities of the bodies will not be needed. Create a function that calculates the force between the bodies given the positions of the bodies. My recommendation here is to feed the inputs as separate components and also return the force in terms of components (say, fx and fy). This will make your code easier to write and easier to read.
End of explanation
"""
#Run a loop for the simulation. Keep track of Ma and Mb posistions
#Intialize vectors
xaAr=np.array([])
yaAr=np.array([])
xbAr=#Write it in for Particle B
ybAr=#Write it in for Particle B
"""
Explanation: Now that we have our function, we need to prepare a loop. Before we do, we need to initialize the loop and choose a loop type, for or while. Below is the general outline for how each type of loop can go.
<br>
<br>
<br>
For loop:
Initialize position and velocity arrays with np.zeros or np.linspace for the number of steps needed to go through the simulation (which is numSteps=(tend-t)/dt the way we have set up the problem). The for loop condition is based off time and should read roughly like: for i in range(numSteps)
<br>
<br>
<br>
While loop:
Initialize position and velocity arrays with np.array([]) and use np.append() to tack on new values at each step like so, xaArray=np.append(xaArray,NEWVALUE). The while condition should read, while t<tend
My preference here is While since it keeps my calculations and appending separate. But, feel free to use whichever feels best for you!
End of explanation
"""
#Your loop here
#using while loop method with appending. Can also be done with for loops
while #What is our condition for ending?:
#Compute current force on Ma and Mb. Ma receives the opposite force of Mb
fx,fy=Fg(xa,xb,ya,yb)
#Update the velocities and positions of the particles
vxa=vxa-fx*dt/Ma
vya=#Write it in for y
vxb=#Write it in
vyb=#Write it in
xa=xa+vxa*dt
ya=#Write it in
xb=#Write it in
yb=#Write it in
#Save data to lists
xaAr=np.append(xaAr,xa)
yaAr=#How will I save it for yaAr?
xbAr=np.append(xbAr,xb)
ybAr=np.append(ybAr,yb)
#update the time by one time step, dt
#How do I update the time?
"""
Explanation: Now for the actual simulation. This is the hardest part to code. The general idea behind our loop is that as we step through time, we calculate the force, then calculate the new velocity, then the new position for each particle. At the end, we must update our arrays to reflect the new changes and update the time of the system. The time update is super important! If we don't update it (say in a while loop), the simulation would never end and we would never get our result.
Outline for the loop (order matters here)
Calculate the force with the last known positions (use your function!)
Calculate the new velocities using the approximation: vb = vb + dt*fg/Mb and va= va - dt*fg/Ma Note the minus sign here, and the need to do this for the x and y directions!
Calculate the new positions using the approximation: xb = xb + dt*Vb (same for a and for y's. No minus sign here)
Update the arrays to reflect our new values
Update the time using t=t+dt
<br>
<br>
<br>
<br>
Now when the loop closes back in, the cycle repeats in a logical way. Go one step at a time when creating this loop and use comments to help guide yourself. Ask for help if it gets tricky!
End of explanation
"""
from IPython.display import Image
Image("Earth-Sun-averageResult.jpg")
#Your plot here
plt.plot(#Particle A plot
plt.plot(#Partcile B plot
"""
Explanation: Now for the fun part (or not so fun part if your simulation had an issue): plot your results! This is something well covered in previous lectures. Show me a plot of (xa,ya) and (xb,yb). Does it look sort of familiar? Hopefully you get something like the image below (in units of AU).
End of explanation
"""
|
ptosco/rdkit | Docs/Notebooks/RGroupDecomposition-DummyCores.ipynb | bsd-3-clause | from rdkit import Chem
from rdkit.Chem.Draw import IPythonConsole
IPythonConsole.ipython_useSVG=True
from rdkit.Chem import rdRGroupDecomposition
from IPython.display import HTML
from rdkit import rdBase
rdBase.DisableLog("rdApp.debug")
from rdkit.Chem import PandasTools
import pandas as pd
from rdkit.Chem import PandasTools
core = Chem.MolFromSmarts("*1****1-*2***2")
core
smiles = ["C1CCCC1-C2CCC2Cl", "N1CCCC1-C2CCC2Cl", "O1CCCC1-C2CCC2Cl", "N1OCCC1-C2CCC2Cl", "N1OCSC1-C2CCC2Cl"]
mols = [Chem.MolFromSmiles(smi) for smi in smiles]
from rdkit.Chem import Draw
Draw.MolsToGridImage(mols)
"""
Explanation: This example shows how dummy cores will expand into new cores in the output.
End of explanation
"""
rgroups = rdRGroupDecomposition.RGroupDecomposition(core)
for i,m in enumerate(mols):
rgroups.Add(m)
if i == 10:
break
"""
Explanation: Make the RGroup decomposition!
End of explanation
"""
rgroups.Process()
groups = rgroups.GetRGroupsAsColumns()
frame = pd.DataFrame(groups)
PandasTools.ChangeMoleculeRendering(frame)
"""
Explanation: To finalize the rgroups, we need to call process after all molecules are added.
End of explanation
"""
HTML(frame.to_html())
"""
Explanation: The dummy cores are expanding into the cores found during the decomposition.
End of explanation
"""
|
Vettejeep/Data-Analysis-and-Data-Science-Projects | ROC Curve and the UCI German Credit Data Set.ipynb | gpl-3.0 | %matplotlib inline
import pandas as pd
import numpy as np
import itertools
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn import metrics
from sklearn.preprocessing import label_binarize
from sklearn.metrics import roc_curve, auc
import matplotlib.pyplot as plt
"""
Explanation: ROC Curve and the UCI German Credit Data Set.
Kevin Maher
<span style="color:blue">Vettejeep365@gmail.com</span>
This is a classification problem. The goal is to predict which loans will be good and which ones will default. It tends to be difficult to obtain high accuracy, especially since the problem of predicting loans as good which later default is considered to be a more serious error. When we did this with tree based models in a graduate course at Regis where we were studying tree-based models, all of us in the class had accuracy issues with this data set. Here I have used logistic regression. This has the advantage of reducing the number of failed predictions on predicted good loans. The class weight parameter of the logistic regression model can be tuned to reduce the number of failed loans from those predicted as good loans; though at the cost of predicting more failed loans that turn out to not default.
Mostly I chose this data set and model because it offers the opportunity to show a Receiver Operator Characteristic (ROC) curve for a model that is only a moderately good predictor. The ROC curve is a plot of the true positive rate against the false positive rate and the greater the area under the curve the more accurate the predictive model is.
Imports needed for the script. Uses Python 2.7.13, numpy 1.11.3, pandas 0.19.2, sklearn 0.18.1, matplotlib 2.0.0.
End of explanation
"""
column_names = ['CkgAcct', 'Duration', 'CreditHistory', 'Purpose', 'CreditAmt', 'Savings', 'EmploymentTime',
'PctIncome', 'PersStatus', 'OtherDebts', 'ResidenceSince', 'Property',
'AgeYrs', 'OtherLoan', 'Housing', 'NumCredits', 'Job', 'FamSize', 'Telephone',
'Foriegn', 'LoanGood']
df = pd.read_table('german.data.txt', names=column_names, sep=' ')
print df.head()
"""
Explanation: Import the data. The data file is from:
https://archive.ics.uci.edu/ml/datasets/Statlog+%28German+Credit+Data%29
End of explanation
"""
df['Foriegn'] = df['Foriegn'].eq('A202').mul(1)
df['Telephone'] = df['Telephone'].eq('A192').mul(1)
"""
Explanation: Make binary factors into 1/0.
End of explanation
"""
def get_dummies(source_df, dest_df, col):
dummies = pd.get_dummies(source_df[col], prefix=col)
print 'Quantities for %s column' % col
for col in dummies:
print '%s: %d' % (col, np.sum(dummies[col]))
print
dest_df = dest_df.join(dummies)
return dest_df
"""
Explanation: For multi-level features, make a function to convert to dummy variables.
End of explanation
"""
ohe_features = ['CkgAcct', 'CreditHistory', 'Purpose', 'Savings', 'EmploymentTime',
'PersStatus', 'OtherDebts', 'Property', 'OtherLoan', 'Housing', 'Job']
for feature in ohe_features:
df = get_dummies(df, df, feature)
df.drop(ohe_features, axis=1, inplace=True)
"""
Explanation: Convert multi-level features to dummy variables, print the quantities for each level. Drop the original features since they have been converted.
End of explanation
"""
drop_dummies = ['CkgAcct_A14', 'CreditHistory_A32', 'Purpose_A43', 'Savings_A61', 'EmploymentTime_A73',
'PersStatus_A93', 'OtherDebts_A101', 'Property_A123', 'OtherLoan_A143', 'Housing_A152',
'Job_A173']
df.drop(drop_dummies, axis=1, inplace=True)
"""
Explanation: "Leave one out", n-1 dummy variables fully describe the categorical feature.
End of explanation
"""
y = df['LoanGood']
X = df.drop('LoanGood', axis=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=245)
"""
Explanation: Set up for machine learning. 'X' is the data and 'y' is the true classifications from the data set. X_train and y_train are for model training, X_test and y_test are for model testing - proving the model on data unseen during training.
End of explanation
"""
clf = LogisticRegression(C=1.0, class_weight={1: 1.0, 2: 2.0})
clf.fit(X_train, y_train)
pred = clf.predict(X_test)
print 'Model: %.2f%% accurate' % (metrics.accuracy_score(y_test, pred) * 100.0)
"""
Explanation: Set up the logistic regression. High accuracy is hard to obtain with this data set. I tended to get the best balance of accuracy and prediction for predicted good loans not defaulting by setting the class weight for the default class to 2.0. Higher values for the class weight reduce the number of defaults in predicted good loans but increase the number of predicted bad loans that are good and decrease overall model accuracy.
End of explanation
"""
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, cm[i, j],
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()
plt.close()
"""
Explanation: Create a function to plot the confusion matrix. Code from:
http://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html#sphx-glr-auto-examples-model-selection-plot-confusion-matrix-py
End of explanation
"""
confusion = metrics.confusion_matrix(y_test, pred)
class_names = ['Loan Good', 'Loan Default']
plot_confusion_matrix(confusion, classes=class_names, title='Confusion matrix')
"""
Explanation: Plot the confusion matrix.
End of explanation
"""
pred_proba = clf.predict_proba(X_test)
y_test_expanded = label_binarize(y_test, classes=[1, 2, 3])[:, 0:2]
"""
Explanation: Get the probabilities so that the ROC curve can be plotted. Also, expand the true value matrix so its size matches the probabilities matrix. The SKLearn function label_binarize needs to be fooled into expanding by adding a dummy class, since for binary problems like this one it produces a matrix with one column. So, a dummy column is added and then sliced off. See the Scikit Learn documentation for "label_binarize".
End of explanation
"""
n_classes = 2
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(n_classes):
fpr[i], tpr[i], _ = roc_curve(y_test_expanded[:, i], pred_proba[:, i])
roc_auc[i] = auc(fpr[i], tpr[i])
# Compute micro-average ROC curve and ROC area
fpr["micro"], tpr["micro"], _ = roc_curve(y_test_expanded.ravel(), pred_proba.ravel())
roc_auc["micro"] = auc(fpr["micro"], tpr["micro"])
"""
Explanation: Compute ROC curve parameters. ROC code from:
http://scikit-learn.org/stable/auto_examples/model_selection/plot_roc.html#sphx-glr-auto-examples-model-selection-plot-roc-py
End of explanation
"""
plt.figure()
lw = 2
plt.plot(fpr[0], tpr[0], color='darkorange',
lw=lw, label='ROC curve (area = %0.3f)' % roc_auc[0])
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic example')
plt.legend(loc="lower right")
plt.show()
"""
Explanation: Plot the ROC curve. The area under the curve is not bad at 0.815, but a bank would probably want much better predictions of loans that will eventually default. The data in this data set is likely to be inadequate for that, and either more data, or more data features are needed.
End of explanation
"""
|
johntfoster/1DPDpy | PD1D.ipynb | mit | from PD1D import PD_Problem
"""
Explanation: Everything needed to reproduce this work including this file itself and the 1-dimensional peridynamics code (which can be run in stand-alone mode as well) can be found in my Github repository. To clone:
bash
git clone git@github.com:johntfoster/1DPDpy.git
The user will need the Python modules numpy, scipy, and matplotlib installed to run PD1D.py and ipython notebook installed to reproduce this notebook.
Comments on the "patch test" for computational peridynamics
In Finite Element Analysis (FEA) it is common to check the validity of an element formulation, as well as to verify a computational implementation, with a simple test known as the "patch test". The "patch test" is usually performed by creating a small group or "patch" of irregularly shaped elements, constraining only the rigid body modes, and applying a uniform displacement field at the boundaries. The patch of elements passes the test if the nodal displacements match the exact solution, and the strain fields conform to the expected strain fields based on the element formulation. For example, a patch test of constant strain plane elasticity elements should reproduce a constant strain field when a homogeneous deformation is prescribed.
When considering a patch test for the common peridynamic strong-form particle integration scheme, the first thing to remember is that the solution to the peridynamic conservation of momentum equation is not the solution to the partial differential equation of the local theory. It is also not the same solution as one would expect to get from a traditional FEA computation. Both <cite data-cite="silling2003dpb">(Silling et al., 2003)</cite> and <cite data-cite="weckner2009">(Weckner et al., 2009)</cite> have developed analytic solutions to very special classes of peridynamic problems and constitutive models, in 1D in the former and 3D in the latter. The Figure below, taken from <cite data-cite="weckner2009">(Weckner et al., 2009)</cite>, shows a family of displacement field solutions for an infinite peridynamic bar under a self-equilibrating far-field body force distribution that converges to the classical local elasticity solution in the limit of $\delta \to 0$.
<br>
<img src="files/figs/weckner_disp.png" width="400">
<br>
<cite data-cite="bobaru2009convergence">(Bobaru et al., 2009)</cite> showed that 1D discrete solutions of the strong-form peridynamic momentum equation converge to analytic solutions of a bond-based peridynamic material. This work also showed examples of adaptive and irregular discritizations. Solutions to the peridynamic momentum equation show features not present in the classical theory, namely decaying oscillations in the displacement field and progressively weakening discontinuities that propagate outside of the loading region. Therefore, a correct patch test of any discretized peridynamic solution would have to compare to an analytic solution given the same constitutive model. Unfortunately, these analytic soluitons are near impossible for non-trivial constitutive models in two- and three-dimensions, and while possible, time consuming for formulate in one-dimension. In this document what we will attempt to show will really be consistancy between state-based formulations and not a true patch test.
Constitutive models
For reference, let's recall that the peridynamic conservation of momentum equation has the form
$$
\rho \mathbf{\ddot{u}}[\mathbf{x}] = \int_{\mathcal{H}} \underline{\mathbf{T}}[\mathbf{x}] - \underline{\mathbf{T}}[\mathbf{x'}] {\rm d}V_{\mathbf{x'}}
$$
where an explicit dependence on time has been suppressed in the equation and the constitutive response of the material is contained in the integrand. The simplest form of a constitutive model that could be considered in peridynamics would be one in which the force in each bond pair of $\mathbf{x'}$ and $\mathbf{x}$ is independent of all others. This is the so-called bond-based model that was introduced in the original paper on peridynamics <cite data-cite="silling2000ret">(Silling, 2000)</cite>. An improvement to this model would be a model in which the force in each bond pair is influenced by the totality of deformations of all bonds. If the resulting direction of action of such a force acts along the vector $\mathbf{x'}-\mathbf{x}$ it is called an ordinary state-based material. An example of such a material model is
$$
\underline{\mathbf{T}} = \underline{t} \underline{\mathbf{M}}
$$
where $\underline{\mathbf{M}}$ is called a unit vector-state and is defined as a unit vector pointing from the deformed position of $\mathbf{x}$ to the deformed position of $\mathbf{x'}$, and $\underline{t}$ is called a force scalar-state. An example of a force scalar-state with an elastic strain energy density equivalent to that of a local elasticity model is
$$
\underline{t} = \frac{3 k \theta}{m} \underline{\omega} \, \underline{x} + \alpha \underline{\omega} \, \underline{e}^d
$$
Please refer to <cite data-cite="silling2007psa">(Silling et al., 2007)</cite> for the meaning of the notation. This type of state-based constitutive model will be referred to as a linear peridynamic solid or LPS constitutive model in the sequel to distinguish it from correspondence models. Notice there are no derivatives appearing in this model.
Correspondence models
As a convenience, there is a technique for incorporating classical stress-strain models into a peridynamic formulation by establishing an equivalence or correspondence of strain-energy density functionals. It is important to note that this is still a non-local formulation and will only be exactly equivalent to the classical theory in the limit of vanishing horizon. Omitting the details (refer to <cite data-cite="silling2007psa">(Silling et al., 2007)</cite> or <cite data-cite="foster2010visco">(Foster et al., 2010)</cite>) the correspondence formulation arrives at a force vector state
$$
\underline{\mathbf{T}} = \underline{\omega} \boldsymbol{\sigma}(\bar{\mathbf{F}}) \mathbf{K}^{-1} (\mathbf{x'-x})
$$
where $\boldsymbol{\sigma}$ is the First Piola-Kirchoff stress derived from an approximation of the deformation gradient $\bar{\mathbf{F}}$
$$
\bar{F}_{ij} = \left[\int_\mathcal{H} \underline{\omega}\,(\mathbf{u[x'] - u[x] + x' - x})_{i}\, (\mathbf{x'-x})_{k}\, {\rm d}V_{\mathbf{x'}}\right] K_{kj}^{-1}
$$
and $\mathbf{K}$ is a shape tensor defined by
$$
K_{ij} = \int_\mathcal{H} \underline{\omega}\,(\mathbf{x'-x})_{i}\, (\mathbf{x'-x})_{j}\, {\rm d}V_{\mathbf{x'}}
$$
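In one dimension both quantities reduce to scalars, and a discrete sketch (pure Python, an illustration rather than the PD1D implementation) looks like the following; for a homogeneous stretch the approximate deformation gradient recovers the stretch exactly.

```python
# 1D discrete shape "tensor" K and approximate deformation gradient F-bar
# over the family of one interior point, with a unit influence function.
horizon = 2.0
dx = 0.5                      # node spacing, used as the 1D nodal "volume"
stretch = 1.01                # impose a homogeneous deformation u(x) = 0.01*x
# bond vectors xi = x' - x to the neighbors within the horizon
xis = [i * dx for i in range(-4, 5) if i != 0 and abs(i * dx) <= horizon]
w = 1.0                       # influence function omega, taken as 1 for simplicity
K = sum(w * xi * xi * dx for xi in xis)
# deformed bond length (u' - u + xi) for the homogeneous field: stretch * xi
F = sum(w * (stretch * xi) * xi * dx for xi in xis) / K
# F equals the imposed stretch for this homogeneous deformation
```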
Mathematical convergence to the classical theory
We'll now show briefly that this model converges to the classical theory in the limit of vanishing horizon. First, we assume a continuously differentialble stress field such that we can perform a Taylor expansion about the point $\mathbf{x'} = \mathbf{x}$, i.e.
$$
\sigma_{ij}[\mathbf{x'}] = \sigma_{ij}[\mathbf{x}] + \frac{\partial \sigma_{ij}[\mathbf{x}]}{\partial x_k} \xi_k + \mathcal{O}(||\boldsymbol{\xi}||^2)
$$
where $\boldsymbol{\xi} = \mathbf{x'-x}$. Substituting this into the force-state relationship and back into the equation of motion (in the absence of body forces) we have
$$
\begin{aligned}
\rho \ddot{u}_i[\mathbf{x}] =& \int_\mathcal{H} \underline{\omega} \sigma_{ij}[\mathbf{x}] K_{jl}^{-1} \xi_l + \underline{\omega} \left[ \sigma_{ij}[\mathbf{x}] + \frac{\partial \sigma_{ij}[\mathbf{x}]}{\partial x_k} \xi_k + \mathcal{O}(||\boldsymbol{\xi}||^2) \right] K_{jl}^{-1} \xi_l {\rm d}V_{\mathbf{x'}} \
=& 2 \int_\mathcal{H} \underline{\omega} \sigma_{ij}[\mathbf{x}] K_{jl}^{-1} \xi_l {\rm d}V_{\mathbf{x'}} + \int_\mathcal{H} \underline{\omega} \left[\frac{\partial \sigma_{ij}[\mathbf{x}]}{\partial x_k} \xi_k + \mathcal{O}(||\boldsymbol{\xi}||^2) \right] K_{jl}^{-1} \xi_l {\rm d}V_{\mathbf{x'}} \
=& \frac{\partial \sigma_{ij}[\mathbf{x}]}{\partial x_k} K_{jl}^{-1} \int_\mathcal{H} \underline{\omega}\xi_l \xi_k {\rm d}V_{\mathbf{x'}} + \mathcal{O}(\delta) \
=& \frac{\partial \sigma_{ij}[\mathbf{x}]}{\partial x_k} K_{jl}^{-1} K_{lk} + \mathcal{O}(\delta) \
=& \frac{\partial \sigma_{ij}[\mathbf{x}]}{\partial x_k} \delta_{jk} + \mathcal{O}(\delta) \
=& \frac{\partial \sigma_{ij}[\mathbf{x}]}{\partial x_j} + \mathcal{O}(\delta) \
\end{aligned}
$$
where we've made use of the antisymmetry in the integrand to eliminate the first term between the second and third step above. Now we take $\delta \to 0$
$$
\begin{aligned}
\lim_{\delta \to 0} \lbrace \rho \ddot{u}_i[\mathbf{x}] =& \frac{\partial \sigma_{ij}[\mathbf{x}]}{\partial x_j} + \mathcal{O}(\delta) \rbrace \
\rho \ddot{u}_i[\mathbf{x}] =& \frac{\partial \sigma_{ij}[\mathbf{x}]}{\partial x_j} \
\rho \ddot{\mathbf{u}}[\mathbf{x}] =& \nabla \cdot \boldsymbol{\sigma}[\mathbf{x}]
\end{aligned}
$$
thus establishing the convergence mathematically.
Numerical Examples
We'll start by importing the class that defines a 1-dimensional peridynamic problem.
End of explanation
"""
fixed_length = 40
delta_x = 0.5
fixed_horizon = 3.5 * delta_x
problem1 = PD_Problem(bar_length=fixed_length, number_of_elements=(fixed_length/delta_x),
horizon=fixed_horizon, constitutive_model_flag='LPS', randomization_factor=0.0)
"""
Explanation: Now we instantiate the problem using the LPS constitutive model defined in the last section.
End of explanation
"""
problem1.solve(prescribed_displacement=0.0001)
disp1 = problem1.get_solution()
nodes1 = problem1.get_nodes()
"""
Explanation: Solve the problem and get the solution vectors for plotting.
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(nodes1, disp1, 'k-');
plt.grid(b=True);
plt.axis('tight');
plt.ylabel('Displacement');
plt.xlabel('$x$ position');
"""
Explanation: Now plot the results
End of explanation
"""
def_grad1 = problem1.get_deformation_gradient()
plt.plot(nodes1, def_grad1, 'k-');
plt.grid(b=True);
plt.axis('tight');
plt.ylabel('Deformation Gradient');
plt.xlabel('$x$ position');
plt.ylim([0.99,1.01])
"""
Explanation: We can also get the deformation gradient and plot it as well. Note that the deformation gradient is not used in the constitutive model in any way. It is post processed and plotted here for comparison.
End of explanation
"""
fixed_length = 40
delta_x = 0.5
fixed_horizon = 3.5 * delta_x
problem2 = PD_Problem(bar_length=fixed_length, number_of_elements=(fixed_length/delta_x),
horizon=fixed_horizon, constitutive_model_flag='LPS', randomization_factor=0.3)
problem2.solve(prescribed_displacement=0.1)
disp2 = problem2.get_solution()
nodes2 = problem2.get_nodes()
plt.plot(nodes2, disp2, 'k-');
plt.grid(b=True);
plt.axis('tight');
plt.ylabel('Displacement');
plt.xlabel('$x$ position');
"""
Explanation: Now we will repeat the steps above, this time using a random perturbation of 30% on the interior nodes. The nodes near the boundaries are not perturbed in any way to preserve consistent application of the boundary conditions.
End of explanation
"""
plt.plot(range(len(nodes2)), problem2.lengths, 'k-');
plt.ylabel('Node volume');
plt.xlabel('Node number');
"""
Explanation: Note the very small oscillations in the displacement field. Just to verify, below we plot the node volumes to show the randomness.
End of explanation
"""
def_grad2 = problem2.get_deformation_gradient()
plt.plot(nodes2, def_grad2, 'k-');
plt.grid(b=True);
plt.axis();
"""
Explanation: As expected from the small oscillations in the displacement field, here we see them as well in the deformation gradient.
End of explanation
"""
fixed_length = 40
delta_x = 0.5
fixed_horizon = 3.5 * delta_x
problem3 = PD_Problem(bar_length=fixed_length, number_of_elements=(fixed_length/delta_x),
horizon=fixed_horizon, constitutive_model_flag='correspondence', randomization_factor=0.0)
problem3.solve(prescribed_displacement=0.1)
disp3 = problem3.get_solution()
nodes3 = problem3.get_nodes()
plt.plot(nodes3, disp3, 'k-');
plt.grid(b=True);
plt.axis('tight');
plt.xlabel('$x$ position');
plt.ylabel('Displacement');
def_grad3 = problem3.get_deformation_gradient()
plt.plot(nodes3, def_grad3, 'k-');
plt.grid(b=True);
plt.axis('tight');
plt.xlabel('$x$ position');
plt.ylabel('Deformation Gradient');
plt.ylim([1.0,1.01]);
"""
Explanation: Moving on to the correspondence models, we repeat the steps above, first for an equally spaced grid.
End of explanation
"""
fixed_length = 40
delta_x = 0.5
fixed_horizon = 3.5 * delta_x
problem4 = PD_Problem(bar_length=fixed_length, number_of_elements=(fixed_length/delta_x),
horizon=fixed_horizon, constitutive_model_flag='correspondence', randomization_factor=0.3)
problem4.solve(prescribed_displacement=0.1)
disp4 = problem4.get_solution()
nodes4 = problem4.get_nodes()
plt.plot(nodes4, disp4, 'k-');
plt.grid(b=True);
plt.axis('tight');
def_grad4 = problem4.get_deformation_gradient()
plt.plot(nodes4, def_grad4, 'k-');
plt.grid(b=True);
plt.axis();
"""
Explanation: Now a randomly perturbed grid.
End of explanation
"""
plt.plot(nodes1, disp1, 'k-', label='LPS - equal')
plt.plot(nodes2, disp2, 'r-', label='LPS - random')
plt.plot(nodes3, disp3, 'b-', label='Corr. - equal')
plt.plot(nodes4, disp4, 'g-', label='Corr. - random')
plt.legend(loc='upper left')
plt.grid(b=True);
plt.axis('tight');
plt.show()
"""
Explanation: To summarize
End of explanation
"""
|
WNoxchi/Kaukasos | misc/cuda-tensor-validate-issue.ipynb | mit | import torch
from fastai.conv_learner import *
x = torch.FloatTensor([[[1,1,],[1,1]]]); x
VV(x)
VV(VV(x))
torch.equal(VV(x), VV(VV(x)))
"""
Explanation: FastAI models.validate CUDA Tensor Issue
WNixalo – 2018/6/11
I ran into trouble trying to reimplement a CIFAR-10 baseline notebook. The notebook used PyTorch dataloaders fed into a ModelData object constructor. The issue occurred when running the learning rate finder: at the end of its run a TypeError would be thrown. This error came from attempting to compare preds.data and y inside the metrics.accuracy function, which is called in model.validate by the line:
res.append([f(preds.data, y) for f in metrics])
where metrics is [accuracy].
On an AWS p2.xlarge (and I assume any GPU) machine, this results in comparing a torch.cuda.FloatTensor (preds.data) to a torch.LongTensor (y), throwing an error.
This error did not occur when using an older version of the fast.ai library, available here.
The reason is that within model.validate(.), y = VV(y), and y.data is passed into the accuracy metric function. This is the proposed fix.
To make sure that recasting a Variable to a Variable via VV(.) won't break anything (e.g. if a fast.ai dataloader is used, returning CUDA tensors):
End of explanation
"""
%matplotlib inline
%reload_ext autoreload
%autoreload 2
import cifar_utils
from fastai.conv_learner import *
from torchvision import transforms, datasets
torch.backends.cudnn.benchmark = True
"""
Explanation: None of this is an issue if constructing a fast.ai Model Data object via its constructors (e.g. md = ImageClassifierData.from_csv(...)), because fast.ai uses its own dataloaders which automatically place data on the GPU if in use via dataloader.get_tensor(.). The issue arises when PyTorch dataloaders are used, but all low-level details (calculating loss, metrics, etc.) are handled internally by the fast.ai library.
This notebook shows a demo workflow: triggering the issue, demonstrating the fix, and showing a mini debug walkthrough.
For more detailed troubleshooting notes, see the accompanying debugging_notes.txt.
End of explanation
"""
# fastai/imagenet-fast/cifar10/models/ repo
from imagenet_fast_cifar_models.wideresnet import wrn_22
stats = (np.array([ 0.4914 , 0.48216, 0.44653]), np.array([ 0.24703, 0.24349, 0.26159]))
def get_loaders(bs, num_workers):
traindir = str(PATH/'train')
valdir = str(PATH/'test')
tfms = [transforms.ToTensor(),
transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010))]
aug_tfms =transforms.Compose([
transforms.RandomCrop(32, padding=4),
transforms.RandomHorizontalFlip(),
] + tfms)
train_dataset = datasets.ImageFolder(
traindir,
aug_tfms)
train_loader = torch.utils.data.DataLoader(
train_dataset, batch_size=bs, shuffle=True, num_workers=num_workers, pin_memory=True)
val_dataset = datasets.ImageFolder(valdir, transforms.Compose(tfms))
val_loader = torch.utils.data.DataLoader(
val_dataset, batch_size=bs, shuffle=False, num_workers=num_workers, pin_memory=True)
aug_dataset = datasets.ImageFolder(valdir, aug_tfms)
aug_loader = torch.utils.data.DataLoader(
aug_dataset, batch_size=bs, shuffle=False, num_workers=num_workers, pin_memory=True)
return train_loader, val_loader, aug_loader
def get_data(bs, num_workers):
trn_dl, val_dl, aug_dl = get_loaders(bs, num_workers)
data = ModelData(PATH, trn_dl, val_dl)
data.aug_dl = aug_dl
data.sz=32
return data
def get_learner(arch, bs):
learn = ConvLearner.from_model_data(arch.cuda(), get_data(bs, num_cpus()))
learn.crit = nn.CrossEntropyLoss()
learn.metrics = [accuracy]
return learn
def get_TTA_accuracy(learn):
preds, targs = learn.TTA()
# combining the predictions across augmented and non augmented inputs
preds = 0.6 * preds[0] + 0.4 * preds[1:].sum(0)
return accuracy_np(preds, targs)
"""
Explanation: Note: the fastai/imagenet-fast repository was cloned, with a symlink imagenet_fast_cifar_models pointing to imagenet-fast/cifar10/models/. This is because the wide-resnet-22 from fast.ai's DAWN Bench submission was used. Any other architecture can be used without going to the trouble of importing this.
End of explanation
"""
PATH = Path("data/cifar10_tmp")
# PATH = Path("data/cifar10")
# print(cifar_utils.count_files(PATH))
# PATH = cifar_utils.create_cifar_subset(PATH, copydirs=['train','test'], p=0.1)
# print(cifar_utils.count_files(PATH))
"""
Explanation: Using a small (10%) random subset of the dataset:
End of explanation
"""
learn = get_learner(wrn_22(), 512)
learn.lr_find(wds=1e-4)
learn.sched.plot(n_skip_end=1)
"""
Explanation: 1.
With current fastai version:
End of explanation
"""
learn = get_learner(wrn_22(), 256)
learn.lr_find(wds=1e-4)
learn.sched.plot(n_skip_end=1)
"""
Explanation: 2.
With models.validate modified so that y = VV(y) and y.data is now passed to f(.) in res.append(.):
(note: a smaller batch size is used just so a plot is displayed; the sample dataset size is a bit too small to yield useful information with these settings)
End of explanation
"""
log_preds, _ = learn.TTA(is_test=False) # 'test' dataloader never initialized; using val
"""
Explanation: Making sure test predictions work:
End of explanation
"""
PATH = Path('data/cifar10')
learn = get_learner(wrn_22(), 512)
learn.lr_find(wds=1e-4)
learn.sched.plot(n_skip_end=1)
"""
Explanation: The same example as above, but with the full-size (50,000-element) training set:
End of explanation
"""
# using the 10% sample dataset
learn = get_learner(wrn_22(), 512)
learn.lr_find(wds=1e-4)
learn.sched.plot(n_skip_end=1)
"""
Explanation: 3.
An example of debugging, showing y vs VV(y) vs VV(y).data.
accuracy(preds.data, y) throws a TypeError because preds.data is a torch.cuda.FloatTensor and y is a torch.LongTensor. y needs to be a CUDA tensor.
accuracy(preds.data, VV(y)) throws another TypeError because VV(y) is a Variable.
accuracy(preds.data, VV(y).data) works and returns an accuracy value, because y is now a torch.cuda.LongTensor which can be compared to the CUDA tensor preds.data.
End of explanation
"""
|
Santana9937/Classification_ML_Specialization | Week_2_Learning_Linear_Classifiers/week_2_assign_2_lin_reg_L2_reg.ipynb | mit | import os
import zipfile
import string
import numpy as np
import pandas as pd
from sklearn import linear_model
from sklearn.feature_extraction.text import CountVectorizer
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: Logistic Regression with L2 regularization
In this notebook, you will implement your own logistic regression classifier with L2 regularization. You will do the following:
Extract features from Amazon product reviews.
Write a function to compute the derivative of log likelihood function with an L2 penalty with respect to a single coefficient.
Implement gradient ascent with an L2 penalty.
Empirically explore how the L2 penalty can ameliorate overfitting.
Importing Libraries
End of explanation
"""
# Put files in the current directory into a list
files_list = [f for f in os.listdir('.') if os.path.isfile(f)]
# Filename of unzipped file
unzipped_file = 'amazon_baby_subset.csv'
# If unzipped file not in files_list, unzip the file
if unzipped_file not in files_list:
zip_file = unzipped_file + '.zip'
unzipping = zipfile.ZipFile(zip_file)
unzipping.extractall()
    unzipping.close()
"""
Explanation: Unzipping files with Amazon Baby Products Reviews
For this assignment, we will use a subset of the Amazon product review dataset. The subset was chosen to contain similar numbers of positive and negative reviews, as the original dataset consisted primarily of positive reviews.
End of explanation
"""
products = pd.read_csv("amazon_baby_subset.csv")
"""
Explanation: Loading the products data
We will use a dataset consisting of baby product reviews on Amazon.com.
End of explanation
"""
products.head()
"""
Explanation: Now, let us see a preview of what the dataset looks like.
End of explanation
"""
products['sentiment']
"""
Explanation: One column of this dataset is 'sentiment', corresponding to the class label with +1 indicating a review with positive sentiment and -1 indicating one with negative sentiment.
End of explanation
"""
products.head(10)['name']
print '# of positive reviews =', len(products[products['sentiment']==1])
print '# of negative reviews =', len(products[products['sentiment']==-1])
"""
Explanation: Let us quickly explore more of this dataset. The 'name' column indicates the name of the product. Here we list the first 10 products in the dataset. We then count the number of positive and negative reviews.
End of explanation
"""
import json
with open('important_words.json', 'r') as f: # Reads the list of most frequent words
important_words = json.load(f)
important_words = [str(s) for s in important_words]
print important_words
"""
Explanation: Apply text cleaning on the review data
In this section, we will perform some simple feature cleaning. The last assignment used all words in building bag-of-words features, but here we limit ourselves to 193 words (for simplicity). We compiled a list of 193 most frequent words into a JSON file.
Now, we will load these words from this JSON file:
End of explanation
"""
products["review"] = products["review"].fillna("")
"""
Explanation: Now, we will perform 2 simple data transformations:
Remove punctuation using Python's built-in string functionality.
Compute word counts (only for important_words)
We start with Step 1 which can be done as follows:
Before removing the punctuation from the strings in the review column, we will fill all NA values with empty string.
End of explanation
"""
products["review_clean"] = products["review"].str.translate(None, string.punctuation)
"""
Explanation: Below, we are removing all the punctuation from the strings in the review column and saving the result into a new column in the dataframe.
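The `str.translate(None, string.punctuation)` call used here is the Python 2 form (it was removed in Python 3). Under Python 3, the same cleaning can be sketched with `str.maketrans`:

```python
import string

# Python 3 equivalent of s.translate(None, string.punctuation)
def remove_punctuation(s):
    return s.translate(str.maketrans('', '', string.punctuation))

cleaned = remove_punctuation("Don't worry, be happy!")
```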
End of explanation
"""
for word in important_words:
products[word] = products['review_clean'].apply(lambda s : s.split().count(word))
"""
Explanation: Now we proceed with Step 2. For each word in important_words, we compute a count for the number of times the word occurs in the review. We will store this count in a separate column (one for each word). The result of this feature processing is a single column for each word in important_words which keeps a count of the number of times the respective word occurs in the review text.
Note: There are several ways of doing this. In this assignment, we use the built-in count function for Python lists. Each review string is first split into individual words and the number of occurrences of a given word is counted.
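The per-word count boils down to `list.count` after a whitespace split; a toy sketch (hypothetical review text, tiny stand-in for the 193-word list):

```python
important_words = ['great', 'perfect']
review_clean = 'great product great value not perfect'

# Count occurrences of each important word in the cleaned review
counts = {word: review_clean.split().count(word) for word in important_words}
```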
End of explanation
"""
products['perfect']
"""
Explanation: The products DataFrame now contains one column for each of the 193 important_words. As an example, the column perfect contains a count of the number of times the word perfect occurs in each of the reviews.
End of explanation
"""
with open('module-4-assignment-train-idx.json', 'r') as f:
train_idx_lst = json.load(f)
train_idx_lst = [int(entry) for entry in train_idx_lst]
with open('module-4-assignment-validation-idx.json', 'r') as f:
validation_idx_lst = json.load(f)
validation_idx_lst = [int(entry) for entry in validation_idx_lst]
"""
Explanation: Train-Validation split
We split the data into a train-validation split with 80% of the data in the training set and 20% of the data in the validation set.
Note: In previous assignments, we have called this a train-test split. However, the portion of data that we don't train on will be used to help select model parameters. Thus, this portion of data should be called a validation set. Recall that examining performance of various potential models (i.e. models with different parameters) should be on a validation set, while evaluation of selected model should always be on a test set.
Loading the JSON files with the indices for the training data and the validation data into lists.
End of explanation
"""
train_data = products.ix[train_idx_lst]
validation_data = products.ix[validation_idx_lst]
print 'Training set : %d data points' % len(train_data)
print 'Validation set : %d data points' % len(validation_data)
"""
Explanation: Using the lists of training data indices and validation data indices to get a DataFrame with the training data and a DataFrame with the validation data.
End of explanation
"""
def get_numpy_data(data_frame, features, label):
data_frame['intercept'] = 1
features = ['intercept'] + features
features_frame = data_frame[features]
feature_matrix = data_frame.as_matrix(columns=features)
label_array = data_frame[label]
label_array = label_array.values
return(feature_matrix, label_array)
"""
Explanation: Convert DataFrame to NumPy array
Just like in the second assignment of the previous module, we provide you with a function that extracts columns from a DataFrame and converts them into a NumPy array. Two arrays are returned: one representing features and another representing class labels.
Note: The feature matrix includes an additional column 'intercept' filled with 1's to take account of the intercept term.
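A small sketch of the same conversion on toy data (the `as_matrix` call above was deprecated in later pandas versions; `.values` behaves the same here):

```python
import pandas as pd

# Toy stand-in for the products DataFrame (hypothetical counts)
df = pd.DataFrame({'great': [2, 0], 'perfect': [0, 1], 'sentiment': [1, -1]})
df['intercept'] = 1
features = ['intercept', 'great', 'perfect']

feature_matrix = df[features].values   # shape (n_data_points, n_features)
label_array = df['sentiment'].values
```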
End of explanation
"""
feature_matrix_train, sentiment_train = get_numpy_data(train_data, important_words, 'sentiment')
feature_matrix_valid, sentiment_valid = get_numpy_data(validation_data, important_words, 'sentiment')
"""
Explanation: We convert both the training and validation sets into NumPy arrays.
Warning: This may take a few minutes.
End of explanation
"""
'''
produces probablistic estimate for P(y_i = +1 | x_i, w).
estimate ranges between 0 and 1.
'''
def predict_probability(feature_matrix, coefficients):
# Take dot product of feature_matrix and coefficients
arg_exp = np.dot(coefficients,feature_matrix.transpose())
# Compute P(y_i = +1 | x_i, w) using the link function
predictions = 1.0/(1.0 + np.exp(-arg_exp))
# return predictions
return predictions
"""
Explanation: Building on logistic regression with no L2 penalty assignment
Let us now build on Module 3 assignment. Recall from lecture that the link function for logistic regression can be defined as:
$$
P(y_i = +1 | \mathbf{x}_i,\mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))},
$$
where the feature vector $h(\mathbf{x}_i)$ is given by the word counts of important_words in the review $\mathbf{x}_i$.
We will use the same code as in this past assignment to make probability predictions since this part is not affected by the L2 penalty. (Only the way in which the coefficients are learned is affected by the addition of a regularization term.)
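As a quick numeric sanity check of the link function (toy scores): a score of zero maps to P(y = +1) = 0.5, and the sigmoid is symmetric about that point.

```python
import numpy as np

def sigmoid(score):
    # Link function: P(y_i = +1 | x_i, w) for a given score w^T h(x_i)
    return 1.0 / (1.0 + np.exp(-score))

scores = np.array([-2.0, 0.0, 2.0])
probs = sigmoid(scores)
```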
End of explanation
"""
def feature_derivative_with_L2(errors, feature, coefficient, l2_penalty, feature_is_constant):
# Compute the dot product of errors and feature
derivative = np.dot(feature.transpose(), errors)
# add L2 penalty term for any feature that isn't the intercept.
if not feature_is_constant:
derivative = derivative - 2.0*l2_penalty*coefficient
return derivative
"""
Explanation: Adding L2 penalty
Let us now work on extending logistic regression with L2 regularization. As discussed in the lectures, the L2 regularization is particularly useful in preventing overfitting. In this assignment, we will explore L2 regularization in detail.
Recall from lecture and the previous assignment that for logistic regression without an L2 penalty, the derivative of the log likelihood function is:
$$
\frac{\partial\ell}{\partial w_j} = \sum_{i=1}^N h_j(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right)
$$
Adding L2 penalty to the derivative
It takes only a small modification to add a L2 penalty. All terms indicated in red refer to terms that were added due to an L2 penalty.
Recall from the lecture that the link function is still the sigmoid:
$$
P(y_i = +1 | \mathbf{x}_i,\mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))},
$$
We add the L2 penalty term to the per-coefficient derivative of log likelihood:
$$
\frac{\partial\ell}{\partial w_j} = \sum_{i=1}^N h_j(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right) \color{red}{-2\lambda w_j }
$$
The per-coefficient derivative for logistic regression with an L2 penalty is as follows:
$$
\frac{\partial\ell}{\partial w_j} = \sum_{i=1}^N h_j(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right) \color{red}{-2\lambda w_j }
$$
and for the intercept term, we have
$$
\frac{\partial\ell}{\partial w_0} = \sum_{i=1}^N h_0(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right)
$$
Note: As we did in the Regression course, we do not apply the L2 penalty on the intercept. A large intercept does not necessarily indicate overfitting because the intercept is not associated with any particular feature.
Write a function that computes the derivative of log likelihood with respect to a single coefficient $w_j$. Unlike its counterpart in the last assignment, the function accepts five arguments:
* errors vector containing $(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w}))$ for all $i$
* feature vector containing $h_j(\mathbf{x}_i)$ for all $i$
* coefficient containing the current value of coefficient $w_j$.
* l2_penalty representing the L2 penalty constant $\lambda$
* feature_is_constant telling whether the $j$-th feature is constant or not.
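A quick numeric check of the penalized derivative on toy numbers (restating the formula above with λ = 1 and w_j = 0.5): the L2 term subtracts 2λw_j from the unpenalized derivative, pulling the coefficient toward zero.

```python
import numpy as np

errors = np.array([0.5, -0.25, 0.75])   # 1[y_i=+1] - P(y_i=+1|x_i,w), toy values
feature = np.array([1.0, 2.0, 0.0])     # h_j(x_i) for each data point
coefficient, l2_penalty = 0.5, 1.0

plain = np.dot(feature, errors)                      # derivative without penalty
penalized = plain - 2.0 * l2_penalty * coefficient   # derivative with L2 penalty
```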
End of explanation
"""
def compute_log_likelihood_with_L2(feature_matrix, sentiment, coefficients, l2_penalty):
indicator = (sentiment==+1)
scores = np.dot(feature_matrix, coefficients)
lp = np.sum((indicator-1)*scores - np.log(1. + np.exp(-scores))) - l2_penalty*np.sum(coefficients[1:]**2)
return lp
"""
Explanation: Quiz question: In the code above, was the intercept term regularized?
No
To verify the correctness of the gradient ascent algorithm, we provide a function for computing log likelihood (which we recall from the last assignment was a topic detailed in an advanced optional video, and used here for its numerical stability).
$$\ell\ell(\mathbf{w}) = \sum_{i=1}^N \Big( (\mathbf{1}[y_i = +1] - 1)\mathbf{w}^T h(\mathbf{x}_i) - \ln\left(1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))\right) \Big) \color{red}{-\lambda\|\mathbf{w}\|_2^2} $$
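To see numerically that the penalty term can only lower ℓℓ(w) for nonzero coefficients, here is a toy check mirroring the function above (hypothetical data; the intercept is excluded from the penalty):

```python
import numpy as np

def log_likelihood_l2(feature_matrix, sentiment, coefficients, l2_penalty):
    indicator = (sentiment == +1)
    scores = np.dot(feature_matrix, coefficients)
    return (np.sum((indicator - 1) * scores - np.log(1. + np.exp(-scores)))
            - l2_penalty * np.sum(coefficients[1:] ** 2))

X = np.array([[1.0, 2.0], [1.0, -1.0]])   # first column is the intercept
y = np.array([1, -1])
w = np.array([0.1, 0.5])

lp_unpenalized = log_likelihood_l2(X, y, w, l2_penalty=0.0)
lp_penalized = log_likelihood_l2(X, y, w, l2_penalty=10.0)
```

The two values differ by exactly λ·w₁² = 10 · 0.25 = 2.5.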
End of explanation
"""
def logistic_regression_with_L2(feature_matrix, sentiment, initial_coefficients, step_size, l2_penalty, max_iter):
coefficients = np.array(initial_coefficients) # make sure it's a numpy array
for itr in xrange(max_iter):
# Predict P(y_i = +1|x_i,w) using your predict_probability() function
predictions = predict_probability(feature_matrix, coefficients)
# Compute indicator value for (y_i = +1)
indicator = (sentiment==+1)
# Compute the errors as indicator - predictions
errors = indicator - predictions
for j in xrange(len(coefficients)): # loop over each coefficient
is_intercept = (j == 0)
# Recall that feature_matrix[:,j] is the feature column associated with coefficients[j].
# Compute the derivative for coefficients[j]. Save it in a variable called derivative
derivative = feature_derivative_with_L2(errors, feature_matrix[:,j],coefficients[j],l2_penalty, is_intercept)
# add the step size times the derivative to the current coefficient
coefficients[j] = coefficients[j] + step_size*derivative
# Checking whether log likelihood is increasing
if itr <= 15 or (itr <= 100 and itr % 10 == 0) or (itr <= 1000 and itr % 100 == 0) \
or (itr <= 10000 and itr % 1000 == 0) or itr % 10000 == 0:
lp = compute_log_likelihood_with_L2(feature_matrix, sentiment, coefficients, l2_penalty)
print 'iteration %*d: log likelihood of observed labels = %.8f' % \
(int(np.ceil(np.log10(max_iter))), itr, lp)
return coefficients
"""
Explanation: Quiz question: Does the term with L2 regularization increase or decrease $\ell\ell(\mathbf{w})$?
Decreases
The logistic regression function looks almost like the one in the last assignment, with a minor modification to account for the L2 penalty. Fill in the code below to complete this modification.
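The inner update in that loop is a single gradient-ascent step per coefficient; a hedged sketch on toy numbers (all values hypothetical):

```python
import numpy as np

# One gradient-ascent step for a single (non-intercept) coefficient
step_size, l2_penalty = 0.1, 2.0
coefficient = 0.5
errors = np.array([0.2, -0.1, 0.3])    # indicator - predicted probability
feature = np.array([1.0, 2.0, 1.0])    # h_j(x_i) for each data point

derivative = np.dot(feature, errors) - 2.0 * l2_penalty * coefficient
coefficient += step_size * derivative  # ascent: move *up* the gradient
```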
End of explanation
"""
# run with L2 = 0
coefficients_0_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=0, max_iter=501)
# run with L2 = 4
coefficients_4_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=4, max_iter=501)
# run with L2 = 10
coefficients_10_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=10, max_iter=501)
# run with L2 = 1e2
coefficients_1e2_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=1e2, max_iter=501)
# run with L2 = 1e3
coefficients_1e3_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=1e3, max_iter=501)
# run with L2 = 1e5
coefficients_1e5_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=1e5, max_iter=501)
"""
Explanation: Explore effects of L2 regularization
Now that we have written up all the pieces needed for regularized logistic regression, let's explore the benefits of using L2 regularization in analyzing sentiment for product reviews. As iterations pass, the log likelihood should increase.
Below, we train models with increasing amounts of regularization, starting with no L2 penalty, which is equivalent to our previous logistic regression implementation.
End of explanation
"""
def add_coefficients_to_table(coefficients, column_name):
return pd.Series(coefficients, index = column_name)
"""
Explanation: Compare coefficients
We now compare the coefficients for each of the models that were trained above. We will create a table of features and learned coefficients associated with each of the different L2 penalty values.
Below is a simple helper function that will help us create this table.
End of explanation
"""
coeff_L2_0_table = add_coefficients_to_table(coefficients_0_penalty, ['intercept'] + important_words)
coeff_L2_4_table = add_coefficients_to_table(coefficients_4_penalty, ['intercept'] + important_words)
coeff_L2_10_table = add_coefficients_to_table(coefficients_10_penalty, ['intercept'] + important_words)
coeff_L2_1e2_table = add_coefficients_to_table(coefficients_1e2_penalty, ['intercept'] + important_words)
coeff_L2_1e3_table = add_coefficients_to_table(coefficients_1e3_penalty, ['intercept'] + important_words)
coeff_L2_1e5_table = add_coefficients_to_table(coefficients_1e5_penalty, ['intercept'] + important_words)
"""
Explanation: Now, let's run the function add_coefficients_to_table for each of the L2 penalty strengths.
End of explanation
"""
positive_words = coeff_L2_0_table.sort_values(ascending=False)[0:5].index.tolist()
negative_words = coeff_L2_0_table.sort_values(ascending=True)[0:5].index.tolist()
"""
Explanation: Using the coefficients trained with L2 penalty 0, find the 5 most positive words (with largest positive coefficients). Save them to positive_words. Similarly, find the 5 most negative words (with largest negative coefficients) and save them to negative_words.
End of explanation
"""
print "positive_words: ", positive_words
print "negative_words: ", negative_words
"""
Explanation: Quiz Question. Which of the following is not listed in either positive_words or negative_words?
End of explanation
"""
l2_pen_vals = [0.0, 4.0, 10.0, 1.0e2, 1.0e3, 1.0e5]
"""
Explanation: Plotting the Coefficient Path with Increase in L2 Penalty
Let us observe the effect of increasing L2 penalty on the 10 words just selected.
First, let's put the 6 L2 penalty values we considered in a list.
End of explanation
"""
feature_words_lst = ['intercept'] + important_words
"""
Explanation: Next, let's put all the words we considered as features for the classification model plus the intercept features
End of explanation
"""
pos_word_coeff_dict = {}
for curr_word in positive_words:
# Finding the index of the word we are considering in the feature_words_lst
word_index = feature_words_lst.index(curr_word)
# Filling in the list for this index with the coefficient values for the 6 L2 penalties we considered.
pos_word_coeff_dict[curr_word] = [coefficients_0_penalty[word_index], coefficients_4_penalty[word_index],
coefficients_10_penalty[word_index], coefficients_1e2_penalty[word_index],
coefficients_1e3_penalty[word_index], coefficients_1e5_penalty[word_index] ]
neg_word_coeff_dict = {}
for curr_word in negative_words:
# Finding the index of the word we are considering in the feature_words_lst
word_index = feature_words_lst.index(curr_word)
# Filling in the list for this index with the coefficient values for the 6 L2 penalties we considered.
neg_word_coeff_dict[curr_word] = [coefficients_0_penalty[word_index], coefficients_4_penalty[word_index],
coefficients_10_penalty[word_index], coefficients_1e2_penalty[word_index],
coefficients_1e3_penalty[word_index], coefficients_1e5_penalty[word_index] ]
"""
Explanation: Now, we will fill in 2 dictionaries, one indexed by the 5 positive words and the other by the 5 negative words. For each word, we store a list of its coefficient values for the 6 different L2 penalties we considered.
End of explanation
"""
plt.figure(figsize=(10,6))
for pos_word in positive_words:
plt.semilogx(l2_pen_vals, pos_word_coeff_dict[pos_word], linewidth =2, label = pos_word )
plt.plot(l2_pen_vals, [0,0,0,0,0,0],linewidth =2, linestyle = '--', color = "black")
plt.axis([4.0, 1.0e5, -0.5, 1.5])
plt.title("Positive Words Coefficient Path", fontsize=18)
plt.xlabel("L2 Penalty ($\lambda$)", fontsize=18)
plt.ylabel("Coefficient Value", fontsize=18)
plt.legend(loc = "upper right", fontsize=18)
"""
Explanation: Plotting coefficient path for positive words
End of explanation
"""
plt.figure(figsize=(10,6))
for pos_word in negative_words:
plt.semilogx(l2_pen_vals, neg_word_coeff_dict[pos_word], linewidth =2, label = pos_word )
plt.plot(l2_pen_vals, [0,0,0,0,0,0],linewidth =2, linestyle = '--', color = "black")
plt.axis([4.0, 1.0e5, -1.5, 0.5])
plt.title("Negative Words Coefficient Path", fontsize=18)
plt.xlabel("L2 Penalty ($\lambda$)", fontsize=18)
plt.ylabel("Coefficient Value", fontsize=18)
plt.legend(loc = "lower right", fontsize=18)
"""
Explanation: Plotting coefficient path for negative words
End of explanation
"""
# Compute the scores as a dot product between feature_matrix and coefficients.
scores_l2_pen_0_train = np.dot(feature_matrix_train, coefficients_0_penalty)
scores_l2_pen_4_train = np.dot(feature_matrix_train, coefficients_4_penalty)
scores_l2_pen_10_train = np.dot(feature_matrix_train, coefficients_10_penalty)
scores_l2_pen_1e2_train = np.dot(feature_matrix_train, coefficients_1e2_penalty)
scores_l2_pen_1e3_train = np.dot(feature_matrix_train, coefficients_1e3_penalty)
scores_l2_pen_1e5_train = np.dot(feature_matrix_train, coefficients_1e5_penalty)
scores_l2_pen_0_valid = np.dot(feature_matrix_valid, coefficients_0_penalty)
scores_l2_pen_4_valid = np.dot(feature_matrix_valid, coefficients_4_penalty)
scores_l2_pen_10_valid = np.dot(feature_matrix_valid, coefficients_10_penalty)
scores_l2_pen_1e2_valid = np.dot(feature_matrix_valid, coefficients_1e2_penalty)
scores_l2_pen_1e3_valid = np.dot(feature_matrix_valid, coefficients_1e3_penalty)
scores_l2_pen_1e5_valid = np.dot(feature_matrix_valid, coefficients_1e5_penalty)
"""
Explanation: The following 2 questions relate to the 2 figures above.
Quiz Question: (True/False) All coefficients consistently get smaller in size as the L2 penalty is increased.
True
Quiz Question: (True/False) The relative order of coefficients is preserved as the L2 penalty is increased. (For example, if the coefficient for 'cat' was more positive than that for 'dog', this remains true as the L2 penalty increases.)
False
Measuring accuracy
Now, let us compute the accuracy of the classifier model. Recall that the accuracy is given by
$$
\mbox{accuracy} = \frac{\mbox{# correctly classified data points}}{\mbox{# total data points}}
$$
Recall from lecture that the class prediction is calculated using
$$
\hat{y}_i =
\left\{
\begin{array}{ll}
+1 & h(\mathbf{x}_i)^T\mathbf{w} > 0 \\
-1 & h(\mathbf{x}_i)^T\mathbf{w} \leq 0 \\
\end{array}
\right.
$$
Note: It is important to know that the model prediction code doesn't change even with the addition of an L2 penalty. The only thing that changes is the estimated coefficients used in this prediction.
Step 1: First compute the scores as the dot product of feature_matrix and coefficients. Do this for the training data and the validation data.
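A toy sketch of Step 1 (hypothetical feature counts and coefficients): the score is a plain matrix–vector product.

```python
import numpy as np

feature_matrix = np.array([[1.0, 2.0, 0.0],
                           [1.0, 0.0, 3.0]])   # intercept + 2 word counts
coefficients = np.array([-0.5, 1.0, -1.0])

scores = np.dot(feature_matrix, coefficients)  # one score per data point
```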
End of explanation
"""
def get_pred_from_score(scores_array):
    # Classify each score: +1 (positive review) if score > 0, otherwise -1 (negative review).
    # np.where leaves the input scores array unmodified.
    predictions = np.where(scores_array > 0, 1, -1)
    return predictions
"""
Explanation: Step 2: Using the formula above, compute the class predictions from the scores.
First, writing a helper function that will return an array with the predictions.
End of explanation
"""
pred_l2_pen_0_train = get_pred_from_score(scores_l2_pen_0_train)
pred_l2_pen_4_train = get_pred_from_score(scores_l2_pen_4_train)
pred_l2_pen_10_train = get_pred_from_score(scores_l2_pen_10_train)
pred_l2_pen_1e2_train = get_pred_from_score(scores_l2_pen_1e2_train)
pred_l2_pen_1e3_train = get_pred_from_score(scores_l2_pen_1e3_train)
pred_l2_pen_1e5_train = get_pred_from_score(scores_l2_pen_1e5_train)
pred_l2_pen_0_valid = get_pred_from_score(scores_l2_pen_0_valid)
pred_l2_pen_4_valid = get_pred_from_score(scores_l2_pen_4_valid)
pred_l2_pen_10_valid = get_pred_from_score(scores_l2_pen_10_valid)
pred_l2_pen_1e2_valid = get_pred_from_score(scores_l2_pen_1e2_valid)
pred_l2_pen_1e3_valid = get_pred_from_score(scores_l2_pen_1e3_valid)
pred_l2_pen_1e5_valid = get_pred_from_score(scores_l2_pen_1e5_valid)
"""
Explanation: Now, getting the predictions for the training data and the validation data for the 6 L2 penalties we considered.
End of explanation
"""
train_accuracy = {}
train_accuracy[0] = np.sum(pred_l2_pen_0_train==sentiment_train)/float(len(sentiment_train))
train_accuracy[4] = np.sum(pred_l2_pen_4_train==sentiment_train)/float(len(sentiment_train))
train_accuracy[10] = np.sum(pred_l2_pen_10_train==sentiment_train)/float(len(sentiment_train))
train_accuracy[1e2] = np.sum(pred_l2_pen_1e2_train==sentiment_train)/float(len(sentiment_train))
train_accuracy[1e3] = np.sum(pred_l2_pen_1e3_train==sentiment_train)/float(len(sentiment_train))
train_accuracy[1e5] = np.sum(pred_l2_pen_1e5_train==sentiment_train)/float(len(sentiment_train))
validation_accuracy = {}
validation_accuracy[0] = np.sum(pred_l2_pen_0_valid==sentiment_valid)/float(len(sentiment_valid))
validation_accuracy[4] = np.sum(pred_l2_pen_4_valid==sentiment_valid)/float(len(sentiment_valid))
validation_accuracy[10] = np.sum(pred_l2_pen_10_valid==sentiment_valid)/float(len(sentiment_valid))
validation_accuracy[1e2] = np.sum(pred_l2_pen_1e2_valid==sentiment_valid)/float(len(sentiment_valid))
validation_accuracy[1e3] = np.sum(pred_l2_pen_1e3_valid==sentiment_valid)/float(len(sentiment_valid))
validation_accuracy[1e5] = np.sum(pred_l2_pen_1e5_valid==sentiment_valid)/float(len(sentiment_valid))
# Build a simple report
for key in sorted(validation_accuracy.keys()):
print "L2 penalty = %g" % key
print "train accuracy = %s, validation_accuracy = %s" % (train_accuracy[key], validation_accuracy[key])
print "--------------------------------------------------------------------------------"
"""
Explanation: Step 3: Getting the accuracy for the training set data and the validation set data
End of explanation
"""
accuracy_training_data = [(train_accuracy[0], 0), (train_accuracy[4], 4), (train_accuracy[10], 10),
(train_accuracy[1e2], 1e2), (train_accuracy[1e3], 1e3), (train_accuracy[1e5], 1e5)]
accuracy_validation_data = [(validation_accuracy[0], 0), (validation_accuracy[4], 4), (validation_accuracy[10], 10),
(validation_accuracy[1e2], 1e2), (validation_accuracy[1e3], 1e3), (validation_accuracy[1e5], 1e5)]
"""
Explanation: Creating a list of tuples with the entries as (accuracy, l2_penalty) for the training set and the validation set.
End of explanation
"""
max(accuracy_training_data)[1]
"""
Explanation: Quiz question: Which model (L2 = 0, 4, 10, 100, 1e3, 1e5) has the highest accuracy on the training data?
End of explanation
"""
max(accuracy_validation_data)[1]
"""
Explanation: Quiz question: Which model (L2 = 0, 4, 10, 100, 1e3, 1e5) has the highest accuracy on the validation data?
End of explanation
"""
jrkerns/pylinac | docs/source/pylinac_core_hacking.ipynb | mit

%matplotlib inline
from urllib.request import urlretrieve
import matplotlib.pyplot as plt
import numpy as np
from pylinac.core import image
# pylinac demo images' URL
PF_URL = 'https://s3.amazonaws.com/pylinac/EPID-PF-LR.dcm'
STAR_URL = 'https://s3.amazonaws.com/pylinac/starshot.tif'
# local downloaded images
PF_FILE, _ = urlretrieve(PF_URL)
STAR_FILE, _ = urlretrieve(STAR_URL)
# sample numpy array
ARR = np.arange(36).reshape((6,6))
"""
Explanation: Hacking your own tools with Pylinac
Pylinac's main purpose is to make doing routine quality assurance easier and as automatic as possible. The main modules can be utilized in only a few lines of code and does a complete analysis of your QA images. However, when researching new tools, comparing specific algorithms, or doing custom testing, the default tools aren't appropriate. Pylinac's modules are built on simpler tools that can be used directly or easily extended. In this tutorial, we'll introduce some of pylinac's potential as a toolkit for customizing or creating new tests.
This is also viewable as a Jupyter notebook <http://nbviewer.ipython.org/github/jrkerns/pylinac/blob/master/docs/source/pylinac_core_hacking.ipynb>_.
The image Module
In many applications of QA, the physicist has images acquired via the EPID, kV imager, or scanned films. The image module provides easy tools for loading, manipulating, and converting these images. For this tutorial we'll use a few demo images and showcase the various methods available.
There are a few simple tips for using the image module:
Load images directly using load, load_url, or load_multiples.
There are 3 related image classes to handle the given image types: Dicom, File, and Array
The docs for the image API are here.
First, our imports for the whole tutorial:
End of explanation
"""
pfimg = image.load_url(PF_URL)
type(pfimg)
"""
Explanation: Loading images
Let's load the picket fence demo image:
End of explanation
"""
pfimg = image.load(PF_FILE)
type(pfimg)
"""
Explanation: We can load the local file just as easily:
End of explanation
"""
arr = np.arange(36).reshape((6,6))
img2 = image.load(arr)  # load() accepts a plain numpy array directly
type(img2)
"""
Explanation: The load() function can take an string pointing to a file, a data stream, or a numpy array:
End of explanation
"""
superimposed_img = image.load_multiples([STAR_FILE, STAR_FILE])
"""
Explanation: Additionally, multiple images can be loaded and superimposed, for example multiple collimator star shots. Just pass a list of the images. In this case I'm passing the same image twice, but the point is that this can be done with any set of images:
End of explanation
"""
dcm_img = image.DicomImage(PF_FILE)
arr_img = image.ArrayImage(ARR, dpi=30, sid=500)
file_img = image.FileImage(STAR_FILE, dpi=50, sid=1000)
file_img2 = image.load(STAR_FILE, dpi=30, sid=500)
type(file_img) == type(file_img2)
"""
Explanation: While the load() function will always do smart image inference for you, if you already know the file type you can instantiate directly. Furthermore, keyword arguments can be passed to FileImage and ArrayImage if they are known.
End of explanation
"""
dcm_img.plot()
"""
Explanation: Plotting
Images can be easily plotted using the plot() method:
End of explanation
"""
fig, ax = plt.subplots()
dcm_img.plot(ax=ax)
"""
Explanation: The plot can also be passed to an existing matplotlib axes:
End of explanation
"""
dcm_img.dpi
dcm_img.dpmm
dcm_img.sid
dcm_img.shape
"""
Explanation: Attributes
The image classes contain a number of useful attributes for analyzing and describing the data:
End of explanation
"""
dcm_img.cax # the beam CAX
dcm_img.center # the center location of the image
"""
Explanation: There are also attributes that are useful in radiation therapy:
End of explanation
"""
dcm_img[12, 60]
dcm_img[:100, 82]
"""
Explanation: Above, the values are the same because the EPID was not translated, which would move the CAX but not the image center.
The image values can also be sampled by slicing and indexing:
End of explanation
"""
file_img.filter(kind='median', size=3)
file_img.plot()
"""
Explanation: Data manipulation
Now the really fun stuff!
There are many methods available to manipulate the data.
First, let's smooth the data:
End of explanation
"""
file_img.remove_edges(pixels=100)
file_img.plot()
"""
Explanation: Sometimes starshots from scanned film have edges that are very high or low value (corners of the film can be bent or rounded). We can easily trim the edges:
End of explanation
"""
file_img.invert()
file_img.plot()
file_img.roll(direction='y', amount=300)
file_img.plot()
"""
Explanation: The data can also be explicitly inverted (EPID images oftentimes need this), or rolled on an axis:
End of explanation
"""
file_img.rot90(n=1)
file_img.plot()
file_img.resize(size=(1000, 1100))
file_img.plot()
"""
Explanation: We can also rotate and resize the image:
End of explanation
"""
np.min(file_img) # before grounding
file_img.ground()
np.min(file_img) # after grounding
file_img.plot()
"""
Explanation: Scanned film values can be very high, even in low dose areas. We can thus "ground" the image so that the lowest value is zero; this will help us later on when detecting profiles.
End of explanation
"""
thresh_val = np.percentile(file_img, 95)
file_img.threshold(thresh_val)
file_img.plot()
"""
Explanation: We can also apply a high-pass filter to the image:
End of explanation
"""
new_img = file_img.as_binary(thresh_val)
new_img.plot()
"""
Explanation: The image can also be converted to binary, which can be used later for ROI detection. Note that unlike any other method, this returns a new ArrayImage (hinted by the as_)...
End of explanation
"""
file_img.plot()
"""
Explanation: ...and leaves the original unchanged.
End of explanation
"""
from pylinac.core.profile import SingleProfile, MultiProfile, CircleProfile, CollapsedCircleProfile
star_img = image.load_url(STAR_URL)
"""
Explanation: The profile Module
Physicists often need to evaluate a profile, perhaps from a linac beam EPID image, or some fluence profile. The profile module allows the physicist to find peaks in a 1D array and determine beam profile information (FWHM, penumbra, etc). There are two main profile classes, plus two circular variants:
SingleProfile - This class is for profiles with a single peak; e.g. an open beam delivered to a film or EPID. The main goal of this class is to describe the profile (FWHM, penumbra, etc).
MultiProfile - This class is for profiles with multiple peaks. The main goal of this class is to find the peak or valley locations. A MultiProfile can be broken down into SingleProfiles.
CircleProfile - A MultiProfile, but in the shape of a circle.
CollapsedCircleProfile - A CircleProfile that "collapses" a thick ring of pixel data to create a 1D profile.
The profile API docs are here.
For this demonstration we'll find some peaks and then determine profile information about one of those peaks. Let's use the starshot demo image since it contains all the types of profiles:
End of explanation
"""
row = star_img[800, :]
plt.plot(row)
"""
Explanation: Using a MultiProfile
Let's start by sampling one row from the starshot image:
End of explanation
"""
mprof = MultiProfile(row)
mprof.plot()
"""
Explanation: So, judging by the profile, it needs to be filtered for the spurious signals, it has multiple peaks, and it's upside down.
Let's make a MultiProfile and clean up the data.
End of explanation
"""
mprof.invert()
mprof.plot()
"""
Explanation: First, let's invert it so that pixel value increases with dose.
End of explanation
"""
mprof.filter(size=6)
mprof.plot()
"""
Explanation: We've loaded the profile and inverted it; let's run a filter over it.
End of explanation
"""
mprof.find_peaks()
"""
Explanation: The profile could probably be filtered more since there's still a few spurious signals, but this will work nicely for our demonstration.
First, we want to find the peak locations:
End of explanation
"""
mprof.find_peaks(threshold=0.1)
"""
Explanation: The method has found the 3 major peaks of the profile. Note that there are actually 5 peaks if we count the spurious signals near indices 800 and 1200.
For fun, let's see if we can detect these peaks. We can change the parameters to find_peaks() to optimize our search.
End of explanation
"""
mprof.find_peaks(threshold=0.1, min_distance=0.02)
"""
Explanation: By lowering the peak height threshold we've found another peak; but the peak near 1200 wasn't found. What gives?
The find_peaks() method also eliminates peaks that are too close to one another. We can change that:
End of explanation
"""
mprof.find_peaks(threshold=0.1, min_distance=0.02, max_number=3)
"""
Explanation: By changing the minimum distance peaks must be from each other, we've found the other peak.
But, let's say we need to use these settings for whatever reason. We can additionally limit the number of peaks using max_number.
End of explanation
"""
mprof.plot()
"""
Explanation: Now, we can visualize where these peaks are by using the plot() method, which shows the peaks if we've searched for them; note the green dots at the detected peak locations.
End of explanation
"""
mprof.find_peaks(search_region=(0, 0.6)) # search the left 60% of the profile
"""
Explanation: We can also search a given portion of the region; for example if we only wanted to detect peaks in the first half of the profile we can easily add a search_region. Note that the last peak was not detected.
End of explanation
"""
mprof.find_fwxm_peaks(x=50) # 50 is 50% height
"""
Explanation: We can search not simply for the max value peaks, but for the FWHM peaks. Note that these values are slightly different than the max value peaks we found earlier.
End of explanation
"""
single_profiles = mprof.subdivide() # returns a list of SingleProfile's
"""
Explanation: Finally, we can subdivide the profile into SingleProfile's to further describe single peaks:
End of explanation
"""
sprof = single_profiles[0]
sprof.plot()
"""
Explanation: Using a SingleProfile
SingleProfiles are useful to describe profiles with a single peak. It can describe the FWXM (X=any height), penumbra on each side, field width, and calculations of the field. Continuing from above:
End of explanation
"""
sprof.fwxm(x=50)
sprof.fwxm_center(x=50)
"""
Explanation: The multiprofile has been cut into multiple single profiles, of which this is the first.
Let's first find the FWHM, and the center of the FWHM:
End of explanation
"""
sprof.penumbra_width(side='left', upper=80, lower=20)
sprof.penumbra_width(upper=90, lower=10) # default is average of both sides
"""
Explanation: Note that this is the same value as the first FWHM peak value we found in the MultiProfile.
We can now find the penumbra values:
End of explanation
"""
sprof.ground()
sprof.plot()
sprof.penumbra_width(upper=90, lower=10)
"""
Explanation: The careful reader will notice that the profiles, since we created them, have not had a minimum value of 0. Normally, this would cause problems and sometimes it does, but pylinac normalizes the FWXM and penumbra search. However, just to make sure all is well we can easily shift the values so that the lower bound is 0:
End of explanation
"""
np.max(sprof)
sprof.normalize()
sprof.plot()
"""
Explanation: The average penumbra is the same as we found earlier.
We can also normalize and stretch the profile values. Let's first get the original maximum value so we know what we need to restore the profile:
End of explanation
"""
sprof.stretch(min=30, max=284)
sprof.plot()
"""
Explanation: We can also stretch the values to be bounded by any values we please:
End of explanation
"""
sprof.stretch(max=47, min=0)
sprof.plot()
"""
Explanation: Let's restore our profile based on the earlier max and min values:
End of explanation
"""
star_img.plot()
"""
Explanation: Using CircleProfile and CollapsedCircleProfile
Circular profiles are useful for concentric profiles; starshots are great examples. Let's explore the two circular profiles as they relate to a starshot image.
Let's once again load the starshot demo image:
End of explanation
"""
star_img.invert()
"""
Explanation: We saw above from the profile that the image is actually inverted (pixel value decreases with dose), so we need to invert the image:
End of explanation
"""
cprof = CircleProfile(center=(1250, 1450), radius=300, image_array=star_img) # center is given as (x, y)
cprof.plot()
"""
Explanation: To use a CircleProfile we need to specify the center and radius of the circle, as well as the array to operate over. The approximate center of the starshot is (1450, 1250). Let's also use a radius of 300 pixels.
End of explanation
"""
cprof = CircleProfile(center=(1250, 1450), radius=300, image_array=star_img, start_angle=0.2, ccw=False)
cprof.plot()
"""
Explanation: We have a nice profile showing the starshot peaks. It appears we've cut one of the peaks in half though; this is because the profile starts at 0 radians on the unit circle (directly to the right) and there is a starshot line right at 0. We can also change the direction of the profile from the default of counter-clockwise to clockwise:
End of explanation
"""
cprof.roll(amount=50)
cprof.plot()
"""
Explanation: Alternatively, we can roll the profile directly:
End of explanation
"""
cprof.find_peaks()
cprof.plot()
"""
Explanation: Now, because CircleProfile is a subclass of MultiProfile we can search for the peaks:
End of explanation
"""
cprof.x_locations
cprof.y_locations
"""
Explanation: The profile is 1D, but was derived from a circular sampling. How do we know what the locations of the sampling is? We have x and y attributes:
End of explanation
"""
ax = star_img.plot(show=False)
cprof.plot2axes(ax)
"""
Explanation: We can also add this profile to a plot to show where it is:
End of explanation
"""
ccprof = CollapsedCircleProfile(center=(1250, 1450), radius=300, image_array=star_img, width_ratio=0.2, num_profiles=10)
ccprof.plot()
"""
Explanation: Looks good! Now let's take a CollapsedCircleProfile. The advantage of this class is that a thick ring is sampled, which averages the pixel values. Thus, noise and spurious signals are reduced. Beyond CircleProfile there are 2 more keyword arguments:
End of explanation
"""
ccprof.find_peaks()
ccprof.plot()
ax = star_img.plot(show=False)
ccprof.plot2axes(ax)
"""
Explanation: Note that this profile looks smoothed; this comes from averaging over the num_profiles within the ring. The width_ratio is a function of the radius, so in this case the actual ring width is 300*0.2 = 60 pixels, and 10 equally distributed profiles are taken within that ring.
Let's find the peaks and then plot the ring to the starshot image:
End of explanation
"""
Danghor/Formal-Languages | ANTLR4-Python/Interpreter/Interpreter.ipynb | gpl-2.0

!cat -n Pure.g4
"""
Explanation: An Interpreter for a Simple Programming Language
In this notebook we develop an interpreter for a small programming language.
The grammar for this language is stored in the file Pure.g4.
End of explanation
"""
!cat sum.sl
"""
Explanation: The grammar shown above contains only skip actions. The corresponding grammar that is enriched with actions is stored in the file Simple.g4.
An example program that conforms to this grammar is stored in the file sum.sl.
End of explanation
"""
!cat -n Simple.g4
"""
Explanation: The file Simple.g4 contains a parser for the language described by the grammar Pure.g4. This parser returns
an abstract syntax tree. This tree is represented as a nested tuple.
End of explanation
"""
!cat sum.ast
!antlr4 -Dlanguage=Python3 Simple.g4
from SimpleLexer import SimpleLexer
from SimpleParser import SimpleParser
import antlr4
%run ../AST-2-Dot.ipynb
"""
Explanation: The parser shown above will transform the program sum.sl into the nested tuple stored in the file sum.ast.
End of explanation
"""
def main(file):
with open(file, 'r') as handle:
program_text = handle.read()
input_stream = antlr4.InputStream(program_text)
lexer = SimpleLexer(input_stream)
token_stream = antlr4.CommonTokenStream(lexer)
parser = SimpleParser(token_stream)
result = parser.program()
Statements = result.stmnt_list
ast = tuple2dot(Statements)
print(Statements)
display(ast)
ast.render('ast', view=True)
execute_tuple(Statements)
"""
Explanation: The function main takes one parameter file. This parameter is a string specifying a program file.
The function reads the program contained in this file and executes it.
End of explanation
"""
def execute_tuple(Statement_List, Values={}):
for stmnt in Statement_List:
execute(stmnt, Values)
"""
Explanation: The function execute_tuple takes two arguments:
- Statement_List is a list of statements,
- Values is a dictionary assigning integer values to variable names.
The function executes the statements in Statement_List. If an assignment statement is executed,
the dictionary Values is updated.
End of explanation
"""
L = [1,2,3,4,5]
a, b, *R = L
a, b, R
def execute(stmnt, Values):
op = stmnt[0]
if stmnt == 'program':
pass
elif op == ':=':
_, var, value = stmnt
Values[var] = evaluate(value, Values)
elif op == 'read':
_, var = stmnt
Values[var] = int(input())
elif op == 'print':
_, expr = stmnt
print(evaluate(expr, Values))
elif op == 'if':
_, test, *SL = stmnt
if evaluate(test, Values):
execute_tuple(SL, Values)
elif op == 'while':
_, test, *SL = stmnt
while evaluate(test, Values):
execute_tuple(SL, Values)
else:
assert False, f'{stmnt} unexpected'
"""
Explanation: The function execute takes two arguments:
- stmnt is a statement,
- Values is a dictionary assigning integer values to variable names.
The function executes the single statement stmnt. If it is an assignment statement,
the dictionary Values is updated.
End of explanation
"""
def evaluate(expr, Values):
if isinstance(expr, int):
return expr
if isinstance(expr, str):
return Values[expr]
op = expr[0]
if op == '==':
_, lhs, rhs = expr
return evaluate(lhs, Values) == evaluate(rhs, Values)
if op == '<':
_, lhs, rhs = expr
return evaluate(lhs, Values) < evaluate(rhs, Values)
if op == '+':
_, lhs, rhs = expr
return evaluate(lhs, Values) + evaluate(rhs, Values)
if op == '-':
_, lhs, rhs = expr
return evaluate(lhs, Values) - evaluate(rhs, Values)
if op == '*':
_, lhs, rhs = expr
return evaluate(lhs, Values) * evaluate(rhs, Values)
if op == '/':
_, lhs, rhs = expr
return evaluate(lhs, Values) / evaluate(rhs, Values)
    assert False, f'{expr} unexpected'
!cat sum.sl
main('sum.sl')
!cat factorial.sl
main('factorial.sl')
!rm *.py *.tokens *.interp
!rm -r __pycache__/
!rm *.pdf
!ls
"""
Explanation: The function evaluate takes two arguments:
- expr is a logical expression or an arithmetic expression,
- Values is a dictionary assigning integer values to variable names.
The function evaluates the given expression and returns this value.
End of explanation
"""
LxMLS/lxmls-toolkit | labs/notebooks/basic_tutorials/Exercises_0.10_to_0.14.ipynb | mit

%load_ext autoreload
%autoreload 2
from lxmls.readers import galton
galton_data = galton.load()
print(galton_data.mean(0))
print(galton_data.std(0))
import matplotlib.pyplot as plt
%matplotlib inline
plt.hist(galton_data)
plt.plot(galton_data[:,0], galton_data[:,1], '.')
import numpy as np
np.random.randn?
galton_data_randn = galton_data + 0.5*np.random.randn(len(galton_data), 2)
plt.plot(galton_data_randn[:,0], galton_data_randn[:,1], '.')
"""
Explanation: Exercise 0.10
Over the next couple of exercises we will make use of the Galton dataset, a dataset of heights of fathers
and sons from the 1877 paper that first discussed the “regression to the mean” phenomenon. This dataset has 928 pairs
of numbers.
* Use the load() function in the galton.py file to load the dataset. The file is located under the lxmls/readers folder. Type the following in your Python interpreter:
import galton as galton
galton_data = galton.load()
What are the mean height and standard deviation of all the people in the sample? What is the mean height of the
fathers and of the sons?
Plot a histogram of all the heights (you might want to use the plt.hist function and the ravel method on
arrays).
Plot the height of the father versus the height of the son.
You should notice that there are several points that are exactly the same (e.g., there are 21 pairs with the values 68.5
and 70.2). Use the ? command in ipython to read the documentation for the numpy.random.randn function
and add random jitter (i.e., move the point a little bit) to the points before displaying them. Does your impression
of the data change?
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
a = np.arange(-5,5,0.01)
f_x = np.power(a,2)
plt.plot(a,f_x)
plt.xlim(-5,5)
plt.ylim(-5,15)
k = np.array([-2,0,2])
plt.plot(k,k**2,"bo")
for i in k:
plt.plot(a, (2*i)*a - (i**2))
"""
Explanation: Exercise 0.11
Consider the function $f(x) = x^2$ and its derivative
$ \frac{\partial f}{\partial x} $.
Look at the derivative of that function at points [-2,0,2], draw the tangent to the graph in that point
$ \frac{\partial f}{\partial x}(-2)=-4 $,
$ \frac{\partial f}{\partial x}(0)=0 $ and
$ \frac{\partial f}{\partial x}(2)=4 $.
For example, the tangent
equation for $x = -2$ is $y = -4x - b$, where $b = f(-2)$. The following code plots the function and the derivatives on
those points using matplotlib (See Figure 4).
End of explanation
"""
def get_y(x):
return (x+2)**2 - 16*np.exp(-((x-2)**2))
"""
Explanation: Exercise 0.12
Consider the function $f(x) = (x + 2)^2 - 16 \text{exp}(-(x - 2)^2)$
. Make a function that computes the
function value given x
End of explanation
"""
x = np.arange(-8, 8, 0.001)
y = get_y(x)
plt.plot(x, y)
"""
Explanation: Draw a plot around $x \in [-8, 8]$.
End of explanation
"""
def get_grad(x):
return (2*x+4)-16*(-2*x + 4)*np.exp(-((x-2)**2))
"""
Explanation: Calculate the derivative of the function $f(x)$, implement the function get grad(x).
End of explanation
"""
def gradient_descent_scalar(start_x, func, grad, step_size=0.1, prec=0.0001):
max_iter=100
x_new = start_x
res = []
for i in range(max_iter):
x_old = x_new
#Use negative step size for gradient descent
x_new = x_old - step_size * grad(x_new)
f_x_new = func(x_new)
f_x_old = func(x_old)
res.append([x_new,f_x_new])
if(abs(f_x_new - f_x_old) < prec):
print("change in function values too small, leaving")
return np.array(res)
print("exceeded maximum number of iterations, leaving")
return np.array(res)
"""
Explanation: Use the method gradient descent to find the minimum of this function. Convince yourself that the code is doing the
proper thing. Look at the constants we defined. Note, that we are using a simple approach to pick the step size (always
have the value step size) which is not necessarily correct
End of explanation
"""
x = np.arange(-8, 8, 0.001)
y = get_y(x)
plt.plot(x, y)
x_0 = -8
res = gradient_descent_scalar(x_0, get_y, get_grad)
plt.plot(res[:,0], res[:,1], 'r+')
"""
Explanation: Run the gradient descent algorithm starting from $x_0 = -8$ and plot the minimizing sequence.
End of explanation
"""
x = np.arange(-8, 8, 0.001)
y = get_y(x)
plt.plot(x, y)
x_0 = 8
res = gradient_descent_scalar(x_0, get_y, get_grad)
plt.plot(res[:,0], res[:,1], 'r+')
"""
Explanation: Note that the algorithm converged to a minimum, but since the function is not convex it converged only to a local minimum.
Now try the same exercise starting from the initial point $x_0 = 8$.
End of explanation
"""
x = np.arange(-8, 8, 0.001)
y = get_y(x)
plt.plot(x, y)
x_0 = 8
res = gradient_descent_scalar(x_0, get_y, get_grad, step_size=0.01)
plt.plot(res[:,0], res[:,1], 'r+')
"""
Explanation: Note that now the algorithm converged to the global minimum. However, note that to get to the global minimum the sequence of points jumped from one side of the minimum to the other. This is a consequence of using a wrong step size (in this case too large). Repeat the previous exercise changing both the values of the step-size and the precision. What do you observe?
End of explanation
"""
# Get data.
use_bias = True #True
y = galton_data[:,0]
if use_bias:
x = np.vstack( [galton_data[:,1], np.ones(galton_data.shape[0])] )
else:
x = np.vstack( [galton_data[:,1], np.zeros(galton_data.shape[0])] )
# derivative of the error function e
def get_e_dev(w, x, y): # y, x,
error_i = np.matmul(w, x) - y
derro_dw = np.matmul(2*x, error_i) / len(y)
# print(derro_dw, np.multiply(error_i,error_i).sum())
return derro_dw
# Initialize w.
w = np.array([1,0])
#get_e_dev(w, x, y)
# Initialize w.
w = np.array([0.5,50.0])
def gradient_descent_linear_regression(start_w, step_size, prec, x,y): #gradient=get_e_dev
'''
runs the gradient descent algorithm and returns the list of estimates
'''
w_new = start_w
w_old = start_w + prec * 2
res = [w_new]
mses = []
while abs(w_old-w_new).sum() > prec:
w_old = w_new
w_new = w_old - step_size * get_e_dev(w_new, x, y)
res.append(w_new)
error_i = np.matmul(w_new, x) - y
mse = np.multiply(error_i, error_i).mean()
mses.append(mse)
print(w_new, mse)
return np.array(res), np.array(mses)
res, mses = gradient_descent_linear_regression(w, 0.0002, 0.00001, x, y)
w_sgd = res[-1]
w_sgd
#res
"""
Explanation: Exercise 0.13
Consider the linear regression problem (ordinary least squares) on the Galton dataset, with a single response
variable
\begin{equation}
y = x^Tw + ε
\end{equation}
The linear regression problem is, given a set $\{y^{(i)}\}_i$ of samples of $y$ and the corresponding $x^{(i)}$ vectors, estimate $w$
to minimise the sum of the $\epsilon$ variables. Traditionally this is solved analytically to obtain a closed form solution (although
this is not the way in which it should be computed in this exercise, linear algebra packages have an optimised solver,
with numpy, use numpy.linalg.lstsq).
Alternatively, we can define the error function for each possible $w$:
\begin{equation}
e(w) = \sum_i ( x^{(i)^T} w - y^{(i)} )^2.
\end{equation}
1) Derive the gradient of the error $\frac{\partial e}{\partial w_j}$.
\begin{equation}
\frac{\partial e}{\partial w_j} = \sum_i 2 x_j^{(i)} ( x^{(i)^T} w - y^{(i)} ).
\end{equation}
2) Implement a solver based on this for two dimensional problem (i.e., $w \in R^2$)
3) Use this method on the Galton dataset from the previous exercise to estimate the relationship between father and son’s height. Try two formulas
$s = f w_1 + \epsilon$,
where s is the son’s height, and f is the father heights; and
$s = f w_1 + 1w_0 + \epsilon$,
where the input variable is now two dimensional: $(f , 1)$. This allows the intercept to be non-zero.
End of explanation
"""
plt.plot(res[:, 0], res[:, 1], '*')
plt.show()
plt.plot(mses)
np.multiply(y - np.matmul(w_sgd,x), y - np.matmul(w_sgd,x)).sum()
"""
Explanation: 4) Plot the regression line you obtain with the points from the previous exercise.
End of explanation
"""
from numpy.linalg import lstsq
m, c = np.linalg.lstsq(x.T, y, rcond=None)[0]
print(m,c)
w_lstsq = np.array([m, c])
error2 = np.multiply(y - np.matmul(w_lstsq,x), y - np.matmul(w_lstsq,x)).sum()
print(error2)
#lstsq(a=, b=)
"""
Explanation: 5) Use the np.linalg.lstsq function and compare to your solution.
End of explanation
"""
plt.plot(galton_data_randn[:,0], galton_data_randn[:,1], ".")
plt.title("We nailed it??")
maxim, minim = int(np.max(galton_data_randn[:,0])), int(np.min(galton_data_randn[:,0]))
xvals = np.array(range(minim-1, maxim+1))
# Gradient descent solution
yvals = w_sgd[0] * xvals + w_sgd[1]
plt.plot(xvals, yvals, '--', c='k',linewidth=2)
# solution from closed form
yvals2 = w_lstsq[0] * xvals + w_lstsq[1]
plt.plot(xvals, yvals2, '--', c='r',linewidth=2)
"""
Explanation: Plotting regressions:
End of explanation
"""
def next_x(x):
    x += np.random.normal(scale=.0625)
    if x < 0:
        return 0.
    return x
def walk():
    x = 0.
    iters = 0
    while x < 1.:
        x = next_x(x)
        iters += 1
    return iters
walks = np.array([walk() for i in range(1000)])
print(np.mean(walks))
"""
Explanation: Exercise 0.14
Use the debugger to debug the buggy.py script which attempts to repeatedly perform the following
computation:
Start $x_0 = 0$
Iterate
(a) $x'_{t+1} = x_t + r$, where $r$ is a random variable.
(b) if $x'_{t+1} >= 1$, then stop.
(c) if $x'_{t+1} <= 0$, then $x_{t+1} = 0$
(d) else $x_{t+1} = x'_{t+1}$
Return the number of iterations
Having repeated this computation a number of times, the programme prints the average. Unfortunately, the program
has a few bugs, which you need to fix.
End of explanation
"""
ThyrixYang/LearningNotes | MOOC/stanford_cnn_cs231n/assignment2/.ipynb_checkpoints/FullyConnectedNets-checkpoint.ipynb | gpl-3.0

# As usual, a bit of setup
from __future__ import print_function
import time
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.fc_net import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from cs231n.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
""" returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in list(data.items()):
print(('%s: ' % k, v.shape))
"""
Explanation: Fully-Connected Neural Nets
In the previous homework you implemented a fully-connected two-layer neural network on CIFAR-10. The implementation was simple but not very modular since the loss and gradient were computed in a single monolithic function. This is manageable for a simple two-layer network, but would become impractical as we move to bigger models. Ideally we want to build networks using a more modular design so that we can implement different layer types in isolation and then snap them together into models with different architectures.
In this exercise we will implement fully-connected networks using a more modular approach. For each layer we will implement a forward and a backward function. The forward function will receive inputs, weights, and other parameters and will return both an output and a cache object storing data needed for the backward pass, like this:
```python
def layer_forward(x, w):
""" Receive inputs x and weights w """
# Do some computations ...
z = # ... some intermediate value
# Do some more computations ...
out = # the output
cache = (x, w, z, out) # Values we need to compute gradients
return out, cache
```
The backward pass will receive upstream derivatives and the cache object, and will return gradients with respect to the inputs and weights, like this:
```python
def layer_backward(dout, cache):
"""
Receive derivative of loss with respect to outputs and cache,
and compute derivative with respect to inputs.
"""
# Unpack cache values
x, w, z, out = cache
# Use values in cache to compute derivatives
dx = # Derivative of loss with respect to x
dw = # Derivative of loss with respect to w
return dx, dw
```
After implementing a bunch of layers this way, we will be able to easily combine them to build classifiers with different architectures.
In addition to implementing fully-connected networks of arbitrary depth, we will also explore different update rules for optimization, and introduce Dropout as a regularizer and Batch Normalization as a tool to more efficiently optimize deep networks.
End of explanation
"""
# Test the affine_forward function
num_inputs = 2
input_shape = (4, 5, 6)
output_dim = 3
input_size = num_inputs * np.prod(input_shape)
weight_size = output_dim * np.prod(input_shape)
x = np.linspace(-0.1, 0.5, num=input_size).reshape(num_inputs, *input_shape)
w = np.linspace(-0.2, 0.3, num=weight_size).reshape(np.prod(input_shape), output_dim)
b = np.linspace(-0.3, 0.1, num=output_dim)
out, _ = affine_forward(x, w, b)
correct_out = np.array([[ 1.49834967, 1.70660132, 1.91485297],
[ 3.25553199, 3.5141327, 3.77273342]])
# Compare your output with ours. The error should be around 1e-9.
print('Testing affine_forward function:')
print('difference: ', rel_error(out, correct_out))
"""
Explanation: Affine layer: foward
Open the file cs231n/layers.py and implement the affine_forward function.
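Before writing it, it may help to see the shape of the computation: flatten each input to a row vector, then apply a single matrix multiply. The sketch below uses illustrative names (`affine_forward_sketch`), not the assignment's required code, and reuses the same `linspace` inputs as the check cell:

```python
import numpy as np

def affine_forward_sketch(x, w, b):
    # Flatten each input to a row vector, then compute out = x_flat @ w + b.
    out = x.reshape(x.shape[0], -1).dot(w) + b
    cache = (x, w, b)  # saved for the backward pass
    return out, cache

# Same linspace inputs as the notebook's check cell.
x = np.linspace(-0.1, 0.5, num=240).reshape(2, 4, 5, 6)
w = np.linspace(-0.2, 0.3, num=360).reshape(120, 3)
b = np.linspace(-0.3, 0.1, num=3)
out, _ = affine_forward_sketch(x, w, b)
```

The reshape-then-dot pattern is what lets the same layer accept images of any shape: only the batch dimension is preserved.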
Once you are done you can test your implementation by running the following:
End of explanation
"""
# Test the affine_backward function
np.random.seed(231)
x = np.random.randn(10, 2, 3)
w = np.random.randn(6, 5)
b = np.random.randn(5)
dout = np.random.randn(10, 5)
dx_num = eval_numerical_gradient_array(lambda x: affine_forward(x, w, b)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: affine_forward(x, w, b)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: affine_forward(x, w, b)[0], b, dout)
_, cache = affine_forward(x, w, b)
dx, dw, db = affine_backward(dout, cache)
# The error should be around 1e-10
print('Testing affine_backward function:')
print('dx error: ', rel_error(dx_num, dx))
print('dw error: ', rel_error(dw_num, dw))
print('db error: ', rel_error(db_num, db))
"""
Explanation: Affine layer: backward
Now implement the affine_backward function and test your implementation using numeric gradient checking.
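Since the forward pass is `out = x_flat @ w + b`, each gradient follows from the chain rule. A minimal sketch (illustrative names, not the required implementation):

```python
import numpy as np

def affine_forward_sketch(x, w, b):
    return x.reshape(x.shape[0], -1).dot(w) + b, (x, w, b)

def affine_backward_sketch(dout, cache):
    x, w, b = cache
    x_flat = x.reshape(x.shape[0], -1)
    dx = dout.dot(w.T).reshape(x.shape)  # (N, M) -> original input shape
    dw = x_flat.T.dot(dout)              # (D, M)
    db = dout.sum(axis=0)                # (M,)
    return dx, dw, db

np.random.seed(0)
x, w, b = np.random.randn(10, 2, 3), np.random.randn(6, 5), np.random.randn(5)
dout = np.random.randn(10, 5)
_, cache = affine_forward_sketch(x, w, b)
dx, dw, db = affine_backward_sketch(dout, cache)
```

Note that `dx` must be reshaped back to the original input shape, since the forward pass flattened it.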
End of explanation
"""
# Test the relu_forward function
x = np.linspace(-0.5, 0.5, num=12).reshape(3, 4)
out, _ = relu_forward(x)
correct_out = np.array([[ 0., 0., 0., 0., ],
[ 0., 0., 0.04545455, 0.13636364,],
[ 0.22727273, 0.31818182, 0.40909091, 0.5, ]])
# Compare your output with ours. The error should be around 5e-8
print('Testing relu_forward function:')
print('difference: ', rel_error(out, correct_out))
"""
Explanation: ReLU layer: forward
Implement the forward pass for the ReLU activation function in the relu_forward function and test your implementation using the following:
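The forward pass is essentially a one-liner; sketched here with illustrative names:

```python
import numpy as np

def relu_forward_sketch(x):
    out = np.maximum(0, x)  # elementwise: pass positives, zero out the rest
    cache = x               # the backward pass only needs the input's sign
    return out, cache

x = np.linspace(-0.5, 0.5, num=12).reshape(3, 4)
out, _ = relu_forward_sketch(x)
```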
End of explanation
"""
np.random.seed(231)
x = np.random.randn(10, 10)
dout = np.random.randn(*x.shape)
dx_num = eval_numerical_gradient_array(lambda x: relu_forward(x)[0], x, dout)
_, cache = relu_forward(x)
dx = relu_backward(dout, cache)
# The error should be around 3e-12
print('Testing relu_backward function:')
print('dx error: ', rel_error(dx_num, dx))
"""
Explanation: ReLU layer: backward
Now implement the backward pass for the ReLU activation function in the relu_backward function and test your implementation using numeric gradient checking:
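The backward pass simply masks the upstream gradient; a sketch with illustrative names:

```python
import numpy as np

def relu_backward_sketch(dout, cache):
    x = cache
    # The gradient flows only through entries where the input was positive.
    return dout * (x > 0)

x = np.array([[-1.0, 2.0], [0.0, 3.0]])
dx = relu_backward_sketch(np.ones_like(x), x)
```

The entry at exactly zero gets zero gradient under this convention, which is the common choice.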
End of explanation
"""
from cs231n.layer_utils import affine_relu_forward, affine_relu_backward
np.random.seed(231)
x = np.random.randn(2, 3, 4)
w = np.random.randn(12, 10)
b = np.random.randn(10)
dout = np.random.randn(2, 10)
out, cache = affine_relu_forward(x, w, b)
dx, dw, db = affine_relu_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda x: affine_relu_forward(x, w, b)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: affine_relu_forward(x, w, b)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: affine_relu_forward(x, w, b)[0], b, dout)
print('Testing affine_relu_forward:')
print('dx error: ', rel_error(dx_num, dx))
print('dw error: ', rel_error(dw_num, dw))
print('db error: ', rel_error(db_num, db))
"""
Explanation: "Sandwich" layers
There are some common patterns of layers that are frequently used in neural nets. For example, affine layers are frequently followed by a ReLU nonlinearity. To make these common patterns easy, we define several convenience layers in the file cs231n/layer_utils.py.
For now take a look at the affine_relu_forward and affine_relu_backward functions, and run the following to numerically gradient check the backward pass:
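The composition is just the two layers chained, with both caches kept. A sketch of the pattern (illustrative names, not the file's exact code):

```python
import numpy as np

def affine_relu_forward_sketch(x, w, b):
    # Affine layer followed by ReLU; keep what each backward pass needs.
    a = x.reshape(x.shape[0], -1).dot(w) + b
    out = np.maximum(0, a)
    return out, (x, w, a)

def affine_relu_backward_sketch(dout, cache):
    x, w, a = cache
    da = dout * (a > 0)                        # back through ReLU
    dx = da.dot(w.T).reshape(x.shape)          # back through affine
    dw = x.reshape(x.shape[0], -1).T.dot(da)
    db = da.sum(axis=0)
    return dx, dw, db

np.random.seed(1)
x, w, b = np.random.randn(2, 3, 4), np.random.randn(12, 10), np.random.randn(10)
out, cache = affine_relu_forward_sketch(x, w, b)
dx, dw, db = affine_relu_backward_sketch(np.random.randn(2, 10), cache)
```

The same chaining idea extends to any pair of layers whose forward functions return a cache.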
End of explanation
"""
np.random.seed(231)
num_classes, num_inputs = 10, 50
x = 0.001 * np.random.randn(num_inputs, num_classes)
y = np.random.randint(num_classes, size=num_inputs)
dx_num = eval_numerical_gradient(lambda x: svm_loss(x, y)[0], x, verbose=False)
loss, dx = svm_loss(x, y)
# Test svm_loss function. Loss should be around 9 and dx error should be 1e-9
print('Testing svm_loss:')
print('loss: ', loss)
print('dx error: ', rel_error(dx_num, dx))
dx_num = eval_numerical_gradient(lambda x: softmax_loss(x, y)[0], x, verbose=False)
loss, dx = softmax_loss(x, y)
# Test softmax_loss function. Loss should be 2.3 and dx error should be 1e-8
print('\nTesting softmax_loss:')
print('loss: ', loss)
print('dx error: ', rel_error(dx_num, dx))
"""
Explanation: Loss layers: Softmax and SVM
You implemented these loss functions in the last assignment, so we'll give them to you for free here. You should still make sure you understand how they work by looking at the implementations in cs231n/layers.py.
You can make sure that the implementations are correct by running the following:
End of explanation
"""
np.random.seed(231)
N, D, H, C = 3, 5, 50, 7
X = np.random.randn(N, D)
y = np.random.randint(C, size=N)
std = 1e-3
model = TwoLayerNet(input_dim=D, hidden_dim=H, num_classes=C, weight_scale=std)
print('Testing initialization ... ')
W1_std = abs(model.params['W1'].std() - std)
b1 = model.params['b1']
W2_std = abs(model.params['W2'].std() - std)
b2 = model.params['b2']
assert W1_std < std / 10, 'First layer weights do not seem right'
assert np.all(b1 == 0), 'First layer biases do not seem right'
assert W2_std < std / 10, 'Second layer weights do not seem right'
assert np.all(b2 == 0), 'Second layer biases do not seem right'
print('Testing test-time forward pass ... ')
model.params['W1'] = np.linspace(-0.7, 0.3, num=D*H).reshape(D, H)
model.params['b1'] = np.linspace(-0.1, 0.9, num=H)
model.params['W2'] = np.linspace(-0.3, 0.4, num=H*C).reshape(H, C)
model.params['b2'] = np.linspace(-0.9, 0.1, num=C)
X = np.linspace(-5.5, 4.5, num=N*D).reshape(D, N).T
scores = model.loss(X)
correct_scores = np.asarray(
[[11.53165108, 12.2917344, 13.05181771, 13.81190102, 14.57198434, 15.33206765, 16.09215096],
[12.05769098, 12.74614105, 13.43459113, 14.1230412, 14.81149128, 15.49994135, 16.18839143],
[12.58373087, 13.20054771, 13.81736455, 14.43418138, 15.05099822, 15.66781506, 16.2846319 ]])
scores_diff = np.abs(scores - correct_scores).sum()
assert scores_diff < 1e-6, 'Problem with test-time forward pass'
print('Testing training loss (no regularization)')
y = np.asarray([0, 5, 1])
loss, grads = model.loss(X, y)
correct_loss = 3.4702243556
assert abs(loss - correct_loss) < 1e-10, 'Problem with training-time loss'
model.reg = 1.0
loss, grads = model.loss(X, y)
correct_loss = 26.5948426952
assert abs(loss - correct_loss) < 1e-10, 'Problem with regularization loss'
for reg in [0.0, 0.7]:
print('Running numeric gradient check with reg = ', reg)
model.reg = reg
loss, grads = model.loss(X, y)
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False)
print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))
"""
Explanation: Two-layer network
In the previous assignment you implemented a two-layer neural network in a single monolithic class. Now that you have implemented modular versions of the necessary layers, you will reimplement the two layer network using these modular implementations.
Open the file cs231n/classifiers/fc_net.py and complete the implementation of the TwoLayerNet class. This class will serve as a model for the other networks you will implement in this assignment, so read through it to make sure you understand the API. You can run the cell below to test your implementation.
End of explanation
"""
model = TwoLayerNet(reg=1e-2, hidden_dim=200)
optim_config = {
'learning_rate': 1e-3
}
solver = Solver(model, data,
num_train_samples=20000,
num_epochs=15,
batch_size=500,
num_val_samples=1000,
optim_config=optim_config,
print_every=30000,
lr_decay=0.95)
solver.train()
##############################################################################
# TODO: Use a Solver instance to train a TwoLayerNet that achieves at least #
# 50% accuracy on the validation set. #
##############################################################################
pass
##############################################################################
# END OF YOUR CODE #
##############################################################################
# Run this cell to visualize training loss and train / val accuracy
plt.subplot(2, 1, 1)
plt.title('Training loss')
plt.plot(solver.loss_history, 'o')
plt.xlabel('Iteration')
plt.subplot(2, 1, 2)
plt.title('Accuracy')
plt.plot(solver.train_acc_history, '-o', label='train')
plt.plot(solver.val_acc_history, '-o', label='val')
plt.plot([0.5] * len(solver.val_acc_history), 'k--')
plt.xlabel('Epoch')
plt.legend(loc='lower right')
plt.gcf().set_size_inches(15, 12)
plt.show()
"""
Explanation: Solver
In the previous assignment, the logic for training models was coupled to the models themselves. Following a more modular design, for this assignment we have split the logic for training models into a separate class.
Open the file cs231n/solver.py and read through it to familiarize yourself with the API. After doing so, use a Solver instance to train a TwoLayerNet that achieves at least 50% accuracy on the validation set.
End of explanation
"""
np.random.seed(231)
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))
for reg in [0, 3.14]:
print('Running check with reg = ', reg)
model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
reg=reg, weight_scale=5e-2, dtype=np.float64)
loss, grads = model.loss(X, y)
print('Initial loss: ', loss)
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)
print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))
"""
Explanation: Multilayer network
Next you will implement a fully-connected network with an arbitrary number of hidden layers.
Read through the FullyConnectedNet class in the file cs231n/classifiers/fc_net.py.
Implement the initialization, the forward pass, and the backward pass. For the moment don't worry about implementing dropout or batch normalization; we will add those features soon.
Initial loss and gradient check
As a sanity check, run the following to check the initial loss and to gradient check the network both with and without regularization. Do the initial losses seem reasonable?
For gradient checking, you should expect to see errors around 1e-6 or less.
End of explanation
"""
# TODO: Use a three-layer Net to overfit 50 training examples.
num_train = 50
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
weight_scale = 1e-2
learning_rate = 1e-4
model = FullyConnectedNet([100, 100],
weight_scale=weight_scale, dtype=np.float64)
solver = Solver(model, small_data,
print_every=10, num_epochs=20, batch_size=25,
update_rule='sgd',
optim_config={
'learning_rate': learning_rate,
}
)
solver.train()
plt.plot(solver.loss_history, 'o')
plt.title('Training loss history')
plt.xlabel('Iteration')
plt.ylabel('Training loss')
plt.show()
"""
Explanation: As another sanity check, make sure you can overfit a small dataset of 50 images. First we will try a three-layer network with 100 units in each hidden layer. You will need to tweak the learning rate and initialization scale, but you should be able to overfit and achieve 100% training accuracy within 20 epochs.
End of explanation
"""
# TODO: Use a five-layer Net to overfit 50 training examples.
num_train = 50
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
learning_rate = 1e-3
weight_scale = 1e-5
model = FullyConnectedNet([100, 100, 100, 100],
weight_scale=weight_scale, dtype=np.float64)
solver = Solver(model, small_data,
print_every=10, num_epochs=20, batch_size=25,
update_rule='sgd',
optim_config={
'learning_rate': learning_rate,
}
)
solver.train()
plt.plot(solver.loss_history, 'o')
plt.title('Training loss history')
plt.xlabel('Iteration')
plt.ylabel('Training loss')
plt.show()
"""
Explanation: Now try to use a five-layer network with 100 units on each layer to overfit 50 training examples. Again you will have to adjust the learning rate and weight initialization, but you should be able to achieve 100% training accuracy within 20 epochs.
End of explanation
"""
from cs231n.optim import sgd_momentum
N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
v = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)
config = {'learning_rate': 1e-3, 'velocity': v}
next_w, _ = sgd_momentum(w, dw, config=config)
expected_next_w = np.asarray([
[ 0.1406, 0.20738947, 0.27417895, 0.34096842, 0.40775789],
[ 0.47454737, 0.54133684, 0.60812632, 0.67491579, 0.74170526],
[ 0.80849474, 0.87528421, 0.94207368, 1.00886316, 1.07565263],
[ 1.14244211, 1.20923158, 1.27602105, 1.34281053, 1.4096 ]])
expected_velocity = np.asarray([
[ 0.5406, 0.55475789, 0.56891579, 0.58307368, 0.59723158],
[ 0.61138947, 0.62554737, 0.63970526, 0.65386316, 0.66802105],
[ 0.68217895, 0.69633684, 0.71049474, 0.72465263, 0.73881053],
[ 0.75296842, 0.76712632, 0.78128421, 0.79544211, 0.8096 ]])
print('next_w error: ', rel_error(next_w, expected_next_w))
print('velocity error: ', rel_error(expected_velocity, config['velocity']))
"""
Explanation: Inline question:
Did you notice anything about the comparative difficulty of training the three-layer net vs. training the five-layer net?
Answer:
[FILL THIS IN]
Update rules
So far we have used vanilla stochastic gradient descent (SGD) as our update rule. More sophisticated update rules can make it easier to train deep networks. We will implement a few of the most commonly used update rules and compare them to vanilla SGD.
SGD+Momentum
Stochastic gradient descent with momentum is a widely used update rule that tends to make deep networks converge faster than vanilla stochastic gradient descent.
Open the file cs231n/optim.py and read the documentation at the top of the file to make sure you understand the API. Implement the SGD+momentum update rule in the function sgd_momentum and run the following to check your implementation. You should see errors less than 1e-8.
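As a reference for what the update computes, here is a sketch (assuming the common defaults of momentum 0.9 and a velocity initialized to zeros when absent; `sgd_momentum_sketch` is an illustrative name):

```python
import numpy as np

def sgd_momentum_sketch(w, dw, config=None):
    # v is a running "velocity": momentum decays it, the gradient steers it.
    config = config or {}
    lr = config.get('learning_rate', 1e-2)
    mu = config.get('momentum', 0.9)
    v = config.get('velocity', np.zeros_like(w))
    v = mu * v - lr * dw
    next_w = w + v
    config['velocity'] = v
    return next_w, config

w = np.linspace(-0.4, 0.6, num=20).reshape(4, 5)
dw = np.linspace(-0.6, 0.4, num=20).reshape(4, 5)
v = np.linspace(0.6, 0.9, num=20).reshape(4, 5)
next_w, config = sgd_momentum_sketch(w, dw, {'learning_rate': 1e-3, 'velocity': v})
```

With these inputs the sketch reproduces the expected values in the check cell, e.g. `next_w[0, 0] == 0.1406`.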
End of explanation
"""
num_train = 4000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
solvers = {}
for update_rule in ['sgd', 'sgd_momentum']:
print('running with ', update_rule)
model = FullyConnectedNet([100, 100, 100, 100, 100], weight_scale=5e-2)
solver = Solver(model, small_data,
num_epochs=5, batch_size=100,
update_rule=update_rule,
optim_config={
'learning_rate': 1e-2,
},
verbose=True)
solvers[update_rule] = solver
solver.train()
print()
plt.subplot(3, 1, 1)
plt.title('Training loss')
plt.xlabel('Iteration')
plt.subplot(3, 1, 2)
plt.title('Training accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 3)
plt.title('Validation accuracy')
plt.xlabel('Epoch')
for update_rule, solver in list(solvers.items()):
plt.subplot(3, 1, 1)
plt.plot(solver.loss_history, 'o', label=update_rule)
plt.subplot(3, 1, 2)
plt.plot(solver.train_acc_history, '-o', label=update_rule)
plt.subplot(3, 1, 3)
plt.plot(solver.val_acc_history, '-o', label=update_rule)
for i in [1, 2, 3]:
plt.subplot(3, 1, i)
plt.legend(loc='upper center', ncol=4)
plt.gcf().set_size_inches(15, 15)
plt.show()
"""
Explanation: Once you have done so, run the following to train a six-layer network with both SGD and SGD+momentum. You should see the SGD+momentum update rule converge faster.
End of explanation
"""
# Test RMSProp implementation; you should see errors less than 1e-7
from cs231n.optim import rmsprop
N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
cache = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)
config = {'learning_rate': 1e-2, 'cache': cache}
next_w, _ = rmsprop(w, dw, config=config)
expected_next_w = np.asarray([
[-0.39223849, -0.34037513, -0.28849239, -0.23659121, -0.18467247],
[-0.132737, -0.08078555, -0.02881884, 0.02316247, 0.07515774],
[ 0.12716641, 0.17918792, 0.23122175, 0.28326742, 0.33532447],
[ 0.38739248, 0.43947102, 0.49155973, 0.54365823, 0.59576619]])
expected_cache = np.asarray([
[ 0.5976, 0.6126277, 0.6277108, 0.64284931, 0.65804321],
[ 0.67329252, 0.68859723, 0.70395734, 0.71937285, 0.73484377],
[ 0.75037008, 0.7659518, 0.78158892, 0.79728144, 0.81302936],
[ 0.82883269, 0.84469141, 0.86060554, 0.87657507, 0.8926 ]])
print('next_w error: ', rel_error(expected_next_w, next_w))
print('cache error: ', rel_error(expected_cache, config['cache']))
# Test Adam implementation; you should see errors around 1e-7 or less
from cs231n.optim import adam
N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
m = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)
v = np.linspace(0.7, 0.5, num=N*D).reshape(N, D)
config = {'learning_rate': 1e-2, 'm': m, 'v': v, 't': 5}
next_w, _ = adam(w, dw, config=config)
expected_next_w = np.asarray([
[-0.40094747, -0.34836187, -0.29577703, -0.24319299, -0.19060977],
[-0.1380274, -0.08544591, -0.03286534, 0.01971428, 0.0722929],
[ 0.1248705, 0.17744702, 0.23002243, 0.28259667, 0.33516969],
[ 0.38774145, 0.44031188, 0.49288093, 0.54544852, 0.59801459]])
expected_v = np.asarray([
[ 0.69966, 0.68908382, 0.67851319, 0.66794809, 0.65738853,],
[ 0.64683452, 0.63628604, 0.6257431, 0.61520571, 0.60467385,],
[ 0.59414753, 0.58362676, 0.57311152, 0.56260183, 0.55209767,],
[ 0.54159906, 0.53110598, 0.52061845, 0.51013645, 0.49966, ]])
expected_m = np.asarray([
[ 0.48, 0.49947368, 0.51894737, 0.53842105, 0.55789474],
[ 0.57736842, 0.59684211, 0.61631579, 0.63578947, 0.65526316],
[ 0.67473684, 0.69421053, 0.71368421, 0.73315789, 0.75263158],
[ 0.77210526, 0.79157895, 0.81105263, 0.83052632, 0.85 ]])
print('next_w error: ', rel_error(expected_next_w, next_w))
print('v error: ', rel_error(expected_v, config['v']))
print('m error: ', rel_error(expected_m, config['m']))
"""
Explanation: RMSProp and Adam
RMSProp [1] and Adam [2] are update rules that set per-parameter learning rates by using a running average of the second moments of gradients.
In the file cs231n/optim.py, implement the RMSProp update rule in the rmsprop function and implement the Adam update rule in the adam function, and check your implementations using the tests below.
[1] Tijmen Tieleman and Geoffrey Hinton. "Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude." COURSERA: Neural Networks for Machine Learning 4 (2012).
[2] Diederik Kingma and Jimmy Ba, "Adam: A Method for Stochastic Optimization", ICLR 2015.
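As a reference for the shape of these updates, here is a sketch of the RMSProp rule, assuming the common defaults `decay_rate=0.99` and `epsilon=1e-8` (Adam follows the same pattern with first- and second-moment running averages plus bias correction):

```python
import numpy as np

def rmsprop_sketch(w, dw, config=None):
    # Keep a running average of squared gradients and scale the step by it.
    config = config or {}
    lr = config.get('learning_rate', 1e-2)
    decay = config.get('decay_rate', 0.99)
    eps = config.get('epsilon', 1e-8)
    cache = config.get('cache', np.zeros_like(w))
    cache = decay * cache + (1 - decay) * dw ** 2
    next_w = w - lr * dw / (np.sqrt(cache) + eps)
    config['cache'] = cache
    return next_w, config

w = np.linspace(-0.4, 0.6, num=20).reshape(4, 5)
dw = np.linspace(-0.6, 0.4, num=20).reshape(4, 5)
cache = np.linspace(0.6, 0.9, num=20).reshape(4, 5)
next_w, config = rmsprop_sketch(w, dw, {'learning_rate': 1e-2, 'cache': cache})
```

With these inputs the sketch reproduces the expected values in the check cell, e.g. `cache[0, 0] == 0.5976` after the update.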
End of explanation
"""
learning_rates = {'rmsprop': 1e-4, 'adam': 1e-3}
for update_rule in ['adam', 'rmsprop']:
print('running with ', update_rule)
model = FullyConnectedNet([100, 100, 100, 100, 100], weight_scale=5e-2)
solver = Solver(model, small_data,
num_epochs=5, batch_size=100,
update_rule=update_rule,
optim_config={
'learning_rate': learning_rates[update_rule]
},
verbose=True)
solvers[update_rule] = solver
solver.train()
print()
plt.subplot(3, 1, 1)
plt.title('Training loss')
plt.xlabel('Iteration')
plt.subplot(3, 1, 2)
plt.title('Training accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 3)
plt.title('Validation accuracy')
plt.xlabel('Epoch')
for update_rule, solver in list(solvers.items()):
plt.subplot(3, 1, 1)
plt.plot(solver.loss_history, 'o', label=update_rule)
plt.subplot(3, 1, 2)
plt.plot(solver.train_acc_history, '-o', label=update_rule)
plt.subplot(3, 1, 3)
plt.plot(solver.val_acc_history, '-o', label=update_rule)
for i in [1, 2, 3]:
plt.subplot(3, 1, i)
plt.legend(loc='upper center', ncol=4)
plt.gcf().set_size_inches(15, 15)
plt.show()
"""
Explanation: Once you have debugged your RMSProp and Adam implementations, run the following to train a pair of deep networks using these new update rules:
End of explanation
"""
best_model = None
################################################################################
# TODO: Train the best FullyConnectedNet that you can on CIFAR-10. You might #
# find batch normalization and dropout useful. Store your best model in the  #
# best_model variable. #
################################################################################
pass
################################################################################
# END OF YOUR CODE #
################################################################################
"""
Explanation: Train a good model!
Train the best fully-connected model that you can on CIFAR-10, storing your best model in the best_model variable. We require you to get at least 50% accuracy on the validation set using a fully-connected net.
If you are careful it should be possible to get accuracies above 55%, but we don't require it for this part and won't assign extra credit for doing so. Later in the assignment we will ask you to train the best convolutional network that you can on CIFAR-10, and we would prefer that you spend your effort working on convolutional nets rather than fully-connected nets.
You might find it useful to complete the BatchNormalization.ipynb and Dropout.ipynb notebooks before completing this part, since those techniques can help you train powerful models.
End of explanation
"""
y_test_pred = np.argmax(best_model.loss(data['X_test']), axis=1)
y_val_pred = np.argmax(best_model.loss(data['X_val']), axis=1)
print('Validation set accuracy: ', (y_val_pred == data['y_val']).mean())
print('Test set accuracy: ', (y_test_pred == data['y_test']).mean())
"""
Explanation: Test your model
Run your best model on the validation and test sets. You should achieve above 50% accuracy on the validation set.
End of explanation
"""
|
lewisamarshall/ionize | tutorial.ipynb | gpl-2.0 | from __future__ import print_function, absolute_import, division
import ionize
# We'll also import numpy to set up some of our inputs.
# And pprint to prettily print some lists.
import numpy
import pprint
# And set up inline plotting.
from matplotlib.pyplot import *
%matplotlib inline
# Prettify numpy printing
numpy.set_printoptions(precision=3)
"""
Explanation: ionize Tutorial
ionize is a Python module for calculating the properties of ions in aqueous solution.
To load the library, simply import ionize.
End of explanation
"""
# Initialize an ion and print it.
acid = ionize.Ion('myAcid', [-1], [5], [-25e-9])
base = ionize.Ion('myBase', [1], [8], [20e-9])
print(acid) # The string includes only the class and name.
print(repr(base)) # The representation contains enough information to reconstruct the ion.
"""
Explanation: Ion
The basic building block of an ionize simulation is an ionic species, modeled by the Ion class. Call ionize.Ion(name, z, pKa, absolute_mobility). name is the name of the ion, typically as a string. z is a list containing the charge states of the ion. pKa is a list of the pKas of the charge states, with the same order as the list z. absolute_mobility is a list containing the absolute, infinite dilution mobilities of each charge state, ordered the same as the other two lists, in units of m<sup>2</sup>V<sup>-1</sup>s<sup>-1</sup>.
End of explanation
"""
print('myAcid Ka at (I=0 M) =', acid.acidity())
print('myAcid Ka at (I=0.5 M) =', acid.acidity(ionic_strength=0.5))
pH = numpy.linspace(0,14)
for I in [None, 0., 0.001, 0.01, 0.1]:
mu = [base.mobility(p, I) for p in pH]
if I is not None:
label = 'I={} M'.format(I)
else:
label = 'I=None'
plot(pH, mu, label=label)
xlabel('pH'); xlim(0, 14)
ylabel('effective mobility (m^2/v/s)'); ylim(-.1e-8, 2.1e-8)
legend()
show()
"""
Explanation: Once an ion species is initialized, you can call the properties of the ion, typically as a function of pH, ionic strength, and temperature, in that order.
End of explanation
"""
db = ionize.Database()
histidine = db['histidine']
print(repr(histidine))
for ionic_strength in (None, 0):
mu_histidine = [histidine.mobility(p, ionic_strength=ionic_strength) for p in pH]
plot(pH, mu_histidine, label="I={}".format(ionic_strength))
xlabel('pH'); xlim([0, 14])
ylabel('effective mobility (m^2/v/s)')
legend()
show()
"""
Explanation: Note the difference between ionic_strength parameters here. If ionic_strength is 0, the numerical value of 0 is used in each calculation. However, it is impossible to have a solution of pH 0 with ionic_strength of 0.
When the default value of None is used for ionic_strength, ionize uses the minimum ionic strength at the selected pH.
Using the ionize database
Individually initializing ions is error-prone and time-consuming. To simplify the process, load ions from
the database by initializing the database, and accessing the database like a dictionary.
End of explanation
"""
print("Search results for 'amino'\n--------------------------")
pprint.pprint(db.search('amino'))
print("\nSearch results for 'chloric'\n----------------------------")
pprint.pprint(db.search('chloric'))
print("\nSearch results for 'per'\n------------------------")
pprint.pprint(db.search('per'))
print('\nOh, copper is what I was looking for.')
print(db.load('copper'))
"""
Explanation: search()
You can also search for ions in the database by name using Database().search(). Call it with a search string; search() returns the names of all ions whose names contain that string. It returns names rather than Ion objects, so load the ion once you find what you want.
End of explanation
"""
print(len(db.data), 'ions in database.')
"""
Explanation: Other db functions
You can get the database data as a dictionary using the data attribute.
End of explanation
"""
hcl = db.load('hydrochloric acid')
tris = db.load('tris')
buffer = ionize.Solution([tris, hcl], [0.1, 0.085])
print('pH =', buffer.pH)
print('I =', buffer.ionic_strength, 'M')
print('conductivity =', buffer.conductivity(), 'S/m')
print('buffering capacity =', buffer.buffering_capacity(), 'M')
print('debye length =', buffer.debye(), 'm')
"""
Explanation: Solution
Getting the properties of a single ionic species in solution is useful, but the real challenge of dealing with aqueous solutions of ions is finding properties based on the equilibrium state of multiple ionic species. ionize can perform those calculations using the Solution class. Solution objects are initialized using ionize.Solution(ions, concentrations), where ions is a list of Ion objects and concentration is a list concentrations of the ions, with concentrations in molar.
End of explanation
"""
print([ion.name for ion in ionize.Solution(['bis-tris', 'acetic acid'], [0.1, 0.03]).ions])
print(ionize.Solution(['bis-tris', 'acetic acid'], [0.1, 0.03]).concentration(db.load('acetic acid')))
"""
Explanation: Solutions can be initialized with ion names instead of ions. If so, the Solution calls load_ion to determine the ion identities.
End of explanation
"""
c_tris = 0.1
c_hcl = numpy.linspace(0.0, 0.2, 50)
t_pH = [ionize.Solution(['tris', 'hydrochloric acid'], [c_tris, c_h], temperature=25).pH for c_h in c_hcl]
plot(c_hcl/c_tris, t_pH)
xlabel('[HCl]/[Tris]')
ylabel('pH')
show()
"""
Explanation: We can iterate through solutions to quickly calculate the pH of a titration between two ions
End of explanation
"""
water = ionize.Solution()
print('I =', water.ionic_strength, 'M')
print('pH =', water.pH)
print('conductivity =', water.conductivity(), 'S/m')
"""
Explanation: A Solution can also be initialized without ions, e.g. as water.
End of explanation
"""
print('Stock:', buffer)
dilution = 0.5 * buffer + 0.5 * water
print('Dilution:', dilution)
"""
Explanation: A Solution can also be added and multiplied through operator overloading. This can be useful when calculating the results of dilutions, as below.
End of explanation
"""
buff = ionize.Solution([tris], 0.1)
print(buff.titrate('hydrochloric acid', 8.2))
print(buff.titrate('hydrochloric acid', 3))
print(buff.conductivity())
print(repr(buff.titrate('hydrochloric acid', 3, titration_property='conductivity')))
print(repr(buff.titrate('hydrochloric acid', 8)))
"""
Explanation: Solutions can be titrated to a specified pH. To do so, make a solution, and then specify a titrant, a property, and a target.
End of explanation
"""
silver = db.load('silver')
tris = db.load('tris')
T = numpy.linspace(20.0, 80.0)
mu_silver = [silver.absolute_mobility(Tp) for Tp in T]
mu_tris = [tris.absolute_mobility(Tp) for Tp in T]
pKa_silver = [silver.pKa(0, Tp) for Tp in T]
pKa_tris = [tris.pKa(0, Tp) for Tp in T]
figure()
plot(T, mu_silver, label = 'Silver')
plot(T, mu_tris, label = 'Tris')
legend(loc = 'upper left')
xlabel('Temperature ($^{\circ}$C)'); ylabel('Absolute mobility ($m^2V^{-1}s^{-1}$)')
show()
figure()
plot(T, pKa_silver, label = 'Silver')
plot(T, pKa_tris, label = 'Tris')
legend(loc = 'lower left')
xlabel('Temperature ($^{\circ}$C)'); ylabel('pKa')
show()
"""
Explanation: Temperature Effects
Both Ion objects and Solution objects take temperature as an optional argument, specified in degrees C.
Ion objects adjust their absolute mobility and pKa attributes based on temperature. They also make adjustments to their ionic strength correction algorithms based on temperature. The type of temperature adjustment data depends on the specific ion. For small ions, empirical data from literature is included. For organic molecules, ΔH and ΔCp values may be provided. All ions also correct their mobilities for viscosity.
End of explanation
"""
buffer_ref = ionize.Solution(['tris', 'hydrochloric acid'], [.200, .100], temperature=25.)
mu_ref = buffer_ref.ions[1].mobility()
mup = []
pH = []
I = []
mu=[]
cond = []
for Tp in T:
buffer = ionize.Solution([tris, hcl], [.200, .100], temperature=Tp)
mu.append(buffer.ions[1].mobility())
mup.append(buffer.ions[1].mobility()/mu_ref)
pH.append(buffer.pH)
I.append(buffer.ionic_strength)
cond.append(buffer.conductivity())
# mup.append(hcl.nightingale_function(Tp))
cond_norm = [c / cond[0] for c in cond]
figure()
plot(T, pH); xlabel('Temperature ($^{\circ}$C)'); ylabel('pH')
show()
figure()
plot(T, mup, label='chloride'); xlabel('Temperature ($^{\circ}$C)'); ylabel('$\mu$(T)/$\mu$(T$_o$)'); legend(loc='upper left')
show()
"""
Explanation: Solution objects send their temperature correction parameters to the object that they contain. In addition, they use the temperature input to correct their ionic strength correction parameters.
End of explanation
"""
saltwater = ionize.Solution(['sodium', 'hydrochloric acid'], [0.1, 0.1])
print(saltwater.kohlrausch())
print(buffer_ref.ions)
print(buffer_ref.kohlrausch())
"""
Explanation: Conservation Functions
Conservation functions are spatially invariant quantities that remain constant as a solution undergoes electrophoresis. They are useful in calculating ion concentrations in zones formed during electrophoresis.
The Kohlrausch Regulating Function (KRF)
The most basic conservation function is the KRF. This function is only valid for strongly ionized species, when water dissociation doesn't play a strong role. Solutions can calculate their own KRF values. They throw a warning if they contain species that are not strongly ionized.
End of explanation
"""
tcap = ionize.Solution(['tris', 'caproic acid'], [0.1, 0.05])
print(tcap.alberty())
tcit = ionize.Solution(['tris', 'citric acid'], [0.1, 0.05])
print(tcit.alberty())
"""
Explanation: The Alberty Conservation Function
The Alberty conservation function is useful for weakly ionized monovalent species, when water dissocation doesn't play a strong role.
End of explanation
"""
print(tcap.jovin())
print(tcit.jovin())
"""
Explanation: The Jovin Conservation Function
The Jovin conservation function is applicable under the same conditions as the Alberty conservation function, and is often used as its complement.
End of explanation
"""
print(tcap.gas())
print(tcit.gas())
"""
Explanation: The Gas Conservation Functions
End of explanation
"""
# %load_ext snakeviz
# %%snakeviz
# database = ionize.Database()
# pH = np.linspace(0, 14)
# for ion in database:
# for p in pH:
# ion.mobility(p)
database
import itertools
concentrations = np.linspace(0, 0.14)
ref_mob = 50.e-9
z = [1, 2]
for zp, zm in itertools.product(z, repeat=2):
positive_ion = ionize.Ion('positive', [zp], [14], [ref_mob])
negative_ion = ionize.Ion('negative', [-zm], [0], [-ref_mob])
mob = []
i = []
for c in concentrations:
sol = ionize.Solution([positive_ion, negative_ion], [c/zp, c/zm])
mob.append(sol.ions[0].actual_mobility() / ref_mob )
i.append(sol.ionic_strength)
plot(i, mob, label='-{}:{}'.format(zm, zp))
ylim(0, 1)
# xlim(0, .14)
legend(loc='lower left')
xlabel('Concentration (M)')
ylabel('$\mu$/$\mu_o$')
show()
"""
Explanation: Serialization, Saving, and Loading
You can also save and load ions and solutions in JSON format.
End of explanation
"""
|
jskksj/cv2stuff | cv2stuff/notebooks/pathlib.ipynb | isc | p = Path('.')
[x for x in p.iterdir() if x.is_dir()]
list(p.glob('**/*.py'))
p = Path('/etc')
p
q = p / 'init.d' / 'reboot'
q
q.resolve()
q.exists()
q.is_dir()
with q.open() as f:
print(f.readline())
"""
Explanation: Path is the basic class for portable, cross-platform path manipulation.
End of explanation
"""
PurePosixPath('foo') == PurePosixPath('FOO')
PureWindowsPath('foo') == PureWindowsPath('FOO')
PureWindowsPath('FOO') in { PureWindowsPath('foo') }
PureWindowsPath('C:') < PureWindowsPath('d:')
"""
Explanation: Paths are immutable and hashable. Paths of a same flavour are comparable and orderable. These properties respect the flavour’s case-folding semantics:
End of explanation
"""
PureWindowsPath('foo') == PurePosixPath('foo')
try:
PureWindowsPath('foo') < PurePosixPath('foo')
except TypeError as e:
if "unorderable types" in str(e):
print(True)
"""
Explanation: Paths of a different flavour compare unequal and cannot be ordered:
End of explanation
"""
p = PurePath('/etc')
p
p / 'init.d' / 'apache2'
q = PurePath('bin')
'/usr' / q
"""
Explanation: The slash operator helps create child paths, similarly to os.path.join():
End of explanation
"""
p = PurePath('/etc')
str(p)
bytes(p)
p = PureWindowsPath('c:/Program Files')
str(p)
p = PurePath('/usr/bin/python3')
p
p.parts
p = PureWindowsPath('c:/Program Files/PSF')
p.parts
PureWindowsPath('c:/Program Files/').drive
PureWindowsPath('/Program Files/').drive
PurePosixPath('/etc').drive
PureWindowsPath('//host/share/foo.txt').drive
PureWindowsPath('c:/Program Files/').root
PureWindowsPath('c:Program Files/').root
PurePosixPath('/etc').root
PureWindowsPath('//host/share').root
PureWindowsPath('c:/Program Files/').anchor
PureWindowsPath('c:Program Files/').anchor
PurePosixPath('/etc').anchor
PureWindowsPath('//host/share').anchor
p = PureWindowsPath('c:/foo/bar/setup.py')
p.parents[0]
p.parents[1]
p.parents[2]
p = PurePosixPath('/a/b/c/d')
p.parent
p = PurePosixPath('/')
"""
Explanation: The string representation of a path is the raw filesystem path itself (in native form, e.g. with backslashes under Windows), which you can pass to any function taking a file path as a string:
End of explanation
"""
os.name
Path('setup.py')
PosixPath('setup.py')
try:
WindowsPath('setup.py')
except NotImplementedError as e:
if "cannot instantiate 'WindowsPath' on your system" in str(e):
print(True)
Path.cwd()
p = Path('../../setup.py')
p.exists()
p.stat().st_mtime
p.stat().st_mode
p.stat().st_size
p.group()
p.is_dir()
p.is_file()
p.owner()
setup_text = p.read_text()
'Python' in setup_text
PureWindowsPath(home) # I thought this would translate the separators.
ini_file = Path.home() / '.cv2stuff.ini'
ini_file
ini_file.exists()
config = configparser.ConfigParser()
config['DEFAULT'] = {'MAXIMUM_ITERATIONS':'30', 'PIXEL_RESOLUTION':'0.001'}
max = config['DEFAULT'].getint('MAXIMUM_ITERATIONS')
type(max)
max
pix = max = config['DEFAULT'].getfloat('PIXEL_RESOLUTION')
type(pix)
pix
print(cv2.TERM_CRITERIA_MAX_ITER, "and", cv2.TERM_CRITERIA_EPS)
type(cv2.TERM_CRITERIA_EPS)
"""
Explanation: You can only instantiate the class flavour that corresponds to your system (allowing system calls on non-compatible path flavours could lead to bugs or failures in your application):
End of explanation
"""
config['criteria'] = {}
criteria = config['criteria']
cv2config_path = Path.home() / '.cv2stuffconf'
cv2config_file = 'default.ini'
config['inner_points'] = {'columns':'9', 'rows':'6'}
config['search_window'] = {'x':'11', 'y':'11'}
if not Path.exists(cv2config_path):
Path.mkdir(cv2config_path)
cv2config = cv2config_path / cv2config_file
if not Path.exists(cv2config):
# Create default ini configuration file.
with open(str(cv2config), 'wt') as configfile:
config.write(configfile)
print('Does not exist', cv2config)
else:
config.read(str(cv2config))
config.sections()
config_path = click.get_app_dir('cv2stuff')
config_path
config_name = 'cv2stuff.ini'
Path.exists(Path(config_path))
Path.exists(Path(config_path) / config_name)
image_path = set(sorted(Path('/home/jsk/GitHub/cv2stuff/data/images').glob('left*[0-9].jpg')))
image_path
for p in image_path:
print(p.parts)
parts = ()
if len(parts) == (1 if (False or False) else 0):
print(True)
"""
Explanation: If the config does not exist, then a one-time initialization of values must be done. After that, the file can just be read in on startup.
Should default camera calibration values be included for example purposes?
criteria – Termination criteria for the iterative optimization algorithm.
End of explanation
"""
|
JerryKurata/MachineLearningWithPython | Notebooks/.ipynb_checkpoints/Pima-Prediction-checkpoint.ipynb | gpl-3.0 | import pandas as pd # pandas is a dataframe library
import matplotlib.pyplot as plt # matplotlib.pyplot plots data
import numpy as np # numpy provides N-dim object support
# do ploting inline instead of in a separate window
%matplotlib inline
"""
Explanation: Predicting Diabetes
Import Libraries
End of explanation
"""
df = pd.read_csv("./data/pima-data.csv") # load Pima data. Adjust path as necessary
df.shape
df.head(5)
df.tail(5)
"""
Explanation: Load and review data
End of explanation
"""
df.isnull().values.any()
def plot_corr(df, size=10):
"""
Function plots a graphical correlation matrix for each pair of columns in the dataframe.
Input:
df: pandas DataFrame
size: vertical and horizontal size of the plot
Displays:
matrix of correlation between columns. Blue-cyan-yellow-red-darkred => less to more correlated
0 ------------------> 1
Expect a darkred line running from top left to bottom right
"""
corr = df.corr() # data frame correlation function
fig, ax = plt.subplots(figsize=(size, size))
ax.matshow(corr) # color code the rectangles by correlation value
plt.xticks(range(len(corr.columns)), corr.columns) # draw x tick marks
plt.yticks(range(len(corr.columns)), corr.columns) # draw y tick marks
plot_corr(df)
df.corr()
df.head()
del df['skin']
df.head()
plot_corr(df)
"""
Explanation: Definition of features
From the metadata on the data source we have the following definition of the features.
| Feature | Description | Comments |
|--------------|-------------|--------|
| num_preg | number of pregnancies |
| glucose_conc | Plasma glucose concentration a 2 hours in an oral glucose tolerance test |
| diastolic_bp | Diastolic blood pressure (mm Hg) |
| thickness | Triceps skin fold thickness (mm) |
|insulin | 2-Hour serum insulin (mu U/ml) |
| bmi | Body mass index (weight in kg/(height in m)^2) |
| diab_pred | Diabetes pedigree function |
| Age (years) | Age (years)|
| skin | ???? | What is this? |
| diabetes | Class variable (1=True, 0=False) | Why is our data boolean (True/False)? |
Handle null values
Pandas makes it easy to see if there are any null values in the data frame.
The isnull() method will check each value in the data frame for null values, and then .any() will return if any nulls are found.
End of explanation
"""
df.head(5)
"""
Explanation: Check Data Types
End of explanation
"""
diabetes_map = {True : 1, False : 0}
df['diabetes'] = df['diabetes'].map(diabetes_map)
df.head(5)
"""
Explanation: Change True to 1, False to 0
End of explanation
"""
num_true = len(df.loc[df['diabetes'] == True])
num_false = len(df.loc[df['diabetes'] == False])
print("Number of True cases: {0} ({1:2.2f}%)".format(num_true, (num_true/ (num_true + num_false)) * 100))
print("Number of False cases: {0} ({1:2.2f}%)".format(num_false, (num_false/ (num_true + num_false)) * 100))
"""
Explanation: Check true/false ratio
End of explanation
"""
|
davewsmith/notebooks | temperature/InitialTemperatureValues.ipynb | mit | !head -5 temps.csv
"""
Explanation: A preliminary look at sensor data
The general idea of the project is to get a handle on how the house heats and cools so that we can better program the thermostat.
To gather data, I've assembled and programmed 5 probes using inexpensive hardware (Wemos D1 Mini ESP8266 Wifi boards and SHT30 temperature/humidity sensors). The intent is to move the probes around the house to help us tune the thermostat.
Here, I look at several hours of data from an initial test run. The probes were colocated, but not identically oriented.
The initial version of software connected probes to the house WiFi at startup, but not if WiFi dropped out. And there was a hiccup, and all of the probes stopped reporting. Fortunately, I was able to get a decent data sample.
The probes report temperature and humidity readings every 30 seconds or so, along with the probe's WiFi MAC address. The web server on a spare laptop collects the data, adds a timestamp, and appends to a .csv file. (Eventually, data will go into a database, but flat files are fine for getting started.)
Here's what we're starting with
End of explanation
"""
%matplotlib inline
import matplotlib
matplotlib.rcParams['figure.figsize'] = (12, 5)
import pandas as pd
"""
Explanation: Some preliminaries: import libraries, and configure chart sizes to be larger than the default.
End of explanation
"""
df = pd.read_csv('temps.csv', header=None, names=['time', 'mac', 'f', 'h'], parse_dates=[0])
df.head()
"""
Explanation: Load the .csv into a pandas DataFrame, adding column names.
End of explanation
"""
df.plot();
"""
Explanation: A quick plot to get a rough idea of how the sensors differ.
End of explanation
"""
per_sensor_f = df.pivot(index='time', columns='mac', values='f')
per_sensor_f.head()
"""
Explanation: Aside from a wider spread in sensor values than I'd like, and higher temperatures (the room wasn't that hot!), this is roughly what I expected for the temperature pattern. It was a hot, humid day. The bedroom starts off warm, cools when I turned on A/C at 9pm, then oscillates during the night as the A/C kicks in on a scheduled setting.
I didn't know what to expect for humidity.
To get per-sensor plots, the data needs to be reorganized so that each probe is in a different column. This'll need to be done for temperature and humidity independently. Temperature first, since that's what I'm interested in.
End of explanation
"""
downsampled_f = per_sensor_f.resample('2T').mean()
downsampled_f.head()
downsampled_f.plot();
"""
Explanation: This is roughly what's needed, except for the NaN (missing) values. Resampling the data into 2 minute buckets deals with those.
End of explanation
"""
downsampled_f['5C:CF:7F:33:F7:F8'] += 5.0
downsampled_f.plot();
"""
Explanation: The first thing that jumps out is that one of the sensors reads ~5 degrees lower than the others. The SHT30 sensors are inexpensive; it might be a manufacturing problem, or I might have damaged one while soldering on the headers. (Or maybe it's the sane one, and the other four are measuring hot.)
There also seems to be a 20-30 minute warmup period. I suspect here that a probe, being basically a small computer with stuff attached, is generating its own heat, and the chart shows the slow warm-up. That might also explain why temperatures were higher than expected.
Let's try adding 5F to the suspect sensor's temperature reading to bring it in line with the others.
End of explanation
"""
per_sensor_h = df.pivot(index='time', columns='mac', values='h')
downsampled_h = per_sensor_h.resample('2T').mean()
downsampled_h.plot();
"""
Explanation: That looks promising.
Next, reorganize the data so that we plot humidity. I'm not as interested in humidity, since it's not as easily controlled, but hey, it's data!
End of explanation
"""
downsampled_h['5C:CF:7F:33:F7:F8'] -= 9.0
downsampled_h.plot();
"""
Explanation: That same sensor is the outlier. Eyeballing the graph, that sensor's humidity reading looks high by about 9 units.
End of explanation
"""
|
aidiary/notebooks | keras/180103-stacked-lstm.ipynb | mit | from math import sin
from math import pi
import matplotlib.pyplot as plt
%matplotlib inline
length = 100
freq = 5
sequence = [sin(2 * pi * freq * (i / length)) for i in range(length)]
plt.plot(sequence)
from math import sin, pi, exp
import matplotlib.pyplot as plt
length = 100
period = 10
decay = 0.05
sequence = [0.5 + 0.5 * sin(2 * pi * i / period) * exp(-decay * i) for i in range(length)]
plt.plot(sequence)
# generate damped sine wave in [0, 1]
def generate_sequence(length, period, decay):
return [0.5 + 0.5 * sin(2 * pi * i / period) * exp(-decay * i) for i in range(length)]
seq = generate_sequence(100, 10, 0.05)
plt.plot(seq)
seq = generate_sequence(100, 20, 0.05)
plt.plot(seq)
# generate input and output pairs of damped sine waves
import numpy as np
import random
def generate_examples(length, n_patterns, output):
X, y = list(), list()
for _ in range(n_patterns):
p = random.randint(10, 20) # period
d = random.uniform(0.01, 0.1) # decay
sequence = generate_sequence(length + output, p, d)
X.append(sequence[:-output])
y.append(sequence[-output:])
    X = np.array(X).reshape(n_patterns, length, 1) # 3D tensor of (samples, time steps, features)
y = np.array(y).reshape(n_patterns, output)
return X, y
X, y = generate_examples(20, 5, 5)
for i in range(len(X)):
plt.plot([x for x in X[i, :, 0]] + [x for x in y[i]], 'o-')
from keras.models import Sequential
from keras.layers import LSTM, Dense
length = 50
output = 5 # predict the values of the last 5 time steps
model = Sequential()
model.add(LSTM(20, return_sequences=True, input_shape=(length, 1)))
model.add(LSTM(20))
model.add(Dense(output))
model.compile(loss='mae', optimizer='adam')
model.summary()
"""
Explanation: Sine Wave
End of explanation
"""
# fit model
X, y = generate_examples(length, 10000, output)
history = model.fit(X, y, batch_size=10, epochs=1)
# evaluate model
X, y = generate_examples(length, 1000, output)
loss = model.evaluate(X, y, verbose=0)
print('MAE: %f' % loss)
# prediction on new data
X, y = generate_examples(length, 1, output)
yhat = model.predict(X, verbose=0)
X = X.reshape(50)
y = y.reshape(5)
yhat = yhat.reshape(5)
plt.plot(np.concatenate((X, y)), label='y')
plt.plot(np.concatenate((X, yhat)), label='yhat')
plt.legend()
"""
Explanation: https://stackoverflow.com/questions/43882796/when-does-keras-reset-an-lstm-state
By default, stateful = False
The internal state is automatically reset for each batch when the parameters are updated
The internal state of each sequence (sample) within a batch is managed separately
End of explanation
"""
from keras.models import Sequential
from keras.layers import LSTM, Dense
length = 50
output = 5 # predict the values of the last 5 time steps
model = Sequential()
model.add(LSTM(20, input_shape=(length, 1)))
model.add(Dense(output))
model.compile(loss='mae', optimizer='adam')
model.summary()
# fit model
X, y = generate_examples(length, 10000, output)
history = model.fit(X, y, batch_size=10, epochs=1)
# evaluate model
X, y = generate_examples(length, 1000, output)
loss = model.evaluate(X, y, verbose=0)
print('MAE: %f' % loss)
# prediction on new data
X, y = generate_examples(length, 1, output)
yhat = model.predict(X, verbose=0)
X = X.reshape(50)
y = y.reshape(5)
yhat = yhat.reshape(5)
plt.plot(np.concatenate((X, y)), label='y')
plt.plot(np.concatenate((X, yhat)), label='yhat')
plt.legend()
"""
Explanation: The vanilla (single-layer) LSTM case
End of explanation
"""
|
caisq/tensorflow | tensorflow/contrib/eager/python/examples/notebooks/automatic_differentiation.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
"""
import tensorflow as tf
tf.enable_eager_execution()
tfe = tf.contrib.eager # Shorthand for some symbols
"""
Explanation: Automatic differentiation and gradient tape
<table class="tfo-notebook-buttons" align="left"><td>
<a target="_blank" href="https://colab.sandbox.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/notebooks/automatic_differentiation.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" /><span>Run in Google Colab</span></a>
</td><td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/notebooks/automatic_differentiation.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /><span>View source on GitHub</span></a></td></table>
In the previous tutorial we introduced Tensors and operations on them. In this tutorial we will cover automatic differentiation, a key technique for optimizing machine learning models.
Setup
End of explanation
"""
from math import pi
def f(x):
return tf.square(tf.sin(x))
assert f(pi/2).numpy() == 1.0
# grad_f will return a list of derivatives of f
# with respect to its arguments. Since f() has a single argument,
# grad_f will return a list with a single element.
grad_f = tfe.gradients_function(f)
assert tf.abs(grad_f(pi/2)[0]).numpy() < 1e-7
"""
Explanation: Derivatives of a function
TensorFlow provides APIs for automatic differentiation - computing the derivative of a function. The way that more closely mimics the math is to encapsulate the computation in a Python function, say f, and use tfe.gradients_function to create a function that computes the derivatives of f with respect to its arguments. If you're familiar with autograd for differentiating numpy functions, this will be familiar. For example:
End of explanation
"""
def f(x):
return tf.square(tf.sin(x))
def grad(f):
return lambda x: tfe.gradients_function(f)(x)[0]
x = tf.lin_space(-2*pi, 2*pi, 100) # 100 points between -2π and +2π
import matplotlib.pyplot as plt
plt.plot(x, f(x), label="f")
plt.plot(x, grad(f)(x), label="first derivative")
plt.plot(x, grad(grad(f))(x), label="second derivative")
plt.plot(x, grad(grad(grad(f)))(x), label="third derivative")
plt.legend()
plt.show()
"""
Explanation: Higher-order gradients
The same API can be used to differentiate as many times as you like:
End of explanation
"""
def f(x, y):
output = 1
for i in range(y):
output = tf.multiply(output, x)
return output
def g(x, y):
# Return the gradient of `f` with respect to it's first parameter
return tfe.gradients_function(f)(x, y)[0]
assert f(3.0, 2).numpy() == 9.0 # f(x, 2) is essentially x * x
assert g(3.0, 2).numpy() == 6.0 # And its gradient will be 2 * x
assert f(4.0, 3).numpy() == 64.0 # f(x, 3) is essentially x * x * x
assert g(4.0, 3).numpy() == 48.0 # And its gradient will be 3 * x * x
"""
Explanation: Gradient tapes
Every differentiable TensorFlow operation has an associated gradient function. For example, the gradient function of tf.square(x) would be a function that returns 2.0 * x. To compute the gradient of a user-defined function (like f(x) in the example above), TensorFlow first "records" all the operations applied to compute the output of the function. We call this record a "tape". It then uses that tape and the gradients functions associated with each primitive operation to compute the gradients of the user-defined function using reverse mode differentiation.
Since operations are recorded as they are executed, Python control flow (using ifs and whiles for example) is naturally handled:
End of explanation
"""
x = tf.ones((2, 2))
# TODO(b/78880779): Remove the 'persistent=True' argument and use
# a single t.gradient() call when the bug is resolved.
with tf.GradientTape(persistent=True) as t:
# TODO(ashankar): Explain with "watch" argument better?
t.watch(x)
y = tf.reduce_sum(x)
z = tf.multiply(y, y)
# Use the same tape to compute the derivative of z with respect to the
# intermediate value y.
dz_dy = t.gradient(z, y)
assert dz_dy.numpy() == 8.0
# Derivative of z with respect to the original input tensor x
dz_dx = t.gradient(z, x)
for i in [0, 1]:
for j in [0, 1]:
assert dz_dx[i][j].numpy() == 8.0
"""
Explanation: At times it may be inconvenient to encapsulate computation of interest into a function. For example, if you want the gradient of the output with respect to intermediate values computed in the function. In such cases, the slightly more verbose but explicit tf.GradientTape context is useful. All computation inside the context of a tf.GradientTape is "recorded".
For example:
End of explanation
"""
# TODO(ashankar): Should we use the persistent tape here instead? Follow up on Tom and Alex's discussion
x = tf.constant(1.0) # Convert the Python 1.0 to a Tensor object
with tf.GradientTape() as t:
with tf.GradientTape() as t2:
t2.watch(x)
y = x * x * x
# Compute the gradient inside the 't' context manager
# which means the gradient computation is differentiable as well.
dy_dx = t2.gradient(y, x)
d2y_dx2 = t.gradient(dy_dx, x)
assert dy_dx.numpy() == 3.0
assert d2y_dx2.numpy() == 6.0
"""
Explanation: Higher-order gradients
Operations inside of the GradientTape context manager are recorded for automatic differentiation. If gradients are computed in that context, then the gradient computation is recorded as well. As a result, the exact same API works for higher-order gradients as well. For example:
End of explanation
"""
|
opesci/notebooks | AcousticFWI/Acoustic3D.ipynb | bsd-3-clause | p=Function('p')
m,s,h = symbols('m s h')
m=M(x,y,z)
q=Q(x,y,z,t)
d=D(x,y,z,t)
e=E(x,y,z)
# Choose dimension (2 or 3)
dim = 3
# Choose order
time_order = 2
space_order = 2
# half width for indexes, goes from -half to half
width_t = int(time_order/2)
width_h = int(space_order/2)
solvep = p(x,y,z,t+width_t*s)
solvepa = p(x,y,z,t-width_t*s)
# Indexes for finite differences
indx = []
indy = []
indz = []
indt = []
for i in range(-width_h,width_h+1):
indx.append(x + i * h)
indy.append(y + i * h)
indz.append(z + i* h)
for i in range(-width_t,width_t+1):
indt.append(t + i * s)
# Finite differences
dtt=as_finite_diff(p(x,y,z,t).diff(t,t),indt)
dxx=as_finite_diff(p(x,y,z,t).diff(x,x), indx)
dyy=as_finite_diff(p(x,y,z,t).diff(y,y), indy)
dzz=as_finite_diff(p(x,y,z,t).diff(z,z), indz)
dt=as_finite_diff(p(x,y,z,t).diff(t), indt)
lap = dxx + dyy + dzz
arglamb=[]
arglamba=[]
for i in range(-width_t,width_t):
arglamb.append( p(x,y,z,indt[i+width_t]))
arglamba.append( p(x,y,z,indt[i+width_t+1]))
for i in range(-width_h,width_h+1):
arglamb.append( p(indx[i+width_h],y,z,t))
arglamba.append( p(indx[i+width_h],y,z,t))
for i in range(-width_h,width_h+1):
arglamb.append( p(x,indy[i+width_h],z,t))
arglamba.append( p(x,indy[i+width_h],z,t))
for i in range(-width_h,width_h+1):
arglamb.append( p(x,y,indz[i+width_h],t))
arglamba.append( p(x,y,indz[i+width_h],t))
arglamb.extend((q , m, s, h, e))
arglamb=tuple(arglamb)
arglamba.extend((q , m, s, h, e))
arglamba=tuple(arglamba)
arglamb=[ii for n,ii in enumerate(arglamb) if ii not in arglamb[:n]]
arglamb
# Forward wave equation
wave_equation = m*dtt- lap - q + e*dt
stencil = solve(wave_equation,solvep)[0]
ts=lambdify(arglamb,stencil,"numpy")
stencil
# Adjoint wave equation
wave_equationA = m*dtt- lap - d - e*dt
stencilA = solve(wave_equationA,solvepa)[0]
tsA=lambdify(arglamba,stencilA,"numpy")
stencilA
"""
Explanation: PDE
The acoustic wave equation for the square slowness m and a source q is given in 3D by :
\begin{cases}
&m \frac{d^2 u(x,t)}{dt^2} - \nabla^2 u(x,t) = q \\
&u(\cdot,0) = 0 \\
&\frac{d u(x,t)}{dt}\big|_{t=0} = 0
\end{cases}
with zero initial conditions to guarantee uniqueness of the solution
End of explanation
"""
import matplotlib.pyplot as plt
from matplotlib import animation
hstep=25 #space increment d = minv/(10*f0);
tstep=2 #time increment dt < .5 * hstep /maxv;
tmin=0.0 #initial time
tmax=300 #simulate until
xmin=-500.0 #left bound
xmax=500.0 #right bound...assume packet never reaches boundary
ymin=-600.0 #left bound
ymax=600.0 #right bound...assume packet never reaches boundary
zmin=-250.0 #left bound
zmax=400.0 #right bound...assume packet never reaches boundary
f0=.010
t0=1/.010
nbpml=10
nx = int((xmax-xmin)/hstep) + 1 #number of points on x grid
ny = int((ymax-ymin)/hstep) + 1 #number of points on x grid
nz = int((zmax-zmin)/hstep) + 1 #number of points on x grid
nt = int((tmax-tmin)/tstep) + 2 #number of points on t grid
xsrc=0.0
ysrc=0.0
zsrc=25.0
#set source as Ricker wavelet for f0
def source(x,y,z,t):
r = (np.pi*f0*(t-t0))
val = (1-2.*r**2)*np.exp(-r**2)
if abs(x-xsrc)<hstep/2 and abs(y-ysrc)<hstep/2 and abs(z-zsrc)<hstep/2:
return val
else:
return 0.0
def dampx(x):
dampcoeff=1.5*np.log(1.0/0.001)/(5.0*hstep);
if x<nbpml:
return dampcoeff*((nbpml-x)/nbpml)**2
elif x>nx-nbpml-1:
return dampcoeff*((x-nx+nbpml)/nbpml)**2
else:
return 0.0
def dampy(y):
dampcoeff=1.5*np.log(1.0/0.001)/(5.0*hstep);
if y<nbpml:
return dampcoeff*((nbpml-y)/nbpml)**2
elif y>ny-nbpml-1:
return dampcoeff*((y-ny+nbpml)/nbpml)**2
else:
return 0.0
def dampz(z):
dampcoeff=1.5*np.log(1.0/0.001)/(5.0*hstep);
if z<nbpml:
return dampcoeff*((nbpml-z)/nbpml)**2
elif z>nz-nbpml-1:
        return dampcoeff*((z-nz+nbpml)/nbpml)**2
else:
return 0.0
# True velocity
vel=np.ones((nx,ny,nz)) + 2.0
vel[:,:,int(nz/2):nz]=4.5
mt=vel**-2
def Forward(nt,nx,ny,nz,m):
u=np.zeros((nt+2,nx,ny,nz))
for ti in range(2,nt+2):
for a in range(1,nx-1):
for b in range(1,ny-1):
for c in range(1,nz-1):
src = source(xmin+a*hstep,ymin+b*hstep,zmin+c*hstep,tstep*ti)
damp=dampx(a)+dampy(b)+dampz(c)
u[ti,a,b,c]=ts(u[ti-2,a,b,c],
u[ti-1,a,b,c],
u[ti-1,a-1,b,c],
u[ti-1,a+1,b,c],
u[ti-1,a,b-1,c],
u[ti-1,a,b+1,c],
u[ti-1,a,b,c-1],
u[ti-1,a,b,c+1],
src , m[a,b,c], tstep, hstep, damp)
return u
u = Forward(nt,nx,ny,nz,mt)
"""
Explanation: Forward modelling
Point source on the grid
No receivers
Non-cubic grid (to keep the code general)
End of explanation
"""
|
heatseeknyc/data-science | src/bryan analyses/Hack for Heat #4.ipynb | mit | connection = psycopg2.connect('dbname = threeoneone user=threeoneoneadmin password=threeoneoneadmin')
cursor = connection.cursor()
"""
Explanation: Hack for Heat #4: Number of complaints over time pt.2
In this post, we're going to look at the number of complaints each borough received for the last five or so years. First, let's look at the total number of complaints received:
End of explanation
"""
cursor.execute('''SELECT createddate, borough FROM service;''')
borodata = cursor.fetchall()
borodata = pd.DataFrame(borodata)
borodata.columns = ['Date', 'Boro']
borobydate = borodata.groupby(by='Boro').count()
borobydate
"""
Explanation: Borough complaints by date
End of explanation
"""
boropop = {
'MANHATTAN': 1636268,
'BRONX': 1438159,
'BROOKLYN': 2621793,
'QUEENS': 2321580,
'STATEN ISLAND': 473279,
}
borobydate['Pop'] = [boropop.get(x) for x in borobydate.index]
borobydate['CompPerCap'] = borobydate['Date']/borobydate['Pop']
borobydate
"""
Explanation: For fun, let's look at the number of complaints per capita in each of the 5 boroughs. The population values below are from Wikipedia.
End of explanation
"""
borodata['Year'] = [x.year for x in borodata['Date']]
borodata['Month'] = [x.month for x in borodata['Date']]
borodata = borodata.loc[borodata['Boro'] != 'Unspecified']
"""
Explanation: Complaints by borough over months
The next thing we're going to do is make a stacked plot of complaints by borough over months. To do this, we need to extract the year and month from the date column:
End of explanation
"""
boroplotdata = borodata.groupby(by=['Boro', 'Year','Month']).count()
boroplotdata
"""
Explanation: Next, we need to generate an array of Ys. We want the rows of this dataframe to be the 5 boroughs, and the columns to be the count of complaints for the year and month.
End of explanation
"""
boros = borobydate.index
borodict = {x:[] for x in boros}
borodict.pop('Unspecified')
for boro in borodict:
borodict[boro] = list(boroplotdata.xs(boro).Date)
plotdata = np.zeros(len(borodict['BROOKLYN']))
for boro in borodict:
plotdata = np.row_stack((plotdata, borodict[boro]))
plotdata = np.delete(plotdata, (0), axis=0)
plotdata
"""
Explanation: We basically need to get the above table into a graph.
End of explanation
"""
from matplotlib import patches as mpatches
x = np.arange(len(plotdata[0]))
#crude xlabels
months = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']
years = ['2010', '2011', '2012', '2013', '2014', '2015', '2016']
xlabels = []
for year in years:
for month in months:
xlabels.append("{0} {1}".format(month,year))
plotcolors = [(1,0,103),(213,255,0),(255,0,86),(158,0,142),(14,76,161),(255,229,2),(0,95,57),\
(0,255,0),(149,0,58),(255,147,126),(164,36,0),(0,21,68),(145,208,203),(98,14,0)]
#rescaling rgb from 0-255 to 0 to 1
plotcolors = [(color[0]/float(255),color[1]/float(255),color[2]/float(255)) for color in plotcolors]
legendcolors = [mpatches.Patch(color = color) for color in plotcolors]
plt.figure(figsize = (15,10));
plt.stackplot(x,plotdata, colors = plotcolors);
plt.xticks(x,xlabels,rotation=90);
plt.xlim(0,76)
plt.legend(legendcolors,borodict.keys(), bbox_to_anchor=(0.2, 1));
plt.title('311 Complaints by Borough', size = 24)
plt.ylabel('Number of Complaints',size = 14)
plt.gca().spines['right'].set_visible(False)
plt.gca().spines['top'].set_visible(False)
plt.gca().yaxis.set_ticks_position('left')
plt.gca().xaxis.set_ticks_position('bottom')
"""
Explanation: Awesome! Now we have 5 rows with 77 columns each denoting the complaints for each of the boros for each of the months from 2010 to 2016.
End of explanation
"""
borodata.groupby(by = ['Boro', 'Year']).count()
"""
Explanation: Sanity checks
So, some parts of this graph bear checking. First, did Staten Island really not increase in complaints over the years? The data below (complaints by borough by year) suggest that this was the case:
End of explanation
"""
|
kit-cel/wt | sigNT/tutorial/ls_polynomial.ipynb | gpl-2.0 | # importing
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
# showing figures inline
%matplotlib inline
# plotting options
font = {'size' : 30}
plt.rc('font', **font)
plt.rc('text', usetex=True)
matplotlib.rc('figure', figsize=(30, 15) )
"""
Explanation: Content and Objective
Show result of LS estimator for polynomials:
Given $(x_i, y_i), i=1,...,N$
Assume a polynomial model (plus AWGN) to be valid
Get LS estimate for polynomial coefficients and show result
Method: Sample groups and get estimator
End of explanation
"""
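The estimator built below reduces to ordinary least squares on a Vandermonde-style regressor matrix; a minimal noiseless sketch (with made-up coefficients) shows the machinery:

```python
import numpy as np

# noiseless samples of a known quadratic y = 1 + 2x + 3x^2
x = np.linspace(0, 1, 20)
y = 1 + 2 * x + 3 * x**2

# regressor matrix with columns x^0, x^1, x^2
X = np.vander(x, 3, increasing=True)

# least-squares estimate of the coefficients; recovers [1, 2, 3]
a_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(a_hat, 6))
```

With noise added (as in the cells below), the same solve returns the best fit in the least-squares sense rather than the exact coefficients.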
# define number of samples
N = 20
# define degrees of polynomials
K_actual = 8
K_est = 2
# randomly sample coefficients of polynomial and "noise-it"
coeffs = np.random.rand( K_actual ) * ( -1 )**np.random.randint( 2, size=K_actual )
coeffs /= np.linalg.norm( coeffs, 1 )
f = np.polynomial.polynomial.Polynomial( coeffs )
x_fine = np.linspace( 0, 1, 100)
# define variance of noise
sigma2 = .0
# define random measuring points
x_sample = np.sort( np.random.choice( x_fine, N, replace=False) )
f_sample = f( x_sample ) + np.sqrt( sigma2 ) * np.random.randn( x_sample.size )
"""
Explanation: Parameters
End of explanation
"""
X_LS = np.zeros( ( N, K_est ) )
for _n in range( N ):
for _k in range( K_est ):
X_LS[ _n, _k ] = ( x_sample[ _n ] )** _k
a_LS = np.matmul (np.linalg.pinv( X_LS ), f_sample )
f_LS = np.polynomial.polynomial.Polynomial( a_LS )
print( 'Actual coefficients:\n{}\n'.format( coeffs ) )
print( 'LS estimation:\n{}'.format( a_LS ) )
"""
Explanation: Do LS Estimation
End of explanation
"""
# plot results
plt.plot( x_fine, f( x_fine ), label='$f(x)$' )
plt.plot( x_sample, f_sample, '-x', ms=12, label='$(x_i, y_i)$' )
plt.plot( x_fine, f_LS( x_fine ), ms=12, label='$f_{LS}(x)$' )
plt.grid( True )
plt.legend()
"""
Explanation: Plotting
End of explanation
"""
|
IRC-SPHERE/HyperStream | examples/tutorial_03.ipynb | mit | %load_ext watermark
import sys
from datetime import datetime
sys.path.append("../") # Add parent dir in the Path
from hyperstream import HyperStream
from hyperstream import TimeInterval
from hyperstream.utils import UTC
from utils import plot_high_chart
%watermark -v -m -p hyperstream -g
hs = HyperStream(loglevel=20)
print hs
reader_tool = hs.plugins.example.tools.csv_reader('data/sea_ice.csv')
sea_ice_stream = hs.channel_manager.memory.get_or_create_stream("sea_ice")
ti = TimeInterval(datetime(1990, 1, 1).replace(tzinfo=UTC), datetime(2012, 1, 1).replace(tzinfo=UTC))
reader_tool.execute(sources=[], sink=sea_ice_stream, interval=ti)
for key, value in sea_ice_stream.window().items()[:10]:
print '[%s]: %s' % (key, value)
"""
Explanation: <img style="float: right;" src="images/hyperstream.svg">
HyperStream Tutorial 3: Stream composition
We will be using the tool created in the previous tutorial and will compose its output stream with a new one.
End of explanation
"""
list_mean_tool = hs.tools.list_mean()
sea_ice_means_stream = hs.channel_manager.mongo.get_or_create_stream('sea_ice_means')
list_mean_tool.execute(sources=[sea_ice_stream], sink=sea_ice_means_stream, interval=ti)
for key, value in sea_ice_means_stream.window().items()[:10]:
print '[%s]: %s' % (key, value)
"""
Explanation: Stream composition
We can compose a chain of streams using different tools to get a new stream. As an example, we can use the csv_reader tool to generate a stream from a CSV file. Then, we can apply the list_mean tool, which computes the mean of all the values of each instance of a stream and outputs a new stream. Finally, we can choose whether the new stream stores its output in memory or in a MongoDB database. In this case, we will store the final stream in the MongoDB database.
|~stream||tool||stream||tool||stream|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| csv_file | $\rightarrow$ | reader_tool | $\rightarrow$ | sea_ice_stream | $\rightarrow$ | list_mean_tool | $\rightarrow$ | sea_ice_mean_stream |
|filesystem||memory||memory||memory||MongoDB|
End of explanation
"""
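Stripped of the HyperStream API, the composition above is ordinary dataflow: each tool consumes one stream of (time, value) items and produces another. A plain-Python sketch with made-up values (this is not the HyperStream API, just the idea):

```python
# a 'stream' here is just a list of (time, value) items
raw = [(1990, [14.8, 11.1]), (1991, [14.5, 11.3]), (1992, [14.9, 11.0])]

def list_mean(stream):
    # per-item mean, analogous to the list_mean tool
    return [(t, sum(vals) / len(vals)) for t, vals in stream]

means = list_mean(raw)
print(means)
```

In HyperStream the same chaining is expressed by passing one tool's sink stream as the next tool's source.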
my_time, my_data = zip(*[(key.__str__(), value) for key, value in sea_ice_means_stream.window().items()])
plot_high_chart(my_time, my_data, type="high_stock",
                    title='Mean of sea levels in the Arctic and Antarctica', yax='meters')
"""
Explanation: Visualization
We can now plot all the values of the last computed window. In this case there is only one window with all the data computed by the tool.
End of explanation
"""
|
matthewzhenggong/fiwt | workspace_py/.ipynb_checkpoints/RigStaticRollId-Exp36-Copy1-checkpoint.ipynb | lgpl-3.0 | %run matt_startup
%run -i matt_utils
button_qtconsole()
#import other needed modules in all used engines
#with dview.sync_imports():
# import os
"""
Explanation: Parameter Estimation of RIG Roll Experiments
Setup and descriptions
Without ACM model
Turn on wind tunnel
Only 1DoF for RIG roll movement
Use small-amplitude aileron command of CMP as inputs (in degrees)
$$U = \delta_{a,cmp}(t)$$
Consider RIG roll angle and its derivative as States (in radians)
$$X = \begin{pmatrix} \phi_{rig} \\ \dot{\phi}_{rig} \end{pmatrix}$$
Observe RIG roll angle and its derivative as Outputs (in degrees)
$$Z = \begin{pmatrix} \phi_{rig} \\ \dot{\phi}_{rig} \end{pmatrix}$$
Use the output error method based on maximum likelihood (ML) to estimate
$$ \theta = \begin{pmatrix} C_{l,\delta_a,cmp} \\ C_{lp,cmp} \end{pmatrix} $$
Startup computation engines
End of explanation
"""
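The output-error method scores a candidate $\theta$ by simulating the model output and summing squared residuals against the measurements; a scalar toy version (all numbers illustrative, the real simulation is the dynamic model defined below):

```python
import numpy as np

def cost(theta, u, z):
    # stand-in 'simulation': a static gain from input to output
    z_hat = theta * u
    r = z - z_hat
    return float(r @ r)  # sum of squared residuals

u = np.array([1.0, 2.0, 3.0])
z = 2.0 * u  # 'measurements' generated with true theta = 2
print(cost(2.0, u, z), cost(1.0, u, z))
```

The optimizer then searches for the $\theta$ minimizing this cost, which coincides with the ML estimate under Gaussian measurement noise.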
filename = 'FIWT_Exp036_20150605144438.dat.npz'
def loadData():
# Read and parse raw data
global exp_data
exp_data = np.load(filename)
# Select colums
global T_cmp, da_cmp
T_cmp = exp_data['data33'][:,0]
da_cmp = exp_data['data33'][:,3]
global T_rig, phi_rig
T_rig = exp_data['data44'][:,0]
phi_rig = exp_data['data44'][:,2]
loadData()
"""
Explanation: Data preparation
Load raw data
End of explanation
"""
def checkInputOutputData():
#check inputs/outputs
fig, ax = plt.subplots(2,1,True)
    ax[1].plot(T_cmp,da_cmp,'r', picker=2)
    ax[0].plot(T_rig,phi_rig, 'b', picker=2)
    ax[1].set_ylabel('$\delta \/ / \/ ^o$')
    ax[0].set_ylabel('$\phi \/ / \/ ^o$')
ax[1].set_xlabel('$T \/ / \/ s$', picker=True)
ax[0].set_title('Output', picker=True)
fig.canvas.mpl_connect('pick_event', onPickTime)
fig.show()
display(fig)
button_CheckData()
"""
Explanation: Check time sequence and inputs/outputs
Click 'Check data' button to show the raw data.
Click on curves to select time point and push into queue; click 'T/s' text to pop up last point in the queue; and click 'Output' text to print time sequence table.
End of explanation
"""
process_set1 = {
'time_marks' : [
[17.6940178696,117,"ramp u1"],
[118.7,230.395312673,"ramp d1"],
[258.807481992,357.486188688,"ramp u2"],
[359.463122988,459.817499014,"ramp d2"],
[461.067939262,558.784538108,"ramp d3"],
[555.553175853,658.648739191,"ramp u4"],
],
'Z':[(T_cmp, da_cmp,1),],
'Z_names' : ['$\delta_{a,cmp} \, / \, ^o$'],
'U_names' : ['$\phi_{a,rig} \, / \, ^o$'],
'U':[(T_rig, phi_rig,1),],
'cutoff_freq': 1, #Hz
'consts':{'DT':0.5,'id':0,'V':30}
}
display_data_set(process_set1)
resample(process_set1, append=False);
"""
Explanation: Input $\delta_T$ and focused time ranges
End of explanation
"""
%%px --local
#update common const parameters in all engines
angles = range(-170,171,10)
angles_num = len(angles)
#problem size
Nx = 0
Nu = 1
Ny = 1
Npar = angles_num
#reference
S_c = 0.1254 #S_c(m2)
b_c = 0.7 #b_c(m)
g = 9.81 #g(m/s2)
#static measurement
m_T = 7.5588 #m_T(kg)
l_z_T = 0.0424250531303 #l_z_T(m)
#previous estimations
F_c = 0.0532285873599 #F_c(N*m)
#for short
_m_T_l_z_T_g = -(m_T*l_z_T)*g
angles_cmpx = [-41, -35, -30, -25, -20, -15, -10, -5, 5, 10, 15, 20, 25, 30, 35, 41]
Clda_cmpx = np.array([[ 0.05968291, 0.0553231, 0.04696643, 0.03889542, 0.03162213, 0.0271057,
0.01839293, 0.00900769, -0.00497219, -0.01407865, -0.02279645, -0.02607804,
-0.03158932, -0.03814162, -0.04606267, -0.05423365],
[ 0.04821761, 0.0404353, 0.03394095, 0.0276883, 0.0223423, 0.01843785,
0.01272706, 0.00739553, -0.00449509, -0.00899431, -0.01628838, -0.01994654,
-0.02522128, -0.03103204, -0.03819486, -0.04861837],
[ 0.04235569, 0.03650224, 0.02990824, 0.02505072, 0.01880923, 0.01535756,
0.01121967, 0.00679852, -0.0041091, -0.00866967, -0.01516793, -0.01995755,
-0.02558169, -0.03024035, -0.0346464, -0.03949022],
[ 0.03393535, 0.03038317, 0.02596484, 0.02259024, 0.01868117, 0.01450576,
0.00870284, 0.00571733, -0.00445593, -0.00858266, -0.01293602, -0.0150844,
-0.01936097, -0.02433856, -0.03052023, -0.0351404 ]])
Clda_cmp = np.sum(Clda_cmpx, axis=0)
Da_cmp = scipy.interpolate.interp1d(Clda_cmp, angles_cmpx)
def obs(Z,T,U,params,consts):
DT = consts['DT']
ID = consts['id']
V = consts['V']
unk_moment = scipy.interpolate.interp1d(angles, params[0:angles_num])
s = T.size
phi = U[:s,0]
hdr = int(1/DT)
col_fric = np.copysign(-F_c,phi[hdr:]-phi[:-hdr])
col_fric = np.concatenate((np.ones(hdr-1)*col_fric[0], col_fric, np.ones(hdr-1)*col_fric[-1]))
qbarSb = 0.5*1.225*V*V*S_c*b_c
Da = Da_cmp((-_m_T_l_z_T_g*np.sin(phi/57.3)-unk_moment(phi)+col_fric)/qbarSb)
return Da.reshape((-1,1))
display(HTML('<b>Constant Parameters</b>'))
table = ListTable()
table.append(['Name','Value','unit'])
table.append(['$S_c$',S_c,'$m^2$'])
table.append(['$b_c$',b_c,'$m$'])
table.append(['$g$',g,'$m/s^2$'])
table.append(['$m_T$',m_T,'$kg$'])
table.append(['$l_{zT}$',l_z_T,'$m$'])
table.append(['$F_c$',F_c,'$Nm$'])
display(table)
"""
Explanation: Define dynamic model to be estimated
$$\left\{\begin{aligned}
M_{x,rig} &= M_{x,a} + M_{x,f} + M_{x,cg} = 0 \\
M_{x,a} &= \frac{1}{2} \rho V^2 S_c b_c C_{l,\delta_a,cmp} \, \delta_{a,cmp} \\
M_{x,f} &= -F_c \, \operatorname{sign}(\dot{\phi}_{rig}) \\
M_{x,cg} &= -m_T g l_{zT} \sin \left ( \phi - \phi_0 \right )
\end{aligned}\right.$$
\end{align}\end{matrix}\right.$$
End of explanation
"""
#initial guess
param0 = [0]*angles_num
param_name = ['Mu_{}'.format(angles[i]) for i in range(angles_num)]
param_unit = ['Nm']*angles_num
NparID = Npar
opt_idx = range(Npar)
opt_param0 = [param0[i] for i in opt_idx]
par_del = [0.01]*(angles_num)
bounds = [(-2,2)]*(angles_num)
display_default_params()
#select sections for training
section_idx = range(len(sections))
display_data_for_train()
#push parameters to engines
push_opt_param()
# select 4 section from training data
#idx = random.sample(section_idx, 4)
idx = section_idx[:]
# interact_guess();
%matplotlib inline
update_guess();
"""
Explanation: Initial guess
Input default values and ranges for parameters
Select sections for trainning
Adjust parameters based on simulation results
Decide start values of parameters for optimization
End of explanation
"""
display_preopt_params()
if False:
InfoMat = None
method = 'trust-ncg'
def hessian(opt_params, index):
global InfoMat
return InfoMat
dview['enable_infomat']=True
options={'gtol':1}
opt_bounds = None
else:
method = 'L-BFGS-B'
hessian = None
dview['enable_infomat']=False
options={'ftol':1e-6,'maxfun':400}
opt_bounds = bounds
cnt = 0
tmp_rslt = None
T0 = time.time()
print('#cnt, Time, |R|')
%time res = sp.optimize.minimize(fun=costfunc, x0=opt_param0, \
args=(opt_idx,), method=method, jac=True, hess=hessian, \
bounds=opt_bounds, options=options)
"""
Explanation: Optimize using ML
End of explanation
"""
display_opt_params()
# show result
idx = range(len(sections))
display_data_for_test();
update_guess();
res_params = res['x']
params = param0[:]
for i,j in enumerate(opt_idx):
params[j] = res_params[i]
k1 = np.array(params[0:angles_num])
print('angles = ')
print(angles)
print('L_unk = ')
print(k1)
%matplotlib inline
plt.figure(figsize=(12,8),dpi=300)
plt.plot(angles, k1, 'r')
plt.ylabel('$L_{unk} \, / \, Nm$')
plt.xlabel('$\phi \, / \, ^o$')
plt.show()
toggle_inputs()
button_qtconsole()
(-0.05-0.05)/(80/57.3)
Clda_cmp/4
"""
Explanation: Show and test results
End of explanation
"""
|
newlawrence/poliastro | docs/source/examples/Catch that asteroid!.ipynb | mit | import matplotlib.pyplot as plt
plt.ion()
from astropy import units as u
from astropy.time import Time
from astropy.utils.data import conf
conf.dataurl
conf.remote_timeout
"""
Explanation: Catch that asteroid!
End of explanation
"""
conf.remote_timeout = 10000
"""
Explanation: First, we need to increase the timeout to allow the data download to complete properly
End of explanation
"""
from astropy.coordinates import solar_system_ephemeris
solar_system_ephemeris.set("jpl")
from poliastro.bodies import *
from poliastro.twobody import Orbit
from poliastro.plotting import OrbitPlotter, plot
EPOCH = Time("2017-09-01 12:05:50", scale="tdb")
earth = Orbit.from_body_ephem(Earth, EPOCH)
earth
plot(earth, label=Earth);
from poliastro.neos import neows
florence = neows.orbit_from_name("Florence")
florence
"""
Explanation: Then, we do the rest of the imports and create our initial orbits.
End of explanation
"""
florence.epoch
florence.epoch.iso
florence.inc
"""
Explanation: Two problems: the epoch is not the one we desire, and the inclination is with respect to the ecliptic!
End of explanation
"""
florence = florence.propagate(EPOCH)
florence.epoch.tdb.iso
"""
Explanation: We first propagate:
End of explanation
"""
florence_icrs = florence.to_icrs()
florence_icrs.rv()
"""
Explanation: And now we have to convert to the same frame that the planetary ephemerides are using to make consistent comparisons, which is ICRS:
End of explanation
"""
from poliastro.util import norm
norm(florence_icrs.r - earth.r) - Earth.R
"""
Explanation: Let us compute the distance between Florence and the Earth:
End of explanation
"""
from IPython.display import HTML
HTML(
"""<blockquote class="twitter-tweet" data-lang="en"><p lang="es" dir="ltr">La <a href="https://twitter.com/esa_es">@esa_es</a> ha preparado un resumen del asteroide <a href="https://twitter.com/hashtag/Florence?src=hash">#Florence</a> 😍 <a href="https://t.co/Sk1lb7Kz0j">pic.twitter.com/Sk1lb7Kz0j</a></p>— AeroPython (@AeroPython) <a href="https://twitter.com/AeroPython/status/903197147914543105">August 31, 2017</a></blockquote>
<script src="//platform.twitter.com/widgets.js" charset="utf-8"></script>"""
)
"""
Explanation: <div class="alert alert-success">This value is consistent with what ESA says! $7\,060\,160$ km</div>
End of explanation
"""
frame = OrbitPlotter()
frame.plot(earth, label="Earth")
frame.plot(Orbit.from_body_ephem(Mars, EPOCH))
frame.plot(Orbit.from_body_ephem(Venus, EPOCH))
frame.plot(Orbit.from_body_ephem(Mercury, EPOCH))
frame.plot(florence_icrs, label="Florence");
"""
Explanation: And now we can plot!
End of explanation
"""
frame = OrbitPlotter()
frame.plot(earth, label="Earth")
frame.plot(florence, label="Florence (Ecliptic)")
frame.plot(florence_icrs, label="Florence (ICRS)");
"""
Explanation: The difference between doing it well and doing it wrong is clearly visible:
End of explanation
"""
from astropy.coordinates import GCRS, CartesianRepresentation
florence_heclip = florence.frame.realize_frame(
florence.represent_as(CartesianRepresentation)
)
florence_gcrs_trans_cart = (florence_heclip.transform_to(GCRS(obstime=EPOCH))
.represent_as(CartesianRepresentation))
florence_gcrs_trans_cart
florence_hyper = Orbit.from_vectors(
Earth,
r=florence_gcrs_trans_cart.xyz,
v=florence_gcrs_trans_cart.differentials['s'].d_xyz,
epoch=EPOCH
)
florence_hyper
"""
Explanation: And now let's do something more complicated: express our orbit with respect to the Earth! For that, we will use GCRS, taking care to set the correct observation time:
End of explanation
"""
moon = Orbit.from_body_ephem(Moon, EPOCH)
moon
plot(moon, label=Moon)
plt.gcf().autofmt_xdate()
"""
Explanation: We now retrieve the ephemerides of the Moon, which are given directly in GCRS:
End of explanation
"""
frame = OrbitPlotter()
# This first plot sets the frame
frame.plot(florence_hyper, label="Florence")
# And then we add the Moon
frame.plot(moon, label=Moon)
plt.xlim(-1000000, 8000000)
plt.ylim(-5000000, 5000000)
plt.gcf().autofmt_xdate()
"""
Explanation: And now for the final plot:
End of explanation
"""
|
ryan-leung/SolvingProjectEuler | Q021-Q030.ipynb | bsd-3-clause | def d(n):
if n==2:
return 1
result=0
for i in xrange(1,n/2+1):
if n % i == 0:
result=i+result
return result
amicable=[]
for a in xrange(2,10000):
b=d(a)
    if d(b) == a and a != b:
amicable.append(a)
amicable.append(b)
#print a,b
print sum(set(amicable))
"""
Explanation: Q021
Amicable numbers
Let d(n) be defined as the sum of proper divisors of n (numbers less than n which divide evenly into n).
If d(a) = b and d(b) = a, where a ≠ b, then a and b are an amicable pair and each of a and b are called amicable numbers.
For example, the proper divisors of 220 are 1, 2, 4, 5, 10, 11, 20, 22, 44, 55 and 110; therefore d(220) = 284. The proper divisors of 284 are 1, 2, 4, 71 and 142; so d(284) = 220.
Evaluate the sum of all the amicable numbers under 10000.
End of explanation
"""
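The trial-division d(n) above costs O(n) per call; a divisor-sum sieve computes every proper-divisor sum below the limit in one pass (Python 3 syntax here):

```python
LIMIT = 10000
SIEVE = 4 * LIMIT  # proper-divisor sums of n < LIMIT all fit below this bound

d = [0] * SIEVE
for i in range(1, SIEVE // 2 + 1):
    for j in range(2 * i, SIEVE, i):
        d[j] += i  # i is a proper divisor of j

total = sum(a for a in range(2, LIMIT) if d[a] != a and d[d[a]] == a)
print(total)
```

The extra sieve headroom covers the case where d(a) exceeds the limit, so no partner lookup falls off the table.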
import urllib
urllib.urlretrieve("https://projecteuler.net/project/resources/p022_names.txt","p022_names.txt")
f=open('p022_names.txt')
s=f.read().replace('"','').split(",")
sorted_s=sorted(s)
scores=0
for i,a in enumerate(sorted_s):
score=0
for j in map(ord, a):
score=score+j-64
scores=scores+(i+1)*score
print scores
"""
Explanation: Q022
Names scores
Using names.txt (right click and 'Save Link/Target As...'), a 46K text file containing over five-thousand first names, begin by sorting it into alphabetical order. Then working out the alphabetical value for each name, multiply this value by its alphabetical position in the list to obtain a name score.
For example, when the list is sorted into alphabetical order, COLIN, which is worth 3 + 15 + 12 + 9 + 14 = 53, is the 938th name in the list. So, COLIN would obtain a score of 938 × 53 = 49714.
What is the total of all the name scores in the file?
End of explanation
"""
n=28123
abundant=[]
for a in xrange(2,n):
if d(a) > a:
abundant.append(a)
abundant=set(abundant)
print len(abundant)
abundant_sum=0
for i in xrange(2,n):
for x in abundant:
if i-x in abundant:
abundant_sum+=i
break
print sum(xrange(2,n))-abundant_sum+1 # add back 1, which xrange(2,n) misses
"""
Explanation: Q0223
Non-abundant sums
A perfect number is a number for which the sum of its proper divisors is exactly equal to the number. For example, the sum of the proper divisors of 28 would be 1 + 2 + 4 + 7 + 14 = 28, which means that 28 is a perfect number.
A number n is called deficient if the sum of its proper divisors is less than n and it is called abundant if this sum exceeds n.
As 12 is the smallest abundant number, 1 + 2 + 3 + 4 + 6 = 16, the smallest number that can be written as the sum of two abundant numbers is 24. By mathematical analysis, it can be shown that all integers greater than 28123 can be written as the sum of two abundant numbers. However, this upper limit cannot be reduced any further by analysis even though it is known that the greatest number that cannot be expressed as the sum of two abundant numbers is less than this limit.
Find the sum of all the positive integers which cannot be written as the sum of two abundant numbers.
End of explanation
"""
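The double loop above re-scans the abundant set for every target and is slow; a vectorised sketch marks every reachable sum directly instead (assumes numpy, as used elsewhere in this notebook; Python 3 syntax):

```python
import numpy as np

N = 28123

# divisor-sum sieve
d = np.zeros(N + 1, dtype=np.int64)
for i in range(1, N // 2 + 1):
    d[2 * i::i] += i
abundant = np.nonzero(d > np.arange(N + 1))[0]

# mark every number expressible as a sum of two abundant numbers
expressible = np.zeros(N + 1, dtype=bool)
for a in abundant:
    partners = abundant[:np.searchsorted(abundant, N - a, side='right')]
    expressible[a + partners] = True

answer = int(np.arange(N + 1)[~expressible].sum())
print(answer)
```

Each abundant number is paired only with partners that keep the sum within range, so no bounds checks are needed inside the loop.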
def next_permutation(arr):
# Find non-increasing suffix
i = len(arr) - 1
while i > 0 and arr[i - 1] >= arr[i]:
i -= 1
if i <= 0:
return False
# Find successor to pivot
j = len(arr) - 1
while arr[j] <= arr[i - 1]:
j -= 1
arr[i - 1], arr[j] = arr[j], arr[i - 1]
# Reverse suffix
arr[i : ] = arr[len(arr) - 1 : i - 1 : -1]
return True
# https://www.nayuki.io/page/next-lexicographical-permutation-algorithm
s=[0,1,2,3,4,5,6,7,8,9]
#s=[0,1,2,3]
n=1000000
def next_lexicographical(s):
# Find non-increasing suffix
i = len(s) - 1
while i > 0 and s[i-1] >= s[i]:
i -= 1
if i <= 0:
return False
# Find successor to pivot
j = len(s) - 1
while s[j] <= s[i - 1]:
j -= 1
s[i-1],s[j]=s[j],s[i-1]
s[i:]=s[len(s)-1:i-1:-1]
return s
i=1
while i < n:
s=next_lexicographical(s)
#print s
i+=1
for i in s:
print i,
"""
Explanation: Q024
Lexicographic permutations
A permutation is an ordered arrangement of objects. For example, 3124 is one possible permutation of the digits 1, 2, 3 and 4. If all of the permutations are listed numerically or alphabetically, we call it lexicographic order. The lexicographic permutations of 0, 1 and 2 are:
012 021 102 120 201 210
What is the millionth lexicographic permutation of the digits 0, 1, 2, 3, 4, 5, 6, 7, 8 and 9?
End of explanation
"""
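Stepping through 999,999 successors works, but the factorial number system gives the n-th permutation directly: divide the 0-based index by (k-1)!, (k-2)!, ... and pick the remaining digit indexed by each quotient (Python 3 syntax):

```python
from math import factorial

def nth_permutation(digits, n):
    """Return the n-th (0-indexed) lexicographic permutation as a string."""
    digits = sorted(digits)
    out = []
    for k in range(len(digits) - 1, -1, -1):
        i, n = divmod(n, factorial(k))
        out.append(str(digits.pop(i)))
    return ''.join(out)

print(nth_permutation(range(10), 10**6 - 1))
```

Note the millionth permutation has 0-based index 999,999.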
from decimal import *
getcontext().prec = 100
def recurring_cycle(n):
d= Decimal(1) / Decimal(n)
print d
for i in range(0,100):
if (d*(10**i)-d) % 1 == 0 :
break
return i
recurring_cycle(100)
"""
Explanation: Q026
Reciprocal cycles
A unit fraction contains 1 in the numerator. The decimal representation of the unit fractions with denominators 2 to 10 are given:
1/2 = 0.5
1/3 = 0.(3)
1/4 = 0.25
1/5 = 0.2
1/6 = 0.1(6)
1/7 = 0.(142857)
1/8 = 0.125
1/9 = 0.(1)
1/10 = 0.1
Where 0.1(6) means 0.166666..., and has a 1-digit recurring cycle. It can be seen that 1/7 has a 6-digit recurring cycle.
Find the value of d < 1000 for which 1/d contains the longest recurring cycle in its decimal fraction part.
End of explanation
"""
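The Decimal approach above is capped by the context precision (here 100 digits, while the longest cycle below 1000 is 982 digits). Integer arithmetic avoids that: after removing factors of 2 and 5, the cycle length of 1/d is exactly the multiplicative order of 10 modulo d (Python 3 syntax):

```python
def cycle_length(d):
    # factors of 2 and 5 only shift the decimal expansion; strip them
    for p in (2, 5):
        while d % p == 0:
            d //= p
    if d == 1:
        return 0  # terminating decimal, no cycle
    # multiplicative order of 10 modulo d
    k, r = 1, 10 % d
    while r != 1:
        r = (10 * r) % d
        k += 1
    return k

best = max(range(2, 1000), key=cycle_length)
print(best, cycle_length(best))
```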
import numpy as np
def quadratic_prime(n,a,b):
return n*n+a*n+b
def if_prime(n):
if n < 1:
return False
elif n < 4:
return True
elif (n % 2 == 0) or (n % 3 == 0):
return False
i = 5
while i*i < n + 1:
if (n % i == 0) or (n % (i + 2) == 0):
return False
i = i + 6
return True
A,B = np.meshgrid(range(-999,1000),range(-1000,1001))
longest=[[],0]
for i in zip(A.ravel(),B.ravel()):
n=0
while if_prime(quadratic_prime(n,i[0],i[1])):
n=n+1
if n > longest[1]:
longest=[i,n]
print longest
print longest[0][0]*longest[0][1]
"""
Explanation: Q027
Quadratic primes
Euler discovered the remarkable quadratic formula:
$n^2 + n + 41$
It turns out that the formula will produce 40 primes for the consecutive integer values $0 \le n \le 39$. However, when $n = 40, 40^2 + 40 + 41 = 40(40 + 1) + 41$ is divisible by 41, and certainly when $n = 41, 41^2 + 41 + 41$ is clearly divisible by 41.
The incredible formula $n^2 - 79n + 1601$ was discovered, which produces 80 primes for the consecutive values $0 \le n \le 79$. The product of the coefficients, −79 and 1601, is −126479.
Considering quadratics of the form:
$n^2 + an + b$, where $|a| < 1000$ and $|b| \le 1000$
where $|n|$ is the modulus/absolute value of $n$
e.g. $|11| = 11$ and $|-4| = 4$
Find the product of the coefficients, $a$ and $b$, for the quadratic expression that produces the maximum number of primes for consecutive values of $n$, starting with $n = 0$.
End of explanation
"""
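The search can be cut down sharply: n = 0 must give a prime, so b itself must be prime. A sketch restricted accordingly (Python 3 syntax, reusing a simple primality test):

```python
def is_prime(n):
    if n < 2:
        return False
    if n < 4:
        return True
    if n % 2 == 0 or n % 3 == 0:
        return False
    i = 5
    while i * i <= n:
        if n % i == 0 or n % (i + 2) == 0:
            return False
        i += 6
    return True

best = (0, 0, 0)  # (chain length, a, b)
for b in range(2, 1001):
    if not is_prime(b):
        continue  # n = 0 gives b, so b must be prime
    for a in range(-999, 1000):
        n = 0
        while is_prime(n * n + a * n + b):
            n += 1
        if n > best[0]:
            best = (n, a, b)
print(best, best[1] * best[2])
```

Negative b can be skipped outright for the same reason: n = 0 would give a non-positive value, which cannot be prime.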
print 1+3+5+7+9+13+17+21+25
def gen_spiral(n):
result=[1]
i=1
number=1
while i<n:
j=1
while j <= 4:
number=number+i*2
result.append(number)
j+=1
i+=1
return result
print gen_spiral(3)
"""
Explanation: Q028
Number spiral diagonals
Starting with the number 1 and moving to the right in a clockwise direction a 5 by 5 spiral is formed as follows:
21 22 23 24 25
20 7 8 9 10
19 6 1 2 11
18 5 4 3 12
17 16 15 14 13
It can be verified that the sum of the numbers on the diagonals is 101.
What is the sum of the numbers on the diagonals in a 1001 by 1001 spiral formed in the same way?
End of explanation
"""
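The ring sums also admit a closed form: ring k (side length 2k+1) has corners (2k+1)^2, (2k+1)^2 - 2k, (2k+1)^2 - 4k and (2k+1)^2 - 6k, so each ring contributes 4(2k+1)^2 - 12k to the diagonal sum (Python 3 syntax):

```python
def diagonal_sum(side):
    # side must be odd; ring k = 1..(side-1)//2 has side length 2k+1
    total = 1
    for k in range(1, (side - 1) // 2 + 1):
        total += 4 * (2 * k + 1) ** 2 - 12 * k
    return total

print(diagonal_sum(5), diagonal_sum(1001))
```

For side 5 this reproduces the 101 from the problem statement, which also matches summing gen_spiral's corner values.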
print 2*np.log(4)
print 4*np.log(2)
import numpy as np
uniq=[]
for a in range(2,101):
for b in range(2,101):
uniq.append(a**b)
len(set(uniq))
"""
Explanation: Q029
Distinct powers
Consider all integer combinations of $a^b$ for 2 ≤ a ≤ 5 and 2 ≤ b ≤ 5:
$2^2=4, 2^3=8, 2^4=16, 2^5=32$
$3^2=9, 3^3=27, 3^4=81, 3^5=243$
$4^2=16, 4^3=64, 4^4=256, 4^5=1024$
$5^2=25, 5^3=125, 5^4=625, 5^5=3125$
If they are then placed in numerical order, with any repeats removed, we get the following sequence of 15 distinct terms:
4, 8, 9, 16, 25, 27, 32, 64, 81, 125, 243, 256, 625, 1024, 3125
How many distinct terms are in the sequence generated by $a^b$ for 2 ≤ a ≤ 100 and 2 ≤ b ≤ 100?
End of explanation
"""
print "maximum number of digits ="
n=1
while 10**(n+1)-1 < n*9**5:
n+=1
print n
result=[]
for n in range(2,10**6):
d = [int(i) for i in str(n)]
e = sum([j**5 for j in d])
if n == e:
result.append(n)
sum(result)
"""
Explanation: Q030
Digit fifth powers
Surprisingly there are only three numbers that can be written as the sum of fourth powers of their digits:
$1634 = 1^4 + 6^4 + 3^4 + 4^4$
$8208 = 8^4 + 2^4 + 0^4 + 8^4$
$9474 = 9^4 + 4^4 + 7^4 + 4^4$
As 1 = $1^4$ is not a sum it is not included.
The sum of these numbers is 1634 + 8208 + 9474 = 19316.
Find the sum of all the numbers that can be written as the sum of fifth powers of their digits.
End of explanation
"""
|
atulsingh0/MachineLearning | python_DC/IntoductionToDataBase_#1.4.ipynb | gpl-3.0 | # import
"""
Explanation: Creating and Manipulating DataBase
End of explanation
"""
# # Import Table, Column, String, Integer, Float, Boolean from sqlalchemy
# from sqlalchemy import Table, Column, String, Integer, Float, Boolean
# # Define a new table with a name, count, amount, and valid column: data
# data = Table('data', metadata,
# Column('name', String(255)),
# Column('count', Integer()),
# Column('amount', Float()),
# Column('valid', Boolean())
# )
# # Use the metadata to create the table
# metadata.create_all(engine)
# # Print table repr
# print(repr(data))
"""
Explanation: Import Table, Column, String, Integer, Float, Boolean from sqlalchemy.
Build a new table called data with columns 'name' (String(255)), 'count' (Integer), 'amount'(Float), and 'valid' (Boolean) columns. The second argument of Table needs to be metadata, which is already initialized.
Create the table in the database by passing data to metadata.create_all().
End of explanation
"""
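The commented-out cell above assumes `engine` and `metadata` already exist. As a self-contained check of the emitted DDL, the same table can be created with the stdlib sqlite3 module (the raw SQL here is a stand-in for what metadata.create_all() issues):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("""CREATE TABLE data (
    name VARCHAR(255),
    count INTEGER,
    amount FLOAT,
    valid BOOLEAN)""")

# PRAGMA table_info rows are (cid, name, type, notnull, dflt_value, pk)
cols = [row[1] for row in conn.execute("PRAGMA table_info(data)")]
print(cols)
```

SQLAlchemy adds value on top of this by tracking the table in metadata and mapping Python types to per-dialect column types automatically.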
# # Import Table, Column, String, Integer, Float, Boolean from sqlalchemy
# from sqlalchemy import Table, Column, String, Integer, Float, Boolean
# # Define a new table with a name, count, amount, and valid column: data
# data = Table('data', metadata,
# Column('name', String(255), unique=True),
# Column('count', Integer(), default=1),
# Column('amount', Float()),
# Column('valid', Boolean(), default=False)
# )
# # Use the metadata to create the table
# metadata.create_all(engine)
# # Print the table details
# print(repr(metadata.tables['data']))
"""
Explanation: Table, Column, String, Integer, Float, Boolean are already imported from sqlalchemy.
Build a new table called data with a unique name (String), count (Integer) defaulted to 1, amount (Float), and valid (Boolean) defaulted to False.
Hit submit to create the table in the database and to print the table details for data.
End of explanation
"""
# # Import insert and select from sqlalchemy
# from sqlalchemy import insert, select
# # Build an insert statement to insert a record into the data table: stmt
# stmt = insert(data).values(name='Anna', count=1, amount=1000.00, valid=True)
# # Execute the statement via the connection: results
# results = connection.execute(stmt)
# # Print result rowcount
# print(results.rowcount)
# # Build a select statement to validate the insert
# stmt = select([data]).where(data.columns.name == 'Anna')
# # Print the result of executing the query.
# print(connection.execute(stmt).first())
"""
Explanation: Import insert and select from the sqlalchemy module.
Build an insert statement for the data table to set name to 'Anna', count to 1, amount to 1000.00, and valid to True. Save the statement as stmt.
Execute stmt with the connection and store the results.
Print the rowcount attribute of results to see how many records were inserted.
Build a select statement to query for the record with the name of Anna.
Hit submit to print the results of executing the select statement.
End of explanation
"""
# # Build a list of dictionaries: values_list
# values_list = [
# {'name': 'Anna', 'count': 1, 'amount': 1000.00, 'valid': True},
# {'name': 'Taylor', 'count': 1, 'amount': 750.00, 'valid': False}
# ]
# # Build an insert statement for the data table: stmt
# stmt = insert(data)
# # Execute stmt with the values_list: results
# results = connection.execute(stmt, values_list)
# # Print rowcount
# print(results.rowcount)
"""
Explanation: Build a list of dictionaries called values_list with two dictionaries. In the first dictionary set name to 'Anna', count to 1, amount to 1000.00, and valid to True. In the second dictionary of the list, set name to 'Taylor', count to 1, amount to 750.00, and valid to False.
Build an insert statement for the data table for a multiple insert, save it as stmt.
Execute stmt with the values_list via connection and store the results.
Print the rowcount of the results.
End of explanation
"""
# # Create a insert statement for census: stmt
# stmt = insert(census)
# # Create an empty list and zeroed row count: values_list, total_rowcount
# values_list = []
# total_rowcount = 0
# # Enumerate the rows of csv_reader
# for idx, row in enumerate(csv_reader):
# #create data and append to values_list
# data = {'state': row[0], 'sex': row[1], 'age': row[2], 'pop2000': row[3],
# 'pop2008': row[4]}
# values_list.append(data)
# # Check to see if divisible by 51
# if idx % 51 == 0:
# results = connection.execute(stmt, values_list)
# total_rowcount += results.rowcount
# values_list = []
# # Print total rowcount
# print(total_rowcount)
"""
Explanation: Create a statement for bulk insert into the census table and save it as stmt.
Create an empty list called values_list, create a variable called total_rowcount set to 0 (an empty list can be created with []).
Within the for loop, create a dictionary data for each row and append it to values_list. Within the for loop, row will be a list whose entries are state, sex, age, pop2000 and pop2008 (in that order).
Recall that, in the for loop, idx will be the csv line number. If 51 will cleanly divide into the current idx (NOTE: use the % operator and make sure it is 0), execute stmt with the values_list. Save the result as results. The results rowcount is then added to total_rowcount, and values_list is set back to an empty list.
Hit submit to print total_rowcount when done with all the records.
End of explanation
"""
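The flush-every-51-rows pattern (one batch per sweep of the 51 states) can be sketched with the stdlib sqlite3 module and made-up rows; note that any final partial batch also needs a flush:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE census (state TEXT, sex TEXT, age INTEGER, "
             "pop2000 INTEGER, pop2008 INTEGER)")

# made-up rows standing in for the csv_reader output
rows = [('NY', 'M', age, 1000 + age, 1100 + age) for age in range(85)]

BATCH = 51
total_rowcount = 0
batch = []
for row in rows:
    batch.append(row)
    if len(batch) == BATCH:
        cur = conn.executemany("INSERT INTO census VALUES (?, ?, ?, ?, ?)", batch)
        total_rowcount += cur.rowcount
        batch = []
if batch:  # flush any final partial batch
    cur = conn.executemany("INSERT INTO census VALUES (?, ?, ?, ?, ?)", batch)
    total_rowcount += cur.rowcount
print(total_rowcount)
```

Batching keeps memory bounded while still amortising the per-statement overhead across many rows.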
# # Build a select statement: select_stmt
# select_stmt = select([state_fact]).where(state_fact.columns.name == 'New York')
# # Print the results of executing the select_stmt
# print(connection.execute(select_stmt).fetchall())
# # Build a statement to update the fips_state to 36: stmt
# stmt = update(state_fact).values(fips_state=36)
# # Append a where clause to limit it to records for New York state
# stmt = stmt.where(state_fact.columns.name == 'New York')
# # Execute the statement: results
# results = connection.execute(stmt)
# # Print rowcount
# print(results.rowcount)
# # Execute the select_stmt again to view the changes
# print(connection.execute(select_stmt).fetchall())
"""
Explanation: Updating individual records
Build a statement to select all columns from the state_fact table where the name column is New York. Call it select_stmt.
Print the results of executing the select_stmt and fetching all records.
Build an update statement to change the fips_state column code to 36, save it as stmt.
Append a where clause to filter for states states with the name of 'New York' in the state_fact table.
Execute stmt via the connection and save the output as results.
Hit Submit to print the rowcount of the results and to print the results of executing the select_stmt. This will verify the fips_state code is now 36.
End of explanation
"""
# # Build a statement to update the notes to 'The Wild West': stmt
# stmt = update(state_fact).values(notes='The Wild West')
# # Append a where clause to match the West census region records
# stmt = stmt.where(state_fact.columns.census_region_name == 'West')
# # Execute the statement: results
# results = connection.execute(stmt)
# # Print rowcount
# print(results.rowcount)
"""
Explanation: Build an update statement to set the notes column in the state_fact table to 'The Wild West'. Save it as stmt.
Append a where clause to filter for records that have 'West' in the census_region_name column.
Execute stmt via the connection and save the output as results.
Hit Submit to print the rowcount of the results.
End of explanation
"""
# # Build a statement to select name from state_fact: stmt
# fips_stmt = select([state_fact.columns.name])
# # Append a where clause to Match the fips_state to flat_census fips_code
# fips_stmt = fips_stmt.where(
# state_fact.columns.fips_state == flat_census.columns.fips_code)
# # Build an update statement to set the name to fips_stmt: update_stmt
# update_stmt = update(flat_census).values(state_name=fips_stmt)
# # Execute update_stmt: results
# results = connection.execute(update_stmt)
# # Print rowcount
# print(results.rowcount)
"""
Explanation: Build a statement to select the name column from state_fact. Save the statement as fips_stmt.
Append a where clause to fips_stmt that matches fips_state from the state_fact table with fips_code in the flat_census table.
Build an update statement to set the state_name in flat_census to fips_stmt. Save the statement as update_stmt.
Hit Submit to execute update_stmt, store the results and print the rowcount of results.
End of explanation
"""
# # Import delete, select
# from sqlalchemy import delete, select
# # Build a statement to empty the census table: stmt
# stmt = delete(census)
# # Execute the statement: results
# results = connection.execute(stmt)
# # Print affected rowcount
# print(results.rowcount)
# # Build a statement to select all records from the census table
# stmt = select([census])
# # Print the results of executing the statement to verify there are no rows
# print(connection.execute(stmt).fetchall())
"""
Explanation: Deleting objects from the database
Import delete and select from sqlalchemy.
Build a delete statement to remove all the data from the census table; save it as stmt.
Execute stmt via the connection and save the results.
Hit 'Submit Answer' to select all remaining rows from the census table and print the result to confirm that the table is now empty!
End of explanation
"""
# # Build a statement to count records using the sex column for Men (M) age 36: stmt
# stmt = select([func.count(census.columns.sex)]).where(
# and_(census.columns.sex == 'M',
# census.columns.age == 36)
# )
# # Execute the select statement and use the scalar() fetch method to save the record count
# to_delete = connection.execute(stmt).scalar()
# # Build a statement to delete records from the census table: stmt_del
# stmt_del = delete(census)
# # Append a where clause to target men aged 36
# stmt_del = stmt_del.where(
# and_(census.columns.sex == 'M',
# census.columns.age == 36)
# )
# # Execute the statement: results
# results = connection.execute(stmt_del)
# # Print affected rowcount and to_delete record count, make sure they match
# print(results.rowcount, to_delete)
"""
Explanation: Build a select statement to count the records for men ('M' in the sex column) aged 36, execute it with the scalar() fetch method, and save the count as to_delete.
Build a delete statement to remove data from the census table; save it as stmt_del.
Append a where clause to stmt_del that filters for rows which have 'M' in the sex column AND 36 in the age column, then execute stmt_del via the connection and save the output as results.
Hit Submit to print the rowcount of the results, as well as to_delete, which returns the number of rows that should be deleted. These should match and this is an important sanity check!
End of explanation
"""
# # Drop the state_fact table
# state_fact.drop(engine)
# # Check to see if state_fact exists
# print(state_fact.exists(engine))
# # Drop all tables
# metadata.drop_all(engine)
# # Check to see if census exists
# print(census.exists(engine))
# print(engine.table_names())
"""
Explanation: Drop the state_fact table by applying the method drop() to it and passing it the argument engine (in fact, engine will be the sole argument for every function/method in this exercise!)
Check to see if state_fact exists via print.
Drop all the tables via the metadata.
Check to see if census exists via print.
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.23/_downloads/00ac060e49528fd74fda09b97366af98/3d_to_2d.ipynb | bsd-3-clause | # Authors: Christopher Holdgraf <choldgraf@berkeley.edu>
#
# License: BSD (3-clause)
from scipy.io import loadmat
import numpy as np
from matplotlib import pyplot as plt
from os import path as op
import mne
from mne.viz import ClickableImage # noqa
from mne.viz import (plot_alignment, snapshot_brain_montage,
set_3d_view)
print(__doc__)
subjects_dir = mne.datasets.sample.data_path() + '/subjects'
path_data = mne.datasets.misc.data_path() + '/ecog/sample_ecog.mat'
# We've already clicked and exported
layout_path = op.join(op.dirname(mne.__file__), 'data', 'image')
layout_name = 'custom_layout.lout'
"""
Explanation: How to convert 3D electrode positions to a 2D image.
Sometimes we want to convert a 3D representation of electrodes into a 2D
image. For example, if we are using electrocorticography it is common to
create scatterplots on top of a brain, with each point representing an
electrode.
In this example, we'll show two ways of doing this in MNE-Python. First,
if we have the 3D locations of each electrode then we can use Mayavi to
take a snapshot of a view of the brain. If we do not have these 3D locations,
and only have a 2D image of the electrodes on the brain, we can use the
:class:mne.viz.ClickableImage class to choose our own electrode positions
on the image.
End of explanation
"""
mat = loadmat(path_data)
ch_names = mat['ch_names'].tolist()
elec = mat['elec'] # electrode coordinates in meters
# Now we make a montage stating that the ECoG contacts are in the head
# coordinate system (although they are actually in MRI coordinates). This is
# compensated for by the fact that below we do not specify a trans file, so
# the Head<->MRI transform is the identity.
montage = mne.channels.make_dig_montage(ch_pos=dict(zip(ch_names, elec)),
coord_frame='head')
info = mne.create_info(ch_names, 1000., 'ecog').set_montage(montage)
print('Created %s channel positions' % len(ch_names))
"""
Explanation: Load data
First we will load a sample ECoG dataset which we'll use for generating
a 2D snapshot.
End of explanation
"""
fig = plot_alignment(info, subject='sample', subjects_dir=subjects_dir,
surfaces=['pial'], meg=False)
set_3d_view(figure=fig, azimuth=200, elevation=70)
xy, im = snapshot_brain_montage(fig, montage)
# Convert from a dictionary to array to plot
xy_pts = np.vstack([xy[ch] for ch in info['ch_names']])
# Define an arbitrary "activity" pattern for viz
activity = np.linspace(100, 200, xy_pts.shape[0])
# This allows us to use matplotlib to create arbitrary 2d scatterplots
fig2, ax = plt.subplots(figsize=(10, 10))
ax.imshow(im)
ax.scatter(*xy_pts.T, c=activity, s=200, cmap='coolwarm')
ax.set_axis_off()
# fig2.savefig('./brain.png', bbox_inches='tight') # For ClickableImage
"""
Explanation: Project 3D electrodes to a 2D snapshot
Because we have the 3D location of each electrode, we can use the
:func:mne.viz.snapshot_brain_montage function to return a 2D image along
with the electrode positions on that image. We use this in conjunction with
:func:mne.viz.plot_alignment, which visualizes electrode positions.
End of explanation
"""
# This code opens the image so you can click on it. Commented out
# because we've stored the clicks as a layout file already.
# # The click coordinates are stored as a list of tuples
# im = plt.imread('./brain.png')
# click = ClickableImage(im)
# click.plot_clicks()
# # Generate a layout from our clicks and normalize by the image
# print('Generating and saving layout...')
# lt = click.to_layout()
# lt.save(op.join(layout_path, layout_name)) # To save if we want
# # We've already got the layout, load it
lt = mne.channels.read_layout(layout_name, path=layout_path, scale=False)
x = lt.pos[:, 0] * float(im.shape[1])
y = (1 - lt.pos[:, 1]) * float(im.shape[0]) # Flip the y-position
fig, ax = plt.subplots()
ax.imshow(im)
ax.scatter(x, y, s=120, color='r')
plt.autoscale(tight=True)
ax.set_axis_off()
plt.show()
"""
Explanation: Manually creating 2D electrode positions
If we don't have the 3D electrode positions then we can still create a
2D representation of the electrodes. Assuming that you can see the electrodes
on the 2D image, we can use :class:mne.viz.ClickableImage to open the image
interactively. You can click points on the image and the x/y coordinate will
be stored.
We'll open an image file, then use ClickableImage to
return 2D locations of mouse clicks (or load a file already created).
Then, we'll return these xy positions as a layout for use with plotting topo
maps.
End of explanation
"""
|