Double-gyre example
Define a double-gyre fieldset that varies in time.
def doublegyre_fieldset(times, xdim=51, ydim=51):
    """Implemented following Froyland and Padberg (2009), 10.1016/j.physd.2009.03.002"""
    A = 0.25
    delta = 0.25
    omega = 2 * np.pi
    a, b = 2, 1  # domain size
    lon = np.linspace(0, a, xdim, dtype=np.float32)
    lat = np.linspace(0, b, ydim, dtype=np.fl...
parcels/examples/tutorial_analyticaladvection.ipynb
OceanPARCELS/parcels
mit
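For reference, the velocity field behind this fieldset (the stream-function form of the double gyre studied by Froyland and Padberg (2009)) can be sketched in plain NumPy. The helper names below are illustrative, not Parcels API:

```python
import numpy as np

A, delta, omega = 0.25, 0.25, 2 * np.pi

def f(x, t):
    # time-dependent deformation of the gyre boundary
    s = delta * np.sin(omega * t)
    return s * x**2 + (1 - 2 * s) * x

def velocity(x, y, t):
    """Velocity (u, v) of the double gyre on the domain [0, 2] x [0, 1],
    derived from the stream function psi = A sin(pi f(x, t)) sin(pi y)."""
    s = delta * np.sin(omega * t)
    df_dx = 2 * s * x + (1 - 2 * s)
    u = -np.pi * A * np.sin(np.pi * f(x, t)) * np.cos(np.pi * y)
    v = np.pi * A * np.cos(np.pi * f(x, t)) * np.sin(np.pi * y) * df_dx
    return u, v

x, y = np.meshgrid(np.linspace(0, 2, 51), np.linspace(0, 1, 51))
u, v = velocity(x, y, t=0.0)
```

Note that v vanishes on the top and bottom walls (y = 0 and y = 1), so particles cannot leave the domain meridionally.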
Now simulate a set of particles on this fieldset, using the AdvectionAnalytical kernel
X, Y = np.meshgrid(np.arange(0.15, 1.85, 0.1), np.arange(0.15, 0.85, 0.1))
psetAA = ParticleSet(fieldsetDG, pclass=ScipyParticle, lon=X, lat=Y)
output = psetAA.ParticleFile(name='doublegyreAA.nc', outputdt=0.1)
psetAA.execute(AdvectionAnalytical,
               dt=np.inf,  # needs to be set to np.inf for Analytical Ad...
And then show the particle trajectories in an animation
output.close()
plotTrajectoriesFile('doublegyreAA.nc', mode='movie2d_notebook')
Now, we can also compute these trajectories with the AdvectionRK4 kernel
psetRK4 = ParticleSet(fieldsetDG, pclass=JITParticle, lon=X, lat=Y)
psetRK4.execute(AdvectionRK4, dt=0.01, runtime=3)
And we can then compare the final locations of the particles from the AdvectionRK4 and AdvectionAnalytical simulations
plt.plot(psetRK4.lon, psetRK4.lat, 'r.', label='RK4')
plt.plot(psetAA.lon, psetAA.lat, 'b.', label='Analytical')
plt.legend()
plt.show()
The final locations are similar, but not exactly the same. Since everything else is identical, the difference must be due to the different kernels; which one is more accurate, however, can't be determined from this analysis alone.
Bickley Jet example
As a second example, let's do a similar analysis for a Bickley Jet,...
def bickleyjet_fieldset(times, xdim=51, ydim=51):
    """Bickley Jet Field as implemented in Hadjighasem et al 2017, 10.1063/1.4982720"""
    U0 = 0.06266
    L = 1770.
    r0 = 6371.
    k1 = 2 * 1 / r0
    k2 = 2 * 2 / r0
    k3 = 2 * 3 / r0
    eps1 = 0.075
    eps2 = 0.4
    eps3 = 0.3
    c3 = 0.461 * U0
    c2 = ...
Add a halo for periodic boundary conditions in the zonal direction
fieldsetBJ.add_constant('halo_west', fieldsetBJ.U.grid.lon[0])
fieldsetBJ.add_constant('halo_east', fieldsetBJ.U.grid.lon[-1])
fieldsetBJ.add_periodic_halo(zonal=True)

def ZonalBC(particle, fieldset, time):
    if particle.lon < fieldset.halo_west:
        particle.lon += fieldset.halo_east - fieldset.halo_west
    el...
And simulate a set of particles on this fieldset, using the AdvectionAnalytical kernel
X, Y = np.meshgrid(np.arange(0, 19900, 100), np.arange(-100, 100, 100))
psetAA = ParticleSet(fieldsetBJ, pclass=ScipyParticle, lon=X, lat=Y, time=0)
output = psetAA.ParticleFile(name='bickleyjetAA.nc', outputdt=delta(hours=1))
psetAA.execute(AdvectionAnalytical + psetAA.Kernel(ZonalBC),
               dt=np.inf, ...
And then show the particle trajectories in an animation
output.close()
plotTrajectoriesFile('bickleyjetAA.nc', mode='movie2d_notebook')
As with the double gyre above, we can also compute these trajectories with the AdvectionRK4 kernel
psetRK4 = ParticleSet(fieldsetBJ, pclass=JITParticle, lon=X, lat=Y)
psetRK4.execute(AdvectionRK4 + psetRK4.Kernel(ZonalBC),
               dt=delta(minutes=5), runtime=delta(days=1))
And finally, we can again compare the end locations from the AdvectionRK4 and AdvectionAnalytical simulations
plt.plot(psetRK4.lon, psetRK4.lat, 'r.', label='RK4')
plt.plot(psetAA.lon, psetAA.lat, 'b.', label='Analytical')
plt.legend()
plt.show()
Import required packages
import copy

from kubeflow.katib import KatibClient
from kubernetes.client import V1ObjectMeta
from kubeflow.katib import V1beta1Experiment
from kubeflow.katib import V1beta1AlgorithmSpec
from kubeflow.katib import V1beta1ObjectiveSpec
from kubeflow.katib import V1beta1FeasibleSpace
from kubeflow.katib import V1beta1Ex...
examples/v1beta1/sdk/cmaes-and-resume-policies.ipynb
kubeflow/katib
apache-2.0
Define your Experiment
You have to create your Experiment object before deploying it. This Experiment is similar to this example.
# Experiment name and namespace.
namespace = "kubeflow-user-example-com"
experiment_name = "cmaes-example"

metadata = V1ObjectMeta(
    name=experiment_name,
    namespace=namespace
)

# Algorithm specification.
algorithm_spec = V1beta1AlgorithmSpec(
    algorithm_name="cmaes"
)

# Objective specification.
objective_spe...
Define Experiments with resume policies
We will define another 2 Experiments, with ResumePolicy = Never and ResumePolicy = FromVolume. An Experiment with the Never resume policy can't be resumed; its Suggestion resources will be deleted. An Experiment with the FromVolume resume policy can be resumed; a volume is attached to the Suggestio...
experiment_never_resume_name = "never-resume-cmaes"
experiment_from_volume_resume_name = "from-volume-resume-cmaes"

# Create new Experiments from the previous Experiment info.
# Define Experiment with never resume.
experiment_never_resume = copy.deepcopy(experiment)
experiment_never_resume.metadata.name = experiment_n...
You can print the Experiment's info to verify it before submission.
print(experiment.metadata.name)
print(experiment.spec.algorithm.algorithm_name)
print("-----------------")
print(experiment_never_resume.metadata.name)
print(experiment_never_resume.spec.resume_policy)
print("-----------------")
print(experiment_from_volume_resume.metadata.name)
print(experiment_from_volume_resume.spec...
Create your Experiment
You have to create a Katib client to use the SDK.
# Create client.
kclient = KatibClient()

# Create your Experiment.
kclient.create_experiment(experiment, namespace=namespace)
Create other Experiments.
# Create Experiment with never resume.
kclient.create_experiment(experiment_never_resume, namespace=namespace)

# Create Experiment with from volume resume.
kclient.create_experiment(experiment_from_volume_resume, namespace=namespace)
Get your Experiment
You can get your Experiment by name and read the data you need.
exp = kclient.get_experiment(name=experiment_name, namespace=namespace)
print(exp)
print("-----------------\n")

# Get the max trial count and latest status.
print(exp["spec"]["maxTrialCount"])
print(exp["status"]["conditions"][-1])
Get all Experiments
You can get a list of the current Experiments.
# Get names from the running Experiments.
exp_list = kclient.get_experiment(namespace=namespace)
for exp in exp_list["items"]:
    print(exp["metadata"]["name"])
List the current Trials
You can get a list of the current Trials with their latest status.
# Trial list.
kclient.list_trials(name=experiment_name, namespace=namespace)
Get the optimal HyperParameters
You can get the current optimal Trial from your Experiment. For each metric you can see the max, min and latest value.
# Optimal HPs.
kclient.get_optimal_hyperparameters(name=experiment_name, namespace=namespace)
Status of the Suggestion objects
You can check the Suggestion objects' status for more information about the resume status. For the Experiment with FromVolume you should be able to see the created PVC.
# Get the current Suggestion status for the never resume Experiment.
suggestion = kclient.get_suggestion(name=experiment_never_resume_name, namespace=namespace)
print(suggestion["status"]["conditions"][-1]["message"])
print("-----------------")

# Get the current Suggestion status for the from volume Experiment.
sugge...
Delete your Experiments
You can delete your Experiments.
kclient.delete_experiment(name=experiment_name, namespace=namespace)
kclient.delete_experiment(name=experiment_never_resume_name, namespace=namespace)
kclient.delete_experiment(name=experiment_from_volume_resume_name, namespace=namespace)
Representing Data and Engineering Features
Categorical Variables
One-Hot-Encoding (Dummy variables)
import pandas as pd

# The file has no headers naming the columns, so we pass header=None
# and provide the column names explicitly in "names"
data = pd.read_csv("data/adult.data", header=None, index_col=False,
                   names=['age', 'workclass', 'fnlwgt', 'education', 'education-num', ...
02.2 Feature preprocessing, feature selection, interactions.ipynb
amueller/advanced_training
bsd-2-clause
Checking string-encoded categorical data
data.gender.value_counts()

print("Original features:\n", list(data.columns), "\n")
data_dummies = pd.get_dummies(data)
print("Features after get_dummies:\n", list(data_dummies.columns))
data_dummies.head()

# Get only the columns containing features, that is all columns from 'age' to 'occupation_ Transport-moving'
# ...
Numbers can encode categoricals
# create a dataframe with an integer feature and a categorical string feature
demo_df = pd.DataFrame({'Integer Feature': [0, 1, 2, 1],
                        'Categorical Feature': ['socks', 'fox', 'socks', 'box']})
demo_df

pd.get_dummies(demo_df)

demo_df['Integer Feature'] = demo_df['Integer Feature'].astype(str)
pd.get_dummies(demo_df)
Binning, Discretization, Linear Models and Trees
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

X, y = mglearn.datasets.make_wave(n_samples=100)
plt.plot(X[:, 0], y, 'o')
line = np.linspace(-3, 3, 1000)[:-1].reshape(-1, 1)

reg = LinearRegression().fit(X, y)
plt.plot(line, reg.predict(line), label="linear regression"...
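As a self-contained sketch of the binning idea (with illustrative data, since the notebook itself uses mglearn's wave dataset): bin a continuous feature with np.digitize, one-hot encode the bin membership, and fit a constant per bin by least squares:

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.uniform(-3, 3, size=(100, 1))       # one continuous feature
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=100)

bins = np.linspace(-3, 3, 11)               # 10 equal-width bins on [-3, 3]
which_bin = np.digitize(X[:, 0], bins=bins)  # bin index per sample, in 1..10

# one-hot indicators: column k is 1 for samples falling in bin k+1
X_binned = (which_bin[:, None] == np.arange(1, 11)[None, :]).astype(float)

# least squares on disjoint indicators fits one constant per bin
coef, *_ = np.linalg.lstsq(X_binned, y, rcond=None)
pred = X_binned @ coef
```

Because the indicator columns are disjoint, the fitted coefficient for each bin is exactly the mean of y within that bin, which is why a linear model on binned features becomes piecewise constant.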
Interactions and Polynomials
X_combined = np.hstack([X, X_binned])
print(X_combined.shape)

plt.plot(X[:, 0], y, 'o')
reg = LinearRegression().fit(X_combined, y)
line_combined = np.hstack([line, line_binned])
plt.plot(line, reg.predict(line_combined), label='linear regression combined')

for bin in bins:
    plt.plot([bin, bin], [-3, 3], ':', c=...
Univariate Non-linear transformations
rnd = np.random.RandomState(0)
X_org = rnd.normal(size=(1000, 3))
w = rnd.normal(size=3)

X = np.random.poisson(10 * np.exp(X_org))
y = np.dot(X_org, w)

np.bincount(X[:, 0])
bins = np.bincount(X[:, 0])
plt.bar(range(len(bins)), bins, color='w')
plt.ylabel("number of appearances")
plt.xlabel("value")

from sklearn.lin...
Automatic Feature Selection
Univariate statistics
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectPercentile
from sklearn.model_selection import train_test_split

cancer = load_breast_cancer()

# get deterministic random numbers
rng = np.random.RandomState(42)
noise = rng.normal(size=(len(cancer.data), 50))
# add noise featu...
Model-based Feature Selection
from sklearn.feature_selection import SelectFromModel
from sklearn.ensemble import RandomForestClassifier

select = SelectFromModel(RandomForestClassifier(n_estimators=100, random_state=42),
                         threshold="median")
select.fit(X_train, y_train)
X_train_l1 = select.transform(X_train)
print(X_train.shape)
print(X_train_l1.sha...
Recursive Feature Elimination
from sklearn.feature_selection import RFE

select = RFE(RandomForestClassifier(n_estimators=100, random_state=42),
             n_features_to_select=40)
# select = RFE(LogisticRegression(penalty="l1"), n_features_to_select=40)
select.fit(X_train, y_train)

# visualize the selected features:
mask = select.get_support()
plt.matshow(mas...
Sequential Feature Selection
from mlxtend.feature_selection import SequentialFeatureSelector

sfs = SequentialFeatureSelector(LogisticRegression(), k_features=40,
                                forward=True, scoring='accuracy', cv=5)
sfs = sfs.fit(X_train, y_train)

mask = np.zeros(80, dtype='bool')
mask[np.array(sfs.k_feature_idx_)] = True
plt....
Exercises
Choose either the Boston housing dataset or the adult dataset from above. Compare a linear model with interaction features against one without interaction features. Use feature selection to determine which interaction features were most important.
data = pd.read_csv("data/adult.data", header=None, index_col=False,
                   names=['age', 'workclass', 'fnlwgt', 'education', 'education-num',
                          'marital-status', 'occupation', 'relationship', 'race', 'gender',
                          'capital-gain', 'capital-loss', 'hours-per-week...
Now let's look at a series of models based on decision trees. Decision trees are very intuitive models. They encode a series of "IF ... THEN" decisions, much like how people make decisions. However, which question to ask and how to proceed with each answer is what they learn...
from figures import make_dataset

x, y = make_dataset()
X = x.reshape(-1, 1)

plt.figure()
plt.xlabel('Característica X')
plt.ylabel('Objetivo y')
plt.scatter(X, y);

from sklearn.tree import DecisionTreeRegressor

reg = DecisionTreeRegressor(max_depth=5)
reg.fit(X, y)

X_fit = np.linspace(-3, 3, 1000).reshape((-1, 1))
...
notebooks-spanish/18-arboles_y_bosques.ipynb
pagutierrez/tutorial-sklearn
cc0-1.0
A single decision tree lets us estimate the signal in a non-parametric way, but it clearly has some problems. In some regions the model shows high bias and underfits the data (look at the flat regions, where we do not predict the data well), while in other regions the model shows...
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from figures import plot_2d_separator

X, y = make_blobs(centers=[[0, 0], [1, 1]], random_state=61526, n_samples=100)
X_train, X_test, y_train, y_test = train_test_split(X, y, ra...
Several parameters control the complexity of a tree, but one that is quite easy to understand is the maximum depth. This limits how finely the tree can partition the space, or, equivalently, how many "if-then" rules it can ask before deciding the class of a sample....
# %matplotlib inline
from figures import plot_tree_interactive
plot_tree_interactive()
Decision trees are fast to train, easy to understand, and usually lead to interpretable models. However, a single decision tree often tends to overfit. Playing with the previous plot, you can see how the model starts to overfit even before it achieves a good separation ...
from figures import plot_forest_interactive
plot_forest_interactive()
Choosing the optimal estimator using cross-validation
from sklearn.model_selection import GridSearchCV
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier

digits = load_digits()
X, y = digits.data, digits.target
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

rf = RandomForestClassifier(n_estimators=20...
Gradient Boosting
Another useful ensemble method is boosting. Instead of using, say, 200 estimators in parallel, we build the 200 estimators one by one, so that each one refines the results of the previous ones. The idea is that by combining many very simple models, you end up with a final model...
from sklearn.ensemble import GradientBoostingRegressor

clf = GradientBoostingRegressor(n_estimators=100, max_depth=5, learning_rate=.2)
clf.fit(X_train, y_train)

print(clf.score(X_train, y_train))
print(clf.score(X_test, y_test))
<div class="alert alert-success">
<b>Exercise: cross-validation for Gradient Boosting</b>:
<ul>
<li>
Use a *grid* search to optimize the `learning_rate` and `max_depth` parameters of a *Gradient Boosted Decision Tree* on the handwritten digits dataset.
</li>
</ul>
<...
from sklearn.datasets import load_digits
from sklearn.ensemble import GradientBoostingClassifier

digits = load_digits()
X_digits, y_digits = digits.data, digits.target

# split the dataset and apply grid search
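One possible sketch of a solution (the grid values and estimator settings below are arbitrary choices to keep the runtime small, not the notebook's):

```python
from sklearn.datasets import load_digits
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

digits = load_digits()
# a subset keeps the grid search fast; all 10 digit classes still appear
X_sub, y_sub = digits.data[:400], digits.target[:400]

param_grid = {'learning_rate': [0.1, 0.5],
              'max_depth': [2, 4]}
grid = GridSearchCV(GradientBoostingClassifier(n_estimators=20, random_state=0),
                    param_grid, cv=2)
grid.fit(X_sub, y_sub)
print(grid.best_params_, grid.best_score_)
```

In practice one would split off a test set first and use a finer grid with more estimators; the point here is only the GridSearchCV mechanics.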
Feature importance
The RandomForest and GradientBoosting classes have a feature_importances_ attribute once they have been trained. This attribute is very important and interesting. Basically, it quantifies the contribution of each feature to the performance of the tree.
X, y = X_digits[y_digits < 2], y_digits[y_digits < 2]

rf = RandomForestClassifier(n_estimators=300, n_jobs=1)
rf.fit(X, y)
print(rf.feature_importances_)  # one value per feature
plt.figure()
plt.imshow(rf.feature_importances_.reshape(8, 8), cmap=plt.cm.viridis, interpolation='nearest')
Nonlinear regression with fixed basis functions
Given a set of basis functions $\phi_h(x)$, we represent our function class as $y_{pred}(x;\mathbf{w}) = \sum_h w_h \phi_h(x)$, and want to learn a vector $\mathbf{w}_{opt}$ of weights such that $y_{pred}(x;\mathbf{w}_{opt}) \approx y_{true}(x)$
n_basis_fxns = 15
basis_points = np.linspace(xmin, xmax, n_basis_fxns)
basis_fxns = np.empty(n_basis_fxns, dtype=object)

class RBF():
    def __init__(self, center, r=1.0):
        self.c = center
        self.r = r

    def __call__(self, x):
        return np.exp(-(np.sum((x - self.c)**2) / (2 * self.r**2)))

class StepFxn():
    def...
notebooks/Gaussian Process tinker.ipynb
maxentile/msm-learn
mit
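The cell above is cut off before the fit itself; a minimal self-contained version of the fixed-basis least-squares fit (with a stand-in target function and illustrative names, vectorizing the RBF features instead of using the per-function classes) might look like:

```python
import numpy as np

xmin, xmax, n_basis_fxns = -3.0, 3.0, 15
centers = np.linspace(xmin, xmax, n_basis_fxns)

def rbf_features(x, centers, r=1.0):
    # Phi[i, h] = phi_h(x_i) = exp(-(x_i - c_h)^2 / (2 r^2))
    return np.exp(-(x[:, None] - centers[None, :])**2 / (2 * r**2))

# noisy samples of a stand-in y_true(x) = sin(x)
rng = np.random.RandomState(0)
x = rng.uniform(xmin, xmax, 100)
y = np.sin(x) + rng.normal(scale=0.05, size=x.shape)

# w_opt minimizes ||Phi w - y||^2
Phi = rbf_features(x, centers)
w_opt, *_ = np.linalg.lstsq(Phi, y, rcond=None)

x_test = np.linspace(xmin, xmax, 200)
y_pred = rbf_features(x_test, centers) @ w_opt
```

With the basis fixed, learning the weights is an ordinary linear least-squares problem even though the resulting fit is nonlinear in x.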
Adaptive basis functions
$$y(\mathbf{x}; \mathbf{w}) = \sum_h w_h^{(2)} \text{tanh}\left( \sum_i w_{hi}^{(1)}x_i + w_{h0}^{(1)} \right) + w_0^{(2)}$$
import gptools

# reproducing Figure 1 from: http://mlg.eng.cam.ac.uk/pub/pdf/Ras04.pdf
import numpy as np
import numpy.random as npr
from numpy.linalg import cholesky
from numpy.matlib import repmat

xs = np.linspace(-5, 5, 1000)
ns = len(xs)
keps = 1e-9

m = lambda x: 0.25 * x**2

def K_mat(xs_1, xs_2):
    diff_mat = repmat...
Sparse Tensor Representation & Conversion
Q1. Convert tensor x into a SparseTensor.
x = tf.constant([[1, 0, 0, 0],
                 [0, 0, 2, 0],
                 [0, 0, 0, 0]], dtype=tf.int32)
sp = ...
print(sp.eval())
programming/Python/tensorflow/exercises/Sparse_Tensors.ipynb
diegocavalca/Studies
cc0-1.0
Q2. Investigate the dtype, indices, dense_shape and values of the SparseTensor sp in Q1.
print("dtype:", ...)
print("indices:", ...)
print("dense_shape:", ...)
print("values:", ...)
Q3. Let's write a custom function that converts a Tensor to a SparseTensor. Complete it.
def dense_to_sparse(tensor):
    indices = tf.where(tf.not_equal(tensor, 0))
    return tf.SparseTensor(indices=indices,
                           values=...,  # for zero-based index
                           dense_shape=tf.to_int64(tf.shape(tensor)))

# Test
print(dense_to_sparse(x).eval())
Q4. Convert the SparseTensor sp to a Tensor using tf.sparse_to_dense.
output = ...
print(output.eval())
print("Check if this is identical with x:\n", x.eval())
Q5. Convert the SparseTensor sp to a Tensor using tf.sparse_tensor_to_dense.
output = ...
print(output.eval())
print("Check if this is identical with x:\n", x.eval())
Streamfunction and velocity potential from zonal and meridional wind components
windspharm is a Python library developed by Andrew Dawson which provides a pythonic interface to the pyspharm module, which is basically a binding to the SPHEREPACK Fortran library.
Installation
1) Download and unpack pyspharm
2) Downlo...
from windspharm.standard import VectorWind
from windspharm.tools import prep_data, recover_data, order_latdim
notebooks/spharm.ipynb
nicolasfauchereau/metocean
unlicense
usual imports
import os, sys
import pandas as pd
import numpy as np
from numpy import ma
from matplotlib import pyplot as plt
from mpl_toolkits.basemap import Basemap as bm

dpath = os.path.join(os.environ.get('HOME'), 'data/NCEP1')
define a function to plot a 2D field map
def plot_field(X, lat, lon, vmin, vmax, step, cmap=plt.get_cmap('jet'),
               ax=False, title=False, grid=False):
    if not ax:
        f, ax = plt.subplots(figsize=(10, (X.shape[0] / float(X.shape[1])) * 10))
    m.ax = ax
    im = m.contourf(lons, lats, X, np.arange(vmin, vmax + step, step),
                    latlon=True, cmap=cmap, extend=...
load the wind data using xray
import xray; print(xray.__version__)

dset_u = xray.open_dataset(os.path.join(dpath, 'uwnd.2014.nc'))
dset_v = xray.open_dataset(os.path.join(dpath, 'vwnd.2014.nc'))

dset_u = dset_u.sel(level=200)
dset_v = dset_v.sel(level=200)

dset_u = dset_u.mean('time')
dset_v = dset_v.mean('time')

lats = dset_u['lat'].values
lo...
Sparse Tensor Representation & Conversion
Q1. Convert tensor x into a SparseTensor.
x = tf.constant([[1, 0, 0, 0],
                 [0, 0, 2, 0],
                 [0, 0, 0, 0]], dtype=tf.int32)
sp = tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4])
print(sp.eval())
programming/Python/tensorflow/exercises/Sparse_Tensors-Solutions.ipynb
diegocavalca/Studies
cc0-1.0
Q2. Investigate the dtype, indices, dense_shape and values of the SparseTensor sp in Q1.
print("dtype:", sp.dtype)
print("indices:", sp.indices.eval())
print("dense_shape:", sp.dense_shape.eval())
print("values:", sp.values.eval())
Q3. Let's write a custom function that converts a Tensor to a SparseTensor. Complete it.
def dense_to_sparse(tensor):
    indices = tf.where(tf.not_equal(tensor, 0))
    return tf.SparseTensor(indices=indices,
                           values=tf.gather_nd(tensor, indices) - 1,  # for zero-based index
                           dense_shape=tf.to_int64(tf.shape(tensor)))

# Test
print(dense_to_sparse(x).eva...
Q4. Convert the SparseTensor sp to a Tensor using tf.sparse_to_dense.
output = tf.sparse_to_dense(sparse_indices=[[0, 0], [1, 2]],
                            sparse_values=[1, 2],
                            output_shape=[3, 4])
print(output.eval())
print("Check if this is identical with x:\n", x.eval())
Q5. Convert the SparseTensor sp to a Tensor using tf.sparse_tensor_to_dense.
output = tf.sparse_tensor_to_dense(sp)
print(output.eval())
print("Check if this is identical with x:\n", x.eval())
5 Link Analysis
5.1 PageRank
5.1.1 Early Search Engines and Term Spam
inverted index: a data structure that makes it easy to find all the places where a given term occurs.
term spam: techniques for fooling search engines.
To combat term spam, Google introduced two innovations: PageRank was used to simulate...
plt.imshow(plt.imread('./res/fig_5_1.png'))
plt.imshow(plt.imread('./res/eg_5_1.png'))
Mining_of_Massive_Datasets/Link_Analysis/note.ipynb
facaiy/book_notes
cc0-1.0
PageRank $v$ simulates random surfers: start at a random page out of all $n$, so $v_i^0 = \frac{1}{n}, \quad i = 1, 2, \dotsc, n$, and repeatedly choose the next page at random among those linked. $v^{k+1} = M v^{k}$ gives us the distribution of the surfer after $k+1$ steps.
# eg. 5.1
matrix_5_1 = np.array([
    [0, 1/3, 1/3, 1/3],
    [1/2, 0, 0, 1/2],
    [1, 0, 0, 0],
    [0, 1/2, 1/2, 0]
]).T
matrix_5_1

n = matrix_5_1.shape[1]
v = np.ones((n, 1)) / n
v

def dist_after_surfing(M, v=None, steps=1):
    if v is None:
        n...
Markov processes: it is known that the distribution of the surfer approaches a limiting distribution $v$ that satisfies $v = Mv$, provided two conditions are met:
The graph is strongly connected; namely, it is possible to get from any node to any other node.
There are no dead ends.
eigenvalue and eigenv...
# eg 5.2
v_ = dist_after_surfing(matrix_5_1, v, 10)
v_
v_ = dist_after_surfing(matrix_5_1, v, 50)
v_
v_ = dist_after_surfing(matrix_5_1, v, 75)
v_
5.1.3 Structure of the Web
Some structures in reality violate the assumptions needed for the Markov-process iteration to converge to a limit.
plt.figure(figsize=(10, 10))
plt.imshow(plt.imread('./res/fig_5_2.png'))
Two problems we need to avoid by modifying PageRank:
dead ends.
spider traps: groups of pages that all have outlinks but never link to any other pages.
5.1.4 Avoiding Dead Ends
Dead ends $\to$ $M$ is no longer stochastic, since some of the columns will sum to 0 rather than 1. If we compute $M^iv$...
# eg 5.3
plt.imshow(plt.imread('./res/fig_5_3.png'))

M = np.array([
    [0, 1/3, 1/3, 1/3],
    [1/2, 0, 0, 1/2],
    [0, 0, 0, 0],
    [0, 1/2, 1/2, 0]
]).T
M

dist_after_surfing(M, v, 50)
Two solutions
1. Drop the dead ends: recursively delete dead ends and solve the remaining graph $G'$; then restore $G$ from $G'$ recursively. The PageRank of a restored node $e$ is
$$ e = \sum \frac{v_p}{k_p} $$
where $e \in (G - G')$, $p \in G'$ is a predecessor of $e$, and $k_p$ is the number of successors of $p$ in $G$.
# eg 5.4
plt.imshow(plt.imread('./res/fig_5_4.png'))

M_G = np.array([
    [0, 1/3, 1/3, 1/3, 0],
    [1/2, 0, 0, 1/2, 0],
    [0, 0, 0, 0, 1],
    [0, 1/2, 1/2, 0, 0],
    [0, 0, 0, 0, 0]
]).T
M_G

from sklearn.preprocessing import normalize
index = [0, 1, 3]
M = M_G.take(index, axis=0).take(...
2. Modify the process of moving: "taxation".
5.1.5 Spider Traps and Taxation
spider traps: a set of nodes with no dead ends but no arcs out to the rest of the graph. They cause the PageRank calculation to place all the weight within the spider traps.
plt.imshow(plt.imread('./res/fig_5_6.png'))

M = np.array([
    [0, 1/3, 1/3, 1/3],
    [1/2, 0, 0, 1/2],
    [0, 0, 1, 0],
    [0, 1/2, 1/2, 0]
]).T
M

np.round(dist_after_surfing(M, steps=50), 3)
Solution: allow each random surfer a small probability of teleporting to a random page. $$v = \beta M v + (1 - \beta) \frac{e}{n}$$ where $n$ is the number of nodes in $G$, and $e$ is a vector of all 1's.
def dist_using_taxation(M, v=None, beta=1, steps=1):
    n = M.shape[1]
    if v is None:
        v = np.ones((n, 1)) / n
    e = np.ones(v.shape)
    for __ in xrange(steps):
        v = beta * M.dot(v) + (1 - beta) * e / n
    return v

dist_using_taxation(M, beta=0.8, steps=30)
Although C gets more than half of the PageRank for itself, the effect has been limited.
Note that a random surfer has three ways to move:
follow a link.
teleport to a random page. $\gets$ taxation
go nowhere. $\gets$ dead ends
Since there will always be some fraction of a surfer operating on the We...
matrix_5_1

import string
df_M = pd.DataFrame(matrix_5_1,
                    index=list(string.uppercase[0:4]),
                    columns=list(string.uppercase[0:4]))
df_M

def compact_representation_of_sparse_matrix(df):
    """It is introduced in Example 5.7"""
    degree = df.apply(np.count_nonzero, axis=0)
    dest = df.apply(np.nonzero, a...
5.2.2 PageRank Iteration Using MapReduce
estimate: $$v' = \beta M v + (1 - \beta) \frac{e}{n}$$
If $v$ is much too large to fit in main memory, we can use the method of striping: break $M$ into stripes and break $v$ into corresponding horizontal stripes.
5.2.3 Use of Combiners to Consolidate the Result Vector
There ...
plt.imshow(plt.imread('./res/fig_5_12.png'))
Each task gets $M_{ij}$ and $v_j$. Thus, $v$ is transmitted over the network $k$ times, but $M$ is sent only once.
The advantage of this approach is: we can keep both $v_j$ and $v'_i$ in main memory as we process $M_{ij}$.
5.2.4 Representing Blocks of the Transition Matrix
For each column of $M_{ij}$, we need l...
plt.figure(figsize=(10, 10))
plt.imshow(plt.imread('./res/fig_5_14.png'))
# todo
5.2.5 Other Efficient Approaches to PageRank Iteration
We can assign all the blocks of one row of blocks to a single Map task, namely, use blocks $M_{i1}$ through $M_{ik}$ and all of $v$ to compute $v'_i$.
5.2.6 Exercises
ex 5.2.1: the fraction of 1's should be less than $\frac{\log_2 n}{n}$.
ex 5.2.2: omitted.
ex 5.2.3: omitted.
ex 5...
plt.imshow(plt.imread('./res/fig_5_15.png'))

beta = 0.8
M = matrix_5_1
S = ['B', 'D']
e_s = pd.Series(np.zeros(4), index=list(string.uppercase[0:4]))
for s in S:
    e_s[s] = 1
e_s
M

print('v = \n{} v \n+ \n{}'.format(beta * M, (1 - beta) * e_s / np.sum(e_s)))
5.3.3 Using Topic-Sensitive PageRank
Integrate topic-sensitive PageRank into a search engine:
Select topics.
Pick a teleport set for each topic, and compute the topic-sensitive PageRank vector.
Find a way of determining the topic of a particular search query.
Use the corresponding topic-sensitive PageRank to resp...
plt.figure(figsize=(10, 10))
plt.imshow(plt.imread('./res/fig_5_16.png'))
Mining_of_Massive_Datasets/Link_Analysis/note.ipynb
facaiy/book_notes
cc0-1.0
Links from Accessible Pages: comments containing spam links in blog or news sites. 5.4.2 Analysis of a Spam Farm Given: $n$ pages in total, $m$ support pages, $y$ is the PageRank of target page $t$. Then: the PageRank of each support page is $$\frac{\beta y}{m} + \frac{1 - \beta}{n}$$ Suppose $x$ is the contribution of al...
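The closed form that follows from these two relations, $y = \frac{x}{1-\beta^2} + \frac{\beta}{1+\beta}\frac{m}{n}$, can be checked numerically against the fixed-point equation; the values of $x$, $m$ and $n$ below are made up:

```python
beta, x, m, n = 0.85, 0.002, 1000, 10**6  # made-up example values

# Closed form for the target page's PageRank y
y = x / (1 - beta**2) + (beta / (1 + beta)) * m / n

# Each support page has PageRank beta*y/m + (1-beta)/n; the target page
# receives x from outside plus beta times the rank of each of the m supporters.
support = beta * y / m + (1 - beta) / n
assert abs(y - (x + beta * m * support)) < 1e-12

print('amplification factor 1/(1 - beta^2) =', 1 / (1 - beta**2))
```

With $\beta = 0.85$ the spam farm amplifies the external contribution $x$ by roughly a factor of 3.6, which is what the coefficient printed in the cell below shows.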
beta = 0.85 x_coe = 1 / (1 - beta**2) c = beta / (1+beta) print('y = {} x + {} m/n'.format(x_coe, c))
Mining_of_Massive_Datasets/Link_Analysis/note.ipynb
facaiy/book_notes
cc0-1.0
Then we run the LdaModel on it.
model = SklLdaModel(num_topics=2, id2word=dictionary, iterations=20, random_state=1) model.fit(corpus) model.transform(corpus)
docs/notebooks/sklearn_wrapper.ipynb
ELind77/gensim
lgpl-2.1
Integration with Sklearn To provide a better example of how it can be used with Sklearn, let's use the CountVectorizer class from sklearn. For this example we will use the 20 Newsgroups data set. We will only use the categories rec.sport.baseball and sci.crypt and use them to generate topics.
import numpy as np from gensim import matutils from gensim.models.ldamodel import LdaModel from sklearn.datasets import fetch_20newsgroups from sklearn.feature_extraction.text import CountVectorizer from gensim.sklearn_integration.sklearn_wrapper_gensim_ldamodel import SklLdaModel rand = np.random.mtrand.RandomState(1...
docs/notebooks/sklearn_wrapper.ipynb
ELind77/gensim
lgpl-2.1
Next, we just need to fit X and id2word to our Lda wrapper.
obj = SklLdaModel(id2word=id2word, num_topics=5, iterations=20) lda = obj.fit(X)
docs/notebooks/sklearn_wrapper.ipynb
ELind77/gensim
lgpl-2.1
Example for Using Grid Search
from sklearn.model_selection import GridSearchCV from gensim.models.coherencemodel import CoherenceModel def scorer(estimator, X, y=None): goodcm = CoherenceModel(model=estimator.gensim_model, texts= texts, dictionary=estimator.gensim_model.id2word, coherence='c_v') return goodcm.get_coherence() obj = SklLdaM...
docs/notebooks/sklearn_wrapper.ipynb
ELind77/gensim
lgpl-2.1
Example of Using Pipeline
from sklearn.pipeline import Pipeline from sklearn import linear_model def print_features_pipe(clf, vocab, n=10): ''' Better printing for sorted list ''' coef = clf.named_steps['classifier'].coef_[0] print(coef) print('Positive features: %s' % (' '.join(['%s:%.2f' % (vocab[j], coef[j]) for j in np.argso...
docs/notebooks/sklearn_wrapper.ipynb
ELind77/gensim
lgpl-2.1
LSI Model To use LsiModel, begin by importing the LsiModel wrapper.
from gensim.sklearn_integration import SklLsiModel
docs/notebooks/sklearn_wrapper.ipynb
ELind77/gensim
lgpl-2.1
Example of Using Pipeline
model = SklLsiModel(num_topics=15, id2word=id2word) clf = linear_model.LogisticRegression(penalty='l2', C=0.1) # l2 penalty used pipe = Pipeline((('features', model,), ('classifier', clf))) pipe.fit(corpus, data.target) print_features_pipe(pipe, id2word.values()) print(pipe.score(corpus, data.target))
docs/notebooks/sklearn_wrapper.ipynb
ELind77/gensim
lgpl-2.1
Random Projections Model To use RpModel, begin by importing the RpModel wrapper.
from gensim.sklearn_integration import SklRpModel
docs/notebooks/sklearn_wrapper.ipynb
ELind77/gensim
lgpl-2.1
Example of Using Pipeline
model = SklRpModel(num_topics=2) np.random.mtrand.RandomState(1) # set seed for getting same result clf = linear_model.LogisticRegression(penalty='l2', C=0.1) # l2 penalty used pipe = Pipeline((('features', model,), ('classifier', clf))) pipe.fit(corpus, data.target) print_features_pipe(pipe, id2word.values()) print(...
docs/notebooks/sklearn_wrapper.ipynb
ELind77/gensim
lgpl-2.1
LDASeq Model To use LdaSeqModel, begin by importing the LdaSeqModel wrapper.
from gensim.sklearn_integration import SklLdaSeqModel
docs/notebooks/sklearn_wrapper.ipynb
ELind77/gensim
lgpl-2.1
Example of Using Pipeline
test_data = data.data[0:2] test_target = data.target[0:2] id2word = Dictionary(map(lambda x: x.split(), test_data)) corpus = [id2word.doc2bow(i.split()) for i in test_data] model = SklLdaSeqModel(id2word=id2word, num_topics=2, time_slice=[1, 1, 1], initialize='gensim') clf = linear_model.LogisticRegression(penalty='l2...
docs/notebooks/sklearn_wrapper.ipynb
ELind77/gensim
lgpl-2.1
Setup MPSLib Set up MPSlib and enable entropy computation, using for example
# Initialize MPSlib using the mps_snesim_tree algorithm, and a simulation grid of size [80,70,1] #O = mps.mpslib(method='mps_genesim', simulation_grid_size=[80,70,1], n_max_cpdf_count=30, verbose_level=-1) O = mps.mpslib(method='mps_snesim_tree', simulation_grid_size=[80,70,1], verbose_level=-1) O.delete_local_files() O...
scikit-mps/examples/ex_mpslib_entropy.ipynb
ergosimulation/mpslib
lgpl-3.0
Plot entropy
fig = plt.figure(figsize=(18, 6)) plt.subplot(1,2,1) plt.hist(O.SI) plt.plot(np.array([1, 1])*O.H,[-5,5],'k:') plt.xlabel('SelfInformation') plt.title('Entropy = %3.1f' % (O.H)) plt.subplot(1,2,2) plt.plot(O.SI,'.', label='SI') plt.plot(np.cumsum(O.SI)/(np.arange(1,1+len(O.SI))),'-',label='H') plt.legend() plt.grid(...
scikit-mps/examples/ex_mpslib_entropy.ipynb
ergosimulation/mpslib
lgpl-3.0
Entropy as a function of the number of conditioning data
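The entropy estimate used here is the average self-information $-\log_2 P$ of the simulated realizations; the idea can be sketched with a toy discrete distribution (a generic illustration, not MPSlib internals):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in: outcomes drawn from a known discrete distribution, so the
# self-information SI = -log2 p(outcome) of each sample is exact.
p = np.array([0.5, 0.25, 0.25])
outcomes = rng.choice(len(p), size=5000, p=p)
SI = -np.log2(p[outcomes])

H_est = SI.mean()                  # Monte Carlo entropy estimate, in bits
H_true = -(p * np.log2(p)).sum()   # exact entropy: 1.5 bits for this p
print(H_est, H_true)
```

The running mean of SI converges to the entropy H, which is exactly what the cumulative-sum curve in the plot above visualizes.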
TI, TI_filename = mps.trainingimages.strebelle(di=4, coarse3d=1) n_cond_arr = np.array([1,2,4,6,8,12,16,24,32,64]) H=np.zeros(n_cond_arr.size) # entropy t=np.zeros(n_cond_arr.size) # simulation time i=0 SI=[] for n_cond in n_cond_arr: O = mps.mpslib(method='mps_snesim_tree', simulation_grid_...
scikit-mps/examples/ex_mpslib_entropy.ipynb
ergosimulation/mpslib
lgpl-3.0
Now I will create an object that knows how to deal with Martian times and illuminations.
inca = kmaspice.MarsSpicer()
notebooks/blog_post.ipynb
michaelaye/planet4
isc
I saved some predefined places and their locations into the code, so that I don't need to remember the coordinates all the time. So let's justify the variable name by actually setting it on top of Inca City:
inca.goto('inca')
notebooks/blog_post.ipynb
michaelaye/planet4
isc
By default, when I don't provide a time, the time is set to the current time. In the UTC timezone, that is:
inca.time.isoformat()
notebooks/blog_post.ipynb
michaelaye/planet4
isc
To double-check how close we are to spring time in the southern hemisphere on Mars, I need to look at a value called L_s, the solar longitude. This value measures the time of the seasons on Mars as its angular position during its trip around the sun, with southern spring beginning at Ls = 180.
round(inca.l_s, 1)
notebooks/blog_post.ipynb
michaelaye/planet4
isc
So, we are pretty close to spring then. But do we already have sunlight in Inca? We should remember that we are in polar areas, where we have darkness for half a year, just like on Earth. Let's have a look at the local time in Inca:
inca.local_soltime
notebooks/blog_post.ipynb
michaelaye/planet4
isc
Right, that's still in the night, so that most likely means that the sun is below the horizon, right?
round(inca.illum_angles.dsolar,1)
notebooks/blog_post.ipynb
michaelaye/planet4
isc
Solar angles are measured from the local normal direction, with the sun directly overhead being defined as 0. Which means the horizon is at 90 degrees. Hence, this value of 96 means the sun is below the horizon. But it is local night, so we would expect that! Now comes the magic, let's just advance the time by a coupl...
inca.advance_time_by(7*3600) round(inca.illum_angles.dsolar)
notebooks/blog_post.ipynb
michaelaye/planet4
isc
Oh yes! This is just 2 degrees above the horizon, the sun is peeking over it just a tiny bit. But all you humans who work so much on helping us know what this means, right? Where there is sun, there is energy. And this energy can sublimate CO2 ice into gas and create the wonderful fans we are studying. Let's make this...
inca.advance_time_by(-7*3600)
notebooks/blog_post.ipynb
michaelaye/planet4
isc
Now, I will create a loop with 100 elements, and check and write down the time every 10 minutes (= 600 seconds). I save the results in 2 new arrays to make it easier to plot things over time.
times = [] angles = [] for i in range(100): inca.advance_time_by(600) times.append(inca.local_soltime[3]) angles.append(inca.illum_angles.dsolar)
notebooks/blog_post.ipynb
michaelaye/planet4
isc
I'm now importing the pandas library, an amazing toolbox for dealing with time-series data. In particular, the plots automatically get nicely formatted time-axes.
import pandas as pd data = pd.Series(angles, index=times)
notebooks/blog_post.ipynb
michaelaye/planet4
isc
I need to switch this notebook to show plots inline rather than in an extra window, which is my default:
%pylab inline data.plot()
notebooks/blog_post.ipynb
michaelaye/planet4
isc
Here we see how the sun's angle develops over time. As expected, we see a minimum (i.e. the sun highest above the horizon) right around noon. Do you hear the CO2 ice crackling?? ;) I find it amazing to know that in a couple of hours some of our beloved fans are being created! Next I wondered how long we already have the su...
times = [] angles = [] for i in range(2000): inca.advance_time_by(-600) times.append(inca.time) angles.append(inca.illum_angles.dsolar) pd.Series(angles,index=times).plot()
notebooks/blog_post.ipynb
michaelaye/planet4
isc
Part B Print out a famous quote! In the code below, fill out the string variables to contain the name of a famous person, and a quote that they said.
famous_person = "" their_quote = "" ### BEGIN SOLUTION ### END SOLUTION print("{}, at age {}, said:\n\n\"{}\"".format(famous_person, favorite_number, their_quote)) assert len(famous_person) > 0 assert len(their_quote) > 0
assignments/A1/A1_Q1.ipynb
eds-uga/csci1360e-su17
mit
Part C You're working late on a homework assignment and have copied a few lines from a Wikipedia article. In your tired stupor, your copy/paste skills leave something to be desired. Rather than try to force your mouse hand to stop shaking, you figure it's easier to write a small Python program to strip out errant white...
line1 = 'Python supports multiple programming paradigms, including object-oriented, imperative\n' line2 = ' and functional programming or procedural styles. It features a dynamic type\n' line3 = ' system and automatic memory management and has a large and comprehensive standard library.\n ' ### BEGIN SOLUTION ### EN...
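One possible approach (a sketch, not the official solution) uses `str.strip()`, which removes leading and trailing whitespace including newlines; the two lines below are shortened stand-ins for the assignment's strings:

```python
line1 = 'Python supports multiple programming paradigms, including object-oriented, imperative\n'
line2 = '  and functional programming or procedural styles.\n'

# strip() drops the surrounding whitespace of each fragment;
# joining with a single space rebuilds one clean sentence.
cleaned = ' '.join(line.strip() for line in (line1, line2))
print(cleaned)
```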
assignments/A1/A1_Q1.ipynb
eds-uga/csci1360e-su17
mit