Dataset columns: markdown (string, 0-37k chars), code (string, 1-33.3k chars), path (string, 8-215 chars), repo_name (string, 6-77 chars), license (string, 15 classes).
Model evaluation We can now rank the evaluation of all the models to choose the best one for our problem. While Decision Tree and Random Forest score the same, we choose Random Forest because it corrects for decision trees' habit of overfitting to their training set.
models = pd.DataFrame({ 'Model': ['Support Vector Machines', 'KNN', 'Logistic Regression', 'Random Forest', 'Naive Bayes', 'Perceptron', 'Stochastic Gradient Decent', 'Linear SVC', 'Decision Tree'], 'Score': [acc_svc, acc_knn, acc_log, acc_random_fores...
titanic-data-science-solutions.ipynb
Startupsci/data-science-notebooks
mit
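For reference, a minimal sketch of the ranking step with the scores sorted in descending order; the acc_* variable names beyond the one visible above are assumptions that should match those computed earlier in the notebook.
# Sketch: build the score table and sort it; acc_* names are assumed from earlier cells
models = pd.DataFrame({
    'Model': ['Support Vector Machines', 'KNN', 'Logistic Regression',
              'Random Forest', 'Naive Bayes', 'Perceptron',
              'Stochastic Gradient Descent', 'Linear SVC', 'Decision Tree'],
    'Score': [acc_svc, acc_knn, acc_log, acc_random_forest, acc_gaussian,
              acc_perceptron, acc_sgd, acc_linear_svc, acc_decision_tree]})
models.sort_values(by='Score', ascending=False)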
The 10th-degree polynomial appears to give the best fit overall. The lower-order polynomials don't fit the curve particularly well below 100 K. Also, the polynomial tracks the heating curve (the slightly higher mV points from 80-150 K) a little more closely than the cooling curve (295 to 80 K). Heating occurred much more ...
# These mV values are also within ~0.5 K of one another
print(fit_poly(273.15))       # fit
print(typeC.emf_mVK(273.15))  # NIST value
Type C calibrations/TypeC calcs corrected.ipynb
mannyfin/IRAS
bsd-3-clause
It's also a good idea to check that the polynomial does not have any inflection points, at least in the area we are interested in using the polynomial (77 K - 273.15 K). We can use the second derivative test to see if this will be important for our case.
x = sp.symbols('x') polynom = sp.Poly(fit_coeffs[0],x) # print(fit_coeffs[0]) # find the second derivative of the polynomial second_derivative = polynom.diff(x,x) print(second_derivative) sp.solve(second_derivative,x, domain= sp.S.Reals) print(second_derivative.evalf(subs={x:77})) print(second_derivative.evalf(subs={x...
Type C calibrations/TypeC calcs corrected.ipynb
mannyfin/IRAS
bsd-3-clause
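As a numpy-only cross-check of the same inflection-point test, here is a small sketch; it assumes fit_coeffs[0] holds the fitted coefficients in descending powers, as np.polyfit returns them.
import numpy as np

# Sketch: evaluate the second derivative over the range of interest and look for sign changes
poly = np.poly1d(fit_coeffs[0])     # assumes descending-power coefficients
second_deriv = poly.deriv(2)

T = np.linspace(77.0, 273.15, 2000)
sign_changes = np.where(np.diff(np.sign(second_deriv(T))))[0]
print("Inflection points in range:", T[sign_changes])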
Well this is not optimal-- there exists a local minimum at 83.86 K in our polynomial fit. We can attempt to fit an exponential curve to this very low temperature data and append this to the polynomial function.
lowT_df = df.query('T<103') # Now lets fit the data to an exponential # print(np.min(lowT_df['TypeC_calib_mV'])) def func(x, a, b, c, d): return a * np.exp(b * x - c) + d fit_coeffs = optimize.curve_fit(func, lowT_df['T'],lowT_df['TypeC_calib_mV'], p0=(1, 1, 90, -3)) print(fit_coeffs) a = fit_coeffs[0][0] b = fit_...
Type C calibrations/TypeC calcs corrected.ipynb
mannyfin/IRAS
bsd-3-clause
This appears to be a better fit than the polynomial in this regime. Now let's concatenate these two functions and interpolate near the points around 100 K to smooth things out if necessary. Recall that the two functions are fit_poly and expfunc.
# select data from 103 to 120 K just so we can see the point of intersection a little better checkT_df = df.query('77<=T<=120') fig4, ax4 = plt.subplots() ax4.plot(checkT_df['T'], fit_poly(checkT_df['T']), 'o', ms=0.5, label='polyfit', color='g') ax4.plot(lowT_df['T'], expfunc, 'o', ms=0.5, label='expfunc', color='r')...
Type C calibrations/TypeC calcs corrected.ipynb
mannyfin/IRAS
bsd-3-clause
The two fitted plots almost match near 103 K, but there is a little 'cusp'-like shape near the point of intersection. Let's smooth it out. Also, notice that the expfunc fit is a little better than the polyfit.
def switch_fcn(x, switchpoint, smooth): s = 0.5 + 0.5*np.tanh((x - switchpoint)/smooth) return s sw = switch_fcn(df['T'], 103, 0.2) expfunc2 = func(df['T'],a,b,c,d) len(expfunc2) fig, ax = plt.subplots() ax.plot(df['T'], sw,'o', ms=0.5) def combined(switch, low_f1, high_f2): comb = (1-switch)*low_f1 + sw...
Type C calibrations/TypeC calcs corrected.ipynb
mannyfin/IRAS
bsd-3-clause
Now I will take the polynomial values from 77 K to 273 K for calibration and append them to the NIST values.
# low temperature array low_temp = np.arange(77.15,273.15, 0.1) # low_temp_calib = fit_poly(low_temp) low_temp_calib = combined(switch_fcn(low_temp, 103, 3), func(low_temp,a,b,c,d), fit_poly(low_temp)) # high temperature array high_temp = np.arange(273.15,2588.15, 0.1) high_temp_nist = typeC.emf_mVK(high_temp) # conc...
Type C calibrations/TypeC calcs corrected.ipynb
mannyfin/IRAS
bsd-3-clause
But wait! Suppose we also want to fix that discontinuity at 273.15 K? We can apply the same procedure as before (see the sketch below):
1. Apply a tanh switch function: $\mathrm{switch} = 0.5 + 0.5\tanh\left(\frac{x - \mathrm{switchpoint}}{\mathrm{smooth}}\right)$
2. Combine both functions: $\mathrm{comb} = (1 - \mathrm{switch})\,f_1 + \mathrm{switch}\,f_2$
low_calib = combined(switch_fcn(Temperature, 103, 3), func(Temperature,a,b,c,d), fit_poly(Temperature)) high_calib = pd.DataFrame(index=high_temp, data=high_temp_nist,columns=['mV']) dummy_df = pd.DataFrame(index=low_temp, data=np.zeros(len(low_temp)),columns=['mV']) concat_high_calib = dummy_df.append(high_calib) pr...
Type C calibrations/TypeC calcs corrected.ipynb
mannyfin/IRAS
bsd-3-clause
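For clarity, here is a self-contained sketch of the tanh blend described above, applied to two stand-in functions; switch_fcn and combined mirror the definitions used earlier in the notebook, while f1 and f2 below are toy placeholders.
import numpy as np

# tanh switch that ramps from 0 to 1 around `switchpoint`
def switch_fcn(x, switchpoint, smooth):
    return 0.5 + 0.5 * np.tanh((x - switchpoint) / smooth)

# blend a low-temperature function f1 into a high-temperature function f2
def combined(switch, low_f1, high_f2):
    return (1 - switch) * low_f1 + switch * high_f2

# Toy example: blend two straight lines around x = 103 with smoothing width 3
x = np.linspace(77, 273.15, 500)
f1 = 0.01 * x - 2.0   # stand-in for the exponential fit
f2 = 0.02 * x - 3.0   # stand-in for the polynomial fit
blended = combined(switch_fcn(x, 103, 3), f1, f2)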
The prior value at 273.15 K was -0.00867, whereas the actual value is 0. After the smoothing, the new value is -0.004336, about half of the prior value. Some of the values a little above 273.15 K do not match the NIST table exactly, but this is much better than the jump that we had before.
fig, ax = plt.subplots() freezept_calib.plot(ax=ax, label ='combined') ax.plot(Temperature,low_calib, label = 'low calib') ax.plot(Temperature,concat_high_calib, label= 'high_calib') ax.set_ylim([-.04,0.04]) ax.set_xlim([268,277]) ax.legend() print(signal.argrelmin(freezept_calib.values)) # print(signal.argrelextrema(...
Type C calibrations/TypeC calcs corrected.ipynb
mannyfin/IRAS
bsd-3-clause
We will also need some specific modules and a little "IPython magic" to show the plots:
from numpy import linalg as LA
from scipy import signal
import matplotlib.pyplot as plt
%matplotlib inline
FRF_plots.ipynb
pxcandeias/py-notebooks
mit
Back to top Dynamic system setup In this example we will simulate a two-degree-of-freedom (2DOF) system as an LTI system. For that purpose, we will define a mass and a stiffness matrix and use proportional damping:
MM = np.asmatrix(np.diag([1., 2.]))
print(MM)
KK = np.asmatrix([[20., -10.], [-10., 10.]])
print(KK)
C1 = 0.1*MM + 0.02*KK
print(C1)
FRF_plots.ipynb
pxcandeias/py-notebooks
mit
For the LTI system we will use a state space formulation. For that we will need the four matrices describing the system (A), the input (B), the output (C) and the feedthrough (D):
A = np.bmat([[np.zeros_like(MM), np.identity(MM.shape[0])], [LA.solve(-MM,KK), LA.solve(-MM,C1)]]) print(A) Bf = KK*np.asmatrix(np.ones((2, 1))) B = np.bmat([[np.zeros_like(Bf)],[LA.solve(MM,Bf)]]) print(B) Cd = np.matrix((1,0)) Cv = np.asmatrix(np.zeros((1,MM.shape[1]))) Ca = np.asmatrix(np.zeros((1,MM.shape[1]))) C...
FRF_plots.ipynb
pxcandeias/py-notebooks
mit
The LTI system is simply defined as:
system = signal.lti(A, B, C, D)
FRF_plots.ipynb
pxcandeias/py-notebooks
mit
To check the results presented ahead we will need the angular frequencies and damping coefficients of this system. The eigenanalysis of the system matrix yields them after some computations:
w1, v1 = LA.eig(A) ix = np.argsort(np.absolute(w1)) # order of ascending eigenvalues w1 = w1[ix] # sorted eigenvalues v1 = v1[:,ix] # sorted eigenvectors zw = -w1.real # damping coefficient time angular frequency wD = w1.imag # damped angular frequency zn = 1./np.sqrt(1.+(wD/-zw)**2) # the minus sign is formally correc...
FRF_plots.ipynb
pxcandeias/py-notebooks
mit
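For reference, a sketch of the eigenvalue post-processing, assuming A is the state matrix assembled above; for an underdamped mode the eigenvalues come in complex pairs lambda = -zeta*wn +/- i*wd, so wn = |lambda| and zeta = -Re(lambda)/|lambda|.
import numpy as np
from numpy import linalg as LA

w1, v1 = LA.eig(A)
ix = np.argsort(np.abs(w1))      # sort by magnitude (ascending frequency)
w1 = w1[ix]
wn = np.abs(w1)                  # undamped angular frequencies [rad/s]
zn = -w1.real / np.abs(w1)       # damping ratios
print(wn)
print(zn)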
Back to top Frequency response function A frequency response function is a complex valued function of frequency. Let us see how it looks when we plot the real and imaginary parts separately:
w, H = system.freqresp() fig, ax = plt.subplots(2, 1) fig.suptitle('Real and imaginary plots') # Real part plot ax[0].plot(w, H.real, label='FRF') ax[0].axvline(wn[0], color='k', label='First mode', linestyle='--') ax[0].axvline(wn[2], color='k', label='Second mode', linestyle='--') ax[0].set_ylabel('Real [-]') ax[0].g...
FRF_plots.ipynb
pxcandeias/py-notebooks
mit
Back to top Nyquist plot A Nyquist plot represents the real and imaginary parts of the complex FRF in a single plot:
plt.figure()
plt.title('Nyquist plot')
plt.plot(H.real, H.imag, 'b')
plt.plot(H.real, -H.imag, 'r')
plt.xlabel('Real [-]')
plt.ylabel('Imaginary [-]')
plt.grid(True)
plt.axis('equal')
plt.show()
FRF_plots.ipynb
pxcandeias/py-notebooks
mit
Back to top Bode plot A Bode plot represents the complex FRF in magnitude-phase versus frequency:
w, mag, phase = system.bode() fig, ax = plt.subplots(2, 1) fig.suptitle('Bode plot') # Magnitude plot ax[0].plot(w, mag, label='FRF') ax[0].axvline(wn[0], color='k', label='First mode', linestyle='--') ax[0].axvline(wn[2], color='k', label='Second mode', linestyle='--') ax[0].set_ylabel('Magnitude [dB]') ax[0].grid(Tru...
FRF_plots.ipynb
pxcandeias/py-notebooks
mit
Back to top Nichols plot A Nichols plot combines the magnitude and phase of the Bode plot into a single plot of magnitude versus phase:
plt.figure()
plt.title('Nichols plot')
plt.plot(phase*np.pi/180., mag)
plt.xlabel('Phase [rad]')
plt.ylabel('Magnitude [dB]')
plt.grid(True)
plt.show()
FRF_plots.ipynb
pxcandeias/py-notebooks
mit
Let's load the data
from sklearn.datasets import load_boston

bunch = load_boston()
print(bunch.DESCR)

X, y = pd.DataFrame(data=bunch.data, columns=bunch.feature_names.astype(str)), bunch.target
X.head()
2. Бостон.ipynb
lithiumdenis/MLSchool
mit
Let's fix the random number generator seed for reproducibility:
SEED = 22
np.random.seed(SEED)
2. Бостон.ipynb
lithiumdenis/MLSchool
mit
Homework! Let's split the data into a training set and a hold-out set:
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=SEED)
X_train.shape, y_train.shape, X_test.shape, y_test.shape
2. Бостон.ipynb
lithiumdenis/MLSchool
mit
We will measure quality with the mean squared error metric:
from sklearn.metrics import mean_squared_error
2. Бостон.ipynb
lithiumdenis/MLSchool
mit
<div class="panel panel-info" style="margin: 50px 0 0 0"> <div class="panel-heading"> <h3 class="panel-title">Задача 1.</h3> </div> <div class="panel"> Обучите <b>LinearRegression</b> из пакета <b>sklearn.linear_model</b> на обучающей выборке (<i>X_train, y_train</i>) и измерьте качество на...
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

clf = LinearRegression()
clf.fit(X_train, y_train);

print('Mean error: %5.4f' % \
      (-np.mean(cross_val_score(clf, X_test, y_test, cv=5, scoring='neg_mean_squared_error'))))
2. Бостон.ipynb
lithiumdenis/MLSchool
mit
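A more direct way to measure hold-out quality (a sketch, assuming the X_train/X_test split above): fit on the training set and score the predictions on the test set with mean_squared_error.
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

# Fit on the training split, evaluate on the held-out split
clf = LinearRegression()
clf.fit(X_train, y_train)
mse = mean_squared_error(y_test, clf.predict(X_test))
print('Test MSE: %.4f' % mse)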
<div class="panel panel-info" style="margin: 50px 0 0 0"> <div class="panel-heading"> <h3 class="panel-title">Задача 2. (с подвохом)</h3> </div> <div class="panel"> Обучите <b>SGDRegressor</b> из пакета <b>sklearn.linear_model</b> на обучающей выборке (<i>X_train, y_train</i>) и измерьте ка...
from sklearn.linear_model import SGDRegressor from sklearn.preprocessing import StandardScaler ss = StandardScaler() X_scaled = ss.fit_transform(X_train) y_scaled = ss.fit_transform(y_train) sgd = SGDRegressor() sgd.fit(X_scaled, y_scaled); print('Вышла средняя ошибка, равная %5.4f' % \ (-np.mean(cross_v...
2. Бостон.ipynb
lithiumdenis/MLSchool
mit
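The likely catch is that SGDRegressor is sensitive to feature scaling. Here is one sketch (not necessarily the intended solution) that wraps the scaler and the regressor in a pipeline, assuming the same split and SEED as above, so scaling is learned only from the training data.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import SGDRegressor
from sklearn.metrics import mean_squared_error

# Scale features inside a pipeline so the scaler is fit on training data only
sgd = make_pipeline(StandardScaler(), SGDRegressor(random_state=SEED))
sgd.fit(X_train, y_train)
print('Test MSE: %.4f' % mean_squared_error(y_test, sgd.predict(X_test)))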
<div class="panel panel-info" style="margin: 50px 0 0 0"> <div class="panel-heading"> <h3 class="panel-title">Задача 3.</h3> </div> <div class="panel"> Попробуйте все остальные классы: <ul> <li>Ridge <li>Lasso <li>ElasticNet </ul> ...
from sklearn.preprocessing import StandardScaler, PolynomialFeatures from sklearn.pipeline import Pipeline, make_pipeline from sklearn.model_selection import GridSearchCV from sklearn.linear_model import RidgeCV ############Ridge params = { 'alpha': [10**x for x in range(-2,3)] } from sklearn.linear_model import...
2. Бостон.ipynb
lithiumdenis/MLSchool
mit
Understanding Gradient Descent In order to better understand gradient descent, let's implement it to solve a familiar problem - least-squares linear regression. While we are able to find the solution to ordinary least-squares linear regression analytically (recall its value as $\theta = (X^TX)^{−1}X^TY$), we can also f...
q1_answer = r""" Put your answer here, replacing this text. $$\frac{\partial}{\partial \theta_j} Loss(\theta) = \frac{1}{n} \sum_{i=1}^n \dots$$ """ display(Markdown(q1_answer)) q1_answer = r""" **SOLUTION:** $$\frac{\partial}{\partial \theta_j} Loss(\theta) = \frac{2}{n} \sum_{i=1}^n -x_{i,j} \left(y_i - f_\th...
sp17/disc/disc11/disc11_solution.ipynb
DS-100/sp17-materials
gpl-3.0
Question 2: Now, try to write that formula in terms of the matrices $X$, $y$, and $\theta$.
q2_answer = r"""
Put your answer here, replacing this text.

$$\frac{\partial}{\partial \theta} Loss(X) = \dots$$
"""
display(Markdown(q2_answer))

q2_answer = r"""
**SOLUTION:**

$$\frac{\partial}{\partial \theta} Loss(X) = -\frac{2}{n} X^T (y - X \theta)$$
"""
display(Markdown(q2_answer))
sp17/disc/disc11/disc11_solution.ipynb
DS-100/sp17-materials
gpl-3.0
Question 3: Using this gradient function, complete the python function below which calculates the gradient for inputs $X$, $y$, and $\theta$. You should get a gradient of $[7, 48]$ on the simple data below.
def linear_regression_grad(X, y, theta):
    grad = -2/X.shape[0] * X.T @ (y - X @ theta)  #SOLUTION
    return grad

theta = [1, 4]
simple_X = np.vstack([np.ones(10), np.arange(10)]).T
simple_y = np.arange(10) * 3 + 2
linear_regression_grad(simple_X, simple_y, theta)
sp17/disc/disc11/disc11_solution.ipynb
DS-100/sp17-materials
gpl-3.0
Question 4: Before we perform gradient descent, let's visualize the surface we're attempting to descend over. Run the next few cells to plot the loss surface as a function of $\theta_0$ and $\theta_1$, for some toy data.
def plot_surface_3d(X, Y, Z, angle): highest_Z = max(Z.reshape(-1,1)) lowest_Z = min(Z.reshape(-1,1)) fig = plt.figure() ax = fig.gca(projection='3d') surf = ax.plot_surface(X, Y, Z, cmap=cm.coolwarm, linewidth=0, an...
sp17/disc/disc11/disc11_solution.ipynb
DS-100/sp17-materials
gpl-3.0
We create some toy data in two dimensions to perform our regressions on:
np.random.seed(100)
X_1 = np.arange(50)/5 + 5
X = np.vstack([np.ones(50), X_1]).T
y = (X_1 * 2 + 3) + np.random.normal(0, 2.5, size=50)
plt.plot(X_1, y, ".")
sp17/disc/disc11/disc11_solution.ipynb
DS-100/sp17-materials
gpl-3.0
And plot our loss:
angle_slider = widgets.FloatSlider(min=0, max=360, step=15, value=45) def plot_regression_loss(angle): t0_vals = np.linspace(-10,10,100) t1_vals = np.linspace(-2,5,100) theta_0,theta_1 = np.meshgrid(t0_vals, t1_vals) thetas = np.vstack((theta_0.flatten(), theta_1.flatten())) loss_vals = 2/X.shape[...
sp17/disc/disc11/disc11_solution.ipynb
DS-100/sp17-materials
gpl-3.0
Consider:
- What do you notice about the loss surface for this simple regression example?
- Where are the optimal values $(\theta_0, \theta_1)$?
- Do you think that the shape of this surface will make gradient descent a viable solution to find these optimal values?
- What other loss surface shapes could you imagine...
def gradient_descent(X, y, theta0, gradient_function, learning_rate = 0.001, max_iter=1000000, epsilon=0.001): theta_hat = theta0 # Initial guess for t in range(1, max_iter): grad = gradient_function(X, y, theta_hat) # Now for the update step theta_hat = theta_hat...
sp17/disc/disc11/disc11_solution.ipynb
DS-100/sp17-materials
gpl-3.0
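For reference, a minimal self-contained sketch of a constant-rate descent loop with a gradient-norm stopping test; the truncated cell above may differ in its exact update and stopping rules.
import numpy as np

# Sketch: constant learning rate, stop when the gradient norm is small
def gradient_descent(X, y, theta0, gradient_function,
                     learning_rate=0.001, max_iter=1000000, epsilon=0.001):
    theta_hat = np.array(theta0, dtype=float)          # initial guess
    for t in range(1, max_iter):
        grad = gradient_function(X, y, theta_hat)
        theta_hat = theta_hat - learning_rate * grad    # update step
        if np.linalg.norm(grad) < epsilon:              # converged
            break
    return theta_hat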
Now let's visualize how our regression estimates change as we perform gradient descent:
theta_0s = [] theta_1s = [] plot_idx = [1, 5, 20, 100, 500, 2000, 10000] def plot_gradient_wrapper(X, y, theta): grad = linear_regression_grad(X, y, theta) theta_0s.append(theta[0]) theta_1s.append(theta[1]) t = len(theta_0s) if t in plot_idx: plt.subplot(121) plt.xlim([4, 12]) ...
sp17/disc/disc11/disc11_solution.ipynb
DS-100/sp17-materials
gpl-3.0
Question 6: In Prof. Gonzalez's lecture, instead of using a constant learning rate, he used a learning rate that decreased over time, according to a function: $$\rho(t) = \frac{r}{t}$$ Where $r$ represents some initial learning rate. This has the feature of decreasing the learning rate as we get closer to the optimal s...
def sigmoid(t): return 1/(1 + np.e**-t)
sp17/disc/disc11/disc11_solution.ipynb
DS-100/sp17-materials
gpl-3.0
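A sketch of the decaying-rate variant described in Question 6, assuming the same gradient_function interface as above; the step size at iteration t is rho(t) = r/t, with r an initial rate chosen here for illustration.
import numpy as np

# Sketch: gradient descent with a 1/t learning-rate schedule
def gradient_descent_decay(X, y, theta0, gradient_function,
                           r=0.1, max_iter=100000, epsilon=0.001):
    theta_hat = np.array(theta0, dtype=float)
    for t in range(1, max_iter):
        grad = gradient_function(X, y, theta_hat)
        theta_hat = theta_hat - (r / t) * grad   # rho(t) = r / t
        if np.linalg.norm(grad) < epsilon:
            break
    return theta_hat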
And then complete the gradient function. You should get a gradient of about $[0.65, 0.61]$ for the given values $\theta$ on this example dataset.
def logistic_regression_grad(X, y, theta): grad = (sigmoid(X @ theta) - y) @ X #SOLUTION return grad theta = [0, 1] simple_X_1 = np.hstack([np.arange(10)/10, np.arange(10)/10 + 0.75]) simple_X = np.vstack([np.ones(20), simple_X_1]).T simple_y = np.hstack([np.zeros(10), np.ones(10)]) linear_regression_grad(simp...
sp17/disc/disc11/disc11_solution.ipynb
DS-100/sp17-materials
gpl-3.0
Now let's see how we can use our gradient descent tools to fit a regression on some real data! First, let's load the breast cancer dataset from lecture, and plot breast mass radius versus category - malignant or benign. As in lecture, we jitter the response variable to avoid overplotting.
import sklearn.datasets data_dict = sklearn.datasets.load_breast_cancer() data = pd.DataFrame(data_dict['data'], columns=data_dict['feature_names']) data['malignant'] = (data_dict['target'] == 0) data['malignant'] = data['malignant'] + 0.1*np.random.rand(len(data['malignant'])) - 0.05 X_log_1 = data['mean radius'] X_l...
sp17/disc/disc11/disc11_solution.ipynb
DS-100/sp17-materials
gpl-3.0
Question 8: Now, using our earlier defined gradient_descent function, find optimal parameters $(\theta_0, \theta_1)$ to fit the breast cancer data. You will have to tune the learning rate beyond the default of the function, and think of what a good initial guess for $\theta$ would be, in both dimensions.
theta_log = gradient_descent(X_log, y_log, [0, 1], logistic_regression_grad, learning_rate=0.0001)  #SOLUTION
theta_log
sp17/disc/disc11/disc11_solution.ipynb
DS-100/sp17-materials
gpl-3.0
With optimal $\theta$ chosen, we can now plot our logistic curve and our decision boundary, and look at how our model categorizes our data:
y_lowX = X_log_1[sigmoid(X_log @ theta_log) < 0.5] y_lowy = y_log[sigmoid(X_log @ theta_log) < 0.5] y_highX = X_log_1[sigmoid(X_log @ theta_log) > 0.5] y_highy = y_log[sigmoid(X_log @ theta_log) > 0.5] sigrange = np.arange(5, 30, 0.05) sigrange_X = np.vstack([np.ones(500), sigrange]).T d_boundary = -theta_log[0]/theta...
sp17/disc/disc11/disc11_solution.ipynb
DS-100/sp17-materials
gpl-3.0
And, we can calculate our classification accuracy.
n_errors = sum(y_lowy > 0.5) + sum(y_highy < 0.5)
accuracy = round((len(y_log) - n_errors)/len(y_log) * 1000)/10
print("Classification Accuracy - {}%".format(accuracy))
sp17/disc/disc11/disc11_solution.ipynb
DS-100/sp17-materials
gpl-3.0
Target distribution We use the peaks function from MATLAB, modified so it is positive: $$ p(x,y) \propto |3 (1-x)^2 e^{-x^2 - (y+1)^2} - 10 (\frac{x}{5} - x^3 - y^5) e^{-x^2 -y^2} - \frac{1}{3} e^{-(x+1)^2 - y^2} | $$
# Generate a pdf # the following steps generate a pdf; this is equivalent to the function "peaks(n)" in matlab n = 100 # number of dimension pdf = np.zeros([n, n]) sigma = np.zeros([n, n]) s = np.zeros([n, n]) x = -3.0 for i in range(0, n): y = -3.0 for j in range(0, n): pdf[j, i] = ( 3.0 ...
deprecated/simulated_annealing_2d_demo.ipynb
probml/pyprobml
mit
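For comparison, a vectorized numpy sketch of the same absolute-valued peaks surface; the loop-based cell above builds the same grid point by point.
import numpy as np

# Sketch: evaluate |peaks(x, y)| on a 100x100 grid over [-3, 3] x [-3, 3]
n = 100
x = np.linspace(-3.0, 3.0, n)
y = np.linspace(-3.0, 3.0, n)
X, Y = np.meshgrid(x, y)
pdf = np.abs(3.0 * (1 - X) ** 2 * np.exp(-X ** 2 - (Y + 1) ** 2)
             - 10.0 * (X / 5.0 - X ** 3 - Y ** 5) * np.exp(-X ** 2 - Y ** 2)
             - (1.0 / 3.0) * np.exp(-(X + 1) ** 2 - Y ** 2))
pdf = pdf / pdf.sum()   # normalize so it sums to 1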
Heat bath The "heat bath" refers to a modified version of the distribution in which we vary the temperature.
Tplots = 10 # initial temperature for the plots stepT = 4 # how many steps should the Temperature be *0.2 for for i in range(0, stepT): sigma = np.exp(-(energy) / Tplots) sigma = sigma / sigma.max() ttl = "T={:0.2f}".format(Tplots) Tplots = Tplots * 0.2 X = np.arange(0, 100 + 100.0 / (n - 1), 10...
deprecated/simulated_annealing_2d_demo.ipynb
probml/pyprobml
mit
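A small sketch of the heat-bath idea, assuming energy is (up to a constant) the negative log of the target density built above: lowering T sharpens exp(-energy/T) around the modes, while raising T flattens it.
import numpy as np

# Sketch: temper the target density at several temperatures
energy = -np.log(pdf + 1e-300)           # assumption: energy from the pdf above; avoid log(0)
for T in [10.0, 2.0, 0.4, 0.08]:
    sigma = np.exp(-energy / T)
    sigma = sigma / sigma.max()          # rescale to [0, 1] for plotting
    print(f"T={T:0.2f}: fraction of grid above 0.5 = {np.mean(sigma > 0.5):.4f}")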
SA algorithm
def sim_anneal(proposal="gaussian", sigma=10): np.random.seed(42) xcur = np.array([np.floor(np.random.uniform(0, 100)), np.floor(np.random.uniform(0, 100))]) xcur = xcur.astype(int) ns = 300 # number of samples to keep T = 1 # start temperature alpha = 0.99999 # cooling schedule alpha = 0...
deprecated/simulated_annealing_2d_demo.ipynb
probml/pyprobml
mit
Run experiments
proposals = {"gaussian", "uniform"} x_hist = {} prob_hist = {} temp_hist = {} for proposal in proposals: print(proposal) x_hist[proposal], prob_hist[proposal], temp_hist[proposal] = sim_anneal(proposal=proposal) for proposal in proposals: plt.figure() plt.plot(temp_hist[proposal]) plt.title("temper...
deprecated/simulated_annealing_2d_demo.ipynb
probml/pyprobml
mit
Compute seed-based time-frequency connectivity in sensor space Computes the connectivity between a seed-gradiometer close to the visual cortex and all other gradiometers. The connectivity is computed in the time-frequency domain using Morlet wavelets and the debiased squared weighted phase lag index [1]_ is used as con...
# Author: Martin Luessi <mluessi@nmr.mgh.harvard.edu>
#
# License: BSD (3-clause)

import numpy as np

import mne
from mne import io
from mne.connectivity import spectral_connectivity, seed_target_indices
from mne.datasets import sample
from mne.time_frequency import AverageTFR

print(__doc__)
0.20/_downloads/bf3ad991f7c7776e245520709f49cb04/plot_cwt_sensor_connectivity.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Set parameters
data_path = sample.data_path() raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif' event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif' # Setup for reading the raw data raw = io.read_raw_fif(raw_fname) events = mne.read_events(event_fname) # Add a bad channel raw.info['bads'] ...
0.20/_downloads/bf3ad991f7c7776e245520709f49cb04/plot_cwt_sensor_connectivity.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Checking out the data This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above. Below is a plot showing the number of bike riders ove...
rides[:24*10].plot(x='dteday', y='cnt')
Your_first_neural_network.ipynb
jorgedominguezchavez/dlnd_first_neural_network
mit
Splitting the data into training, testing, and validation sets We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
# Save data for approximately the last 21 days test_data = data[-21*24:] # Now remove the test data from the data set data = data[:-21*24] # Separate the data into features and targets target_fields = ['cnt', 'casual', 'registered'] features, targets = data.drop(target_fields, axis=1), data[target_fields] test_feat...
Your_first_neural_network.ipynb
jorgedominguezchavez/dlnd_first_neural_network
mit
We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
Your_first_neural_network.ipynb
jorgedominguezchavez/dlnd_first_neural_network
mit
Time to build the network Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes. <img src="assets/neural_network.p...
class NeuralNetwork(object): def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate): # Set number of nodes in input, hidden and output layers. self.input_nodes = input_nodes self.hidden_nodes = hidden_nodes self.output_nodes = output_nodes # Initialize we...
Your_first_neural_network.ipynb
jorgedominguezchavez/dlnd_first_neural_network
mit
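A sketch of the forward pass this project asks for, assuming the usual starter-code layout (weights_input_to_hidden of shape input x hidden, weights_hidden_to_output of shape hidden x output, a sigmoid hidden activation, and a linear output for regression); the actual class attribute names may differ.
import numpy as np

# Sketch of the forward pass: sigmoid hidden layer, identity output layer
def forward_pass(X, weights_input_to_hidden, weights_hidden_to_output):
    hidden_inputs = np.dot(X, weights_input_to_hidden)
    hidden_outputs = 1.0 / (1.0 + np.exp(-hidden_inputs))    # sigmoid activation
    final_inputs = np.dot(hidden_outputs, weights_hidden_to_output)
    final_outputs = final_inputs                              # f(x) = x for regression
    return hidden_outputs, final_outputs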
Unit tests Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.
import unittest inputs = np.array([[0.5, -0.2, 0.1]]) targets = np.array([[0.4]]) test_w_i_h = np.array([[0.1, -0.2], [0.4, 0.5], [-0.3, 0.2]]) test_w_h_o = np.array([[0.3], [-0.1]]) class TestMethods(unittest.TestCase): ########## # Un...
Your_first_neural_network.ipynb
jorgedominguezchavez/dlnd_first_neural_network
mit
Training the network Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training se...
import sys ### Set the hyperparameters here ### iterations = 40000 learning_rate = 0.5 hidden_nodes = 35 output_nodes = 1 N_i = train_features.shape[1] network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate) losses = {'train':[], 'validation':[]} for ii in range(iterations): # Go through a random...
Your_first_neural_network.ipynb
jorgedominguezchavez/dlnd_first_neural_network
mit
Check out your predictions Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
fig, ax = plt.subplots(figsize=(8,4)) mean, std = scaled_features['cnt'] predictions = network.run(test_features).T*std + mean ax.plot(predictions[0], label='Prediction') ax.plot((test_targets['cnt']*std + mean).values, label='Data') ax.set_xlim(right=len(predictions)) ax.legend() dates = pd.to_datetime(rides.ix[test...
Your_first_neural_network.ipynb
jorgedominguezchavez/dlnd_first_neural_network
mit
Network Architecture The encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide rec...
learning_rate = 0.001 inputs_ = targets_ = ### Encoder conv1 = # Now 28x28x16 maxpool1 = # Now 14x14x16 conv2 = # Now 14x14x8 maxpool2 = # Now 7x7x8 conv3 = # Now 7x7x8 encoded = # Now 4x4x8 ### Decoder upsample1 = # Now 7x7x8 conv4 = # Now 7x7x8 upsample2 = # Now 14x14x8 conv5 = # Now 14x14x8 upsample3 =...
tutorials/autoencoder/Convolutional_Autoencoder.ipynb
WillenZh/deep-learning-project
mit
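One possible way to fill in the skeleton above, written against the TensorFlow 1.x tf.layers API used in this tutorial; this is a sketch, not the reference solution, and the filter counts simply follow the shape comments in the skeleton.
import tensorflow as tf

# Sketch of the convolutional autoencoder graph (TF 1.x style)
learning_rate = 0.001
inputs_  = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')

### Encoder
conv1 = tf.layers.conv2d(inputs_, 16, (3, 3), padding='same', activation=tf.nn.relu)    # 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1, (2, 2), (2, 2), padding='same')               # 14x14x16
conv2 = tf.layers.conv2d(maxpool1, 8, (3, 3), padding='same', activation=tf.nn.relu)    # 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, (2, 2), (2, 2), padding='same')               # 7x7x8
conv3 = tf.layers.conv2d(maxpool2, 8, (3, 3), padding='same', activation=tf.nn.relu)    # 7x7x8
encoded = tf.layers.max_pooling2d(conv3, (2, 2), (2, 2), padding='same')                # 4x4x8

### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7, 7))                           # 7x7x8
conv4 = tf.layers.conv2d(upsample1, 8, (3, 3), padding='same', activation=tf.nn.relu)   # 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14, 14))                           # 14x14x8
conv5 = tf.layers.conv2d(upsample2, 8, (3, 3), padding='same', activation=tf.nn.relu)   # 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28, 28))                           # 28x28x8
conv6 = tf.layers.conv2d(upsample3, 16, (3, 3), padding='same', activation=tf.nn.relu)  # 28x28x16

logits = tf.layers.conv2d(conv6, 1, (3, 3), padding='same', activation=None)            # 28x28x1
decoded = tf.nn.sigmoid(logits, name='decoded')

loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)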
Training As before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
sess = tf.Session() epochs = 20 batch_size = 200 sess.run(tf.global_variables_initializer()) for e in range(epochs): for ii in range(mnist.train.num_examples//batch_size): batch = mnist.train.next_batch(batch_size) imgs = batch[0].reshape((-1, 28, 28, 1)) batch_cost, _ = sess.run([cost, opt...
tutorials/autoencoder/Convolutional_Autoencoder.ipynb
WillenZh/deep-learning-project
mit
Denoising As I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then cl...
learning_rate = 0.001 inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs') targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets') ### Encoder conv1 = # Now 28x28x32 maxpool1 = # Now 14x14x32 conv2 = # Now 14x14x32 maxpool2 = # Now 7x7x32 conv3 = # Now 7x7x16 encoded = # Now 4x...
tutorials/autoencoder/Convolutional_Autoencoder.ipynb
WillenZh/deep-learning-project
mit
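A sketch of the noising step described above: add zero-mean Gaussian noise to a batch of images and clip back into the valid [0, 1] pixel range; noise_factor and the batch variable are assumptions matching the training loop shown earlier.
import numpy as np

# Sketch: corrupt a batch of MNIST images with Gaussian noise, then clip to [0, 1]
noise_factor = 0.5
imgs = batch[0].reshape((-1, 28, 28, 1))                       # assumes an MNIST batch as above
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)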
Checking out the performance Here I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4)) in_imgs = mnist.test.images[:10] noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape) noisy_imgs = np.clip(noisy_imgs, 0., 1.) reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))...
tutorials/autoencoder/Convolutional_Autoencoder.ipynb
WillenZh/deep-learning-project
mit
Download prediction files We release four groups of predictions: base: the base MultiBERTs models (bert-base-uncased), with 5 coreference runs for each of 25 pretraining checkpoints. cda_intervention-50k: as above, but with 50k steps of CDA applied to each checkpoint. 5 coreference runs for each of 25 pretraining chec...
#@title Download predictions and metadata scratch_dir = "/tmp/multiberts_coref" if not os.path.isdir(scratch_dir): os.mkdir(scratch_dir) preds_root = "https://storage.googleapis.com/multiberts/public/example-predictions/coref" GROUP_NAMES = [ 'base', 'base_extra_seeds', 'cda_intervention-50k', ...
language/multiberts/coref.ipynb
google-research/language
apache-2.0
Finally, load the occupations data from the U.S. Bureau of Labor Statistics, which we'll use to compute the bias correlation.
#@title Load occupations data
occupation_tsv_path = os.path.join(data_root, "occupations-stats.tsv")  # Link to BLS data
occupation_data = pd.read_csv(occupation_tsv_path, sep="\t").set_index("occupation")
occupation_pf = (occupation_data['bls_pct_female'] / 100.0).sort_index()
occupation_pf
language/multiberts/coref.ipynb
google-research/language
apache-2.0
Define metrics The values in preds.tsv represent binary predictions about whether each of our models predicts that the pronoun corresponds to the occupation term (0) or the other participant (1) in each Winogender example. With this, we can compute two metrics: - Accuracy against binary labels (whether the pronoun shou...
#@title Define metrics, test on one run def get_accuracy(answers, binary_preds): return np.mean(answers == binary_preds) def get_bias_score(preds_row): df = label_info.copy() df['pred_occupation'] = (preds_row == 0) m_pct = df[df["gender"] == "MASCULINE"].groupby(by="occupation")['pred_occupation'].agg...
language/multiberts/coref.ipynb
google-research/language
apache-2.0
Computing the bias scores can be slow because of the grouping operations, so we preprocess all runs before running the bootstrap. This gives us a [num_runs, 60] matrix, and we can compute the final bias correlation inside the multibootstrap routine.
bias_scores = np.stack([get_bias_score(p) for p in preds], axis=0)
bias_scores.shape
language/multiberts/coref.ipynb
google-research/language
apache-2.0
Finally, attach these to the run info dataframe - this will make it easier to filter by row later.
run_info['coref_preds'] = list(preds)
run_info['bias_scores'] = list(bias_scores)
language/multiberts/coref.ipynb
google-research/language
apache-2.0
Plot overall scores for each group Before we introduce the multibootstrap, let's get a high-level idea of what our metrics look like by just computing the mean scores for each group:
run_info['accuracy'] = [get_accuracy(label_info['answer'], p) for p in preds]
rs, slopes = zip(*[get_bias_corr_and_slope(pf_bls, bs) for bs in bias_scores])
run_info['bias_r'] = rs
run_info['bias_slope'] = slopes
run_info.groupby(by='group_name')[['accuracy', 'bias_r']].agg('mean')
language/multiberts/coref.ipynb
google-research/language
apache-2.0
Note that accuracy is very similar across all groups, while - as we might expect - the bias correlation (bias_r) decreases significantly for the CDA runs.
# Accuracy across runs
data = run_info[run_info.group_name == 'base']
desc = data.groupby(by='pretrain_seed').agg(dict(accuracy='mean')).describe()
print(f"{desc.accuracy['mean']:.1%} +/- {desc.accuracy['std']:.1%}")
language/multiberts/coref.ipynb
google-research/language
apache-2.0
You can also check how much this varies by pretraining seed. As it turns out, not a lot. Here's a plot showing this for the base runs:
#@title Accuracy variation by pretrain run
fig = pyplot.figure(figsize=(15, 5))
ax = fig.gca()
sns.boxplot(ax=ax, x='pretrain_seed', y='accuracy', data=run_info[run_info.group_name == 'base'])
ax.set_title("Accuracy variation by pretrain seed, base")
ax.set_ylim(0, 1.0)
ax.axhline(0)
language/multiberts/coref.ipynb
google-research/language
apache-2.0
As a quick check, we can permute the seeds and see if much changes about our estimate:
# Accuracy across runs - randomized seed baseline rng = np.random.RandomState(42) data = run_info[run_info.group_name == 'base'].copy() bs = data.accuracy.to_numpy() data['accuracy_bs'] = rng.choice(bs, size=len(bs)) desc = data.groupby(by='pretrain_seed').agg(dict(accuracy_bs='mean')).describe() print(f"With replaceme...
language/multiberts/coref.ipynb
google-research/language
apache-2.0
Figure 5 (Appendix): Bias correlation for each pre-training seed Let's do the same as above, but for bias correlation. Again, this is on the whole run - no bootstrap yet - but should give us a sense of the variation you'd expect if you were to run this experiment ad-hoc on different pretraining seeds. As above, we'll j...
fig = pyplot.figure(figsize=(15, 7)) ax = fig.gca() base = sns.boxplot(ax=ax, x='pretrain_seed', y='bias_r', data=run_info[run_info.group_name == 'base'], palette=['darkslategray']) ax.set_title("Winogender bias correlation (r) by pretrain seed") ax.set_ylim(-0.2, 1.0) ax.axhline(0) legend_elements = [matplotlib.patch...
language/multiberts/coref.ipynb
google-research/language
apache-2.0
Figure 3: Bias correlation by pretrain seed, base and CDA intervention Now let's compare the base runs to running CDA for 50k steps. Again, no bootstrap yet - just plotting scores on full runs, to get a sense of how much difference we might expect to see if we did this ad-hoc and measured the effect size of CDA using j...
expt_group = "cda_intervention-50k" fig = pyplot.figure(figsize=(15, 7)) ax = fig.gca() base = sns.boxplot(ax=ax, x='pretrain_seed', y='bias_r', data=run_info[run_info.group_name == 'base'], palette=['darkslategray']) expt = sns.boxplot(ax=ax, x='pretrain_seed', y='bias_r', data=run_info[run_info.group_name == expt_gr...
language/multiberts/coref.ipynb
google-research/language
apache-2.0
Appendix D: Cross-Seed Variation You might ask: how much of this variation is actually due to the coreference task training? We can see decently large error bars for each pretraining seed above, and we only had five coreference runs each. One simple test is to ignore the pretraining seed. We'll create groups by randoml...
data = run_info[run_info.group_name == 'base'].copy()
bs = data.bias_r.to_numpy()
for i in range(5):
    rng = np.random.RandomState(i)
    data[f'bias_r_bs_{i}'] = rng.choice(bs, size=len(bs))
data.groupby(by='pretrain_seed').agg('mean').describe().loc[['mean', 'std']]
language/multiberts/coref.ipynb
google-research/language
apache-2.0
Figure 6: Extra task runs Another way to test this is to look at the base_extra_seeds runs, where we ran 5 different pretraining seeds with 25 task runs. This gives us a better estimate of the mean for each pretraining seed.
fig = pyplot.figure(figsize=(8, 7)) ax = fig.gca() sns.boxplot(ax=ax, x='pretrain_seed', y='bias_r', data=run_info[run_info.group_name == 'base_extra_seeds']) ax.set_title("Bias variation by pretrain seed, base w/extra seeds") ax.set_ylim(-0.2, 1.0) ax.axhline(0) ax.title.set_fontsize(16) ax.set_xlabel("Pretraining Se...
language/multiberts/coref.ipynb
google-research/language
apache-2.0
Now we can also use the multibootstrap as a statistical test to check for differences between these seeds. We'll compare seed 0 to seed 1, and do an unpaired analysis:
#@title Bootstrap to test if seed 1 is different from seed 0 num_bootstrap_samples = 1000 #@param {type: "integer"} rseed=42 mask = (run_info.group_name == 'base_extra_seeds') mask &= (run_info.pretrain_seed == 0) | (run_info.pretrain_seed == 1) selected_runs = run_info[mask].copy() # Set intervention and seed colum...
language/multiberts/coref.ipynb
google-research/language
apache-2.0
Section 4.1 / Table 1: Paired analysis: base vs. CDA intervention We've seen how much variation there can be across pretraining checkpoints, so let's use the multibootstrap to help us get a better estimate of the effectiveness of CDA. Here, we'll look at CDA for 50k steps as an intervention on the base checkpoints, and...
num_bootstrap_samples = 1000 #@param {type: "integer"} rseed=42 expt_group = "cda_intervention-50k" mask = (run_info.group_name == 'base') mask |= (run_info.group_name == expt_group) selected_runs = run_info[mask].copy() # Set intervention and seed columns selected_runs['intervention'] = selected_runs.group_name ==...
language/multiberts/coref.ipynb
google-research/language
apache-2.0
Plot result distribution It can also be illustrative to look directly at the distribution of samples:
#@title Bias r columns = ['Base', 'CDA intervention'] var_name = 'Group Name' val_name = "Bias Correlation" samples = all_samples['bias_r'] fig, axs = pyplot.subplots(1, 2, gridspec_kw=dict(width_ratios=[2, 1]), figsize=(15, 7)) bdf = pd.DataFrame(samples, columns=columns).melt(var_name=var_name, value_name=val_name)...
language/multiberts/coref.ipynb
google-research/language
apache-2.0
Section 4.2 / Table 2: Unpaired analysis: CDA intervention vs. CDA from-scratch Here, we'll compare our CDA 50k intervention to a set of models trained from-scratch with CDA data. base (L) is the intervention CDA above, and expt (L') is a similar setup but pretraining from scratch with the counterfactually-augmented dat...
num_bootstrap_samples = 1000 #@param {type: "integer"} rseed=42 base_group = "cda_intervention-50k" expt_group = "from_scratch" mask = (run_info.group_name == base_group) mask |= (run_info.group_name == expt_group) selected_runs = run_info[mask].copy() # Set intervention and seed columns selected_runs['intervention...
language/multiberts/coref.ipynb
google-research/language
apache-2.0
Do we actually need to do the full multibootstrap, where we sample over both seeds and examples simultaneously? We can check this with ablations where we sample over one axis only:
- Seeds only (sample_examples=False)
- Examples only (sample_seeds=False)
#@title As above, but sample seeds only rseed=42 metric = get_bias_corr samples = multibootstrap.multibootstrap(selected_runs, preds, labels, metric, nboot=num_bootstrap_samples, rng=rseed, paired_se...
language/multiberts/coref.ipynb
google-research/language
apache-2.0
50% ham in sample
make_plot(0.5)
examples/squiggle_classifier_1/Read_Until_Efficiency.ipynb
akloster/porekit-python
isc
10% ham in sample
make_plot(0.1)
examples/squiggle_classifier_1/Read_Until_Efficiency.ipynb
akloster/porekit-python
isc
1% ham in sample
make_plot(0.01)
examples/squiggle_classifier_1/Read_Until_Efficiency.ipynb
akloster/porekit-python
isc
Search Feature Sets Feature sets are the logical containers for genomic features that might be defined in a GFF3, or other file that describes features in genomic coordinates. They are mapped to a single reference set, and belong to specific datasets.
for feature_set in c.search_feature_sets(dataset_id=dataset.id):
    print feature_set
    if feature_set.name == "gencode_v24lift37":
        gencode = feature_set
python_notebooks/1kg_sequence_annotation_service.ipynb
david4096/bioapi-examples
apache-2.0
Get Feature Set by ID With the identifier to a specific Feature Set, one can retrieve that feature set by ID.
feature_set = c.get_feature_set(feature_set_id=gencode.id)
print feature_set
python_notebooks/1kg_sequence_annotation_service.ipynb
david4096/bioapi-examples
apache-2.0
Search Features With a Feature Set ID, it becomes possible to construct a Search Features Request. In this request, we can find genomic features by position, type, or name. In this request we simply return all features in the Feature Set.
counter = 0 for features in c.search_features(feature_set_id=feature_set.id): if counter > 3: break counter += 1 print"Id: {},".format(features.id) print" Name: {},".format(features.name) print" Gene Symbol: {},".format(features.gene_symbol) print" Parent Id: {},".format(features.parent_...
python_notebooks/1kg_sequence_annotation_service.ipynb
david4096/bioapi-examples
apache-2.0
Note: Not all of the elements returned in the response are present in the example. All of the parameters will be shown in the get by id method. We can perform a similar search, this time restricting to a specific genomic region.
for feature in c.search_features(feature_set_id=feature_set.id, reference_name="chr17", start=42000000, end=42001000): print feature.name, feature.start, feature.end feature = c.get_feature(feature_id=features.id) print"Id: {},".format(feature.id) print" Name: {},".format(feature.name) print" Gene Symbol: {},".for...
python_notebooks/1kg_sequence_annotation_service.ipynb
david4096/bioapi-examples
apache-2.0
Try changing the message in the previous code cell and re-running it. Does it behave as you expect? You may remember from the Getting Started WIth Notebooks.ipynb notebook that if the last statement in a code cell returns a value, the value will be displayed as the output of the code cell when the cell contents have be...
message
robotVM/notebooks/Demo - Square 2 - Variables.ipynb
psychemedia/ou-robotics-vrep
apache-2.0
You can assign whatever object you like to a variable. For example, we can assign numbers to them and do sums with them:
#Assign raw numbers to variables
apples = 5
oranges = 10

#Do a sum with the values represented by the variables and assign the result to a new variable
items_in_basket = apples + oranges

#Display the resulting value as the cell output
items_in_basket
robotVM/notebooks/Demo - Square 2 - Variables.ipynb
psychemedia/ou-robotics-vrep
apache-2.0
See if you can add the count of a new set of purchases to the number of items in your basket in the cell above. For example, what if you also bought 3 pears. And a bunch of bananas. Making Use of Variables Let's look back at our simple attempt at the square drawing program, in which we repeated blocks of instructions a...
%run 'Set-up.ipynb'
%run 'Loading scenes.ipynb'
%run 'vrep_models/PioneerP3DX.ipynb'
robotVM/notebooks/Demo - Square 2 - Variables.ipynb
psychemedia/ou-robotics-vrep
apache-2.0
The original programme appears in the code cell below.
- How many changes would you have to make to it in order to change the side length?
- Can you see how you might be able to simplify the act of changing the side length?
- What would you need to change if you wanted to make the turns faster? Or slower?

HINT: think vari...
%%vrepsim '../scenes/OU_Pioneer.ttt' PioneerP3DX import time #side 1 robot.move_forward() time.sleep(1) #turn 1 robot.rotate_left(1.8) time.sleep(0.45) #side 2 robot.move_forward() time.sleep(1) #turn 2 robot.rotate_left(1.8) time.sleep(0.45) #side 3 robot.move_forward() time.sleep(1) #turn 3 robot.rotate_left(1.8) ti...
robotVM/notebooks/Demo - Square 2 - Variables.ipynb
psychemedia/ou-robotics-vrep
apache-2.0
Using the above programme as a guide, see if you can write a programme in the code cell below that makes it easier to maintain and simplifies the act of changing the numerical parameter values.
%%vrepsim '../scenes/OU_Pioneer.ttt' PioneerP3DX
import time

#YOUR CODE HERE
robotVM/notebooks/Demo - Square 2 - Variables.ipynb
psychemedia/ou-robotics-vrep
apache-2.0
How did you get on? How easy is it to change the side length now? Or to find a new combination of the turn speed and turn angle to turn through ninety degrees (or thereabouts)? Try it and see... Here's the programme I came up with: I used three variables, one for side length, one for turn time, and one for turn speed. Fe...
%%vrepsim '../scenes/OU_Pioneer.ttt' PioneerP3DX import time side_length_time=1 turn_speed=1.8 turn_time=0.45 #side 1 robot.move_forward() time.sleep(side_length_time) #turn 1 robot.rotate_left(turn_speed) time.sleep(turn_time) #side 2 robot.move_forward() time.sleep(side_length_time) #turn 2 robot.rotate_left(turn_s...
robotVM/notebooks/Demo - Square 2 - Variables.ipynb
psychemedia/ou-robotics-vrep
apache-2.0
Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radia...
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
1.2. Model Name
Is Required: TRUE | Type: STRING | Cardinality: 1.1
Name of aerosol model code
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
1.3. Scheme Scope
Is Required: TRUE | Type: ENUM | Cardinality: 1.N
Atmospheric domains covered by the aerosol model
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.scheme_scope') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "troposhere" # "stratosphere" # "mesosphere" # "mesosphere" # "whole atmosphere" # "Other: [Please speci...
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
1.4. Basic Approximations
Is Required: TRUE | Type: STRING | Cardinality: 1.1
Basic approximations made in the aerosol model
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
1.5. Prognostic Variables Form
Is Required: TRUE | Type: ENUM | Cardinality: 1.N
Prognostic variables in the aerosol model
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "3D mass/volume ratio for aerosols" # "3D number concenttration for aerosols" # "Other: [Please specify]" # TO...
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
1.6. Number Of Tracers
Is Required: TRUE | Type: INTEGER | Cardinality: 1.1
Number of tracers in the aerosol model
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')

# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
1.7. Family Approach
Is Required: TRUE | Type: BOOLEAN | Cardinality: 1.1
Are aerosol calculations generalized into families of species?
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')

# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
#     True
#     False
# TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE | Type: STRING | Cardinality: 0.1
Location of code for this component.
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
2.2. Code Version
Is Required: FALSE | Type: STRING | Cardinality: 0.1
Code version identifier.
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
2.3. Code Languages
Is Required: FALSE | Type: STRING | Cardinality: 0.N
Code language(s).
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')

# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required: TRUE | Type: ENUM | Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Uses atmospheric chemistry time stepping" # "Specific timestepping (operator splitting)" # "Specific timestepping ...
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
3.2. Split Operator Advection Timestep
Is Required: FALSE | Type: INTEGER | Cardinality: 0.1
Timestep for aerosol advection (in seconds)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')

# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
3.3. Split Operator Physical Timestep
Is Required: FALSE | Type: INTEGER | Cardinality: 0.1
Timestep for aerosol physics (in seconds).
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')

# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0