Logistic Regression with a Neural Network mindset. Welcome to your first (required) programming assignment! You will build a logistic regression classifier to recognize cats. This assignment will step you through how to do this with a Neural Network mindset, and so will also hone your intuitions about deep learning. **I...
import numpy as np import matplotlib.pyplot as plt import h5py import scipy from PIL import Image from scipy import ndimage from lr_utils import load_dataset %matplotlib inline
Apache-2.0
Neural Networks and Deep Learning/Logistic Regression as a Neural Network/Week-2/Logistic Regression as a Neural Network/Logistic+Regression+with+a+Neural+Network+mindset+v5.ipynb
adyaan1989/Deep-Learning
2 - Overview of the Problem set **Problem Statement**: You are given a dataset ("data.h5") containing: - a training set of m_train images labeled as cat (y=1) or non-cat (y=0) - a test set of m_test images labeled as cat or non-cat - each image is of shape (num_px, num_px, 3) where 3 is for the 3 channels (RG...
# Loading the data (cat/non-cat) train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = load_dataset()
We added "_orig" at the end of image datasets (train and test) because we are going to preprocess them. After preprocessing, we will end up with train_set_x and test_set_x (the labels train_set_y and test_set_y don't need any preprocessing).Each line of your train_set_x_orig and test_set_x_orig is an array representing...
# Example of a picture index = 25 plt.imshow(train_set_x_orig[index]) print ("y = " + str(train_set_y[:, index]) + ", it's a '" + classes[np.squeeze(train_set_y[:, index])].decode("utf-8") + "' picture.")
y = [1], it's a 'cat' picture.
Many software bugs in deep learning come from having matrix/vector dimensions that don't fit. If you can keep your matrix/vector dimensions straight you will go a long way toward eliminating many bugs. **Exercise:** Find the values for: - m_train (number of training examples) - m_test (number of test examples) ...
### START CODE HERE ### (≈ 3 lines of code) m_train = train_set_x_orig.shape[0] m_test = test_set_x_orig.shape[0] num_px = train_set_x_orig.shape[1] ### END CODE HERE ### print ("Number of training examples: m_train = " + str(m_train)) print ("Number of testing examples: m_test = " + str(m_test)) print ("Height/Width ...
Number of training examples: m_train = 209 Number of testing examples: m_test = 50 Height/Width of each image: num_px = 64 Each image is of size: (64, 64, 3) train_set_x shape: (209, 64, 64, 3) train_set_y shape: (1, 209) test_set_x shape: (50, 64, 64, 3) test_set_y shape: (1, 50)
**Expected Output for m_train, m_test and num_px**: **m_train** 209 **m_test** 50 **num_px** 64 For convenience, you should now reshape images of shape (num_px, num_px, 3) into a numpy array of shape (num_px $*$ num_px $*$ 3, 1). After this, our training (and test) dataset is...
# Reshape the training and test examples ### START CODE HERE ### (≈ 2 lines of code) train_set_x_flatten = train_set_x_orig.reshape(train_set_x_orig.shape[0],-1).T test_set_x_flatten = test_set_x_orig.reshape(test_set_x_orig.shape[0],-1).T ### END CODE HERE ### print ("train_set_x_flatten shape: " + str(train_set_x_f...
train_set_x_flatten shape: (12288, 209) train_set_y shape: (1, 209) test_set_x_flatten shape: (12288, 50) test_set_y shape: (1, 50) sanity check after reshaping: [17 31 56 22 33]
**Expected Output**: **train_set_x_flatten shape** (12288, 209) **train_set_y shape** (1, 209) **test_set_x_flatten shape** (12288, 50) **test_set_y shape** (1, 50) **sanity check after reshaping** [17 31 56 22 33] To represent color images, the red, green and blue c...
train_set_x = train_set_x_flatten/255. test_set_x = test_set_x_flatten/255.
**What you need to remember:** Common steps for pre-processing a new dataset are: - Figure out the dimensions and shapes of the problem (m_train, m_test, num_px, ...) - Reshape the datasets such that each example is now a vector of size (num_px \* num_px \* 3, 1) - "Standardize" the data 3 - General Architecture of the le...
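A minimal sketch of those three pre-processing steps on synthetic data (the shapes are assumptions mirroring the cat dataset above, with 8 fake examples):

```python
import numpy as np

# Synthetic stand-in for the cat dataset: 8 examples of 64x64 RGB images
# (assumed shapes; not the real data).
X_orig = np.random.randint(0, 256, size=(8, 64, 64, 3))

# 1. Figure out the dimensions and shapes of the problem
m, num_px = X_orig.shape[0], X_orig.shape[1]

# 2. Reshape so that each example becomes one column vector
X_flat = X_orig.reshape(m, -1).T        # shape (num_px * num_px * 3, m)

# 3. "Standardize" the pixel values to [0, 1]
X = X_flat / 255.

print(X.shape)  # (12288, 8)
```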
# GRADED FUNCTION: sigmoid def sigmoid(z): """ Compute the sigmoid of z Arguments: z -- A scalar or numpy array of any size. Return: s -- sigmoid(z) """ ### START CODE HERE ### (≈ 1 line of code) s = 1 / (1 + (np.exp(-z))) ### END CODE HERE ### return s print ("sigmo...
sigmoid([0, 2]) = [ 0.5         0.88079708]
**Expected Output**: **sigmoid([0, 2])** [ 0.5 0.88079708] 4.2 - Initializing parameters**Exercise:** Implement parameter initialization in the cell below. You have to initialize w as a vector of zeros. If you don't know what numpy function to use, look up np.zeros() in the Numpy library's documen...
# GRADED FUNCTION: initialize_with_zeros def initialize_with_zeros(dim): """ This function creates a vector of zeros of shape (dim, 1) for w and initializes b to 0. Argument: dim -- size of the w vector we want (or number of parameters in this case) Returns: w -- initialized vector of...
w = [[ 0.] [ 0.]] b = 0
**Expected Output**: ** w ** [[ 0.] [ 0.]] ** b ** 0 For image inputs, w will be of shape (num_px $\times$ num_px $\times$ 3, 1). 4.3 - Forward and Backward propagationNow that your parameters are initialized, you can do the "forward" and "backward" propagation s...
# GRADED FUNCTION: propagate def propagate(w, b, X, Y): """ Implement the cost function and its gradient for the propagation explained above Arguments: w -- weights, a numpy array of size (num_px * num_px * 3, 1) b -- bias, a scalar X -- data of size (num_px * num_px * 3, number of examples) ...
dw = [[ 0.99845601] [ 2.39507239]] db = 0.00145557813678 cost = 5.801545319394553
**Expected Output**: ** dw ** [[ 0.99845601] [ 2.39507239]] ** db ** 0.00145557813678 ** cost ** 5.801545319394553 4.4 - Optimization- You have initialized your parameters.- You are also able to compute a cost function and its gradien...
# GRADED FUNCTION: optimize def optimize(w, b, X, Y, num_iterations, learning_rate, print_cost = False): """ This function optimizes w and b by running a gradient descent algorithm Arguments: w -- weights, a numpy array of size (num_px * num_px * 3, 1) b -- bias, a scalar X -- data of shap...
w = [[ 0.19033591] [ 0.12259159]] b = 1.92535983008 dw = [[ 0.67752042] [ 1.41625495]] db = 0.219194504541
**Expected Output**: **w** [[ 0.19033591] [ 0.12259159]] **b** 1.92535983008 **dw** [[ 0.67752042] [ 1.41625495]] **db** 0.219194504541 **Exercise:** The previous function will output the learned w and b. We are able to ...
# GRADED FUNCTION: predict def predict(w, b, X): ''' Predict whether the label is 0 or 1 using learned logistic regression parameters (w, b) Arguments: w -- weights, a numpy array of size (num_px * num_px * 3, 1) b -- bias, a scalar X -- data of size (num_px * num_px * 3, number of example...
predictions = [[ 1. 1. 0.]]
**Expected Output**: **predictions** [[ 1. 1. 0.]] **What to remember:**You've implemented several functions that:- Initialize (w,b)- Optimize the loss iteratively to learn parameters (w,b): - computing the cost and its gradient - updating t...
# GRADED FUNCTION: model def model(X_train, Y_train, X_test, Y_test, num_iterations = 2000, learning_rate = 0.5, print_cost = False): """ Builds the logistic regression model by calling the function you've implemented previously Arguments: X_train -- training set represented by a numpy array of sh...
Run the following cell to train your model.
d = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 2500, learning_rate = 0.009, print_cost = True)
Cost after iteration 0: 0.693147 Cost after iteration 100: 0.726194 Cost after iteration 200: 1.452277 Cost after iteration 300: 0.871654 Cost after iteration 400: 0.617655 Cost after iteration 500: 0.409132 Cost after iteration 600: 0.248640 Cost after iteration 700: 0.168364 Cost after iteration 800: 0.150399 Cost af...
**Expected Output**: **Cost after iteration 0 ** 0.693147 $\vdots$ $\vdots$ **Train Accuracy** 99.04306220095694 % **Test Accuracy** 70.0 % **Comment**: Training accuracy is close to 100%. This is a ...
# Example of a picture that was wrongly classified. index = 1 plt.imshow(test_set_x[:,index].reshape((num_px, num_px, 3))) print ("y = " + str(test_set_y[0,index]) + ", you predicted that it is a \"" + classes[d["Y_prediction_test"][0,index]].decode("utf-8") + "\" picture.")
y = 1, you predicted that it is a "cat" picture.
Let's also plot the cost function and the gradients.
# Plot learning curve (with costs) costs = np.squeeze(d['costs']) plt.plot(costs) plt.ylabel('cost') plt.xlabel('iterations (per hundreds)') plt.title("Learning rate =" + str(d["learning_rate"])) plt.show()
**Interpretation**: You can see the cost decreasing. It shows that the parameters are being learned. However, you see that you could train the model even more on the training set. Try to increase the number of iterations in the cell above and rerun the cells. You might see that the training set accuracy goes up, but the...
learning_rates = [0.01, 0.001, 0.0001] models = {} for i in learning_rates: print ("learning rate is: " + str(i)) models[str(i)] = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 1500, learning_rate = i, print_cost = False) print ('\n' + "--------------------------------------------...
learning rate is: 0.01 train accuracy: 99.52153110047847 % test accuracy: 68.0 % ------------------------------------------------------- learning rate is: 0.001 train accuracy: 88.99521531100478 % test accuracy: 64.0 % ------------------------------------------------------- learning rate is: 0.0001 train accuracy: ...
**Interpretation**: - Different learning rates give different costs and thus different prediction results. - If the learning rate is too large (0.01), the cost may oscillate up and down. It may even diverge (though in this example, using 0.01 still eventually ends up at a good value for the cost). - A lower cost doesn'...
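As a toy illustration (mine, not part of the assignment), gradient descent on $f(w) = w^2$ shows the same effect: the gradient is $2w$, so each update is $w \leftarrow w - \alpha \cdot 2w = (1 - 2\alpha)w$, which shrinks when $|1 - 2\alpha| < 1$ and blows up otherwise:

```python
def run_gd(lr, w0=1.0, steps=20):
    """Gradient descent on f(w) = w**2, whose gradient is 2*w."""
    w = w0
    for _ in range(steps):
        w = w - lr * 2 * w      # equivalent to w *= (1 - 2*lr)
    return w

print(run_gd(0.1))   # factor  0.8 per step: converges toward 0
print(run_gd(0.6))   # factor -0.2 per step: oscillates but still converges
print(run_gd(1.1))   # factor -1.2 per step: diverges
```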
## START CODE HERE ## (PUT YOUR IMAGE NAME) my_image = "my_image.jpg" # change this to the name of your image file ## END CODE HERE ## # We preprocess the image to fit your algorithm. # Note: scipy.ndimage.imread and scipy.misc.imresize were removed in newer SciPy; # the Pillow Image class imported above does the same job. fname = "images/" + my_image image = np.array(Image.open(fname)) my_image = np.array(Image.fromarray(image).resize((num...
y = 0.0, your algorithm predicts a "non-cat" picture.
1. Load the data
import pandas as pd df = pd.read_csv('50_Startups.csv') df.head()
MIT
R&D_Profit-visualize-loss.ipynb
nwmsno1/tensorflow_base
2. Normalize the data
def normalize_feature(df): return df.apply(lambda column: (column - column.mean())/column.std()) df = normalize_feature(df[['R&D Spend', 'Marketing Spend', 'Profit']]) df.head() # 3D data exploration # import matplotlib.pyplot as plt # from mpl_toolkits import mplot3d # fig = plt.figure() # ax = plt.axes(projection='3d') # a...
3. Prepare the data
import numpy as np # To make the matrix multiplication convenient, add a column x0 of ones ones = pd.DataFrame({'ones': np.ones(len(df))}) # ones is an n-row, 1-column data frame where x0 is always 1 df = pd.concat([ones, df], axis=1) # concatenate the data column-wise X_data = np.array(df[df.columns[0:3]]) Y_data = np.array(df[df.columns[-1]]).reshape(len(df), 1) print(X_data.shape, type(X_data)) print(Y_data.shape, type(Y...
(50, 3) <class 'numpy.ndarray'> (50, 1) <class 'numpy.ndarray'>
4. Build the linear regression model (dataflow graph)
import tensorflow as tf tf.reset_default_graph() # https://www.cnblogs.com/demo-deng/p/10365889.html alpha = 0.01 # learning rate epoch = 500 # number of passes over the full training set # Build the linear regression model (dataflow graph) with tf.name_scope('input'): # Input X, shape [50, 3] X = tf.placeholder(tf.float32, X_data.shape, name='X') # Input Y, shape [50, 1] Y = tf.placeholder(tf....
5. Create the session (runtime)
# Create the session (runtime) with tf.Session() as sess: # Initialize global variables sess.run(tf.global_variables_initializer()) # Create a FileWriter instance writer = tf.summary.FileWriter("./summary/linear-regression-1/", sess.graph) loss_data = [] # Start training the model # The training set is small, so instead of mini-batch gradient descent we train on the full dataset at every step for e in range(1, epoch+1):...
Epoch 10 Loss=0.3661 Model: y = 0.09726x1 + 0.07332x2 + 7.451e-11 Epoch 20 Loss=0.2701 Model: y = 0.1796x1 + 0.1324x2 + 7.078e-10 Epoch 30 Loss=0.2035 Model: y = 0.2495x1 + 0.1798x2 + 2.98e-10 Epoch 40 Loss=0.1571 Model: y = 0.3091x1 + 0.2174x2 + -1.155e-09 Epoch 50 Loss=0.1247 Model: y = 0.36x1 + 0...
6. Visualize the loss
print(len(loss_data), loss_data) import seaborn as sns import matplotlib.pyplot as plt sns.set(context='notebook', style='whitegrid', palette='dark') ax = sns.lineplot(x='epoch', y='loss', data=pd.DataFrame({'loss': loss_data, 'epoch': np.arange(epoch/10)})) ax.set_xlabel('epoch') ax.set_ylabel('loss') plt.show()
Figure 4 SM General
# default print properties multiplier = 2 pixel_cm_ration = 36.5 width_full = int(13.95 * pixel_cm_ration) * multiplier width_half = int(13.95/2 * pixel_cm_ration) * multiplier height_default_1 = int(4 * pixel_cm_ration) * multiplier # margins in pixel top_margin = 5 * multiplier left_margin = 35 * multiplier rig...
MIT
reproduce_paper_figures/make_sm_figure_4.ipynb
flowersteam/holmes
Dependence of diversity on number of bins
# General functions to load data def calc_number_explored_bins(vectors, data_filter_inds, bin_config, ignore_out_of_range_values=True): number_explored_bins_per_step = [] step_idx = 0 # if there is a filter, fill all initial timesteps where there is no filtered entity with zero if data_filter_inds i...
BC Elliptical Fourier Analytic Space - SLP
# Collect Data new_data = dict() for cur_num_of_bins in num_of_bins_per_dimension: cur_diversity = calc_number_explored_bins_for_experiments( experiment_definitions, experiment_statistics, BC_ellipticalfourier_analytic_space_ranges, num_of_bins_per_dimension=cur_num_of_bins, ...
/home/mayalen/miniconda3/envs/holmes/lib/python3.6/site-packages/plotly/tools.py:465: DeprecationWarning: plotly.tools.make_subplots is deprecated, please use plotly.subplots.make_subplots instead
BC Lenia Statistics Analytic Space - TLP
# Collect Data new_data = dict() for cur_num_of_bins in num_of_bins_per_dimension: cur_diversity = calc_number_explored_bins_for_experiments( experiment_definitions, experiment_statistics, BC_leniastatistics_analytic_space_ranges, num_of_bins_per_dimension=cur_num_of_bins, ...
Compare the LRG optical-selection systematics between the DES and DECaLS regions. We do LASSO-based training using a healpix map of `nside=128` but do the testing using a healpix map of `nside=32` to reduce variance. Also, we fit the model **only** to the DECaLS region, but testing is done on **both** DES and DECaLS.
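The `nside=128` to `nside=32` step can be pictured with a small numpy sketch (the synthetic map and names are mine; for real maps `healpy.ud_grade` does this, including RING-ordering and bad-pixel handling). In NESTED ordering, coarse pixel `p` at `nside=32` contains the 16 contiguous child pixels at `nside=128`, so degrading is a reshape plus a mean, which is also why the coarser map has lower per-pixel variance:

```python
import numpy as np

nside_hi, nside_lo = 128, 32
npix_hi = 12 * nside_hi**2                 # 196608 pixels at nside=128
ratio = (nside_hi // nside_lo) ** 2        # 16 children per coarse pixel

rng = np.random.default_rng(0)
map_hi = rng.normal(size=npix_hi)          # fake noise-dominated density map

# NESTED ordering: children of coarse pixel p are contiguous,
# so a reshape groups them and the mean degrades the map.
map_lo = map_hi.reshape(-1, ratio).mean(axis=1)

print(map_lo.shape)                        # (12288,) = 12 * 32**2
print(map_hi.std() / map_lo.std())         # ~4: averaging 16 pixels cuts noise by sqrt(16)
```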
import pandas as pd import numpy as np import healpy as hp import matplotlib.pyplot as plt import matplotlib.lines as lines from astropy.table import Table as T from astropy.coordinates import SkyCoord from scipy.stats import binned_statistic, iqr from sklearn.linear_model import SGDRegressor from sklearn.preproces...
MIT
targeting_systematics/LRG_DESvsDECaLS_Optical.ipynb
biprateep/DESI-notebooks
Load the data and select only `DECaLS`
hpTable = T.read("/home/bid13/code/desi/DESI-LASSO/data_new/heapix_map_lrg_optical_nominal_20191024_clean_combined_128.fits") pix_area = hp.pixelfunc.nside2pixarea(128, degrees=True) #Moving to pandas data=hpTable.to_pandas() data=data.dropna() data=data.reset_index(drop=True) data["region"] = data["region"].str.decod...
Create a linear model to predict surface density while performing variable selection using LASSO **Weighted LASSO trained using Stochastic Gradient Descent** LASSO is a regularized linear regression method which sets the slopes of unimportant predictors to zero. The penalizing coefficient $\alpha$ is fixed using a grid ...
alpha_sel = 0.8 #Weighted LASSO lasso_sgd = SGDRegressor(loss="squared_loss", penalty="l1", l1_ratio=1, alpha=alpha_sel, random_state=200, tol=1e-6, max_iter=100000, eta0=1e-4) lasso_sgd.fit(scaled_data, data.density, sample_weight=data["weight"])
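The grid search over $\alpha$ mentioned above is not shown in this cell; a hypothetical sketch on synthetic data (the variables and grid values here are mine, not the notebook's) could look like:

```python
import numpy as np
from sklearn.linear_model import SGDRegressor
from sklearn.model_selection import GridSearchCV

# Synthetic regression problem: only the first feature matters,
# so an L1 penalty should be able to zero out the rest.
rng = np.random.RandomState(0)
X = rng.normal(size=(200, 5))
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=200)

# Cross-validated grid search over the penalty strength alpha.
search = GridSearchCV(
    SGDRegressor(penalty="l1", l1_ratio=1, tol=1e-6,
                 max_iter=10000, random_state=200),
    param_grid={"alpha": [1e-4, 1e-3, 1e-2, 1e-1, 1.0]},
    cv=5,
)
search.fit(X, y)
print(search.best_params_)
```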
Test the trained model with DES region with `nside=32` Load the data and select only `DES+DECaLS`
hpTable_32 = T.read("/home/bid13/code/desi/DESI-LASSO/data_new/heapix_map_lrg_optical_nominal_20191024_clean_combined_32.fits") data_32 = hpTable_32.to_pandas() data_32 = data_32.dropna() data_32 = data_32.reset_index(drop=True) data_32["region"] = data_32["region"].str.decode("utf-8") data_32 = data_32[data_32["regio...
The distribution of fractional total residuals Total fractional residuals are defined as: $\dfrac{\text{(observed density - predicted density)}}{\text{observed density}}$ Here we compare the total fractional residuals for the linear model trained on `DECaLS` vs. the fractional deviation from a ‘constant-only’ model ...
#Linear Model data_32["lin_res"] = (data_32["density"] - lasso_sgd.predict(scaled_data_32)) data_32["frac_lin_res"] = data_32["lin_res"]/data_32["density"] #constant-only Model data_32["cons_res"] = (data_32["density"] - np.mean(data_32[data_32["region"]=="decals"]["density"])) data_32["frac_cons_res"] = data_32["cons...
Print statistics for the fractional residuals **Linear Model**
print("DES+Decals:") print("Mean:", round(data_32["lin_res"].sum()/data_32["density"].sum(), 4), "Median:", round(np.median(data_32["frac_lin_res"]),4)) print() print("DES:") print("Mean:", round(data_32_des["lin_res"].sum()/data_32_des["density"].sum(),4), "Median:", round(np.median(data_32_des["frac_lin_res"]),4)) pr...
DES+Decals: Mean: -0.001 Median: -0.0062 DES: Mean: -0.0091 Median: -0.0196 DECaLS: Mean: 0.0002 Median: -0.0049
**Constant-only (i.e., mean density) Model**
print("DES+Decals:") print("Mean:", round(data_32["cons_res"].sum()/data_32["density"].sum(), 4), "Median:", round(np.median(data_32["frac_cons_res"]),4)) print() print("DES:") print("Mean:", round(data_32_des["cons_res"].sum()/data_32_des["density"].sum(),4), "Median:", round(np.median(data_32_des["frac_cons_res"]),4)...
DES+Decals: Mean: 0.0001 Median: -0.0053 DES: Mean: 0.0006 Median: -0.0091 DECaLS: Mean: 0.0 Median: -0.0051
**Summary:** There is about 0.1% difference in the average density of LRGs between the `DES` and `DECaLS` regions. However, if one fits a linear model for the dependence of LRG density on imaging properties and systematics using only the `DECaLS` area, one predicts a density difference comparable to this. Residuals...
fig, axs = plt.subplots(3,4, figsize = (18,12)) fig.delaxes(axs[2][3]) axs = axs.flatten() axs_twin = [ax.twinx() for ax in axs] fig.delaxes(axs_twin[-1]) scaled_32_des = scaler.transform(data_32_des[columns]) array_des = np.array(data_32_des[columns]) array_decals = np.array(data_32_decals[columns]) array_data = np...
**Summary:** We see that the linear model fitted to `DECaLS` tends to describe the offsets to `DES` pretty well. This also shows that selection is quite uniform across the two regions. Plot Healpix maps of the residuals **Fractional residuals from linear model**
hp_map = plot_hpix(data_32, 32, "frac_lin_res", region="bm")
**Fractional Residuals from constant-only model**
hp_map = plot_hpix(data_32, 32, "frac_cons_res", region="bm")
Building a csv of all your PBs-- a short story by baldnate Get user id
import json import pandas as pd import requests import math users = {} def getUserId(username): if username not in users: url = "https://www.speedrun.com/api/v1/users?name=" + username data = requests.get(url).json()['data'] if len(data) == 1: users[username] = data[0]['id'] else: raise E...
BSD-3-Clause
playground.ipynb
baldnate/src-pbs-to-csv
Get PBs
def getPBs(userid, all = False): url = "https://www.speedrun.com/api/v1/users/" + userid + "/personal-bests?embed=game,category,region,platform,players" data = requests.get(url).json()['data'] pbdf = pd.DataFrame(data) pbdf = pbdf.join(pbdf['run'].apply(pd.Series), rsuffix='run') pbdf.drop(axis=1, columns=['r...
Co-Op - aka: write player(s) to a column "Simple" Columns
runsdf['place'] = rawdf['place'] runsdf['gamename'] = rawdf.apply(lambda x: x.game['data']['names']['international'], axis=1) runsdf['categoryname'] = rawdf.apply(lambda x: x.category['data']['name'], axis=1) runsdf['time'] = rawdf.apply(lambda x: x.times['primary_t'], axis=1) runsdf['date'] = rawdf.apply(lambda x: x.d...
Columns that need optional handling
def getRegion(x): if len(x.region['data']) == 0: return "None" else: return x.region['data']['name'] runsdf['regionname'] = rawdf.apply(lambda x: getRegion(x), axis=1) def getPlatform(x): if len(x.platform['data']) == 0: return "None" else: return x.platform['data']['name'] runsdf['platformna...
Sub-CategoriesMemoized for speed and kindness.
varMemo = {} def getVariable(variableid): if variableid not in varMemo: url = "https://www.speedrun.com/api/v1/variables/" + variableid response = requests.get(url) varMemo[variableid] = response.json()['data'] return varMemo[variableid] def getValue(variableid, valueid): var = getVariable(variableid...
Dump to a csv
def getPlayers(x): players = [] for p in x.players['data']: players.append(p['names']['international']) return ", ".join(players) runsdf['players'] = rawdf.apply(lambda x: getPlayers(x), axis=1) import csv runsdf.fillna(value="").to_csv('runs.csv', index=False, quoting=csv.QUOTE_NONNUMERIC) print('csv expor...
Example in R. Intro to notebooks - Jupyter notebooks - Run a cell with Ctrl + Enter - Shift + Enter to run and move to the next cell. Step 1: First we need to load the modules we need in R. Select the next cell and press Shift + Enter.
# Load libraries library(httr) # Library for HTTP requests library(rjstat) # Library for handling the JSON-stat format
MIT
.ipynb_checkpoints/R arbeidsbok-checkpoint.ipynb
pandaAPIkurs/kursdata
Step 2: Now we build the query, which consists of the URL (pointing to the application that will run the request) and the query text (which we fetch from SSB). Here you can substitute your own queries. Remember to: - use `'` at the start and end of the query text - change the table number in the URL (the five digits at the end...
# Commuting url <-'https://data.ssb.no/api/v0/no/table/11616/' data <- '{ "query": [ { "code": "Region", "selection": { "filter": "all", "values": [ "*" ] } }, { "code": "ContentsCode", "selection": { "filter": "item", "value...
Step 3: Now we send the query to SSB and receive the response. If we have done everything right, the response will contain the data we requested.
temp <- POST(url , body = data, encode = "json", verbose())
We can also inspect the metadata of the response:
print(temp)
Step 4: Once we have received status 200 we can look at the dataset we downloaded. Try replacing `naming = "id"` with `naming = "label"` and run the cell again. What happens?
# Store the response in a variable tabell <- fromJSONstat(content(temp, "text"), naming = "id", use_factors = F) # Show the first rows of the table head(tabell)
Step 5: If we want to store the data locally, we can write it to a text file. It will be saved in the same folder as this script. It is also possible to enter a file path on your own machine if you want to save it elsewhere. Otherwise it can be downloaded from the menu on the left (under the folder icon).
write.csv(tabell, "11616.csv")
Bonus - fetch metadata for the table
content(temp)$updated # Shows when SSB last updated the data content(temp)$label # Shows the title of the table content(temp)$source # Shows who owns the table
How specutils and muler propagate uncertaintyIn this notebook we explore how `specutils` does uncertainty propagation, and how to combine two spectra with different---but overlapping---extents. That is, if a spectrum has close to, but not exactly the same wavelengths, how does specutils combine them? These two issue...
from specutils import Spectrum1D import numpy as np from astropy.nddata import StdDevUncertainty import astropy.units as u import matplotlib.pyplot as plt %config InlineBackend.figure_format='retina'
MIT
docs/tutorials/Combining_uncertainties_with_specutils.ipynb
dkrolikowski/muler
Spectrum 1: $S/N=20$First we'll make a spectrum with signal-to-noise ratio equal to 20, and mean of 1.0.
N_points = 300 fake_wavelength = np.linspace(500, 600, num=N_points)*u.nm mean_val, sigma = 1.0, 0.05 snr = mean_val / sigma known_uncertainties = np.repeat(sigma, N_points) * u.Watt / u.cm**2 fake_flux = np.random.normal(loc=mean_val, scale=known_uncertainties) * u.Watt / u.cm**2 spec1 = Spectrum1D(spectral_axis=fake_...
Spectrum 2: $S/N = 50$ and *conspicuously* offset in wavelengthNow we'll make a spectrum with signal-to-noise ratio equal to 50, and mean of 0.5. The wavelength axes are *offset* by 10 nanometers.
N_points2 = N_points fake_wavelength2 = np.linspace(510, 610, num=N_points2)*u.nm mean_val2, sigma2 = 0.5, 0.01 snr = mean_val2 / sigma2 known_uncertainties2 = np.repeat(sigma2, N_points2) * u.Watt / u.cm**2 fake_flux2 = np.random.normal(loc=mean_val2, scale=known_uncertainties2) * u.Watt / u.cm**2 spec2 = Spectrum1D(s...
Add Spectrum 1 and Spectrum 2: What happens? We expect the uncertainties to add *in quadrature*: $$ \sigma_{net} = \sqrt{\sigma_1^2 + \sigma_2^2} = \sqrt{0.05^2 + 0.01^2} $$
np.hypot(0.05, 0.01) spec_net = spec1 + spec2 spec_net.uncertainty[0:7]
Woohoo! Specutils *automatically* propagates the error correctly! You can turn this error propagation *off* (I'm not sure why you would want to) by calling the method with a kwarg:
spec_net_no_error_propagation = spec1.add(spec2, propagate_uncertainties=False) spec_net_no_error_propagation.uncertainty[0:7]
Wait, but what about the offset? How did it deal with the non-overlapping edges?
plt.errorbar(spec1.wavelength.value, spec1.flux.value, yerr=spec1.uncertainty.array, linestyle='none', marker='o', label='Spec1: $S/N=20$') plt.errorbar(spec2.wavelength.value, spec2.flux.value, yerr=spec2.uncertainty.array, linestyle='none', marker='o', markersize=1, label='Spec2: $S/N=50$...
_____no_output_____
MIT
docs/tutorials/Combining_uncertainties_with_specutils.ipynb
dkrolikowski/muler
Whoa! Specutils behaves as if the signals were *aligned to Spectrum 1*. That is probably not the desired behavior for an offset as extreme as this one, but it may be "good enough" for spectra that are exactly aligned or within a pixel of each other. It depends on your science application. PRV applications should not just rou...
spec_alt = spec2 + spec1 plt.errorbar(spec1.wavelength.value, spec1.flux.value, yerr=spec1.uncertainty.array, linestyle='none', marker='o', label='Spec1: $S/N=20$') plt.errorbar(spec2.wavelength.value, spec2.flux.value, yerr=spec2.uncertainty.array, linestyle='none', marker='o', markersize=...
_____no_output_____
MIT
docs/tutorials/Combining_uncertainties_with_specutils.ipynb
dkrolikowski/muler
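Before relying on specutils' default behavior, it can be worth checking whether two wavelength grids actually line up. A minimal sketch with plain NumPy arrays (the `axes_aligned` helper and its tolerance are illustrative, not part of specutils):

```python
import numpy as np

def axes_aligned(wl1, wl2, tol=0.0):
    """Return True if two wavelength grids agree to within tol (same units assumed)."""
    wl1, wl2 = np.asarray(wl1), np.asarray(wl2)
    return wl1.shape == wl2.shape and np.allclose(wl1, wl2, atol=tol, rtol=0.0)

wl1 = np.linspace(500, 600, 300)   # like Spectrum 1, in nm
wl2 = np.linspace(510, 610, 300)   # like Spectrum 2: offset by 10 nm

print(axes_aligned(wl1, wl1))  # True
print(axes_aligned(wl1, wl2))  # False -- the 10 nm offset fails the check
```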
Weird, so the wavelengths of the result are taken from the bounds of the *first* argument. This means "addition is not commutative" in specutils. Let's see why:
spec_net.spectral_axis spec_alt.spectral_axis spec1.add(spec2, compare_wcs=None).spectral_axis
_____no_output_____
MIT
docs/tutorials/Combining_uncertainties_with_specutils.ipynb
dkrolikowski/muler
Pixels instead of nanometers!
spec2.add(spec1, compare_wcs='first_found').spectral_axis spec1.add(spec2, compare_wcs='first_found').spectral_axis
_____no_output_____
MIT
docs/tutorials/Combining_uncertainties_with_specutils.ipynb
dkrolikowski/muler
A ha! The `compare_wcs` kwarg controls what to do with the mismatched spectral axes. Basically, when `compare_wcs='first_found'`---the default---is provided, the sum of the two spectra just takes the wavelength labels from the first spectrum and uses those. It doesn't actually interpolate or do anything fancy... Resam...
from specutils.manipulation import FluxConservingResampler, LinearInterpolatedResampler resampler = FluxConservingResampler(extrapolation_treatment='nan_fill') %%capture #This method throws a warning for an unknown reason... resampled_spec2 = resampler(spec2, spec1.spectral_axis) np.all(resampled_spec2.wavelength == s...
_____no_output_____
MIT
docs/tutorials/Combining_uncertainties_with_specutils.ipynb
dkrolikowski/muler
Hmmm... this process returns a new type of uncertainty, "inverse variance", instead of standard deviation... I'm not sure why! We'll convert it back manually: variance is just the standard deviation squared, and inverse variance is one over that, so $\sigma = \sqrt{1/\mathrm{ivar}}$.
from astropy.nddata import StdDevUncertainty new_sigma = np.sqrt(1/resampled_spec2.uncertainty.array) resampled_spec2.uncertainty = StdDevUncertainty(new_sigma) spec_final = spec1.add(resampled_spec2, propagate_uncertainties=True) spec_final.uncertainty[0:50]
_____no_output_____
MIT
docs/tutorials/Combining_uncertainties_with_specutils.ipynb
dkrolikowski/muler
Voila! It worked! I am not sure why specutils does not go the whole way and do this conversion for you automatically.
plt.errorbar(spec1.wavelength.value, spec1.flux.value, yerr=spec1.uncertainty.array, linestyle='none', marker='o', label='Spec1: $S/N=20$') plt.errorbar(spec2.wavelength.value, spec2.flux.value, yerr=spec2.uncertainty.array, linestyle='none', marker='o', markersize=1, label='Spec2: $S/N=50$...
_____no_output_____
MIT
docs/tutorials/Combining_uncertainties_with_specutils.ipynb
dkrolikowski/muler
Copyright 2018 The TensorFlow Authors.
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under...
_____no_output_____
MIT
Intro to TensorFlow for Deep Learning/08_01_common_patterns.ipynb
Shahid1993/udacity-courses
Common patterns Setup
from __future__ import absolute_import, division, print_function, unicode_literals import numpy as np import matplotlib.pyplot as plt def plot_series(time, series, format="-", start=0, end=None, label=None): plt.plot(time[start:end], series[start:end], format, label=label) plt.xlabel("Time") plt.ylabel("Val...
_____no_output_____
MIT
Intro to TensorFlow for Deep Learning/08_01_common_patterns.ipynb
Shahid1993/udacity-courses
Trend and Seasonality
def trend(time, slope=0): return slope * time
_____no_output_____
MIT
Intro to TensorFlow for Deep Learning/08_01_common_patterns.ipynb
Shahid1993/udacity-courses
Let's create a time series that just trends upward:
time = np.arange(4 * 365 + 1) baseline = 10 series = baseline + trend(time, 0.1) plt.figure(figsize=(10, 6)) plot_series(time, series) plt.show() time series
_____no_output_____
MIT
Intro to TensorFlow for Deep Learning/08_01_common_patterns.ipynb
Shahid1993/udacity-courses
Now let's generate a time series with a seasonal pattern:
def seasonal_pattern(season_time): """Just an arbitrary pattern, you can change it if you wish""" return np.where(season_time < 0.4, np.cos(season_time * 2 * np.pi), 1 / np.exp(3 * season_time)) def seasonality(time, period, amplitude=1, phase=0): """Repeats the same...
_____no_output_____
MIT
Intro to TensorFlow for Deep Learning/08_01_common_patterns.ipynb
Shahid1993/udacity-courses
Now let's create a time series with both trend and seasonality:
slope = 0.05 series = baseline + trend(time, slope) + seasonality(time, period=365, amplitude=amplitude) plt.figure(figsize=(10, 6)) plot_series(time, series) plt.show()
_____no_output_____
MIT
Intro to TensorFlow for Deep Learning/08_01_common_patterns.ipynb
Shahid1993/udacity-courses
Noise In practice, few real-life time series have such a smooth signal. They usually have some noise, and the signal-to-noise ratio can sometimes be very low. Let's generate some white noise:
def white_noise(time, noise_level=1, seed=None): rnd = np.random.RandomState(seed) return rnd.randn(len(time)) * noise_level noise_level = 5 noise = white_noise(time, noise_level, seed=42) plt.figure(figsize=(10, 6)) plot_series(time, noise) plt.show()
_____no_output_____
MIT
Intro to TensorFlow for Deep Learning/08_01_common_patterns.ipynb
Shahid1993/udacity-courses
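As a sanity check, the injected noise level can be recovered empirically from the generated series (a sketch reusing the `white_noise` generator defined above):

```python
import numpy as np

def white_noise(time, noise_level=1, seed=None):
    rnd = np.random.RandomState(seed)
    return rnd.randn(len(time)) * noise_level

time = np.arange(4 * 365 + 1)
noise = white_noise(time, noise_level=5, seed=42)

# With ~1500 samples, the sample standard deviation should sit close
# to the injected noise_level of 5.
print(noise.std())
```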
Now let's add this white noise to the time series:
series += noise plt.figure(figsize=(10, 6)) plot_series(time, series) plt.show()
_____no_output_____
MIT
Intro to TensorFlow for Deep Learning/08_01_common_patterns.ipynb
Shahid1993/udacity-courses
There are some missing values for the product_description attribute in the training set.
# structure of the test set crowd_test.info()
<class 'pandas.core.frame.DataFrame'> Int64Index: 22513 entries, 3 to 32671 Data columns (total 3 columns): query 22513 non-null object product_title 22513 non-null object product_description 17086 non-null object dtypes: object(3) memory usage: 439.7+ KB
MIT
Kaggle-Competitions/CrowdFlower/Initial Analysis.ipynb
gopala-kr/ds-notebooks
There are missing values for the product_description attribute in the test set.
# figuring out unique query terms in training set train_query = crowd_train['query'].unique() # figuring out unqiue query terms in test set test_query = crowd_test['query'].unique() # unique values in the training set train_query[:10] # unique values in the test set test_query[:10] # lets find out those queries that ov...
_____no_output_____
MIT
Kaggle-Competitions/CrowdFlower/Initial Analysis.ipynb
gopala-kr/ds-notebooks
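The missing-value counts hinted at by `.info()` can be made explicit with `isnull()`. A sketch on a toy stand-in frame, since the actual CrowdFlower data is not reproduced here (the example rows are invented):

```python
import pandas as pd

# A toy stand-in for the CrowdFlower test frame (the real one has 22513 rows;
# these three rows are invented for illustration).
crowd_test = pd.DataFrame({
    "query": ["led tv", "led tv", "toaster"],
    "product_title": ["Samsung 42in LED", "LG 47in LED", "Breville toaster"],
    "product_description": ["A 42 inch LED TV.", None, None],
})

# Explicit per-column missing-value counts, matching what .info() implies
print(crowd_test.isnull().sum())
```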
*This notebook contains course material from [CBE30338](https://jckantor.github.io/CBE30338)by Jeffrey Kantor (jeff at nd.edu); the content is available [on Github](https://github.com/jckantor/CBE30338.git).The text is released under the [CC-BY-NC-ND-4.0 license](https://creativecommons.org/licenses/by-nc-nd/4.0/legalc...
%matplotlib inline import numpy as np import matplotlib.pyplot as plt from scipy.integrate import odeint # parameter values mumax = 0.20 # 1/hour Ks = 1.00 # g/liter Yxs = 0.5 # g/g Ypx = 0.2 # g/g Sf = 10.0 # g/liter # inlet flowrate def F(t): return 0.05 # reaction rates...
_____no_output_____
MIT
Mathematics/Mathematical Modeling/02.07-Fed-Batch-Bioreactor.ipynb
okara83/Becoming-a-Data-Scientist
Simulation
IC = [0.05, 0.0, 10.0, 1.0] t = np.linspace(0,50) sol = odeint(xdot,IC,t) X,P,S,V = sol.transpose() plt.plot(t,X) plt.plot(t,P) plt.plot(t,S) plt.plot(t,V) plt.xlabel('Time [hr]') plt.ylabel('Concentration [g/liter]') plt.legend(['Cell Conc.', 'Product Conc.', 'Substrate Conc.', '...
_____no_output_____
MIT
Mathematics/Mathematical Modeling/02.07-Fed-Batch-Bioreactor.ipynb
okara83/Becoming-a-Data-Scientist
Movie with u, v, w, $\rho$, tr, vorticity alongshore section
#KRM import matplotlib.pyplot as plt import matplotlib.colors as mcolors import matplotlib as mpl #from MITgcmutils import rdmds # not working #%matplotlib inline import os from netCDF4 import Dataset import numpy as np import pandas as pd import seaborn as sns import struct import xarray as xr import canyon_tools.read...
_____no_output_____
Apache-2.0
MoviesNotebooks/MoviesFlowDescription.ipynb
UBC-MOAD/outputanalysisnotebooks
Functions
def rel_vort(x,y,u,v): """----------------------------------------------------------------------------- rel_vort calculates the z component of relative vorticity. INPUT: x,y,u,v should be at least 2D arrays in coordinate order (..., Y , X ) OUTPUT: relvort - z-relative vorticity array...
_____no_output_____
Apache-2.0
MoviesNotebooks/MoviesFlowDescription.ipynb
UBC-MOAD/outputanalysisnotebooks
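The quantity computed by `rel_vort` is the z-component of relative vorticity, $\zeta = \partial v/\partial x - \partial u/\partial y$. A minimal self-contained sketch using centred differences via `np.gradient` (this assumes a uniform grid and is not the notebook's exact implementation):

```python
import numpy as np

def rel_vort_sketch(x, y, u, v):
    """z-relative vorticity dv/dx - du/dy via centred differences (uniform grid)."""
    dvdx = np.gradient(v, x, axis=-1)
    dudy = np.gradient(u, y, axis=-2)
    return dvdx - dudy

# Solid-body rotation u = -y, v = x has constant vorticity 2 everywhere
x = np.linspace(-1.0, 1.0, 50)
y = np.linspace(-1.0, 1.0, 50)
X, Y = np.meshgrid(x, y)
zeta = rel_vort_sketch(x, y, -Y, X)
print(np.allclose(zeta, 2.0))  # True
```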
Frame functions
# if y = 230, z from 0 to 56 # if y=245, z from 0 to 47 # if y=260, z from 0 to 30 sns.set_style('dark') # ALONGSHORE VELOCITY def Plot1(t,ax1,UU): umin = -0.55 # 0.50 umax= 0.55 Uplot=np.ma.array(UU.isel(Y=yind).data,mask=MaskC[:,yind,:]) csU = np.linspace(umin,umax,num=20) csU2 = np.linspace...
_____no_output_____
Apache-2.0
MoviesNotebooks/MoviesFlowDescription.ipynb
UBC-MOAD/outputanalysisnotebooks
Set-up
# Grid, state and tracers datasets of base case grid_file = '/data/kramosmu/results/TracerExperiments/3DVISC_REALISTIC/run01/gridGlob.nc' grid = xr.open_dataset(grid_file) state_file = '/data/kramosmu/results/TracerExperiments/3DVISC_REALISTIC/run01/stateGlob.nc' state = xr.open_dataset(state_file) ptracers_file = '...
_____no_output_____
Apache-2.0
MoviesNotebooks/MoviesFlowDescription.ipynb
UBC-MOAD/outputanalysisnotebooks
Data management with Pandas An overview of some of the data management tools in Python's [Pandas package](http://pandas.pydata.org/pandas-docs/version/0.17.1/). Includes: * Selecting variables * Selecting observations * Indexing * Groupby * Stacking * Doubly indexed dataframes * Combining dataframes (concat) * Merging...
import pandas as pd %matplotlib inline
_____no_output_____
MIT
Code/IPython/bootcamp_data_management.ipynb
ljsun88/data_bootcamp_nyu
Reminders* Dataframes * Index and columns Selecting variables DatasetsWe take these examples from the data input chapter: * Penn World Table * World Economic Outlook * UN Population DataAll of them come in an unfriendly form; our goal is to fix them. Here we extract small subsets to work with so that we can foll...
data = {'countrycode': ['CHN', 'CHN', 'CHN', 'FRA', 'FRA', 'FRA', 'USA', 'USA', 'USA'], 'pop': [1124.7939240000001, 1246.8400649999999, 1318.1701519999999, 58.183173999999994, 60.764324999999999, 64.731126000000003, 253.33909699999998, 282.49630999999999, 310.38394799999998], 'rgdpe': [2611027.0, 49...
_____no_output_____
MIT
Code/IPython/bootcamp_data_management.ipynb
ljsun88/data_bootcamp_nyu
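With a frame of this shape, `groupby` summarises by country. A sketch on an invented frame laid out like the PWT extract above (the numbers are illustrative, not real PWT values):

```python
import pandas as pd

# Invented numbers laid out like the PWT extract above (not real PWT values)
pwt = pd.DataFrame({
    "countrycode": ["CHN", "CHN", "FRA", "FRA", "USA", "USA"],
    "year": [2000, 2010, 2000, 2010, 2000, 2010],
    "pop": [1270.0, 1340.0, 60.9, 65.0, 282.2, 309.3],
})

# Average population per country across the sample years
print(pwt.groupby("countrycode")["pop"].mean())
```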
Define vector 3D class
import math import numpy as np class Vector3D: def __init__(self, initial_x = 0.0, initial_y = 0.0, initial_z = 0.0): self.x = initial_x self.y = initial_y self.z = initial_z def magnitude(self): return math.sqrt(self.x**2 + self.y**2 + self.z**2) def sqd_magnitude...
_____no_output_____
Apache-2.0
src/md-codes/single_particle-x4-x2-OV-Operator.ipynb
kadupitiya/RNN-MD
Define Particle class
import math from decimal import * class Particle: def __init__(self, initial_m = 1.0, diameter = 2.0, initial_position = Vector3D(0.0, 0.0, 0.0), initial_velocity = Vector3D(0.0, 0.0, 0.0)): self.m = initial_m self.d = diameter self.position = Vector3D(initial_position.x, 0.0, 0.0) ...
_____no_output_____
Apache-2.0
src/md-codes/single_particle-x4-x2-OV-Operator.ipynb
kadupitiya/RNN-MD
Velocity verlet code
import math import time def velocity_verlet(mass=None, initial_pos=2.0, time=100, deltaT=0.01): print("Modeling the block-spring system") print("Need a useful abstraction of the problem: a point particle") print("Make a Particle class") print("Set up initial conditions") sphere = Particl...
_____no_output_____
Apache-2.0
src/md-codes/single_particle-x4-x2-OV-Operator.ipynb
kadupitiya/RNN-MD
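The half-kick/drift/half-kick structure of velocity Verlet can be sketched compactly for a unit-mass harmonic oscillator. The spring constant and step size below are illustrative and not taken from the notebook's `velocity_verlet`:

```python
def verlet_step(x, v, a, dt, force):
    """One velocity-Verlet step: half-kick, drift, recompute force, half-kick."""
    v_half = v + 0.5 * a * dt
    x_new = x + v_half * dt
    a_new = force(x_new)
    v_new = v_half + 0.5 * a_new * dt
    return x_new, v_new, a_new

k = 1.0                    # spring constant (illustrative)
force = lambda x: -k * x   # unit mass, so acceleration equals force
x, v = 2.0, 0.0            # same initial-position magnitude as the notebook
a = force(x)
dt, steps = 0.01, 10000    # integrate 100 time units

e0 = 0.5 * v**2 + 0.5 * k * x**2
for _ in range(steps):
    x, v, a = verlet_step(x, v, a, dt, force)
e1 = 0.5 * v**2 + 0.5 * k * x**2

# Velocity Verlet is symplectic: energy drift stays small over long runs
print(abs(e1 - e0) < 1e-3)  # True
```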
Run the code
import time start = time.time() # Run the program # mass=1.0, initial_pos=1.0, time=100, deltaT=0.01 params__ = (2.0, -2.0, 100, 0.01) velocity_verlet(*params__) end = time.time() print("Time: "+str(end - start))
Modeling the block-spring system Need a useful abstraction of the problem: a point particle Make a Particle class Set up initial conditions volume of a unit (radius = 1) sphere is None mass of the block is 2.0 initial position of the block is -2.0 initial velocity of the block is 0.0 initial force on the block is 6.0 S...
Apache-2.0
src/md-codes/single_particle-x4-x2-OV-Operator.ipynb
kadupitiya/RNN-MD
Plot the graphs
# Visualize the data ''' GNUPlot plot 'exact_dynamics.out' with lines, 'simulated_dynamics.out' using 1:2 with lp pt 6 title "position", 'simulated_dynamics.out' using 1:3 with p pt 4 title "velocity", 'simulated_dynamics.out' u 1:4 w p title "kinetic", 'simulated_dynamics.out' u 1:5 w p title "potential", "simulated_d...
_____no_output_____
Apache-2.0
src/md-codes/single_particle-x4-x2-OV-Operator.ipynb
kadupitiya/RNN-MD
Installing Python and GraphLab Create Please follow the installation instructions here before getting started. We have done: * Installed Python * Started IPython Notebook Getting started with Python
print('Hello World!')
Hello World!
MIT
Programming Assignment 1/Getting started with iPython Notebook.ipynb
Drishtant-Shri/Coursera-UW-Machine-Learning-Foundations-A-Case-Study-Approach
Create some variables in Python
i = 4 #int type(i) f = 4.1 #float type(f) b = True #boolean variable s = "This is a string!" print(s)
This is a string!
MIT
Programming Assignment 1/Getting started with iPython Notebook.ipynb
Drishtant-Shri/Coursera-UW-Machine-Learning-Foundations-A-Case-Study-Approach
Advanced python types
l = [3,1,2] #list print(l) d = {'foo':1, 'bar':2.3, 's':'my first dictionary'} #dictionary print(d) print(d['foo']) #element of a dictionary n = None #Python's null type type(n)
_____no_output_____
MIT
Programming Assignment 1/Getting started with iPython Notebook.ipynb
Drishtant-Shri/Coursera-UW-Machine-Learning-Foundations-A-Case-Study-Approach
Advanced printing
print("Our float value is %s. Our int value is %s." % (f, i)) #Python is pretty good with strings
Our float value is 4.1. Our int value is 4.
MIT
Programming Assignment 1/Getting started with iPython Notebook.ipynb
Drishtant-Shri/Coursera-UW-Machine-Learning-Foundations-A-Case-Study-Approach
Conditional statements in python
if i == 1 and f > 4: print("The value of i is 1 and f is greater than 4.") elif i > 4 or f > 4: print("i or f are both greater than 4.") else: print("both i and f are less than or equal to 4")
i or f are both greater than 4.
MIT
Programming Assignment 1/Getting started with iPython Notebook.ipynb
Drishtant-Shri/Coursera-UW-Machine-Learning-Foundations-A-Case-Study-Approach
Conditional loops
print(l) for e in l: print(e)
3 1 2
MIT
Programming Assignment 1/Getting started with iPython Notebook.ipynb
Drishtant-Shri/Coursera-UW-Machine-Learning-Foundations-A-Case-Study-Approach
Note that in Python, we don't use {} or other markers to indicate the part of the loop that gets iterated. Instead, we just indent and align each of the iterated statements with spaces or tabs. (You can use as many as you want, as long as the lines are aligned.)
counter = 6 while counter < 10: print(counter) counter += 1
6 7 8 9
MIT
Programming Assignment 1/Getting started with iPython Notebook.ipynb
Drishtant-Shri/Coursera-UW-Machine-Learning-Foundations-A-Case-Study-Approach
Creating functions in Python Again, we don't use {}, but just indent the lines that are part of the function.
def add2(x): y = x + 2 return y i = 5 add2(i)
_____no_output_____
MIT
Programming Assignment 1/Getting started with iPython Notebook.ipynb
Drishtant-Shri/Coursera-UW-Machine-Learning-Foundations-A-Case-Study-Approach
We can also define simple functions with lambdas:
square = lambda x: x*x
_____no_output_____
MIT
Programming Assignment 1/Getting started with iPython Notebook.ipynb
Drishtant-Shri/Coursera-UW-Machine-Learning-Foundations-A-Case-Study-Approach
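A lambda behaves exactly like a one-line `def`, so it can be called directly or passed to higher-order functions (shown here in Python 3 syntax):

```python
square = lambda x: x * x

print(square(5))                      # 25
print(list(map(square, [1, 2, 3])))   # [1, 4, 9]

# Equivalent one-line def:
def square_def(x):
    return x * x

print(square_def(5))  # 25
```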
Validation, regularisation and callbacks Coding tutorials [1. Validation sets](coding_tutorial_1) [2. Model regularisation](coding_tutorial_2) [3. Introduction to callbacks](coding_tutorial_3) [4. Early stopping / patience](coding_tutorial_4) *** Validation sets Load the data
# Load the diabetes dataset from sklearn.datasets import load_diabetes diabetes_dataset = load_diabetes() print(diabetes_dataset.keys()) print(diabetes_dataset['DESCR']) # Save the input and target variables data = diabetes_dataset['data'] targets = diabetes_dataset['target'] data.shape, targets.shape # Normalise the...
(397, 10) (45, 10) (397,) (45,)
MIT
Course_1_Getting started with TensorFlow 2/Tensorflow_2_week_3.ipynb
nagar-mayank/TensorFlow-2-for-Deep-Learning-by-Imperial-College-London
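The (397, 10) and (45, 10) shapes above are consistent with holding out roughly 10% of the 442 diabetes samples. A sketch of one way to produce such a split, assuming scikit-learn's `train_test_split` (the `random_state` below is arbitrary):

```python
from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split

diabetes_dataset = load_diabetes()
data = diabetes_dataset["data"]
targets = diabetes_dataset["target"]

# Hold out 10% of the 442 samples: 45 test rows, 397 training rows
train_data, test_data, train_targets, test_targets = train_test_split(
    data, targets, test_size=0.1, random_state=0)

print(train_data.shape, test_data.shape)  # (397, 10) (45, 10)
```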
Train a feedforward neural network model
# Build the model from tensorflow.keras.layers import Dense from tensorflow.keras.models import Sequential def get_model(): model = Sequential([ Dense(128, activation='relu', input_shape=(train_data.shape[1],)), Dense(128, activation='relu'), ...
2/2 [==============================] - 0s 5ms/step - loss: 0.8980 - mae: 0.8980
MIT
Course_1_Getting started with TensorFlow 2/Tensorflow_2_week_3.ipynb
nagar-mayank/TensorFlow-2-for-Deep-Learning-by-Imperial-College-London
Plot the learning curves
import matplotlib.pyplot as plt %matplotlib inline # Plot the training and validation loss plt.plot(history.history['loss']) plt.plot(history.history['val_loss']) plt.title('Loss vs. epochs') plt.ylabel('Loss') plt.xlabel('Epoch') plt.legend(['Training', 'Validation'], loc='upper right') plt.show()
_____no_output_____
MIT
Course_1_Getting started with TensorFlow 2/Tensorflow_2_week_3.ipynb
nagar-mayank/TensorFlow-2-for-Deep-Learning-by-Imperial-College-London
*** Model regularisation Adding regularisation with weight decay and dropout
from tensorflow.keras.layers import Dropout from tensorflow.keras import regularizers def get_regularised_model(wd, rate): model = Sequential([ Dense(128, kernel_regularizer=regularizers.l2(wd), activation="relu", input_shape=(train_data.shape[1],)), Dropout(rate), Dense(128, kernel_regulari...
2/2 [==============================] - 0s 5ms/step - loss: 0.6260 - mae: 0.6243
MIT
Course_1_Getting started with TensorFlow 2/Tensorflow_2_week_3.ipynb
nagar-mayank/TensorFlow-2-for-Deep-Learning-by-Imperial-College-London
Plot the learning curves
# Plot the training and validation loss import matplotlib.pyplot as plt plt.plot(history.history['loss']) plt.plot(history.history['val_loss']) plt.title('Loss vs. epochs') plt.ylabel('Loss') plt.xlabel('Epoch') plt.legend(['Training', 'Validation'], loc='upper right') plt.show()
_____no_output_____
MIT
Course_1_Getting started with TensorFlow 2/Tensorflow_2_week_3.ipynb
nagar-mayank/TensorFlow-2-for-Deep-Learning-by-Imperial-College-London
*** Introduction to callbacks Example training callback
# Write a custom callback # https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/Callback from tensorflow.keras.callbacks import Callback class TrainingCallback(Callback): def on_train_begin(self, logs=None): print('Starting training...') def on_train_end(self, logs=None): ...
_____no_output_____
MIT
Course_1_Getting started with TensorFlow 2/Tensorflow_2_week_3.ipynb
nagar-mayank/TensorFlow-2-for-Deep-Learning-by-Imperial-College-London