## Load diabetes data set

From the [sklearn diabetes data set](https://scikit-learn.org/stable/datasets/toy_dataset.html#diabetes-dataset):

"Ten baseline variables, age, sex, body mass index, average blood pressure, and six blood serum measurements were obtained for each of n = 442 diabetes patients, as well as the response of interest, a quantitative measure of disease progression one year after baseline."

So, the goal is to predict disease progression based upon all of these features.
```python
from sklearn.datasets import load_diabetes
import pandas as pd

d = load_diabetes()
len(d.data)
df = pd.DataFrame(d.data, columns=d.feature_names)
df['disease'] = d.target  # "quantitative measure of disease progression one year after baseline"
df.head(3)
```
MIT
notebooks/deep-learning/3.train-test-diabetes.ipynb
edithlee972/msds621
## Split data into train, validation sets

Any sufficiently powerful model is able to effectively drive down the training loss (error). What we really care about, though, is how well the model generalizes. That means we have to look at the validation or test error, computed from records the model was not trained on. (We'll use "test" as shorthand for "validation" often, but technically they are not the same.) For non-time-sensitive data sets, we can simply randomize and hold out 20% of our data as our validation set:
```python
import numpy as np
from sklearn.model_selection import train_test_split

np.random.seed(1)  # set a random seed for consistency across runs
n = len(df)
X = df.drop('disease', axis=1).values
y = df['disease'].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20)  # hold out 20%
len(X), len(X_train), len(X_test)
```
Let's also make sure to normalize the data to make training easier:
```python
m = np.mean(X_train, axis=0)
std = np.std(X_train, axis=0)
X_train = (X_train - m) / std
X_test = (X_test - m) / std  # use training statistics only when prepping test sets
```
## Baseline with random forest

When building machine learning models, it's always important to ask how good your model is. One of the best ways is to choose a baseline model, such as a random forest or a linear regression model, and compare your new model against it to make sure the new model can beat it. Random forests are easy to use, understand, and train, so they make a good baseline. Training the model is as simple as calling `fit()` (`min_samples_leaf=20` gives a bit more generality):
```python
from sklearn.ensemble import RandomForestRegressor

rf = RandomForestRegressor(n_estimators=100, n_jobs=-1, min_samples_leaf=20)
rf.fit(X_train, y_train.reshape(-1))
```
To evaluate our models, let's compute the mean squared error (MSE) for both training and validation sets:
```python
y_pred = rf.predict(X_train)
mse = np.mean((y_pred - y_train.reshape(-1))**2)
y_pred_test = rf.predict(X_test)
mse_test = np.mean((y_pred_test - y_test.reshape(-1))**2)
print(f"Training MSE {mse:.2f} validation MSE {mse_test:.2f}")
```
```
Training MSE 2533.91 validation MSE 3473.22
```
Let's check $R^2$ as well.
```python
rf.score(X_train, y_train), rf.score(X_test, y_test)
```
## Exercise

Why is the validation error much larger than the training error?

**Solution:** Because the model was trained on the training set, one would expect it to generally perform better on it than on any other data set. The more the validation error diverges from the training error, the less general you should assume your model is.

## Train neural network model

Ok, so now we have a baseline and an understanding of how well a decent model performs on this data set. Let's see if we can beat that baseline with a neural network. First we will see how easy it is to drive the training error down, and then show that the validation error is not usually very good in that case. We will finish by considering ways to get better validation errors, which means more general models.

## Most basic network training

A basic training loop for a neural network model simply measures and tracks the training loss or error/metric. (In this case, our loss and metric are the same.) The following function embodies such a training loop:
```python
import torch

def train0(model, X_train, y_train, learning_rate=.5, nepochs=2000):
    optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
    for epoch in range(nepochs+1):
        y_pred = model(X_train)
        loss = torch.mean((y_pred - y_train)**2)
        if epoch % (nepochs//10) == 0:
            print(f"Epoch {epoch:4d} MSE train loss {loss:12.3f}")
        optimizer.zero_grad()
        loss.backward()  # autograd computes w1.grad, b1.grad, ...
        optimizer.step()
```
To use this method, we have to convert the training and validation data sets from numpy arrays to PyTorch tensors (they are already normalized):
```python
X_train = torch.tensor(X_train).float()
X_test = torch.tensor(X_test).float()
y_train = torch.tensor(y_train).float().reshape(-1,1)  # column vector
y_test = torch.tensor(y_test).float().reshape(-1,1)
```
Let's create a model with one hidden layer and an output layer, glued together with a ReLU nonlinearity. The network looks something like the following, except of course we have many more input features and neurons than shown here. There is an implied input layer which is really just the input vector of features. The output layer takes the output of the hidden layer and generates a single output, our $\hat{y}$:
```python
import torch.nn as nn

ncols = X.shape[1]
n_neurons = 150
model = nn.Sequential(
    nn.Linear(ncols, n_neurons),  # hidden layer
    nn.ReLU(),                    # nonlinearity
    nn.Linear(n_neurons, 1)       # output layer
)
train0(model, X_train, y_train, learning_rate=.08, nepochs=5000)
```
```
Epoch    0 MSE train loss    29693.615
Epoch  500 MSE train loss      552.236
Epoch 1000 MSE train loss       85.343
Epoch 1500 MSE train loss       25.167
Epoch 2000 MSE train loss      120.147
Epoch 2500 MSE train loss        9.741
Epoch 3000 MSE train loss       14.483
Epoch 3500 MSE train loss        3.674
Epoch 4000 MSE train loss       13.071
Epoch 4500 MSE train loss        1.539
Epoch 5000 MSE train loss        1.087
```
Run this a few times and you'll see that we can drive the training error very close to zero with 150 neurons and many iterations (epochs). Compare this to the RF training MSE, which is orders of magnitude bigger (partly due to the `min_samples_leaf` hyperparameter).

## Exercise

Why does the training loss sometimes pop up and then go back down? Why is it not monotonically decreasing?

**Solution:** The only source of randomness is the initialization of the model parameters, but that does not explain the lack of monotonicity. In this situation, it is likely that the learning rate is too high and therefore, as we approach the minimum of the loss function, our steps are too big. We are jumping back and forth across the location of the minimum in parameter space.

## Exercise

Change the learning rate from 0.08 to 0.001 and rerun the example. What happens to the training loss? Is it better or worse than the baseline random forest and the model trained with learning rate 0.08?

**Solution:** The training loss continues to decrease, but much more slowly than before, and it stops long before reaching a loss near zero. On the other hand, it is better than the training error from the baseline random forest.

## Reducing the learning rate to zero in on the minimum

In one of the above exercises we discussed that the learning rate was probably too high in the vicinity of the loss function minimum. There are ways to throttle the learning rate down as we approach the minimum, but we are using a fixed learning rate here. In order to get a smooth, monotonic reduction in the loss function, let's start with a smaller learning rate, but that means increasing the number of epochs:
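The overshooting effect is easiest to see in isolation. The following minimal sketch (an illustration added here, not code from the notebook) runs plain gradient descent on the one-dimensional quadratic $f(w) = w^2$, whose gradient is $2w$, so each update multiplies $w$ by $(1 - 2 \cdot \text{lr})$. A small step size shrinks $w$ monotonically; a larger one jumps back and forth across the minimum; a step size above $1$ makes the loss grow:

```python
def gd(w, lr, steps):
    """Plain gradient descent on f(w) = w**2, whose gradient is 2*w."""
    trace = [w]
    for _ in range(steps):
        w = w - lr * 2 * w  # each step multiplies w by (1 - 2*lr)
        trace.append(w)
    return trace

smooth  = gd(1.0, lr=0.1, steps=10)  # factor  0.8: shrinks monotonically toward 0
bouncy  = gd(1.0, lr=0.9, steps=10)  # factor -0.8: jumps across the minimum, still converges
diverge = gd(1.0, lr=1.1, steps=10)  # factor -1.2: steps too big, |w| (and the loss) grows
print(smooth[-1], bouncy[:3], diverge[-1])
```

The same mechanism, with a non-convex loss and a stochastic optimizer, produces the non-monotonic loss curves above.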
```python
ncols = X.shape[1]
n_neurons = 150
model = nn.Sequential(
    nn.Linear(ncols, n_neurons),  # hidden layer
    nn.ReLU(),                    # nonlinearity
    nn.Linear(n_neurons, 1)       # output layer
)
train0(model, X_train, y_train, learning_rate=.017, nepochs=15000)
```
```
Epoch     0 MSE train loss    29626.404
Epoch  1500 MSE train loss     1588.800
Epoch  3000 MSE train loss      264.717
Epoch  4500 MSE train loss       64.575
Epoch  6000 MSE train loss       17.353
Epoch  7500 MSE train loss        6.367
Epoch  9000 MSE train loss        2.682
Epoch 10500 MSE train loss        0.691
Epoch 12000 MSE train loss        0.700
Epoch 13500 MSE train loss        0.190
Epoch 15000 MSE train loss        0.067
```
Notice that now we can reliably drive the training error down toward zero without bouncing around, although it takes longer with the smaller learning rate.

## Exercise

Play around with the learning rate and `nepochs` to see how fast you can reliably get the MSE down to 0.

## Tracking validation loss

A low training error doesn't really tell us that much, other than that the model is able to capture the relationship between the features and the target variable. What we really want is a general model, which means evaluating the model's performance on a validation set. We have both sets, so let's now track both the training and validation error in the loop. We will see that our model performs much worse on the records in the validation set (on which the model was not trained).
```python
def train1(model, X_train, X_test, y_train, y_test, learning_rate=.5, nepochs=2000):
    optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
    history = []  # track training and validation loss
    for epoch in range(nepochs+1):
        y_pred = model(X_train)
        loss = torch.mean((y_pred - y_train)**2)
        y_pred_test = model(X_test)
        loss_test = torch.mean((y_pred_test - y_test)**2)
        history.append((loss.item(), loss_test.item()))  # store plain floats, not graph tensors
        if epoch % (nepochs//10) == 0:
            print(f"Epoch {epoch:4d} MSE train loss {loss:12.3f} test loss {loss_test:12.3f}")
        optimizer.zero_grad()
        loss.backward()  # autograd computes w1.grad, b1.grad, ...
        optimizer.step()
    return torch.tensor(history)
```
Let's create the exact same model that we had before but plot train/validation errors against the number of epochs:
```python
ncols = X.shape[1]
n_neurons = 150
model = nn.Sequential(
    nn.Linear(ncols, n_neurons),
    nn.ReLU(),
    nn.Linear(n_neurons, 1)
)
history = train1(model, X_train, X_test, y_train, y_test, learning_rate=.02, nepochs=8000)
plot_history(torch.clamp(history, 0, 12000), file="train-test")
```
```
Epoch    0 MSE train loss    29654.730 test loss    27063.545
Epoch  800 MSE train loss     1841.183 test loss     3583.445
Epoch 1600 MSE train loss      806.315 test loss     5304.467
Epoch 2400 MSE train loss      286.301 test loss     7847.571
Epoch 3200 MSE train loss       98.467 test loss     9669.961
Epoch 4000 MSE train loss       29.252 test loss    11196.380
Epoch 4800 MSE train loss        9.511 test loss    11814.456
Epoch 5600 MSE train loss        4.533 test loss    12238.104
Epoch 6400 MSE train loss        1.763 test loss    12479.471
Epoch 7200 MSE train loss       16.615 test loss    12586.157
Epoch 8000 MSE train loss        0.282 test loss    12712.996
```
Wow. The validation error is much, much worse than the training error, which is almost 0. That tells us that the model is severely overfit to the training data and is not general at all. The validation error actually makes a lot of progress initially, but then after a few thousand epochs it starts to grow (we'll use this fact later). Unless we do something fancier, the best solution can be obtained by selecting the model parameters that give us the lowest validation loss.

## Track best loss and choose best model

We saw in the previous section that the most general model appears fairly soon in the training cycle. So, despite being able to drive the training error to zero if we keep going long enough, the most general model actually is known very early in the training process. This is not always the case, but it certainly is here for this data. Let's exploit this by tracking the best model, the one with the lowest validation error. There is [some indication](https://moultano.wordpress.com/2020/10/18/why-deep-learning-works-even-though-it-shouldnt/) that a good approach is to (sometimes crank up the power of the model and then) just stop early, or at least pick the model with the lowest validation error. The following function embodies that by making a copy of our neural net model when it finds an improved version.
```python
import copy

def train2(model, X_train, X_test, y_train, y_test, learning_rate=.5, nepochs=2000,
           weight_decay=0):
    optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate,
                                 weight_decay=weight_decay)
    history = []  # track training and validation loss
    best_loss = 1e10
    best_model = None
    best_epoch = -1
    for epoch in range(nepochs+1):
        y_pred = model(X_train)
        loss = torch.mean((y_pred - y_train)**2)
        y_pred_test = model(X_test)
        loss_test = torch.mean((y_pred_test - y_test)**2)
        history.append((loss.item(), loss_test.item()))
        if loss_test < best_loss:          # new best validation loss: snapshot the model
            best_loss = loss_test
            best_model = copy.deepcopy(model)
            best_epoch = epoch
        if epoch % (nepochs//10) == 0:
            print(f"Epoch {epoch:4d} MSE train loss {loss:12.3f} test loss {loss_test:12.3f}")
        optimizer.zero_grad()
        loss.backward()  # autograd computes w1.grad, b1.grad, ...
        optimizer.step()
    print(f"BEST MSE test loss {best_loss:.3f} at epoch {best_epoch}")
    return torch.tensor(history), best_model
```
Let's use the exact same model and learning rate with no weight decay and see what happens.
```python
ncols = X.shape[1]
n_neurons = 150
model = nn.Sequential(
    nn.Linear(ncols, n_neurons),
    nn.ReLU(),
    nn.Linear(n_neurons, 1)
)
history, best_model = train2(model, X_train, X_test, y_train, y_test,
                             learning_rate=.02, nepochs=1000, weight_decay=0)

# verify we got the best model out
y_pred = best_model(X_test)
loss_test = torch.mean((y_pred - y_test)**2)
plot_history(torch.clamp(history, 0, 12000))
```
```
Epoch    0 MSE train loss    29607.461 test loss    27006.693
Epoch  100 MSE train loss     2629.585 test loss     3087.739
Epoch  200 MSE train loss     2384.102 test loss     3186.564
Epoch  300 MSE train loss     2285.357 test loss     3230.039
Epoch  400 MSE train loss     2208.642 test loss     3257.470
Epoch  500 MSE train loss     2133.422 test loss     3275.208
Epoch  600 MSE train loss     2077.234 test loss     3265.872
Epoch  700 MSE train loss     2031.759 test loss     3267.592
Epoch  800 MSE train loss     1977.212 test loss     3271.320
Epoch  900 MSE train loss     1920.043 test loss     3249.735
Epoch 1000 MSE train loss     1870.694 test loss     3281.241
BEST MSE test loss 3071.237 at epoch 90
```
Let's also look at $R^2$:
```python
from sklearn.metrics import r2_score

y_pred = best_model(X_train).detach().numpy()
y_pred_test = best_model(X_test).detach().numpy()
r2_score(y_train, y_pred), r2_score(y_test, y_pred_test)
```
The best MSE bounces around a loss value of 3000 from run to run, a bit above it or a bit below, depending on the run. And this decent result occurs without having to understand or use weight decay (more on this next). Compare the validation $R^2$ to that of the RF; the network does much better!

## Weight decay to reduce overfitting

Other than stopping early, one of the most common ways to reduce model overfitting is to use weight decay, otherwise known as L2 (Ridge) regularization, to constrain the model parameters. Without constraints, model parameters can get very large, which typically leads to a lack of generality. Using the `Adam` optimizer, we turn on weight decay with the parameter `weight_decay`, but otherwise the training loop is the same:
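To see concretely how an L2 penalty constrains parameters, here is a small numpy sketch (an illustration added here, not code from the notebook) of gradient descent on a one-feature least-squares problem with and without the penalty $\lambda \lVert w \rVert^2$. The penalty adds $2 \lambda w$ to the gradient, which pulls the weight toward zero on every step:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 3.0 * x + rng.normal(scale=0.1, size=200)  # true weight is 3.0

def fit(lam, lr=0.1, steps=500):
    """Gradient descent on MSE plus an L2 penalty lam * w**2."""
    w = 0.0
    for _ in range(steps):
        grad = np.mean(2 * (w * x - y) * x) + 2 * lam * w  # MSE gradient + L2 term
        w -= lr * grad
    return w

w_plain = fit(lam=0.0)   # converges near the true weight, 3.0
w_decay = fit(lam=5.0)   # shrunk well below 3.0 by the penalty
print(w_plain, w_decay)
```

A large `lam` trades some training fit for smaller, more constrained weights, which is exactly the effect `weight_decay` has on the network's parameters.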
```python
def train3(model, X_train, X_test, y_train, y_test, learning_rate=.5, nepochs=2000,
           weight_decay=0, trace=True):
    optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate,
                                 weight_decay=weight_decay)
    history = []  # track training and validation loss
    for epoch in range(nepochs+1):
        y_pred = model(X_train)
        loss = torch.mean((y_pred - y_train)**2)
        y_pred_test = model(X_test)
        loss_test = torch.mean((y_pred_test - y_test)**2)
        history.append((loss.item(), loss_test.item()))
        if trace and epoch % (nepochs//10) == 0:
            print(f"Epoch {epoch:4d} MSE train loss {loss:12.3f} test loss {loss_test:12.3f}")
        optimizer.zero_grad()
        loss.backward()  # autograd computes w1.grad, b1.grad, ...
        optimizer.step()
    return torch.tensor(history)
```
How do we know what the right value of the weight decay is? Typically we try a variety of weight decay values and see which one gives us the best validation error, so let's do that using a grid of plots. The following loop uses the same network and learning rate for each run but varies the weight decay:
```python
import matplotlib.pyplot as plt

ncols = X.shape[1]
n_neurons = 150
fig, axes = plt.subplots(1, 4, figsize=(12.5, 2.5))
for wd, ax in zip([0, .3, .6, 1.5], axes):
    model = nn.Sequential(
        nn.Linear(ncols, n_neurons),
        nn.ReLU(),
        nn.Linear(n_neurons, 1)
    )
    history = train3(model, X_train, X_test, y_train, y_test,
                     learning_rate=.05, nepochs=1000, weight_decay=wd, trace=False)
    mse_valid = history[-1][1]
    ax.set_title(f"wd={wd:.1f}, valid MSE {mse_valid:.0f}")
    plot_history(torch.clamp(history, 0, 10000), ax=ax, maxy=10_000)
plt.tight_layout()
plt.show()
```
# Simple Attack

In this notebook, we will examine perhaps the simplest possible attack on an individual's private data and what the OpenDP library can do to mitigate it.

## Loading the data

The vetting process is currently underway for the code in the OpenDP Library. Any constructors that have not been vetted may still be accessed if you opt in to "contrib".
```python
import numpy as np
from opendp.mod import enable_features

enable_features('contrib')
```
MIT
python/example/attack_simple.ipynb
souravrhythm/opendp
We begin by loading the data.
```python
import os

data_path = os.path.join('.', 'data', 'PUMS_california_demographics_1000', 'data.csv')

with open(data_path) as input_file:
    data = input_file.read()

col_names = ["age", "sex", "educ", "race", "income", "married"]
print(col_names)
print('\n'.join(data.split('\n')[:6]))
```
```
['age', 'sex', 'educ', 'race', 'income', 'married']
59,1,9,1,0,1
31,0,1,3,17000,0
36,1,11,1,0,1
54,1,11,1,9100,1
39,0,5,3,37000,0
34,0,9,1,0,1
```
The following code parses the data into a vector of incomes. More details on preprocessing can be found [here](https://github.com/opendp/opendp/blob/main/python/example/basic_data_analysis.ipynb).
```python
from opendp.trans import make_split_dataframe, make_select_column, make_cast, make_impute_constant

income_preprocessor = (
    # Convert data into a dataframe where columns are of type Vec<str>
    make_split_dataframe(separator=",", col_names=col_names) >>
    # Selects a column of df, Vec<str>
    make_select_column(key="income", TOA=str)
)

# make a transformation that casts from a vector of strings to a vector of floats
cast_str_float = (
    # Cast Vec<str> to Vec<Option<float>>
    make_cast(TIA=str, TOA=float) >>
    # Replace any elements that failed to parse with 0., emitting a Vec<float>
    make_impute_constant(0.)
)

# replace the previous preprocessor: extend it with the caster
income_preprocessor = income_preprocessor >> cast_str_float
incomes = income_preprocessor(data)

print(incomes[:7])
```
```
[0.0, 17000.0, 0.0, 9100.0, 37000.0, 0.0, 6000.0]
```
## A simple attack

Say there's an attacker whose target is the income of the first person in our data (i.e., the first income in the csv). In our case, it's simply `0` (but any number is fine, e.g., 5000).
```python
person_of_interest = incomes[0]
print('person of interest:\n\n{0}'.format(person_of_interest))
```
```
person of interest:

0.0
```
Now consider an attacker who doesn't know the POI's income but does know the following: (1) the average income excluding the POI's income, and (2) the number of persons in the database. As we show next, if the attacker also obtains the overall average income (which includes the POI's), simple manipulation lets them back out the individual's income.
```python
# attacker information: everyone else's mean, and their count
known_mean = np.mean(incomes[1:])
known_obs = len(incomes) - 1

# assume the attacker legitimately gets the overall mean (and hence can infer the total count)
overall_mean = np.mean(incomes)
n_obs = len(incomes)

# back out the POI's income
poi_income = overall_mean * n_obs - known_obs * known_mean
print('poi_income: {0}'.format(poi_income))
```
```
poi_income: 0.0
```
The attacker now knows with certainty that the POI has an income of \$0.

## Using OpenDP

Let's see what happens if the attacker were made to interact with the data through OpenDP and were given a privacy budget of $\epsilon = 1$. We will assume that the attacker is reasonably familiar with differential privacy and believes that they should use tighter data bounds than they would anticipate being in the data, in order to get a less noisy estimate. They will need to update their `known_mean` accordingly.
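For intuition on where the noise comes from: for a mean of $n$ values clamped to $[L, U]$, changing one record moves the mean by at most $(U-L)/n$ (the sensitivity), and the Laplace mechanism achieves $\epsilon$-DP with noise scale sensitivity$/\epsilon$. A quick sketch of that arithmetic (added here for illustration; the notebook's code below passes its `scale` explicitly rather than deriving it from $\epsilon$):

```python
def laplace_scale(lower, upper, n, epsilon):
    """Laplace noise scale for an epsilon-DP mean of n values clamped to [lower, upper]."""
    sensitivity = (upper - lower) / n  # one record moves the mean by at most this much
    return sensitivity / epsilon

# hypothetical numbers matching the bounds and budget discussed in the text
print(laplace_scale(0.0, 100_000.0, n=100, epsilon=1.0))  # → 1000.0
```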
```python
from opendp.trans import make_clamp, make_sized_bounded_mean, make_bounded_resize
from opendp.meas import make_base_laplace

enable_features("floating-point")

max_influence = 1
count_release = 100

income_bounds = (0.0, 100_000.0)

clamp_and_resize_data = (
    make_clamp(bounds=income_bounds) >>
    make_bounded_resize(size=count_release, bounds=income_bounds, constant=10_000.0)
)

known_mean = np.mean(clamp_and_resize_data(incomes)[1:])

mean_measurement = (
    clamp_and_resize_data >>
    make_sized_bounded_mean(size=count_release, bounds=income_bounds) >>
    make_base_laplace(scale=1.0)
)

dp_mean = mean_measurement(incomes)

print("DP mean:", dp_mean)
print("Known mean:", known_mean)
```
```
DP mean: 28203.570278867388
Known mean: 28488.08080808081
```
We will be using `n_sims` to simulate the process a number of times to get a sense of the various possible outcomes for the attacker. In practice, they would see the result of only one simulation.
```python
# initialize vectors to store estimated overall means
n_sims = 10_000
n_queries = 1
poi_income_ests = []
estimated_means = []

# get estimates of overall means
for i in range(n_sims):
    query_means = [mean_measurement(incomes) for j in range(n_queries)]

    # get estimates of POI income
    estimated_means.append(np.mean(query_means))
    poi_income_ests.append(estimated_means[i] * count_release - (count_release - 1) * known_mean)

# get mean of estimates
print('Known Mean Income (after truncation): {0}'.format(known_mean))
print('Observed Mean Income: {0}'.format(np.mean(estimated_means)))
print('Estimated POI Income: {0}'.format(np.mean(poi_income_ests)))
print('True POI Income: {0}'.format(person_of_interest))
```
```
Known Mean Income (after truncation): 28488.08080808081
Observed Mean Income: 28203.193459994138
Estimated POI Income: -0.6540005867157132
True POI Income: 0.0
```
We see empirically that, in expectation, the attacker can get a reasonably good estimate of the POI's income. However, they will rarely (if ever) get it exactly, and would have no way of knowing if they did. In our case, the mean estimated POI income does approach the true income as the number of simulations `n_sims` increases. Below is a plot showing the empirical distribution of estimates of the POI's income. Notice its concentration around `0` and the Laplacian shape of the curve.
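The convergence of the mean estimate is just the law of large numbers applied to the Laplace noise: each release is the true value plus independent zero-mean noise, so averaging many releases cancels the noise. A small self-contained sketch (illustrative numbers, not from the notebook):

```python
import numpy as np

rng = np.random.default_rng(42)
true_value = 0.0
noise_scale = 100.0  # hypothetical Laplace noise scale

# each "release" is the true value plus fresh Laplace noise
releases = true_value + rng.laplace(scale=noise_scale, size=100_000)

# individual releases are noisy, but their mean converges on the true value
print(abs(releases.mean()))  # small relative to noise_scale
```

This is also why a real deployment would charge privacy budget for every query: repeated queries let an attacker average the noise away.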
```python
import warnings
import seaborn as sns

# hide warning created by outstanding scipy.stats issue
warnings.simplefilter(action='ignore', category=FutureWarning)

# distribution of POI income
ax = sns.distplot(poi_income_ests, kde=False,
                  hist_kws=dict(edgecolor='black', linewidth=1))
ax.set(xlabel='Estimated POI income')
```
## ConvPool_CNN Model
```python
# imports implied by the notebook (assumed; `input_shape` must be defined elsewhere)
import keras
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, GlobalAveragePooling2D, Flatten, Dense, Dropout
from keras.losses import categorical_crossentropy
from keras.optimizers import Adam

def ConvPool_CNN_C():
    model = Sequential()
    model.add(Conv2D(96, kernel_size=(3,3), activation='relu', padding='same'))
    model.add(Conv2D(96, kernel_size=(3,3), activation='relu', padding='same'))
    model.add(Conv2D(96, kernel_size=(3,3), activation='relu', padding='same'))
    model.add(MaxPooling2D(pool_size=(3,3), strides=2))
    model.add(Conv2D(192, (3,3), activation='relu', padding='same'))
    model.add(Conv2D(192, (3,3), activation='relu', padding='same'))
    model.add(Conv2D(192, (3,3), activation='relu', padding='same'))
    model.add(MaxPooling2D(pool_size=(3,3), strides=2))
    model.add(Conv2D(192, (3,3), activation='relu', padding='same'))
    model.add(Conv2D(192, (1,1), activation='relu'))
    model.add(Conv2D(5, (1,1)))
    model.add(GlobalAveragePooling2D())
    model.add(Flatten())  # no-op after global average pooling, kept from the original
    model.add(Dense(5, activation='softmax'))
    model.build(input_shape)
    model.compile(loss=categorical_crossentropy,
                  optimizer=keras.optimizers.Adam(0.001),
                  metrics=['accuracy'])
    return model
```
MIT
kneeoa/Preprocessing/Model/Model.ipynb
Tommy-Ngx/oa_rerun2
## ALL_CNN_MODEL
```python
def all_cnn_c(X, y, learningRate=0.001, lossFunction='categorical_crossentropy'):
    model = Sequential()
    model.add(Conv2D(96, kernel_size=(3,3), activation='relu', padding='same'))
    model.add(Conv2D(96, kernel_size=(3,3), activation='relu', padding='same'))
    model.add(Conv2D(96, kernel_size=(3,3), activation='relu', padding='same'))
    model.add(Conv2D(192, (3,3), activation='relu', padding='same'))
    model.add(Conv2D(192, (3,3), activation='relu', padding='same'))
    model.add(Conv2D(192, (3,3), activation='relu', padding='same'))
    model.add(Conv2D(192, (3,3), activation='relu', padding='same'))
    model.add(Conv2D(192, (1,1), activation='relu'))
    model.add(GlobalAveragePooling2D())
    model.add(Dense(5, activation='softmax'))
    model.build(input_shape)
    # use the passed-in hyperparameters (the original hardcoded the loss and learning rate)
    model.compile(loss=lossFunction, optimizer=Adam(learningRate), metrics=['accuracy'])
    return model
```
## NIN_CNN_MODEL
```python
def nin_cnn_c():
    model = Sequential()
    model.add(Conv2D(32, kernel_size=(5,5), activation='relu', padding='valid'))
    model.add(Conv2D(32, kernel_size=(5,5), activation='relu'))
    model.add(Conv2D(32, kernel_size=(5,5), activation='relu'))
    model.add(MaxPooling2D(pool_size=(3,3), strides=2))
    model.add(Dropout(0.5))
    model.add(Conv2D(64, (3,3), activation='relu', padding='same'))
    model.add(Conv2D(64, (1,1), activation='relu', padding='same'))
    model.add(Conv2D(64, (1,1), activation='relu', padding='same'))
    model.add(MaxPooling2D(pool_size=(3,3), strides=2))
    model.add(Dropout(0.5))
    model.add(Conv2D(128, (3,3), activation='relu', padding='same'))
    model.add(Conv2D(32, (1,1), activation='relu'))
    model.add(Conv2D(5, (1,1)))
    model.add(GlobalAveragePooling2D())
    model.add(Flatten())
    model.add(Dense(5, activation='softmax'))
    model.build(input_shape)
    model.compile(loss=categorical_crossentropy, optimizer=Adam(0.001), metrics=['accuracy'])
    return model
```
© 2018 Suzy Beeler and Vahe Galstyan. This work is licensed under a [Creative Commons Attribution License CC-BY 4.0](https://creativecommons.org/licenses/by/4.0/). All code contained herein is licensed under an [MIT license](https://opensource.org/licenses/MIT). This exercise was generated from a Jupyter notebook. You can download the notebook [here](diffusion_via_coin_flips.ipynb).

___

## Objective

In this tutorial, we will computationally simulate the process of diffusion with "coin flips," where at each time step the particle can move either to the left or to the right, each with probability $0.5$. From here, we can see how the distance a diffusing particle travels scales with time.

## Modeling 1-D diffusion with coin flips

Diffusion can be understood as random motion in space caused by thermal fluctuations in the environment. In the cytoplasm of the cell, different molecules undergo 3-dimensional diffusive motion. On the other hand, diffusion on the cell membrane is chiefly 2-dimensional. Here we will consider 1-dimensional diffusive motion to make the treatment simpler, but the ideas can be extended to higher dimensions.
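As a preview of the scaling we are after (a numerical sketch added here for illustration, not part of the original tutorial): for an unbiased $\pm 1$ random walk, the mean squared displacement after $N$ steps is $\langle x_N^2 \rangle = N$, because the $N$ independent steps each contribute variance $1$. We can check this directly:

```python
import numpy as np

rng = np.random.default_rng(0)
n_traj, n_steps = 2000, 1000

# each step is +1 or -1 with equal probability; a walk is the sum of its steps
steps = rng.choice([-1, 1], size=(n_traj, n_steps))
final_positions = steps.sum(axis=1)

# mean squared displacement over many walks should be close to n_steps
msd = np.mean(final_positions**2)
print(msd)
```

So the typical distance traveled grows like $\sqrt{N}$, not $N$, which is the hallmark of diffusive motion; the coin-flip simulation below builds this up one step at a time.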
```python
# Import modules
import numpy as np
import matplotlib.pyplot as plt

# Show figures in the notebook
%matplotlib inline

# For pretty plots
import seaborn as sns
rc = {'lines.linewidth': 2, 'axes.labelsize': 14, 'axes.titlesize': 14,
      'xtick.labelsize': 14, 'ytick.labelsize': 14}
sns.set(rc=rc)
```
MIT
code/diffusion_via_coin_flips.ipynb
RPGroup-PBoC/gist_pboc_2019
To simulate the flipping of a coin, we will make use of `numpy`'s `random.uniform()` function that produces a random number between $0$ and $1$. Let's see it in action by printing a few random numbers:
```python
for i in range(10):
    print(np.random.uniform())
```
```
0.8342160184177014
0.9430314233421397
0.007367989009862352
0.09402902047181094
0.4641922790100985
0.9895482225120592
0.1652578614128407
0.6038851928675221
0.675556259583081
0.2558372886656155
```
We can now use these randomly generated numbers to simulate the process of a diffusing particle moving in one dimension, where any value below $0.5$ corresponds to step to the left and any value above $0.5$ corresponds to a step to the right. Below, we keep track of the position of a particle for $1000$ steps, where each position is $+1$ or $-1$ from the previous position, as determined by the result of a coin flip.
```python
# Number of steps
n_steps = 1000

# Array to store walker positions
positions = np.zeros(n_steps)

# simulate the particle moving and store the new position
for i in range(1, n_steps):
    # generate random number
    rand = np.random.uniform()

    # step in the positive direction
    if rand > 0.5:
        positions[i] = positions[i-1] + 1
    # step in the negative direction
    else:
        positions[i] = positions[i-1] - 1

# Show the trajectory
plt.plot(positions)
plt.xlabel('steps')
plt.ylabel('position');
```
As we can see, the position of the particle moves about the origin in an undirected fashion as a result of the randomness of the steps taken. However, it's hard to conclude anything from this single trace. Only by simulating many of these trajectories can we begin to conclude some of the scaling properties of diffusing particles.

## Average behavior of diffusing particles

Now let's generate multiple random trajectories and see their collective behavior. To do that, we will create a 2-dimensional `numpy` array where each row will be a different trajectory. 2D arrays can be sliced such that `[i,:]` refers to all the values in the `i`th row, and `[:,j]` refers to all the values in the `j`th column.
```python
# Number of trajectories
n_traj = 1000

# 2d array for storing the trajectories
positions_2D = np.zeros([n_traj, n_steps])

# first iterate through the trajectories
for i in range(n_traj):
    # then iterate through the steps
    for j in range(1, n_steps):
        # generate random number
        rand = np.random.uniform()

        # step in the positive direction
        if rand > 0.5:
            positions_2D[i, j] = positions_2D[i, j-1] + 1
        # step in the negative direction
        else:
            positions_2D[i, j] = positions_2D[i, j-1] - 1
```
Now let's plot the results, once again by looping.
```python
# iterate through each trajectory and plot
for i in range(n_traj):
    plt.plot(positions_2D[i,:])

# label
plt.xlabel('steps')
plt.ylabel('position');
```
The overall tendency is that the average displacement from the origin increases with the number of time steps. Because each trajectory is assigned a solid color and all trajectories are overlaid on top of each other, it's hard to see the distribution of the walker's position at a given number of time steps. To get a better intuition about the distribution of the walker's position at different steps, we will assign the same color to each trajectory and add transparency to each of them so that the more densely populated regions have a darker color.
# iterate through each trajectory and plot for i in range(n_traj): # lower alpha corresponds to lighter lines plt.plot(positions_2D[i,:], alpha=0.01, color='k') # label plt.xlabel('steps') plt.ylabel('position');
_____no_output_____
MIT
code/diffusion_via_coin_flips.ipynb
RPGroup-PBoC/gist_pboc_2019
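To quantify this spreading, we can compute the mean squared displacement (MSD) across trajectories at each step; for an unbiased ±1 random walk it grows linearly with the number of steps, so MSD after n steps is close to n. A self-contained sketch (regenerating the trajectories with `np.cumsum` rather than the explicit loops above; the seed and sizes are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n_traj, n_steps = 1000, 500

# +1/-1 steps; the cumulative sum gives the position at every step
steps = rng.choice([-1, 1], size=(n_traj, n_steps))
positions = np.cumsum(steps, axis=1)

# mean squared displacement at each step, averaged over trajectories
msd = np.mean(positions**2, axis=0)

# for an unbiased walk, MSD after n steps is ~ n
print(msd[-1] / n_steps)  # close to 1
```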
As we can see, over the course of diffusion the distribution of the walker's position becomes wider but remains centered around the origin, indicative of the unbiased nature of the random walk. To see how the walkers are distributed at this last time point, let's make a histogram of the walker's final positions.
# Make a histogram of final positions _ = plt.hist(positions_2D[:,-1], bins=20) plt.xlabel('final position') plt.ylabel('frequency');
_____no_output_____
MIT
code/diffusion_via_coin_flips.ipynb
RPGroup-PBoC/gist_pboc_2019
As expected, the distribution is centered around the origin and has a Gaussian-like shape. The more trajectories we sample, the "more Gaussian" the distribution will become. However, we may notice that the distribution appears to change depending on the number of bins we choose. This is known as *bin bias* and doesn't reflect anything about our data itself, just how we choose to represent it. An alternative (and arguably better) way to present the data is as an *empirical cumulative distribution function* (or ECDF), where we don't specify a number of bins, but instead plot each data point. For our cumulative frequency distribution, the $x$-axis corresponds to the final position of a particle and the $y$-axis corresponds to the proportion of particles that ended at this position or a more negative position.
# sort the final positions
sorted_positions = np.sort(positions_2D[:,-1])
# make the corresponding y_values (i.e. percentiles)
y_values = np.linspace(start=0, stop=1, num=len(sorted_positions))
# plot the cumulative distribution
plt.plot(sorted_positions, y_values, '.')
plt.xlabel("final position")
plt.ylabel("cumulative frequency");
_____no_output_____
MIT
code/diffusion_via_coin_flips.ipynb
RPGroup-PBoC/gist_pboc_2019
1. Python and notebook basicsIn this first chapter, we will cover the very essentials of Python and notebooks such as creating a variable, importing packages, using functions, seeing how variables behave in the notebook etc. We will see more details on some of these topics, but this very short introduction will then allow us to quickly dive into more applied and image processing specific topics without having to go through a full Python introduction. VariablesLike we would do in mathematics when we define variables in equations such as $x=3$, we can do the same in all programming languages. Python has one of the simplest syntaxes for this, i.e. exactly as we would do it naturally. Let's define a variable in the next cell:
a = 3
_____no_output_____
BSD-3-Clause
01-Python_essentials.ipynb
guiwitz/Python_image_processing_beginner
As long as we **don't execute the cell** using Shift+Enter or the play button in the menu, the above cell is **purely text**. We can close our Jupyter session and then re-start it and this line of text will still be there. However other parts of the notebook are not "aware" that this variable has been defined and so we can't re-use it anywhere else. For example if we type ```a``` again and execute the cell, we get an error:
a
_____no_output_____
BSD-3-Clause
01-Python_essentials.ipynb
guiwitz/Python_image_processing_beginner
So we actually need to **execute** the cell so that Python reads that line and executes the command. Here it's a very simple command that just says that the value of the variable ```a``` is three. So let's go back to the cell that defined ```a``` and now execute it (click in the cell and hit Shift+Enter). Now this variable is **stored in the computing memory** of the computer and we can re-use it anywhere in the notebook (but only in **this** notebook)!We can again just type ```a```
a
_____no_output_____
BSD-3-Clause
01-Python_essentials.ipynb
guiwitz/Python_image_processing_beginner
We see that now we get an *output* with the value three. Most variables display an output when they are not involved in an operation. For example the line ```a=3``` didn't have an output.Now we can define other variables in a new cell. Note that we can put as **many lines** of commands as we want in a single cell. Each command just needs to be on a new line.
b = 5 c = 2
_____no_output_____
BSD-3-Clause
01-Python_essentials.ipynb
guiwitz/Python_image_processing_beginner
As variables are defined for the entire notebook we can combine information that comes from multiple cells. Here we do some basic mathematics:
a + b
_____no_output_____
BSD-3-Clause
01-Python_essentials.ipynb
guiwitz/Python_image_processing_beginner
Here we only see the output. We can't re-use that output for further calculations as we didn't define a new variable to contain it. Here we do it:
d = a + b d
_____no_output_____
BSD-3-Clause
01-Python_essentials.ipynb
guiwitz/Python_image_processing_beginner
```d``` is now a new variable. It is purely numerical and not a mathematical formula as the above cell could make you believe. For example if we change the value of ```a```:
a = 100
_____no_output_____
BSD-3-Clause
01-Python_essentials.ipynb
guiwitz/Python_image_processing_beginner
and check the value of ```d```:
d
_____no_output_____
BSD-3-Clause
01-Python_essentials.ipynb
guiwitz/Python_image_processing_beginner
it has not changed. We would have to rerun the operation and assign it again to ```d``` for it to update:
d = a + b d
_____no_output_____
BSD-3-Clause
01-Python_essentials.ipynb
guiwitz/Python_image_processing_beginner
We will see many other types of variables during the course. Some are just other types of data, for example we can define a **text** variable by using quotes ```' '``` around a given text:
my_text = 'This is my text' my_text
_____no_output_____
BSD-3-Clause
01-Python_essentials.ipynb
guiwitz/Python_image_processing_beginner
Others can contain multiple elements like lists:
my_list = [3, 8, 5, 9] my_list
_____no_output_____
BSD-3-Clause
01-Python_essentials.ipynb
guiwitz/Python_image_processing_beginner
but more on these data structures later... FunctionsWe have seen that we could define variables and do some basic operations with them. If we want to go beyond simple arithmetic we need more **complex functions** that can operate on variables. Imagine for example that we need a function $f(x, a, b) = a * x + b$. For this we can **define functions** ourselves. Here's how we can define the previous function:
def my_fun(x, a, b): out = a * x + b return out
_____no_output_____
BSD-3-Clause
01-Python_essentials.ipynb
guiwitz/Python_image_processing_beginner
We see a series of Python rules to define a function:- we use the word **```def```** to signal that we are creating a function- we pick a **function name**, here ```my_fun```- we open the **parentheses** and put all our **variables ```x```, ```a```, ```b```** in there, just like when we do mathematics- we do some operation inside the function. **Inside** the function is signaled by the **indentation**: everything that belongs inside the function (there could be many more lines) is shifted to the right by a *single tab* or *four spaces*- we use the word **```return```** to tell what the output of the function is, here the variable ```out```We can now use this function as if we were doing mathematics: we pick a value for the three parameters e.g. $f(3, 2, 5)$
my_fun(3, 2, 5)
_____no_output_____
BSD-3-Clause
01-Python_essentials.ipynb
guiwitz/Python_image_processing_beginner
Note that **some functions are defined by default** in Python. For example if I define a variable which is a string:
my_text = 'This is my text'
_____no_output_____
BSD-3-Clause
01-Python_essentials.ipynb
guiwitz/Python_image_processing_beginner
I can count the number of characters in this text using the ```len()``` function which comes from base Python:
len(my_text)
_____no_output_____
BSD-3-Clause
01-Python_essentials.ipynb
guiwitz/Python_image_processing_beginner
The ```len``` function has not been manually defined within a ```def``` statement, it simply exists by default in the Python language. Variables as objectsIn the Python world, variables are not "just" variables, they are actually more complex objects. So for example our variable ```my_text``` does indeed contain the text ```This is my text``` but it also contains additional features. The way to access those features is to use the dot notation ```my_text.some_feature```. There are two types of features:- functions, called here methods, that do some computation or modify the variable itself- properties, that contain information about the variableFor example the object ```my_text``` has a function attached to it that allows us to put all letters to lower case:
my_text my_text.lower()
_____no_output_____
BSD-3-Clause
01-Python_essentials.ipynb
guiwitz/Python_image_processing_beginner
If we define a complex number:
a = 3 + 5j
_____no_output_____
BSD-3-Clause
01-Python_essentials.ipynb
guiwitz/Python_image_processing_beginner
then we can access the property ```real``` that gives us only the real part of the number:
a.real
_____no_output_____
BSD-3-Clause
01-Python_essentials.ipynb
guiwitz/Python_image_processing_beginner
Note that when we use a method (function) we need to use the parentheses, just like for regular functions, while for properties we don't. PackagesIn the examples above, we either defined a function ourselves or used one generally accessible in base Python but there is a third solution: **external packages**. These packages are collections of functions used in a specific domain that are made available to everyone via specialized online repositories. For example we will be using in this course a package called [scikit-image](https://scikit-image.org/) that implements a large number of functions for image processing. For example if we want to filter an image stored in a variable ```im_in``` with a median filter, we can then just use the ```median()``` function of scikit-image and apply it to an image ```im_out = median(im_in)```. The question is now: how do we access these functions? Importing functionsThe answer is that we have to **import** the functions we want to use in a *given notebook* from a package to be able to use them. First the package needs to be **installed**. One of the most popular places to find such packages is the PyPI repository. We can install packages from there using the following command either in a **terminal or directly in the notebook**. For example for [scikit-image](https://pypi.org/project/scikit-image/):
pip install scikit-image
Requirement already satisfied: scikit-image in /Users/gw18g940/mambaforge/envs/improc_beginner/lib/python3.9/site-packages (0.19.2) Requirement already satisfied: networkx>=2.2 in /Users/gw18g940/mambaforge/envs/improc_beginner/lib/python3.9/site-packages (from scikit-image) (2.7.1) Requirement already satisfied: tifffile>=2019.7.26 in /Users/gw18g940/mambaforge/envs/improc_beginner/lib/python3.9/site-packages (from scikit-image) (2022.2.9) Requirement already satisfied: PyWavelets>=1.1.1 in /Users/gw18g940/mambaforge/envs/improc_beginner/lib/python3.9/site-packages (from scikit-image) (1.2.0) Requirement already satisfied: pillow!=7.1.0,!=7.1.1,!=8.3.0,>=6.1.0 in /Users/gw18g940/mambaforge/envs/improc_beginner/lib/python3.9/site-packages (from scikit-image) (9.0.1) Requirement already satisfied: scipy>=1.4.1 in /Users/gw18g940/mambaforge/envs/improc_beginner/lib/python3.9/site-packages (from scikit-image) (1.8.0) Requirement already satisfied: packaging>=20.0 in /Users/gw18g940/mambaforge/envs/improc_beginner/lib/python3.9/site-packages (from scikit-image) (21.3) Requirement already satisfied: imageio>=2.4.1 in /Users/gw18g940/mambaforge/envs/improc_beginner/lib/python3.9/site-packages (from scikit-image) (2.16.1) Requirement already satisfied: numpy>=1.17.0 in /Users/gw18g940/mambaforge/envs/improc_beginner/lib/python3.9/site-packages (from scikit-image) (1.22.2) Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /Users/gw18g940/mambaforge/envs/improc_beginner/lib/python3.9/site-packages (from packaging>=20.0->scikit-image) (3.0.7) Note: you may need to restart the kernel to use updated packages.
BSD-3-Clause
01-Python_essentials.ipynb
guiwitz/Python_image_processing_beginner
Once installed we can **import** the package in a notebook in the following way (note that the name of the package is scikit-image, but in code we use an abbreviated name ```skimage```):
import skimage
_____no_output_____
BSD-3-Clause
01-Python_essentials.ipynb
guiwitz/Python_image_processing_beginner
The import is valid for the **entire notebook**, we don't need that line in each cell. Now that we have imported the package we can access all the functions defined in it using a *dot notation* ```skimage.myfun```. Most packages are organized into submodules and in that case to access functions of a submodule we use ```skimage.my_submodule.myfun```.To come back to the previous example: the ```median``` filtering function is in the ```filters``` submodule that we could now use as:```pythonim_out = skimage.filters.median(im_in)``` We cannot execute this command as the variables ```im_in``` and ```im_out``` are not yet defined.Note that there are multiple ways to import packages. For example we could give another name to the package, using the ```as``` statement:
import skimage as sk
_____no_output_____
BSD-3-Clause
01-Python_essentials.ipynb
guiwitz/Python_image_processing_beginner
Now if we want to use the ```median``` function in the filters submodule we would write:```pythonim_out = sk.filters.median(im_in)``` We can also import only a certain submodule using:
from skimage import filters
_____no_output_____
BSD-3-Clause
01-Python_essentials.ipynb
guiwitz/Python_image_processing_beginner
Now we have to write:```pythonim_out = filters.median(im_in)``` Finally, we can import a **single** function like this:
from skimage.filters import median
_____no_output_____
BSD-3-Clause
01-Python_essentials.ipynb
guiwitz/Python_image_processing_beginner
and now we have to write:```pythonim_out = median(im_in)``` StructuresAs mentioned above we cannot execute those various lines like ```im_out = median(im_in)``` because the image variable ```im_in``` is not yet defined. This variable should be an image, i.e. it cannot be a single number like in ```a=3``` but an entire grid of values, each value being one pixel. We therefore need a specific variable type that can contain such a structure.We have already seen that we can define different types of variables. Single numbers:
a = 3
_____no_output_____
BSD-3-Clause
01-Python_essentials.ipynb
guiwitz/Python_image_processing_beginner
Text:
b = 'my text'
_____no_output_____
BSD-3-Clause
01-Python_essentials.ipynb
guiwitz/Python_image_processing_beginner
or even lists of numbers:
c = [6,2,8,9]
_____no_output_____
BSD-3-Clause
01-Python_essentials.ipynb
guiwitz/Python_image_processing_beginner
This last type of variable is called a ```list``` in Python and is one of the **structures** available in Python. If we think of an image that has multiple lines and columns of pixels, we could now imagine that we can represent it as a list of lists, each single list being e.g. one row of pixels. For example a 3 x 3 image could be:
my_image = [[4,8,7], [6,4,3], [5,3,7]] my_image
_____no_output_____
BSD-3-Clause
01-Python_essentials.ipynb
guiwitz/Python_image_processing_beginner
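For comparison, here is a minimal sketch of how subtracting a constant background of 3 from this small image looks with plain lists versus a Numpy array (Numpy is discussed just below; the background value is an arbitrary choice for illustration):

```python
import numpy as np

my_image = [[4, 8, 7], [6, 4, 3], [5, 3, 7]]
background = 3

# with plain lists we must loop over every pixel explicitly
corrected_list = [[pixel - background for pixel in row] for row in my_image]

# with a Numpy array the subtraction applies to all pixels at once
corrected_array = np.array(my_image) - background

print(corrected_list)   # [[1, 5, 4], [3, 1, 0], [2, 0, 4]]
print(corrected_array)
```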
While in principle we could use a ```list``` for this, computations on such objects would be very slow. For example if we wanted to do background correction and subtract a given value from our image, effectively we would have to go through each element of our list (each pixel) one by one and sequentially remove the background from each pixel. If the background is 3 we would therefore have to compute 4-3, 8-3, 7-3, 6-3, etc. Since operations are done sequentially this would be very slow as we couldn't exploit the fact that most computers have multiple processors. Also it would be tedious to write such an operation.To fix this, most scientific areas that use lists of numbers of some kind (time-series, images, measurements etc.) resort to an **external package** called ```Numpy``` which offers a **computationally efficient list** called an **array**.To make this clearer we now import an image in our notebook to see such a structure. We will use a **function** from the scikit-image package to do this import. That function called ```imread``` is located in the submodule called ```io```. Remember that we can then access this function with ```skimage.io.imread()```. Just like we previously defined a function $f(x, a, b)$ that took inputs $x, a, b$, this ```imread()``` function also needs an input. Here it is just the **location of the image**, and that location can either be the **path** to the file on our computer or a **url** of an online place where the image is stored. Here we use an image that can be found at https://github.com/guiwitz/PyImageCourse_beginner/raw/master/images/19838_1252_F8_1.tif. As you can see it is a tif file. This address that we are using as an input should be formatted as text:
my_address = 'https://github.com/guiwitz/PyImageCourse_beginner/raw/master/images/19838_1252_F8_1.tif'
_____no_output_____
BSD-3-Clause
01-Python_essentials.ipynb
guiwitz/Python_image_processing_beginner
Now we can call our function:
skimage.io.imread(my_address)
_____no_output_____
BSD-3-Clause
01-Python_essentials.ipynb
guiwitz/Python_image_processing_beginner
We see here an output which is what is returned by our function. It is as expected a list of numbers, and not all numbers are shown because the list is too long. We see that we also have ```[]``` to specify rows, columns etc. The main difference compared to our list of lists that we defined previously is the ```array``` indication at the very beginning of the list of numbers. This ```array``` indication tells us that we are dealing with a ```Numpy``` array, this alternative type of list of lists that will allow us to do efficient computations. PlottingWe will see a few ways to represent data during the course. Here we just want to have a quick look at the image we just imported. For plotting we will use yet another **external library** called Matplotlib. That library is extensively used in the Python world and offers extensive choices of plots. We will mainly use one **function** from the library to display images: ```imshow```. Again, to access that function, we first need to import the package. Here we need a specific submodule:
import matplotlib.pyplot as plt
_____no_output_____
BSD-3-Clause
01-Python_essentials.ipynb
guiwitz/Python_image_processing_beginner
Now we can use the ```plt.imshow()``` function. There are many plotting options, but we can use that function already by just passing an ```array``` as an input. First we need to assign the imported array to a variable:
import skimage.io image = skimage.io.imread(my_address) plt.imshow(image);
_____no_output_____
BSD-3-Clause
01-Python_essentials.ipynb
guiwitz/Python_image_processing_beginner
Multiple Linear Regression Objective How to make predictions for multiple inputs. How to use the linear class to build more complex models. How to build a custom module. Table of ContentsIn this lab, you will review how to make a prediction in several different ways by using PyTorch. Prediction Class Linear Build Custom ModulesEstimated Time Needed: 15 min Preparation Import the libraries and set the random seed.
# Import the libraries and set the random seed from torch import nn import torch torch.manual_seed(1)
_____no_output_____
MIT
4.1.multiple_linear_regression_prediction_v2.ipynb
indervirbanipal/deep-neural-networks-with-pytorch
Prediction Set weight and bias.
# Set the weight and bias w = torch.tensor([[2.0], [3.0]], requires_grad=True) b = torch.tensor([[1.0]], requires_grad=True)
_____no_output_____
MIT
4.1.multiple_linear_regression_prediction_v2.ipynb
indervirbanipal/deep-neural-networks-with-pytorch
Define the prediction function. ```torch.mm``` performs matrix multiplication instead of scalar multiplication.
# Define Prediction Function def forward(x): yhat = torch.mm(x, w) + b return yhat
_____no_output_____
MIT
4.1.multiple_linear_regression_prediction_v2.ipynb
indervirbanipal/deep-neural-networks-with-pytorch
The function forward implements the equation $\hat{y} = xw + b$. If we input a 1x2 tensor, because we have a 2x1 tensor as w, we will get a 1x1 tensor:
# Calculate yhat x = torch.tensor([[1.0, 2.0]]) yhat = forward(x) print("The result: ", yhat)
_____no_output_____
MIT
4.1.multiple_linear_regression_prediction_v2.ipynb
indervirbanipal/deep-neural-networks-with-pytorch
Each row of the following tensor represents a sample:
# Sample tensor X X = torch.tensor([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]]) # Make the prediction of X yhat = forward(X) print("The result: ", yhat)
_____no_output_____
MIT
4.1.multiple_linear_regression_prediction_v2.ipynb
indervirbanipal/deep-neural-networks-with-pytorch
Class Linear We can use the linear class to make a prediction. You'll also use the linear class to build more complex models. Let us create a model.
# Make a linear regression model using the built-in nn.Linear class
model = nn.Linear(2, 1)
_____no_output_____
MIT
4.1.multiple_linear_regression_prediction_v2.ipynb
indervirbanipal/deep-neural-networks-with-pytorch
Make a prediction with the first sample:
# Make a prediction of x yhat = model(x) print("The result: ", yhat)
_____no_output_____
MIT
4.1.multiple_linear_regression_prediction_v2.ipynb
indervirbanipal/deep-neural-networks-with-pytorch
Predict with multiple samples X:
# Make a prediction of X yhat = model(X) print("The result: ", yhat)
_____no_output_____
MIT
4.1.multiple_linear_regression_prediction_v2.ipynb
indervirbanipal/deep-neural-networks-with-pytorch
The function performs matrix multiplication: each 1x2 sample row of X is multiplied by the 2x1 weight tensor, so the three samples yield a 3x1 output. Build Custom Modules Now, you'll build a custom module. You can make more complex models by using this method later.
# Create linear_regression Class class linear_regression(nn.Module): # Constructor def __init__(self, input_size, output_size): super(linear_regression, self).__init__() self.linear = nn.Linear(input_size, output_size) # Prediction function def forward(self, x): yhat = self.linear(x) return yhat
_____no_output_____
MIT
4.1.multiple_linear_regression_prediction_v2.ipynb
indervirbanipal/deep-neural-networks-with-pytorch
Build a linear regression object. The input feature size is two.
model = linear_regression(2, 1)
_____no_output_____
MIT
4.1.multiple_linear_regression_prediction_v2.ipynb
indervirbanipal/deep-neural-networks-with-pytorch
This implements the equation $\hat{y} = xw + b$. You can see the randomly initialized parameters by using the parameters() method:
# Print model parameters print("The parameters: ", list(model.parameters()))
_____no_output_____
MIT
4.1.multiple_linear_regression_prediction_v2.ipynb
indervirbanipal/deep-neural-networks-with-pytorch
You can also see the parameters by using the state_dict() method:
# Print model parameters print("The parameters: ", model.state_dict())
_____no_output_____
MIT
4.1.multiple_linear_regression_prediction_v2.ipynb
indervirbanipal/deep-neural-networks-with-pytorch
Now we input a 1x2 tensor, and we will get a 1x1 tensor.
# Make a prediction of x yhat = model(x) print("The result: ", yhat)
_____no_output_____
MIT
4.1.multiple_linear_regression_prediction_v2.ipynb
indervirbanipal/deep-neural-networks-with-pytorch
The output of a 1x2 input through the 2-to-1 linear layer is a 1x1 tensor. Make a prediction for multiple samples:
# Make a prediction of X yhat = model(X) print("The result: ", yhat)
_____no_output_____
MIT
4.1.multiple_linear_regression_prediction_v2.ipynb
indervirbanipal/deep-neural-networks-with-pytorch
The three 1x2 samples produce a 3x1 output tensor, one prediction per row. Practice Build a model or object of type linear_regression and use it to make a prediction on the following tensor:
# Practice: Build a model to predict the follow tensor. X = torch.tensor([[11.0, 12.0, 13, 14], [11, 12, 13, 14]])
_____no_output_____
MIT
4.1.multiple_linear_regression_prediction_v2.ipynb
indervirbanipal/deep-neural-networks-with-pytorch
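One possible solution to the practice exercise above (a sketch only; the ```linear_regression``` class is redefined here so the cell runs on its own, and the input size of four matches the four columns of ```X```):

```python
import torch
from torch import nn

# same custom module as defined earlier in this lab
class linear_regression(nn.Module):

    def __init__(self, input_size, output_size):
        super(linear_regression, self).__init__()
        self.linear = nn.Linear(input_size, output_size)

    def forward(self, x):
        return self.linear(x)

X = torch.tensor([[11.0, 12.0, 13, 14], [11, 12, 13, 14]])

# four input features, one output
model = linear_regression(4, 1)
yhat = model(X)
print(yhat.shape)  # torch.Size([2, 1])
```

The two 1x4 samples produce a 2x1 output; the values themselves depend on the random initialization of the parameters.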
Classifying Fashion-MNISTNow it's your turn to build and train a neural network. You'll be using the [Fashion-MNIST dataset](https://github.com/zalandoresearch/fashion-mnist), a drop-in replacement for the MNIST dataset. MNIST is actually quite trivial with neural networks where you can easily achieve better than 97% accuracy. Fashion-MNIST is a set of 28x28 greyscale images of clothes. It's more complex than MNIST, so it's a better representation of the actual performance of your network, and a better representation of datasets you'll use in the real world.In this notebook, you'll build your own neural network. For the most part, you could just copy and paste the code from Part 3, but you wouldn't be learning. It's important for you to write the code yourself and get it to work. Feel free to consult the previous notebooks though as you work through this.First off, let's load the dataset through torchvision.
import torch from torchvision import datasets, transforms import helper # Define a transform to normalize the data transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,))]) # Download and load the training data trainset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=True, transform=transform) trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True) # Download and load the test data testset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=False, transform=transform) testloader = torch.utils.data.DataLoader(testset, batch_size=64, shuffle=True) trainset.classes trainset
_____no_output_____
MIT
intro-to-pytorch/Part 4 - Fashion-MNIST (Exercises).ipynb
philip-le/deep-learning-v2-pytorch
Here we can see one of the images.
image, label = next(iter(trainloader)) print(image.shape, label.shape) helper.imshow(image[0,:]);
torch.Size([64, 1, 28, 28]) torch.Size([64])
MIT
intro-to-pytorch/Part 4 - Fashion-MNIST (Exercises).ipynb
philip-le/deep-learning-v2-pytorch
Building the networkHere you should define your network. As with MNIST, each image is 28x28 which is a total of 784 pixels, and there are 10 classes. You should include at least one hidden layer. We suggest you use ReLU activations for the layers and to return the logits or log-softmax from the forward pass. It's up to you how many layers you add and the size of those layers.
# TODO: Define your network architecture here from torch import nn model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128,32), nn.ReLU(), nn.Linear(32,10)) model.parameters
_____no_output_____
MIT
intro-to-pytorch/Part 4 - Fashion-MNIST (Exercises).ipynb
philip-le/deep-learning-v2-pytorch
Train the networkNow you should create your network and train it. First you'll want to define [the criterion](http://pytorch.org/docs/master/nn.html#loss-functions) (something like `nn.CrossEntropyLoss`) and [the optimizer](http://pytorch.org/docs/master/optim.html) (typically `optim.SGD` or `optim.Adam`).Then write the training code. Remember the training pass is a fairly straightforward process:* Make a forward pass through the network to get the logits * Use the logits to calculate the loss* Perform a backward pass through the network with `loss.backward()` to calculate the gradients* Take a step with the optimizer to update the weightsBy adjusting the hyperparameters (hidden units, learning rate, etc), you should be able to get the training loss below 0.4.
# TODO: Create the network, define the criterion and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(params=model.parameters(), lr=0.01)

len(trainloader), 60000/64

# TODO: Train the network here
epochs = 10
for e in range(epochs):
    running_loss = 0
    for image, label in iter(trainloader):
        optimizer.zero_grad()
        output = model(image.view(image.shape[0], -1))
        loss = criterion(output, label)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
    else:
        print(f"Epoch {e} - loss {running_loss/len(trainloader)}")

%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import helper

# Test out your network!
dataiter = iter(testloader)
images, labels = next(dataiter)
img = images[2]
# Convert 2D image to 1D vector
img = img.resize_(1, 784)

# TODO: Calculate the class probabilities (softmax) for img
with torch.no_grad():
    ps = torch.softmax(model(img), dim=1)

# Plot the image and probabilities
helper.view_classify(img.resize_(1, 28, 28), ps, version='Fashion')
_____no_output_____
MIT
intro-to-pytorch/Part 4 - Fashion-MNIST (Exercises).ipynb
philip-le/deep-learning-v2-pytorch
**Create Train / Dev / Test files. Each file is a dictionary where each key represents the ID of a certain author and each value is a dict with the keys: - author_embedding : the node embedding that corresponds to the author (tensor of shape (128,)) - papers_embedding : the abstract embeddings of the author's papers (tensor of shape (10,dim)) (dim depends on the embedding model taken into account) - features : the graph structural features (tensor of shape (4,)) - target : the target (tensor of shape (1,))**
import pandas as pd import numpy as np import networkx as nx from tqdm import tqdm_notebook as tqdm from sklearn.utils import shuffle import gzip import pickle import torch def load_dataset_file(filename): with gzip.open(filename, "rb") as f: loaded_object = pickle.load(f) return loaded_object def save(object, filename, protocol = 0): """Saves a compressed object to disk """ file = gzip.GzipFile(filename, 'wb') file.write(pickle.dumps(object, protocol)) file.close()
_____no_output_____
MIT
notebook_utils/generate_data.ipynb
omarsou/altegrad_challenge_hindex
Roberta Embedding
# Load the paper's embedding embedding_per_paper = load_dataset_file('/content/drive/MyDrive/altegrad_datachallenge/files_generated/embedding_per_paper_clean.txt') # Load the node's embedding embedding_per_nodes = load_dataset_file('/content/drive/MyDrive/altegrad_datachallenge/files_generated/Node2Vec.txt') # read the file to create a dictionary with author key and paper list as value f = open("/content/drive/MyDrive/altegrad_datachallenge/author_papers.txt","r") papers_per_author = {} for l in f: auth_paps = [paper_id.strip() for paper_id in l.split(":")[1].replace("[","").replace("]","").replace("\n","").replace("\'","").replace("\"","").split(",")] papers_per_author[l.split(":")[0]] = auth_paps # Load train set df_train = shuffle(pd.read_csv('/content/drive/MyDrive/altegrad_datachallenge/train.csv', dtype={'authorID': np.int64, 'h_index': np.float32})).reset_index(drop=True) # Load test set df_test = pd.read_csv('/content/drive/MyDrive/altegrad_datachallenge/test.csv', dtype={'authorID': np.int64}) # Load Graph G = nx.read_edgelist('/content/drive/MyDrive/altegrad_datachallenge/collaboration_network.edgelist', delimiter=' ', nodetype=int) # computes structural features for each node core_number = nx.core_number(G) avg_neighbor_degree = nx.average_neighbor_degree(G) # Split into train/valid df_valid = df_train.iloc[int(len(df_train)*0.9):, :] df_train = df_train.iloc[:int(len(df_train)*0.9), :]
_____no_output_____
MIT
notebook_utils/generate_data.ipynb
omarsou/altegrad_challenge_hindex
Train
train_data = {} for i, row in tqdm(df_train.iterrows()): author_id, y = str(int(row['authorID'])), row['h_index'] degree, core_number_, avg_neighbor_degree_ = G.degree(int(author_id)), core_number[int(author_id)], avg_neighbor_degree[int(author_id)] author_embedding = torch.from_numpy(embedding_per_nodes[int(author_id)].reshape(1,-1)) papers_ids = papers_per_author[author_id] papers_embedding = [] num_papers = 0 for id_paper in papers_ids: num_papers += 1 try: papers_embedding.append(torch.from_numpy(embedding_per_paper[id_paper].reshape(1,-1))) except KeyError: print(f"Missing paper for {author_id}") papers_embedding.append(torch.zeros((1,768))) papers_embedding = torch.cat(papers_embedding, dim=0) additional_features = torch.from_numpy(np.array([degree, core_number_, avg_neighbor_degree_, num_papers]).reshape(1,-1)) y = torch.Tensor([y]) train_data[author_id] = {'author_embedding': author_embedding, 'papers_embedding': papers_embedding, 'features': additional_features, 'target': y} # Saving save(train_data, '/content/drive/MyDrive/altegrad_datachallenge/data/data.train') # Deleting (memory) del train_data
_____no_output_____
Validation
valid_data = {}
for i, row in tqdm(df_valid.iterrows()):
    author_id, y = str(int(row['authorID'])), row['h_index']
    degree, core_number_, avg_neighbor_degree_ = G.degree(int(author_id)), core_number[int(author_id)], avg_neighbor_degree[int(author_id)]
    author_embedding = torch.from_numpy(embedding_per_nodes[int(author_id)].reshape(1, -1))
    papers_ids = papers_per_author[author_id]
    papers_embedding = []
    num_papers = 0
    for id_paper in papers_ids:
        num_papers += 1
        try:
            papers_embedding.append(torch.from_numpy(embedding_per_paper[id_paper].reshape(1, -1)))
        except KeyError:
            papers_embedding.append(torch.zeros((1, 768)))
    papers_embedding = torch.cat(papers_embedding, dim=0)
    additional_features = torch.from_numpy(np.array([degree, core_number_, avg_neighbor_degree_, num_papers]).reshape(1, -1))
    y = torch.Tensor([y])
    valid_data[author_id] = {'author_embedding': author_embedding, 'papers_embedding': papers_embedding, 'features': additional_features, 'target': y}

save(valid_data, '/content/drive/MyDrive/altegrad_datachallenge/data/data.valid')
del valid_data
_____no_output_____
Test
test_data = {}
for i, row in tqdm(df_test.iterrows()):
    author_id = str(int(row['authorID']))
    degree, core_number_, avg_neighbor_degree_ = G.degree(int(author_id)), core_number[int(author_id)], avg_neighbor_degree[int(author_id)]
    author_embedding = torch.from_numpy(embedding_per_nodes[int(author_id)].reshape(1, -1))
    papers_ids = papers_per_author[author_id]
    papers_embedding = []
    num_papers = 0
    for id_paper in papers_ids:
        num_papers += 1
        try:
            papers_embedding.append(torch.from_numpy(embedding_per_paper[id_paper].reshape(1, -1)))
        except KeyError:
            papers_embedding.append(torch.zeros((1, 768)))
    papers_embedding = torch.cat(papers_embedding, dim=0)
    additional_features = torch.from_numpy(np.array([degree, core_number_, avg_neighbor_degree_, num_papers]).reshape(1, -1))
    test_data[author_id] = {'author_embedding': author_embedding, 'papers_embedding': papers_embedding, 'features': additional_features}

# Free memory before saving
del G
del df_test
del embedding_per_paper
del papers_per_author
del core_number
del avg_neighbor_degree
del embedding_per_nodes

save(test_data, '/content/drive/MyDrive/altegrad_datachallenge/data/data.test', 4)
del test_data
_____no_output_____
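The train, validation, and test loops above repeat the same per-author assembly. A dependency-free sketch of factoring it into a helper (`build_author_entry` and `EMB_DIM` are hypothetical names; the real cells use torch tensors and 768-dimensional paper embeddings, plain lists stand in here):

```python
EMB_DIM = 4  # stand-in for the 768-dim paper embeddings used above

def build_author_entry(author_id, papers_per_author, embedding_per_paper):
    """Collect one author's paper embeddings, padding missing papers with zeros."""
    papers_embedding = []
    for paper_id in papers_per_author[author_id]:
        # Fall back to a zero vector when a paper has no embedding,
        # mirroring the KeyError branch in the loops above.
        papers_embedding.append(embedding_per_paper.get(paper_id, [0.0] * EMB_DIM))
    return {"papers_embedding": papers_embedding,
            "num_papers": len(papers_embedding)}

entry = build_author_entry(
    "42",
    {"42": ["p1", "p_missing"]},
    {"p1": [1.0, 2.0, 3.0, 4.0]},
)
```

Each of the three loops would then only add its own extras (structural features, and the `target` for train/valid).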
Doc2Vec
# Load the paper embeddings (Doc2Vec, 256-dimensional)
embedding_per_paper = load_dataset_file('/content/drive/MyDrive/altegrad_datachallenge/files_generated/doc2vec_paper_embedding.txt')
# Load the node embeddings
embedding_per_nodes = load_dataset_file('/content/drive/MyDrive/altegrad_datachallenge/files_generated/Node2Vec.txt')

# Read the file to build a dictionary mapping each author to their list of papers
f = open("/content/drive/MyDrive/altegrad_datachallenge/data/author_papers.txt", "r")
papers_per_author = {}
for l in f:
    auth_paps = [paper_id.strip() for paper_id in l.split(":")[1].replace("[", "").replace("]", "").replace("\n", "").replace("\'", "").replace("\"", "").split(",")]
    papers_per_author[l.split(":")[0]] = auth_paps

# Load the train set
df_train = shuffle(pd.read_csv('/content/drive/MyDrive/altegrad_datachallenge/data/train.csv', dtype={'authorID': np.int64, 'h_index': np.float32})).reset_index(drop=True)
# Load the test set
df_test = pd.read_csv('/content/drive/MyDrive/altegrad_datachallenge/data/test.csv', dtype={'authorID': np.int64})
# Load the collaboration graph
G = nx.read_edgelist('/content/drive/MyDrive/altegrad_datachallenge/data/collaboration_network.edgelist', delimiter=' ', nodetype=int)

# Compute structural features for each node
core_number = nx.core_number(G)
avg_neighbor_degree = nx.average_neighbor_degree(G)

# Split into train/valid (last 10% of the shuffled train set becomes validation)
df_valid = df_train.iloc[int(len(df_train)*0.9):, :]
df_train = df_train.iloc[:int(len(df_train)*0.9), :]
_____no_output_____
Train
train_data = {}
for i, row in tqdm(df_train.iterrows()):
    author_id, y = str(int(row['authorID'])), row['h_index']
    degree, core_number_, avg_neighbor_degree_ = G.degree(int(author_id)), core_number[int(author_id)], avg_neighbor_degree[int(author_id)]
    author_embedding = torch.from_numpy(embedding_per_nodes[int(author_id)].reshape(1, -1))
    papers_ids = papers_per_author[author_id]
    papers_embedding = []
    num_papers = 0
    for id_paper in papers_ids:
        num_papers += 1
        try:
            papers_embedding.append(torch.from_numpy(embedding_per_paper[id_paper].reshape(1, -1)))
        except KeyError:
            print(f"Missing paper for {author_id}")
            papers_embedding.append(torch.zeros((1, 256)))
    papers_embedding = torch.cat(papers_embedding, dim=0)
    additional_features = torch.from_numpy(np.array([degree, core_number_, avg_neighbor_degree_, num_papers]).reshape(1, -1))
    y = torch.Tensor([y])
    train_data[author_id] = {'author_embedding': author_embedding, 'papers_embedding': papers_embedding, 'features': additional_features, 'target': y}

# Save, then free memory
save(train_data, '/content/drive/MyDrive/altegrad_datachallenge/data/d2v.train')
del train_data
_____no_output_____
Validation
valid_data = {}
for i, row in tqdm(df_valid.iterrows()):
    author_id, y = str(int(row['authorID'])), row['h_index']
    degree, core_number_, avg_neighbor_degree_ = G.degree(int(author_id)), core_number[int(author_id)], avg_neighbor_degree[int(author_id)]
    author_embedding = torch.from_numpy(embedding_per_nodes[int(author_id)].reshape(1, -1))
    papers_ids = papers_per_author[author_id]
    papers_embedding = []
    num_papers = 0
    for id_paper in papers_ids:
        num_papers += 1
        try:
            papers_embedding.append(torch.from_numpy(embedding_per_paper[id_paper].reshape(1, -1)))
        except KeyError:
            papers_embedding.append(torch.zeros((1, 256)))
    papers_embedding = torch.cat(papers_embedding, dim=0)
    additional_features = torch.from_numpy(np.array([degree, core_number_, avg_neighbor_degree_, num_papers]).reshape(1, -1))
    y = torch.Tensor([y])
    valid_data[author_id] = {'author_embedding': author_embedding, 'papers_embedding': papers_embedding, 'features': additional_features, 'target': y}

save(valid_data, '/content/drive/MyDrive/altegrad_datachallenge/data/d2v.valid')
del valid_data
_____no_output_____
Test
test_data = {}
for i, row in tqdm(df_test.iterrows()):
    author_id = str(int(row['authorID']))
    degree, core_number_, avg_neighbor_degree_ = G.degree(int(author_id)), core_number[int(author_id)], avg_neighbor_degree[int(author_id)]
    author_embedding = torch.from_numpy(embedding_per_nodes[int(author_id)].reshape(1, -1))
    papers_ids = papers_per_author[author_id]
    papers_embedding = []
    num_papers = 0
    for id_paper in papers_ids:
        num_papers += 1
        try:
            papers_embedding.append(torch.from_numpy(embedding_per_paper[id_paper].reshape(1, -1)))
        except KeyError:
            papers_embedding.append(torch.zeros((1, 256)))
    papers_embedding = torch.cat(papers_embedding, dim=0)
    additional_features = torch.from_numpy(np.array([degree, core_number_, avg_neighbor_degree_, num_papers]).reshape(1, -1))
    test_data[author_id] = {'author_embedding': author_embedding, 'papers_embedding': papers_embedding, 'features': additional_features}

# Free memory before saving
del G
del df_test
del embedding_per_paper
del papers_per_author
del core_number
del avg_neighbor_degree
del embedding_per_nodes

save(test_data, '/content/drive/MyDrive/altegrad_datachallenge/data/d2v.test', 4)
del test_data
_____no_output_____
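The BERT cells pad missing papers with `torch.zeros((1, 768))` while the Doc2Vec cells use `torch.zeros((1, 256))`. A hedged sketch (`zero_pad_like` is a hypothetical helper, shown on plain lists) of inferring the pad width from a stored embedding so one loop could serve both pipelines:

```python
def zero_pad_like(embedding_per_paper, default_dim=256):
    """Return a zero vector whose width matches the stored embeddings.

    The BERT cells hard-code 768 and the Doc2Vec cells 256; inferring
    the width from any stored vector removes the duplicated constant.
    """
    for vec in embedding_per_paper.values():
        return [0.0] * len(vec)
    return [0.0] * default_dim  # empty dict: fall back to a default

pad = zero_pad_like({"p1": [0.5, 1.5]})
```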
WeatherPy
----

Note
* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
!pip3 install citipy

# Dependencies and setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import requests
import time
from scipy.stats import linregress

# Import API key
from api_keys import weather_api_key

# Incorporate citipy to determine city based on latitude and longitude
from citipy import citipy

# Output file (CSV)
output_data_file = "output_data/cities.csv"

# Range of latitudes and longitudes
lat_range = (-90, 90)
lng_range = (-180, 180)
_____no_output_____
MIT
WeatherPy/WeatherPy.ipynb
ball4410/python-api-challenge
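`linregress` is imported above for the trend analysis that follows in this notebook. As a sanity check of what it returns, a pure-Python sketch of the same ordinary-least-squares slope and intercept (`fit_line` is a hypothetical stand-in, not a scipy API):

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept.

    Stand-in for the first two values of scipy.stats.linregress(xs, ys).
    """
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # Covariance of x,y over variance of x gives the slope.
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, my - slope * mx

slope, intercept = fit_line([0, 1, 2, 3], [1, 3, 5, 7])  # data on y = 2x + 1
```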
Generate Cities List
# Lists for holding lat_lngs and cities
lat_lngs = []
cities = []

# Create a set of random lat and lng combinations
lats = np.random.uniform(lat_range[0], lat_range[1], size=1500)
lngs = np.random.uniform(lng_range[0], lng_range[1], size=1500)
lat_lngs = zip(lats, lngs)

# Identify the nearest city for each lat, lng combination
for lat_lng in lat_lngs:
    city = citipy.nearest_city(lat_lng[0], lat_lng[1]).city_name
    # If the city is unique, add it to our cities list
    if city not in cities:
        cities.append(city)

# Print the city count to confirm a sufficient sample
len(cities)  # 617
_____no_output_____
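`if city not in cities` rescans the list on every lookup, which is O(n) per city. A sketch of the same order-preserving deduplication backed by a set (`unique_in_order` is a hypothetical name):

```python
def unique_in_order(names):
    """Deduplicate while preserving first-seen order.

    Same result as the `if city not in cities` loop above, but set
    membership makes each lookup O(1) instead of scanning the list.
    """
    seen = set()
    out = []
    for name in names:
        if name not in seen:
            seen.add(name)
            out.append(name)
    return out

cities = unique_in_order(["hilo", "kapaa", "hilo", "vaini"])
```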
Perform API Calls
* Perform a weather check on each city using a series of successive API calls.
* Include a print log of each city as it's being processed (with the city number and city name).
# Save config information
url = "http://api.openweathermap.org/data/2.5/weather?units=Imperial&"
base_url = f"{url}APPID={weather_api_key}&q="

city_data = []
print("Beginning Data Retrieval")
print("--------------------------")

# Iterate through the list of cities, requesting weather data for each
for index, city in enumerate(cities):
    print(f"Processing record {index}: {city}")
    try:
        # Assemble the URL and make the API request
        response = requests.get(base_url + city).json()
        city_lat = response['coord']["lat"]
        city_lon = response['coord']["lon"]
        max_temp = response['main']['temp_max']
        humidity = response['main']['humidity']
        cloudiness = response['clouds']['all']
        wind_speed = response['wind']['speed']
        country = response['sys']['country']
        date = response['dt']
        # Store the data for each city found
        city_data.append({"City": city,
                          "Lat": city_lat,
                          "Lon": city_lon,
                          "Max Temp": max_temp,
                          "Humidity": humidity,
                          "Cloudiness": cloudiness,
                          "Wind Speed": wind_speed,
                          "Country": country,
                          "Date": date})
    except (KeyError, IndexError):
        print("City Not found.Skipping...")

print("--------------------------")
print("Data Retrieval Complete")
print("--------------------------")
Beginning Data Retrieval -------------------------- Processing record 0: souillac Processing record 1: shubarkuduk http://api.openweathermap.org/data/2.5/weather?units=Imperial&APPID=9ffa18eb275e20275c32d718a9a31ad7&q=shubarkuduk Processing record 2: bukavu Processing record 3: tuktoyaktuk Processing record 4: westport Processing record 5: hermanus Processing record 6: east london Processing record 7: sibolga Processing record 8: kaitangata Processing record 9: sitka Processing record 10: punta arenas Processing record 11: malwan City Not found.Skipping... Processing record 12: beringovskiy Processing record 13: rikitea Processing record 14: busselton Processing record 15: srednekolymsk Processing record 16: ushuaia Processing record 17: leningradskiy Processing record 18: bethel Processing record 19: puerto madero Processing record 20: port alfred Processing record 21: lebu Processing record 22: tsihombe City Not found.Skipping... Processing record 23: fortuna Processing record 24: niquelandia Processing record 25: saint-philippe Processing record 26: sayyan Processing record 27: butaritari Processing record 28: nogliki Processing record 29: puerto ayora Processing record 30: deputatskiy Processing record 31: bluff Processing record 32: geraldton Processing record 33: mahibadhoo Processing record 34: karratha Processing record 35: longyearbyen Processing record 36: laguna Processing record 37: siguiri Processing record 38: beroroha Processing record 39: upernavik Processing record 40: esperance Processing record 41: kapuskasing Processing record 42: fukuma Processing record 43: tuatapere Processing record 44: jamestown Processing record 45: clyde river Processing record 46: sisimiut Processing record 47: arraial do cabo Processing record 48: tasiilaq Processing record 49: nikolskoye Processing record 50: bredasdorp Processing record 51: taolanaro City Not found.Skipping... 
Processing record 52: thompson Processing record 53: mataura Processing record 54: kundiawa Processing record 55: grand river south east City Not found.Skipping... Processing record 56: cidreira Processing record 57: pevek Processing record 58: amderma City Not found.Skipping... Processing record 59: ribeira grande Processing record 60: albany Processing record 61: yar-sale Processing record 62: bitung Processing record 63: zadar Processing record 64: belgrade Processing record 65: damavand Processing record 66: attawapiskat City Not found.Skipping... Processing record 67: atuona Processing record 68: castro Processing record 69: seredka Processing record 70: saskylakh Processing record 71: hithadhoo Processing record 72: mahebourg Processing record 73: teknaf Processing record 74: cape town Processing record 75: cherskiy Processing record 76: mys shmidta City Not found.Skipping... Processing record 77: hobart Processing record 78: ilo Processing record 79: salinopolis Processing record 80: carnarvon Processing record 81: bonavista Processing record 82: dikson Processing record 83: christchurch Processing record 84: saint-joseph Processing record 85: toamasina Processing record 86: mar del plata Processing record 87: arman Processing record 88: kodiak Processing record 89: louisbourg City Not found.Skipping... Processing record 90: bilibino Processing record 91: provideniya Processing record 92: bambous virieux Processing record 93: shangrao Processing record 94: vaini Processing record 95: aguimes Processing record 96: sur Processing record 97: anaco Processing record 98: mancio lima Processing record 99: katsuura Processing record 100: utiroa City Not found.Skipping... 
Processing record 101: barrow Processing record 102: sokna Processing record 103: ericeira Processing record 104: dakar Processing record 105: moose factory Processing record 106: georgetown Processing record 107: khatanga Processing record 108: abu samrah Processing record 109: sindor Processing record 110: mayo Processing record 111: san patricio Processing record 112: san rafael del sur Processing record 113: salalah Processing record 114: barentsburg City Not found.Skipping... Processing record 115: saint pete beach Processing record 116: aktash Processing record 117: karauzyak City Not found.Skipping... Processing record 118: airai Processing record 119: inongo Processing record 120: muskegon Processing record 121: torbay Processing record 122: rungata City Not found.Skipping... Processing record 123: bayanday Processing record 124: natal Processing record 125: jiuquan Processing record 126: port elizabeth Processing record 127: iqaluit Processing record 128: stornoway Processing record 129: oxapampa Processing record 130: kavaratti Processing record 131: hasaki Processing record 132: nizhniy baskunchak Processing record 133: chuy Processing record 134: kudahuvadhoo Processing record 135: conceicao do araguaia Processing record 136: cheuskiny City Not found.Skipping... Processing record 137: walvis bay Processing record 138: nanortalik Processing record 139: namatanai Processing record 140: lagoa Processing record 141: dalmeny Processing record 142: chokurdakh Processing record 143: ossora Processing record 144: boende Processing record 145: ancud Processing record 146: ibra Processing record 147: banda aceh Processing record 148: moindou Processing record 149: colac Processing record 150: kruisfontein Processing record 151: anloga Processing record 152: belushya guba City Not found.Skipping... 
Processing record 153: coquimbo Processing record 154: severo-kurilsk Processing record 155: nuevitas Processing record 156: naze Processing record 157: norman wells Processing record 158: porto santo Processing record 159: wanxian Processing record 160: jalu Processing record 161: shiyan Processing record 162: vysokogornyy Processing record 163: la ronge Processing record 164: del rio Processing record 165: takoradi Processing record 166: kilindoni Processing record 167: kegayli City Not found.Skipping... Processing record 168: barinas Processing record 169: tabiauea City Not found.Skipping... Processing record 170: iskateley Processing record 171: sioux city Processing record 172: farafangana Processing record 173: valley Processing record 174: nanzhang Processing record 175: solnechnyy Processing record 176: haibowan City Not found.Skipping... Processing record 177: bengkulu Processing record 178: vostok Processing record 179: kapaa Processing record 180: huarmey Processing record 181: katherine Processing record 182: cabo san lucas Processing record 183: lensk Processing record 184: hilo Processing record 185: saint george Processing record 186: vila Processing record 187: manokwari Processing record 188: praia Processing record 189: bilma Processing record 190: new norfolk Processing record 191: touros Processing record 192: marsa matruh Processing record 193: ixtapa Processing record 194: pascagoula Processing record 195: san francisco Processing record 196: husavik Processing record 197: gulshat City Not found.Skipping... Processing record 198: auray Processing record 199: avarua Processing record 200: korla Processing record 201: burica City Not found.Skipping... Processing record 202: washington Processing record 203: roald Processing record 204: paulo afonso Processing record 205: samusu City Not found.Skipping... 
Processing record 206: lompoc Processing record 207: kulhudhuffushi Processing record 208: mnogovershinnyy Processing record 209: naftah City Not found.Skipping... Processing record 210: casino Processing record 211: haapiti Processing record 212: sechura Processing record 213: los llanos de aridane Processing record 214: mlonggo Processing record 215: seddon Processing record 216: vardo Processing record 217: muros Processing record 218: qaanaaq Processing record 219: inhambane Processing record 220: tiksi Processing record 221: kyra Processing record 222: portland Processing record 223: burkhala City Not found.Skipping... Processing record 224: te anau Processing record 225: dyakonovo City Not found.Skipping... Processing record 226: makat Processing record 227: teneguiban City Not found.Skipping... Processing record 228: yeppoon Processing record 229: zelenyy bor Processing record 230: kargasok
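The loop above fires requests back to back, and the `time` module imported in the setup cell is never used; free weather-API tiers typically rate-limit clients. A hedged sketch of pacing the calls (`fetch_all` is a hypothetical helper, and the 60-calls-per-minute figure is an assumption, not from the source):

```python
import time

def fetch_all(cities, fetch, calls_per_minute=60):
    """Call `fetch(city)` for each city, pausing to stay under a rate limit.

    `fetch` stands in for the requests.get(...).json() lookup above;
    a KeyError from it mirrors the "City Not found" branch and skips
    that city.
    """
    delay = 60.0 / calls_per_minute
    results = {}
    for city in cities:
        try:
            results[city] = fetch(city)
        except KeyError:
            continue  # city not found: skip, as in the loop above
        time.sleep(delay)
    return results

# Tiny fake fetcher; a huge rate keeps the example fast.
data = fetch_all(["hilo"], lambda c: {"city": c}, calls_per_minute=60000)
```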
Convert Raw Data to DataFrame
* Export the city data into a .csv.
* Display the DataFrame
# Build the city data into a DataFrame and export it to a CSV file
city_df = pd.DataFrame(city_data)
city_df.to_csv(output_data_file, index_label="City ID")

# Inspect the result
city_df.head()
city_df.count()
city_df.describe()
_____no_output_____
Inspect the data and remove the cities where the humidity > 100%.
----
Skip this step if there are no cities that have humidity > 100%.
# Inspect cities with humidity over 100% (no rows returned for this run)
city_df.loc[city_df["Humidity"] > 100]

# Get the indices of cities that have humidity over 100%
humid_indices = city_df.loc[city_df["Humidity"] > 100].index

# Make a new DataFrame with the humidity outliers dropped by index.
# Passing "inplace=False" (the default) makes a copy of the city_df
# DataFrame, which we call "clean_city_data".
clean_city_data = city_df.drop(humid_indices, inplace=False)
_____no_output_____
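The outlier drop above can also be sketched without pandas, as a plain filter over row dicts shaped like the entries collected in `city_data` (`drop_humidity_outliers` is a hypothetical name):

```python
def drop_humidity_outliers(records, limit=100):
    """Keep only rows whose humidity is physically plausible (<= limit %).

    List-of-dicts stand-in for the boolean-mask filtering done with
    the DataFrame above.
    """
    return [row for row in records if row["Humidity"] <= limit]

rows = [{"City": "a", "Humidity": 55}, {"City": "b", "Humidity": 120}]
clean = drop_humidity_outliers(rows)
```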