markdown stringlengths 0 37k | code stringlengths 1 33.3k | path stringlengths 8 215 | repo_name stringlengths 6 77 | license stringclasses 15
values |
|---|---|---|---|---|
2.1 What approach did you use to compute the coefficients theta and produce prediction for ENTRIESn_hourly in your regression model:
Ordinary Least Squares (OLS) was used for the linear regression for this model.
2.2 What features (input variables) did you use in your model? Did you use any dummy variables as part of y... | # Create a new column, stall_num2, representing the proportion of entries through a stall across the entire period.
total_patrons = df.ENTRIESn_hourly.sum()
# Dataframe with the units, and total passing through each unit across the time period
total_by_stall = pd.DataFrame(df.groupby('UNIT').ENTRIESn_hourly.sum())
# ... | intro_to_DS_assignment/Assignment 1 Submission Notebook.ipynb | chris-jd/udacity | mit |
2.3 Why did you select these features in your model?
The first step was to qualitatively assess which parameters may be useful for the model. This begins with looking at a list of the data, and the type of data, which has been captured, illustrated as follows. | for i in df.columns.tolist(): print i, | intro_to_DS_assignment/Assignment 1 Submission Notebook.ipynb | chris-jd/udacity | mit |
Some parameters are going to be clearly important:
- UNIT/station - ridership will vary between entry points;
- hour - ridership will definitely be different between peak hour and 4am; and
- weekday - it is intuitive that there will be more entries on weekdays; this is clearly illustrated in the visualisations in sectio... | plt.figure(figsize=[8,6])
corr = df[['ENTRIESn_hourly',
'EXITSn_hourly',
'day_week', # Day of the week (0-6)
'weekday', # Whether it is a weekday or not
'day', # Day of the month
'hour', # In set [4, 8, 12, 16, 20, 24]
'fog',
'precipi',
'rain... | intro_to_DS_assignment/Assignment 1 Submission Notebook.ipynb | chris-jd/udacity | mit |
The final selection of variables was determined through trial and error of rational combinations of variables. The station popularity was captured using the stall_num2 variable, since it appears to create a superior model compared with just using UNIT dummies, and because it allowed the creation of combinations. Com... | # Construct and fit the model
mod = sm.OLS.from_formula('ENTRIESn_hourly ~ rain:C(hour) + stall_num2*C(hour) + stall_num2*weekday', data=df)
res = mod.fit_regularized()
s = res.summary2() | intro_to_DS_assignment/Assignment 1 Submission Notebook.ipynb | chris-jd/udacity | mit |
Due to the use of several combinations, there are very few non-dummy features, with the coefficients illustrated below. Since stall_num2 is also used in several combinations, its individual coefficient isn't very informative on its own. | s.tables[1].ix[['Intercept', 'stall_num2']] | intro_to_DS_assignment/Assignment 1 Submission Notebook.ipynb | chris-jd/udacity | mit |
However, looking at all the combinations involving stall_num2 provides greater insight. Here we can see that activity is greater on weekdays, and greatest in the 16:00-20:00hrs block. It is lowest in the 00:00-04:00hrs block, not shown as it was removed by the model due to the generic stall_num2 parameter being there; th... | s.tables[1].ix[[i for i in s.tables[1].index if i[:5]=='stall']] | intro_to_DS_assignment/Assignment 1 Submission Notebook.ipynb | chris-jd/udacity | mit |
Even more interesting are the coefficients for the rain combinations. These appear to indicate that patronage increases in the 08:00-12:00 and 16:00-20:00 blocks, corresponding to peak hours. Conversely, subway entries are lower at all other times. Could it be that subway usage increases if it is raining when people are travell... | s.tables[1].ix[[i for i in s.tables[1].index if i[:4]=='rain']] | intro_to_DS_assignment/Assignment 1 Submission Notebook.ipynb | chris-jd/udacity | mit |
2.5 What is your model’s R2 (coefficients of determination) value? | print 'Model Coefficient of Determination (R-squared): {:.3f}'.format(res.rsquared) | intro_to_DS_assignment/Assignment 1 Submission Notebook.ipynb | chris-jd/udacity | mit |
The final R-squared value of 0.74 is much greater than earlier models that used UNIT as a dummy variable, which had R-squared values around 0.55.
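As a sanity check, R-squared can always be recomputed from first principles as 1 - SS_res / SS_tot. A self-contained toy illustration (hypothetical numbers, not the model above):

```python
import numpy as np

# Hypothetical toy fit: R-squared is 1 - SS_res / SS_tot.
y = np.array([1.0, 2.0, 3.0, 4.0])        # observed values
y_hat = np.array([1.1, 1.9, 3.2, 3.8])    # model predictions
ss_res = np.sum((y - y_hat) ** 2)         # residual sum of squares
ss_tot = np.sum((y - y.mean()) ** 2)      # total sum of squares
r_squared = 1 - ss_res / ss_tot
print(r_squared)  # about 0.98
```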
2.6 What does this R2 value mean for the goodness of fit for your regression model? Do you think this linear model to predict ridership is appropriate for this dataset, given... | residuals = res.resid
sns.set_style('whitegrid')
sns.distplot(residuals,bins=np.arange(-10000,10001,200),
kde = False, # kde_kws={'kernel':'gau', 'gridsize':4000, 'bw':100},
fit=sps.cauchy, fit_kws={'gridsize':4000})
plt.xlim(-5000,5000)
plt.title('Distribution of Residuals\nwith fitted cauchy D... | intro_to_DS_assignment/Assignment 1 Submission Notebook.ipynb | chris-jd/udacity | mit |
Secondly, a scatterplot of the residuals against the expected values is plotted. As expected, the largest residuals are associated with cases where the traffic is largest. In general the model appears to underpredict the traffic at the busiest of units. Also clear on this plot is how individual stations form a 'streak'... | sns.set_style('whitegrid')
fig = plt.figure(figsize=[6,6])
plt.xlabel('ENTRIESn_hourly')
plt.ylabel('Residuals')
plt.scatter(df.ENTRIESn_hourly, residuals,
c=(df.stall_num2*total_stall_stddev+total_stall_mean)*100, # denormalise values
cmap='YlGnBu')
plt.colorbar(label='UNIT Relative Traffic (%)... | intro_to_DS_assignment/Assignment 1 Submission Notebook.ipynb | chris-jd/udacity | mit |
Additionally, note that the condition number for the final model is relatively low, hence there don't appear to be any collinearity issues with this model. By comparison, when UNIT was included as a dummy variable instead, the correlation was weaker and the condition number was up around 220. | print 'Condition Number: {:.2f}'.format(res.condition_number) | intro_to_DS_assignment/Assignment 1 Submission Notebook.ipynb | chris-jd/udacity | mit |
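For reference, a condition number like the one the summary reports can also be computed directly from a design matrix with NumPy; near-collinear columns inflate it sharply. A toy illustration (hypothetical matrices, not the model's actual design matrix):

```python
import numpy as np

# Orthonormal columns give the minimum possible condition number.
well_conditioned = np.array([[1.0, 0.0],
                             [0.0, 1.0]])
# Nearly identical columns make the matrix close to singular.
near_collinear = np.array([[1.0, 1.0],
                           [1.0, 1.0001]])
print(np.linalg.cond(well_conditioned))  # 1.0
print(np.linalg.cond(near_collinear))    # very large
```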
In summary, it appears that this linear model has done a reasonable job of predicting ridership in this instance. Clearly some improvements are possible (like fixing the predictions of negative entries!), but given there will always be a degree of random variation, an R-squared value of 0.74 for a linear model seems qu... | sns.set_style('white')
sns.set_context('talk')
mydf = df.copy()
mydf['rain'] = mydf.rain.apply(lambda x: 'Raining' if x else 'Not Raining')
raindata = df[df.rain==1].ENTRIESn_hourly.tolist()
noraindata = df[df.rain==0].ENTRIESn_hourly.tolist()
fig = plt.figure(figsize=[9,6])
ax = fig.add_subplot(111)
plt.hist([raindat... | intro_to_DS_assignment/Assignment 1 Submission Notebook.ipynb | chris-jd/udacity | mit |
Once both plots are normalised, the distributions of subway entries when raining and not raining are almost identical. No useful differentiation can be made between the two datasets here.
3.2 One visualization can be more freeform. You should feel free to implement something that we discussed in class (e.g., scatter ... | # Plot to illustrate the average riders per time block for each weekday.
# First we need to sum up the entries per hour (category) per weekday across all units.
# This is done for every day, whilst retaining the 'day_week' field for convenience. reset_index puts it back into a standard dataframe
# For the sake of illus... | intro_to_DS_assignment/Assignment 1 Submission Notebook.ipynb | chris-jd/udacity | mit |
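This visual conclusion can be checked quantitatively with a Mann-Whitney U test. A self-contained sketch using small synthetic stand-ins (hypothetical values) for the raindata and noraindata lists above:

```python
import scipy.stats as sps

# Synthetic stand-ins, not the notebook's ENTRIESn_hourly values.
raindata = [120, 300, 45, 2000, 15, 600]
noraindata = [100, 280, 60, 1900, 20, 650]
u_stat, p_value = sps.mannwhitneyu(raindata, noraindata,
                                   alternative='two-sided')
print(u_stat, p_value)  # a large p-value means no detectable difference
```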
Recurrent Neural Networks (RNNs)
Recurrent Neural Networks (RNNs) are an interesting application of deep learning that allow models to predict the future. While regression models attempt to fit an equation to existing data and extend the predictive power of the equation into the future, RNNs fit a model and use sequenc... | ! chmod 600 kaggle.json && (ls ~/.kaggle 2>/dev/null || mkdir ~/.kaggle) && mv kaggle.json ~/.kaggle/ && echo 'Done' | content/05_deep_learning/01_recurrent_neural_networks/colab.ipynb | google/applied-machine-learning-intensive | apache-2.0 |
Next, download the data from Kaggle. | !kaggle datasets download joshmcadams/engine-vibrations
!ls | content/05_deep_learning/01_recurrent_neural_networks/colab.ipynb | google/applied-machine-learning-intensive | apache-2.0 |
Now load the data into a DataFrame. | import pandas as pd
df = pd.read_csv('engine-vibrations.zip')
df.describe() | content/05_deep_learning/01_recurrent_neural_networks/colab.ipynb | google/applied-machine-learning-intensive | apache-2.0 |
We know the data contains readings of engine vibration over time. Let's see how that looks on a line chart. | import matplotlib.pyplot as plt
plt.figure(figsize=(24, 8))
plt.plot(list(range(len(df['mm']))), df['mm'])
plt.show() | content/05_deep_learning/01_recurrent_neural_networks/colab.ipynb | google/applied-machine-learning-intensive | apache-2.0 |
That's quite a tough chart to read. Let's sample it. | import matplotlib.pyplot as plt
plt.figure(figsize=(24, 8))
plt.plot(list(range(100)), df['mm'].iloc[:100])
plt.show() | content/05_deep_learning/01_recurrent_neural_networks/colab.ipynb | google/applied-machine-learning-intensive | apache-2.0 |
See if any of the data is missing. | df.isna().any() | content/05_deep_learning/01_recurrent_neural_networks/colab.ipynb | google/applied-machine-learning-intensive | apache-2.0 |
Finally, we'll do a box plot to check whether the data is symmetrically distributed, which it is. | import seaborn as sns
_ = sns.boxplot(df['mm']) | content/05_deep_learning/01_recurrent_neural_networks/colab.ipynb | google/applied-machine-learning-intensive | apache-2.0 |
There is not much more EDA we need to do at this point. Let's move on to modeling.
Preparing the Data
Currently we have a series of data that contains a single list of vibration values over time. When training our model and when asking for predictions, we'll want to instead feed the model a subset of our sequence.
We f... | import numpy as np
X = []
y = []
sseq_len = 50
for i in range(0, len(df['mm']) - sseq_len):
    X.append(df['mm'][i:i+sseq_len])
    y.append(df['mm'][i+sseq_len])  # the value immediately following the window
y = np.array(y)
X = np.array(X)
X.shape, y.shape | content/05_deep_learning/01_recurrent_neural_networks/colab.ipynb | google/applied-machine-learning-intensive | apache-2.0 |
We also need to explicitly set the final dimension of the data in order to have it pass through our model. | X = np.expand_dims(X, axis=2)
y = np.expand_dims(y, axis=1)
X.shape, y.shape | content/05_deep_learning/01_recurrent_neural_networks/colab.ipynb | google/applied-machine-learning-intensive | apache-2.0 |
We'll also standardize our data for the model. Note that we don't normalize here because we need to be able to reproduce negative values. | data_std = df['mm'].std()
data_mean = df['mm'].mean()
X = (X - data_mean) / data_std
y = (y - data_mean) / data_std
X.max(), y.max(), X.min(), y.min() | content/05_deep_learning/01_recurrent_neural_networks/colab.ipynb | google/applied-machine-learning-intensive | apache-2.0 |
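Any predictions the model makes will come out in standardized units, so they need the inverse transform to get back to millimetres. A minimal self-contained sketch with toy numbers (not the notebook's data):

```python
import numpy as np

# Toy stand-in for df['mm']: standardize, then invert for predictions.
data = np.array([-2.0, 0.0, 1.0, 5.0])
data_mean, data_std = data.mean(), data.std()
z = (data - data_mean) / data_std      # what the model sees
restored = z * data_std + data_mean    # back to millimetres
print(np.allclose(restored, data))     # True
```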
And for final testing after model training, we'll split off 20% of the data. | from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=0) | content/05_deep_learning/01_recurrent_neural_networks/colab.ipynb | google/applied-machine-learning-intensive | apache-2.0 |
Setting a Baseline
We are only training with 50 data points at a time. This is well within the bounds of what a standard deep neural network can handle, so let's first see what a very simple neural network can do. | import math
import matplotlib.pyplot as plt
import tensorflow as tf
import tensorflow.keras as keras
tf.random.set_seed(0)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[sseq_len, 1]),
keras.layers.Dense(1)
])
model.compile(
loss='mse',
optimizer='Adam',
metrics=['mae', 'mse'],
)
stop... | content/05_deep_learning/01_recurrent_neural_networks/colab.ipynb | google/applied-machine-learning-intensive | apache-2.0 |
We quickly converged and, when we ran the model, we got a baseline quality value of 0.03750885081060467.
The Most Basic RNN
Let's contrast a basic feedforward neural network with a basic RNN. To do this we simply need to use the SimpleRNN layer in our network in place of the Dense layer in our network above. Notice tha... | tf.random.set_seed(0)
model = keras.models.Sequential([
keras.layers.SimpleRNN(1, input_shape=[None, 1])
])
model.compile(
loss='mse',
optimizer='Adam',
metrics=['mae', 'mse'],
)
stopping = tf.keras.callbacks.EarlyStopping(
monitor='loss',
min_delta=0,
patience=2)
history = model.fit(X_train, y_train,... | content/05_deep_learning/01_recurrent_neural_networks/colab.ipynb | google/applied-machine-learning-intensive | apache-2.0 |
Our model converged a little more slowly, and it got an error of 0.8974118571865628, which is not an improvement over the baseline model.
A Deep RNN
Let's try to build a deep RNN and see if we can get better results.
In the model below, we stick together four layers ranging in width from 50 nodes to our final outp... | tf.random.set_seed(0)
model = keras.models.Sequential([
keras.layers.SimpleRNN(50, return_sequences=True, input_shape=[None, 1]),
keras.layers.SimpleRNN(20, return_sequences=True),
keras.layers.SimpleRNN(10, return_sequences=True),
keras.layers.SimpleRNN(1)
])
model.compile(
loss='mse',
optimizer=... | content/05_deep_learning/01_recurrent_neural_networks/colab.ipynb | google/applied-machine-learning-intensive | apache-2.0 |
Woah! What happened? Our MSE during training looked nice: 0.0496. But our final testing didn't perform much better than our simple model. We seem to have overfit!
We can try to simplify the model and add dropout layers to reduce overfitting, but even with a very basic model like the one below, we still get very differe... | tf.random.set_seed(0)
model = keras.models.Sequential([
keras.layers.SimpleRNN(2, return_sequences=True, input_shape=[None, 1]),
keras.layers.Dropout(0.3),
keras.layers.SimpleRNN(1),
keras.layers.Dense(1)
])
model.compile(
loss='mse',
optimizer='Adam',
metrics=['mae', 'mse'],
)
stopping = tf.ke... | content/05_deep_learning/01_recurrent_neural_networks/colab.ipynb | google/applied-machine-learning-intensive | apache-2.0 |
Even with these measures, we still seem to be overfitting a bit. We could keep tuning, but let's instead look at some other types of neurons found in RNNs.
Long Short Term Memory
The RNN layers we've been using are basic neurons that have a very short memory. They tend to learn patterns that they have recently seen, bu... | tf.random.set_seed(0)
model = keras.models.Sequential([
keras.layers.LSTM(1, input_shape=[None, 1]),
])
model.compile(
loss='mse',
optimizer=tf.keras.optimizers.Adam(),
metrics=['mae', 'mse'],
)
stopping = tf.keras.callbacks.EarlyStopping(
monitor='loss',
min_delta=0,
patience=2)
history = model.fit... | content/05_deep_learning/01_recurrent_neural_networks/colab.ipynb | google/applied-machine-learning-intensive | apache-2.0 |
We got a test RMSE of 0.8989123704842217, which is still not better than our SimpleRNN. And in the more complex model below, we got close to the baseline but still didn't beat it. | tf.random.set_seed(0)
model = keras.models.Sequential([
keras.layers.LSTM(20, return_sequences=True, input_shape=[None, 1]),
keras.layers.Dropout(0.2),
keras.layers.LSTM(10),
keras.layers.Dropout(0.2),
keras.layers.Dense(1)
])
model.compile(
loss='mse',
optimizer='Adam',
metrics=['mae', 'mse... | content/05_deep_learning/01_recurrent_neural_networks/colab.ipynb | google/applied-machine-learning-intensive | apache-2.0 |
LSTM neurons can be very useful, but as we have seen, they aren't always the best option.
Let's look at one more neuron commonly found in RNN models, the GRU.
Gated Recurrent Unit
The Gated Recurrent Unit (GRU) is another special neuron that often shows up in Recurrent Neural Networks. The GRU is similar to the LSTM in... | tf.random.set_seed(0)
model = keras.models.Sequential([
keras.layers.GRU(1),
])
model.compile(
loss='mse',
optimizer='Adam',
metrics=['mae', 'mse'],
)
stopping = tf.keras.callbacks.EarlyStopping(
monitor='loss',
min_delta=0,
patience=2)
history = model.fit(X_train, y_train, epochs=50, callbacks=[sto... | content/05_deep_learning/01_recurrent_neural_networks/colab.ipynb | google/applied-machine-learning-intensive | apache-2.0 |
We got an RMSE of 0.9668634342193015, which isn't bad, but it still performs worse than our baseline.
Convolutional Layers
Convolutional layers aren't limited to image classification models. They can also be really handy when training RNNs. For training on a sequence of data, we use the Conv1D class as shown below. | tf.random.set_seed(0)
model = keras.models.Sequential([
keras.layers.Conv1D(filters=20, kernel_size=4, strides=2, padding="valid",
input_shape=[None, 1]),
keras.layers.GRU(2, input_shape=[None, 1], activation='relu'),
keras.layers.Dropout(0.2),
keras.layers.Dense(1),
])
model.... | content/05_deep_learning/01_recurrent_neural_networks/colab.ipynb | google/applied-machine-learning-intensive | apache-2.0 |
Recurrent Neural Networks are a powerful tool for sequence generation and prediction. But they aren't the only mechanism for sequence prediction. If the sequence you are predicting is short enough, then a standard deep neural network might be able to provide the predictions you are looking for.
Also note that we create... | # Your code goes here | content/05_deep_learning/01_recurrent_neural_networks/colab.ipynb | google/applied-machine-learning-intensive | apache-2.0 |
Exercise 2: Stock Price Prediction
Using the Stonks! dataset, create a recurrent neural network that can predict the stock price for the 'AAA' ticker. Calculate your RMSE with some holdout data.
Use as many text and code cells as you need to complete this exercise.
Hint: if predicting absolute prices doesn't yield a g... | # Your code goes here | content/05_deep_learning/01_recurrent_neural_networks/colab.ipynb | google/applied-machine-learning-intensive | apache-2.0 |
Let's Import Some Data through NOAA | %%time
heights = [] # empty array to append opened netCDF's to
temps = []
date_range = np.arange(1995,2001,1) # years to download; np.arange excludes the stop value, so 1995-2001 yields the years 1995 through 2000
for i in date_range:
url_h... | Spring_2019/LB29/GettingData_XRDASK.ipynb | milancurcic/lunch-bytes | cc0-1.0 |
Turn list of urls into one large, combined (concatenated) dataset based on time | %%time
concat_h = xr.open_mfdataset(heights) # aligns the lat, lon, lev values of all the datasets along the time dimension
%%time
concat_t = xr.open_mfdataset(temps) | Spring_2019/LB29/GettingData_XRDASK.ipynb | milancurcic/lunch-bytes | cc0-1.0 |
Take a peek to ensure everything was read successfully and understand the dataset that you have | concat_h, concat_t
%%time
concat_h = concat_h.sel(lat = slice(90,0), level = 500).resample(time = '24H').mean(dim = 'time')
%%time
concat_t = concat_t.sel(lat = slice(90,0), level = 925).resample(time = '24H').mean(dim = 'time') | Spring_2019/LB29/GettingData_XRDASK.ipynb | milancurcic/lunch-bytes | cc0-1.0 |
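The 24-hour resampling above works the same way in plain pandas, which can be easier to experiment with. A toy analogue (synthetic data, not the NOAA files):

```python
import numpy as np
import pandas as pd

# Synthetic 6-hourly series averaged down to daily means.
idx = pd.date_range('1995-01-01', periods=8, freq='6h')
series = pd.Series(np.arange(8.0), index=idx)
daily = series.resample('24h').mean()
print(daily.tolist())  # [1.5, 5.5]
```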
Take another peek | concat_h, concat_t | Spring_2019/LB29/GettingData_XRDASK.ipynb | milancurcic/lunch-bytes | cc0-1.0 |
Write out data for processing | %%time
concat_h.to_netcdf('heights_9520.nc')
%%time
concat_t.to_netcdf('temps_9520.nc') | Spring_2019/LB29/GettingData_XRDASK.ipynb | milancurcic/lunch-bytes | cc0-1.0 |
<b> Note: Make sure you generate an API Key and replace the value above. The sample key will not work.</b>
From the same API console, choose "Dashboard" on the left-hand menu and "Enable API".
Enable the following APIs for your project (search for them) if they are not already enabled:
<ol>
<li> Google Translate API </... | !pip install --upgrade google-api-python-client | CPB100/lab4c/mlapis.ipynb | turbomanage/training-data-analyst | apache-2.0 |
<h2> Invoke Translate API </h2> | # running Translate API
from googleapiclient.discovery import build
service = build('translate', 'v2', developerKey=APIKEY)
# use the service
inputs = ['is it really this easy?', 'amazing technology', 'wow']
outputs = service.translations().list(source='en', target='fr', q=inputs).execute()
# print outputs
for input, ... | CPB100/lab4c/mlapis.ipynb | turbomanage/training-data-analyst | apache-2.0 |
<h2> Invoke Vision API </h2>
The Vision API can work off an image in Cloud Storage or embedded directly into a POST message. I'll use Cloud Storage and do OCR on this image: <img src="https://storage.googleapis.com/cloud-training-demos/vision/sign2.jpg" width="200" />. That photograph is from http://www.publicdomainp... | # Running Vision API
import base64
IMAGE="gs://cloud-training-demos/vision/sign2.jpg"
vservice = build('vision', 'v1', developerKey=APIKEY)
request = vservice.images().annotate(body={
'requests': [{
'image': {
'source': {
'gcs_image_uri': IMAGE
... | CPB100/lab4c/mlapis.ipynb | turbomanage/training-data-analyst | apache-2.0 |
<h2> Translate sign </h2> | inputs=[foreigntext]
outputs = service.translations().list(source=foreignlang, target='en', q=inputs).execute()
# print(outputs)
for input, output in zip(inputs, outputs['translations']):
print("{0} -> {1}".format(input, output['translatedText'])) | CPB100/lab4c/mlapis.ipynb | turbomanage/training-data-analyst | apache-2.0 |
<h2> Sentiment analysis with Language API </h2>
Let's evaluate the sentiment of some famous quotes using Google Cloud Natural Language API. | lservice = build('language', 'v1beta1', developerKey=APIKEY)
quotes = [
'To succeed, you must have tremendous perseverance, tremendous will.',
'It’s not that I’m so smart, it’s just that I stay with problems longer.',
'Love is quivering happiness.',
'Love is of all passions the strongest, for it attacks simulta... | CPB100/lab4c/mlapis.ipynb | turbomanage/training-data-analyst | apache-2.0 |
<h2> Speech API </h2>
The Speech API can work on streaming data, audio content encoded and embedded directly into the POST message, or on a file on Cloud Storage. Here I'll pass in this <a href="https://storage.googleapis.com/cloud-training-demos/vision/audio.raw">audio file</a> in Cloud Storage. | sservice = build('speech', 'v1', developerKey=APIKEY)
response = sservice.speech().recognize(
body={
'config': {
'languageCode' : 'en-US',
'encoding': 'LINEAR16',
'sampleRateHertz': 16000
},
'audio': {
'uri': 'gs://cloud-training-demos/vision/a... | CPB100/lab4c/mlapis.ipynb | turbomanage/training-data-analyst | apache-2.0 |
Image as 'window'
The simplest use case is treating each image as an image window. This is implemented by default using ImageWindowDataset class from niftynet.contrib.dataset_sampler. This class also acts as a base class, which can be extended to generate smaller windows using different sampling strategies. | from niftynet.io.image_reader import ImageReader
from niftynet.engine.image_window_dataset import ImageWindowDataset
# creating an image reader.
data_param = \
{'CT': {'path_to_search': '~/niftynet/data/mr_ct_regression/CT_zero_mean',
'filename_contains': 'nii'}}
reader = ImageReader().initialise(data_... | demos/module_examples/ImageSampler.ipynb | NifTK/NiftyNet | apache-2.0 |
The sampler can be used as a numpy function, or a tensorflow operation.
Use the sampler as a numpy function
Directly call the instance (this is actually invoking sampler.layer_op): | windows = sampler()
print(windows.keys(), windows['CT_location'], windows['CT'].shape)
import matplotlib.pyplot as plt
plt.imshow(windows['CT'][0,:,:,0,0,0])
plt.show() | demos/module_examples/ImageSampler.ipynb | NifTK/NiftyNet | apache-2.0 |
Use the sampler as a tensorflow op
First add an iterator node, wrapped as pop_batch_op(),
then run the op. | import tensorflow as tf
# adding the tensorflow tensors
next_window = sampler.pop_batch_op()
# run the tensors
with tf.Session() as sess:
sampler.run_threads(sess) #initialise the iterator
windows = sess.run(next_window)
print(windows.keys(), windows['CT_location'], windows['CT'].shape)
| demos/module_examples/ImageSampler.ipynb | NifTK/NiftyNet | apache-2.0 |
The location array ['MR_location'] represents the spatial coordinates of the window:
[subject_id, x_start, y_start, z_start, x_end, y_end, z_end]
As a numpy function, the output shape is (1, x, y, z, 1, 1) which represents batch, width, height, depth, time points, channels. As a tensorflow op, the output shape is (b... | from niftynet.io.image_reader import ImageReader
from niftynet.engine.sampler_uniform_v2 import UniformSampler
# creating an image reader.
data_param = \
{'MR': {'path_to_search': '~/niftynet/data/mr_ct_regression/CT_zero_mean',
'filename_contains': 'nii',
'spati... | demos/module_examples/ImageSampler.ipynb | NifTK/NiftyNet | apache-2.0 |
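The location array can be unpacked into the window's spatial extent like this (toy values, not actual sampler output):

```python
# Toy location row in the format
# [subject_id, x_start, y_start, z_start, x_end, y_end, z_end]
location = [0, 10, 20, 0, 42, 52, 1]
subject_id = location[0]
x_start, y_start, z_start = location[1:4]
x_end, y_end, z_end = location[4:7]
window_shape = (x_end - x_start, y_end - y_start, z_end - z_start)
print(subject_id, window_shape)  # 0 (32, 32, 1)
```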
Grid sampler
Generating image windows from images with a sliding window.
This is implemented by overriding the layer_op of ImageWindowDataset.
The window_border parameter controls the amount of overlap between sampling windows.
When the grid sampler is used for fully convolutional inference, the overlapping region... | from niftynet.io.image_reader import ImageReader
from niftynet.engine.sampler_grid_v2 import GridSampler
# creating an image reader.
data_param = \
{'CT': {'path_to_search': '~/niftynet/data/mr_ct_regression/CT_zero_mean',
'filename_contains': 'nii'}}
reader = ImageReader().initialise(data_param)
# un... | demos/module_examples/ImageSampler.ipynb | NifTK/NiftyNet | apache-2.0 |
Visualisation of the window coordinates (change window_sizes and window_border to see different window allocations): | import numpy as np
import matplotlib.pyplot as plt
import matplotlib.patches as patches
from matplotlib.collections import PatchCollection
f, (ax1, ax) = plt.subplots(1,2)
# show image
_, img, _ = reader(idx=0)
print(img['CT'].shape)
plt.subplot(1,2,1)
plt.imshow(img['CT'][:,:,0,0,0])
# show sampled windows
all_patc... | demos/module_examples/ImageSampler.ipynb | NifTK/NiftyNet | apache-2.0 |
Weighted sampler
Generating image windows from images with a sampling prior of the foreground.
This sampler uses a cumulative histogram for fast sampling, and works with both continuous and discrete maps.
It is implemented by overriding the layer_op of ImageWindowDataset.
Weight map can be specified by an input specificat... | from niftynet.io.image_reader import ImageReader
from niftynet.engine.sampler_weighted_v2 import WeightedSampler
# creating an image reader.
data_param = \
{'CT': {'path_to_search': '~/niftynet/data/mr_ct_regression/CT_zero_mean',
'filename_contains': 'PAR.nii.gz'},
'sampler': {'path_to_... | demos/module_examples/ImageSampler.ipynb | NifTK/NiftyNet | apache-2.0 |
Balanced sampler
Generating image windows from images with a sampling prior of the foreground.
This sampler generates image windows from a discrete label map as if every label
had the same probability of occurrence.
It is implemented by overriding the layer_op of ImageWindowDataset.
Weight map can be specified by an ... | from niftynet.io.image_reader import ImageReader
from niftynet.engine.sampler_balanced_v2 import BalancedSampler
# creating an image reader.
data_param = \
{'CT': {'path_to_search': '~/niftynet/data/mr_ct_regression/CT_zero_mean',
'filename_contains': 'PAR.nii.gz'},
'sampler': {'path_to_... | demos/module_examples/ImageSampler.ipynb | NifTK/NiftyNet | apache-2.0 |
Visualisation of the window coordinates (change data_param see different window allocations): | import numpy as np
import matplotlib.pyplot as plt
import matplotlib.patches as patches
from matplotlib.collections import PatchCollection
f, (ax1, ax) = plt.subplots(1,2)
# show image
_, img, _ = reader(idx=0)
print(img['CT'].shape)
plt.subplot(1,2,1)
plt.imshow(img['CT'][:,:,0,0,0])
#plt.subplot(1,2,2)
ax.imshow(im... | demos/module_examples/ImageSampler.ipynb | NifTK/NiftyNet | apache-2.0 |
As with all REBOUNDx effects, the parameters must be input in the same units as the simulation (in this case AU/Msun/yr). We'll use the astropy units module to help avoid errors | density = (3000.0*u.kg/u.m**3).to(u.Msun/u.AU**3)
c = (constants.c).to(u.AU/u.yr) #speed of light
lstar = (3.828e26*u.kg*u.m**2/u.s**3).to(u.Msun*u.AU**2/u.yr**3) #luminosity of star
radius = (1000*u.m).to(u.AU) #radius of object
albedo = .017 #albedo of object
stef_boltz = constants.sigma_sb.to(u.Msun/u.yr**3/u.K**4) ... | ipython_examples/YarkovskyEffect.ipynb | dtamayo/reboundx | gpl-3.0 |
We then add the Yarkovsky effect and the required parameters for this version. Importantly, we must set 'ye_flag' to 0 to get the Full Version. Physical constants and the stellar luminosity get added to the effect yark | #Loads the effect into Rebound
rebx = reboundx.Extras(sim)
yark = rebx.load_force("yarkovsky_effect")
#Sets the parameters for the effect
yark.params["ye_c"] = c.value #set on the sim and not a particular particle
yark.params["ye_lstar"] = lstar.value #set on the sim and not a particular particle
yark.params["ye_stef_... | ipython_examples/YarkovskyEffect.ipynb | dtamayo/reboundx | gpl-3.0 |
Other parameters need to be added to each particle feeling the Yarkovsky effect | # Sets parameters for the particle
ps = sim.particles
ps[1].r = radius.value #remember radius is not set as a Rebx parameter - it's set on the particle in the Rebound sim
ps[1].params["ye_flag"] = 0 #setting this flag to 0 will give us the full version of the effect
ps[1].params["ye_body_density"] = density.val... | ipython_examples/YarkovskyEffect.ipynb | dtamayo/reboundx | gpl-3.0 |
We integrate this system for 100,000 years and print out the difference between the particle's semi-major axis before and after the simulation. | %%time
tmax=100000 # in yrs
Nout = 1000
times = np.linspace(0, tmax, Nout)
a_start = .5 #starting semi-major axis for the asteroid
a = np.zeros(Nout)
for i, time in enumerate(times):
a[i] = ps[1].a
sim.integrate(time)
a_final = ps[1].a #semi-major axis of asteroid after the sim
print(... | ipython_examples/YarkovskyEffect.ipynb | dtamayo/reboundx | gpl-3.0 |
Simple Version
This version of the effect is based on equations from Veras et al. (2019). Once again, a link to this paper is provided below. This version simplifies the equations by placing constant values in a rotation matrix that in general is time-dependent. It requires fewer parameters than the full version an... | sim = rebound.Simulation()
sim.units = ('yr', 'AU', 'Msun') #changes simulation and G to units of solar masses, years, and AU
sim.integrator = "whfast" #integrator for sim
sim.dt = .05 #timestep for sim
sim.add(m=1) #Adds Sun
sim.add(a=.5, f=0, Omega=0, omega=0, e=0, inc=0, m=0) #adds test particle
sim.add(a=.75,... | ipython_examples/YarkovskyEffect.ipynb | dtamayo/reboundx | gpl-3.0 |
We then add the Yarkovsky effect from Reboundx and the necessary parameters for this version. This time, we must make sure that 'ye_flag' is set to 1 or -1 to get the Simple Version of the effect. Setting it to 1 will push the asteroid outwards, while setting it to -1 will push it inwards. We'll push out our original as... | #Loads the effect into Rebound
rebx = reboundx.Extras(sim)
yark = rebx.load_force("yarkovsky_effect")
#Sets the parameters for the effect
yark.params["ye_c"] = c.value
yark.params["ye_lstar"] = lstar.value
ps = sim.particles #simplifies way to access particles parameters
ps[1].params["ye_flag"] = 1 #setting this fla... | ipython_examples/YarkovskyEffect.ipynb | dtamayo/reboundx | gpl-3.0 |
Now we run the sim for 100,000 years and print out the results for both asteroids. Note the difference in simulation times between the versions. Even with an extra particle, the simple version was faster than the full version. | %%time
tmax=100000 # in yrs
a_start_1 = .5 #starting semi-major axis for the 1st asteroid
a_start_2 = .75 #starting semi-major axis for the 2nd asteroid
a1, a2 = np.zeros(Nout), np.zeros(Nout)
for i, time in enumerate(times):
a1[i] = ps[1].a
a2[i] = ps[2].a
sim.integrate(time)
a_final_1 = ps[1].a #semi-m... | ipython_examples/YarkovskyEffect.ipynb | dtamayo/reboundx | gpl-3.0 |
This chapter presents a simple model of a bike share system and
demonstrates the features of Python we'll use to develop simulations of real-world systems.
Along the way, we'll make decisions about how to model the system. In
the next chapter we'll review these decisions and gradually improve the model.
Modeling a Bike... | bikeshare = State(olin=10, wellesley=2) | python/soln/chap02.ipynb | AllenDowney/ModSim | gpl-2.0 |
The expressions in parentheses are keyword arguments.
They create two variables, olin and wellesley, and give them values.
Then we call the State function.
The result is a State object, which is a collection of state variables.
In this example, the state variables represent the number of
bikes at each location. The ini... | bikeshare.olin | python/soln/chap02.ipynb | AllenDowney/ModSim | gpl-2.0 |
And this: | bikeshare.wellesley | python/soln/chap02.ipynb | AllenDowney/ModSim | gpl-2.0 |
Or, to display the state variables and their values, you can just type the name of the object: | bikeshare | python/soln/chap02.ipynb | AllenDowney/ModSim | gpl-2.0 |
These values make up the state of the system.
The ModSim library provides a function called show that displays a State object as a table. | show(bikeshare) | python/soln/chap02.ipynb | AllenDowney/ModSim | gpl-2.0 |
You don't have to use show, but I think it looks better.
We can update the state by assigning new values to the variables.
For example, if a student moves a bike from Olin to Wellesley, we can figure out the new values and assign them: | bikeshare.olin = 9
bikeshare.wellesley = 3 | python/soln/chap02.ipynb | AllenDowney/ModSim | gpl-2.0 |
Or we can use update operators, -= and +=, to subtract 1 from
olin and add 1 to wellesley: | bikeshare.olin -= 1
bikeshare.wellesley += 1 | python/soln/chap02.ipynb | AllenDowney/ModSim | gpl-2.0 |
The result is the same either way.
Defining functions
So far we have used functions defined in NumPy and ModSim. Now we're going to define our own function.
When you are developing code in Jupyter, it is often efficient to write a few lines of code, test them to confirm they do what you intend, and then use them to def... | bikeshare.olin -= 1
bikeshare.wellesley += 1 | python/soln/chap02.ipynb | AllenDowney/ModSim | gpl-2.0 |
Rather than repeat them every time a bike moves, we can define a new
function: | def bike_to_wellesley():
bikeshare.olin -= 1
bikeshare.wellesley += 1 | python/soln/chap02.ipynb | AllenDowney/ModSim | gpl-2.0 |
def is a special word in Python that indicates we are defining a new
function. The name of the function is bike_to_wellesley. The empty
parentheses indicate that this function requires no additional
information when it runs. The colon indicates the beginning of an
indented code block.
The next two lines are the body of... | bike_to_wellesley() | python/soln/chap02.ipynb | AllenDowney/ModSim | gpl-2.0 |
When you call the function, it runs the statements in the body, which
update the variables of the bikeshare object; you can check by
displaying the new state. | bikeshare | python/soln/chap02.ipynb | AllenDowney/ModSim | gpl-2.0 |
When you call a function, you have to include the parentheses. If you
leave them out, you get this: | bike_to_wellesley | python/soln/chap02.ipynb | AllenDowney/ModSim | gpl-2.0 |
This result indicates that bike_to_wellesley is a function. You don't
have to know what __main__ means, but if you see something like this,
it probably means that you looked up a function but you didn't actually
call it. So don't forget the parentheses.
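The difference between looking a function up and calling it is easy to check directly. This sketch uses a throwaway function (hypothetical, not from the chapter) to show both forms:

```python
def greet():
    return 'hello'

# Writing the name without parentheses gives you the function object itself.
looked_up = greet
print(looked_up)      # something like <function greet at 0x...>

# Adding parentheses actually runs the body and gives you its result.
result = greet()
print(result)         # hello
```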
Print statements
As you write more complicated programs, it is eas... | bikeshare.olin
bikeshare.wellesley | python/soln/chap02.ipynb | AllenDowney/ModSim | gpl-2.0 |
Jupyter runs both lines, but it only displays the value of the
second. If you want to display more than one value, you can use
print statements: | print(bikeshare.olin)
print(bikeshare.wellesley) | python/soln/chap02.ipynb | AllenDowney/ModSim | gpl-2.0 |
When you call the print function, you can put a variable name in
parentheses, as in the previous example, or you can provide a sequence
of variables separated by commas, like this: | print(bikeshare.olin, bikeshare.wellesley) | python/soln/chap02.ipynb | AllenDowney/ModSim | gpl-2.0 |
Python looks up the values of the variables and displays them; in this
example, it displays two values on the same line, with a space between
them.
Print statements are useful for debugging functions. For example, we can
add a print statement to bike_to_wellesley, like this:
print('Moving a bike to Wellesley')
bikeshare.olin -= 1
bikeshare.wellesley += 1 | python/soln/chap02.ipynb | AllenDowney/ModSim | gpl-2.0 |
Each time we call this version of the function, it displays a message,
which can help us keep track of what the program is doing.
The message in this example is a string, which is a sequence of
letters and other symbols in quotes.
Just like bike_to_wellesley, we can define a function that moves a
bike from Wellesley to... | def bike_to_olin():
print('Moving a bike to Olin')
bikeshare.wellesley -= 1
bikeshare.olin += 1 | python/soln/chap02.ipynb | AllenDowney/ModSim | gpl-2.0 |
And call it like this: | bike_to_olin() | python/soln/chap02.ipynb | AllenDowney/ModSim | gpl-2.0 |
One benefit of defining functions is that you avoid repeating chunks of
code, which makes programs smaller. Another benefit is that the name you
give the function documents what it does, which makes programs more
readable.
If statements
The ModSim library provides a function called flip that generates random "coin toss... | flip(0.7) | python/soln/chap02.ipynb | AllenDowney/ModSim | gpl-2.0 |
The result is one of two values: True with probability 0.7 (in this example) or False
with probability 0.3. If you run flip like this 100 times, you should
get True about 70 times and False about 30 times. But the results
are random, so they might differ from these expectations.
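flip comes from the ModSim library. If you want to experiment without it, a minimal stand-in can be built from Python's random module (this is an assumption about how flip behaves, not ModSim's actual code):

```python
import random

def flip(p=0.5):
    """Return True with probability p, False otherwise."""
    return random.random() < p

# Count heads in 1000 tosses with a 70% chance of heads.
random.seed(42)                        # fixed seed for repeatability
count = sum(flip(0.7) for _ in range(1000))
print(count)                           # close to 700, but not exactly
```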
True and False are special values define... | if flip(0.5):
print('heads') | python/soln/chap02.ipynb | AllenDowney/ModSim | gpl-2.0 |
If the result from flip is True, the program displays the string
'heads'. Otherwise it does nothing.
The syntax for if statements is similar to the syntax for
function definitions: the first line has to end with a colon, and the
lines inside the if statement have to be indented.
Optionally, you can add an else clause t... | if flip(0.5):
print('heads')
else:
print('tails') | python/soln/chap02.ipynb | AllenDowney/ModSim | gpl-2.0 |
Now we can use flip to simulate the arrival of students who want to
borrow a bike. Suppose students arrive at the Olin station every 2
minutes, on average. In that case, the chance of an arrival during any
one-minute period is 50%, and we can simulate it like this: | if flip(0.5):
bike_to_wellesley() | python/soln/chap02.ipynb | AllenDowney/ModSim | gpl-2.0 |
If students arrive at the Wellesley station every 3 minutes, on average,
the chance of an arrival during any one-minute period is 33%, and we can
simulate it like this: | if flip(0.33):
bike_to_olin() | python/soln/chap02.ipynb | AllenDowney/ModSim | gpl-2.0 |
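The conversion from an average interval to a per-step probability is just a ratio. The sketch below also shows, as an aside that is my addition rather than part of the chapter, the slightly smaller value an exact Poisson arrival model would give:

```python
from math import exp

step_length = 1      # minutes per time step
mean_interval = 3    # average minutes between arrivals

# First-order approximation used in the chapter: p ≈ step / interval.
p_approx = step_length / mean_interval
print(p_approx)      # 0.333...

# If arrivals were Poisson, the chance of at least one arrival per step
# would be 1 - exp(-step/interval), a bit smaller than the simple ratio.
p_exact = 1 - exp(-step_length / mean_interval)
print(round(p_exact, 2))   # 0.28
```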
We can combine these snippets into a function that simulates a time
step, which is an interval of time, in this case one minute: | def step():
if flip(0.5):
bike_to_wellesley()
if flip(0.33):
bike_to_olin() | python/soln/chap02.ipynb | AllenDowney/ModSim | gpl-2.0 |
Then we can simulate a time step like this: | step() | python/soln/chap02.ipynb | AllenDowney/ModSim | gpl-2.0 |
Even though there are no values in parentheses, we have to include them.
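Everything so far depends on ModSim's State object and flip. Here is a self-contained sketch of the same one-minute step using only the standard library; the SimpleNamespace state and the random-based flip are stand-ins I'm assuming behave like the ModSim versions:

```python
import random
from types import SimpleNamespace

# Stand-in for the ModSim State object.
bikeshare = SimpleNamespace(olin=10, wellesley=2)

def flip(p):
    return random.random() < p

def step():
    if flip(0.5):                  # student arrives at Olin
        bikeshare.olin -= 1
        bikeshare.wellesley += 1
    if flip(0.33):                 # student arrives at Wellesley
        bikeshare.wellesley -= 1
        bikeshare.olin += 1

random.seed(17)
step()
print(bikeshare.olin, bikeshare.wellesley)
```

Note that each step moves at most one bike in each direction, so the total number of bikes never changes.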
Parameters
The previous version of step is fine if the arrival probabilities
never change, but in reality, these probabilities vary over time.
So instead of putting the constant values 0.5 and 0.33 in step, we can replace them with parameters. Para...
if flip(p1):
bike_to_wellesley()
if flip(p2):
bike_to_olin() | python/soln/chap02.ipynb | AllenDowney/ModSim | gpl-2.0 |
The values of p1 and p2 are not set inside this function; instead,
they are provided when the function is called, like this: | step(0.5, 0.33) | python/soln/chap02.ipynb | AllenDowney/ModSim | gpl-2.0 |
The values you provide when you call the function are called
arguments. The arguments, 0.5 and 0.33 in this example, get
assigned to the parameters, p1 and p2, in order. So running this
function has the same effect as: | p1 = 0.5
p2 = 0.33
if flip(p1):
bike_to_wellesley()
if flip(p2):
bike_to_olin() | python/soln/chap02.ipynb | AllenDowney/ModSim | gpl-2.0 |
The advantage of using parameters is that you can call the same function many times, providing different arguments each time.
Adding parameters to a function is called generalization, because it makes the function more general, that is, less specialized.
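Positional binding — arguments assigned to parameters in order — is easy to check with a throwaway function (hypothetical, not from the chapter):

```python
def report(p1, p2):
    return f'p1={p1}, p2={p2}'

# Arguments are matched to parameters left to right.
print(report(0.5, 0.33))       # p1=0.5, p2=0.33
print(report(0.33, 0.5))       # p1=0.33, p2=0.5

# Keyword arguments let you name the parameters explicitly instead,
# so the order no longer matters.
print(report(p2=0.33, p1=0.5))
```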
For loops
At some point you will get sick of running cells over a... | for i in range(3):
print(i)
bike_to_wellesley() | python/soln/chap02.ipynb | AllenDowney/ModSim | gpl-2.0 |
The syntax here should look familiar; the first line ends with a
colon, and the lines inside the for loop are indented. The other
elements of the loop are:
The words for and in are special words we have to use in a for
loop.
range is a Python function we're using here to control the number of times the loop run... | bikeshare = State(olin=10, wellesley=2) | python/soln/chap02.ipynb | AllenDowney/ModSim | gpl-2.0 |
We can create a new, empty TimeSeries like this: | results = TimeSeries() | python/soln/chap02.ipynb | AllenDowney/ModSim | gpl-2.0 |
And we can add a quantity like this: | results[0] = bikeshare.olin | python/soln/chap02.ipynb | AllenDowney/ModSim | gpl-2.0 |
The number in brackets is the time stamp, also called a label.
We can use a TimeSeries inside a for loop to store the results of the simulation: | for i in range(3):
print(i)
step(0.6, 0.6)
results[i+1] = bikeshare.olin | python/soln/chap02.ipynb | AllenDowney/ModSim | gpl-2.0 |
Each time through the loop, we print the value of i and call step, which updates bikeshare.
Then we store the number of bikes at Olin in results.
We use the loop variable, i, to compute the time stamp, i+1.
The first time through the loop, the value of i is 0, so the time stamp is 1.
The last time, the value of i is 2... | results | python/soln/chap02.ipynb | AllenDowney/ModSim | gpl-2.0 |
The left column is the time stamps; the right column is the quantities (which might be negative, depending on the state of the system).
At the bottom, dtype is the type of the data in the TimeSeries; you can ignore this for now.
The show function displays a TimeSeries as a table: | show(results) | python/soln/chap02.ipynb | AllenDowney/ModSim | gpl-2.0 |
Plotting
results provides a function called plot we can use to plot
the results, and the ModSim library provides decorate, which we can use to label the axes and give the figure a title: | results.plot()
decorate(title='Olin-Wellesley Bikeshare',
xlabel='Time step (min)',
ylabel='Number of bikes') | python/soln/chap02.ipynb | AllenDowney/ModSim | gpl-2.0 |
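decorate comes from ModSim, but it is a thin wrapper around Matplotlib. A plain-Matplotlib sketch of the same kind of plot looks like this; the numbers in results are made up for illustration:

```python
import matplotlib
matplotlib.use('Agg')            # non-interactive backend, safe for scripts
import matplotlib.pyplot as plt

# Stand-in results: time stamp -> number of bikes at Olin.
results = {0: 10, 1: 9, 2: 9, 3: 8}

fig, ax = plt.subplots()
ax.plot(list(results.keys()), list(results.values()))
ax.set_title('Olin-Wellesley Bikeshare')
ax.set_xlabel('Time step (min)')
ax.set_ylabel('Number of bikes')
fig.savefig('bikeshare.png')     # or plt.show() in a notebook
```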
Summary
This chapter introduces the tools we need to run simulations, record the results, and plot them.
We used a State object to represent the state of the system.
Then we used the flip function and an if statement to simulate a single time step.
We used a for loop to simulate a series of steps, and a TimeSeries to rec...
bikeshare.wellesley | python/soln/chap02.ipynb | AllenDowney/ModSim | gpl-2.0 |
Exercise: Make a State object with a third state variable, called babson, with initial value 0, and display the state of the system. | # Solution
bikeshare = State(olin=10, wellesley=2, babson=0)
show(bikeshare) | python/soln/chap02.ipynb | AllenDowney/ModSim | gpl-2.0 |
Exercise: Wrap the code in the chapter in a function named run_simulation that takes three parameters, named p1, p2, and num_steps.
It should:
Create a TimeSeries object to hold the results.
Use a for loop to run step the number of times specified by num_steps, passing along the specified values of p1 and p2.
Aft... | # Solution
def run_simulation(p1, p2, num_steps):
results = TimeSeries()
results[0] = bikeshare.olin
for i in range(num_steps):
step(p1, p2)
results[i+1] = bikeshare.olin
results.plot()
decorate(title='Olin-Wellesley Bikeshare',
xlabel='Time step (min)',
... | python/soln/chap02.ipynb | AllenDowney/ModSim | gpl-2.0 |