summary
With
```python
batch_size = 128
num_hidden_nodes = 1024
beta = 1e-3
num_steps = 3001
```
Results
* Test accuracy: 88.5% with beta=0.000000 (no L2 regularization)
* Test accuracy: 86.7% with beta=0.000010
* Test accuracy: 88.8% with beta=0.000100
* Test accuracy: 92.6% with beta=0.001000
* Test accuracy: 89.7% with beta=0.010000
* Test accuracy: 82.2% with beta=0.100000
* Test accuracy: 10.0% with beta=1.000000
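The sweep above is consistent with how the beta-weighted L2 penalty enters the loss: too small and it does nothing, too large (beta=1.0) and it swamps the data term, collapsing accuracy to chance (10%). A minimal NumPy sketch of the combined loss (names are illustrative, not the notebook's own):

```python
import numpy as np

def l2_regularized_loss(data_loss, weights, beta):
    """Total loss = data loss + beta * sum of squared weights (L2 penalty)."""
    penalty = sum(np.sum(w ** 2) for w in weights)
    return data_loss + beta * penalty

weights = [np.ones((2, 2)), np.ones(3)]  # 4 + 3 = 7 squared-weight terms
print(l2_regularized_loss(1.0, weights, 1e-3))  # 1.0 + 0.001 * 7 = 1.007
```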
Problem 2
Let's demonstrate an extreme case of overfitting. Restrict your training data to just a few batches. What happens? | offset = 0 #offset = (step * batch_size) % (train_labels.shape[0] - batch_size) | google_dl_udacity/lesson3/3_regularization.ipynb | jinzishuai/learn2deeplearn | gpl-3.0 |
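The commented-out line is the key: normally the offset cycles through the training set, but pinning it to 0 feeds the model the same first batch at every step. A sketch of the two offset schedules (sizes are illustrative):

```python
batch_size = 128
num_train = 1000  # illustrative training-set size

# Normal training: the offset cycles, so every example is eventually seen.
cycling = [(step * batch_size) % (num_train - batch_size) for step in range(5)]
print(cycling)  # [0, 128, 256, 384, 512]

# Overfitting demo: offset pinned to 0, so only the first batch is ever used.
pinned = [0 for _ in range(5)]
```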
With
```python
batch_size = 128
num_hidden_nodes = 1024
beta = 1e-3
num_steps = 3001
```
Results
* Original Test accuracy: 92.6% with beta=0.001000
* With offset = 0: Test accuracy: 67.5% with beta=0.001000
Problem 3
Introduce Dropout on the hidden layer of the neural network. Remember: Dropout should only be introduced during training, not evaluation, otherwise your evaluation results would be stochastic as well. TensorFlow provides nn.dropout() for that, but you have to make sure it's only inserted during training.
What happens to our extreme overfitting case? | keep_rate = 0.5
dropout = tf.nn.dropout(activated_hidden_layer, keep_rate)  # dropout is applied after activation
logits = tf.matmul(dropout, weights2) + biases2 | google_dl_udacity/lesson3/3_regularization.ipynb | jinzishuai/learn2deeplearn | gpl-3.0 |
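What nn.dropout does can be sketched in plain NumPy — this is "inverted" dropout, which rescales kept activations by 1/keep_rate during training so that evaluation needs no correction (names are illustrative):

```python
import numpy as np

def dropout(x, keep_rate, training):
    if not training:
        return x  # evaluation: deterministic, no dropout applied
    mask = np.random.rand(*x.shape) < keep_rate
    return x * mask / keep_rate  # rescale so the expected activation is unchanged

x = np.ones((4, 8))
# At evaluation time the input passes through untouched.
assert np.array_equal(dropout(x, 0.5, training=False), x)
```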
Change the above cell to refer to the file locations on your computer (The reason it is two files is that I encountered a previously unseen error halfway through, and had to put a new try/except into the code and restart the scraping). | len(scrape1)
len(scrape2)
df = pd.concat([scrape1,scrape2])
len(df)
df.columns
df['days_jailed'] = df.release_timestamp - df.booking_timestamp
df['days_jailed_np'] = df.days_jailed.dt.days
df.loc[df['days_jailed_np']>7,'days_jailed_np'] = 7
sns.distplot(df['days_jailed_np'].dropna()) | src/visualization/Fulton County Data Viz.ipynb | lahoffm/aclu-bail-reform | mit |
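The timestamp arithmetic used above works because subtracting two datetime columns yields timedeltas, whose .dt.days accessor gives whole days — a sketch with made-up booking and release dates:

```python
import pandas as pd

df = pd.DataFrame({
    "booking_timestamp": pd.to_datetime(["2017-01-01", "2017-02-01"]),
    "release_timestamp": pd.to_datetime(["2017-01-03", "2017-02-15"]),
})
df["days_jailed"] = df.release_timestamp - df.booking_timestamp  # timedelta64 column
df["days_jailed_np"] = df.days_jailed.dt.days                    # integer days
print(df.days_jailed_np.tolist())  # [2, 14]
```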
This gives us the overall distribution of time imprisoned for everyone in our dataset who has been released. | df.groupby('inmate_race').agg({'days_jailed_np' : np.mean}).plot(kind='bar') | src/visualization/Fulton County Data Viz.ipynb | lahoffm/aclu-bail-reform | mit |
This gives us mean time in prison by race. | ax= sns.violinplot(data=df, x='inmate_race', y='days_jailed_np', cut=0, scale='width')
for tick in ax.get_xticklabels():
tick.set_rotation(45) | src/visualization/Fulton County Data Viz.ipynb | lahoffm/aclu-bail-reform | mit |
Get IRS data on businesses
The IRS website has some aggregated statistics on business returns in Excel files. We will use the Selected Income and Tax Items for Selected Years.
The original data is from the file linked here:
https://www.irs.gov/pub/irs-soi/14intaba.xls,
but I cleaned it up by hand to remove footnotes and reformat the column and row headers. You can get the cleaned file in this repository data/14intaba_cleaned.xls.
It looks like this:
<img src="img/screenshot-14intaba.png" width="100%"/>
Read the data!
We will use the read_excel function inside of the Pandas library (accessed using pd.read_excel) to get the data. We need to:
skip the first 2 rows
split out the 'Current dollars' and 'Constant 1990 dollars' subsets
use the left two columns to split out the number of returns and their dollar amounts
When referring to files on your computer from Jupyter, the path you use is relative to the current Jupyter notebook. My directory looks like this:
.
|-- notebooks
|-- input_output.ipynb
|-- data
|- 14intaba_cleaned.xls
so, the relative path from the notebook input_output.ipynb to the dataset 14intaba_cleaned.xls is:
data/14intaba_cleaned.xls | raw = pd.read_excel('data/14intaba_cleaned.xls', skiprows=2) | notebooks/input_output.ipynb | tanyaschlusser/stats-via-python | mit |
Look at the last 3 rows
The function pd.read_excel returns an object called a 'Data Frame', that is defined inside of the Pandas library. It has associated functions that access and manipulate the data inside. For example: | # Look at the last 3 rows
raw.tail(3) | notebooks/input_output.ipynb | tanyaschlusser/stats-via-python | mit |
Split out the 'Current dollars' and 'Constant 1990 dollars'
There are two sets of data — for the actual dollars for each variable, and also for constant dollars (accounting for inflation). We will split the raw dataset into two and then index the rows by the units (whether they're number of returns or amount paid/claimed).
The columns we care about are Variable, Units, and the current or constant dollars from each year. (You can view them all with raw.columns.)
We can subset the dataset to the columns we want using raw[desired_cols].
There are a lot of commands in this section...we will do a better job explaining later. For now, ['brackets', 'denote', 'a list'], you can add lists, and you can write a shorthand for loop inside of a list (that's called a "list comprehension"). | index_cols = ['Units', 'Variable']
current_dollars_cols = index_cols + [
c for c in raw.columns if c.startswith('Current')
]
constant_dollars_cols = index_cols + [
c for c in raw.columns if c.startswith('Constant')
]
current_dollars_data = raw[current_dollars_cols][9:]
current_dollars_data.set_index(keys=index_cols, inplace=True)
constant_dollars_data = raw[constant_dollars_cols][9:]
constant_dollars_data.set_index(keys=index_cols, inplace=True)
years = [int(c[-4:]) for c in constant_dollars_data.columns]
constant_dollars_data.columns = years | notebooks/input_output.ipynb | tanyaschlusser/stats-via-python | mit |
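The list-building idioms used in the cell above, in miniature — brackets make a list, + concatenates lists, and the comprehension is a shorthand for loop with a filter:

```python
index_cols = ["Units", "Variable"]
columns = ["Units", "Variable", "Current 2000", "Current 2014", "Constant 1990"]

# List addition concatenates; the comprehension keeps only the matching columns.
current_cols = index_cols + [c for c in columns if c.startswith("Current")]
print(current_cols)  # ['Units', 'Variable', 'Current 2000', 'Current 2014']
```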
Statistics
Pandas provides methods for statistical summaries. The describe method gives an overall summary. dropna(axis=1) deletes columns containing null values. If it were axis=0 it would be deleting rows. | per_entry = (
constant_dollars_data.transpose()['Amount (thousand USD)'] * 1000 /
constant_dollars_data.transpose()['Number of returns']
)
per_entry.dropna(axis=1).describe().round() | notebooks/input_output.ipynb | tanyaschlusser/stats-via-python | mit |
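The axis argument decides whether dropna removes columns or rows — a two-line sketch:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [1.0, 2.0], "b": [np.nan, 3.0]})
print(df.dropna(axis=1).columns.tolist())  # ['a']  -- drops column 'b' (has a null)
print(len(df.dropna(axis=0)))              # 1     -- drops the row containing the null
```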
Plot
The library that provides plot functions is called Matplotlib. To show the plots in this notebook you need to use the "magic method" %matplotlib inline. It should be used at the beginning of the notebook for clarity. | # This should always be at the beginning of the notebook,
# like all magic statements and import statements.
# It's only here because I didn't want to describe it earlier.
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (10, 12) | notebooks/input_output.ipynb | tanyaschlusser/stats-via-python | mit |
The per-entry data
The data are (I think) for every form filed, not really per capita, but since we're not interpreting it for anything important we can conflate the two.
Per capita income (Blue line) rose a lot with the tech bubble, then sunk with its crash, and then followed the housing bubble and crash. It also looks like small business income (Red dashed line) hasn't really come back since the crash, but that unemployment (Magenta dots) has gone down. | styles = ['b-', 'g-.', 'r--', 'c-', 'm:']
axes = per_entry[[
'Total income',
'Total social security benefits (not in income)',
'Business or profession net income less loss',
'Total payments',
'Unemployment compensation']].plot(style=styles)
plt.suptitle('Average USD per return (when stated)') | notebooks/input_output.ipynb | tanyaschlusser/stats-via-python | mit |
Also with log-y
We can see the total social security benefits payout (Green dot dash) increase as the baby boomers come of age, and we see the unemployment compensation (Magenta dots) spike after the 2008 crisis and then fall off. | styles = ['b-', 'r--', 'g-.', 'c-', 'm:']
axes = constant_dollars_data.transpose()['Amount (thousand USD)'][[
'Total income',
'Total payments',
'Total social security benefits (not in income)',
'Business or profession net income less loss',
'Unemployment compensation']].plot(logy=True, style=styles)
plt.legend(bbox_to_anchor=(1, 1),
bbox_transform=plt.gcf().transFigure)
plt.suptitle('Total USD (constant 1990 basis)') | notebooks/input_output.ipynb | tanyaschlusser/stats-via-python | mit |
Step 2: Load Training Data | print("Running on system: %s" % socket.gethostname())
if True:
    # Using a local copy of data volume
    #inDir = '/Users/graywr1/code/bio-segmentation/data/ISBI2012/'
    inDir = '/home/pekalmj1/Data/EM_2012'
    X = ndp.nddl.load_cube(os.path.join(inDir, 'train-volume.tif'))
    Y = ndp.nddl.load_cube(os.path.join(inDir, 'train-labels.tif'))
# show some details. Note that data tensors are assumed to have dimensions:
# (#slices, #channels, #rows, #columns)
#
print('Train data shape is: %s %s' % (str(X.shape), str(Y.shape)))
plt.imshow(X[0,0,...], interpolation='none', cmap='bone')
plt.title('train volume, slice 0')
plt.gca().axes.get_xaxis().set_ticks([])
plt.gca().axes.get_yaxis().set_ticks([])
plt.show() | examples/isbi2012_train.ipynb | neurodata/ndparse | apache-2.0 |
Step 3: Training | # Note that for demonstration purposes we use an artifically low
# number of training slices and epochs. For actualy training,
# you would use more data and train for longer.
train_slices = np.arange(2) # e.g. change to np.arange(25)
valid_slices = np.arange(25,30)
n_epochs = 1
tic = time.time()
model = ndp.nddl.train_model(X[train_slices,...], np.squeeze(Y[train_slices, ...]),
X[valid_slices,...], np.squeeze(Y[valid_slices, ...]),
nEpochs=n_epochs, log=logger)
print("Time to train: %0.2f sec" % (time.time() - tic))
| examples/isbi2012_train.ipynb | neurodata/ndparse | apache-2.0 |
Next, we need to authenticate ourselves to Google Cloud Platform. If you are running the code cell below for the first time, a link will show up, which leads to a web page for authentication and authorization. Log in with your credentials and make sure the permissions it requests are appropriate; after clicking the Allow button, you will be redirected to another web page with a verification code displayed. Copy the code and paste it into the input field below. | auth.authenticate_user() | datathon/nusdatathon18/tutorials/ddsm_ml_tutorial.ipynb | GoogleCloudPlatform/healthcare | apache-2.0 |
At the same time, let's set the project we are going to use throughout the tutorial. | project_id = 'nus-datathon-2018-team-00'
os.environ["GOOGLE_CLOUD_PROJECT"] = project_id | datathon/nusdatathon18/tutorials/ddsm_ml_tutorial.ipynb | GoogleCloudPlatform/healthcare | apache-2.0 |
Optional: In this Colab we can opt to use GPU to train our model by clicking "Runtime" on the top menus, then clicking "Change runtime type", select "GPU" for hardware accelerator. You can verify that GPU is working with the following code cell. | # Should output something like '/device:GPU:0'.
tf.test.gpu_device_name() | datathon/nusdatathon18/tutorials/ddsm_ml_tutorial.ipynb | GoogleCloudPlatform/healthcare | apache-2.0 |
Dataset
We have already extracted the images from the DICOM files to separate folders on GCS, and some preprocessing was also done with the raw images (if you need custom preprocessing, please consult our tutorial on image preprocessing).
The folders ending with _demo contain subsets of training and test images. Specifically, the demo training dataset has 100 images, with 25 images for each breast density category (1 - 4). There are 20 images in the test dataset which were selected randomly. All the images were first padded to 5251x7111 (largest width and height among the selected images) and then resized to 95x128 to fit in memory and save training time. Both training and test images are "Cranial-Caudal" only.
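A NumPy sketch of the padding step described above — zero-padding every image to a common size before resizing (tiny stand-in sizes; the real pipeline padded to 5251x7111 and would then resize with e.g. PIL):

```python
import numpy as np

def pad_to(img, height, width):
    """Zero-pad a 2-D grayscale image on the bottom/right to (height, width)."""
    pad_rows = height - img.shape[0]
    pad_cols = width - img.shape[1]
    return np.pad(img, ((0, pad_rows), (0, pad_cols)), mode="constant")

img = np.ones((3, 5))
padded = pad_to(img, 7, 11)
print(padded.shape)  # (7, 11); original pixels are preserved in the top-left corner
```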
ISIC dataset is organized in a slightly different way, the images are in JPEG format and each image comes with a JSON file containing metadata information. In order to make this tutorial work for ISIC, you will need to first pad and resize the images (we provide a script to do that here), and extract the labels from the JSON files based on your interests.
Training
Before coding our neural network, let's create a few helper methods to make loading data from Google Cloud Storage (GCS) easier. | client = storage.Client()
bucket_name = 'datathon-cbis-ddsm-colab'
bucket = client.get_bucket(bucket_name)
def load_images(folder):
  images = []
  labels = []
  # The image name is in format: <LABEL>_Calc_{Train,Test}_P_<Patient_ID>_{Left,Right}_CC.
  for label in [1, 2, 3, 4]:
    blobs = bucket.list_blobs(prefix=("%s/%s_" % (folder, label)))
    for blob in blobs:
      byte_stream = BytesIO()
      blob.download_to_file(byte_stream)
      byte_stream.seek(0)
      img = Image.open(byte_stream)
      images.append(np.array(img, dtype=np.float32))
      labels.append(label - 1)  # Minus 1 to fit in [0, 4).
  return np.array(images), np.array(labels, dtype=np.int32)

def load_train_images():
  return load_images('small_train_demo')

def load_test_images():
  return load_images('small_test_demo') | datathon/nusdatathon18/tutorials/ddsm_ml_tutorial.ipynb | GoogleCloudPlatform/healthcare | apache-2.0 |
Let's create a model function, which will be passed to an estimator that we will create later. The model has an architecture of 6 layers:
Convolutional Layer: Applies 32 5x5 filters, with ReLU activation function
Pooling Layer: Performs max pooling with a 2x2 filter and stride of 2
Convolutional Layer: Applies 64 5x5 filters, with ReLU activation function
Pooling Layer: Same setup as #2
Dense Layer: 1,024 neurons, with dropout regularization rate of 0.25
Logits Layer: 4 neurons, one for each breast density category, i.e. [0, 4)
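The spatial sizes in this architecture follow from two rules: "same" convolution preserves height and width, while 2x2 max pooling with stride 2 halves them (integer division). A quick sanity check of the 95x128 input (pure Python, illustrative helper):

```python
def pool_out(size, stride=2):
    """Output size of 2x2 max pooling with stride 2 (pool == stride)."""
    return size // stride

h, w = 95, 128
h, w = pool_out(h), pool_out(w)   # after pool1: 47 x 64
h, w = pool_out(h), pool_out(w)   # after pool2: 23 x 32
print(h, w, h * w * 64)  # 23 32 47104 -> the flattened input to the dense layer
```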
Note that you can change the parameters on the right (or inline) to tune the neural network. It is highly recommended to check out the original TensorFlow tutorial to get a deeper understanding of the network we are building here. | KERNEL_SIZE = 5 #@param
DROPOUT_RATE = 0.25 #@param
def cnn_model_fn(features, labels, mode):
  """Model function for CNN."""
  # Input Layer.
  # Reshape to 4-D tensor: [batch_size, height, width, channels]
  # DDSM images are grayscale, which have 1 channel.
  input_layer = tf.reshape(features["x"], [-1, 95, 128, 1])
  # Convolutional Layer #1.
  # Input Tensor Shape: [batch_size, 95, 128, 1]
  # Output Tensor Shape: [batch_size, 95, 128, 32]
  conv1 = tf.layers.conv2d(
      inputs=input_layer,
      filters=32,
      kernel_size=KERNEL_SIZE,
      padding="same",
      activation=tf.nn.relu)
  # Pooling Layer #1.
  # Input Tensor Shape: [batch_size, 95, 128, 32]
  # Output Tensor Shape: [batch_size, 47, 64, 32]
  pool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=[2, 2], strides=2)
  # Convolutional Layer #2.
  # Input Tensor Shape: [batch_size, 47, 64, 32]
  # Output Tensor Shape: [batch_size, 47, 64, 64]
  conv2 = tf.layers.conv2d(
      inputs=pool1,
      filters=64,
      kernel_size=KERNEL_SIZE,
      padding="same",
      activation=tf.nn.relu)
  # Pooling Layer #2.
  # Input Tensor Shape: [batch_size, 47, 64, 64]
  # Output Tensor Shape: [batch_size, 23, 32, 64]
  pool2 = tf.layers.max_pooling2d(inputs=conv2, pool_size=[2, 2], strides=2)
  # Flatten tensor into a batch of vectors.
  # Input Tensor Shape: [batch_size, 23, 32, 64]
  # Output Tensor Shape: [batch_size, 23 * 32 * 64]
  pool2_flat = tf.reshape(pool2, [-1, 23 * 32 * 64])
  # Dense Layer.
  # Input Tensor Shape: [batch_size, 23 * 32 * 64]
  # Output Tensor Shape: [batch_size, 1024]
  dense = tf.layers.dense(inputs=pool2_flat, units=1024, activation=tf.nn.relu)
  # Dropout operation.
  # 0.75 probability that each element will be kept.
  dropout = tf.layers.dropout(inputs=dense, rate=DROPOUT_RATE,
                              training=(mode == tf.estimator.ModeKeys.TRAIN))
  # Logits Layer.
  # Input Tensor Shape: [batch_size, 1024]
  # Output Tensor Shape: [batch_size, 4]
  logits = tf.layers.dense(inputs=dropout, units=4)
  predictions = {
      # Generate predictions (for PREDICT and EVAL mode).
      "classes": tf.argmax(input=logits, axis=1),
      # Add `softmax_tensor` to the graph. It is used for PREDICT and by the
      # `logging_hook`.
      "probabilities": tf.nn.softmax(logits, name="softmax_tensor")
  }
  if mode == tf.estimator.ModeKeys.PREDICT:
    return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions)
  # Loss Calculation.
  loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
  if mode == tf.estimator.ModeKeys.TRAIN:
    optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001)
    train_op = optimizer.minimize(
        loss=loss,
        global_step=tf.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op)
  # Add evaluation metrics (for EVAL mode).
  eval_metric_ops = {
      "accuracy": tf.metrics.accuracy(
          labels=labels, predictions=predictions["classes"])}
  return tf.estimator.EstimatorSpec(
      mode=mode, loss=loss, eval_metric_ops=eval_metric_ops) | datathon/nusdatathon18/tutorials/ddsm_ml_tutorial.ipynb | GoogleCloudPlatform/healthcare | apache-2.0 |
Now that we have a model function, the next step is feeding it to an estimator for training. Here we are creating a main function as required by TensorFlow. | BATCH_SIZE = 20 #@param
STEPS = 1000 #@param
artifacts_bucket_name = 'nus-datathon-2018-team-00-shared-files'
# Append a random number to avoid collision.
artifacts_path = "ddsm_model_%s" % random.randint(0, 1000)
model_dir = "gs://%s/%s" % (artifacts_bucket_name, artifacts_path)
def main(_):
  # Load training and test data.
  train_data, train_labels = load_train_images()
  eval_data, eval_labels = load_test_images()
  # Create the Estimator.
  ddsm_classifier = tf.estimator.Estimator(
      model_fn=cnn_model_fn,
      model_dir=model_dir)
  # Set up logging for predictions.
  # Log the values in the "Softmax" tensor with label "probabilities".
  tensors_to_log = {"probabilities": "softmax_tensor"}
  logging_hook = tf.train.LoggingTensorHook(
      tensors=tensors_to_log, every_n_iter=50)
  # Train the model.
  train_input_fn = tf.compat.v1.estimator.inputs.numpy_input_fn(
      x={"x": train_data},
      y=train_labels,
      batch_size=BATCH_SIZE,
      num_epochs=None,
      shuffle=True)
  ddsm_classifier.train(
      input_fn=train_input_fn,
      steps=STEPS,
      hooks=[logging_hook])
  # Evaluate the model and print results.
  eval_input_fn = tf.compat.v1.estimator.inputs.numpy_input_fn(
      x={"x": eval_data},
      y=eval_labels,
      num_epochs=1,
      shuffle=False)
  eval_results = ddsm_classifier.evaluate(input_fn=eval_input_fn)
  print(eval_results) | datathon/nusdatathon18/tutorials/ddsm_ml_tutorial.ipynb | GoogleCloudPlatform/healthcare | apache-2.0 |
Finally, here comes the exciting moment. We are going to train and evaluate the model we just built! Run the following code cell and pay attention to the accuracy printed at the end of the logs.
Note that if this is not the first time you have run the following cell, the temporary files from the previous run must be removed first to avoid weird errors like "NaN loss during training"; the first lines of the cell do exactly that. | # Remove temporary files.
artifacts_bucket = client.get_bucket(artifacts_bucket_name)
artifacts_bucket.delete_blobs(artifacts_bucket.list_blobs(prefix=artifacts_path))
# Set logging level.
tf.logging.set_verbosity(tf.logging.INFO)
# Start training, this will call the main method defined above behind the scene.
# The whole training process will take ~5 mins.
tf.app.run() | datathon/nusdatathon18/tutorials/ddsm_ml_tutorial.ipynb | GoogleCloudPlatform/healthcare | apache-2.0 |
UMAP vs T-SNE
Uniform Manifold Approximation and Projection (UMAP) is a dimension reduction technique that can be used for visualisation similarly to t-SNE, but also for general non-linear dimension reduction. The algorithm is founded on three assumptions about the data:
The data is uniformly distributed on a Riemannian manifold;
The Riemannian metric is locally constant (or can be approximated as such);
The manifold is locally connected.
From these assumptions it is possible to model the manifold with a fuzzy topological structure. The embedding is found by searching for a low dimensional projection of the data that has the closest possible equivalent fuzzy topological structure. | corpus = load_hobbies() | examples/Sangarshanan/comparing_corpus_visualizers.ipynb | pdamodaran/yellowbrick | apache-2.0 |
Writing a Function to quickly Visualize Corpus
Which can then be used for rapid comparison | def visualize(dim_reduction,encoding,corpus,labels = True,alpha=0.7,metric=None):
if 'tfidf' in encoding.lower():
encode = TfidfVectorizer()
if 'count' in encoding.lower():
encode = CountVectorizer()
docs = encode.fit_transform(corpus.data)
if labels is True:
labels = corpus.target
else:
labels = None
if 'umap' in dim_reduction.lower():
if metric is None:
viz = UMAPVisualizer()
else:
viz = UMAPVisualizer(metric=metric)
if 't-sne' in dim_reduction.lower():
viz = TSNEVisualizer(alpha = alpha)
viz.fit(docs,labels)
viz.poof() | examples/Sangarshanan/comparing_corpus_visualizers.ipynb | pdamodaran/yellowbrick | apache-2.0 |
Quickly Comparing Plots by Controlling
The Dimensionality Reduction technique used
The Encoding Technique used
The dataset to be visualized
Whether to differentiate Labels or not
Set the alpha parameter
Set the metric for UMAP | visualize('t-sne','tfidf',corpus)
visualize('t-sne','count',corpus,alpha = 0.5)
visualize('t-sne','tfidf',corpus,labels =False)
visualize('umap','tfidf',corpus)
visualize('umap','tfidf',corpus,labels = False)
visualize('umap','count',corpus,metric= 'cosine') | examples/Sangarshanan/comparing_corpus_visualizers.ipynb | pdamodaran/yellowbrick | apache-2.0 |
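The metric='cosine' argument in the final call changes the distance UMAP computes between document vectors; for reference, cosine distance is one minus the cosine similarity:

```python
import numpy as np

def cosine_distance(u, v):
    """1 - cosine similarity: 0 for parallel vectors, 1 for orthogonal ones."""
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

print(cosine_distance(np.array([1.0, 0.0]), np.array([0.0, 1.0])))  # 1.0 (orthogonal)
print(cosine_distance(np.array([1.0, 1.0]), np.array([2.0, 2.0])))  # 0.0 (parallel)
```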
1.3. Chemistry Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the atmospheric chemistry model | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
DOC.set_value("Other: troposphere")
DOC.set_value("mesosphere")
DOC.set_value("stratosphere")
DOC.set_value("whole atmosphere")
| notebooks/noaa-gfdl/cmip6/models/gfdl-esm4/atmoschem.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the atmospheric chemistry model | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
DOC.set_value("Lumped higher hydrocarbon species and oxidation products, parameterized source of Cly and Bry in stratosphere, short-lived species not advected")
| notebooks/noaa-gfdl/cmip6/models/gfdl-esm4/atmoschem.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of advected tracers in the atmospheric chemistry model | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
DOC.set_value(82)
| notebooks/noaa-gfdl/cmip6/models/gfdl-esm4/atmoschem.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
1.8. Coupling With Chemical Reactivity
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is atmospheric chemistry transport scheme turbulence coupled with chemical reactivity? | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
DOC.set_value(True)
| notebooks/noaa-gfdl/cmip6/models/gfdl-esm4/atmoschem.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the evolution of a given variable | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
DOC.set_value("Operator splitting")
| notebooks/noaa-gfdl/cmip6/models/gfdl-esm4/atmoschem.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemical species advection (in seconds) | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
DOC.set_value(30)
| notebooks/noaa-gfdl/cmip6/models/gfdl-esm4/atmoschem.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for physics (in seconds). | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
DOC.set_value(30)
| notebooks/noaa-gfdl/cmip6/models/gfdl-esm4/atmoschem.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
10. Emissions Concentrations --> Surface Emissions
10.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the chemical species emitted at the surface that are taken into account in the emissions scheme | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
DOC.set_value("Anthropogenic")
DOC.set_value("Other: bare ground")
DOC.set_value("Sea surface")
DOC.set_value("Vegetation")
| notebooks/noaa-gfdl/cmip6/models/gfdl-esm4/atmoschem.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
10.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant)) | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
DOC.set_value("CO, CH2O, NO, C3H6, isoprene, C2H6, C2H4, C4H10, terpenes, C3H8, acetone, CH3OH, C2H5OH, H2, SO2, NH3")
| notebooks/noaa-gfdl/cmip6/models/gfdl-esm4/atmoschem.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
10.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via an interactive method | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
DOC.set_value("DMS")
| notebooks/noaa-gfdl/cmip6/models/gfdl-esm4/atmoschem.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme. | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
DOC.set_value("Aircraft")
DOC.set_value("Biomass burning")
DOC.set_value("Lightning")
DOC.set_value("Other: volcanoes")
| notebooks/noaa-gfdl/cmip6/models/gfdl-esm4/atmoschem.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
11.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant)) | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
DOC.set_value("CO, CH2O, NO, C3H6, isoprene, C2H6, C2H4, C4H10, terpenes, C3H8, acetone, CH3OH, C2H5OH, H2, SO2, NH3")
| notebooks/noaa-gfdl/cmip6/models/gfdl-esm4/atmoschem.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary. | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
DOC.set_value("CH4, N2O")
| notebooks/noaa-gfdl/cmip6/models/gfdl-esm4/atmoschem.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
13.2. Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Species included in the gas phase chemistry scheme. | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
DOC.set_value("Bry")
DOC.set_value("Cly")
DOC.set_value("H2O")
DOC.set_value("HOx")
DOC.set_value("NOy")
DOC.set_value("Other: sox")
DOC.set_value("Ox")
DOC.set_value("VOCs")
DOC.set_value("isoprene")
| notebooks/noaa-gfdl/cmip6/models/gfdl-esm4/atmoschem.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
13.3. Number Of Bimolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of bi-molecular reactions in the gas phase chemistry scheme. | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
DOC.set_value(157)
| notebooks/noaa-gfdl/cmip6/models/gfdl-esm4/atmoschem.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
13.4. Number Of Termolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of ter-molecular reactions in the gas phase chemistry scheme. | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
DOC.set_value(21)
| notebooks/noaa-gfdl/cmip6/models/gfdl-esm4/atmoschem.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
13.8. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
DOC.set_value(19)
| notebooks/noaa-gfdl/cmip6/models/gfdl-esm4/atmoschem.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
13.10. Wet Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces, thus decreasing their concentration in the air. | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
DOC.set_value(True)
| notebooks/noaa-gfdl/cmip6/models/gfdl-esm4/atmoschem.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
13.11. Wet Oxidation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
DOC.set_value(True)
| notebooks/noaa-gfdl/cmip6/models/gfdl-esm4/atmoschem.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
14.2. Gas Phase Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Gas phase species included in the stratospheric heterogeneous chemistry scheme. | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
DOC.set_value("Bry")
DOC.set_value("Cly")
DOC.set_value("NOy")
| notebooks/noaa-gfdl/cmip6/models/gfdl-esm4/atmoschem.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
14.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the stratospheric heterogeneous chemistry scheme. | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
DOC.set_value("NAT (Nitric acid trihydrate)")
DOC.set_value("Polar stratospheric ice")
| notebooks/noaa-gfdl/cmip6/models/gfdl-esm4/atmoschem.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
14.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the stratospheric heterogeneous chemistry scheme. | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
DOC.set_value(3)
| notebooks/noaa-gfdl/cmip6/models/gfdl-esm4/atmoschem.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
14.5. Sedimentation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sedimentation included in the stratospheric heterogeneous chemistry scheme or not? | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
DOC.set_value(True)
| notebooks/noaa-gfdl/cmip6/models/gfdl-esm4/atmoschem.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
14.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the stratospheric heterogeneous chemistry scheme or not? | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
DOC.set_value(True)
| notebooks/noaa-gfdl/cmip6/models/gfdl-esm4/atmoschem.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
15.2. Gas Phase Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of gas phase species included in the tropospheric heterogeneous chemistry scheme. | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
DOC.set_value("3")
| notebooks/noaa-gfdl/cmip6/models/gfdl-esm4/atmoschem.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
15.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the tropospheric heterogeneous chemistry scheme. | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
DOC.set_value("Sulphate")
| notebooks/noaa-gfdl/cmip6/models/gfdl-esm4/atmoschem.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
15.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the tropospheric heterogeneous chemistry scheme or not? | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
DOC.set_value(True)
| notebooks/noaa-gfdl/cmip6/models/gfdl-esm4/atmoschem.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
16.2. Number Of Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the photo-chemistry scheme. | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
DOC.set_value(39)
| notebooks/noaa-gfdl/cmip6/models/gfdl-esm4/atmoschem.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Photolysis scheme | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
DOC.set_value("Offline (with clouds)")
| notebooks/noaa-gfdl/cmip6/models/gfdl-esm4/atmoschem.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
1 Counter
A Counter is a dict subclass for counting hashable objects. It is an unordered collection where elements are stored as dictionary keys and their counts are stored as dictionary values.
1.1 construction | c1 = Counter()
c2 = Counter('gaufung')
c3 = Counter({'red':4,'blue':10})
c4 = Counter(cats=4,dogs=5) | python-statatics-tutorial/basic-theme/python-language/Collections.ipynb | gaufung/Data_Analytics_Learning_Note | mit |
1.2 using key | c = Counter(['dog', 'cat'])
c['fox'] | python-statatics-tutorial/basic-theme/python-language/Collections.ipynb | gaufung/Data_Analytics_Learning_Note | mit |
1.3 delete key
Setting a count to zero does not remove an element from a counter. Use del to remove it entirely: | c['dog'] = 0
del c['dog'] | python-statatics-tutorial/basic-theme/python-language/Collections.ipynb | gaufung/Data_Analytics_Learning_Note | mit |
1.4 elements
Return an iterator over elements repeating each as many times as its count. | c = Counter(a=4, b=2, c=0, d=-2)
print list(c.elements()) | python-statatics-tutorial/basic-theme/python-language/Collections.ipynb | gaufung/Data_Analytics_Learning_Note | mit |
1.5 most_common
Return a list of the n most common elements and their counts from the most common to the least. | Counter('abracadabra').most_common(3) | python-statatics-tutorial/basic-theme/python-language/Collections.ipynb | gaufung/Data_Analytics_Learning_Note | mit |
1.6 subtract([iterable-or-mapping])
Elements are subtracted from an iterable or from another mapping (or counter). | c = Counter(a=4, b=2, c=0, d=-2)
d = Counter(a=1, b=2, c=3, d=4)
c.subtract(d)
c | python-statatics-tutorial/basic-theme/python-language/Collections.ipynb | gaufung/Data_Analytics_Learning_Note | mit |
2 deque
Deques are a generalization of stacks and queues. They support thread-safe, memory-efficient appends and pops from either side of the deque, with approximately the same O(1) performance in either direction.
append(x)
Add x to the right side of the deque.
appendleft(x)
Add x to the left side of the deque.
clear()
Remove all elements from the deque leaving it with length 0.
count(x)
Count the number of deque elements equal to x.
extend(iterable)
Extend the right side of the deque by appending elements from the iterable argument.
extendleft(iterable)
Extend the left side of the deque by appending elements from iterable. Note, the series of left appends results in reversing the order of elements in the iterable argument.
pop()
Remove and return an element from the right side of the deque. If no elements are present, raises an IndexError.
popleft()
Remove and return an element from the left side of the deque. If no elements are present, raises an IndexError.
remove(value)
Remove the first occurrence of value. If not found, raises a ValueError.
reverse()
Reverse the elements of the deque in-place and then return None.
rotate(n)
Rotate the deque n steps to the right. If n is negative, rotate to the left. Rotating one step to the right is equivalent to: d.appendleft(d.pop()).
maxlen
Maximum size of a deque or None if unbounded.
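The methods listed above can be exercised in a few lines; note how `maxlen` turns a deque into a bounded buffer that silently discards old items:

```python
from collections import deque

# Bounded deque: keeps only the 3 most recently appended items.
recent = deque(maxlen=3)
for i in range(5):
    recent.append(i)
print(list(recent))   # the two oldest items were discarded

d = deque([1, 2, 3])
d.appendleft(0)       # add to the left side
d.rotate(1)           # rotate one step to the right
print(list(d))
```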
3 defaultdict
A defaultdict supplies a default value for missing keys, produced by the factory function given at construction (here, list). | s = [('yellow', 1), ('blue', 2), ('yellow', 3), ('blue', 4), ('red', 1)]
d = defaultdict(list)
for k, v in s:
d[k].append(v)
d.items() | python-statatics-tutorial/basic-theme/python-language/Collections.ipynb | gaufung/Data_Analytics_Learning_Note | mit |
4 namedtuple
Named tuples assign meaning to each position in a tuple and allow for more readable, self-documenting code. They can be used wherever regular tuples are used, and they add the ability to access fields by name instead of position index.
namedtuple(typename, field_names[, verbose=False][, rename=False]) | Point = namedtuple('Point', ['x', 'y'], verbose=True)
p = Point(11, y=22)
p | python-statatics-tutorial/basic-theme/python-language/Collections.ipynb | gaufung/Data_Analytics_Learning_Note | mit |
The module HARK.ConsumptionSaving.ConsIndShockModel concerns consumption-saving models with idiosyncratic shocks to (non-capital) income. All of the models assume CRRA utility with geometric discounting, no bequest motive, and income shocks are fully transitory or fully permanent.
ConsIndShockModel includes:
1. A very basic "perfect foresight" model with no uncertainty.
2. A model with risk over transitory and permanent income shocks.
3. The model described in (2), with an interest rate for debt that differs from the interest rate for savings.
This notebook provides documentation for the second of these models.
$\newcommand{\CRRA}{\rho}$
$\newcommand{\DiePrb}{\mathsf{D}}$
$\newcommand{\PermGroFac}{\Gamma}$
$\newcommand{\Rfree}{\mathsf{R}}$
$\newcommand{\DiscFac}{\beta}$
Statement of idiosyncratic income shocks model
Suppose we want to solve a model like the one analyzed in BufferStockTheory, which has all the same features as the perfect foresight consumer, plus idiosyncratic shocks to income each period. Agents with this kind of model are represented by the class IndShockConsumerType.
Specifically, this type of consumer receives two income shocks at the beginning of each period: a completely transitory shock $\newcommand{\tShkEmp}{\theta}{\tShkEmp_t}$ and a completely permanent shock $\newcommand{\pShk}{\psi}{\pShk_t}$. Moreover, the agent is subject to a borrowing limit: the ratio of end-of-period assets $A_t$ to permanent income $P_t$ must be at least $\underline{a}$. As with the perfect foresight problem, this model is stated in terms of normalized variables, dividing all real variables by $P_t$:
\begin{eqnarray}
v_t(m_t) &=& \max_{c_t} {~} u(c_t) + \DiscFac (1-\DiePrb_{t+1}) \mathbb{E}_{t} \left[ (\PermGroFac_{t+1}\psi_{t+1})^{1-\CRRA} v_{t+1}(m_{t+1}) \right], \\
a_t &=& m_t - c_t, \\
a_t &\geq& \underline{a}, \\
m_{t+1} &=& \Rfree/(\PermGroFac_{t+1} \psi_{t+1}) a_t + \theta_{t+1}, \\
(\psi_{t+1},\theta_{t+1}) &\sim& F_{t+1}, \\
\mathbb{E}[\psi]=\mathbb{E}[\theta] &=& 1, \\
u(c) &=& \frac{c^{1-\rho}}{1-\rho}.
\end{eqnarray}
Solution method for IndShockConsumerType
With the introduction of (non-trivial) risk, the idiosyncratic income shocks model has no closed form solution and must be solved numerically. The function solveConsIndShock solves the one period problem for the IndShockConsumerType class. To do so, HARK uses the original version of the endogenous grid method (EGM) first described here <cite data-cite="6202365/HQ6H9JEI"></cite>; see also the SolvingMicroDSOPs lecture notes.
Briefly, the transition equation for $m_{t+1}$ can be substituted into the problem definition; the second term of the reformulated maximand represents "end of period value of assets" $\mathfrak{v}_t(a_t)$ ("Gothic v"):
\begin{eqnarray}
v_t(m_t) &=& \max_{c_t} {~} u(c_t) + \underbrace{\DiscFac (1-\DiePrb_{t+1}) \mathbb{E}_{t} \left[ (\PermGroFac_{t+1}\psi_{t+1})^{1-\CRRA} v_{t+1}(\Rfree/(\PermGroFac_{t+1} \psi_{t+1}) a_t + \theta_{t+1}) \right]}_{\equiv \mathfrak{v}_t(a_t)}.
\end{eqnarray}
The first order condition with respect to $c_t$ is thus simply:
\begin{eqnarray}
u^{\prime}(c_t) - \mathfrak{v}'_t(a_t) = 0 \Longrightarrow c_t^{-\CRRA} = \mathfrak{v}'_t(a_t) \Longrightarrow c_t = \mathfrak{v}'_t(a_t)^{-1/\CRRA},
\end{eqnarray}
and the marginal value of end-of-period assets can be computed as:
\begin{eqnarray}
\mathfrak{v}'_t(a_t) = \DiscFac (1-\DiePrb_{t+1}) \mathbb{E}_{t} \left[ \Rfree (\PermGroFac_{t+1}\psi_{t+1})^{-\CRRA} v'_{t+1}(\Rfree/(\PermGroFac_{t+1} \psi_{t+1}) a_t + \theta_{t+1}) \right].
\end{eqnarray}
To solve the model, we choose an exogenous grid of $a_t$ values that spans the range of values that could plausibly be achieved, compute $\mathfrak{v}'_t(a_t)$ at each of these points, calculate the value of consumption $c_t$ whose marginal utility is consistent with the marginal value of assets, then find the endogenous $m_t$ gridpoint as $m_t = a_t + c_t$. The set of $(m_t,c_t)$ gridpoints is then interpolated to construct the consumption function.
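The steps above can be sketched in a few lines. This is an illustrative sketch of one EGM step, not HARK's actual `solveConsIndShock` API: the function name `egm_step` and its signature are ours, the shock distribution is a toy discrete list of $(\psi, \theta)$ pairs, and CRRA utility is assumed so the first order condition inverts in closed form.

```python
import numpy as np

def egm_step(aGrid, vP_next, Rfree, DiscFac, LivPrb, CRRA, PermGroFac,
             shk_vals, shk_prbs):
    """One EGM step for the normalized problem (illustrative, not HARK's API).

    vP_next(m) is next period's marginal value function; shk_vals is a list
    of (psi, theta) pairs occurring with probabilities shk_prbs.
    """
    # End-of-period marginal value of assets: "Gothic v prime"
    vP_end = np.zeros_like(aGrid)
    for (psi, theta), prb in zip(shk_vals, shk_prbs):
        m_next = Rfree / (PermGroFac * psi) * aGrid + theta
        vP_end += prb * Rfree * (PermGroFac * psi) ** (-CRRA) * vP_next(m_next)
    vP_end *= DiscFac * LivPrb
    # Invert the first order condition u'(c) = Gothic v'(a) for CRRA utility
    cGrid = vP_end ** (-1.0 / CRRA)
    # Endogenous gridpoints: m = a + c
    mGrid = aGrid + cGrid
    return mGrid, cGrid
```

Interpolating the returned $(m_t, c_t)$ pairs (and appending the constrained segment below the lowest gridpoint) yields the consumption function for this period.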
Example parameter values to construct an instance of IndShockConsumerType
In order to create an instance of IndShockConsumerType, the user must specify parameters that characterize the (age-varying) distribution of income shocks $F_{t+1}$, the artificial borrowing constraint $\underline{a}$, and the exogenous grid of end-of-period assets-above-minimum for use by EGM, along with all of the parameters for the perfect foresight model. The table below presents the complete list of parameter values required to instantiate an IndShockConsumerType, along with example values.
| Parameter | Description | Code | Example value | Time-varying? |
| :---: | --- | --- | --- | --- |
| $\DiscFac$ |Intertemporal discount factor | $\texttt{DiscFac}$ | $0.96$ | |
| $\CRRA$|Coefficient of relative risk aversion | $\texttt{CRRA}$ | $2.0$ | |
| $\Rfree$ | Risk free interest factor | $\texttt{Rfree}$ | $1.03$ | |
| $1 - \DiePrb_{t+1}$ |Survival probability | $\texttt{LivPrb}$ | $[0.98]$ | $\surd$ |
|$\PermGroFac_{t+1}$|Permanent income growth factor|$\texttt{PermGroFac}$| $[1.01]$ | $\surd$ |
| $\sigma_\psi$| Standard deviation of log permanent income shocks | $\texttt{PermShkStd}$ | $[0.1]$ |$\surd$ |
| $N_\psi$| Number of discrete permanent income shocks | $\texttt{PermShkCount}$ | $7$ | |
| $\sigma_\theta$| Standard deviation of log transitory income shocks | $\texttt{TranShkStd}$ | $[0.2]$ | $\surd$ |
| $N_\theta$| Number of discrete transitory income shocks | $\texttt{TranShkCount}$ | $7$ | |
| $\mho$ | Probability of being unemployed and getting $\theta=\underline{\theta}$ | $\texttt{UnempPrb}$ | $0.05$ | |
| $\underline{\theta}$| Transitory shock when unemployed | $\texttt{IncUnemp}$ | $0.3$ | |
| $\mho^{Ret}$ | Probability of being "unemployed" when retired | $\texttt{UnempPrb}$ | $0.0005$ | |
| $\underline{\theta}^{Ret}$| Transitory shock when "unemployed" and retired | $\texttt{IncUnemp}$ | $0.0$ | |
| $(none)$ | Period of the lifecycle model when retirement begins | $\texttt{T_retire}$ | $0$ | |
| $(none)$ | Minimum value in assets-above-minimum grid | $\texttt{aXtraMin}$ | $0.001$ | |
| $(none)$ | Maximum value in assets-above-minimum grid | $\texttt{aXtraMax}$ | $20.0$ | |
| $(none)$ | Number of points in base assets-above-minimum grid | $\texttt{aXtraCount}$ | $48$ | |
| $(none)$ | Exponential nesting factor for base assets-above-minimum grid | $\texttt{aXtraNestFac}$ | $3$ | |
| $(none)$ | Additional values to add to assets-above-minimum grid | $\texttt{aXtraExtra}$ | $None$ | |
| $\underline{a}$| Artificial borrowing constraint (normalized) | $\texttt{BoroCnstArt}$ | $0.0$ | |
| $(none)$|Indicator for whether $\texttt{vFunc}$ should be computed | $\texttt{vFuncBool}$ | $True$ | |
| $(none)$ |Indicator for whether $\texttt{cFunc}$ should use cubic splines | $\texttt{CubicBool}$ | $False$ | |
|$T$| Number of periods in this type's "cycle" |$\texttt{T_cycle}$| $1$ | |
|(none)| Number of times the "cycle" occurs |$\texttt{cycles}$| $0$ | | | IdiosyncDict={
# Parameters shared with the perfect foresight model
"CRRA": 2.0, # Coefficient of relative risk aversion
"Rfree": 1.03, # Interest factor on assets
"DiscFac": 0.96, # Intertemporal discount factor
"LivPrb" : [0.98], # Survival probability
"PermGroFac" :[1.01], # Permanent income growth factor
# Parameters that specify the income distribution over the lifecycle
"PermShkStd" : [0.1], # Standard deviation of log permanent shocks to income
"PermShkCount" : 7, # Number of points in discrete approximation to permanent income shocks
"TranShkStd" : [0.2], # Standard deviation of log transitory shocks to income
"TranShkCount" : 7, # Number of points in discrete approximation to transitory income shocks
"UnempPrb" : 0.05, # Probability of unemployment while working
"IncUnemp" : 0.3, # Unemployment benefits replacement rate
"UnempPrbRet" : 0.0005, # Probability of "unemployment" while retired
"IncUnempRet" : 0.0, # "Unemployment" benefits when retired
"T_retire" : 0, # Period of retirement (0 --> no retirement)
"tax_rate" : 0.0, # Flat income tax rate (legacy parameter, will be removed in future)
# Parameters for constructing the "assets above minimum" grid
"aXtraMin" : 0.001, # Minimum end-of-period "assets above minimum" value
"aXtraMax" : 20, # Maximum end-of-period "assets above minimum" value
"aXtraCount" : 48, # Number of points in the base grid of "assets above minimum"
"aXtraNestFac" : 3, # Exponential nesting factor when constructing "assets above minimum" grid
"aXtraExtra" : [None], # Additional values to add to aXtraGrid
    # A few other parameters
"BoroCnstArt" : 0.0, # Artificial borrowing constraint; imposed minimum level of end-of period assets
"vFuncBool" : True, # Whether to calculate the value function during solution
"CubicBool" : False, # Preference shocks currently only compatible with linear cFunc
"T_cycle" : 1, # Number of periods in the cycle for this agent type
# Parameters only used in simulation
"AgentCount" : 10000, # Number of agents of this type
"T_sim" : 120, # Number of periods to simulate
"aNrmInitMean" : -6.0, # Mean of log initial assets
"aNrmInitStd" : 1.0, # Standard deviation of log initial assets
"pLvlInitMean" : 0.0, # Mean of log initial permanent income
"pLvlInitStd" : 0.0, # Standard deviation of log initial permanent income
"PermGroFacAgg" : 1.0, # Aggregate permanent income growth factor
"T_age" : None, # Age after which simulated agents are automatically killed
} | examples/ConsIndShockModel/IndShockConsumerType.ipynb | econ-ark/HARK | apache-2.0 |
The distribution of permanent income shocks is specified as mean-one lognormal, with an age-varying (underlying) standard deviation. The distribution of transitory income shocks is also mean-one lognormal, but with an additional point mass representing unemployment; the transitory shocks are adjusted so that the distribution is still mean one. The continuous distributions are discretized into equiprobable points.
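The equiprobable discretization of a mean-one lognormal can be sketched as follows. This is a generic illustration, not HARK's internal implementation; the function name is ours. Each discrete point is the conditional mean of its probability bin, so the discretized distribution preserves the mean of one exactly:

```python
import numpy as np
from scipy.stats import norm

def equiprobable_lognormal(sigma, n):
    """Mean-one lognormal, discretized into n equiprobable points."""
    mu = -0.5 * sigma ** 2                       # mean-one normalization
    # Bin edges in log space, at equally spaced quantiles
    edges = norm.ppf(np.linspace(0.0, 1.0, n + 1), loc=mu, scale=sigma)
    # Conditional mean of a lognormal on each bin:
    # E[X | bin] = n * exp(mu + sigma^2/2) * diff(Phi((edge - mu)/sigma - sigma))
    z = (edges - mu) / sigma
    points = n * np.exp(mu + 0.5 * sigma ** 2) * np.diff(norm.cdf(z - sigma))
    probs = np.full(n, 1.0 / n)
    return points, probs
```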
Optionally, the user can specify the period when the individual retires and escapes essentially all income risk as T_retire; this can be turned off by setting the parameter to $0$. In retirement, all permanent income shocks are turned off, and the only transitory shock is an "unemployment" shock, likely with small probability; this prevents the retired problem from degenerating into a perfect foresight model.
The grid of assets above minimum $\texttt{aXtraGrid}$ is specified by its minimum and maximum level, the number of gridpoints, and the extent of exponential nesting. The greater the (integer) value of $\texttt{aXtraNestFac}$, the more dense the gridpoints will be at the bottom of the grid (and more sparse near the top); setting $\texttt{aXtraNestFac}$ to $0$ will generate an evenly spaced grid of $a_t$.
The artificial borrowing constraint $\texttt{BoroCnstArt}$ can be set to None to turn it off.
It is not necessary to compute the value function in this model, and it is not computationally free to do so. You can choose whether the value function should be calculated and returned as part of the solution of the model with $\texttt{vFuncBool}$. The consumption function will be constructed as a piecewise linear interpolation when $\texttt{CubicBool}$ is False, and will be a piecewise cubic spline interpolator if True.
Solving and examining the solution of the idiosyncratic income shocks model
The cell below creates an infinite horizon instance of IndShockConsumerType and solves its model by calling its solve method. | IndShockExample = IndShockConsumerType(**IdiosyncDict)
IndShockExample.cycles = 0 # Make this type have an infinite horizon
IndShockExample.solve() | examples/ConsIndShockModel/IndShockConsumerType.ipynb | econ-ark/HARK | apache-2.0 |
After solving the model, we can examine an element of this type's $\texttt{solution}$: | print(vars(IndShockExample.solution[0])) | examples/ConsIndShockModel/IndShockConsumerType.ipynb | econ-ark/HARK | apache-2.0 |
The single-period solution to an idiosyncratic shocks consumer's problem has all of the same attributes as in the perfect foresight model, with a couple additions. The solution can include the marginal marginal value of market resources function $\texttt{vPPfunc}$, but this is only constructed if $\texttt{CubicBool}$ is True, so that the MPC can be accurately computed; when it is False, then $\texttt{vPPfunc}$ merely returns NaN everywhere.
The solveConsIndShock function calculates steady state market resources and stores it in the attribute $\texttt{mNrmSS}$. This represents the steady state level of $m_t$ if this period were to occur indefinitely, but with income shocks turned off. This is relevant in a "one period infinite horizon" model like we've specified here, but is less useful in a lifecycle model.
Let's take a look at the consumption function by plotting it, along with its derivative (the MPC): | print('Consumption function for an idiosyncratic shocks consumer type:')
plot_funcs(IndShockExample.solution[0].cFunc,IndShockExample.solution[0].mNrmMin,5)
print('Marginal propensity to consume for an idiosyncratic shocks consumer type:')
plot_funcs_der(IndShockExample.solution[0].cFunc,IndShockExample.solution[0].mNrmMin,5) | examples/ConsIndShockModel/IndShockConsumerType.ipynb | econ-ark/HARK | apache-2.0 |
The lower part of the consumption function is linear with a slope of 1, representing the constrained part of the consumption function where the consumer would like to consume more by borrowing-- his marginal utility of consumption exceeds the marginal value of assets-- but he is prevented from doing so by the artificial borrowing constraint.
The MPC is a step function, as the $\texttt{cFunc}$ itself is a piecewise linear function; note the large jump in the MPC where the borrowing constraint begins to bind.
If you want to look at the interpolation nodes for the consumption function, these can be found by "digging into" attributes of $\texttt{cFunc}$: | print('mNrmGrid for unconstrained cFunc is ',IndShockExample.solution[0].cFunc.functions[0].x_list)
print('cNrmGrid for unconstrained cFunc is ',IndShockExample.solution[0].cFunc.functions[0].y_list)
print('mNrmGrid for borrowing constrained cFunc is ',IndShockExample.solution[0].cFunc.functions[1].x_list)
print('cNrmGrid for borrowing constrained cFunc is ',IndShockExample.solution[0].cFunc.functions[1].y_list) | examples/ConsIndShockModel/IndShockConsumerType.ipynb | econ-ark/HARK | apache-2.0 |
The consumption function in this model is an instance of LowerEnvelope1D, a class that takes an arbitrary number of 1D interpolants as arguments to its initialization method. When called, a LowerEnvelope1D evaluates each of its component functions and returns the lowest value. Here, the two component functions are the unconstrained consumption function-- how the agent would consume if the artificial borrowing constraint did not exist for just this period-- and the borrowing constrained consumption function-- how much he would consume if the artificial borrowing constraint is binding.
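The pointwise-minimum idea can be sketched generically; this is an illustrative stand-in for HARK's LowerEnvelope1D, with toy component functions of our own invention:

```python
import numpy as np

def lower_envelope(funcs, m):
    """Pointwise minimum of several 1D functions (the LowerEnvelope1D idea)."""
    return np.min(np.column_stack([f(m) for f in funcs]), axis=1)

# Toy stand-ins for the unconstrained and constrained consumption rules
unconstrained = lambda m: 0.5 * m + 0.5
constrained = lambda m: m            # consume all resources at the constraint
grid = np.linspace(0.0, 3.0, 5)
print(lower_envelope([unconstrained, constrained], grid))
```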
The actual consumption function is the lower of these two functions, pointwise. We can see this by plotting the component functions on the same figure: | plot_funcs(IndShockExample.solution[0].cFunc.functions,-0.25,5.) | examples/ConsIndShockModel/IndShockConsumerType.ipynb | econ-ark/HARK | apache-2.0 |
Simulating the idiosyncratic income shocks model
In order to generate simulated data, an instance of IndShockConsumerType needs to know how many agents there are that share these particular parameters (and are thus ex ante homogeneous), the distribution of states for newly "born" agents, and how many periods to simulated. These simulation parameters are described in the table below, along with example values.
| Description | Code | Example value |
| :---: | --- | --- |
| Number of consumers of this type | $\texttt{AgentCount}$ | $10000$ |
| Number of periods to simulate | $\texttt{T_sim}$ | $120$ |
| Mean of initial log (normalized) assets | $\texttt{aNrmInitMean}$ | $-6.0$ |
| Stdev of initial log (normalized) assets | $\texttt{aNrmInitStd}$ | $1.0$ |
| Mean of initial log permanent income | $\texttt{pLvlInitMean}$ | $0.0$ |
| Stdev of initial log permanent income | $\texttt{pLvlInitStd}$ | $0.0$ |
| Aggregrate productivity growth factor | $\texttt{PermGroFacAgg}$ | $1.0$ |
| Age after which consumers are automatically killed | $\texttt{T_age}$ | $None$ |
Here, we will simulate 10,000 consumers for 120 periods. All newly born agents will start with permanent income of exactly $P_t = 1.0 = \exp(\texttt{pLvlInitMean})$, as $\texttt{pLvlInitStd}$ has been set to zero; they will have essentially zero assets at birth, as $\texttt{aNrmInitMean}$ is $-6.0$; assets will be less than $1\%$ of permanent income at birth.
These example parameter values were already passed as part of the parameter dictionary that we used to create IndShockExample, so it is ready to simulate. We need to set the track_vars attribute to indicate the variables for which we want to record a history. | IndShockExample.track_vars = ['aNrm','mNrm','cNrm','pLvl']
IndShockExample.initialize_sim()
IndShockExample.simulate() | examples/ConsIndShockModel/IndShockConsumerType.ipynb | econ-ark/HARK | apache-2.0 |
We can now look at the simulated data in aggregate or at the individual consumer level. Like in the perfect foresight model, we can plot average (normalized) market resources over time, as well as average consumption: | plt.plot(np.mean(IndShockExample.history['mNrm'],axis=1))
plt.xlabel('Time')
plt.ylabel('Mean market resources')
plt.show()
plt.plot(np.mean(IndShockExample.history['cNrm'],axis=1))
plt.xlabel('Time')
plt.ylabel('Mean consumption')
plt.show() | examples/ConsIndShockModel/IndShockConsumerType.ipynb | econ-ark/HARK | apache-2.0 |
We could also plot individual consumption paths for some of the consumers-- say, the first five: | plt.plot(IndShockExample.history['cNrm'][:,0:5])
plt.xlabel('Time')
plt.ylabel('Individual consumption paths')
plt.show() | examples/ConsIndShockModel/IndShockConsumerType.ipynb | econ-ark/HARK | apache-2.0 |
Other example specifications of idiosyncratic income shocks consumers
$\texttt{IndShockConsumerType}$-- and $\texttt{HARK}$ in general-- can also represent models that are not infinite horizon.
Lifecycle example
Suppose we wanted to represent consumers with a lifecycle-- parameter values that differ by age, with a finite end point beyond which the individual cannot survive. This can be done by simply specifying the time-varying attributes $\texttt{PermGroFac}$, $\texttt{LivPrb}$, $\texttt{PermShkStd}$, and $\texttt{TranShkStd}$ as Python lists specifying the sequence of periods these agents will experience, from beginning to end.
In the cell below, we define a parameter dictionary for a rather short ten period lifecycle, with arbitrarily chosen parameters. For a more realistically calibrated (and much longer) lifecycle model, see the SolvingMicroDSOPs REMARK. | LifecycleDict={ # Click arrow to expand this fairly large parameter dictionary
# Parameters shared with the perfect foresight model
"CRRA": 2.0, # Coefficient of relative risk aversion
"Rfree": 1.03, # Interest factor on assets
"DiscFac": 0.96, # Intertemporal discount factor
"LivPrb" : [0.99,0.9,0.8,0.7,0.6,0.5,0.4,0.3,0.2,0.1],
"PermGroFac" : [1.01,1.01,1.01,1.02,1.02,1.02,0.7,1.0,1.0,1.0],
# Parameters that specify the income distribution over the lifecycle
"PermShkStd" : [0.1,0.2,0.1,0.2,0.1,0.2,0.1,0,0,0],
"PermShkCount" : 7, # Number of points in discrete approximation to permanent income shocks
"TranShkStd" : [0.3,0.2,0.1,0.3,0.2,0.1,0.3,0,0,0],
"TranShkCount" : 7, # Number of points in discrete approximation to transitory income shocks
"UnempPrb" : 0.05, # Probability of unemployment while working
"IncUnemp" : 0.3, # Unemployment benefits replacement rate
"UnempPrbRet" : 0.0005, # Probability of "unemployment" while retired
"IncUnempRet" : 0.0, # "Unemployment" benefits when retired
"T_retire" : 7, # Period of retirement (0 --> no retirement)
"tax_rate" : 0.0, # Flat income tax rate (legacy parameter, will be removed in future)
# Parameters for constructing the "assets above minimum" grid
"aXtraMin" : 0.001, # Minimum end-of-period "assets above minimum" value
"aXtraMax" : 20, # Maximum end-of-period "assets above minimum" value
"aXtraCount" : 48, # Number of points in the base grid of "assets above minimum"
"aXtraNestFac" : 3, # Exponential nesting factor when constructing "assets above minimum" grid
"aXtraExtra" : [None], # Additional values to add to aXtraGrid
    # A few other parameters
"BoroCnstArt" : 0.0, # Artificial borrowing constraint; imposed minimum level of end-of period assets
"vFuncBool" : True, # Whether to calculate the value function during solution
"CubicBool" : False, # Preference shocks currently only compatible with linear cFunc
"T_cycle" : 10, # Number of periods in the cycle for this agent type
# Parameters only used in simulation
"AgentCount" : 10000, # Number of agents of this type
"T_sim" : 120, # Number of periods to simulate
"aNrmInitMean" : -6.0, # Mean of log initial assets
"aNrmInitStd" : 1.0, # Standard deviation of log initial assets
"pLvlInitMean" : 0.0, # Mean of log initial permanent income
"pLvlInitStd" : 0.0, # Standard deviation of log initial permanent income
"PermGroFacAgg" : 1.0, # Aggregate permanent income growth factor
"T_age" : 11, # Age after which simulated agents are automatically killed
} | examples/ConsIndShockModel/IndShockConsumerType.ipynb | econ-ark/HARK | apache-2.0 |
In this case, we have specified a ten period model in which retirement happens in period $t=7$. Agents in this model are more likely to die as they age, and their permanent income drops by 30\% at retirement. Let's make and solve this lifecycle example, then look at the $\texttt{solution}$ attribute. | LifecycleExample = IndShockConsumerType(**LifecycleDict)
LifecycleExample.cycles = 1 # Make this consumer live a sequence of periods -- a lifetime -- exactly once
LifecycleExample.solve()
print('First element of solution is',LifecycleExample.solution[0])
print('Solution has', len(LifecycleExample.solution),'elements.') | examples/ConsIndShockModel/IndShockConsumerType.ipynb | econ-ark/HARK | apache-2.0 |
This was supposed to be a ten period lifecycle model-- why does our consumer type have eleven elements in its $\texttt{solution}$? It would be more precise to say that this specification has ten non-terminal periods. The solution to the 11th and final period in the model would be the same for every set of parameters: consume $c_t = m_t$, because there is no future. In a lifecycle model, the terminal period is assumed to exist; the $\texttt{LivPrb}$ parameter does not need to end with a $0.0$ in order to guarantee that survivors die.
We can quickly plot the consumption functions in each period of the model: | print('Consumption functions across the lifecycle:')
mMin = np.min([LifecycleExample.solution[t].mNrmMin for t in range(LifecycleExample.T_cycle)])
LifecycleExample.unpack('cFunc') # This makes all of the cFuncs accessible in the attribute cFunc
plot_funcs(LifecycleExample.cFunc,mMin,5) | examples/ConsIndShockModel/IndShockConsumerType.ipynb | econ-ark/HARK | apache-2.0 |
"Cyclical" example
We can also model consumers who face an infinite horizon, but who do not face the same problem in every period. Consider someone who works as a ski instructor: they make most of their income for the year in the winter, and make very little money in the other three seasons.
We can represent this type of individual as a four period, infinite horizon model in which expected "permanent" income growth varies greatly across seasons. | CyclicalDict = { # Click the arrow to expand this parameter dictionary
# Parameters shared with the perfect foresight model
"CRRA": 2.0, # Coefficient of relative risk aversion
"Rfree": 1.03, # Interest factor on assets
"DiscFac": 0.96, # Intertemporal discount factor
"LivPrb" : 4*[0.98], # Survival probability
"PermGroFac" : [1.082251, 2.8, 0.3, 1.1],
# Parameters that specify the income distribution over the lifecycle
"PermShkStd" : [0.1,0.1,0.1,0.1],
"PermShkCount" : 7, # Number of points in discrete approximation to permanent income shocks
"TranShkStd" : [0.2,0.2,0.2,0.2],
"TranShkCount" : 7, # Number of points in discrete approximation to transitory income shocks
"UnempPrb" : 0.05, # Probability of unemployment while working
"IncUnemp" : 0.3, # Unemployment benefits replacement rate
"UnempPrbRet" : 0.0005, # Probability of "unemployment" while retired
"IncUnempRet" : 0.0, # "Unemployment" benefits when retired
"T_retire" : 0, # Period of retirement (0 --> no retirement)
"tax_rate" : 0.0, # Flat income tax rate (legacy parameter, will be removed in future)
# Parameters for constructing the "assets above minimum" grid
"aXtraMin" : 0.001, # Minimum end-of-period "assets above minimum" value
"aXtraMax" : 20, # Maximum end-of-period "assets above minimum" value
"aXtraCount" : 48, # Number of points in the base grid of "assets above minimum"
"aXtraNestFac" : 3, # Exponential nesting factor when constructing "assets above minimum" grid
"aXtraExtra" : [None], # Additional values to add to aXtraGrid
    # A few other parameters
"BoroCnstArt" : 0.0, # Artificial borrowing constraint; imposed minimum level of end-of period assets
"vFuncBool" : True, # Whether to calculate the value function during solution
"CubicBool" : False, # Preference shocks currently only compatible with linear cFunc
"T_cycle" : 4, # Number of periods in the cycle for this agent type
# Parameters only used in simulation
"AgentCount" : 10000, # Number of agents of this type
"T_sim" : 120, # Number of periods to simulate
"aNrmInitMean" : -6.0, # Mean of log initial assets
"aNrmInitStd" : 1.0, # Standard deviation of log initial assets
"pLvlInitMean" : 0.0, # Mean of log initial permanent income
"pLvlInitStd" : 0.0, # Standard deviation of log initial permanent income
"PermGroFacAgg" : 1.0, # Aggregate permanent income growth factor
"T_age" : None, # Age after which simulated agents are automatically killed
} | examples/ConsIndShockModel/IndShockConsumerType.ipynb | econ-ark/HARK | apache-2.0 |
This consumer type's parameter dictionary is nearly identical to the original infinite horizon type we made, except that each of the time-varying parameters now have four values, rather than just one. Most of these have the same value in each period except for $\texttt{PermGroFac}$, which varies greatly over the four seasons. Note that the product of the four "permanent" income growth factors is almost exactly 1.0-- this type's income does not grow on average in the long run!
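The claim that the four seasonal growth factors multiply out to (almost exactly) one is easy to verify directly; this quick sanity check uses only the standard library and the factor values from the dictionary above:

```python
import math

# The four seasonal "permanent" income growth factors from CyclicalDict
perm_gro_fac = [1.082251, 2.8, 0.3, 1.1]

# Their product is the total income growth over one full cycle (one year)
annual_growth = math.prod(perm_gro_fac)
print(annual_growth)  # ~0.9999999 -- essentially no growth over a full year
```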
Let's make and solve this consumer type, then plot his quarterly consumption functions: | CyclicalExample = IndShockConsumerType(**CyclicalDict)
CyclicalExample.cycles = 0 # Make this consumer type have an infinite horizon
CyclicalExample.solve()
CyclicalExample.unpack('cFunc')
print('Quarterly consumption functions:')
mMin = min([X.mNrmMin for X in CyclicalExample.solution])
plot_funcs(CyclicalExample.cFunc,mMin,5) | examples/ConsIndShockModel/IndShockConsumerType.ipynb | econ-ark/HARK | apache-2.0 |
A Simple Parser for Term Rewriting
This file implements a parser for terms and equations. It uses the parser generator Ply. To install Ply, change the cell below into a code cell and execute it. If the package ply is already installed, this command will only produce a message that the package is already installed.
!conda install -y -c anaconda ply
Specification of the Scanner
The scanner that is implemented below recognizes numbers, variable names, function names, and various operator symbols. Variable and function names are both identifiers starting with a letter; the scanner tells them apart by lookahead: an identifier that is immediately followed by an opening parenthesis ( is a function name, while any other identifier is a variable. | import ply.lex as lex
tokens = [ 'NUMBER', 'VAR', 'FCT', 'BACKSLASH' ] | Python/4 Automatic Theorem Proving/Parser.ipynb | karlstroetmann/Artificial-Intelligence | gpl-2.0 |
The token Number specifies a natural number. Syntactically, numbers are treated a function symbols. | def t_NUMBER(t):
r'0|[1-9][0-9]*'
return t | Python/4 Automatic Theorem Proving/Parser.ipynb | karlstroetmann/Artificial-Intelligence | gpl-2.0 |
Variables start with a letter, followed by letters, digits, and underscores. They must be followed by a character that is not an opening parenthesis (. | def t_VAR(t):
r'[a-zA-Z][a-zA-Z0-9_]*(?=[^(a-zA-Z0-9_])'
return t | Python/4 Automatic Theorem Proving/Parser.ipynb | karlstroetmann/Artificial-Intelligence | gpl-2.0 |
Function names start with a letter, followed by letters, digits, and underscores.
They have to be followed by an opening parenthesis (. | def t_FCT(t):
r'[a-zA-Z][a-zA-Z0-9_]*(?=[(])'
return t
def t_BACKSLASH(t):
r'\\'
return t | Python/4 Automatic Theorem Proving/Parser.ipynb | karlstroetmann/Artificial-Intelligence | gpl-2.0 |
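The division of labor between `t_VAR` and `t_FCT` rests entirely on their lookahead assertions. As a quick illustration, here are the same two regular expressions applied with Python's `re` module directly, outside of Ply:

```python
import re

# The two identifier patterns from t_VAR and t_FCT above
VAR_RE = re.compile(r'[a-zA-Z][a-zA-Z0-9_]*(?=[^(a-zA-Z0-9_])')
FCT_RE = re.compile(r'[a-zA-Z][a-zA-Z0-9_]*(?=[(])')

# 'x' followed by a space is a variable, not a function name
print(VAR_RE.match('x * y').group())   # x
print(FCT_RE.match('x * y'))           # None

# 'i' followed by '(' is a function name, not a variable
print(FCT_RE.match('i(x)').group())    # i
print(VAR_RE.match('i(x)'))            # None
```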
Single line comments are supported and work as in C. | def t_COMMENT(t):
r'//[^\n]*'
t.lexer.lineno += t.value.count('\n')
pass | Python/4 Automatic Theorem Proving/Parser.ipynb | karlstroetmann/Artificial-Intelligence | gpl-2.0 |
The arithmetic operators and a few other symbols are supported. | literals = ['+', '-', '*', '/', '\\', '%', '^', '(', ')', ';', '=', ','] | Python/4 Automatic Theorem Proving/Parser.ipynb | karlstroetmann/Artificial-Intelligence | gpl-2.0 |
White space, i.e. space characters, tabulators, and carriage returns are ignored. | t_ignore = ' \t\r' | Python/4 Automatic Theorem Proving/Parser.ipynb | karlstroetmann/Artificial-Intelligence | gpl-2.0 |
Syntactically, newline characters are ignored. However, we still need to keep track of them in order to know which line we are in. This information is needed later for error messages. | def t_newline(t):
r'\n'
t.lexer.lineno += 1
return | Python/4 Automatic Theorem Proving/Parser.ipynb | karlstroetmann/Artificial-Intelligence | gpl-2.0 |
Given a token, the function find_colum returns the column where token starts.
This is possible, because token.lexer.lexdata stores the string that is given to the scanner and token.lexpos is the number of characters that precede token. | def find_column(token):
program = token.lexer.lexdata
line_start = program.rfind('\n', 0, token.lexpos) + 1
return (token.lexpos - line_start) + 1 | Python/4 Automatic Theorem Proving/Parser.ipynb | karlstroetmann/Artificial-Intelligence | gpl-2.0 |
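A small worked example of this arithmetic, using a plain string in place of `token.lexer.lexdata` and a made-up token position:

```python
program = 'x = 1;\ny = f(x);'

# Suppose a token starts at lexpos 9, i.e. at the '=' in the second line
lexpos = 9
line_start = program.rfind('\n', 0, lexpos) + 1  # index just past the last newline
column = (lexpos - line_start) + 1
print(column)  # 3
```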
The function t_error is called for any token t that can not be scanned by the lexer. In this case, t.value[0] is the first character that can not be recognized by the scanner. | def t_error(t):
column = find_column(t)
print(f"Illegal character '{t.value[0]}' in line {t.lineno}, column {column}.")
t.lexer.skip(1) | Python/4 Automatic Theorem Proving/Parser.ipynb | karlstroetmann/Artificial-Intelligence | gpl-2.0 |
The next assignment is necessary to make the lexer think that the code given above is part of some file. | __file__ = 'main'
lexer = lex.lex()
def test_scanner(file_name):
with open(file_name, 'r') as handle:
program = handle.read()
print(program)
lexer.input(program)
lexer.lineno = 1
return [t for t in lexer] | Python/4 Automatic Theorem Proving/Parser.ipynb | karlstroetmann/Artificial-Intelligence | gpl-2.0 |
for t in test_scanner('Examples/quasigroup.eqn'):
print(t)
Specification of the Parser
We will use the following grammar to specify the language that our compiler can translate:
```
axioms
: equation
| axioms equation
;
equation
: term '=' term
;
term: term '+' term
| term '-' term
| term '*' term
| term '/' term
| term '\' term
| term '%' term
| term '^' term
| '(' term ')'
| FCT '(' term_list ')'
| FCT
| VAR
;
term_list
    : /* epsilon */
| term
| term ',' ne_term_list
;
ne_term_list
: term
| term ',' ne_term_list
;
```
We will use precedence declarations to resolve the ambiguity that is inherent in this grammar. | import ply.yacc as yacc | Python/4 Automatic Theorem Proving/Parser.ipynb | karlstroetmann/Artificial-Intelligence | gpl-2.0 |
The start variable of our grammar is axioms. | start = 'axioms'
precedence = (
    ('nonassoc', '='),
    ('left', '+', '-'),
    ('left', '*', '/', 'BACKSLASH', '%'),
    ('right', '^')
)
def p_axioms_one(p):
"axioms : equation"
p[0] = ('axioms', p[1])
def p_axioms_more(p):
"axioms : axioms equation"
p[0] = p[1] + (p[2],)
def p_equation(p):
"equation : term '=' term ';'"
p[0] = ('=', p[1], p[3])
def p_term_plus(p):
"term : term '+' term"
p[0] = ('+', p[1], p[3])
def p_term_minus(p):
"term : term '-' term"
p[0] = ('-', p[1], p[3])
def p_term_times(p):
"term : term '*' term"
p[0] = ('*', p[1], p[3])
def p_term_divide(p):
"term : term '/' term"
p[0] = ('/', p[1], p[3])
def p_term_backslash(p):
"term : term BACKSLASH term"
p[0] = ('\\', p[1], p[3])
def p_term_modulo(p):
"term : term '%' term"
p[0] = ('%', p[1], p[3])
def p_term_power(p):
"term : term '^' term"
p[0] = ('^', p[1], p[3])
def p_term_group(p):
"term : '(' term ')'"
p[0] = p[2]
def p_term_fct_call(p):
"term : FCT '(' term_list ')'"
p[0] = (p[1],) + p[3][1:]
def p_term_number(p):
"term : NUMBER"
p[0] = (p[1],)
def p_term_id(p):
"term : VAR"
p[0] = ('$var', p[1])
def p_term_list_empty(p):
"term_list :"
p[0] = ('.',)
def p_term_list_one(p):
"term_list : term"
p[0] = ('.', p[1])
def p_term_list_more(p):
"term_list : term ',' ne_term_list"
p[0] = ('.', p[1]) + p[3][1:]
def p_ne_term_list_one(p):
"ne_term_list : term"
p[0] = ('.', p[1])
def p_ne_term_list_more(p):
"ne_term_list : term ',' ne_term_list"
p[0] = ('.', p[1]) + p[3][1:]
def p_error(p):
if p:
column = find_column(p)
print(f'Syntax error at token "{p.value}" in line {p.lineno}, column {column}.')
else:
print('Syntax error at end of input.') | Python/4 Automatic Theorem Proving/Parser.ipynb | karlstroetmann/Artificial-Intelligence | gpl-2.0 |
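The grammar actions build the AST out of nested tuples: a function application becomes `(name, arg1, …, argn)`, and argument lists are threaded through as `('.', …)` tuples whose leading marker is stripped off again in `p_term_fct_call`. A tuple-level sketch of that composition (plain Python, independent of Ply; the function name `Mult` is made up for illustration):

```python
# ne_term_list builds ('.', t1, t2, ...); here with two variable arguments
term_list = ('.', ('$var', 'x'), ('$var', 'y'))

# p_term_fct_call: (p[1],) + p[3][1:] drops the '.' marker and
# prepends the function name
ast = ('Mult',) + term_list[1:]
print(ast)  # ('Mult', ('$var', 'x'), ('$var', 'y'))
```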
Setting the optional argument write_tables to False is required to prevent an obscure bug where the parser generator tries to read an empty parse table. As we have used precedence declarations to resolve all shift/reduce conflicts, the action table should contain no conflicts. | parser = yacc.yacc(write_tables=False, debug=True)
!cat parser.out
The notebook AST-2-Dot.ipynb provides the function tuple2dot. This function can be used to visualize the abstract syntax tree that is generated by the function yacc.parse. | %run AST-2-Dot.ipynb | Python/4 Automatic Theorem Proving/Parser.ipynb | karlstroetmann/Artificial-Intelligence | gpl-2.0 |
The function test_parse takes a file_name as its sole argument. The file is read and parsed.
The resulting parse tree is visualized using graphviz. It is important to reset the
attribute lineno of the scanner, for otherwise error messages will not have the correct line numbers. | def test_parse(file_name):
lexer.lineno = 1
with open(file_name, 'r') as handle:
program = handle.read()
ast = yacc.parse(program)
print(ast)
return tuple2dot(ast) | Python/4 Automatic Theorem Proving/Parser.ipynb | karlstroetmann/Artificial-Intelligence | gpl-2.0 |
!cat Examples/quasigroup.eqn
test_parse('Examples/quasigroup.eqn')
The function parse_file takes a file_name, reads and parses the given file, and returns the resulting list of equations as abstract syntax trees. | def parse_file(file_name):
lexer.lineno = 1
with open(file_name, 'r') as handle:
program = handle.read()
AST = yacc.parse(program)
if AST:
_, *L = AST
return L
return None | Python/4 Automatic Theorem Proving/Parser.ipynb | karlstroetmann/Artificial-Intelligence | gpl-2.0 |
parse_file('Examples/group-theory.eqn') | def parse_equation(s):
lexer.lineno = 1
AST = yacc.parse(s + ';')
if AST:
_, *L = AST
return L[0]
return None | Python/4 Automatic Theorem Proving/Parser.ipynb | karlstroetmann/Artificial-Intelligence | gpl-2.0 |
parse_equation('i(x) * x = 1') | def parse_term(s):
lexer.lineno = 1
AST = yacc.parse(s + '= 1;')
if AST:
_, *L = AST
return L[0][1]
return None | Python/4 Automatic Theorem Proving/Parser.ipynb | karlstroetmann/Artificial-Intelligence | gpl-2.0 |
parse_term('i(x) * x') | def to_str(t):
if isinstance(t, set):
return '{' + ', '.join({ f'{to_str(eq)}' for eq in t }) + '}'
if isinstance(t, list):
return '[' + ', '.join([ f'{to_str(eq)}' for eq in t ]) + ']'
if isinstance(t, dict):
return '{' + ', '.join({ f'{k}: {to_str(v)}' for k, v in t.items() }) + '}'
if isinstance(t, str):
return t
if t[0] == '$var':
return t[1]
if len(t) == 3 and t[0] in ['=']:
_, lhs, rhs = t
return f'{to_str(lhs)} = {to_str(rhs)}'
if t[0] == '\\':
op, lhs, rhs = t
return to_str_paren(lhs) + ' \\ ' + to_str_paren(rhs)
if len(t) == 3 and t[0] in ['+', '-', '*', '/', '%', '^']:
op, lhs, rhs = t
return f'{to_str_paren(lhs)} {op} {to_str_paren(rhs)}'
f, *Args = t
if Args == []:
return f
return f'{f}({to_str_list(Args)})'
def to_str_paren(t):
if isinstance(t, str):
return t
if t[0] == '$var':
return t[1]
if len(t) == 3:
op, lhs, rhs = t
return f'({to_str_paren(lhs)} {op} {to_str_paren(rhs)})'
f, *Args = t
if Args == []:
return f
return f'{f}({to_str_list(Args)})'
def to_str_list(TL):
if TL == []:
return ''
t, *Ts = TL
if Ts == []:
return f'{to_str(t)}'
return f'{to_str(t)}, {to_str_list(Ts)}' | Python/4 Automatic Theorem Proving/Parser.ipynb | karlstroetmann/Artificial-Intelligence | gpl-2.0 |
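To see what these printers produce, here is a stripped-down re-implementation of to_str covering just variables, binary operators, and function applications (it omits the parenthesization logic of to_str_paren); the example term is made up for illustration:

```python
def to_str_mini(t):
    if t[0] == '$var':                 # variables: ('$var', name)
        return t[1]
    if len(t) == 3 and t[0] in ('+', '-', '*', '/', '\\', '%', '^'):
        op, lhs, rhs = t               # binary operator nodes
        return f'{to_str_mini(lhs)} {op} {to_str_mini(rhs)}'
    f, *args = t                       # function application: (name, arg1, ...)
    if not args:
        return f
    return f'{f}({", ".join(to_str_mini(a) for a in args)})'

print(to_str_mini(('*', ('i', ('$var', 'x')), ('$var', 'x'))))  # i(x) * x
```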
To do:
Get temperature for Guatemala
Fetch rainfall for Afghanistan between 1980 and 1999 | url = 'http://climatedataapi.worldbank.org/climateweb/rest/v1/country/cru/tas/GTM.csv'
response = requests.get(url ) # gets the get function from the request library to find the url and put it in a loop etc
if response.status_code != 200:
print ('Failed to get data: ', response.status_code)
else:
print ('First 100 charecters of data are: ')
print (response.text[:100])
url = 'http://climatedataapi.worldbank.org/climateweb/rest/v1/country/annualavg/pr/1980/1999/AFG.csv'
response = requests.get(url ) # gets the get function from the request library to find the url and put it in a loop etc
if response.status_code != 200:
print ('Failed to get data: ', response.status_code)
else:
print ('First 100 charecters of data are: ')
print (response.text[:100])
# Create a csv file: test01.csv
with open('test01.csv', 'w') as writer:
    writer.write('1901, 12.3\n')
    writer.write('1902, 45.6\n')
    writer.write('1903, 78.9\n')
with open('test01.csv', 'r') as reader:
for line in reader:
print (len(line))
with open ('test01.csv', 'r') as reader:
for line in reader:
fields = line.split (',')
print (fields)
# We need to get rid of the hidden newline \n
with open ('test01.csv', 'r') as reader:
for line in reader:
fields = line.strip().split (',')
print (fields) | Untitled1.ipynb | WillRhB/PythonLesssons | mit |
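Why the strip() matters can be seen on a single line: without it, the trailing newline stays glued to the last field.

```python
line = '1901, 12.3\n'

print(line.split(','))          # ['1901', ' 12.3\n'] -- newline still attached
print(line.strip().split(','))  # ['1901', ' 12.3']
```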
Using the csv library instead | import csv
with open ('test01.csv', 'r') as rawdata:
csvdata = csv.reader(rawdata)
for record in csvdata:
print (record)
url = 'http://climatedataapi.worldbank.org/climateweb/rest/v1/country/cru/tas/year/GTM.csv'
response = requests.get(url ) # gets the get function from the request library to find the url and put it in a loop etc
if response.status_code != 200:
print ('Failed to get data: ', response.status_code)
else:
wrapper = csv.reader(response.text.strip().split('\n'))
for record in wrapper:
if record[0] != 'year' :
year = int(record[0])
value = float(record[1])
print (year, value) | Untitled1.ipynb | WillRhB/PythonLesssons | mit |
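The same header-skipping and type-conversion pattern works on any CSV text, without a network connection; here it is applied to a small made-up string in the shape of the climate API's response:

```python
import csv

# Made-up data in the same shape as the climate API's CSV response
text = 'year,data\n1901,12.3\n1902,45.6\n'

records = []
for record in csv.reader(text.strip().split('\n')):
    if record[0] != 'year':  # skip the header row
        records.append((int(record[0]), float(record[1])))

print(records)  # [(1901, 12.3), (1902, 45.6)]
```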
We provide you with some helper functions to deal with images, since for this part of the assignment we're dealing with real JPEGs, not CIFAR-10 data. | def preprocess(img, size=512):
transform = T.Compose([
T.Scale(size),
T.ToTensor(),
T.Normalize(mean=SQUEEZENET_MEAN.tolist(),
std=SQUEEZENET_STD.tolist()),
T.Lambda(lambda x: x[None]),
])
return transform(img)
def deprocess(img):
transform = T.Compose([
T.Lambda(lambda x: x[0]),
T.Normalize(mean=[0, 0, 0], std=[1.0 / s for s in SQUEEZENET_STD.tolist()]),
T.Normalize(mean=[-m for m in SQUEEZENET_MEAN.tolist()], std=[1, 1, 1]),
T.Lambda(rescale),
T.ToPILImage(),
])
return transform(img)
def rescale(x):
low, high = x.min(), x.max()
x_rescaled = (x - low) / (high - low)
return x_rescaled
def rel_error(x,y):
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
def features_from_img(imgpath, imgsize):
img = preprocess(PIL.Image.open(imgpath), size=imgsize)
img_var = Variable(img.type(dtype))
return extract_features(img_var, cnn), img_var
# Older versions of scipy.misc.imresize yield different results
# from newer versions, so we check to make sure scipy is up to date.
def check_scipy():
import scipy
vnum = int(scipy.__version__.split('.')[1])
assert vnum >= 16, "You must install SciPy >= 0.16.0 to complete this notebook."
check_scipy()
answers = np.load('style-transfer-checks.npz')
| CS231n/assignment3/StyleTransfer-PyTorch.ipynb | UltronAI/Deep-Learning | mit |
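The helper rel_error above computes a maximum relative error over whole arrays; the scalar case makes the formula easy to check by hand. This is a sketch of the per-element quantity, not the function used by the saved checks:

```python
def rel_error_scalar(x, y):
    # |x - y| / max(1e-8, |x| + |y|): the per-element quantity behind rel_error
    return abs(x - y) / max(1e-8, abs(x) + abs(y))

print(rel_error_scalar(1.0, 1.0))  # 0.0
print(rel_error_scalar(1.0, 3.0))  # 0.5
```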
As in the last assignment, we need to set the dtype to select either the CPU or the GPU | dtype = torch.FloatTensor
# Uncomment out the following line if you're on a machine with a GPU set up for PyTorch!
# dtype = torch.cuda.FloatTensor
# Load the pre-trained SqueezeNet model.
cnn = torchvision.models.squeezenet1_1(pretrained=True).features
cnn.type(dtype)
# We don't want to train the model any further, so we don't want PyTorch to waste computation
# computing gradients on parameters we're never going to update.
for param in cnn.parameters():
param.requires_grad = False
# We provide this helper code which takes an image, a model (cnn), and returns a list of
# feature maps, one per layer.
def extract_features(x, cnn):
"""
Use the CNN to extract features from the input image x.
Inputs:
- x: A PyTorch Variable of shape (N, C, H, W) holding a minibatch of images that
will be fed to the CNN.
- cnn: A PyTorch model that we will use to extract features.
Returns:
- features: A list of feature for the input images x extracted using the cnn model.
features[i] is a PyTorch Variable of shape (N, C_i, H_i, W_i); recall that features
from different layers of the network may have different numbers of channels (C_i) and
spatial dimensions (H_i, W_i).
"""
features = []
prev_feat = x
for i, module in enumerate(cnn._modules.values()):
next_feat = module(prev_feat)
features.append(next_feat)
prev_feat = next_feat
return features | CS231n/assignment3/StyleTransfer-PyTorch.ipynb | UltronAI/Deep-Learning | mit |
Computing Loss
We're going to compute the three components of our loss function now. The loss function is a weighted sum of three terms: content loss + style loss + total variation loss. You'll fill in the functions that compute these weighted terms below.
Content loss
We can generate an image that reflects the content of one image and the style of another by incorporating both in our loss function. We want to penalize deviations from the content of the content image and deviations from the style of the style image. We can then use this hybrid loss function to perform gradient descent not on the parameters of the model, but instead on the pixel values of our original image.
Let's first write the content loss function. Content loss measures how much the feature map of the generated image differs from the feature map of the source image. We only care about the content representation of one layer of the network (say, layer $\ell$), that has feature maps $A^\ell \in \mathbb{R}^{1 \times C_\ell \times H_\ell \times W_\ell}$. $C_\ell$ is the number of filters/channels in layer $\ell$, $H_\ell$ and $W_\ell$ are the height and width. We will work with reshaped versions of these feature maps that combine all spatial positions into one dimension. Let $F^\ell \in \mathbb{R}^{N_\ell \times M_\ell}$ be the feature map for the current image and $P^\ell \in \mathbb{R}^{N_\ell \times M_\ell}$ be the feature map for the content source image where $M_\ell=H_\ell\times W_\ell$ is the number of elements in each feature map. Each row of $F^\ell$ or $P^\ell$ represents the vectorized activations of a particular filter, convolved over all positions of the image. Finally, let $w_c$ be the weight of the content loss term in the loss function.
Then the content loss is given by:
$L_c = w_c \times \sum_{i,j} (F_{ij}^{\ell} - P_{ij}^{\ell})^2$ | def content_loss(content_weight, content_current, content_original):
"""
Compute the content loss for style transfer.
Inputs:
- content_weight: Scalar giving the weighting for the content loss.
- content_current: features of the current image; this is a PyTorch Tensor of shape
(1, C_l, H_l, W_l).
    - content_original: features of the content image, Tensor with shape (1, C_l, H_l, W_l).
Returns:
- scalar content loss
"""
pass
| CS231n/assignment3/StyleTransfer-PyTorch.ipynb | UltronAI/Deep-Learning | mit |
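Independent of PyTorch, the formula $L_c = w_c \sum_{i,j} (F_{ij}^{\ell} - P_{ij}^{\ell})^2$ can be sketched on plain nested lists. This is only a reference for checking a tensor implementation against small inputs, not the assignment solution itself:

```python
def content_loss_ref(content_weight, F, P):
    # F, P: reshaped feature maps as nested lists of shape (N_l, M_l)
    return content_weight * sum((f - p) ** 2
                                for row_f, row_p in zip(F, P)
                                for f, p in zip(row_f, row_p))

F = [[1.0, 2.0], [3.0, 4.0]]
P = [[1.0, 0.0], [0.0, 4.0]]
print(content_loss_ref(2.0, F, P))  # 2 * (0 + 4 + 9 + 0) = 26.0
```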