<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Imports
Step2: Define a label map
Step3: Collect the dataset
Step4: Read the dataset and apply basic preprocessing
Step5: The columns we are interested in are the following
Step6: Dataset visualization
Step7: Train/test split
Step8: To account for that we will go for a stratified split.
Step9: Data input pipeline
Step11: Our text preprocessing code mostly comes from
Step12: Run the preprocessor on a sample input
Step13: We will now create tf.data.Dataset objects from the dataframes.
Step14: Preprocessing utilities
Step15: Create the final datasets
Step16: Model building utilities
Step17: Vision encoder utilities
Step18: Text encoder utilities
Step19: Multimodal model utilities
Step20: You can inspect the structure of the individual encoders as well by setting the
Step21: Evaluate the model
<ASSISTANT_TASK:>
Python Code:
!pip install -q tensorflow_text
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import os
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_text as text
from tensorflow import keras
label_map = {"Contradictory": 0, "Implies": 1, "NoEntailment": 2}
image_base_path = keras.utils.get_file(
"tweet_images",
"https://github.com/sayakpaul/Multimodal-Entailment-Baseline/releases/download/v1.0.0/tweet_images.tar.gz",
untar=True,
)
df = pd.read_csv(
"https://github.com/sayakpaul/Multimodal-Entailment-Baseline/raw/main/csvs/tweets.csv"
)
df.sample(10)
images_one_paths = []
images_two_paths = []
for idx in range(len(df)):
current_row = df.iloc[idx]
id_1 = current_row["id_1"]
id_2 = current_row["id_2"]
extension_one = current_row["image_1"].split(".")[-1]
extension_two = current_row["image_2"].split(".")[-1]
image_one_path = os.path.join(image_base_path, str(id_1) + f".{extension_one}")
image_two_path = os.path.join(image_base_path, str(id_2) + f".{extension_two}")
images_one_paths.append(image_one_path)
images_two_paths.append(image_two_path)
df["image_1_path"] = images_one_paths
df["image_2_path"] = images_two_paths
# Create another column containing the integer ids of
# the string labels.
df["label_idx"] = df["label"].apply(lambda x: label_map[x])
def visualize(idx):
current_row = df.iloc[idx]
image_1 = plt.imread(current_row["image_1_path"])
image_2 = plt.imread(current_row["image_2_path"])
text_1 = current_row["text_1"]
text_2 = current_row["text_2"]
label = current_row["label"]
plt.subplot(1, 2, 1)
plt.imshow(image_1)
plt.axis("off")
plt.title("Image One")
plt.subplot(1, 2, 2)
plt.imshow(image_2)
plt.axis("off")
plt.title("Image Two")
plt.show()
print(f"Text one: {text_1}")
print(f"Text two: {text_2}")
print(f"Label: {label}")
random_idx = np.random.choice(len(df))
visualize(random_idx)
random_idx = np.random.choice(len(df))
visualize(random_idx)
df["label"].value_counts()
# 10% for test
train_df, test_df = train_test_split(
df, test_size=0.1, stratify=df["label"].values, random_state=42
)
# 5% for validation
train_df, val_df = train_test_split(
train_df, test_size=0.05, stratify=train_df["label"].values, random_state=42
)
print(f"Total training examples: {len(train_df)}")
print(f"Total validation examples: {len(val_df)}")
print(f"Total test examples: {len(test_df)}")
# Define TF Hub paths to the BERT encoder and its preprocessor
bert_model_path = (
"https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-2_H-256_A-4/1"
)
bert_preprocess_path = "https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3"
def make_bert_preprocessing_model(sentence_features, seq_length=128):
"""Returns Model mapping string features to BERT inputs.
Args:
sentence_features: A list with the names of string-valued features.
seq_length: An integer that defines the sequence length of BERT inputs.
Returns:
A Keras Model that can be called on a list or dict of string Tensors
(with the order or names, resp., given by sentence_features) and
returns a dict of tensors for input to BERT.
"""
input_segments = [
tf.keras.layers.Input(shape=(), dtype=tf.string, name=ft)
for ft in sentence_features
]
# Tokenize the text to word pieces.
bert_preprocess = hub.load(bert_preprocess_path)
tokenizer = hub.KerasLayer(bert_preprocess.tokenize, name="tokenizer")
segments = [tokenizer(s) for s in input_segments]
# Optional: Trim segments in a smart way to fit seq_length.
# Simple cases (like this example) can skip this step and let
# the next step apply a default truncation to approximately equal lengths.
truncated_segments = segments
# Pack inputs. The details (start/end token ids, dict of output tensors)
# are model-dependent, so this gets loaded from the SavedModel.
packer = hub.KerasLayer(
bert_preprocess.bert_pack_inputs,
arguments=dict(seq_length=seq_length),
name="packer",
)
model_inputs = packer(truncated_segments)
return keras.Model(input_segments, model_inputs)
bert_preprocess_model = make_bert_preprocessing_model(["text_1", "text_2"])
keras.utils.plot_model(bert_preprocess_model, show_shapes=True, show_dtype=True)
idx = np.random.choice(len(train_df))
row = train_df.iloc[idx]
sample_text_1, sample_text_2 = row["text_1"], row["text_2"]
print(f"Text 1: {sample_text_1}")
print(f"Text 2: {sample_text_2}")
test_text = [np.array([sample_text_1]), np.array([sample_text_2])]
text_preprocessed = bert_preprocess_model(test_text)
print("Keys : ", list(text_preprocessed.keys()))
print("Shape Word Ids : ", text_preprocessed["input_word_ids"].shape)
print("Word Ids : ", text_preprocessed["input_word_ids"][0, :16])
print("Shape Mask : ", text_preprocessed["input_mask"].shape)
print("Input Mask : ", text_preprocessed["input_mask"][0, :16])
print("Shape Type Ids : ", text_preprocessed["input_type_ids"].shape)
print("Type Ids : ", text_preprocessed["input_type_ids"][0, :16])
def dataframe_to_dataset(dataframe):
columns = ["image_1_path", "image_2_path", "text_1", "text_2", "label_idx"]
dataframe = dataframe[columns].copy()
labels = dataframe.pop("label_idx")
ds = tf.data.Dataset.from_tensor_slices((dict(dataframe), labels))
ds = ds.shuffle(buffer_size=len(dataframe))
return ds
resize = (128, 128)
bert_input_features = ["input_word_ids", "input_type_ids", "input_mask"]
def preprocess_image(image_path):
extension = tf.strings.split(image_path, ".")[-1]  # split on "." to get the actual file extension
image = tf.io.read_file(image_path)
if extension == b"jpg":
image = tf.image.decode_jpeg(image, 3)
else:
image = tf.image.decode_png(image, 3)
image = tf.image.resize(image, resize)
return image
def preprocess_text(text_1, text_2):
text_1 = tf.convert_to_tensor([text_1])
text_2 = tf.convert_to_tensor([text_2])
output = bert_preprocess_model([text_1, text_2])
output = {feature: tf.squeeze(output[feature]) for feature in bert_input_features}
return output
def preprocess_text_and_image(sample):
image_1 = preprocess_image(sample["image_1_path"])
image_2 = preprocess_image(sample["image_2_path"])
text = preprocess_text(sample["text_1"], sample["text_2"])
return {"image_1": image_1, "image_2": image_2, "text": text}
batch_size = 32
auto = tf.data.AUTOTUNE
def prepare_dataset(dataframe, training=True):
ds = dataframe_to_dataset(dataframe)
if training:
ds = ds.shuffle(len(train_df))
ds = ds.map(lambda x, y: (preprocess_text_and_image(x), y)).cache()
ds = ds.batch(batch_size).prefetch(auto)
return ds
train_ds = prepare_dataset(train_df)
validation_ds = prepare_dataset(val_df, False)
test_ds = prepare_dataset(test_df, False)
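# Hedged aside (not in the original notebook): peek at one batch to confirm
# the nested structure and shapes produced by the input pipeline.
sample_batch, sample_labels = next(iter(train_ds))
print(sample_batch["image_1"].shape, sample_batch["text"]["input_word_ids"].shape, sample_labels.shape)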
def project_embeddings(
embeddings, num_projection_layers, projection_dims, dropout_rate
):
projected_embeddings = keras.layers.Dense(units=projection_dims)(embeddings)
for _ in range(num_projection_layers):
x = tf.nn.gelu(projected_embeddings)
x = keras.layers.Dense(projection_dims)(x)
x = keras.layers.Dropout(dropout_rate)(x)
x = keras.layers.Add()([projected_embeddings, x])
projected_embeddings = keras.layers.LayerNormalization()(x)
return projected_embeddings
def create_vision_encoder(
num_projection_layers, projection_dims, dropout_rate, trainable=False
):
# Load the pre-trained ResNet50V2 model to be used as the base encoder.
resnet_v2 = keras.applications.ResNet50V2(
include_top=False, weights="imagenet", pooling="avg"
)
# Set the trainability of the base encoder.
for layer in resnet_v2.layers:
layer.trainable = trainable
# Receive the images as inputs.
image_1 = keras.Input(shape=(128, 128, 3), name="image_1")
image_2 = keras.Input(shape=(128, 128, 3), name="image_2")
# Preprocess the input image.
preprocessed_1 = keras.applications.resnet_v2.preprocess_input(image_1)
preprocessed_2 = keras.applications.resnet_v2.preprocess_input(image_2)
# Generate the embeddings for the images using the resnet_v2 model
# concatenate them.
embeddings_1 = resnet_v2(preprocessed_1)
embeddings_2 = resnet_v2(preprocessed_2)
embeddings = keras.layers.Concatenate()([embeddings_1, embeddings_2])
# Project the embeddings produced by the model.
outputs = project_embeddings(
embeddings, num_projection_layers, projection_dims, dropout_rate
)
# Create the vision encoder model.
return keras.Model([image_1, image_2], outputs, name="vision_encoder")
def create_text_encoder(
num_projection_layers, projection_dims, dropout_rate, trainable=False
):
# Load the pre-trained BERT model to be used as the base encoder.
bert = hub.KerasLayer(bert_model_path, name="bert",)
# Set the trainability of the base encoder.
bert.trainable = trainable
# Receive the text as inputs.
bert_input_features = ["input_type_ids", "input_mask", "input_word_ids"]
inputs = {
feature: keras.Input(shape=(128,), dtype=tf.int32, name=feature)
for feature in bert_input_features
}
# Generate embeddings for the preprocessed text using the BERT model.
embeddings = bert(inputs)["pooled_output"]
# Project the embeddings produced by the model.
outputs = project_embeddings(
embeddings, num_projection_layers, projection_dims, dropout_rate
)
# Create the text encoder model.
return keras.Model(inputs, outputs, name="text_encoder")
def create_multimodal_model(
num_projection_layers=1,
projection_dims=256,
dropout_rate=0.1,
vision_trainable=False,
text_trainable=False,
):
# Receive the images as inputs.
image_1 = keras.Input(shape=(128, 128, 3), name="image_1")
image_2 = keras.Input(shape=(128, 128, 3), name="image_2")
# Receive the text as inputs.
bert_input_features = ["input_type_ids", "input_mask", "input_word_ids"]
text_inputs = {
feature: keras.Input(shape=(128,), dtype=tf.int32, name=feature)
for feature in bert_input_features
}
# Create the encoders.
vision_encoder = create_vision_encoder(
num_projection_layers, projection_dims, dropout_rate, vision_trainable
)
text_encoder = create_text_encoder(
num_projection_layers, projection_dims, dropout_rate, text_trainable
)
# Fetch the embedding projections.
vision_projections = vision_encoder([image_1, image_2])
text_projections = text_encoder(text_inputs)
# Concatenate the projections and pass through the classification layer.
concatenated = keras.layers.Concatenate()([vision_projections, text_projections])
outputs = keras.layers.Dense(3, activation="softmax")(concatenated)
return keras.Model([image_1, image_2, text_inputs], outputs)
multimodal_model = create_multimodal_model()
keras.utils.plot_model(multimodal_model, show_shapes=True)
multimodal_model.compile(
optimizer="adam", loss="sparse_categorical_crossentropy", metrics="accuracy"
)
history = multimodal_model.fit(train_ds, validation_data=validation_ds, epochs=10)
_, acc = multimodal_model.evaluate(test_ds)
print(f"Accuracy on the test set: {round(acc * 100, 2)}%.")
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Convert and repartition Subway Dataframe using PySpark
Step2: Convert, repartition, and sort Taxi Dataframe using PySpark
Step3: Using Dask, Read, and set index on Taxi Dataframe produced using PySpark, then write to disk for easy reading in Dask
<ASSISTANT_TASK:>
Python Code:
# standard imports
import pyspark.sql.functions
import pyspark.sql.types
funcs = pyspark.sql.functions
types = pyspark.sql.types
sqlContext.sql("set spark.sql.shuffle.partitions=32")
bike = spark.read.parquet('/data/citibike.parquet')
bike.registerTempTable('bike')
spark.sql('select * from bike limit 5').toPandas()
bike = (bike
.withColumn('start_time',
funcs.from_unixtime((bike.start_time/1000000).cast(types.IntegerType()))
.cast(types.TimestampType()))
.withColumn('stop_time',
funcs.from_unixtime((bike.stop_time/1000000).cast(types.IntegerType()))
.cast(types.TimestampType()))
# .withColumn('start_time',
# (funcs.substring(bike.start_time, 1, 20)).cast(types.TimestampType()))
# .withColumn('stop_time', bike.stop_time.cast(types.TimestampType())) \
.withColumn('start_taxizone_id', bike.start_taxizone_id.cast(types.FloatType()))
.withColumn('end_taxizone_id', bike.end_taxizone_id.cast(types.FloatType()))
)
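# Hedged aside (not in the original notebook): the repeated
# "microseconds since epoch -> timestamp" cast above could be factored out.
def micros_to_timestamp(col):
    return funcs.from_unixtime((col / 1000000).cast(types.IntegerType())).cast(types.TimestampType())
# e.g. bike.withColumn('start_time', micros_to_timestamp(bike.start_time))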
bike.registerTempTable('bike2')
spark.sql('select * from bike2 limit 5').toPandas()
bike.sort('start_time') \
.write.parquet('/data/citibike_spark.parquet', compression='snappy', mode='overwrite')
# standard imports
import pyspark.sql.functions
import pyspark.sql.types
funcs = pyspark.sql.functions
types = pyspark.sql.types
sqlContext.sql("set spark.sql.shuffle.partitions=32")
subway = spark.read.parquet('/data/subway.parquet')
subway.registerTempTable("subway")
spark.sql('select * from subway limit 5').toPandas()
subway = \
subway.withColumn('endtime', funcs.from_unixtime((subway.endtime/1000000).cast(types.IntegerType())) \
.cast(types.TimestampType()))
subway.registerTempTable("subway2")
spark.sql('select * from subway2 limit 5').toPandas()
subway = subway.sort("ca", "unit", "scp", "endtime")
subway.write.parquet('/data/subway_spark.parquet', compression='snappy', mode='overwrite')
# standard imports
import pyspark.sql.functions
import pyspark.sql.types
funcs = pyspark.sql.functions
types = pyspark.sql.types
sqlContext.sql("set spark.sql.shuffle.partitions=800")
taxi = spark.read.parquet('/data/all_trips_unprocessed.parquet')
taxi = (
taxi.withColumn('dropoff_datetime',
funcs.from_unixtime((taxi.dropoff_datetime/1000000).cast(types.IntegerType()))
.cast(types.TimestampType())) \
.withColumn('pickup_datetime',
funcs.from_unixtime((taxi.pickup_datetime/1000000).cast(types.IntegerType()))
.cast(types.TimestampType())) \
.withColumn('dropoff_taxizone_id', taxi.dropoff_taxizone_id.cast(types.IntegerType())) \
.withColumn('pickup_taxizone_id', taxi.pickup_taxizone_id.cast(types.IntegerType())) \
.withColumn('dropoff_latitude', taxi.dropoff_latitude.cast(types.FloatType())) \
.withColumn('dropoff_longitude', taxi.dropoff_longitude.cast(types.FloatType())) \
.withColumn('ehail_fee', taxi.ehail_fee.cast(types.FloatType())) \
.withColumn('extra', taxi.extra.cast(types.FloatType())) \
.withColumn('fare_amount', taxi.fare_amount.cast(types.FloatType())) \
.withColumn('improvement_surcharge', taxi.improvement_surcharge.cast(types.FloatType())) \
.withColumn('mta_tax', taxi.mta_tax.cast(types.FloatType())) \
.withColumn('pickup_latitude', taxi.pickup_latitude.cast(types.FloatType())) \
.withColumn('pickup_longitude', taxi.pickup_longitude.cast(types.FloatType())) \
.withColumn('tip_amount', taxi.tip_amount.cast(types.FloatType())) \
.withColumn('tolls_amount', taxi.tolls_amount.cast(types.FloatType())) \
.withColumn('total_amount', taxi.total_amount.cast(types.FloatType())) \
.withColumn('trip_distance', taxi.trip_distance.cast(types.FloatType())) \
.withColumn('passenger_count', taxi.passenger_count.cast(types.IntegerType())) \
.withColumn('rate_code_id', taxi.rate_code_id.cast(types.IntegerType()))
# .withColumn('trip_id', funcs.monotonically_increasing_id())
)
taxi.sort('pickup_datetime').withColumn('trip_id', funcs.monotonically_increasing_id()) \
.write.parquet('/data/all_trips_spark.parquet', compression='snappy', mode='overwrite')
import pandas as pd
import numpy as np
pd.options.display.max_rows = 100
pd.options.display.max_columns = 100
import dask.dataframe as dd
import dask.distributed
client = dask.distributed.Client()
trips = dd.read_parquet('/data/all_trips_spark.parquet', engine='arrow')
trips.head()
trips.tail()
# arrow engine adds quotes to all string fields for some reason. Strip them out.
dtypedict = dict(trips.dtypes)
for k in dtypedict:
if dtypedict[k] == np.dtype('O'):
trips[k] = trips[k].str.strip('"')
trips = trips.set_index('pickup_datetime', npartitions=trips.npartitions, sorted=True, compute=False)
trips.to_parquet('/data/all_trips.parquet', has_nulls=True, compression="SNAPPY", object_encoding='json')
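# Hedged aside (not in the original notebook): reading the result back should
# confirm the index is pickup_datetime and the partition count was preserved.
trips_check = dd.read_parquet('/data/all_trips.parquet')
print(trips_check.index.name, trips_check.npartitions)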
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 0 - Naive Face Verification
Step3: Expected Output
Step4: Expected Output
Step5: Here're some examples of distances between the encodings between three individuals
Step7: Now, when someone shows up at your front door and swipes their ID card (thus giving you their name), you can look up their encoding in the database, and use it to check if the person standing at the front door matches the name on the ID.
Step8: Younes is trying to enter the Happy House and the camera takes a picture of him ("images/camera_0.jpg"). Let's run your verification algorithm on this picture
Step9: Expected Output
Step11: Expected Output
Step12: Younes is at the front-door and the camera takes a picture of him ("images/camera_0.jpg"). Let's see if your who_it_is() algorithm identifies Younes.
<ASSISTANT_TASK:>
Python Code:
from keras.models import Sequential
from keras.layers import Conv2D, ZeroPadding2D, Activation, Input, concatenate
from keras.models import Model
from keras.layers.normalization import BatchNormalization
from keras.layers.pooling import MaxPooling2D, AveragePooling2D
from keras.layers.merge import Concatenate
from keras.layers.core import Lambda, Flatten, Dense
from keras.initializers import glorot_uniform
from keras.engine.topology import Layer
from keras import backend as K
K.set_image_data_format('channels_first')
import cv2
import os
import numpy as np
from numpy import genfromtxt
import pandas as pd
import tensorflow as tf
from fr_utils import *
from inception_blocks_v2 import *
%matplotlib inline
%load_ext autoreload
%autoreload 2
np.set_printoptions(threshold=np.nan)
FRmodel = faceRecoModel(input_shape=(3, 96, 96))
print("Total Params:", FRmodel.count_params())
# GRADED FUNCTION: triplet_loss
def triplet_loss(y_true, y_pred, alpha = 0.2):
"""Implementation of the triplet loss as defined by formula (3)
Arguments:
y_true -- true labels, required when you define a loss in Keras, you don't need it in this function.
y_pred -- python list containing three objects:
anchor -- the encodings for the anchor images, of shape (None, 128)
positive -- the encodings for the positive images, of shape (None, 128)
negative -- the encodings for the negative images, of shape (None, 128)
Returns:
loss -- real number, value of the loss
"""
anchor, positive, negative = y_pred[0], y_pred[1], y_pred[2]
### START CODE HERE ### (≈ 4 lines)
# Step 1: Compute the (encoding) distance between the anchor and the positive, you will need to sum over axis=-1
pos_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, positive)),axis=-1)
# Step 2: Compute the (encoding) distance between the anchor and the negative, you will need to sum over axis=-1
neg_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, negative)),axis=-1)
# Step 3: subtract the two previous distances and add alpha.
basic_loss = pos_dist - neg_dist + alpha
# Step 4: Take the maximum of basic_loss and 0.0. Sum over the training examples.
loss = tf.maximum(basic_loss, 0)
loss = tf.reduce_sum(loss)
### END CODE HERE ###
return loss
with tf.Session() as test:
tf.set_random_seed(1)
y_true = (None, None, None)
y_pred = (tf.random_normal([3, 128], mean=6, stddev=0.1, seed = 1),
tf.random_normal([3, 128], mean=1, stddev=1, seed = 1),
tf.random_normal([3, 128], mean=3, stddev=4, seed = 1))
loss = triplet_loss(y_true, y_pred)
print("loss = " + str(loss.eval()))
FRmodel.compile(optimizer = 'adam', loss = triplet_loss, metrics = ['accuracy'])
load_weights_from_FaceNet(FRmodel)
database = {}
database["danielle"] = img_to_encoding("images/danielle.png", FRmodel)
database["younes"] = img_to_encoding("images/younes.jpg", FRmodel)
database["tian"] = img_to_encoding("images/tian.jpg", FRmodel)
database["andrew"] = img_to_encoding("images/andrew.jpg", FRmodel)
database["kian"] = img_to_encoding("images/kian.jpg", FRmodel)
database["dan"] = img_to_encoding("images/dan.jpg", FRmodel)
database["sebastiano"] = img_to_encoding("images/sebastiano.jpg", FRmodel)
database["bertrand"] = img_to_encoding("images/bertrand.jpg", FRmodel)
database["kevin"] = img_to_encoding("images/kevin.jpg", FRmodel)
database["felix"] = img_to_encoding("images/felix.jpg", FRmodel)
database["benoit"] = img_to_encoding("images/benoit.jpg", FRmodel)
database["arnaud"] = img_to_encoding("images/arnaud.jpg", FRmodel)
# GRADED FUNCTION: verify
def verify(image_path, identity, database, model):
"""Function that verifies if the person on the "image_path" image is "identity".
Arguments:
image_path -- path to an image
identity -- string, name of the person you'd like to verify the identity. Has to be a resident of the Happy house.
database -- python dictionary mapping names of allowed people (strings) to their encodings (vectors).
model -- your Inception model instance in Keras
Returns:
dist -- distance between the image_path and the image of "identity" in the database.
door_open -- True, if the door should open. False otherwise.
"""
### START CODE HERE ###
# Step 1: Compute the encoding for the image. Use img_to_encoding() see example above. (≈ 1 line)
encoding = img_to_encoding(image_path, model)
# Step 2: Compute distance with identity's image (≈ 1 line)
dist = np.linalg.norm(encoding - database[identity])
# Step 3: Open the door if dist < 0.7, else don't open (≈ 3 lines)
if dist < 0.7:
print("It's " + str(identity) + ", welcome home!")
door_open = True
else:
print("It's not " + str(identity) + ", please go away")
door_open = False
### END CODE HERE ###
return dist, door_open
verify("images/camera_0.jpg", "younes", database, FRmodel)
verify("images/camera_2.jpg", "kian", database, FRmodel)
# GRADED FUNCTION: who_is_it
def who_is_it(image_path, database, model):
"""Implements face recognition for the happy house by finding who is the person on the image_path image.
Arguments:
image_path -- path to an image
database -- database containing image encodings along with the name of the person on the image
model -- your Inception model instance in Keras
Returns:
min_dist -- the minimum distance between image_path encoding and the encodings from the database
identity -- string, the name prediction for the person on image_path
"""
### START CODE HERE ###
## Step 1: Compute the target "encoding" for the image. Use img_to_encoding() see example above. ## (≈ 1 line)
encoding = img_to_encoding(image_path, model)
## Step 2: Find the closest encoding ##
# Initialize "min_dist" to a large value, say 100 (≈1 line)
min_dist = 100
# Loop over the database dictionary's names and encodings.
for (name, db_enc) in database.items():
# Compute L2 distance between the target "encoding" and the current "emb" from the database. (≈ 1 line)
dist = np.linalg.norm(encoding - db_enc)
# If this distance is less than the min_dist, then set min_dist to dist, and identity to name. (≈ 3 lines)
if dist < min_dist:
min_dist = dist
identity = name
### END CODE HERE ###
if min_dist > 0.7:
print("Not in the database.")
else:
print ("it's " + str(identity) + ", the distance is " + str(min_dist))
return min_dist, identity
who_is_it("images/camera_0.jpg", database, FRmodel)
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Using TPUs
Step2: TPU initialization
Step3: Manual device placement
Step4: Distribution strategies
Step5: To replicate a computation so it runs on all TPU cores, pass it to the strategy.run API. In the example below, all cores receive the same inputs (a, b) and perform the matrix multiplication independently on each core. The outputs are the values from all the replicas.
Step6: Classification on TPUs
Step7: Load the dataset
Step8: Train a model using the high-level Keras API
Step9: To reduce Python overhead and maximize TPU performance, pass the steps_per_execution argument to Model.compile. In this example, it increases throughput by about 50%.
Step12: Train a model using a custom training loop
Step13: Next, run the training loop.
Step16: Improve performance with multiple steps inside tf.function
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf
import os
import tensorflow_datasets as tfds
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
tf.config.experimental_connect_to_cluster(resolver)
# This is the TPU initialization code that has to be at the beginning.
tf.tpu.experimental.initialize_tpu_system(resolver)
print("All devices: ", tf.config.list_logical_devices('TPU'))
a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
with tf.device('/TPU:0'):
c = tf.matmul(a, b)
print("c device: ", c.device)
print(c)
strategy = tf.distribute.TPUStrategy(resolver)
@tf.function
def matmul_fn(x, y):
z = tf.matmul(x, y)
return z
z = strategy.run(matmul_fn, args=(a, b))
print(z)
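# Hedged aside (not in the original guide): the value returned by strategy.run
# is per-replica; experimental_local_results unpacks it into one tensor per core.
print(len(strategy.experimental_local_results(z)), "replica outputs,",
      strategy.num_replicas_in_sync, "replicas in sync")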
def create_model():
return tf.keras.Sequential(
[tf.keras.layers.Conv2D(256, 3, activation='relu', input_shape=(28, 28, 1)),
tf.keras.layers.Conv2D(256, 3, activation='relu'),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(256, activation='relu'),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10)])
def get_dataset(batch_size, is_training=True):
split = 'train' if is_training else 'test'
dataset, info = tfds.load(name='mnist', split=split, with_info=True,
as_supervised=True, try_gcs=True)
# Normalize the input data.
def scale(image, label):
image = tf.cast(image, tf.float32)
image /= 255.0
return image, label
dataset = dataset.map(scale)
# Only shuffle and repeat the dataset in training. The advantage of having an
# infinite dataset for training is to avoid the potential last partial batch
# in each epoch, so that you don't need to think about scaling the gradients
# based on the actual batch size.
if is_training:
dataset = dataset.shuffle(10000)
dataset = dataset.repeat()
dataset = dataset.batch(batch_size)
return dataset
with strategy.scope():
model = create_model()
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['sparse_categorical_accuracy'])
batch_size = 200
steps_per_epoch = 60000 // batch_size
validation_steps = 10000 // batch_size
train_dataset = get_dataset(batch_size, is_training=True)
test_dataset = get_dataset(batch_size, is_training=False)
model.fit(train_dataset,
epochs=5,
steps_per_epoch=steps_per_epoch,
validation_data=test_dataset,
validation_steps=validation_steps)
with strategy.scope():
model = create_model()
model.compile(optimizer='adam',
# Anything between 2 and `steps_per_epoch` could help here.
steps_per_execution = 50,
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['sparse_categorical_accuracy'])
model.fit(train_dataset,
epochs=5,
steps_per_epoch=steps_per_epoch,
validation_data=test_dataset,
validation_steps=validation_steps)
# Create the model, optimizer and metrics inside the strategy scope, so that the
# variables can be mirrored on each device.
with strategy.scope():
model = create_model()
optimizer = tf.keras.optimizers.Adam()
training_loss = tf.keras.metrics.Mean('training_loss', dtype=tf.float32)
training_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
'training_accuracy', dtype=tf.float32)
# Calculate per replica batch size, and distribute the datasets on each TPU
# worker.
per_replica_batch_size = batch_size // strategy.num_replicas_in_sync
train_dataset = strategy.experimental_distribute_datasets_from_function(
lambda _: get_dataset(per_replica_batch_size, is_training=True))
@tf.function
def train_step(iterator):
"""The step function for one training step."""
def step_fn(inputs):
"""The computation to run on each TPU device."""
images, labels = inputs
with tf.GradientTape() as tape:
logits = model(images, training=True)
loss = tf.keras.losses.sparse_categorical_crossentropy(
labels, logits, from_logits=True)
loss = tf.nn.compute_average_loss(loss, global_batch_size=batch_size)
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(list(zip(grads, model.trainable_variables)))
training_loss.update_state(loss * strategy.num_replicas_in_sync)
training_accuracy.update_state(labels, logits)
strategy.run(step_fn, args=(next(iterator),))
steps_per_eval = 10000 // batch_size
train_iterator = iter(train_dataset)
for epoch in range(5):
print('Epoch: {}/5'.format(epoch))
for step in range(steps_per_epoch):
train_step(train_iterator)
print('Current step: {}, training loss: {}, accuracy: {}%'.format(
optimizer.iterations.numpy(),
round(float(training_loss.result()), 4),
round(float(training_accuracy.result()) * 100, 2)))
training_loss.reset_states()
training_accuracy.reset_states()
@tf.function
def train_multiple_steps(iterator, steps):
"""The step function for one training step."""
def step_fn(inputs):
"""The computation to run on each TPU device."""
images, labels = inputs
with tf.GradientTape() as tape:
logits = model(images, training=True)
loss = tf.keras.losses.sparse_categorical_crossentropy(
labels, logits, from_logits=True)
loss = tf.nn.compute_average_loss(loss, global_batch_size=batch_size)
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(list(zip(grads, model.trainable_variables)))
training_loss.update_state(loss * strategy.num_replicas_in_sync)
training_accuracy.update_state(labels, logits)
for _ in tf.range(steps):
strategy.run(step_fn, args=(next(iterator),))
# Convert `steps_per_epoch` to `tf.Tensor` so the `tf.function` won't get
# retraced if the value changes.
train_multiple_steps(train_iterator, tf.convert_to_tensor(steps_per_epoch))
print('Current step: {}, training loss: {}, accuracy: {}%'.format(
optimizer.iterations.numpy(),
round(float(training_loss.result()), 4),
round(float(training_accuracy.result()) * 100, 2)))
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The above is the standard normal distribution. Its mean is 0 and SD is 1. About 95% of values fall within $\mu \pm 2 SD$ and about 99.7% within $\mu \pm 3 SD$.
Step2: Let us try for some common z scores
Step3: As you noticed, the norm.cdf() function gives the cumulative probability (left tail) for z scores, here ranging from about -3 to 3. If you need the right-tailed probability, simply subtract this value from 1.
Step4: Since the ppf() calls here return only positive z scores, you need to apply $\pm$ to them yourself.
Step5: Now let us use scikit-learn to easily transform this dataset
Step6: As you can see above, the shape of the distribution is the same; only the values are scaled.
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
vals = np.random.standard_normal(100000)
len(vals)
fig, ax = plt.subplots(1,1)
hist_vals = ax.hist(vals, bins=200, color='red', density=True)
import scipy.stats as st
# compute the p value for a z score
st.norm.cdf(-1.741)
[st.norm.cdf(-3), st.norm.cdf(-1), st.norm.cdf(0), st.norm.cdf(1), st.norm.cdf(2)]
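# Hedged aside (not in the original notebook): the right tail can also be read
# directly from the survival function, sf(z) == 1 - cdf(z), and the coverage
# inside +/- 2 and +/- 3 SD matches the ~95% / ~99.7% rule quoted above.
print(1 - st.norm.cdf(1.96), st.norm.sf(1.96))
print(st.norm.cdf(2) - st.norm.cdf(-2), st.norm.cdf(3) - st.norm.cdf(-3))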
# Find Z score for a probability of 0.97 (2sd)
st.norm.ppf(0.97)
[st.norm.ppf(0.95), st.norm.ppf(0.97), st.norm.ppf(0.98), st.norm.ppf(0.99)]
demo_dist = 55 + np.random.randn(200) * 3.4
std_normal = np.random.randn(200)
[demo_dist.mean(), demo_dist.std(), demo_dist.min(), demo_dist.max()]
[std_normal.mean(), std_normal.std(), std_normal.min(), std_normal.max()]
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
demo_dist = demo_dist.reshape(200,1)
demo_dist_scaled = scaler.fit_transform(demo_dist)
[round(demo_dist_scaled.mean(),3), demo_dist_scaled.std(), demo_dist_scaled.min(), demo_dist_scaled.max()]
fig, axs = plt.subplots(2,2, figsize=(15,8))
p1 = axs[0][0].scatter(sorted(demo_dist), sorted(std_normal))
axs[0][0].set_title("Scatter of original dataset against standard normal")
p1 = axs[0][1].scatter(sorted(demo_dist_scaled), sorted(std_normal))
axs[0][1].set_title("Scatter of scaled dataset against standard normal")
p2 = axs[1][0].hist(demo_dist, bins=50)
axs[1][0].set_title("Histogram of original dataset against standard normal")
p3 = axs[1][1].hist(demo_dist_scaled, bins=50)
axs[1][1].set_title("Histogram of scaled dataset against standard normal")
demo_dist = 55 + np.random.randn(200) * 3.4
std_normal = np.random.randn(200)
demo_dist = sorted(demo_dist)
std_normal = sorted(std_normal)
plt.scatter(demo_dist, std_normal)
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Policies
Step2: Python Policies
Step3: The most important method is action(time_step) which maps a time_step containing an observation from the environment to a PolicyStep named tuple containing the following attributes
Step4: Example 2
Step5: TensorFlow Policies
Step6: Example 2
Step7: In TensorFlow most network layers are designed for batch operations, so we expect the input time_steps to be batched, and the output of the network will be batched as well. Also the network is responsible for producing actions in the correct range of the given action_spec. This is conventionally done using e.g. a tanh activation for the final layer to produce actions in [-1, 1] and then scaling and shifting this to the correct range as the input action_spec (e.g. see tf_agents/agents/ddpg/networks.actor_network()).
Step8: We can apply it to any batch of time_steps that follow time_step_spec
Step9: In the above example, we created the policy using an action network that produces an action tensor. In this case, policy.distribution(time_step) is a deterministic (delta) distribution around the output of policy.action(time_step). One way to produce a stochastic policy is to wrap the actor policy in a policy wrapper that adds noise to the actions. Another way is to create the actor policy using an action distribution network instead of an action network as shown below.
Step10: Note that in the above, actions are clipped to the range of the given action spec [-1, 1]. This is because a constructor argument of ActorPolicy clip=True by default. Setting this to false will return unclipped actions produced by the network.
Step11: Policy Wrappers
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
!pip install tf-agents
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import abc
import tensorflow as tf
import tensorflow_probability as tfp
import numpy as np
from tf_agents.specs import array_spec
from tf_agents.specs import tensor_spec
from tf_agents.networks import network
from tf_agents.policies import py_policy
from tf_agents.policies import random_py_policy
from tf_agents.policies import scripted_py_policy
from tf_agents.policies import tf_policy
from tf_agents.policies import random_tf_policy
from tf_agents.policies import actor_policy
from tf_agents.policies import q_policy
from tf_agents.policies import greedy_policy
from tf_agents.trajectories import time_step as ts
class Base(object):
@abc.abstractmethod
def __init__(self, time_step_spec, action_spec, policy_state_spec=()):
self._time_step_spec = time_step_spec
self._action_spec = action_spec
self._policy_state_spec = policy_state_spec
@abc.abstractmethod
def reset(self, policy_state=()):
# return initial_policy_state.
pass
@abc.abstractmethod
def action(self, time_step, policy_state=()):
# return a PolicyStep(action, state, info) named tuple.
pass
@abc.abstractmethod
def distribution(self, time_step, policy_state=()):
# Not implemented in python, only for TF policies.
pass
@abc.abstractmethod
def update(self, policy):
# update self to be similar to the input `policy`.
pass
@property
def time_step_spec(self):
return self._time_step_spec
@property
def action_spec(self):
return self._action_spec
@property
def policy_state_spec(self):
return self._policy_state_spec
action_spec = array_spec.BoundedArraySpec((2,), np.int32, -10, 10)
my_random_py_policy = random_py_policy.RandomPyPolicy(time_step_spec=None,
action_spec=action_spec)
time_step = None
action_step = my_random_py_policy.action(time_step)
print(action_step)
action_step = my_random_py_policy.action(time_step)
print(action_step)
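# Hedged aside (not in the original tutorial): actions sampled by the random
# policy should always fall inside the bounds declared by the action spec.
assert np.all(action_step.action >= action_spec.minimum)
assert np.all(action_step.action <= action_spec.maximum)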
action_spec = array_spec.BoundedArraySpec((2,), np.int32, -10, 10)
action_script = [(1, np.array([5, 2], dtype=np.int32)),
(0, np.array([0, 0], dtype=np.int32)), # Setting `num_repeats` to 0 will skip this action.
(2, np.array([1, 2], dtype=np.int32)),
(1, np.array([3, 4], dtype=np.int32))]
my_scripted_py_policy = scripted_py_policy.ScriptedPyPolicy(
time_step_spec=None, action_spec=action_spec, action_script=action_script)
policy_state = my_scripted_py_policy.get_initial_state()
time_step = None
print('Executing scripted policy...')
action_step = my_scripted_py_policy.action(time_step, policy_state)
print(action_step)
action_step = my_scripted_py_policy.action(time_step, action_step.state)
print(action_step)
action_step = my_scripted_py_policy.action(time_step, action_step.state)
print(action_step)
print('Resetting my_scripted_py_policy...')
policy_state = my_scripted_py_policy.get_initial_state()
action_step = my_scripted_py_policy.action(time_step, policy_state)
print(action_step)
action_spec = tensor_spec.BoundedTensorSpec(
(2,), tf.float32, minimum=-1, maximum=3)
input_tensor_spec = tensor_spec.TensorSpec((2,), tf.float32)
time_step_spec = ts.time_step_spec(input_tensor_spec)
my_random_tf_policy = random_tf_policy.RandomTFPolicy(
action_spec=action_spec, time_step_spec=time_step_spec)
observation = tf.ones(time_step_spec.observation.shape)
time_step = ts.restart(observation)
action_step = my_random_tf_policy.action(time_step)
print('Action:')
print(action_step.action)
class ActionNet(network.Network):
def __init__(self, input_tensor_spec, output_tensor_spec):
super(ActionNet, self).__init__(
input_tensor_spec=input_tensor_spec,
state_spec=(),
name='ActionNet')
self._output_tensor_spec = output_tensor_spec
self._sub_layers = [
tf.keras.layers.Dense(
action_spec.shape.num_elements(), activation=tf.nn.tanh),
]
def call(self, observations, step_type, network_state):
del step_type
output = tf.cast(observations, dtype=tf.float32)
for layer in self._sub_layers:
output = layer(output)
actions = tf.reshape(output, [-1] + self._output_tensor_spec.shape.as_list())
# Scale and shift actions to the correct range if necessary.
return actions, network_state
input_tensor_spec = tensor_spec.TensorSpec((4,), tf.float32)
time_step_spec = ts.time_step_spec(input_tensor_spec)
action_spec = tensor_spec.BoundedTensorSpec((3,),
tf.float32,
minimum=-1,
maximum=1)
action_net = ActionNet(input_tensor_spec, action_spec)
my_actor_policy = actor_policy.ActorPolicy(
time_step_spec=time_step_spec,
action_spec=action_spec,
actor_network=action_net)
batch_size = 2
observations = tf.ones([2] + time_step_spec.observation.shape.as_list())
time_step = ts.restart(observations, batch_size)
action_step = my_actor_policy.action(time_step)
print('Action:')
print(action_step.action)
distribution_step = my_actor_policy.distribution(time_step)
print('Action distribution:')
print(distribution_step.action)
class ActionDistributionNet(ActionNet):
def call(self, observations, step_type, network_state):
action_means, network_state = super(ActionDistributionNet, self).call(
observations, step_type, network_state)
action_std = tf.ones_like(action_means)
return tfp.distributions.MultivariateNormalDiag(action_means, action_std), network_state
action_distribution_net = ActionDistributionNet(input_tensor_spec, action_spec)
my_actor_policy = actor_policy.ActorPolicy(
time_step_spec=time_step_spec,
action_spec=action_spec,
actor_network=action_distribution_net)
action_step = my_actor_policy.action(time_step)
print('Action:')
print(action_step.action)
distribution_step = my_actor_policy.distribution(time_step)
print('Action distribution:')
print(distribution_step.action)
input_tensor_spec = tensor_spec.TensorSpec((4,), tf.float32)
time_step_spec = ts.time_step_spec(input_tensor_spec)
action_spec = tensor_spec.BoundedTensorSpec((),
tf.int32,
minimum=0,
maximum=2)
num_actions = action_spec.maximum - action_spec.minimum + 1
class QNetwork(network.Network):
def __init__(self, input_tensor_spec, action_spec, num_actions=num_actions, name=None):
super(QNetwork, self).__init__(
input_tensor_spec=input_tensor_spec,
state_spec=(),
name=name)
self._sub_layers = [
tf.keras.layers.Dense(num_actions),
]
def call(self, inputs, step_type=None, network_state=()):
del step_type
inputs = tf.cast(inputs, tf.float32)
for layer in self._sub_layers:
inputs = layer(inputs)
return inputs, network_state
batch_size = 2
observation = tf.ones([batch_size] + time_step_spec.observation.shape.as_list())
time_steps = ts.restart(observation, batch_size=batch_size)
my_q_network = QNetwork(
input_tensor_spec=input_tensor_spec,
action_spec=action_spec)
my_q_policy = q_policy.QPolicy(
time_step_spec, action_spec, q_network=my_q_network)
action_step = my_q_policy.action(time_steps)
distribution_step = my_q_policy.distribution(time_steps)
print('Action:')
print(action_step.action)
print('Action distribution:')
print(distribution_step.action)
my_greedy_policy = greedy_policy.GreedyPolicy(my_q_policy)
action_step = my_greedy_policy.action(time_steps)
print('Action:')
print(action_step.action)
distribution_step = my_greedy_policy.distribution(time_steps)
print('Action distribution:')
print(distribution_step.action)
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: If XOR(f, g) is UNSAT, functions f and g are equivalent.
Step2: Let's see if we can do better using the Tseitin transformation,
Step3: Success!
<ASSISTANT_TASK:>
Python Code:
%dotobjs S_rca[2].simplify(), S_ksa[2].simplify()
f = Xor(S_rca[9], S_ksa[9])
%timeit f.satisfy_one()
g = f.tseitin()
%timeit g.satisfy_one()
assert f.satisfy_one() is None and g.satisfy_one() is None
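# Hedged, self-contained illustration (not part of the original notebook): the
# same miter-style equivalence check on two small hand-built functions. f1 and
# f2 both compute XOR of a and b, so Xor(f1, f2) must be UNSAT.
from pyeda.inter import exprvar, Xor, And, Or, Not
a, b = exprvar('a'), exprvar('b')
f1 = Xor(a, b)
f2 = Or(And(a, Not(b)), And(Not(a), b))
miter = Xor(f1, f2).tseitin()
assert miter.satisfy_one() is None  # UNSAT, hence f1 and f2 are equivalent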
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The swissmetro dataset used in this example is conveniently bundled with Larch,
Step2: We can inspect a few rows of data to see what we have using the head method.
Step3: The Biogeme code includes a variety of commands to manipulate the data
Step4: You can also use the eval method of pandas DataFrames.
Step5: This can allow for writing data
Step6: Removing some observations can also be done directly using pandas.
Step7: You may note that we don't assign this value to a column within the
Step8: The info method of the DataFrames object gives a short summary
Step9: A longer summary is available by setting verbose to True.
Step10: You may have noticed that the info summary notes that this data is "not computation-ready".
Step11: When computations are run, we'll need all the data to be in float format, but Larch knows this and will
Step12: Class Membership Model
Step13: The utility function of the first class isn't written here, which means it will implicitly
Step14: The we'll load the data needed for our models using the load_data method.
Step15: Only the data actually needed by the models has been converted, which may help
Step16: To complete our analysis, we can compute the log likelihood at "null" parameters.
Step17: And the parameter covariance matrixes.
Step18: Reporting Results
Step19: Pipe into the report section headers in markdown format (use one hash for top level
Step20: You can also pipe in dataframes directly, include the pf parameter frame from the model.
Step21: And a selection of pre-formatted summary sections.
Step22: In addition to reviewing report sections in a Jupyter notebook, the
<ASSISTANT_TASK:>
Python Code:
import larch
import pandas
from larch.roles import P,X
from larch import data_warehouse
raw = pandas.read_csv(larch.data_warehouse.example_file('swissmetro.csv.gz'))
raw.head()
raw['SM_COST'] = raw['SM_CO'] * (raw["GA"]==0)
raw['TRAIN_COST'] = raw.eval("TRAIN_CO * (GA == 0)")
raw['TRAIN_COST_SCALED'] = raw['TRAIN_COST'] / 100
raw['TRAIN_TT_SCALED'] = raw['TRAIN_TT'] / 100
raw['SM_COST_SCALED'] = raw.eval('SM_COST / 100')
raw['SM_TT_SCALED'] = raw['SM_TT'] / 100
raw['CAR_CO_SCALED'] = raw['CAR_CO'] / 100
raw['CAR_TT_SCALED'] = raw['CAR_TT'] / 100
raw['CAR_AV_SP'] = raw.eval("CAR_AV * (SP!=0)")
raw['TRAIN_AV_SP'] = raw.eval("TRAIN_AV * (SP!=0)")
keep = raw.eval("PURPOSE in (1,3) and CHOICE != 0")
dfs = larch.DataFrames(raw[keep], alt_codes=[1,2,3])
dfs.info()
dfs.info(verbose=True)
dfs.data_co.info()
m1 = larch.Model(dataservice=dfs)
m1.availability_co_vars = {
1: "TRAIN_AV_SP",
2: "SM_AV",
3: "CAR_AV_SP",
}
m1.choice_co_code = 'CHOICE'
m1.utility_co[1] = P("ASC_TRAIN") + X("TRAIN_COST_SCALED") * P("B_COST")
m1.utility_co[2] = X("SM_COST_SCALED") * P("B_COST")
m1.utility_co[3] = P("ASC_CAR") + X("CAR_CO_SCALED") * P("B_COST")
m2 = larch.Model(dataservice=dfs)
m2.availability_co_vars = {
1: "TRAIN_AV_SP",
2: "SM_AV",
3: "CAR_AV_SP",
}
m2.choice_co_code = 'CHOICE'
m2.utility_co[1] = P("ASC_TRAIN") + X("TRAIN_TT_SCALED") * P("B_TIME") + X("TRAIN_COST_SCALED") * P("B_COST")
m2.utility_co[2] = X("SM_TT_SCALED") * P("B_TIME") + X("SM_COST_SCALED") * P("B_COST")
m2.utility_co[3] = P("ASC_CAR") + X("CAR_TT_SCALED") * P("B_TIME") + X("CAR_CO_SCALED") * P("B_COST")
mk = larch.Model()
mk.utility_co[2] = P("W_OTHER")
from larch.model.latentclass import LatentClassModel
m = LatentClassModel(mk, {1:m1, 2:m2})
m.load_data()
m.dataframes.info(verbose=1)
result = m.maximize_loglike()
result
# TEST
from pytest import approx
assert result.loglike == approx(-5208.498065961453)
m.loglike_null()
# TEST
assert _ == approx(-6964.662979191462)
m.calculate_parameter_covariance()
m.covariance_matrix
m.robust_covariance_matrix
report = larch.Reporter("Latent Class Example")
report << "# Parameter Estimates"
report << m.pf
report << "# Estimation Statistics"
report << m.estimation_statistics()
report << "# Parameter Covariance"
report << "## Typical Parameter Covariance"
report << m.covariance_matrix
report << "## Robust Parameter Covariance"
report << m.robust_covariance_matrix
report << "# Utility Functions"
report << "## Class 1"
report << "### Formulae"
report << m1.utility_functions(resolve_parameters=False)
report << "### Final Estimated Values"
report << m1.utility_functions(resolve_parameters=True)
report << "## Class 2"
report << "### Formulae"
report << m1.utility_functions(resolve_parameters=False)
report << "### Final Estimated Values"
report << m1.utility_functions(resolve_parameters=True)
report.save('latent-class-example-report.html', overwrite=True)
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Linear Regression
Step2: Ridge Regression (L2 penalty)
Step3: Lasso (L1 penalty)
Step4: Linear models for classification
Step5: Multi-Class linear classification
Step6: Exercises
|
<ASSISTANT_TASK:>
Python Code:
from sklearn.datasets import make_regression
from sklearn.cross_validation import train_test_split
X, y, true_coefficient = make_regression(n_samples=80, n_features=30, n_informative=10, noise=100, coef=True, random_state=5)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=5)
print(X_train.shape)
print(y_train.shape)
true_coefficient
X.shape, y.shape
plt.plot(X[:,1], y, 'bo', markersize=4);
from sklearn.linear_model import LinearRegression
linear_regression = LinearRegression().fit(X_train, y_train)
print("R^2 on training set: %f" % linear_regression.score(X_train, y_train))
print("R^2 on test set: %f" % linear_regression.score(X_test, y_test))
from sklearn.metrics import r2_score
print(r2_score(np.dot(X, true_coefficient), y))
plt.figure(figsize=(10, 5))
coefficient_sorting = np.argsort(true_coefficient)[::-1]
plt.plot(true_coefficient[coefficient_sorting], "o", label="true")
plt.plot(linear_regression.coef_[coefficient_sorting], "o", label="linear regression")
plt.legend()
from sklearn.linear_model import Ridge
ridge_models = {}
training_scores = []
test_scores = []
for alpha in [100, 10, 1, .01]:
ridge = Ridge(alpha=alpha).fit(X_train, y_train)
training_scores.append(ridge.score(X_train, y_train))
test_scores.append(ridge.score(X_test, y_test))
ridge_models[alpha] = ridge
plt.plot(training_scores, label="training scores")
plt.plot(test_scores, label="test scores")
plt.xticks(range(4), [100, 10, 1, .01])
plt.legend(loc="best")
plt.figure(figsize=(10, 5))
plt.plot(true_coefficient[coefficient_sorting], "o", label="true", c='b')
for i, alpha in enumerate([100, 10, 1, .01]):
plt.plot(ridge_models[alpha].coef_[coefficient_sorting], "o", label="alpha = %.2f" % alpha, c=plt.cm.summer(i / 3.))
plt.legend(loc="best")
from sklearn.linear_model import Lasso
lasso_models = {}
training_scores = []
test_scores = []
for alpha in [30, 10, 1, .01]:
lasso = Lasso(alpha=alpha).fit(X_train, y_train)
training_scores.append(lasso.score(X_train, y_train))
test_scores.append(lasso.score(X_test, y_test))
lasso_models[alpha] = lasso
plt.plot(training_scores, label="training scores")
plt.plot(test_scores, label="test scores")
plt.xticks(range(4), [30, 10, 1, .01])
plt.legend(loc="best")
plt.figure(figsize=(10, 5))
plt.plot(true_coefficient[coefficient_sorting], "o", label="true", c='b')
for i, alpha in enumerate([30, 10, 1, .01]):
plt.plot(lasso_models[alpha].coef_[coefficient_sorting], "o", label="alpha = %.2f" % alpha, c=plt.cm.summer(i / 3.))
plt.legend(loc="best")
from plots import plot_linear_svc_regularization
plot_linear_svc_regularization()
from sklearn.datasets import make_blobs
X, y = make_blobs(random_state=42)
plt.scatter(X[:, 0], X[:, 1], c=y)
from sklearn.svm import LinearSVC
linear_svm = LinearSVC().fit(X, y)
print(linear_svm.coef_.shape)
print(linear_svm.intercept_.shape)
plt.scatter(X[:, 0], X[:, 1], c=y)
line = np.linspace(-15, 15)
for coef, intercept in zip(linear_svm.coef_, linear_svm.intercept_):
plt.plot(line, -(line * coef[0] + intercept) / coef[1])
plt.ylim(-10, 15)
plt.xlim(-10, 8)
# %load solutions/linear_models.py
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load config from default location
Step2: Create Deployment object
Step3: Fill required Deployment fields (apiVersion, kind, and metadata)
Step4: A Deployment also needs a .spec section
Step5: Add Pod template in .spec.template section
Step6: Pod template container description
Step7: Create Deployment
Step8: Update container image
Step9: Apply update (RollingUpdate)
Step10: Delete Deployment
|
<ASSISTANT_TASK:>
Python Code:
from kubernetes import client, config
config.load_kube_config()
apps_api = client.AppsV1Api()
deployment = client.V1Deployment()
deployment.api_version = "apps/v1"
deployment.kind = "Deployment"
deployment.metadata = client.V1ObjectMeta(name="nginx-deployment")
spec = client.V1DeploymentSpec()
spec.replicas = 3
# apps/v1 Deployments require a selector matching the Pod template labels.
spec.selector = client.V1LabelSelector(match_labels={"app": "nginx"})
spec.template = client.V1PodTemplateSpec()
spec.template.metadata = client.V1ObjectMeta(labels={"app": "nginx"})
spec.template.spec = client.V1PodSpec()
container = client.V1Container()
container.name="nginx"
container.image="nginx:1.7.9"
container.ports = [client.V1ContainerPort(container_port=80)]
spec.template.spec.containers = [container]
deployment.spec = spec
apps_api.create_namespaced_deployment(namespace="default", body=deployment)
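# Hedged aside (not in the original notebook): you can read the Deployment
# back and poll its status to watch the rollout progress.
dep = apps_api.read_namespaced_deployment(name="nginx-deployment", namespace="default")
print(dep.status.ready_replicas, "/", dep.spec.replicas, "replicas ready")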
deployment.spec.template.spec.containers[0].image = "nginx:1.9.1"
apps_api.replace_namespaced_deployment(name="nginx-deployment", namespace="default", body=deployment)
apps_api.delete_namespaced_deployment(name="nginx-deployment", namespace="default", body=client.V1DeleteOptions(propagation_policy="Foreground", grace_period_seconds=5))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The k-means procedure
Step2: 11.1.1 The k-means++ method
Step3: 11.1.2 Hard versus soft clustering
Step4: 11.1.4 Quantifying the quality of clustering via silhouette plots
Step5: 11.2 Organizing clusters as a hierarchical tree
Step6: 11.2.1 Performing hierarchical clustering on a distance matrix
Step7: 11.2.2 Attaching dendrograms to a heat map
Step8: 11.3 Locating regions of high density via DBSCAN
<ASSISTANT_TASK:>
Python Code:
# Generate a simple two-dimensional dataset
from sklearn.datasets import make_blobs
X, y = make_blobs(n_samples=150, # total number of sample points
n_features=2, # number of features
centers=3, # number of clusters
cluster_std=0.5, # standard deviation within each cluster
shuffle=True, # shuffle the samples
random_state=0) # state of the random number generator
import matplotlib.pyplot as plt
plt.scatter(X[:, 0], X[:, 1], c='white', marker='o', s=50)
plt.grid()
plt.show()
from sklearn.cluster import KMeans
km = KMeans(n_clusters=3, # number of clusters
init='random', # choose the initial centroids randomly
n_init=10, # number of runs with different centroid seeds
max_iter=300, # maximum number of iterations
tol=1e-04, # relative tolerance for declaring convergence
random_state=0) # random state used to initialize the centroids
y_km = km.fit_predict(X) # compute the cluster centers and predict the cluster index of each sample
plt.scatter(X[y_km == 0, 0], # x values of the plot
X[y_km == 0, 1], # y values of the plot
s=50, # marker size
c='lightgreen', # marker color
marker='s', # marker shape
label='cluster 1') # label name
plt.scatter(X[y_km == 1, 0],
X[y_km == 1, 1],
s=50,
c='orange',
marker='o',
label='cluster 2')
plt.scatter(X[y_km == 2, 0],
X[y_km == 2, 1],
s=50,
c='lightblue',
marker='v',
label='cluster 3')
plt.scatter(km.cluster_centers_[:, 0],
km.cluster_centers_[:, 1],
s=250,
marker='*',
c='red',
label='centroids')
plt.legend()
plt.grid()
plt.show()
# Obtain the distortion (SSE) value
print('Distortion: {:.2f}'.format(km.inertia_))
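# Hedged aside (not in the original text): km.inertia_ is the within-cluster
# SSE, so it can be recomputed by hand from the fitted centroids.
import numpy as np
manual_sse = sum(np.sum((X[y_km == i] - km.cluster_centers_[i]) ** 2) for i in range(3))
print('Recomputed SSE: {:.2f}'.format(manual_sse))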
distortions = []
for i in range(1, 11):
km = KMeans(n_clusters=i, # number of clusters
init='k-means++', # choose the cluster centers via the k-means++ method
n_init=10, # number of runs with different centroid seeds
max_iter=300, # maximum number of iterations
random_state=0)
km.fit(X)
distortions.append(km.inertia_)
plt.plot(range(1, 11), distortions, marker='o')
plt.xlabel('Number of clusters')
plt.ylabel('Distortion')
plt.show()
km = KMeans(n_clusters=3,
init='k-means++',
n_init=10,
max_iter=300,
tol=1e-04,
random_state=0)
y_km = km.fit_predict(X)
import numpy as np
from matplotlib import cm
from sklearn.metrics import silhouette_samples
cluster_labels = np.unique(y_km) # get the unique cluster labels in y_km
n_clusters = cluster_labels.shape[0]
# Compute the silhouette coefficients
silhouette_vals = silhouette_samples(X, y_km, metric='euclidean')
y_ax_lower, y_ax_upper = 0, 0
yticks = []
for i, c in enumerate(cluster_labels):
c_silhouette_vals = silhouette_vals[y_km == c]
c_silhouette_vals.sort()
y_ax_upper += len(c_silhouette_vals)
color = cm.jet(i / n_clusters) # set the color value
plt.barh(range(y_ax_lower, y_ax_upper), # draw horizontal bars
c_silhouette_vals, # bar widths
height=1.0, # bar height
edgecolor='none', # bar edge color
color=color) # bar color
yticks.append((y_ax_lower + y_ax_upper) / 2) # record the position of the cluster label
y_ax_lower += len(c_silhouette_vals)
silhouette_avg = np.mean(silhouette_vals) # mean silhouette coefficient
plt.axvline(silhouette_avg, color='red', linestyle='--') # draw a line at the mean coefficient
plt.yticks(yticks, cluster_labels + 1) # display the cluster labels
plt.ylabel('Cluster')
plt.xlabel('Silhouette coefficient')
plt.show()
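# Hedged aside (not in the original text): sklearn's silhouette_score is simply
# the mean of the per-sample coefficients computed above, so it should match
# the red dashed line (silhouette_avg).
from sklearn.metrics import silhouette_score
print(silhouette_score(X, y_km, metric='euclidean'), silhouette_avg)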
km = KMeans(n_clusters=2,
init='k-means++',
n_init=10,
max_iter=300,
tol=1e-04,
random_state=0)
y_km = km.fit_predict(X)
plt.scatter(X[y_km == 0, 0],
X[y_km == 0, 1],
s=50,
c='lightgreen',
marker='s',
label='cluster 1')
plt.scatter(X[y_km == 1, 0],
X[y_km == 1, 1],
s=50,
c='orange',
marker='o',
label='cluster 2')
plt.scatter(km.cluster_centers_[:, 0],
km.cluster_centers_[:, 1],
s=250,
marker='*',
c='red',
label='centroids')
plt.legend()
plt.grid()
plt.show()
import numpy as np
from matplotlib import cm
from sklearn.metrics import silhouette_samples
cluster_labels = np.unique(y_km)  # unique cluster labels in y_km
n_clusters = cluster_labels.shape[0]
# Compute the silhouette coefficient for every sample
silhouette_vals = silhouette_samples(X, y_km, metric='euclidean')
y_ax_lower, y_ax_upper = 0, 0
yticks = []
for i, c in enumerate(cluster_labels):
c_silhouette_vals = silhouette_vals[y_km == c]
c_silhouette_vals.sort()
y_ax_upper += len(c_silhouette_vals)
    color = cm.jet(i / n_clusters)                # set the color for this cluster
    plt.barh(range(y_ax_lower, y_ax_upper),       # draw horizontal bars
             c_silhouette_vals,                   # bar length = silhouette value
             height=1.0,                          # bar height
             edgecolor='none',                    # bar edge color
             color=color)                         # bar color
    yticks.append((y_ax_lower + y_ax_upper) / 2)  # y position for the cluster label
    y_ax_lower += len(c_silhouette_vals)
silhouette_avg = np.mean(silhouette_vals)  # mean silhouette coefficient
plt.axvline(silhouette_avg, color='red', linestyle='--')  # mark the mean with a dashed line
plt.yticks(yticks, cluster_labels + 1)  # show cluster labels
plt.ylabel('Cluster')
plt.xlabel('Silhouette coefficient')
plt.show()
import pandas as pd
import numpy as np
np.random.seed(123)
variables = ['X', 'Y', 'Z']
size = 5  # number of observations
labels = ['ID_{}'.format(i) for i in range(size)]
X = np.random.random_sample([size, 3]) * 10  # generate a 5-row, 3-column matrix of sample data
df = pd.DataFrame(X, columns=variables, index=labels)
df
from scipy.spatial.distance import pdist, squareform
# Compute pairwise distances with pdist and build a symmetric matrix with squareform
row_dist = pd.DataFrame(squareform(pdist(df, metric='euclidean')),
                        columns=labels, index=labels)
row_dist
from scipy.cluster.hierarchy import linkage
row_clusters = linkage(df.values, method='complete', metric='euclidean')
# Columns 1 and 2 are the members merged at each step (the most dissimilar members)
# 'distance' is the distance between those members
# The 4th column is the number of members in each cluster
pd.DataFrame(row_clusters,
columns=['row label 1', 'row label 2', 'distance', 'no. of items in clus.'],
index=['cluster {}'.format(i + 1) for i in range(row_clusters.shape[0])])
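# A hedged aside (not in the book excerpt): the linkage matrix can also be cut
# into a flat clustering with scipy's fcluster, e.g. into two clusters:
from scipy.cluster.hierarchy import fcluster
print(fcluster(row_clusters, t=2, criterion='maxclust'))  # one cluster id per observation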
# Display the dendrogram
from scipy.cluster.hierarchy import dendrogram
import matplotlib.pyplot as plt
row_dendr = dendrogram(row_clusters, labels=labels)
plt.ylabel('Euclidean distance')
plt.show()
# plot row dendrogram
fig = plt.figure(figsize=(8, 8), facecolor='white')
axd = fig.add_axes([0.09, 0.1, 0.2, 0.6])
# note: for matplotlib < v1.5.1, please use orientation='right'
row_dendr = dendrogram(row_clusters, orientation='left')
# reorder data with respect to clustering
df_rowclust = df.iloc[row_dendr['leaves'][::-1]]  # .ix was removed from pandas; .iloc does the positional lookup
axd.set_xticks([])
axd.set_yticks([])
# remove axes spines from dendrogram
for i in axd.spines.values():
i.set_visible(False)
# plot heatmap
axm = fig.add_axes([0.23, 0.1, 0.6, 0.6]) # x-pos, y-pos, width, height
cax = axm.matshow(df_rowclust, interpolation='nearest', cmap='hot_r')
fig.colorbar(cax)
axm.set_xticklabels([''] + list(df_rowclust.columns))
axm.set_yticklabels([''] + list(df_rowclust.index))
# plt.savefig('./figures/heatmap.png', dpi=300)
plt.show()
from sklearn.datasets import make_moons
X, y = make_moons(n_samples=200, noise=0.05, random_state=0)
plt.scatter(X[:, 0], X[:, 1])
plt.show()
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
# Cluster with k-means and plot
km = KMeans(n_clusters=2, random_state=0)
y_km = km.fit_predict(X)
ax1.scatter(X[y_km == 0, 0], X[y_km == 0, 1],
c='lightblue', marker='o', s=40, label='cluster 1')
ax1.scatter(X[y_km == 1, 0], X[y_km == 1, 1],
c='red', marker='s', s=40, label='cluster 2')
ax1.set_title('K-means clustering')
# Cluster with complete-linkage agglomerative clustering and plot
from sklearn.cluster import AgglomerativeClustering
ac = AgglomerativeClustering(n_clusters=2,
affinity='euclidean',
linkage='complete')
y_ac = ac.fit_predict(X)
ax2.scatter(X[y_ac == 0, 0], X[y_ac == 0, 1], c='lightblue',
marker='o', s=40, label='cluster 1')
ax2.scatter(X[y_ac == 1, 0], X[y_ac == 1, 1], c='red',
marker='s', s=40, label='cluster 2')
ax2.set_title('Agglomerative clustering')
plt.legend()
plt.show()
from sklearn.cluster import DBSCAN
db = DBSCAN(eps=0.2, min_samples=5, metric='euclidean')
y_db = db.fit_predict(X)
plt.scatter(X[y_db == 0, 0], X[y_db == 0, 1],
c='lightblue', marker='o', s=40,
label='cluster 1')
plt.scatter(X[y_db == 1, 0], X[y_db == 1, 1],
c='red', marker='s', s=40,
label='cluster 2')
plt.legend()
plt.show()
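# A hedged extension (not in the book excerpt): DBSCAN labels outliers as -1,
# so the number of points left unclustered is easy to check:
import numpy as np
print('Noise points: {}'.format(np.sum(y_db == -1)))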
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Tests overlaying inferred states on top of rule 18 spacetime diagram
|
<ASSISTANT_TASK:>
Python Code:
overlay_test(rule_18.get_spacetime(),rule_18.get_spacetime(),t_max=20, x_max=20, text_color='red')
overlay_test(rule_18.get_spacetime(),rule_18.get_spacetime(),t_max=20, x_max=20, colors=plt.cm.Set2, text_color='black')
overlay_test(rule_18.get_spacetime(),rule_18.get_spacetime(),t_max=20, x_max=20, colorbar=True)
rule_18 = ECA(18, domain_18(500))
rule_18.evolve(500)
states_18 = epsilon_field(rule_18.get_spacetime())
states_18.estimate_states(3,3,1,alpha=0)
states_18.filter_data()
overlay_test(rule_18.get_spacetime(), states_18.get_causal_field(), t_min=200, t_max=240, x_min=200, x_max=240)
overlay_test(rule_18.get_spacetime(), states_18.get_causal_field(), t_min=200, t_max=220, x_min=200, x_max=220)
print(states_18.get_causal_field()[200:220, 200:220])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Extract features
Step2: Defining a custom feature
Step3: There's easier ways that we can do this though. revscoring.Feature overloads simple mathematical operators to allow you to do math with features and get a feature returned. revscoring.features.modifiers contains a set of basic functions that do the same. This code roughly corresponds to what's going on above.
Step4: While the implicit pattern is quicker and easier than the explicit pattern, it's name can not be customized.
Step5: Extracting datasources
Step6: OK. Let's define a new feature for counting the number of templates added. I'll make use of mwparserfromhell to do this. See the docs.
Step7: Debugging
Step8: In the tree structure above, you can see how our new feature depends on wikitext.revision.diff.segments_added which depends on wikitext.revision.diff.operations which depends (as you might imagine) on the current and parent revision. Some features can get quite complicated.
Step9: The dependency injection system will only solve a unique dependency once for a given tree. So, even though <revision.parent.text> appears twice above, it will only be extracted once and then cached. This allows for multiple features to share large sections of their dependency trees -- and therefor minimize resource usage.
|
<ASSISTANT_TASK:>
Python Code:
from revscoring.extractors import api
import mwapi
extractor = api.Extractor(mwapi.Session("https://en.wikipedia.org",
user_agent="Revscoring feature demo ahalfaker@wikimedia.org"))
from revscoring.features import wikitext
list(extractor.extract(123456789, [wikitext.revision.chars,
wikitext.revision.diff.chars_added]))
from revscoring import Feature
chars_added_ratio_explicit = Feature(
"chars_added_ratio_explicit",
lambda a,c: a/max(c, 1), # Prevents divide by zero
depends_on=[wikitext.revision.diff.chars_added,
wikitext.revision.chars],
returns=float)
list(extractor.extract(123456789, [chars_added_ratio_explicit]))
from revscoring.features import modifiers
chars_added_ratio_implicit = (wikitext.revision.diff.chars_added /
modifiers.max(wikitext.revision.chars, 1))
list(extractor.extract(123456789, [chars_added_ratio_implicit]))
chars_added_ratio_explicit, chars_added_ratio_implicit
list(extractor.extract(662953550, [wikitext.revision.diff.datasources.segments_added,
wikitext.revision.diff.datasources.segments_removed]))
import mwparserfromhell as mwp
templates_added = Feature("templates_added",
lambda add_segments: sum(len(mwp.parse(s).filter_templates()) > 0 for s in add_segments),
depends_on=[wikitext.revision.diff.datasources.segments_added],
returns=int)
list(extractor.extract(662953550, [templates_added]))
from revscoring.dependencies import draw
print(draw(templates_added))
print(draw(wikitext.revision.diff.number_prop_delta_sum))
try:
list(extractor.extract(2, [wikitext.revision.diff.words_added]))
except Exception as e:
print(e)
try:
list(extractor.extract(262721924, [wikitext.revision.diff.words_added]))
except Exception as e:
print(e)
from revscoring.features import revision_oriented
try:
list(extractor.extract(172665816, [revision_oriented.revision.comment_matches("foo")]))
except Exception as e:
print(e)
from revscoring.features import temporal
try:
list(extractor.extract(591839757, [revision_oriented.revision.user.text_matches("foo")]))
except Exception as e:
print(e)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Task 1 (a). Maximum irradiance
|
<ASSISTANT_TASK:>
Python Code:
from IPython.core.display import Image
Image("http://upload.wikimedia.org/wikipedia/commons/thumb/2/28/IEC60825_MPE_W_s.png/640px-IEC60825_MPE_W_s.png")
####
# Parameters to modify. START
####
filename = "http://www.semrock.com/_ProductData/Spectra/NF01-229_244_DesignSpectrum.txt"
# Parameters to modify. END
####
%pylab inline
data = genfromtxt(filename, dtype=float, skip_header=4)  # Load the data
longitud_de_onda = data[:, 0]; transmitancia = data[:, 1]
print("Data loaded OK")
import plotly
py = plotly.plotly('ofii','i6jc6xsecb')
data = [{'x': longitud_de_onda, 'y':transmitancia}]
layout = {'title': 'Transmittance of the Chosen Filter', 'yaxis': {'type': 'log'}}
py.iplot(data,layout=layout,fileopt='overwrite')
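# A hedged alternative (not in the original notebook): the legacy plotly 1.x
# API above requires an account, so the same log-scale plot in matplotlib
# (names are available via %pylab inline) would be:
figure()
semilogy(longitud_de_onda, transmitancia)
xlabel('Wavelength'); ylabel('Transmittance')
title('Transmittance of the chosen filter')
show()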
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This is the End!
Step3: For many functions it’s easy to exactly calculate derivatives.
Step4: When f is a function of many variables, it has multiple partial derivatives.
Step6: Using the Gradient
Step7: Choosing the Right Step Size
Step9: It is possible that certain step sizes will result in invalid inputs for our function.
Step11: Putting It All Together
Step14: Sometimes we’ll instead want to maximize a function, which we can do by minimizing its negative
Step16: Using the batch approach, each gradient step requires us to make a prediction and compute the gradient for the whole data set, which makes each step take a long time.
Step17: This approach avoids circling around near a minimum forever
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
# Size of the points dataset.
m = 20
# Points x-coordinate and dummy value (x0, x1).
X0 = np.ones((m, 1))
X1 = np.arange(1, m+1).reshape(m, 1)
X = np.hstack((X0, X1))
# Points y-coordinate
y = np.array([3, 4, 5, 5, 2, 4, 7, 8, 11, 8, 12,
11, 13, 13, 16, 17, 18, 17, 19, 21]).reshape(m, 1)
# The Learning Rate alpha.
alpha = 0.01
def error_function(theta, X, y):
'''Error function J definition.'''
diff = np.dot(X, theta) - y
    return (1. / (2 * m)) * np.dot(np.transpose(diff), diff)  # note the (2*m): 1./2*m would equal m/2
def gradient_function(theta, X, y):
'''Gradient of the function J definition.'''
diff = np.dot(X, theta) - y
return (1./m) * np.dot(np.transpose(X), diff)
def gradient_descent(X, y, alpha):
'''Perform gradient descent.'''
theta = np.array([1, 1]).reshape(2, 1)
gradient = gradient_function(theta, X, y)
while not np.all(np.absolute(gradient) <= 1e-5):
theta = theta - alpha * gradient
gradient = gradient_function(theta, X, y)
return theta
# source:https://www.jianshu.com/p/c7e642877b0e
optimal = gradient_descent(X, y, alpha)
print('Optimal parameters Theta:', optimal[0][0], optimal[1][0])
print('Error function:', error_function(optimal, X, y)[0,0])
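# A hedged sanity check (not in the original source): for linear least squares,
# the normal equation gives the exact optimum and should agree with the
# gradient-descent result above.
theta_exact = np.linalg.solve(X.T.dot(X), X.T.dot(y))
print('Normal-equation Theta:', theta_exact[0][0], theta_exact[1][0])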
def difference_quotient(f, x, h):
return (f(x + h) - f(x)) / h
def square(x):
return x * x
def derivative(x):
return 2 * x
derivative_estimate = lambda x: difference_quotient(square, x, h=0.00001)
def sum_of_squares(v):
    """computes the sum of squared elements in v"""
return sum(v_i ** 2 for v_i in v)
# plot to show they're basically the same
import matplotlib.pyplot as plt
x = range(-10,10)
plt.plot(x, list(map(derivative, x)), 'rx') # red x
plt.plot(x, list(map(derivative_estimate, x)), 'b+') # blue +
plt.show()
def partial_difference_quotient(f, v, i, h):
# add h to just the i-th element of v
w = [v_j + (h if j == i else 0)
for j, v_j in enumerate(v)]
return (f(w) - f(v)) / h
def estimate_gradient(f, v, h=0.00001):
return [partial_difference_quotient(f, v, i, h)
for i, _ in enumerate(v)]
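# A quick hedged check (not in the book text): the numerical gradient should be
# close to the analytic one; for v = [3, 4] both are roughly 2*v = [6, 8]:
print(estimate_gradient(sum_of_squares, [3, 4]))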
def step(v, direction, step_size):
    """move step_size in the direction from v"""
return [v_i + step_size * direction_i
for v_i, direction_i in zip(v, direction)]
def sum_of_squares_gradient(v):
return [2 * v_i for v_i in v]
from collections import Counter
from linear_algebra import distance, vector_subtract, scalar_multiply
from functools import reduce
import math, random
print("using the gradient")
# generate 3 numbers
v = [random.randint(-10,10) for i in range(3)]
print(v)
tolerance = 0.0000001
n = 0
while True:
gradient = sum_of_squares_gradient(v) # compute the gradient at v
if n%50 ==0:
print(v, sum_of_squares(v))
next_v = step(v, gradient, -0.01) # take a negative gradient step
if distance(next_v, v) < tolerance: # stop if we're converging
break
v = next_v # continue if we're not
n += 1
print("minimum v", v)
print("minimum value", sum_of_squares(v))
step_sizes = [100, 10, 1, 0.1, 0.01, 0.001, 0.0001, 0.00001]
def safe(f):
    """define a new function that wraps f and return it"""
def safe_f(*args, **kwargs):
try:
return f(*args, **kwargs)
except:
return float('inf') # this means "infinity" in Python
return safe_f
def minimize_batch(target_fn, gradient_fn, theta_0, tolerance=0.000001):
    """use gradient descent to find theta that minimizes target function"""
step_sizes = [100, 10, 1, 0.1, 0.01, 0.001, 0.0001, 0.00001]
theta = theta_0 # set theta to initial value
target_fn = safe(target_fn) # safe version of target_fn
value = target_fn(theta) # value we're minimizing
while True:
gradient = gradient_fn(theta)
next_thetas = [step(theta, gradient, -step_size)
for step_size in step_sizes]
# choose the one that minimizes the error function
next_theta = min(next_thetas, key=target_fn)
next_value = target_fn(next_theta)
# stop if we're "converging"
if abs(value - next_value) < tolerance:
return theta
else:
theta, value = next_theta, next_value
# minimize_batch"
v = [random.randint(-10,10) for i in range(3)]
v = minimize_batch(sum_of_squares, sum_of_squares_gradient, v)
print("minimum v", v)
print("minimum value", sum_of_squares(v))
def negate(f):
    """return a function that for any input x returns -f(x)"""
return lambda *args, **kwargs: -f(*args, **kwargs)
def negate_all(f):
    """the same when f returns a list of numbers"""
return lambda *args, **kwargs: [-y for y in f(*args, **kwargs)]
def maximize_batch(target_fn, gradient_fn, theta_0, tolerance=0.000001):
return minimize_batch(negate(target_fn),
negate_all(gradient_fn),
theta_0,
tolerance)
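# A hedged usage sketch (not in the book text): to maximize g(v) = -sum_of_squares(v),
# whose maximum is 0 at the origin, pass g and its gradient to maximize_batch:
v = [random.randint(-10, 10) for i in range(3)]
v = maximize_batch(negate(sum_of_squares), negate_all(sum_of_squares_gradient), v)
print("maximum at", v)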
def in_random_order(data):
    """generator that returns the elements of data in random order"""
indexes = [i for i, _ in enumerate(data)] # create a list of indexes
random.shuffle(indexes) # shuffle them
for i in indexes: # return the data in that order
yield data[i]
def minimize_stochastic(target_fn, gradient_fn, x, y, theta_0, alpha_0=0.01):
data = list(zip(x, y))
theta = theta_0 # initial guess
alpha = alpha_0 # initial step size
min_theta, min_value = None, float("inf") # the minimum so far
iterations_with_no_improvement = 0
# if we ever go 100 iterations with no improvement, stop
while iterations_with_no_improvement < 100:
value = sum( target_fn(x_i, y_i, theta) for x_i, y_i in data )
if value < min_value:
# if we've found a new minimum, remember it
# and go back to the original step size
min_theta, min_value = theta, value
iterations_with_no_improvement = 0
alpha = alpha_0
else:
# otherwise we're not improving, so try shrinking the step size
iterations_with_no_improvement += 1
alpha *= 0.9
# and take a gradient step for each of the data points
for x_i, y_i in in_random_order(data):
gradient_i = gradient_fn(x_i, y_i, theta)
theta = vector_subtract(theta, scalar_multiply(alpha, gradient_i))
return min_theta
def maximize_stochastic(target_fn, gradient_fn, x, y, theta_0, alpha_0=0.01):
return minimize_stochastic(negate(target_fn),
negate_all(gradient_fn),
x, y, theta_0, alpha_0)
print("using minimize_stochastic_batch")
x = list(range(101))
y = [3*x_i + random.randint(-10, 20) for x_i in x]
theta_0 = random.randint(-10,10)
v = minimize_stochastic(sum_of_squares, sum_of_squares_gradient, x, y, theta_0)
print("minimum v", v)
print("minimum value", sum_of_squares(v))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Generate sample data
Step2: Compute Affinity Propagation
Step3: Plot result
|
<ASSISTANT_TASK:>
Python Code:
print(__doc__)
from sklearn.cluster import AffinityPropagation
from sklearn import metrics
from sklearn.datasets import make_blobs  # the samples_generator module was removed from newer scikit-learn
centers = [[1, 1], [-1, -1], [1, -1]]
X, labels_true = make_blobs(n_samples=300, centers=centers, cluster_std=0.5,
random_state=0)
print(X.shape)
print(X[1:10,:])
af = AffinityPropagation(preference=-50).fit(X)
cluster_centers_indices = af.cluster_centers_indices_
print(cluster_centers_indices)
labels = af.labels_
n_clusters_ = len(cluster_centers_indices)
print('Estimated number of clusters: %d' % n_clusters_)
print("Homogeneity: %0.3f" % metrics.homogeneity_score(labels_true, labels))
print("Completeness: %0.3f" % metrics.completeness_score(labels_true, labels))
print("V-measure: %0.3f" % metrics.v_measure_score(labels_true, labels))
print("Adjusted Rand Index: %0.3f"
% metrics.adjusted_rand_score(labels_true, labels))
print("Adjusted Mutual Information: %0.3f"
% metrics.adjusted_mutual_info_score(labels_true, labels))
print("Silhouette Coefficient: %0.3f"
% metrics.silhouette_score(X, labels, metric='sqeuclidean'))
import matplotlib.pyplot as plt
from itertools import cycle
plt.close('all')
plt.figure(1)
plt.clf()
colors = cycle('bgrcmykbgrcmykbgrcmykbgrcmyk')
for k, col in zip(range(n_clusters_), colors):
class_members = labels == k
cluster_center = X[cluster_centers_indices[k]]
plt.plot(X[class_members, 0], X[class_members, 1], col + '.')
plt.plot(cluster_center[0], cluster_center[1], 'o', markerfacecolor=col,
markeredgecolor='k', markersize=14)
for x in X[class_members]:
plt.plot([cluster_center[0], x[0]], [cluster_center[1], x[1]], col)
plt.title('Estimated number of clusters: %d' % n_clusters_)
plt.show()
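# A hedged extension (not in the original example): the fitted model can also
# assign new points to the nearest exemplar with predict():
import numpy as np
print(af.predict(np.array([[0.9, 1.1], [-1.2, -0.8]])))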
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Download the IMDB dataset
Step2: Explore the data
Step3: Let's also print the first 10 labels.
Step4: Build the model
Step5: Let's now build the full model
Step6: The layers are stacked sequentially to build the classifier
Step7: Train the model
Step8: Evaluate the model
|
<ASSISTANT_TASK:>
Python Code:
!pip install tensorflow-hub
!pip install tensorflow-datasets
import os
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_datasets as tfds
print("Version: ", tf.__version__)
print("Eager mode: ", tf.executing_eagerly())
print("Hub version: ", hub.__version__)
print("GPU is", "available" if tf.config.list_physical_devices("GPU") else "NOT AVAILABLE")
# Split the training set into 60% and 40% to end up with 15,000 examples
# for training, 10,000 examples for validation and 25,000 examples for testing.
train_data, validation_data, test_data = tfds.load(
name="imdb_reviews",
split=('train[:60%]', 'train[60%:]', 'test'),
as_supervised=True)
train_examples_batch, train_labels_batch = next(iter(train_data.batch(10)))
train_examples_batch
train_labels_batch
embedding = "https://tfhub.dev/google/nnlm-en-dim50/2"
hub_layer = hub.KerasLayer(embedding, input_shape=[],
dtype=tf.string, trainable=True)
hub_layer(train_examples_batch[:3])
# TODO 1 (one possible completion): stack the hub embedding with a small classifier head
model = tf.keras.Sequential([
    hub_layer,
    tf.keras.layers.Dense(16, activation='relu'),
    tf.keras.layers.Dense(1)])
model.summary()
# TODO 2 (one possible completion): compile with a logits-based binary loss
model.compile(optimizer='adam',
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
              metrics=['accuracy'])
# TODO 3 (one possible completion): train on shuffled, batched data
history = model.fit(train_data.shuffle(10000).batch(512),
                    epochs=10,
                    validation_data=validation_data.batch(512),
                    verbose=1)
results = model.evaluate(test_data.batch(512), verbose=2)
for name, value in zip(model.metrics_names, results):
print("%s: %.3f" % (name, value))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Bag of Words (BoW) language model
Step2: Let's read in Augustine's Confessions text
Step3: There should be 13 books, which are fortunately separated by six line breaks
Step4: Let's peek at the first
Step5: Term Frequency Revisited
Step6: Now we can use Counter to get the term frequency
Step7: Challenge
Step8: Cool. So we'll create the CountVectorizer object, then transform it on our list of documents, here that would be the books in Augustine's Confessions.
Step9: What's this? A sparse matrix just means that some cells in the table don't have value. Why? Because the vocabulary base is not the same for all the books! Let's try to demonstrate this.
Step10: Welcome to the Document Term Matrix. This is a core concept in NLP and text analysis. It's not that complicated!
Step11: Looks to be about 13 counts, one for each book, let's double check!
Step12: Normalization
Step13: Cool, but we already know how to do this much faster with Counter. Let's take this a step further. In order to make apples-to-apples comparisons across Books, we can normalize our values by dividing each word count by the total number of words in its Book. To do that, we'll need to sum on axis=1, which means summing the row (number of words in that book), as opposed to summing the column.
Step14: Reading the matrix above, we see that the word "abandoned" accounts for .0145406% of words in Book 1, and .0277855% of words in Book 2.
Step15: For a variety of reasons we like to remove words like "the", "of", "and", etc. These are refered to as 'stopwords.' As Piper notes in footnote 24
Step16: Since we are using an older translation of Augustine, we have to remove archaic forms of these stopwords as well.
Step17: Question
Step18: Note
Step19: Vector Space Model of Language
Step20: Each vector has a number of coordinates equal to the number of unique words in the corpus. Let's just take Book 1
Step21: One way to measure the similarity of texts, which Piper uses in his article, would be to measure the Euclidean distance between their coordinates in space. According to Wikipedia
Step22: We can visualize this too
Step23: We can think of this 2 dimensional distance between 2 points as looking at 2 different texts. In this very simple 2-d model though, we only have 2 words in the entire corpus! (2,6) and (5,10) would be the absolute counts for each text. Imagine
Step24: We don't have to use our cool formula to calculate this, or to scale it up for n dimensions. That's what scipy is for
Step25: Another measure of two vectors, more common for text analysis, is called cosine similarity. According to Wikipedia
Step26: Let's go back to two dimensions for the vanilla numpy calculation
Step27: Of course, scipy has taken care of this for us too
Step28: For the 3-d model
Step29: Challenge
Step30: We'll make a matrix for the points
Step31: Now we can use sklearn's pairwise_distances method to compare each book to each book
Step32: Cool! We got what we calculated. Note
Step33: Visualizing hundreds of dimensions is difficult for us. So we can use multi-dimensional scaling (MDS) to put this into a 2-d graph for us
Step34: Homework
Step35: A standard clustering test such as k-means indicates that the two clusters consist of Books 11–12, with Book 13 being grouped with Books 1–10.) [71]
Step36: I then
Step37: Piper the introduces two new ideas.
Step38: Great! And now for his second measure
Step39: Awesome! We can also write ourselves a quick function to call the four functions we just wrote
Step40: text_measures should now return two values. The first values is the cross_half_dist and the second values is the in_half_dist. Let's test this out on Augustine's `Confessions'
Step41: Looks good! Now we can read in the corpus Piper used
Step42: We'll stick with English so we don't have to think about the possible issues of going between languages
Step43: We'll slightly change our text_measures function so that it reads the text in from a file, instead of taking the confessions string we already had
Step44: Now we can use Table's apply method to call the function text_measures_alt on all the files in the corpus
Step45: Let's add these measures to our Table
Step46: If we want to see which novels stick out, we might be interested in the z-score for a particular novel. This is how many standard devations the novel is away from the mean. Let's write a function
Step47: Now we can add these to the Table too
Step48: Scatter plot, please!
|
<ASSISTANT_TASK:>
Python Code:
!wget https://ndownloader.figshare.com/files/3686778 -P data/
%%capture
!unzip data/3686778 -d data/
%matplotlib inline
import numpy as np
from datascience import *
with open('data/Augustine-Confessions.txt') as f:
confessions = f.read()
confessions
confessions_list = confessions.split('\n'*6)
len(confessions_list)
confessions_list[0]
import spacy
nlp = spacy.load('en', parser=False)
first_book = confessions_list[0]
parsed = nlp(first_book)
first_token_list = [token.text for token in parsed]
first_token_list
from collections import Counter
word_freq = Counter(first_token_list)
word_freq.most_common(20)
from sklearn.feature_extraction.text import CountVectorizer
CountVectorizer?
cv = CountVectorizer()
dtm = cv.fit_transform(confessions_list)
dtm
# de-sparsify
desparse = dtm.toarray()
# create labels for columns
word_list = cv.get_feature_names()
# create a new Table
dtm_tb = Table(word_list).with_rows(desparse)
dtm_tb
dtm_tb['read']
len(dtm_tb['read'])
toks_tab = Table()
toks_tab.append_column(label="Word List", values=word_list)
toks_tab.append_column(label="Frequency", values=sum(desparse)) # this sum(desparse) will sum the word count column
toks_tab.show()
row_sums = np.sum(desparse, axis=1)
normed = desparse/row_sums[:,None]
dtm_tb = Table(word_list).with_rows(normed)
dtm_tb
dtm_tb['abandoned']
from sklearn.feature_extraction.stop_words import ENGLISH_STOP_WORDS
ENGLISH_STOP_WORDS
ye_olde_stop_words = ['thou','thy','thee', 'thine', 'ye', 'hath','hast', 'wilt','aught',\
'art', 'dost','doth', 'shall', 'shalt','tis','canst','thyself',\
'didst', 'yea', 'wert']
stop_words = list(ENGLISH_STOP_WORDS) + ye_olde_stop_words
# remove stopwords from column list
dtm_tb = dtm_tb.drop(stop_words)
# it is often more efficient to perform operations on arrays rather than tables
dtm_array = dtm_tb.to_array()
dtm_array
from sklearn.feature_extraction.text import TfidfTransformer
cv = CountVectorizer(stop_words = stop_words)
dtm = cv.fit_transform(confessions_list)
tt = TfidfTransformer(norm='l1',use_idf=False)
dtm_tf = tt.fit_transform(dtm)
word_list = cv.get_feature_names()
dtm_array = dtm_tf.toarray()
Table(word_list).with_rows(dtm_array)
dtm_array = dtm_tf.toarray()
dtm_array
dtm_array[0]
a = (2,6)
b = (5,10)
euc_dist = np.sqrt( (a[0]-b[0])**2 + (a[1]-b[1])**2 )
euc_dist
import matplotlib.pyplot as plt
%matplotlib inline
plt.scatter([a[0], b[0]], [a[1], b[1]])
plt.plot([a[0], b[0]], [a[1], b[1]])
plt.show()
a = (2,6,15)
b = (5,10,3)
euc_dist = np.sqrt( (a[0]-b[0])**2 + (a[1]-b[1])**2 + (a[2]-b[2])**2 )
euc_dist
import matplotlib as mpl
from mpl_toolkits.mplot3d import Axes3D
mpl.rcParams['legend.fontsize'] = 10
fig = plt.figure()
ax = fig.add_subplot(projection='3d')  # fig.gca(projection=...) was removed in newer matplotlib
ax.scatter([a[0], b[0]], [a[1], b[1]], [a[2], b[2]])
ax.plot([a[0], b[0]], [a[1], b[1]], [a[2], b[2]])
plt.show()
from scipy.spatial import distance
distance.euclidean(a,b)
mpl.rcParams['legend.fontsize'] = 10
origin = (0,0,0)
fig = plt.figure()
ax = fig.add_subplot(projection='3d')  # fig.gca(projection=...) was removed in newer matplotlib
ax.scatter([a[0], b[0], origin[0]], [a[1], b[1], origin[1]], [a[2], b[2], origin[2]])
ax.plot([origin[0], a[0]], [origin[1], a[1]], [origin[2], a[2]])
ax.plot([origin[0], b[0]], [origin[1], b[1]], [origin[2], b[2]])
plt.show()
a = (2,6)
b = (5,10)
# don't worry about the formula so much as the intuition behind it: angle between vectors
cos_dist = 1 - (a[0]*b[0] + a[1]*b[1]) / ( np.sqrt(a[0]**2 + a[1]**2 ) * np.sqrt(b[0]**2 + b[1]**2 ) )
cos_dist
distance.cosine(a,b)
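# A small hedged aside (not in the original cell): scipy returns the cosine
# *distance*; the cosine *similarity* discussed above is its complement:
print(1 - distance.cosine(a, b))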
a = (2,6,15)
b = (5,10,3)
distance.cosine(a,b)
a = (2,6)
b = (5,10)
c = (14,11)
print(distance.euclidean(a,b))
print(distance.euclidean(a,c))
print(distance.euclidean(b,c))
point_matrix = np.array([a,b,c])
point_matrix
from sklearn.metrics import pairwise
pairwise.pairwise_distances(point_matrix, metric='euclidean')
dist_matrix = pairwise.pairwise_distances(dtm_tf, metric='euclidean')
title_list = ['Book '+str(i+1) for i in range(len(confessions_list))]
Table(title_list).with_rows(dist_matrix)
from sklearn.manifold import MDS
mds = MDS(n_components = 2, dissimilarity="precomputed")
embeddings = mds.fit_transform(dist_matrix)
_, ax = plt.subplots(figsize=(10,10))
ax.scatter(embeddings[:,0], embeddings[:,1], alpha=0)
for i in range(13):
ax.annotate(i+1, ((embeddings[i,0], embeddings[i,1])))
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters=2)
kmeans.fit_predict(dist_matrix)
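# A hedged readout (not in the original cell): pair each Book with its cluster id
print(list(zip(title_list, kmeans.labels_)))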
def text_splitter(text):
n = int(len(text)/20) # get length n of each part
text_list = [text[i*n:(i+1)*n] for i in range(20)] # slice out the text
return(text_list)
def text_distances(text_list):
from sklearn.feature_extraction.stop_words import ENGLISH_STOP_WORDS
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.metrics import pairwise
ye_olde_stop_words = ['thou','thy','thee', 'thine', 'ye', 'hath','hast', 'wilt','aught',\
'art', 'dost','doth', 'shall', 'shalt','tis','canst','thyself',\
'didst', 'yea', 'wert']
stop_words = list(ENGLISH_STOP_WORDS)+ye_olde_stop_words
cv = CountVectorizer(stop_words = stop_words, min_df=0.6)
dtm = cv.fit_transform(text_list)
tt = TfidfTransformer(norm='l1',use_idf=False)
dtm_tf = tt.fit_transform(dtm)
dist_matrix = pairwise.pairwise_distances(dtm_tf, metric='euclidean')
return(dist_matrix)
def in_half_dist(matrix):
n = len(matrix) # length of work, should be 20
d1 = [] # will hold distances for first half
d2 = [] # will hold distances for second half
for i in range(int(n/2)-1): # loop through first half of work (10 in our case)
for j in range(i+1, int(n/2)): # loop through itself (first half again)
d1.append(matrix[i,j]) # append distance between one part to another (in first half)
for i in range(int(n/2), n-1):
for j in range(i+1, n):
d2.append(matrix[i,j])
return(abs(sum(d1)-sum(d2))/len(d1)) # take average of each distance array and subtract 2 from 1
def cross_half_dist(matrix):
n = len(matrix) # number of parts, here 20
d = [] # will hold distnaces
for i in range(int(n/2)): # loop through first half
for j in range(int(n/2), n): # loop through second half
d.append(matrix[i,j]) # append distance between first and second
return(sum(d)/len(d)) # take average
def text_measures(text):
text_list = text_splitter(text)
dist_matrix = text_distances(text_list)
return(cross_half_dist(dist_matrix), in_half_dist(dist_matrix))
text_measures(confessions)
metadata_tb = Table.read_table('data/2_txtlab_Novel450.csv')
metadata_tb.show(5)
metadata_tb = metadata_tb.where('language', "English")
metadata_tb.show(5)
corpus_path = 'data/2_txtalb_Novel450/'
def text_measures_alt(text_name):
with open(corpus_path+text_name, 'r') as file_in:
text = file_in.read()
text_list = text_splitter(text)
dist_matrix = text_distances(text_list)
return(cross_half_dist(dist_matrix), in_half_dist(dist_matrix))
measures = metadata_tb.apply(text_measures_alt, 'filename')
measures
metadata_tb['Cross-Half'] = measures[:,0]
metadata_tb['In-Half'] = measures[:,1]
metadata_tb.show(5)
def get_zscores(values):
import numpy as np
mn = np.mean(values)
st = np.std(values)
zs = []
for x in values:
z = (x-mn)/st
zs.append(z)
return zs
metadata_tb['Cross-Z-Score'] = get_zscores(measures[:,0])
metadata_tb['In-Z-Score'] = get_zscores(measures[:,1])
metadata_tb.show(5)
metadata_tb.scatter('In-Half', 'Cross-Half')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Family
Step7: 1.4. Basic Approximations
Step8: 2. Key Properties --> Resolution
Step9: 2.2. Canonical Horizontal Resolution
Step10: 2.3. Range Horizontal Resolution
Step11: 2.4. Number Of Vertical Levels
Step12: 2.5. High Top
Step13: 3. Key Properties --> Timestepping
Step14: 3.2. Timestep Shortwave Radiative Transfer
Step15: 3.3. Timestep Longwave Radiative Transfer
Step16: 4. Key Properties --> Orography
Step17: 4.2. Changes
Step18: 5. Grid --> Discretisation
Step19: 6. Grid --> Discretisation --> Horizontal
Step20: 6.2. Scheme Method
Step21: 6.3. Scheme Order
Step22: 6.4. Horizontal Pole
Step23: 6.5. Grid Type
Step24: 7. Grid --> Discretisation --> Vertical
Step25: 8. Dynamical Core
Step26: 8.2. Name
Step27: 8.3. Timestepping Type
Step28: 8.4. Prognostic Variables
Step29: 9. Dynamical Core --> Top Boundary
Step30: 9.2. Top Heat
Step31: 9.3. Top Wind
Step32: 10. Dynamical Core --> Lateral Boundary
Step33: 11. Dynamical Core --> Diffusion Horizontal
Step34: 11.2. Scheme Method
Step35: 12. Dynamical Core --> Advection Tracers
Step36: 12.2. Scheme Characteristics
Step37: 12.3. Conserved Quantities
Step38: 12.4. Conservation Method
Step39: 13. Dynamical Core --> Advection Momentum
Step40: 13.2. Scheme Characteristics
Step41: 13.3. Scheme Staggering Type
Step42: 13.4. Conserved Quantities
Step43: 13.5. Conservation Method
Step44: 14. Radiation
Step45: 15. Radiation --> Shortwave Radiation
Step46: 15.2. Name
Step47: 15.3. Spectral Integration
Step48: 15.4. Transport Calculation
Step49: 15.5. Spectral Intervals
Step50: 16. Radiation --> Shortwave GHG
Step51: 16.2. ODS
Step52: 16.3. Other Flourinated Gases
Step53: 17. Radiation --> Shortwave Cloud Ice
Step54: 17.2. Physical Representation
Step55: 17.3. Optical Methods
Step56: 18. Radiation --> Shortwave Cloud Liquid
Step57: 18.2. Physical Representation
Step58: 18.3. Optical Methods
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Step60: 20. Radiation --> Shortwave Aerosols
Step61: 20.2. Physical Representation
Step62: 20.3. Optical Methods
Step63: 21. Radiation --> Shortwave Gases
Step64: 22. Radiation --> Longwave Radiation
Step65: 22.2. Name
Step66: 22.3. Spectral Integration
Step67: 22.4. Transport Calculation
Step68: 22.5. Spectral Intervals
Step69: 23. Radiation --> Longwave GHG
Step70: 23.2. ODS
Step71: 23.3. Other Flourinated Gases
Step72: 24. Radiation --> Longwave Cloud Ice
Step73: 24.2. Physical Reprenstation
Step74: 24.3. Optical Methods
Step75: 25. Radiation --> Longwave Cloud Liquid
Step76: 25.2. Physical Representation
Step77: 25.3. Optical Methods
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Step79: 27. Radiation --> Longwave Aerosols
Step80: 27.2. Physical Representation
Step81: 27.3. Optical Methods
Step82: 28. Radiation --> Longwave Gases
Step83: 29. Turbulence Convection
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Step85: 30.2. Scheme Type
Step86: 30.3. Closure Order
Step87: 30.4. Counter Gradient
Step88: 31. Turbulence Convection --> Deep Convection
Step89: 31.2. Scheme Type
Step90: 31.3. Scheme Method
Step91: 31.4. Processes
Step92: 31.5. Microphysics
Step93: 32. Turbulence Convection --> Shallow Convection
Step94: 32.2. Scheme Type
Step95: 32.3. Scheme Method
Step96: 32.4. Processes
Step97: 32.5. Microphysics
Step98: 33. Microphysics Precipitation
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Step100: 34.2. Hydrometeors
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Step102: 35.2. Processes
Step103: 36. Cloud Scheme
Step104: 36.2. Name
Step105: 36.3. Atmos Coupling
Step106: 36.4. Uses Separate Treatment
Step107: 36.5. Processes
Step108: 36.6. Prognostic Scheme
Step109: 36.7. Diagnostic Scheme
Step110: 36.8. Prognostic Variables
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Step112: 37.2. Cloud Inhomogeneity
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Step114: 38.2. Function Name
Step115: 38.3. Function Order
Step116: 38.4. Convection Coupling
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Step118: 39.2. Function Name
Step119: 39.3. Function Order
Step120: 39.4. Convection Coupling
Step121: 40. Observation Simulation
Step122: 41. Observation Simulation --> Isscp Attributes
Step123: 41.2. Top Height Direction
Step124: 42. Observation Simulation --> Cosp Attributes
Step125: 42.2. Number Of Grid Points
Step126: 42.3. Number Of Sub Columns
Step127: 42.4. Number Of Levels
Step128: 43. Observation Simulation --> Radar Inputs
Step129: 43.2. Type
Step130: 43.3. Gas Absorption
Step131: 43.4. Effective Radius
Step132: 44. Observation Simulation --> Lidar Inputs
Step133: 44.2. Overlap
Step134: 45. Gravity Waves
Step135: 45.2. Sponge Layer
Step136: 45.3. Background
Step137: 45.4. Subgrid Scale Orography
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Step139: 46.2. Source Mechanisms
Step140: 46.3. Calculation Method
Step141: 46.4. Propagation Scheme
Step142: 46.5. Dissipation Scheme
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Step144: 47.2. Source Mechanisms
Step145: 47.3. Calculation Method
Step146: 47.4. Propagation Scheme
Step147: 47.5. Dissipation Scheme
Step148: 48. Solar
Step149: 49. Solar --> Solar Pathways
Step150: 50. Solar --> Solar Constant
Step151: 50.2. Fixed Value
Step152: 50.3. Transient Characteristics
Step153: 51. Solar --> Orbital Parameters
Step154: 51.2. Fixed Reference Date
Step155: 51.3. Transient Method
Step156: 51.4. Computation Method
Step157: 52. Solar --> Insolation Ozone
Step158: 53. Volcanos
Step159: 54. Volcanos --> Volcanoes Treatment
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mri', 'sandbox-2', 'atmos')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: To verify that the download was successful, you can run the following command.
Step2: Load the VADER model
Step3: We will use the following three sentences
Step4: The next for loop assigns a sentiment score from VADER to each sentence.
Step5: VADER returns a dictionary with four ratings (the keys 'neg', 'neu', 'pos', and 'compound') for each sentence.
Step7: The next function defines an API for processing texts (textual_unit) with different settings. It assumes spaCy has been loaded with the corresponding language model, as we just did. Take a little time to analyze the function: it uses spaCy token properties to preprocess the text in different ways before returning the VADER sentiment.
Step8: We can now use various API settings to experiment with processing text in different ways.
Step9: Perhaps you are surprised to see that there is no difference in the output! This is useful information for understanding how the system works. Perhaps there are sentences for which it does matter; feel free to experiment with other sentences.
Step10: Please note that in this case, VADER only considers day to predict the sentiment score and ignores all other words. Do you agree with the assessment that day is neutral? I hope you have a great day!
Step11: This is even more interesting. The part-of-speech label VERB is not applied to any of the tokens ('s is labeled as an auxiliary, AUX, rather than VERB), so we have provided VADER with no input at all!
|
<ASSISTANT_TASK:>
Python Code:
import nltk
# You only need to run this cell once.
# After that, you can comment it out.
nltk.download('vader_lexicon', quiet=False)
from nltk.sentiment import vader
from nltk.sentiment.vader import SentimentIntensityAnalyzer
vader_model = SentimentIntensityAnalyzer()
sentences = ["Here are my sentences.",
"It's a nice day.",
"It's a rainy day."]
for sent in sentences:
scores = vader_model.polarity_scores(sent)
print()
print('INPUT SENTENCE', sent)
print('VADER OUTPUT', scores)
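# The output dict always contains four keys: 'neg', 'neu' and 'pos'
# (proportions of the text) plus 'compound', a normalized score in
# [-1, 1]. For example, a strongly positive sentence:
print(vader_model.polarity_scores("I love this wonderful day!"))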
import spacy
nlp = spacy.load('en_core_web_sm')  # the 'en' shortcut is no longer supported in spaCy 3
def run_vader(nlp,
textual_unit,
lemmatize=False,
parts_of_speech_to_consider=set(),
verbose=0):
    """Run VADER on a textual unit parsed with spaCy.

    :param nlp: spaCy model
    :param str textual_unit: a textual unit, e.g., sentence or sentences (one string),
        processed by looping over doc.sents
    :param bool lemmatize: if True, provide lemmas to VADER instead of words
    :param set parts_of_speech_to_consider:
        - empty set -> all parts of speech are provided
        - non-empty set -> only these parts of speech are considered
    :param int verbose: if set to 1 or higher, information is printed
        about input and output
    :rtype: dict
    :return: VADER output dict
    """
doc = nlp(textual_unit)
input_to_vader = []
for sent in doc.sents:
for token in sent:
if verbose >= 2:
print(token, token.pos_)
to_add = token.text
if lemmatize:
to_add = token.lemma_
if to_add == '-PRON-':
to_add = token.text
if parts_of_speech_to_consider:
if token.pos_ in parts_of_speech_to_consider:
input_to_vader.append(to_add)
else:
input_to_vader.append(to_add)
scores = vader_model.polarity_scores(' '.join(input_to_vader))
if verbose >= 1:
print()
print('INPUT SENTENCE', sent)
print('INPUT TO VADER', input_to_vader)
print('VADER OUTPUT', scores)
return scores
sentences = ["Here are my sentences.",
"It's a nice day.",
"It's a rainy day."]
prediction = run_vader(nlp, sentences[1], lemmatize=False, verbose=1)
print(prediction)
prediction = run_vader(nlp, sentences[1], lemmatize=True, verbose=1)
print(prediction)
run_vader(nlp,
sentences[1],
lemmatize=True,
parts_of_speech_to_consider={'NOUN'},
verbose=1)
run_vader(nlp,
sentences[1],
lemmatize=True,
parts_of_speech_to_consider={'VERB'},
verbose=1)
run_vader(nlp,
sentences[1],
lemmatize=True,
parts_of_speech_to_consider={'ADJ'},
verbose=1)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Just to make sure nothing goes wrong with reads/writes (as this model takes a lot less time per epoch), get a backup of the best model so far.
Step2: We were hoping for a better score there... Let's check how many epochs each model saw
Step3: Let it run a little longer.
Step4: And the most recent now
Step5: The score it gets on the holdout set is
Step6: It got worse!
|
<ASSISTANT_TASK:>
Python Code:
cd ..
%run check_test_score.py -v run_settings/alexnet_based_norm_global.json
%run check_test_score.py -v run_settings/alexnet_learning_rate.json
%matplotlib inline
%run ~/Neuroglycerin/pylearn2/pylearn2/scripts/plot_monitor.py /disk/scratch/neuroglycerin/models/alexnet_based_norm_global.pkl
%run ~/Neuroglycerin/pylearn2/pylearn2/scripts/plot_monitor.py /disk/scratch/neuroglycerin/models/alexnet_learning_rate.pkl.backup
%run ~/Neuroglycerin/pylearn2/pylearn2/scripts/plot_monitor.py /disk/scratch/neuroglycerin/models/alexnet_learning_rate.pkl
%run ~/Neuroglycerin/pylearn2/pylearn2/scripts/plot_monitor.py /disk/scratch/neuroglycerin/models/alexnet_learning_rate_recent.pkl
%run check_test_score.py -v run_settings/alexnet_learning_rate.json
%run check_test_score.py -v run_settings/alexnet_learning_rate2.json
%run check_test_score.py -v run_settings/alexnet_learning_rate3.json
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Restart the kernel
Step2: Set up your Google Cloud project
Step3: Otherwise, set your project ID here.
Step4: Region
Step5: Timestamp
Step6: Authenticate your Google Cloud account
Step7: If you are using Colab Notebooks, set the project using gcloud config.
Step8: Create a Cloud Storage bucket
Step9: Only if your bucket doesn't already exist
Step10: Finally, validate access to your Cloud Storage bucket by examining its contents
Step11: Service Account
Step12: Set service account access for Vertex AI Pipelines
Step13: Set up variables
Step14: Initialize Vertex AI SDK for Python
Step17: Write the Apache Beam pipeline module
Step18: Write the requirements (installs) for the Apache Beam pipeline module
Step19: Copy python module and requirements file to Cloud Storage
Step20: Create and execute the pipeline job
Step21: Delete a pipeline job
Step25: Write the Apache Beam pipeline module
Step26: Write the requirements (installs) for the Apache Beam pipeline module
Step27: Write the setup.py (installs) for the Dataflow workers
Step28: Copy python module and requirements file to Cloud Storage
Step29: Location of BigQuery training data.
Step30: Create and execute the pipeline job
Step31: Delete a pipeline job
Step32: Cleaning up
|
<ASSISTANT_TASK:>
Python Code:
import os
# The Vertex AI Workbench Notebook product has specific requirements
IS_WORKBENCH_NOTEBOOK = os.getenv("DL_ANACONDA_HOME")
IS_USER_MANAGED_WORKBENCH_NOTEBOOK = os.path.exists(
"/opt/deeplearning/metadata/env_version"
)
# Vertex AI Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_WORKBENCH_NOTEBOOK:
USER_FLAG = "--user"
! pip3 install -U tensorflow $USER_FLAG -q
! pip3 install -U tensorflow-data-validation $USER_FLAG -q
! pip3 install -U tensorflow-transform $USER_FLAG -q
! pip3 install -U tensorflow-io $USER_FLAG -q
! pip3 install --upgrade google-cloud-aiplatform[tensorboard] $USER_FLAG -q
! pip3 install --upgrade google-cloud-pipeline-components $USER_FLAG -q
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
import os
PROJECT_ID = ""
if not os.getenv("IS_TESTING"):
# Get your Google Cloud project ID from gcloud
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID: ", PROJECT_ID)
if PROJECT_ID == "" or PROJECT_ID is None:
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
REGION = "[your-region]" # @param {type: "string"}
if REGION == "[your-region]":
REGION = "us-central1"
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Vertex AI Workbench, then don't execute this code
IS_COLAB = False
if not os.path.exists("/opt/deeplearning/metadata/env_version") and not os.getenv(
"DL_ANACONDA_HOME"
):
if "google.colab" in sys.modules:
IS_COLAB = True
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
! gcloud config set project $PROJECT_ID
BUCKET_URI = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_URI == "" or BUCKET_URI is None or BUCKET_URI == "gs://[your-bucket-name]":
BUCKET_URI = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
! gsutil mb -l $REGION $BUCKET_URI
! gsutil ls -al $BUCKET_URI
SERVICE_ACCOUNT = "[your-service-account]" # @param {type:"string"}
if (
SERVICE_ACCOUNT == ""
or SERVICE_ACCOUNT is None
or SERVICE_ACCOUNT == "[your-service-account]"
):
# Get your service account from gcloud
if not IS_COLAB:
shell_output = !gcloud auth list 2>/dev/null
SERVICE_ACCOUNT = shell_output[2].replace("*", "").strip()
if IS_COLAB:
shell_output = ! gcloud projects describe $PROJECT_ID
project_number = shell_output[-1].split(":")[1].strip().replace("'", "")
SERVICE_ACCOUNT = f"{project_number}-compute@developer.gserviceaccount.com"
print("Service Account:", SERVICE_ACCOUNT)
! gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectCreator $BUCKET_URI
! gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectViewer $BUCKET_URI
import google.cloud.aiplatform as aip
from google_cloud_pipeline_components.v1.dataflow import DataflowPythonJobOp
from google_cloud_pipeline_components.v1.wait_gcp_resources import \
WaitGcpResourcesOp
from kfp import dsl
from kfp.v2 import compiler
aip.init(project=PROJECT_ID, staging_bucket=BUCKET_URI)
%%writefile wc.py
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""A minimalist word-counting workflow that counts words in Shakespeare.

This is the first in a series of successively more detailed 'word count'
examples.

Next, see the wordcount pipeline, then the wordcount_debugging pipeline, for
more detailed examples that introduce additional concepts.

Concepts:

1. Reading data from text files
2. Specifying 'inline' transforms
3. Counting a PCollection
4. Writing data to Cloud Storage as text files

To execute this pipeline locally, first edit the code to specify the output
location. Output location could be a local file path or an output prefix
on GCS. (Only update the output location marked with the first CHANGE comment.)

To execute this pipeline remotely, first edit the code to set your project ID,
runner type, the staging location, the temp location, and the output location.
The specified GCS bucket(s) must already exist. (Update all the places marked
with a CHANGE comment.)

Then, run the pipeline as described in the README. It will be deployed and run
using the Google Cloud Dataflow Service. No args are required to run the
pipeline. You can see the results in your output bucket in the GCS browser.
"""
from __future__ import absolute_import
import argparse
import logging
import re
from past.builtins import unicode
import apache_beam as beam
from apache_beam.io import ReadFromText
from apache_beam.io import WriteToText
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions
def run(argv=None):
    """Main entry point; defines and runs the wordcount pipeline."""
parser = argparse.ArgumentParser()
parser.add_argument('--input',
dest='input',
default='gs://dataflow-samples/shakespeare/kinglear.txt',
help='Input file to process.')
parser.add_argument('--output',
dest='output',
# CHANGE 1/5: The Google Cloud Storage path is required
# for outputting the results.
default='gs://YOUR_OUTPUT_BUCKET/AND_OUTPUT_PREFIX',
help='Output file to write results to.')
known_args, pipeline_args = parser.parse_known_args(argv)
# pipeline_args.extend([
# # CHANGE 2/5: (OPTIONAL) Change this to DataflowRunner to
# # run your pipeline on the Google Cloud Dataflow Service.
# '--runner=DirectRunner',
# # CHANGE 3/5: Your project ID is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--project=SET_YOUR_PROJECT_ID_HERE',
# # CHANGE 4/5: Your Google Cloud Storage path is required for staging local
# # files.
# '--staging_location=gs://YOUR_BUCKET_NAME/AND_STAGING_DIRECTORY',
# # CHANGE 5/5: Your Google Cloud Storage path is required for temporary
# # files.
# '--temp_location=gs://YOUR_BUCKET_NAME/AND_TEMP_DIRECTORY',
# '--job_name=your-wordcount-job',
# ])
# We use the save_main_session option because one or more DoFn's in this
# workflow rely on global context (e.g., a module imported at module level).
pipeline_options = PipelineOptions(pipeline_args)
pipeline_options.view_as(SetupOptions).save_main_session = True
with beam.Pipeline(options=pipeline_options) as p:
# Read the text file[pattern] into a PCollection.
lines = p | ReadFromText(known_args.input)
# Count the occurrences of each word.
counts = (
lines
| 'Split' >> (beam.FlatMap(lambda x: re.findall(r'[A-Za-z\']+', x))
.with_output_types(unicode))
| 'PairWithOne' >> beam.Map(lambda x: (x, 1))
| 'GroupAndSum' >> beam.CombinePerKey(sum))
# Format the counts into a PCollection of strings.
def format_result(word_count):
(word, count) = word_count
return '%s: %s' % (word, count)
output = counts | 'Format' >> beam.Map(format_result)
# Write the output using a "Write" transform that has side effects.
# pylint: disable=expression-not-assigned
output | WriteToText(known_args.output)
if __name__ == '__main__':
logging.getLogger().setLevel(logging.INFO)
run()
%%writefile requirements.txt
apache-beam
future
GCS_WC_PY = BUCKET_URI + "/wc.py"
! gsutil cp wc.py $GCS_WC_PY
GCS_REQUIREMENTS_TXT = BUCKET_URI + "/requirements.txt"
! gsutil cp requirements.txt $GCS_REQUIREMENTS_TXT
GCS_WC_OUT = BUCKET_URI + "/wc_out.txt"
PIPELINE_ROOT = "{}/pipeline_root/dataflow_wc".format(BUCKET_URI)
@dsl.pipeline(name="dataflow-wc", description="Dataflow word count component pipeline")
def pipeline(
python_file_path: str = GCS_WC_PY,
project_id: str = PROJECT_ID,
location: str = REGION,
staging_dir: str = PIPELINE_ROOT,
args: list = ["--output", GCS_WC_OUT, "--runner", "DataflowRunner"],
requirements_file_path: str = GCS_REQUIREMENTS_TXT,
):
dataflow_python_op = DataflowPythonJobOp(
project=project_id,
location=location,
python_module_path=python_file_path,
temp_location=staging_dir,
requirements_file_path=requirements_file_path,
args=args,
)
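    # WaitGcpResourcesOp polls the launched Dataflow job and only completes
    # once it reaches a terminal state, so any downstream steps would run
    # after the word-count output actually exists.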
_ = WaitGcpResourcesOp(gcp_resources=dataflow_python_op.outputs["gcp_resources"])
compiler.Compiler().compile(pipeline_func=pipeline, package_path="dataflow_wc.json")
pipeline = aip.PipelineJob(
display_name="dataflow_wc",
template_path="dataflow_wc.json",
pipeline_root=PIPELINE_ROOT,
enable_caching=False,
)
pipeline.run()
! gsutil cat {GCS_WC_OUT}* | head -n10
! rm -f dataflow_wc.json wc.py requirements.txt
pipeline.delete()
%%writefile split.py
import argparse
import logging
import tensorflow_transform.beam as tft_beam
from past.builtins import unicode
import apache_beam as beam
from apache_beam.io import ReadFromText
from apache_beam.io import WriteToText
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions
def run(argv=None):
    """Main entry point; defines and runs the train/eval split pipeline."""
parser = argparse.ArgumentParser()
parser.add_argument('--bq_table',
dest='bq_table')
parser.add_argument('--bucket',
dest='bucket')
args, pipeline_args = parser.parse_known_args(argv)
logging.info("ARGS")
logging.info(args)
logging.info("PIPELINE ARGS")
logging.info(pipeline_args)
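    # Note: the loop below assumes pipeline_args is a flat list of
    # "--flag value" pairs; flags passed as "--flag=value" would not be
    # picked up by this pairwise scan.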
for i in range(0, len(pipeline_args), 2):
if "--temp_location" == pipeline_args[i]:
temp_location = pipeline_args[i+1]
elif "--project" == pipeline_args[i]:
project = pipeline_args[i+1]
exported_train = args.bucket + '/exported_data/train'
exported_eval = args.bucket + '/exported_data/eval'
pipeline_options = PipelineOptions(pipeline_args)
pipeline_options.view_as(SetupOptions).save_main_session = True
with beam.Pipeline(options=pipeline_options) as pipeline:
with tft_beam.Context(temp_location):
raw_data_query = "SELECT {0},{1} FROM {2} LIMIT 500".format("CAST(station_number as STRING) AS station_number,year,month,day","mean_temp", args.bq_table)
def parse_bq_record(bq_record):
                """Parses a bq_record to a dictionary."""
output = {}
for key in bq_record:
output[key] = [bq_record[key]]
return output
def split_dataset(bq_row, num_partitions, ratio):
                """Returns a partition number for a given bq_row."""
import json
assert num_partitions == len(ratio)
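                # Deterministic pseudo-hash: summing the character codes of
                # the row's JSON representation assigns each record to the
                # same partition on every run, making the split reproducible.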
bucket = sum(map(ord, json.dumps(bq_row))) % sum(ratio)
total = 0
for i, part in enumerate(ratio):
total += part
if bucket < total:
return i
return len(ratio) - 1
# Read raw BigQuery data.
raw_train_data, raw_eval_data = (
pipeline
| "Read Raw Data"
>> beam.io.ReadFromBigQuery(
query=raw_data_query,
project=project,
use_standard_sql=True,
)
| "Parse Data" >> beam.Map(parse_bq_record)
| "Split" >> beam.Partition(split_dataset, 2, ratio=[8, 2])
)
# Write raw train data to GCS .
_ = raw_train_data | "Write Raw Train Data" >> beam.io.WriteToText(
file_path_prefix=exported_train, file_name_suffix=".csv"
)
# Write raw eval data to GCS .
_ = raw_eval_data | "Write Raw Eval Data" >> beam.io.WriteToText(
file_path_prefix=exported_eval, file_name_suffix=".csv"
)
if __name__ == '__main__':
logging.getLogger().setLevel(logging.INFO)
run()
%%writefile requirements.txt
apache-beam
tensorflow-transform==1.2.0
future
%%writefile setup.py
import setuptools
REQUIRED_PACKAGES = [
'tensorflow-transform==1.2.0',
'future'
]
PACKAGE_NAME = 'my_package'
PACKAGE_VERSION = '0.0.1'
setuptools.setup(
name=PACKAGE_NAME,
version=PACKAGE_VERSION,
description='Demo for split transformation',
install_requires=REQUIRED_PACKAGES,
author="cdpe@google.com",
packages=setuptools.find_packages()
)
GCS_SPLIT_PY = BUCKET_URI + "/split.py"
! gsutil cp split.py $GCS_SPLIT_PY
GCS_REQUIREMENTS_TXT = BUCKET_URI + "/requirements.txt"
! gsutil cp requirements.txt $GCS_REQUIREMENTS_TXT
GCS_SETUP_PY = BUCKET_URI + "/setup.py"
! gsutil cp setup.py $GCS_SETUP_PY
IMPORT_FILE = "bq://bigquery-public-data.samples.gsod"
BQ_TABLE = "bigquery-public-data.samples.gsod"
PIPELINE_ROOT = "{}/pipeline_root/dataflow_split".format(BUCKET_URI)
@dsl.pipeline(name="dataflow-split", description="Dataflow split dataset")
def pipeline(
python_file_path: str = GCS_SPLIT_PY,
project_id: str = PROJECT_ID,
location: str = REGION,
staging_dir: str = PIPELINE_ROOT,
args: list = [
"--bucket",
BUCKET_URI,
"--bq_table",
BQ_TABLE,
"--runner",
"DataflowRunner",
"--setup_file",
GCS_SETUP_PY,
],
requirements_file_path: str = GCS_REQUIREMENTS_TXT,
):
# DataflowPythonJobOp.component_spec.implementation.container.image = "gcr.io/ml-pipeline/google-cloud-pipeline-components:v0.2.0_dataflow_logs_fix"
dataflow_python_op = DataflowPythonJobOp(
project=project_id,
location=location,
python_module_path=python_file_path,
temp_location=staging_dir,
requirements_file_path=requirements_file_path,
args=args,
)
_ = WaitGcpResourcesOp(gcp_resources=dataflow_python_op.outputs["gcp_resources"])
compiler.Compiler().compile(pipeline_func=pipeline, package_path="dataflow_split.json")
pipeline = aip.PipelineJob(
display_name="dataflow_split",
template_path="dataflow_split.json",
pipeline_root=PIPELINE_ROOT,
enable_caching=False,
)
pipeline.run()
! gsutil ls {BUCKET_URI}/exported_data
! rm -f dataflow_split.json split.py requirements.txt
pipeline.delete()
# Warning: Setting this to true will delete everything in your bucket
delete_bucket = False
if delete_bucket or os.getenv("IS_TESTING"):
! gsutil rm -r $BUCKET_URI
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the data
Step2: Moving pictures dataset
Step3: sorting the samples based on a metadata field (sort_samples)
Step4: and is the new data sorted?
Step5: consecutive sorting using different fields
Step6: filter samples based on metadata field (filter_samples)
Step7: we can supply a list of values instead of only one value
Step8: we can also reverse the filtering (removing samples with the supplied values)
Step9: filter low abundance features (filter_abundance)
Step10: Keeping the low abundance bacteria instead
Step11: filter non-common bacteria (filter_prevalence)
Step12: Filter bacteria based on the mean frequency over all samples (filter_mean)
|
<ASSISTANT_TASK:>
Python Code:
import calour as ca
ca.set_log_level(11)
%matplotlib notebook
cfs=ca.read_amplicon('data/chronic-fatigue-syndrome.biom',
'data/chronic-fatigue-syndrome.sample.txt',
normalize=10000,min_reads=1000)
print(cfs)
movpic=ca.read_amplicon('data/moving_pic.biom',
'data/moving_pic.sample.txt',
normalize=10000,min_reads=1000)
print(movpic)
print(cfs.sample_metadata['Subject'].is_monotonic_increasing)
cfs=cfs.sort_samples('Subject')
print(cfs.sample_metadata['Subject'].is_monotonic_increasing)
movpic=movpic.sort_samples('DAYS_SINCE_EXPERIMENT_START')
movpic=movpic.sort_samples('HOST_SUBJECT_ID')
print(movpic.sample_metadata['DAYS_SINCE_EXPERIMENT_START'].is_monotonic_increasing)
print(movpic.sample_metadata['HOST_SUBJECT_ID'].is_monotonic_increasing)
tt=movpic.filter_samples('HOST_SUBJECT_ID','F4')
print('* original:\n%s\n\n* filtered:\n%s' % (movpic, tt))
print(movpic.sample_metadata['BODY_HABITAT'].unique())
yy=tt.filter_samples('BODY_HABITAT', ['UBERON:skin', 'UBERON:feces'])
print(yy)
yy=tt.filter_samples('BODY_HABITAT', ['UBERON:skin', 'UBERON:feces'], negate=True)
print(yy)
tt=cfs.filter_abundance(25)
print('* original:\n%s\n\n* filtered:\n%s' % (cfs, tt))
tt=cfs.filter_abundance(25, negate=True)
print('* original:\n%s\n\n* filtered:\n%s' % (cfs,tt))
# remove bacteria present in less than half of the samples
tt=cfs.filter_prevalence(0.5)
print('* original:\n%s\n\n* filtered:\n%s' % (cfs, tt))
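# Filtering methods return new Experiment objects, so steps can be
# chained (a sketch assuming Calour's default non-inplace behavior):
chained = cfs.filter_prevalence(0.5).filter_abundance(25)
print(chained)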
# keep only high frequency bacteria (mean over all samples > 1%)
tt=cfs.filter_mean(0.01)
print('* original:\n%s\n\n* filtered:\n%s' % (cfs, tt))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The pyshtools module contains the three classes SHCoeffs, SHGrid, and SHWindow, a submodule shtools that contains all of the Python-wrapped Fortran functions, and a series of submodules that groups these functions (constant, legendre, expand, shio, spectralanalysis, localizedspectralanalysis, rotate, gravmag, and other). SHCoeffs, SHGrid, and SHWindow bundle the vast majority of pyshtools functionality. In this notebook we will demonstrate how to use the classes related to grids and coefficients.
Step2: Next, we will create a random realization of spherical harmonic coefficients whose expected power is given by the spectrum we just created
Step3: This creates a new class instance of SHCoeffs that contains several attributes and methods. By default, pyshtools assumes that the coefficients are real and that they are normalized using the '4pi' convention excluding the Condon-Shortley phase factor. This is the standard normalization in Geodesy and many fields of geophysics and spectral analysis. Other normalizations can be selected explicitly via the optional parameter normalization, which can be either '4pi', 'ortho', or 'schmidt'. The Condon-Shortley phase can be included by setting the optional parameter csphase to -1, and if you wanted complex coefficients, you could set kind='complex'.
Step4: The spherical harmonic coefficients of a given SHCoeffs class instance can be converted easily to a different normalization using the return_coeffs() method. Here, we will convert the coefficients to the orthonormalized convention using the Condon-Shortley phase, which is common in many fields of physics and seismology. Also, let's just return the first 50 degrees of the function
Step5: If you ever forget how your data are normalized, you can use the built in info() method to remind you
Step6: We will next calculate the power spectrum of our two functions and plot them along with our theoretical input spectrum
Step7: As you can see, the power spectrum of the random realization closely follows the input spectrum. Furthermore, the power spectrum is displayed in units of '4pi' spherical harmonics and is therefore independent of the employed normalization. You will also note that we used the method get_degrees() to return a numpy array of the degrees from 0 up to the maximum value of the coefficients.
Step8: When expanding data onto a grid, pyshtools provides three options. The default is to use grid='DH', which is an equally-sampled grid in latitude and longitude that conforms to Driscoll and Healy's (1994) sampling theorem. If you would like a grid that is oversampled in longitude by a factor of two, such that the number of longitude bands is twice the number of latitude bands, use grid='DH2' instead.
Step9: Once again, if you ever forget how your grid was constructed, the info() method provides you with everything you need to know
Step10: Sometimes you need to set individual spherical harmonic coefficients to a specified value. For example, for planetary topography, the degree 2 order 0 term can be atypically large as a result of the planet's rotation. Let's set this term equal to zero and replot it using an over-sampled DH grid. Though we will modify only a single coefficient here, you could also provide input arrays for l, m, and value. When using the set_coeffs() method, you set the cosine terms using positive values for the orders, and the sine terms using negative orders
Step11: If you want to extract the grid as a numpy array, this can be done using the get_grid() method. If instead you want to extract the coefficients as a numpy array, this can be done using the get_coeffs() method. Here, we will just return the first few degrees of the spherical harmonic coefficients and verify that the coefficient was indeed set to zero
Step12: It is also easy to rotate either the physical body or the underlying coordinate frame of the function expressed in spherical harmonics. This is accomplished using three Euler angles, alpha, beta and gamma. There are many different conventions for these angles, so please read the documentation before doing this blindly! Here, we will rotate the point at the north pole to 60 degrees north latitude and 180 degrees east longitude
Step13: By default, this routine expects the angles to be in degrees, which we specified redundantly by the provided optional parameter. If the angles were in radians, you would instead set this to False.
Step14: We can easily add, subtract, multiply, and divide sets of coefficients and grids. If you need to make an exact copy of either, you can use the copy() method.
Step15: Finally, pyshtools provides many physical constants that are useful when working with planetary data. As an example, here is the GM of Earth, along with information on its source
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from __future__ import print_function # only necessary if using Python 2.x
import matplotlib.pyplot as plt
import numpy as np
import pyshtools
degrees = np.arange(101, dtype=float)
degrees[0] = np.inf
power = degrees**(-2)
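# Note: setting degrees[0] = np.inf above makes the degree-0 power
# exactly zero (1/inf**2 = 0), so the random realization has no
# monopole term.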
clm = pyshtools.SHCoeffs.from_random(power)
fig, ax = clm.plot_powerperdegree()
clm_ortho = clm.return_coeffs(normalization='ortho', csphase=-1, lmax=50)
clm_ortho.info()
fig, ax = plt.subplots(1, 1)
ax.plot(clm.get_degrees(), power, '-k',
clm.get_degrees(), clm.get_powerperdegree(), '-r',
clm_ortho.get_degrees(), clm_ortho.get_powerperdegree(), '-b')
ax.set(yscale='log', xlabel='degree', ylabel='power')
grid = clm.expand()
fig, ax = grid.plot_rawdata()
grid_glq = clm.expand(grid='GLQ')
grid_glq.plot_rawdata()
lats = grid_glq.get_lats()
lons = grid_glq.get_lons()
grid_glq.info()
grid.info()
l = 2
m = 0
value = 0.
clm.set_coeffs(value, l, m)
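# Sine terms are set with negative orders; e.g., zeroing the degree-3,
# order-1 sine coefficient would be done with the same set_coeffs API
# (shown commented out so the function above is not modified further):
# clm.set_coeffs(0., 3, -1)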
grid_dh2 = clm.expand(grid='DH2')
fig, ax = grid_dh2.plot_rawdata()
grid_dh2.info()
coeffs = clm.get_coeffs(lmax=4)
print(coeffs, end='\n\n')
print('c20 = {:f}'.format(coeffs[0,2,0]))
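# A minimal sketch, assuming pyshtools' (2, lmax+1, lmax+1) layout:
# index 0 holds the cosine (positive-order) terms and index 1 the sine
# (negative-order) terms, so the degree-2 order-1 sine term would be:
print('s21 = {:f}'.format(coeffs[1, 2, 1]))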
clat = 60.
clon = 180.
alpha = 0.
beta = -(90.-clat)
gamma = -clon
clm_rotated = clm.rotate(alpha, beta, gamma, degrees=True)
grid_rotated = clm_rotated.expand()
fig, ax = grid_rotated.plot_rawdata()
clm2 = pyshtools.SHCoeffs.from_random(power/10.)
grid2 = clm2.expand()
fig, ax = grid2.plot_rawdata()
grid3 = 100 + 2 * grid2 + grid**2 + grid / 2 - 100 - grid2 * 2 - grid * grid - grid / 2
grid4 = grid3.copy()
print(pyshtools.constant.gm_earth, end='\n\n')
pyshtools.constant.gm_earth.info()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Text processing with Scikit learn
Step2: Array vector for the first document
Step3: Number of times word "hard" occurs
Step4: Using the 20 Newsgroups dataset
Step5: Creating a CountVectorizer object
Step6: We can now look up the word algorithm in the vocabulary fitted on the subset of the 20 Newsgroups collection we are considering (note that vocabulary_ maps terms to column indices in the count matrix, not to frequencies).
Step7: How many terms were extracted? Use get_feature_names()
Step8: CountVectorizer can also perform further preprocessing, such as stopword removal.
Step9: More preprocessing
Step10: Create an English stemmer
Step11: NLTK for text analytics
Step12: Integrating NLTK with SciKit's vectorizer
Step13: If we modify build_analyzer() to apply the NLTK stemmer to the output of the default build_analyzer(), we get a version that performs stemming as well
Step14: So now we can create an instance of this class
Step15: Use this vectorizer to extract features
Step16: Notes
|
<ASSISTANT_TASK:>
Python Code:
import sklearn
import numpy as np
import matplotlib.pyplot as plt
data = np.array([[1,2], [2,3], [3,4], [4,5], [5,6]])
x = data[:,0]
y = data[:,1]
data, x, y
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer(min_df = 1)
content = ["How to format my hard disk", " Hard disk format problems "]
# fit_transform returns a matrix with two rows, one per 'document'.
# Each row has 7 elements, one per vocabulary feature, holding the
# number of times that feature occurred in the document.
X = vectorizer.fit_transform(content)
vectorizer.get_feature_names(), X.toarray()
X.toarray()[0]
X.toarray()[1][vectorizer.get_feature_names().index('hard')]
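# Sketch: total occurrences of 'hard' across both documents, obtained by
# summing the corresponding column of the count matrix (same index lookup).
hard_idx = vectorizer.get_feature_names().index('hard')
X.toarray()[:, hard_idx].sum()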
from sklearn.datasets import fetch_20newsgroups
categories = ['alt.atheism', 'soc.religion.christian',
'comp.graphics', 'sci.med']
twenty_train = fetch_20newsgroups(subset='train',
categories=categories, shuffle=True,
random_state=42)
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer()
train_counts = vectorizer.fit_transform(twenty_train.data)
vectorizer.vocabulary_.get(u'algorithm')
len(vectorizer.get_feature_names())
vectorizer = CountVectorizer(stop_words='english')
sorted(vectorizer.get_stop_words())[0:20]
import nltk
s = nltk.stem.SnowballStemmer('english')
s.stem("cats"), s.stem("ran"), s.stem("jumped")
from nltk.tokenize import word_tokenize
text = word_tokenize("And now for something completely different")
nltk.pos_tag(text)
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer(stop_words="english")
analyze = vectorizer.build_analyzer()
analyze("John bought carrots and potatoes")
import nltk.stem
english_stemmer = nltk.stem.SnowballStemmer('english')
class StemmedCountVectorizer(CountVectorizer):
def build_analyzer(self):
analyzer=super(StemmedCountVectorizer, self).build_analyzer()
return lambda doc:(english_stemmer.stem(w) for w in analyzer(doc))
stem_vectorizer = StemmedCountVectorizer(min_df=1,
stop_words='english')
stem_analyze = stem_vectorizer.build_analyzer()
Y = stem_analyze("John bought carrots and potatoes")
[tok for tok in Y]
from sklearn.datasets import fetch_20newsgroups
categories = ['alt.atheism', 'soc.religion.christian',
'comp.graphics', 'sci.med']
twenty_train = fetch_20newsgroups(subset='train',
categories=categories,
shuffle=True, random_state=42)
train_counts = stem_vectorizer.fit_transform(twenty_train.data)
len(stem_vectorizer.get_feature_names())
!ipython nbconvert --to script Lab1\ Text\ processing\ with\ python.ipynb
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The data variable contains both training and test datasets, as well as tag assignments and their labels
Step2: Now, we need to convert it into the Polara-compatible format of pandas DataFrames. Mind the field names for users and items in the result
Step3: We also need to convert item features (tags) into a dataframe with a certain structure. This will enable many of Polara's built-in functions for data manipulation.
Step4: Preparing data model for training and validation
Step5: Let us look at the holdout data, which will serve as a validation set
Step6: The procedure for both validation and final evaluation will be the following
Step7: Setting the number of threads greater than 1 is not critical in our setup, as the result will fluctuate by a relatively small margin, which should not dramatically change the overall picture of the algorithm's performance. Nevertheless, you can try running in a fully deterministic setting and verify that; however, you'll have to wait a bit longer during the LightFM tuning phase due to sequential execution.
Step8: Tuning
Step9: Note that tuning a high number of hyper-parameters exponentially increases the complexity of the task. The authors of optuna even recommend refraining from tuning unimportant variables. We will therefore concentrate on 3 main hyper-parameters
Step10: One of the limitations of TPE is that it does not take into account possible dependencies between hyper-parameters. In addition to that, parameters like rank or epochs significantly influence the training time. In order to avoid spending too much time on getting a reasonably good model, we will use a specific tuning procedure.
Step11: We will specifically target the Precision@10 metric and analyze the other metrics during the final evaluation.
Step12: Our optimal configuration can be retrieved as follows (note that it can be slightly different on your machine)
Step13: Visualizing tuning results
Step14: We will treat low-score points (likely obtained at the initial tuning stage) as outliers and only take into account the top-20% results.
Step15: Even though this plot is not suitable for drawing conclusions about the actual performance of the LightFM model, it roughly indicates that, with the current selection of optimization mechanism, increasing the number of epochs further is unlikely to significantly improve the quality of predictions. In other words, our tuning procedure reached a nearly stable state and, most likely, it is somewhere close to a local optimum. We will, therefore, proceed with the best found configuration without any further adjustments. Below is a characterizing plot of all trials.
Step16: We see that even lower values of the number of epochs produce a relatively good score, which also supports the assumption that we have probably found nearly optimal values. How good is this result? To answer that question we need to introduce the baselines.
Step17: Tuning
Step18: Recall, the result is fully deterministic and reproducible in this case.
Step19: As we can see, the optimal rank values sit somewhere around rank 50. The prediction quality is not impressive, though. Surprisingly, unlike many other examples, this time the scaling does not help much and the model performs poorly. It may indicate that properly handling side information plays a more critical role than data debiasing. Seemingly, SVD is unable to reliably connect the tag-based description with the latent representation. We can try to fix this problem with the help of the hybrid version of SVD. This is going to be the second and the main baseline.
Step20: This similairty matrix will be fed into the special data model instance, which will take care of providing a consistent on-demand access to similarity information for all dependent recommender models. The input should be in the format of a dictionary, specifiyng types of entities, similarities between them and the corresponding indexing information to decode rows (or columns) of the similarity matrices
Step21: All other functionality remains the same as in the standard data model
Step22: We also use the same configuration for the new data model to ensure the same data pre-processing and the same overall setup.
Step23: The model
Step24: On the technical side, the model takes as input the product of several sparse matrices, which forms an auxiliary matrix. As these computations are based on matrix-vector products (due to the Lanczos procedure used in the truncated SVD), we are presented with the choice
Step25: Tuning
Step26: The result is much better than for PureSVD! Let us compare all three models together.
Step27: We see that HybridSVD has a very good capacity compared to other models in terms of prediction quality. It seems that HybridSVD can be further improved with higher rank values. There is also some room for improvement in a more careful tuning of the scaling parameter and the weighting coefficient $\alpha$.
Step28: It is time to finally verify the obtained result on the test data.
Step29: Note that in the data splitting provided by the fetch_stackexchange function some of the items belong to both the training and the test set. We will ignore this and treat all items in the test set as cold items. Recall that in our evaluation setup we generate a list of candidate users for every cold item and then verify the list against the actual interactions present in the holdout.
Step30: SVD
Step31: HybridSVD
Step32: Analyzing the results
Step33: During the tuning phase we focused on the precision metric. It is now time to look at the other metrics as well, particularly the coverage score. It is calculated as the ratio of all unique recommendations generated by an algorithm to all unique entities of the same type present in the training data. Obviously, the maximum value is 1 (100% coverage) and it can typically only be achieved with randomly generated recommendations. The coverage metric characterizes the tendency of an algorithm to generate the same recommendations over and over again and is, therefore, linked to the diversity of recommendations. Higher diversity allows mitigating the famous "Harry Potter" problem, sometimes also called the "bananas problem". I personally like the term "tyranny of the majority". More diverse recommendations are likely to improve the overall user experience as long as the relevance of recommendations remains high enough. It may also improve overall product sales. However, in practice, there is an inverse relationship between diversity and accuracy of recommendations
Step34: As indicated by green horizontal bars in the table above, the standard SVD-based model performs poorly both in terms of recommendations accuracy and in terms of diversity. Likewise, LightFM presents a trade-off between these two metrics.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
from lightfm.datasets import fetch_stackexchange
from polara.recommender.coldstart.data import ItemColdStartData
from polara.tools.display import print_frames # to print df's side-by-side
data = fetch_stackexchange('crossvalidated',
test_set_fraction=0.1,
indicator_features=False,
tag_features=True)
print(data.keys())
entities = ['users', 'items']
training_data = pd.DataFrame(dict(zip(entities, data['train'].nonzero())))
test_data = pd.DataFrame(dict(zip(entities, data['test'].nonzero())))
seed = 321 # to be used in data model, LightFM, and optimization routines
training_data = training_data.sample(frac=1, random_state=seed) # shuffle data
# convert sparse matrix into `item - tag list` pd.Series object
# make use of sparse CSR format for efficiency
item_tags = (
pd.Series(
np.array_split(
            # convert back from indices to tag labels
np.take(data['item_feature_labels'],
data['item_features'].indices),
# split tags into groups by items
data['item_features'].indptr[1:-1]))
.rename_axis('items')
.to_frame('tags') # Polara expects dataframe
)
print_frames([training_data.head(), # data for training and validation
test_data.head(), # data for testing
item_tags.head()]) # item features data
data_model = ItemColdStartData(
training_data,
*training_data.columns, # userid, itemid
item_features=item_tags,
seed=seed)
print(data_model)
data_model.test_ratio = 0.05 # take 5% of items as cold (at random)
data_model.prepare()
print_frames([data_model.test.holdout.head(), data_model.test.holdout.tail()])
from polara.recommender.coldstart.models import LightFMItemColdStart
num_threads = 4 # number of parallel threads used by LightFM
max_rank = 200 # the max value or latent features used in tuning
def create_lightfm_model(data_model, item_features, num_threads, seed=None):
'Prepare LightFM model and fix initial configuration.'
model = LightFMItemColdStart(data_model, item_features=item_features)
model.loss = 'warp'
model.learning_schedule = 'adagrad'
model.seed = seed
model.fit_params['num_threads'] = num_threads
return model
lfm = create_lightfm_model(data_model, item_tags, num_threads, seed)
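# Illustrative only: for the fully deterministic (but slower, sequential)
# setting discussed above, build the model with a single thread.
lfm_deterministic = create_lightfm_model(data_model, item_tags, 1, seed)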
import optuna
try: # import lightweight progressbar
from ipypb import track
except ImportError: # fallback to default
from tqdm.auto import tqdm as track
def evaluate_lightfm(model):
'''Convenience function for evaluating LightFM.
It disables user bias terms to improve quality in cold start.'''
model._model.user_biases *= 0.0
return model.evaluate()
def find_target_metric(metrics, target_metric):
'Convenience function to quickly extract the required metric.'
for metric in metrics:
if hasattr(metric, target_metric):
return getattr(metric, target_metric)
def lightfm_objective(model, target_metric):
'Objective function factory for optuna trials.'
def objective(trial):
# sample hyper-parameter values
model.rank = trial.suggest_int('rank', 1, max_rank)
model.item_alpha = trial.suggest_loguniform('item_alpha', 1e-10, 1e-0)
# train model silently and evaluate
model.verbose = False
model.build()
metrics = evaluate_lightfm(model)
target = find_target_metric(metrics, target_metric)
# store trial-specific information for later use
trial.set_user_attr('epochs', model.fit_params['epochs'])
trial.set_user_attr('metrics', metrics)
return target
return objective
n_trials = {
# epochs: # trials
15: 30,
25: 25,
50: 20,
75: 15,
100: 10,
150: 5
}
target_metric = 'precision'
objective = lightfm_objective(lfm, target_metric)
study = optuna.create_study(
direction = 'maximize',
sampler = optuna.samplers.TPESampler(seed=seed)
)
optuna.logging.disable_default_handler() # do not report progress
for num_epochs, num_trials in track(n_trials.items()):
lfm.fit_params['epochs'] = num_epochs
study.optimize(objective, n_trials=num_trials, n_jobs=1, catch=None)
print(f'The best value of {target_metric}={study.best_value:0.4f} was achieved with '
f'rank={study.best_params["rank"]} and item_alpha={study.best_params["item_alpha"]:.02e} '
f'within {study.best_trial.user_attrs["epochs"]} epochs.')
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
plt.style.use(['seaborn-notebook']) # see plt.style.available
trials_df = (study.trials_dataframe() # all trials history
.loc[:, ['value', 'params', 'user_attrs']]
.rename(columns={'': target_metric}))
trials_df.columns = trials_df.columns.get_level_values(1)
trials_df.sort_values('precision').tail()
# take only top-20% of the points for analysis
(trials_df.query(f'{target_metric}>{0.8*study.best_value}')
.boxplot(column=target_metric, by='epochs', grid=False))
plt.suptitle(''); # hide auto-generated box-plot title
plt.figure(figsize=(10, 5))
# limit plot display region
plt.xlim(0, 50)
plt.ylim(0.8*study.best_value, study.best_value + 0.002)
# plot points
marker_scale = 5
sc = plt.scatter(
trials_df['rank'], trials_df[target_metric],
# circle size indicates the number of epochs
s=trials_df['epochs']*marker_scale,
# color encodes the value of item_alpha (logscale)
c=np.log10(trials_df['item_alpha']),
alpha=0.5, cmap="viridis", marker='o'
)
# prepare legend handles for each number of epochs
legend_labels = n_trials.keys()
legend_handles = [
plt.scatter([], [], s=n_epochs*marker_scale, marker='o',
color='lightgrey', edgecolors='darkgrey')
for n_epochs in legend_labels
]
# add legend, ensuring 1:1 scale with the main plot
plt.legend(
legend_handles, legend_labels,
scatterpoints=1, ncol=len(legend_labels),
title='# epochs', borderpad=1.1,
labelspacing=1.5, markerscale=1,
loc='lower right'
)
# add colorbar for item_alpha values
clb = plt.colorbar(sc)
clb.set_label('item_alpha, [log10]', rotation=-90, labelpad=20)
# annotate plot
plt.title('LightFM hyper-parameters tuning')
plt.xlabel('rank')
plt.ylabel(target_metric);
from polara.recommender.coldstart.models import ScaledSVDItemColdStart
svd = ScaledSVDItemColdStart(data_model)
from polara.evaluation.pipelines import find_optimal_config # generic routine for grid-search
def fine_tune_scaledsvd(model, ranks, scale_params, target_metric):
'Efficiently tuning SVD rank for different scaling parameter values.'
# descending order helps avoiding model recomputation
rev_ranks = sorted(ranks, key=lambda x: -x)
param_grid = [(s, r) for s in scale_params for r in rev_ranks]
param_names = ('col_scaling', 'rank')
config, scores = find_optimal_config(
model, param_grid, param_names, target_metric,
return_scores=True, force_build=False, iterator=track
)
return config, scores
# define the hyper-parameters grid
rank_grid = [1,] + list(range(5, max_rank+1, 5)) # 1, 5, 10, ..., max_rank
scaling_grid = [-0.8, -0.6, -0.4, -0.2, 0.0, 0.2, 0.4, 0.6, 0.8] # 1.0 is for PureSVD
# perform tuning
svd_best_config, svd_scores = fine_tune_scaledsvd(
svd, rank_grid, scaling_grid, target_metric
)
print(f'The best value of {target_metric}={svd_scores.max():.4f} was achieved with '
f'rank={svd_best_config["rank"]} and scaling parameter={svd_best_config["col_scaling"]}.')
(svd_scores.sort_index()
.unstack(level='col_scaling')
.iloc[::3] # don't display all rank values
.style
.format("{:.4f}")
.background_gradient(cmap='viridis', high=0.2, axis=None))
from polara.recommender.coldstart.data import ItemColdStartSimilarityData
from polara.lib.similarity import combine_similarity_data
training_tags = item_tags.reindex(training_data['items'].unique(), fill_value=[])
tag_similarity = combine_similarity_data(training_tags, similarity_type='cosine')
print('Similarity matrix density is '
f'{tag_similarity.nnz / np.prod(tag_similarity.shape):.1%}.')
similarities = {'users': None, # we have no user features
'items': tag_similarity}
sim_indices = {'users': None,
'items': training_tags.index}
data_model_sim = ItemColdStartSimilarityData(
training_data,
*training_data.columns,
relations_matrices=similarities, # new input args
relations_indices=sim_indices, # new input args
item_features=item_tags,
seed=seed
)
print(data_model_sim)
data_model_sim.test_ratio = data_model.test_ratio
data_model_sim.prepare()
from polara.recommender.coldstart.models import ScaledHybridSVDItemColdStart
hsvd = ScaledHybridSVDItemColdStart(data_model_sim)
hsvd.features_weight = 0.9 # the value of alpha
hsvd.precompute_auxiliary_matrix = True # faster in this case
from polara.evaluation.pipelines import find_optimal_svd_rank
hsvd.col_scaling = svd_best_config['col_scaling'] # reuse PureSVD config
# perform rank tuning (will compute the model only once for the max_rank value)
hsvd_best_rank, hsvd_rank_scores = find_optimal_svd_rank(
hsvd, rank_grid, target_metric, return_scores=True, iterator=track
)
# restore item features embeddings with the maximal rank value
hsvd.update_item_features_transform()
print(f'The best {target_metric}={hsvd_rank_scores.loc[hsvd_best_rank]:.4f} '
f'is achieved at rank={hsvd_best_rank}')
ax = hsvd_rank_scores.sort_index().plot(label='HybridSVD', marker='.')
svd_scores.groupby('rank').max().plot(ax=ax , label='SVD', ls=':')
ax.axhline(study.best_value, label='LightFM (best)', ls='--')
ax.legend()
ax.set_ylabel('precision')
ax.set_title('Item Cold Start');
hsvd.rank = 10 # Polara automatically truncates latent factors in SVD-based models
print(f'HybridSVD\'s {target_metric} at rank {hsvd.rank} is '
f'{find_target_metric(hsvd.evaluate(), target_metric):.4f}\n'
f'LightFM\'s best {target_metric} score is {study.best_value:.4f}')
data_model_sim.prepare_training_only()
data_model_sim.set_test_data(holdout=test_data)
lfm = create_lightfm_model(data_model_sim, item_tags, num_threads, seed)
lfm.rank = study.best_params['rank']
lfm.item_alpha = study.best_params['item_alpha']
lfm.fit_params['epochs'] = study.best_trial.user_attrs['epochs']
lfm.build()
svd = ScaledSVDItemColdStart(data_model_sim)
svd.col_scaling = svd_best_config['col_scaling']
svd.rank = svd_best_config['rank']
svd.build()
hsvd.rank = hsvd_best_rank
hsvd.build()
from polara.evaluation.evaluation_engine import consolidate_metrics
all_scores = {
'SVD (best)': svd.evaluate()
'LightFM (best)': evaluate_lightfm(lfm)
f'HybridSVD (rank {hsvd.rank})': hsvd.evaluate()
}
all_scores_df = pd.concat([consolidate_metrics(scores, model, False)
for model, scores in all_scores.items()])
(all_scores_df
.dropna(axis=1) # skip irrelevant metrics
.loc[:, :'coverage']
.style.bar(subset=[target_metric, 'coverage'], color='#5fba7d', align='mid')
)
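# A minimal sketch of the coverage metric described above; the helper name
# and its inputs are illustrative, not part of Polara's API.
def coverage_score(recommended_ids, n_total_entities):
    'Share of unique recommended entities among all entities in training.'
    return np.unique(recommended_ids).size / n_total_entities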
hsvd.rank = 400
hsvd.build()
# add new HybridSVD scores to the evaluation results
all_scores_df.loc[f'HybridSVD (rank {hsvd.rank})', :] = [
score for metrics in hsvd.evaluate() for score in metrics
]
all_scores_df[['precision', 'recall', 'nDCG', 'coverage']].T.plot.bar(rot=0);
plt.title('Combined evaluation results');
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Setup paths
Step2: Load example data
Step3: Setting up SourceMorph for SourceEstimate
Step4: Apply morph to (Vector) SourceEstimate
Step5: Plot results
Step6: As inflated surface
Step7: Reading and writing SourceMorph from and to disk
|
<ASSISTANT_TASK:>
Python Code:
# Author: Tommy Clausner <tommy.clausner@gmail.com>
#
# License: BSD (3-clause)
import os
import mne
from mne.datasets import sample
print(__doc__)
sample_dir_raw = sample.data_path()
sample_dir = os.path.join(sample_dir_raw, 'MEG', 'sample')
subjects_dir = os.path.join(sample_dir_raw, 'subjects')
fname_stc = os.path.join(sample_dir, 'sample_audvis-meg')
# Read stc from file
stc = mne.read_source_estimate(fname_stc, subject='sample')
morph = mne.compute_source_morph(stc, subject_from='sample',
subject_to='fsaverage',
subjects_dir=subjects_dir)
stc_fsaverage = morph.apply(stc)
# Define plotting parameters
surfer_kwargs = dict(
hemi='lh', subjects_dir=subjects_dir,
clim=dict(kind='value', lims=[8, 12, 15]), views='lateral',
initial_time=0.09, time_unit='s', size=(800, 800),
smoothing_steps=5)
# As spherical surface
brain = stc_fsaverage.plot(surface='sphere', **surfer_kwargs)
# Add title
brain.add_text(0.1, 0.9, 'Morphed to fsaverage (spherical)', 'title',
font_size=16)
brain_inf = stc_fsaverage.plot(surface='inflated', **surfer_kwargs)
# Add title
brain_inf.add_text(0.1, 0.9, 'Morphed to fsaverage (inflated)', 'title',
font_size=16)
stc_fsaverage = mne.compute_source_morph(stc,
subjects_dir=subjects_dir).apply(stc)
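# Sketch for the read/write step (the file name is illustrative): a fitted
# SourceMorph can be saved to disk and read back later.
morph.save('sample-fsaverage-morph.h5', overwrite=True)
morph_read = mne.read_source_morph('sample-fsaverage-morph.h5')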
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import statements
Step2: Compute site visits
Step3: Export to csv
|
<ASSISTANT_TASK:>
Python Code:
excel_filepath = ""
csv_output_filepath = ""
import pandas
file = pandas.read_excel(excel_filepath)
file = file.rename(columns={'NAME':'Name', 'TIME1':'Time', 'WEATHER1':'Weather', 'TEMPERATURE1':'Temperature'})
data = file[['ID', 'Name', 'Time', 'Weather', 'Temperature']].sort_values(by=['ID', 'Name', 'Time'])
data['ID'] = data['ID'].map('{:g}'.format)
data['Name'] = [n.strip() for n in data['Name']]
counts = data['ID'].value_counts().sort_index()
names = data[['ID', 'Name']].drop_duplicates().set_index('ID')
count = names.join(counts).rename(columns={'ID':'count'})
count
data
data.to_csv(csv_output_filepath)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Summarize data by state
Step2: 2. Poster popularity vs. prevalence
Step3: 3. Permutation tests
|
<ASSISTANT_TASK:>
Python Code:
%config InlineBackend.figure_format = 'retina'
%matplotlib inline
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('white')
import pandas as pd
# Load data
df = pd.read_csv('./posterviewers_by_state.csv', index_col=0)
key_N = 'Number of people'
# 0. Count number of posters from each state
# Calculate mean poster popularity
states = df['State'].unique()
dict_state_counts = {'State':states,'count':np.zeros(len(states),dtype=int),'popularity':np.zeros(len(states))}
for i, s in enumerate(states):
dict_state_counts['count'][i] = int(sum(df['State']==s))
dict_state_counts['popularity'][i] = np.round(np.mean(df[df['State']==s][key_N]),3)
df_counts = pd.DataFrame.from_dict(dict_state_counts)
# Visualize dataframe
# count = total number of posters counted affiliated with that country
# popularity = average number of viewers at a poster affiliated with that country
df_counts.head()
print(sp.stats.spearmanr(np.log10(df_counts['count']), df_counts['popularity']))
plt.figure(figsize=(3,3))
plt.semilogx(df_counts['count'],df_counts['popularity'],'k.')
plt.xlabel('Number of posters\nin the state')
plt.ylabel('Average number of viewers per poster')
plt.ylim((-.1,3.6))
plt.xlim((.9,1000))
# Simulate randomized data
Nperm = 100
N_posters = len(df)
rand_statepop = np.zeros((Nperm,len(states)),dtype=np.ndarray)
rand_statepopmean = np.zeros((Nperm,len(states)))
for i in range(Nperm):
# Random permutation of posters, organized by state
randperm_viewers = np.random.permutation(df[key_N].values)
for j, s in enumerate(states):
rand_statepop[i,j] = randperm_viewers[np.where(df['State']==s)[0]]
rand_statepopmean[i,j] = np.mean(randperm_viewers[np.where(df['State']==s)[0]])
# True data: Calculate all p-values for the difference between 1 state's popularity and the rest
min_N_posters = 10
states_big = states[np.where(df_counts['count']>=min_N_posters)[0]]
N_big = len(states_big)
t_true_all = np.zeros(N_big)
p_true_all = np.zeros(N_big)
for i, state in enumerate(states_big):
t_true_all[i], _ = sp.stats.ttest_ind(df[df['State']==state][key_N],df[df['State']!=state][key_N])
_, p_true_all[i] = sp.stats.mannwhitneyu(df[df['State']==state][key_N],df[df['State']!=state][key_N])
pmin_pop = np.min(p_true_all[np.where(t_true_all>0)[0]])
pmin_unpop = np.min(p_true_all[np.where(t_true_all<0)[0]])
print('Most popular state: ', states_big[np.argmax(t_true_all)], '. p=', str(pmin_pop))
print('Least popular state: ', states_big[np.argmin(t_true_all)], '. p=', str(pmin_unpop))
# Calculate minimum p-values for each permutation
# Calculate all p and t values
t_rand_all = np.zeros((Nperm,N_big))
p_rand_all = np.zeros((Nperm,N_big))
pmin_pop_rand = np.zeros(Nperm)
pmin_unpop_rand = np.zeros(Nperm)
for i in range(Nperm):
for j, state in enumerate(states_big):
idx_use = range(len(states_big))
idx_use.pop(j)
t_rand_all[i,j], _ = sp.stats.ttest_ind(rand_statepop[i,j],np.hstack(rand_statepop[i,idx_use]))
_, p_rand_all[i,j] = sp.stats.mannwhitneyu(rand_statepop[i,j],np.hstack(rand_statepop[i,idx_use]))
# Identify the greatest significance of a state being more popular than the rest
pmin_pop_rand[i] = np.min(p_rand_all[i][np.where(t_rand_all[i]>0)[0]])
# Identify the greatest significance of a state being less popular than the rest
pmin_unpop_rand[i] = np.min(p_rand_all[i][np.where(t_rand_all[i]<0)[0]])
# Test if most popular and least popular countries are outside of expectation
print('Chance of a state being more distinctly popular than Minnesota: ')
print(sum(i < pmin_pop for i in pmin_pop_rand) / float(len(pmin_pop_rand)))
print('Chance of a state being less distinctly popular than Connecticut: ')
print(sum(i < pmin_unpop for i in pmin_unpop_rand) / float(len(pmin_unpop_rand)))
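# Helper sketch (illustrative, not from the original analysis): an empirical
# p-value is the fraction of permuted statistics at least as extreme as the
# observed one; here "more extreme" means a smaller minimum p-value.
def empirical_pvalue(observed_min_p, permuted_min_ps):
    return np.mean(np.asarray(permuted_min_ps) <= observed_min_p)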
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First algorithm
Step2: Second algorithm
Step3: Third algorithm
Step4: 8.2 Stochastic predator-prey model
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import scipy.sparse.linalg as sp
import sympy as sym
from scipy.linalg import toeplitz
import ipywidgets as widgets
from ipywidgets import IntSlider
import matplotlib.pyplot as plt
%matplotlib inline
from matplotlib import cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter
plt.style.use('ggplot')
import matplotlib as mpl
mpl.rcParams['font.size'] = 14
mpl.rcParams['axes.labelsize'] = 20
mpl.rcParams['xtick.labelsize'] = 14
mpl.rcParams['ytick.labelsize'] = 14
sym.init_printing()
from scipy.integrate import odeint
from ipywidgets import interact
def plot_matrices_with_values2(ax,M,flag_values=False):
N=M.shape[0]
cmap = plt.get_cmap('GnBu')
ax.matshow(M, cmap=cmap)
if flag_values:
for i in np.arange(0, N):
for j in np.arange(0, N):
# I had to change the order of indices to make it right
ax.text(i, j, '{:.2f}'.format(M[j,i]), va='center', ha='center', color='r')
ids = np.arange(64) # Making it Python compatible
I = np.reshape(ids,(8,8),order='F')
print(I)
# Defining occupied squares ([row,columns])
occupied = np.array([[2, 3], [5, 5], [6, 2], [6, 3]])-1 # Subtracting one to make it Python compatible!
print(occupied)
for oc in occupied:
print(I[oc[0],oc[1]])
# Defining the peice
piece = 'Knight' # 'King' or 'Knight'
if piece == 'King':
# Squares relative to the current one reached by the King
Mask = np.zeros((8,2))
k=0
piece_range=[-1,0,1]
for i in piece_range:
for j in piece_range:
if not ((i==0) and (j==0)):
Mask[k,0]=i
Mask[k,1]=j
k=k+1
print(Mask)
elif piece == 'Knight':
# Squares relative to the current one reached by the Knight
Mask = np.zeros((8,2))
k=0
piece_range=[-2,-1,0,1,2]
    for i in piece_range:
        for j in piece_range:
if np.linalg.norm([i,j],ord=1)==3:
Mask[k,0]=i
Mask[k,1]=j
k=k+1
print(Mask)
else:
print('Wrong piece!')
M = np.zeros((64,64))
# Scan through all starting squares
# find all possible destinations
for n in np.arange(8):
for m in np.arange(8):
k=I[n,m] # Index to current square
# Find possible destination square using the mask
P = np.array([n, m])+Mask
# Discard occupied squares
P_tmp=[]
for p in P:
flag=0
for oc_sq in occupied:
if abs(oc_sq[0]-p[0])+abs(oc_sq[1]-p[1])<1e-10:
flag=1
if flag==0:
P_tmp.append(p)
P=np.array(P_tmp,dtype=int)
# Discarding squares outside the board
P_tmp=[]
for p in P:
if p[0]>=0 and p[0]<=7 and p[1]>=0 and p[1]<=7:
P_tmp.append(p)
P=np.array(P_tmp,dtype=int)
nP = P.shape[0] # number of possible destinations
# Fill in the probabilities
for p in P:
Ip = I[p[0],p[1]] # Destination index
M[Ip,k] = 1/nP # Moving probability
fig, (ax1) = plt.subplots(1, 1, figsize=(10,10))
plot_matrices_with_values2(ax1,M,False)
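# Sanity check (illustrative): M is column-stochastic by construction, so
# every column of moving probabilities should sum to 1.
print(np.allclose(M.sum(axis=0), 1.0))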
def plot_random_walk_output(nsteps=1):
np.random.seed(0)
# nsteps = 100000
prev = I[1,2]
ncounts = np.zeros(64)
squares_ids = [prev]
for n in np.arange(nsteps):
# Random draw using the column M[:,prev]
next_step = np.random.choice(np.arange(64),p=M[:,prev])
ncounts[next_step] = ncounts[next_step] + 1
prev = next_step
squares_ids.append(next_step)
fig, (ax1) = plt.subplots(1, 1, figsize=(10,10))
plot_matrices_with_values2(ax1,np.log10(np.reshape(ncounts/np.sum(ncounts),(8,8),order='F')),True)
# print(ncounts)
# print(squares_ids)
# print(I)
interact(plot_random_walk_output,nsteps=(0,500000,1))
def plot_P_n(nj, I, M):
# Defining initial distribution
(i0,j0)=(0,1)
pcurr = np.zeros(64)
pcurr[I[i0,j0]] = 1
for j in np.arange(nj):
pcurr = np.dot(M,pcurr)
pcurrM = np.reshape(pcurr,(8,8),order='F')
fig, (ax1) = plt.subplots(1, 1, figsize=(10,10))
plot_matrices_with_values2(ax1,np.log10(pcurrM),True)
# print(pcurr)
interact(lambda nj=0: plot_P_n(nj, I=I, M=M), nj = (0,2000,1))
w, v = np.linalg.eig(M)
w
v0=np.real(v[:,0])/np.sum(np.real(v[:,0]))
print(v0)
v1=np.real(v[:,1])/np.sum(np.real(v[:,1]))
print(v1)
if piece == 'King':
v_selected = v0
elif piece == 'Knight':
v_selected = v1
else:
print('Wrong piece!')
v_selected=np.full((64),np.nan)
fig, (ax1) = plt.subplots(1, 1, figsize=(10,10))
plot_matrices_with_values2(ax1,np.log10(np.reshape(np.real(v_selected),(8,8),order='F')),True)
def plot_S_Pred_Prey(seed=0):
Nj = 200 # number of X adult pair at t = t(j)
Mj = 50 # number of Y adult pair at t = t(j)
p_mean = 10 # mean number of X offsprings/adult pair
p_std = 3 # std of the number of the X offspring/adult pair
beta = 0.3 # Survival probability of X in the absence of Y
Mstar = 50 # number of Y that halves the survival prob. of X
q_mean = 4 # mean number of Y offspring/adult pair
    q_std = 2 # std of the number of the Y offspring/adult pair
    gamma = 0.6 # Survival prob. of Y in abundance of X
Nstar = 400 # Number of X pairs at which the survival prob. of Y is halved
nrounds = 40 # Number of generations allowed
N = np.full(nrounds,np.nan)
N[0] = Nj
M = np.full(nrounds,np.nan)
M[0] = Mj
Surv = np.full((nrounds-1,2),np.nan)
np.random.seed(seed)
for j in np.arange(1,nrounds):
# Draw the number of offsprings of species X
Pj = Nj*p_mean+np.sqrt(Nj)*p_std*np.random.randn()
# Survival probability of the offspring of species X
s = beta/(1+(Mj/Mstar)**2)
Nj = s*Pj+np.sqrt(s*Pj*(1-s))*np.random.randn()
N[j] = 0.5*Nj
# Draw the number of offsprings of species Y
Qj = Mj*q_mean+np.sqrt(Mj)*q_std*np.random.randn()
# Survival probability of the offspring of species Y
r = gamma*(1-1/(1+(Nj/Nstar)**2))
Mj = r*Qj+np.sqrt(r*Qj*(1-r))*np.random.randn()
M[j] = 0.5*Mj
# Saving the survival probabilities for later analysis
Surv[j-1,:]=[s,r]
s=Surv[:,0]
r=Surv[:,1]
smean = np.sum(s)/(nrounds-1)
rmean = np.sum(r)/(nrounds-1)
Surv_c = Surv-np.array([smean,rmean])
covSurv = np.dot(Surv_c.T,Surv_c)/(nrounds-1)
corrcoeff = covSurv[0,1]/np.sqrt(covSurv[0,0]*covSurv[1,1])
fig, ((ax1,ax2),(ax3,ax4)) = plt.subplots(2, 2, figsize=(10,10))
ax4.plot(N,'r.-')
ax4.plot(M,'b.-')
ax4.grid(True)
ax1.plot(s,r,'.')
ax1.set_xlim(0,1)
ax1.set_ylim(0,1)
ax1.set_xlabel(r'$s_i$')
ax1.set_ylabel(r'$r_i$')
ax1.text(0.5, 0.2, 'c={:.5f}'.format(corrcoeff), va='center', ha='center', color='k')
ax2.hist(r,orientation='horizontal')
ax2.set_ylim(0,1)
ax3.hist(s)
ax3.set_xlim(0,1)
plt.show()
interact(plot_S_Pred_Prey,seed=(0,100,1))
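# Standalone cross-check (illustrative, synthetic data): the manual
# covariance-based correlation used inside plot_S_Pred_Prey agrees with
# numpy's built-in corrcoef.
_demo = np.random.randn(39, 2)
_centered = _demo - _demo.mean(axis=0)
_cov = _centered.T @ _centered / len(_demo)
print(np.isclose(_cov[0, 1] / np.sqrt(_cov[0, 0] * _cov[1, 1]),
                 np.corrcoef(_demo[:, 0], _demo[:, 1])[0, 1]))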
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import pixiedust module
Step2: Use the python variable in Scala code
Step3: Define a variable in Scala and use it in Python
Step4: Invoke PixieDust display api on __dfFromScala dataframe to visualize it
|
<ASSISTANT_TASK:>
Python Code:
pythonString = "Hello From Python"
pythonInt = 20
import pixiedust
%%scala
print(pythonString)
print(pythonInt + 10)
%%scala
//Reuse the sqlContext object available in the python scope
val c = sqlContext.asInstanceOf[org.apache.spark.sql.SQLContext]
import c.implicits._
val __dfFromScala = Seq(
(2010, "Camping Equipment", 3, 200),
(2010, "Golf Equipment", 1, 240),
(2010, "Mountaineering Equipment", 1, 348),
(2010, "Outdoor Protection", 2, 200),
(2010, "Personal Accessories", 2, 200),
(2011, "Camping Equipment", 4, 489),
(2011, "Golf Equipment", 5, 234),
(2011, "Mountaineering Equipment",2, 123),
(2011, "Outdoor Protection", 4, 654),
(2011, "Personal Accessories", 2, 234),
(2012, "Camping Equipment", 5, 876),
(2012, "Golf Equipment", 5, 200),
(2012, "Mountaineering Equipment", 3, 156),
(2012, "Outdoor Protection", 5, 200),
(2012, "Personal Accessories", 3, 345),
(2013, "Camping Equipment", 8, 987),
(2013, "Golf Equipment", 5, 434),
(2013, "Mountaineering Equipment", 3, 278),
(2013, "Outdoor Protection", 8, 134),
(2013, "Personal Accessories", 4, 200)).toDF("year", "zone", "unique_customers", "revenue")
print(__dfFromScala)
display(__dfFromScala)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Almost every aspect of everyday life is affected by some kind of control system.
Step2: Example
Step3: Uses feedback from the output to the input.
Step4: For example, what happens in an elevator in both cases? And in a system that opens and closes a door?
Step5: It can be shown that its transfer function is
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import SVG
SVG('img/intro_fig1.svg')
from IPython.display import SVG
SVG('img/intro_fig2.svg')
from IPython.display import SVG
SVG('img/intro_fig3.svg')
from IPython.display import SVG
SVG('img/intro_fig4.svg')
from IPython.display import SVG
SVG('img/intro_fig5.svg')
from IPython.display import SVG
SVG('img/intro_fig6.svg')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Family
Step7: 1.4. Basic Approximations
Step8: 2. Key Properties --> Resolution
Step9: 2.2. Canonical Horizontal Resolution
Step10: 2.3. Range Horizontal Resolution
Step11: 2.4. Number Of Vertical Levels
Step12: 2.5. High Top
Step13: 3. Key Properties --> Timestepping
Step14: 3.2. Timestep Shortwave Radiative Transfer
Step15: 3.3. Timestep Longwave Radiative Transfer
Step16: 4. Key Properties --> Orography
Step17: 4.2. Changes
Step18: 5. Grid --> Discretisation
Step19: 6. Grid --> Discretisation --> Horizontal
Step20: 6.2. Scheme Method
Step21: 6.3. Scheme Order
Step22: 6.4. Horizontal Pole
Step23: 6.5. Grid Type
Step24: 7. Grid --> Discretisation --> Vertical
Step25: 8. Dynamical Core
Step26: 8.2. Name
Step27: 8.3. Timestepping Type
Step28: 8.4. Prognostic Variables
Step29: 9. Dynamical Core --> Top Boundary
Step30: 9.2. Top Heat
Step31: 9.3. Top Wind
Step32: 10. Dynamical Core --> Lateral Boundary
Step33: 11. Dynamical Core --> Diffusion Horizontal
Step34: 11.2. Scheme Method
Step35: 12. Dynamical Core --> Advection Tracers
Step36: 12.2. Scheme Characteristics
Step37: 12.3. Conserved Quantities
Step38: 12.4. Conservation Method
Step39: 13. Dynamical Core --> Advection Momentum
Step40: 13.2. Scheme Characteristics
Step41: 13.3. Scheme Staggering Type
Step42: 13.4. Conserved Quantities
Step43: 13.5. Conservation Method
Step44: 14. Radiation
Step45: 15. Radiation --> Shortwave Radiation
Step46: 15.2. Name
Step47: 15.3. Spectral Integration
Step48: 15.4. Transport Calculation
Step49: 15.5. Spectral Intervals
Step50: 16. Radiation --> Shortwave GHG
Step51: 16.2. ODS
Step52: 16.3. Other Flourinated Gases
Step53: 17. Radiation --> Shortwave Cloud Ice
Step54: 17.2. Physical Representation
Step55: 17.3. Optical Methods
Step56: 18. Radiation --> Shortwave Cloud Liquid
Step57: 18.2. Physical Representation
Step58: 18.3. Optical Methods
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Step60: 20. Radiation --> Shortwave Aerosols
Step61: 20.2. Physical Representation
Step62: 20.3. Optical Methods
Step63: 21. Radiation --> Shortwave Gases
Step64: 22. Radiation --> Longwave Radiation
Step65: 22.2. Name
Step66: 22.3. Spectral Integration
Step67: 22.4. Transport Calculation
Step68: 22.5. Spectral Intervals
Step69: 23. Radiation --> Longwave GHG
Step70: 23.2. ODS
Step71: 23.3. Other Flourinated Gases
Step72: 24. Radiation --> Longwave Cloud Ice
Step73: 24.2. Physical Reprenstation
Step74: 24.3. Optical Methods
Step75: 25. Radiation --> Longwave Cloud Liquid
Step76: 25.2. Physical Representation
Step77: 25.3. Optical Methods
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Step79: 27. Radiation --> Longwave Aerosols
Step80: 27.2. Physical Representation
Step81: 27.3. Optical Methods
Step82: 28. Radiation --> Longwave Gases
Step83: 29. Turbulence Convection
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Step85: 30.2. Scheme Type
Step86: 30.3. Closure Order
Step87: 30.4. Counter Gradient
Step88: 31. Turbulence Convection --> Deep Convection
Step89: 31.2. Scheme Type
Step90: 31.3. Scheme Method
Step91: 31.4. Processes
Step92: 31.5. Microphysics
Step93: 32. Turbulence Convection --> Shallow Convection
Step94: 32.2. Scheme Type
Step95: 32.3. Scheme Method
Step96: 32.4. Processes
Step97: 32.5. Microphysics
Step98: 33. Microphysics Precipitation
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Step100: 34.2. Hydrometeors
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Step102: 35.2. Processes
Step103: 36. Cloud Scheme
Step104: 36.2. Name
Step105: 36.3. Atmos Coupling
Step106: 36.4. Uses Separate Treatment
Step107: 36.5. Processes
Step108: 36.6. Prognostic Scheme
Step109: 36.7. Diagnostic Scheme
Step110: 36.8. Prognostic Variables
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Step112: 37.2. Cloud Inhomogeneity
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Step114: 38.2. Function Name
Step115: 38.3. Function Order
Step116: 38.4. Convection Coupling
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Step118: 39.2. Function Name
Step119: 39.3. Function Order
Step120: 39.4. Convection Coupling
Step121: 40. Observation Simulation
Step122: 41. Observation Simulation --> Isscp Attributes
Step123: 41.2. Top Height Direction
Step124: 42. Observation Simulation --> Cosp Attributes
Step125: 42.2. Number Of Grid Points
Step126: 42.3. Number Of Sub Columns
Step127: 42.4. Number Of Levels
Step128: 43. Observation Simulation --> Radar Inputs
Step129: 43.2. Type
Step130: 43.3. Gas Absorption
Step131: 43.4. Effective Radius
Step132: 44. Observation Simulation --> Lidar Inputs
Step133: 44.2. Overlap
Step134: 45. Gravity Waves
Step135: 45.2. Sponge Layer
Step136: 45.3. Background
Step137: 45.4. Subgrid Scale Orography
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Step139: 46.2. Source Mechanisms
Step140: 46.3. Calculation Method
Step141: 46.4. Propagation Scheme
Step142: 46.5. Dissipation Scheme
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Step144: 47.2. Source Mechanisms
Step145: 47.3. Calculation Method
Step146: 47.4. Propagation Scheme
Step147: 47.5. Dissipation Scheme
Step148: 48. Solar
Step149: 49. Solar --> Solar Pathways
Step150: 50. Solar --> Solar Constant
Step151: 50.2. Fixed Value
Step152: 50.3. Transient Characteristics
Step153: 51. Solar --> Orbital Parameters
Step154: 51.2. Fixed Reference Date
Step155: 51.3. Transient Method
Step156: 51.4. Computation Method
Step157: 52. Solar --> Insolation Ozone
Step158: 53. Volcanos
Step159: 54. Volcanos --> Volcanoes Treatment
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'inm', 'sandbox-3', 'atmos')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Proper use of Matplotlib
Step2: Add a curve with a title to the plot
Step3: A long list of markers can be found at http
Step4: Add labels to the x and y axes
Step5: Finally dump the figure to a png file
Step6: Let's define a function that creates an empty base plot to which we will add
Step7: Log plots
Step8: This is equivalent to
Step9: Histograms
Step10: Subplots
Step11: create a 3x3 grid of plots
Step12: Images and contours
Step13: Animation
Step14: Styles
Step15: Content of the style files
Step16: Maybe the most interesting feature of this style file is the redefinition of the color cycle using hexadecimal notation. This allows the user to define their own color palette for multi-line plots.
Step17: The 'ggplot' style has been applied to the current session. One feature that differs from the standard matplotlib configuration is putting the ticks outside the main figure (axes.axisbelow
Step18: Now using the 'dark_background' style as a context, we can spot the main changes (background, line color, axis color) and we can also see the outside ticks, although they are not part of this particular style. This is the 'ggplot' axes.axisbelow setup that has not been overwritten by the new style.
Step19: Custom style file
Step20: Note
Step21: One can now copy an existing style file to serve as a boilerplate.
Step22: D3
Step23: Seaborn
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib
import numpy as np
import matplotlib.pyplot as plt
# To get interactive plotting (otherwise you need to
# type plt.show() at the end of the plotting commands)
plt.ion()
x = np.linspace(0, 10)
y = np.sin(x)
# basic X/Y line plotting with a '--' dashed line style
plt.plot(x, y, '--', label='first line')
# overplot point markers ('.') on the previous plot
plt.plot(x, np.cos(x)*np.cos(x/2), '.', linewidth=3, label='other')
x_axis_label = plt.xlabel('x axis') #change the label of the xaxis
# change your mind about the label : you do not need to replot everything !
plt.xlabel('another x axis')
# or you can use the returned object
x_axis_label.set_text('changed it from the object itself')
# simply add the legend (from the previous label)
legend = plt.legend()
plt.savefig('plot.png') # save the current figure in png
plt.savefig('plot.eps') # save the same figure as eps, no need to re-plot it !
!ls
%matplotlib
import matplotlib.pyplot as plt
import numpy as np
# define a figure which can contain several plots; you can set resolution and so on here...
fig2 = plt.figure()
# add one axis; axes are the actual plots where you put data. Arguments: (nrows, ncols, index)
ax = fig2.add_subplot(1, 1, 1)
x = np.linspace(0, 2*np.pi)
ax.plot(x, np.sin(x), '+')
ax.set_title('this title')
plt.show()
# a simpler syntax to add one axis to the figure (we will stick to this)
fig, ax = plt.subplots()
ax.plot(x, np.sin(x), '+')
ax.set_title('simple subplot')
print(type(fig))
print(dir(fig))
print(fig.axes)
print('This is the x-axis object', fig.axes[0].xaxis)
print('And this is the y-axis object', fig.axes[0].yaxis)
# arrow pointing to the origin of the axes
ax_arrow = ax.annotate('ax = fig.axes[0]',
xy=(0, -1), # tip of the arrow
xytext=(1, -0.5), # location of the text
arrowprops={'facecolor':'red', 'shrink':0.05})
# arrow pointing to the x axis
x_ax_arrow = ax.annotate('ax.xaxis',
xy=(3, -1), # tip of the arrow
xytext=(3, -0.5), # location of the text
arrowprops={'facecolor':'red', 'shrink':0.05})
xax = ax.xaxis
# arrow pointing to the y axis
y_ax_arrow = ax.annotate('ax.yaxis',
xy=(0, 0), # tip of the arrow
xytext=(1, 0.5), # location of the text
arrowprops={'facecolor':'red', 'shrink':0.05})
# add some ascii text label
# this is equivalent to:
# ax.set_xlabel('x')
xax.set_label_text('x')
# add latex rendered text to the y axis
ax.set_ylabel('$sin(x)$', size=20, color='g', rotation=0)
fig.savefig('myplot.png')
!ls
!eog myplot.png
from matplotlib import pyplot as plt
import numpy as np
def create_base_plot():
fig, ax = plt.subplots()
ax.set_title('sample figure')
return fig, ax
def plot_something():
fig, ax = create_base_plot()
x = np.linspace(0, 2*np.pi)
ax.semilogx(x, np.cos(x)*np.cos(x/2), 'r--.')
plt.show()
fig, ax = create_base_plot()
# log-x plots (x axis in log scale, y axis linear)
ax.semilogx(x, np.cos(x)*np.cos(x/2), 'r--.')
# clear the plot and plot a function using the y axis in log scale
ax.clear()
ax.semilogy(x, np.exp(x))
# you can (un)set it, whenever you want
#ax.set_yscale('linear') # change they y axis to linear scale
#ax.set_yscale('log') # change the y axis to log scale
# you can also make loglog plots
#ax.clear()
#ax.loglog(x, np.exp(x)*np.sin(x))
plt.setp(ax, **dict(yscale='log', xscale='log'))
plt.setp(ax, 'xscale', 'linear', 'xlim', [1, 5], 'ylim', [0.1, 10], 'xlabel', 'x',
'ylabel', 'y', 'title', 'foo',
'xticks', [1, 2, 3, 4, 5],
'yticks', [0.1, 1, 10],
'yticklabels', ['low', 'medium', 'high'])
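# Sketch (not in the original notebook): plt.getp is the read-side
# counterpart of plt.setp; with a property name it returns that
# property's current value.
print(plt.getp(ax, 'xlim'))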
fig1, ax = create_base_plot()
n, bins, patches = ax.hist(np.random.normal(0, 0.1, 10000), bins=50)
# Create one figure with two plots/axes, with their xaxis shared
fig, (ax1, ax2) = plt.subplots(2, sharex=True)
ax1.plot(x, np.sin(x), '-.', color='r', label='first line')
other = ax2.plot(x, np.cos(x)*np.cos(x/2), 'o-', linewidth=3, label='other')
ax1.legend()
ax2.legend()
# adjust the spacing between the axes
fig.subplots_adjust(hspace=0.0)
# add a scatter plot to the first axis
ax1.scatter(x, np.sin(x)+np.random.normal(0, 0.1, np.size(x)))
fig, axs = plt.subplots(3, 3)
print(axs.shape)
# add an index to all the subplots
for ax_index, ax in enumerate(axs.flatten()):
ax.set_title(ax_index)
# remove all ticks
for ax in axs.flatten():
plt.setp(ax, 'xticks', [], 'yticks', [])
fig.subplots_adjust(hspace=0, wspace=0)
# plot a curve in the diagonal subplots
for ax, func in zip(axs.diagonal(), [np.sin, np.cos, np.exp]):
ax.plot(x, func(x))
xx, yy = np.mgrid[-2:2:100j, -2:2:100j]
img = np.sin(xx) + np.cos(yy)
fig, ax = create_base_plot()
# to have 0,0 in the lower left corner and no interpolation
img_plot = ax.imshow(img, origin='lower', interpolation='None')
# to add a grid to any axis
ax.grid()
img_plot.set_cmap('hot') # changing the colormap
img_plot.set_cmap('nipy_spectral') # changing it again ('spectral' was renamed 'nipy_spectral' in newer matplotlib)
colorb = fig.colorbar(img_plot) # adding a color bar
img_plot.set_clim(-0.5, 0.5) # changing the dynamical range
# add contour levels
img_contours = ax.contour(img, [-1, -0.5, 0.0, 0.5])
plt.clabel(img_contours, inline=True, fontsize=20)
from IPython.display import HTML
import matplotlib.animation as animation
def f(x, y):
return np.sin(x) + np.cos(y)
fig, ax = create_base_plot()
im = ax.imshow(f(xx, yy), cmap=plt.get_cmap('viridis'))
def updatefig(*args):
global xx, yy
xx += np.pi / 15.
yy += np.pi / 20.
im.set_array(f(xx, yy))
return im,
ani = animation.FuncAnimation(fig, updatefig, interval=50, blit=True)
_ = ani.to_html5_video()
# change title during animation!!
ax.set_title('runtime title')
print('\n'.join(plt.style.available))
x = np.arange(0, 10, 0.01)
def f(x, t):
return np.sin(x) * np.exp(1 - x / 10 + t / 2)
def simple_plot(style):
plt.figure()
with plt.style.context(style, after_reset=True):
for t in range(5):
plt.plot(x, f(x, t))
plt.title('Simple plot')
simple_plot('ggplot')
simple_plot('dark_background')
simple_plot('grayscale')
simple_plot('fivethirtyeight')
simple_plot('bmh')
import os
ggplotfile = os.path.join(plt.style.core.BASE_LIBRARY_PATH, 'ggplot.mplstyle')
!cat $ggplotfile
plt.style.use('ggplot')
plt.figure()
plt.plot(x, f(x, 0))
with plt.style.context('dark_background'):
plt.figure()
plt.plot(x, f(x, 1))
plt.figure()
plt.plot(x, f(x, 2))
print(plt.style.core.USER_LIBRARY_PATHS)
styledir = plt.style.core.USER_LIBRARY_PATHS[0]
!mkdir -p $styledir
mystylefile = os.path.join(styledir, 'mystyle.mplstyle')
!cp $ggplotfile $mystylefile
!cd $styledir
%%file mystyle.mplstyle
font.size: 16.0 # large font
axes.linewidth: 2
axes.grid: True
axes.titlesize: x-large
axes.labelsize: x-large
axes.labelcolor: 555555
axes.axisbelow: True
xtick.color: 555555
xtick.direction: out
ytick.color: 555555
ytick.direction: out
grid.color: white
grid.linestyle: : # dotted line
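# Sketch (assumptions noted): try the new style. %%file writes to the
# current directory while matplotlib only scans its user style directory
# (printed above), so the file may need to be moved there first.
plt.style.reload_library()
if 'mystyle' in plt.style.available:
    with plt.style.context('mystyle'):
        plt.figure()
        plt.plot(x, f(x, 0))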
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import mpld3
mpld3.enable_notebook()
# Scatter points
fig, ax = plt.subplots(subplot_kw=dict(facecolor='#EEEEEE'))  # 'axisbg' was renamed 'facecolor' in matplotlib 2.0
ax.grid(color='white', linestyle='solid')
N = 50
scatter = ax.scatter(np.random.normal(size=N),
np.random.normal(size=N),
c=np.random.random(size=N),
s = 1000 * np.random.random(size=N),
alpha=0.3,
cmap=plt.cm.jet)
ax.set_title("D3 Scatter Plot", size=18);
import mpld3
mpld3.display(fig)
from mpld3 import plugins
fig, ax = plt.subplots(subplot_kw=dict(facecolor='#EEEEEE'))
ax.grid(color='white', linestyle='solid')
N = 50
scatter = ax.scatter(np.random.normal(size=N),
np.random.normal(size=N),
c=np.random.random(size=N),
s = 1000 * np.random.random(size=N),
alpha=0.3,
cmap=plt.cm.jet)
ax.set_title("D3 Scatter Plot (with tooltips!)", size=20)
labels = ['point {0}'.format(i + 1) for i in range(N)]
plugins.connect(fig, plugins.PointLabelTooltip(scatter, labels))  # attach the tooltip plugin to the figure
%matplotlib
plot_something()
import seaborn
plot_something()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: To use the K-means clustering algorithm from Scikit-learn, we import it and specify the number of clusters (that is the k) and the random state used to initialize the centroids of the clusters. We assume that the data can be grouped into two clusters
Step2: We can now apply the clustering algorithm to the datapoints in $X$
Step3: The output above shows the assignment of datapoints to clusters.
Step4: We can also output the centroids of the two clusters
Step5: If we want to use K-Nearest Neighbor in Scikit Learn, we need to import the KNeighborsClassifier from the neighbors submodule
Step6: We instantiate a KNN-classifier
Step7: We use the following dataset X and class-vector y
Step8: We train the classifier
Step9: We ask the classifier to suggest a class for an unseen vector
Step10: It can also give us the probability of a data-point belonging to each of the classes
Step11: We might not have class assignments for a sample set. If we just want to find the closest data-point in a sample set, we can use the NearestNeighbors class instead. Here is a sample.
Step12: We can now train the K=1 nearest-neighbor model
Step13: We could ask for the nearest neighbor of a concrete data-point
Step14: The returned result [[0.5]] and [[2]] means that the nearest neighbor is the third sample in samples (index 2) and that the distance between the two is 0.5. One can also query for the neighbors of multiple data-points; in the following call the distance output is suppressed
|
<ASSISTANT_TASK:>
Python Code:
import numpy
X = numpy.array([[1, 2], [1, 4], [1, 0], [4, 2], [4, 4], [4, 0]])
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters=2, random_state=0)
kmeans.fit(X)
print(kmeans.labels_)
print(kmeans.predict([[0, 0], [4, 4]]))
print(kmeans.cluster_centers_)
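# Sketch (not part of the original steps): inertia_ is the sum of squared
# distances from each sample to its closest centroid -- a rough measure of
# cluster tightness (lower is tighter).
print(kmeans.inertia_)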
from sklearn.neighbors import KNeighborsClassifier
KNNClassifier = KNeighborsClassifier(n_neighbors = 3)
X = [[0, 1], [1, 1], [2, 4], [3, 4]]
y = [0, 0, 1, 1]
KNNClassifier.fit(X, y)
print(KNNClassifier.predict([[1.1, 0.9]]))
print(KNNClassifier.predict_proba([[2.9, 3.1]]))
samples = [[0., 0., 0.], [0., .5, 0.], [1., 1., .5]]
from sklearn.neighbors import NearestNeighbors
KNNClassifier = NearestNeighbors(n_neighbors=1)
KNNClassifier.fit(samples)
print(KNNClassifier.kneighbors([[1., 1., 1.]]))
X = [[0., 1., 0.], [1., 0., 1.]]
KNNClassifier.kneighbors(X, return_distance=False)
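# Sketch (not part of the original steps): the same queries expressed as a
# (query x training-sample) connectivity matrix; a 1 marks each point's
# nearest neighbor.
print(KNNClassifier.kneighbors_graph(X).toarray())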
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Get started with qsimcirq
Step2: Simulating Cirq circuits with qsim is easy
Step3: To sample from this state, you can invoke Cirq's sample_state_vector method
Step4: Measurement sampling
Step5: The warning above highlights an important distinction between the simulate and run methods
Step6: Amplitude evaluation
Step7: Performance benchmark
Step8: qsim performance can be tuned further by passing options to the simulator constructor. These options use the same format as the qsim_base binary - a full description can be found in the qsim usage doc. The example below demonstrates enabling multithreading in qsim; for best performance, use the same number of threads as the number of cores (or virtual cores) on your machine.
Step9: Another option is to adjust the maximum number of qubits over which to fuse gates. Increasing this value (as demonstrated below) increases arithmetic intensity, which may improve performance with the right environment settings.
Step10: Advanced applications
Step11: In order to let qsimh know how we want to split up the circuit, we need to pass it some additional options. More detail on these can be found in the qsim usage doc, but the fundamentals are explained below.
Step12: Now to run the other side of the cut...
Step13: ...and add the two together. The results of a normal qsim simulation are shown for comparison.
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
try:
import cirq
except ImportError:
!pip install cirq --quiet
import cirq
try:
import qsimcirq
except ImportError:
!pip install qsimcirq --quiet
import qsimcirq
# Define qubits and a short circuit.
q0, q1 = cirq.LineQubit.range(2)
circuit = cirq.Circuit(cirq.H(q0), cirq.CX(q0, q1))
print("Circuit:")
print(circuit)
print()
# Simulate the circuit with Cirq and return the full state vector.
print('Cirq results:')
cirq_simulator = cirq.Simulator()
cirq_results = cirq_simulator.simulate(circuit)
print(cirq_results)
print()
# Simulate the circuit with qsim and return the full state vector.
print('qsim results:')
qsim_simulator = qsimcirq.QSimSimulator()
qsim_results = qsim_simulator.simulate(circuit)
print(qsim_results)
samples = cirq.sample_state_vector(
qsim_results.state_vector(), indices=[0, 1], repetitions=10)
print(samples)
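# Sketch (not in the original tutorial): tally the sampled bitstrings;
# for this Bell state only the correlated outcomes 00 and 11 should appear.
import collections
print(collections.Counter(map(tuple, samples)))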
# Define a circuit with measurements.
q0, q1 = cirq.LineQubit.range(2)
circuit = cirq.Circuit(
cirq.H(q0), cirq.X(q1), cirq.CX(q0, q1),
cirq.measure(q0, key='qubit_0'),
cirq.measure(q1, key='qubit_1'),
)
print("Circuit:")
print(circuit)
print()
# Simulate the circuit with Cirq and return just the measurement values.
print('Cirq results:')
cirq_simulator = cirq.Simulator()
cirq_results = cirq_simulator.run(circuit, repetitions=5)
print(cirq_results)
print()
# Simulate the circuit with qsim and return just the measurement values.
print('qsim results:')
qsim_simulator = qsimcirq.QSimSimulator()
qsim_results = qsim_simulator.run(circuit, repetitions=5)
print(qsim_results)
# Define a circuit with intermediate measurements.
q0 = cirq.LineQubit(0)
circuit = cirq.Circuit(
cirq.X(q0)**0.5, cirq.measure(q0, key='m0'),
cirq.X(q0)**0.5, cirq.measure(q0, key='m1'),
cirq.X(q0)**0.5, cirq.measure(q0, key='m2'),
)
print("Circuit:")
print(circuit)
print()
# Simulate the circuit with qsim and return just the measurement values.
print('qsim results:')
qsim_simulator = qsimcirq.QSimSimulator()
qsim_results = qsim_simulator.run(circuit, repetitions=5)
print(qsim_results)
# Define a simple circuit.
q0, q1 = cirq.LineQubit.range(2)
circuit = cirq.Circuit(cirq.H(q0), cirq.CX(q0, q1))
print("Circuit:")
print(circuit)
print()
# Simulate the circuit with qsim and return the amplitudes for |00) and |01).
print('Cirq results:')
cirq_simulator = cirq.Simulator()
cirq_results = cirq_simulator.compute_amplitudes(
circuit, bitstrings=[0b00, 0b01])
print(cirq_results)
print()
# Simulate the circuit with qsim and return the amplitudes for |00) and |01).
print('qsim results:')
qsim_simulator = qsimcirq.QSimSimulator()
qsim_results = qsim_simulator.compute_amplitudes(
circuit, bitstrings=[0b00, 0b01])
print(qsim_results)
import time
# Get a rectangular grid of qubits.
qubits = cirq.GridQubit.rect(4, 5)
# Generates a random circuit on the provided qubits.
circuit = cirq.experiments.random_rotations_between_grid_interaction_layers_circuit(
qubits=qubits, depth=16)
# Simulate the circuit with Cirq and print the runtime.
cirq_simulator = cirq.Simulator()
cirq_start = time.time()
cirq_results = cirq_simulator.simulate(circuit)
cirq_elapsed = time.time() - cirq_start
print(f'Cirq runtime: {cirq_elapsed} seconds.')
print()
# Simulate the circuit with qsim and print the runtime.
qsim_simulator = qsimcirq.QSimSimulator()
qsim_start = time.time()
qsim_results = qsim_simulator.simulate(circuit)
qsim_elapsed = time.time() - qsim_start
print(f'qsim runtime: {qsim_elapsed} seconds.')
# Use eight threads to parallelize simulation.
options = {'t': 8}
qsim_simulator = qsimcirq.QSimSimulator(options)
qsim_start = time.time()
qsim_results = qsim_simulator.simulate(circuit)
qsim_elapsed = time.time() - qsim_start
print(f'qsim runtime: {qsim_elapsed} seconds.')
# Increase maximum fused gate size to three qubits.
options = {'f': 3}
qsim_simulator = qsimcirq.QSimSimulator(options)
qsim_start = time.time()
qsim_results = qsim_simulator.simulate(circuit)
qsim_elapsed = time.time() - qsim_start
print(f'qsim runtime: {qsim_elapsed} seconds.')
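# Sketch (not part of the original benchmark): verify the two simulators
# agree up to global phase by computing the state fidelity (should be ~1).
import numpy as np
fidelity = np.abs(np.vdot(cirq_results.state_vector(),
                          qsim_results.state_vector()))**2
print(f'State fidelity between Cirq and qsim: {fidelity}')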
# Pick a pair of qubits.
q0 = cirq.GridQubit(0, 0)
q1 = cirq.GridQubit(0, 1)
# Create a circuit that entangles the pair.
circuit = cirq.Circuit(
cirq.H(q0), cirq.CX(q0, q1), cirq.X(q1)
)
print("Circuit:")
print(circuit)
options = {}
# 'k' indicates the qubits on one side of the cut.
# We'll use qubit 0 for this.
options['k'] = [0]
# 'p' and 'r' control when values are assigned to cut indices.
# There are some intricacies in choosing values for these options,
# but for now we'll set p=1 and r=0.
# This allows us to pre-assign the value of the CX indices
# and distribute its execution to multiple jobs.
options['p'] = 1
options['r'] = 0
# 'w' indicates the value pre-assigned to the cut.
# This should change for each execution.
options['w'] = 0
# Create the qsimh simulator with those options.
qsimh_simulator = qsimcirq.QSimhSimulator(options)
results_0 = qsimh_simulator.compute_amplitudes(
circuit, bitstrings=[0b00, 0b01, 0b10, 0b11])
print(results_0)
options['w'] = 1
qsimh_simulator = qsimcirq.QSimhSimulator(options)
results_1 = qsimh_simulator.compute_amplitudes(
circuit, bitstrings=[0b00, 0b01, 0b10, 0b11])
print(results_1)
results = [r0 + r1 for r0, r1 in zip(results_0, results_1)]
print("qsimh results:")
print(results)
qsim_simulator = qsimcirq.QSimSimulator()
qsim_results = qsim_simulator.compute_amplitudes(
    circuit, bitstrings=[0b00, 0b01, 0b10, 0b11])
print("qsim results:")
print(qsim_results)
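# Sketch (added check): the summed qsimh amplitudes should match the full
# qsim run (tolerance chosen arbitrarily).
import numpy as np
assert np.allclose(results, qsim_results, atol=1e-6)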
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create a list from the dictionary keys
Step2: Create a list from the dictionary values
|
<ASSISTANT_TASK:>
Python Code:
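# Note: naming this variable `dict` shadows Python's built-in dict type;
# it is kept as-is here to match the original prompt.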
dict = {'county': ['Cochice', 'Pima', 'Santa Cruz', 'Maricopa', 'Yuma'],
'year': [2012, 2012, 2013, 2014, 2014],
'fireReports': [4, 24, 31, 2, 3]}
# Create a list of keys
list(dict.keys())
# Create a list of values
list(dict.values())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Description
Step7: 1.4. Land Atmosphere Flux Exchanges
Step8: 1.5. Atmospheric Coupling Treatment
Step9: 1.6. Land Cover
Step10: 1.7. Land Cover Change
Step11: 1.8. Tiling
Step12: 2. Key Properties --> Conservation Properties
Step13: 2.2. Water
Step14: 2.3. Carbon
Step15: 3. Key Properties --> Timestepping Framework
Step16: 3.2. Time Step
Step17: 3.3. Timestepping Method
Step18: 4. Key Properties --> Software Properties
Step19: 4.2. Code Version
Step20: 4.3. Code Languages
Step21: 5. Grid
Step22: 6. Grid --> Horizontal
Step23: 6.2. Matches Atmosphere Grid
Step24: 7. Grid --> Vertical
Step25: 7.2. Total Depth
Step26: 8. Soil
Step27: 8.2. Heat Water Coupling
Step28: 8.3. Number Of Soil layers
Step29: 8.4. Prognostic Variables
Step30: 9. Soil --> Soil Map
Step31: 9.2. Structure
Step32: 9.3. Texture
Step33: 9.4. Organic Matter
Step34: 9.5. Albedo
Step35: 9.6. Water Table
Step36: 9.7. Continuously Varying Soil Depth
Step37: 9.8. Soil Depth
Step38: 10. Soil --> Snow Free Albedo
Step39: 10.2. Functions
Step40: 10.3. Direct Diffuse
Step41: 10.4. Number Of Wavelength Bands
Step42: 11. Soil --> Hydrology
Step43: 11.2. Time Step
Step44: 11.3. Tiling
Step45: 11.4. Vertical Discretisation
Step46: 11.5. Number Of Ground Water Layers
Step47: 11.6. Lateral Connectivity
Step48: 11.7. Method
Step49: 12. Soil --> Hydrology --> Freezing
Step50: 12.2. Ice Storage Method
Step51: 12.3. Permafrost
Step52: 13. Soil --> Hydrology --> Drainage
Step53: 13.2. Types
Step54: 14. Soil --> Heat Treatment
Step55: 14.2. Time Step
Step56: 14.3. Tiling
Step57: 14.4. Vertical Discretisation
Step58: 14.5. Heat Storage
Step59: 14.6. Processes
Step60: 15. Snow
Step61: 15.2. Tiling
Step62: 15.3. Number Of Snow Layers
Step63: 15.4. Density
Step64: 15.5. Water Equivalent
Step65: 15.6. Heat Content
Step66: 15.7. Temperature
Step67: 15.8. Liquid Water Content
Step68: 15.9. Snow Cover Fractions
Step69: 15.10. Processes
Step70: 15.11. Prognostic Variables
Step71: 16. Snow --> Snow Albedo
Step72: 16.2. Functions
Step73: 17. Vegetation
Step74: 17.2. Time Step
Step75: 17.3. Dynamic Vegetation
Step76: 17.4. Tiling
Step77: 17.5. Vegetation Representation
Step78: 17.6. Vegetation Types
Step79: 17.7. Biome Types
Step80: 17.8. Vegetation Time Variation
Step81: 17.9. Vegetation Map
Step82: 17.10. Interception
Step83: 17.11. Phenology
Step84: 17.12. Phenology Description
Step85: 17.13. Leaf Area Index
Step86: 17.14. Leaf Area Index Description
Step87: 17.15. Biomass
Step88: 17.16. Biomass Description
Step89: 17.17. Biogeography
Step90: 17.18. Biogeography Description
Step91: 17.19. Stomatal Resistance
Step92: 17.20. Stomatal Resistance Description
Step93: 17.21. Prognostic Variables
Step94: 18. Energy Balance
Step95: 18.2. Tiling
Step96: 18.3. Number Of Surface Temperatures
Step97: 18.4. Evaporation
Step98: 18.5. Processes
Step99: 19. Carbon Cycle
Step100: 19.2. Tiling
Step101: 19.3. Time Step
Step102: 19.4. Anthropogenic Carbon
Step103: 19.5. Prognostic Variables
Step104: 20. Carbon Cycle --> Vegetation
Step105: 20.2. Carbon Pools
Step106: 20.3. Forest Stand Dynamics
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
Step109: 22.2. Growth Respiration
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
Step111: 23.2. Allocation Bins
Step112: 23.3. Allocation Fractions
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
Step115: 26. Carbon Cycle --> Litter
Step116: 26.2. Carbon Pools
Step117: 26.3. Decomposition
Step118: 26.4. Method
Step119: 27. Carbon Cycle --> Soil
Step120: 27.2. Carbon Pools
Step121: 27.3. Decomposition
Step122: 27.4. Method
Step123: 28. Carbon Cycle --> Permafrost Carbon
Step124: 28.2. Emitted Greenhouse Gases
Step125: 28.3. Decomposition
Step126: 28.4. Impact On Soil Properties
Step127: 29. Nitrogen Cycle
Step128: 29.2. Tiling
Step129: 29.3. Time Step
Step130: 29.4. Prognostic Variables
Step131: 30. River Routing
Step132: 30.2. Tiling
Step133: 30.3. Time Step
Step134: 30.4. Grid Inherited From Land Surface
Step135: 30.5. Grid Description
Step136: 30.6. Number Of Reservoirs
Step137: 30.7. Water Re Evaporation
Step138: 30.8. Coupled To Atmosphere
Step139: 30.9. Coupled To Land
Step140: 30.10. Quantities Exchanged With Atmosphere
Step141: 30.11. Basin Flow Direction Map
Step142: 30.12. Flooding
Step143: 30.13. Prognostic Variables
Step144: 31. River Routing --> Oceanic Discharge
Step145: 31.2. Quantities Transported
Step146: 32. Lakes
Step147: 32.2. Coupling With Rivers
Step148: 32.3. Time Step
Step149: 32.4. Quantities Exchanged With Rivers
Step150: 32.5. Vertical Grid
Step151: 32.6. Prognostic Variables
Step152: 33. Lakes --> Method
Step153: 33.2. Albedo
Step154: 33.3. Dynamics
Step155: 33.4. Dynamic Lake Extent
Step156: 33.5. Endorheic Basins
Step157: 34. Lakes --> Wetlands
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'miroc', 'nicam16-8s', 'land')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Enrich your data
Step2: 3. Create a visualization
Step3: 4. Share your visualization
|
<ASSISTANT_TASK:>
Python Code:
from cartoframes.auth import set_default_credentials
set_default_credentials('creds.json')
from cartoframes.data.observatory import Enrichment
from cartoframes.data.services import Geocoding, Isolines
import pandas as pd
stores_df = pd.read_csv('http://libs.cartocdn.com/cartoframes/files/starbucks_brooklyn.csv')
stores_gdf, _ = Geocoding().geocode(stores_df, street='address')
aoi_gdf, _ = Isolines().isochrones(stores_gdf, [15*60], mode='walk')
aoi_enriched_gdf = Enrichment().enrich_polygons(aoi_gdf, ['total_pop_3cf008b3'])
from cartoframes.viz import Map, Layer, color_continuous_style, size_continuous_style
result_map = Map([
Layer(aoi_enriched_gdf, color_continuous_style('total_pop', stroke_width=0, opacity=0.7)),
Layer(stores_gdf, size_continuous_style('revenue', stroke_color='white'), default_widget=True)
])
result_map
result_map.publish('onboarding_analysis', password=None, if_exists='replace')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Show that we can read the index
Step2: Create the Spark Session or get an existing one
Step3: Create an RDD from the Rosbag file
|
<ASSISTANT_TASK:>
Python Code:
%%bash
ls -tralFh /root/project/doc/el_camino_north.bag
%%bash
# same size, no worries, just the -h (human) formatting differs in rounding
hdfs dfs -ls -h
%%time
out = !java -jar ../lib/rosbaginputformat.jar -f /root/project/doc/el_camino_north.bag
%%bash
ls -tralFh /root/project/doc/el_camino_north.bag*
from pyspark import SparkContext, SparkConf
from pyspark.sql import SparkSession
sparkConf = SparkConf()
sparkConf.setMaster("local[*]")
sparkConf.setAppName("ros_hadoop")
sparkConf.set("spark.jars", "../lib/protobuf-java-3.3.0.jar,../lib/rosbaginputformat.jar,../lib/scala-library-2.11.8.jar")
spark = SparkSession.builder.config(conf=sparkConf).getOrCreate()
sc = spark.sparkContext
fin = sc.newAPIHadoopFile(
path = "hdfs://127.0.0.1:9000/user/root/el_camino_north.bag",
inputFormatClass = "de.valtech.foss.RosbagMapInputFormat",
keyClass = "org.apache.hadoop.io.LongWritable",
valueClass = "org.apache.hadoop.io.MapWritable",
conf = {"RosbagInputFormat.chunkIdx":"/root/project/doc/el_camino_north.bag.idx.bin"})
fin
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The samples will be taken from a normal distribution and they will be used to train the neural network.
Step2: Now, in this step we are going to define the Multilayer Perceptron. This neural network allows us to map a set of input data onto a set of appropriate outputs. It consists of multiple layers of nodes in a directed graph, with each layer fully connected to the next one. In this particular case, the network will have a number of learnable parameters.
Step3: In this step we are going to define the function that will allow the learning process to search for better values of the learnable parameters.
Step4: At this moment we have both the Multilayer Perceptron and the optimizer function, so now is a good time to define a pre-training process for the discriminator. This gives it a baseline instead of starting from nothing, which reduces the time to convergence. From a mathematical point of view, this process helps the discriminator produce a distribution function more similar to the original one that we will use to generate the real samples.
Step5: Next, we are going to define a plot function in order to see what the decision boundary looks like for a linspace of samples over a defined range. Note that the results will differ between executions because the learnable parameters are random at the beginning. These parameters are initialized when we run the session on the variable D with the input_node x, which is a node that contains the samples mentioned at the beginning of this paragraph.
Step6: Generator and Discriminator
Step7: At this point we have defined both the generator and the discriminator. The moment we run the code, Generator, Discriminator1 and Discriminator2 will hold the result of applying the multilayer perceptron, i.e. the weighted values after passing through the 3 layers. These values will change as the learnable variables change. Now we have to tell the network how to improve these learnable values. For that we have to use the theoretical formula shown below.
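The formula itself did not survive extraction; assuming it is the standard minimax objective from Goodfellow et al. (2014), which is also what the code below implements, it reads
$$\min_G \max_D V(D,G) = \mathbb{E}_{x\sim p_{data}(x)}\left[\log D(x)\right] + \mathbb{E}_{z\sim p_z(z)}\left[\log\left(1 - D(G(z))\right)\right]$$
Note that the generator update in the code uses the non-saturating variant, maximizing $\log D(G(z))$ rather than minimizing $\log(1 - D(G(z)))$.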
|
<ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plots
import seaborn as sns # for pretty plots
from scipy.stats import norm
mu,sigma = 0,1
linespace = np.linspace(-6,6,1000)
plots.plot(linespace, norm.pdf(linespace, loc=mu, scale=sigma))
plots.show()
# As was said before, this function allows us to choose the correct number of outputs we want.
def MultilayerPerceptron(input, output_dimension):
w1 = tf.get_variable("w0", [input.get_shape()[1], 7], initializer= tf.random_normal_initializer())
b1 = tf.get_variable("b0", [7], initializer = tf.constant_initializer(0.0))
w2 = tf.get_variable("w1", [7, 5], initializer = tf.random_normal_initializer())
b2 = tf.get_variable("b1", [5], initializer = tf.constant_initializer(0.0))
w3 = tf.get_variable("w2", [5, output_dimension], initializer = tf.random_normal_initializer())
b3 = tf.get_variable("b2", [output_dimension], initializer = tf.constant_initializer(0.0))
# Now we will define the operators
op1=tf.nn.tanh(tf.matmul(input,w1)+b1)
op2=tf.nn.tanh(tf.matmul(op1,w2)+b2)
op3=tf.nn.tanh(tf.matmul(op2,w3)+b3)
return op3, [w1,b1,w2,b2,w3,b3]
train_iterations=20000
def optimizer(loss, varlist):
batch = tf.Variable(0)
rateoflearning = tf.train.exponential_decay(0.001,batch,train_iterations // 4, 0.95, staircase = True)
optimizer = tf.train.MomentumOptimizer(rateoflearning,0.6).minimize(loss,global_step = batch, var_list = varlist)
return optimizer
# First of all we are going to define the size of the minibatch that will be used as input in each iteration
M=200
# Now we are going to define a special discriminator different from the original one
with tf.variable_scope("trainedD"):
# This will take the input in a Mx1 dimension (200x1) which is the batch size
input_node = tf.placeholder(tf.float32, shape=(M,1))
# Same size than the batch. As many labels as possible values.
labels = tf.placeholder(tf.float32, shape=(M,1))
# The batch as input and only 1 output
D,output = MultilayerPerceptron(input_node, 1)
lossfunction = tf.reduce_mean(tf.square(D-labels))
optimizer_pre=optimizer(lossfunction,None)
sess=tf.InteractiveSession()
tf.global_variables_initializer().run()
# plot decision surface
def plot_function(D,input_node):
figure,axis=plots.subplots(1)
# original distribution
xaxis=np.linspace(-6,6,1000)
axis.plot(xaxis, norm.pdf(xaxis,loc=mu,scale=sigma), label='p_distribution')
# decision boundary
r=1000 # r is the resolution. It's a good thing to use the same that was used at the beginning with the original distribution
xaxis=np.linspace(-6,6,r)
# decision space where the algorithm is going to save initial decisor values
decisor_beginning=np.zeros((r,1))
for i in range(int(r/M)):
x=np.reshape(xaxis[M*i:M*(i+1)],(M,1))
decisor_beginning[M*i:M*(i+1)]=sess.run(D,{input_node: x})
axis.plot(xaxis, decisor_beginning, label='decision boundary', color ='black')
axis.set_ylim(0,1.1)
plots.legend()
plot_function(D,input_node)
plots.title('Initial Decision Boundary')
print(type(plots.show()))
loss_function_training=np.zeros(1000)
for i in range(1000):
d=(np.random.random(M)-0.5) * 10.0
    # We do this in order to sample from the whole domain and not only from the Gaussian distribution
labelsnew=norm.pdf(d,loc=mu,scale=sigma)
loss_function_training[i],_=sess.run([lossfunction,optimizer_pre], {input_node: np.reshape(d,(M,1)), labels: np.reshape(labelsnew,(M,1))})
# Loss function with training
plots.plot(loss_function_training, color ='black')
plots.title('Loss Function')
plots.show()
plot_function(D,input_node)
plots.show()
learned_weights = sess.run(output)
# Here we are saving the weiights learned by the pretraining
sess.close()
with tf.variable_scope("Generator"):
z_input_node = tf.placeholder(tf.float32, shape=(M,1))
Generator,output_gen = MultilayerPerceptron(z_input_node,1)
# We have to rescale by 6 to match the range between possible values
    # because the tanh output values go from -1 to 1 while the range of X is bigger
Generator = tf.multiply(6.0,Generator)
with tf.variable_scope("Discriminator") as scope:
# We have to define two different discriminators. One of them will have real
# values as input, and the other the fake values generated by the generator
# This allow us to reuse the same learnable values for both
x_input_node = tf.placeholder(tf.float32, shape=(M,1))
op, output_dis = MultilayerPerceptron(x_input_node,1)
Discriminator1 = tf.maximum(tf.minimum(op,0.99), 0.01) #boundaries for the values
scope.reuse_variables()
op,output_dis = MultilayerPerceptron(Generator, 1)
Discriminator2 = tf.maximum(tf.minimum(op,0.99), 0.01)
discriminator_objective = tf.reduce_mean(tf.log(Discriminator1)+tf.log(1-Discriminator2))
generator_objective = tf.reduce_mean(tf.log(Discriminator2))
lossfunction_dis = 1 - discriminator_objective
lossfunction_gen = 1 - generator_objective
optimized_discriminator = optimizer(lossfunction_dis, output_dis)
optimized_generator = optimizer(lossfunction_gen, output_gen)
sess=tf.InteractiveSession()
tf.global_variables_initializer().run()
for i,v in enumerate(output_dis):
sess.run(v.assign(learned_weights[i]))
def plot_function2():
# plots data generated, original distribution, decision boundary
figure,axis=plots.subplots(1)
# original distribution
xaxis=np.linspace(-6,6,1000)
axis.plot(xaxis, norm.pdf(xaxis,loc=mu,scale=sigma), label='p_original_distribution')
# decision boundary
r=1000 # resolution (number of points)
xaxis=np.linspace(-6,6,r)
decisor=np.zeros((r,1)) # decision surface
# process multiple points in parallel in same minibatch
for i in range(int(r/M)):
x=np.reshape(xaxis[M*i:M*(i+1)],(M,1))
decisor[M*i:M*(i+1)]=sess.run(Discriminator1,{x_input_node: x})
axis.plot(xaxis, decisor, label='decision boundary',color='black')
# data generated
zs=np.linspace(-6,6,r)
generated=np.zeros((r,1)) # generator function
for i in range(int(r/M)):
z=np.reshape(zs[M*i:M*(i+1)],(M,1))
generated[M*i:M*(i+1)]=sess.run(Generator,{z_input_node: z})
histo_gendata, binedges = np.histogram(generated)
axis.plot(np.linspace(-6,6,10), histo_gendata/float(r), label='data_generated',color ='yellow')
# ylim, legend
axis.set_ylim(0,1.1)
plots.legend()
plot_function2()
plots.title('Before Training')
plots.show()
# Algorithm 1 of Goodfellow et al 2014
k=1
histdis, histgen= np.zeros(train_iterations), np.zeros(train_iterations)
for i in range(train_iterations):
for j in range(k):
x= np.random.normal(mu,sigma,M) # sampled m-batch from p_data
x.sort()
z= np.linspace(-6.0,6.0,M)+np.random.random(M)*0.01 # sample m-batch from noise prior
histdis[i],_=sess.run([discriminator_objective,optimized_discriminator], {x_input_node: np.reshape(x,(M,1)), z_input_node: np.reshape(z,(M,1))})
z= np.linspace(-6.0,6.0,M)+np.random.random(M)*0.01 # sample noise prior
histgen[i],_=sess.run([generator_objective,optimized_generator], {z_input_node: np.reshape(z,(M,1))}) # update generator
plot_function2()
plots.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The following code checks the touch sensor
Step2: <img src="img/While-loop-diagram.svg.png" align="right">
Step3: Does it seem complicated? The commands you are giving the robot are very similar to natural language, only in English. If you are not convinced, look at the translation below
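The translation itself was lost in extraction; a plausible English rendering of the while loop in the code below (an assumption, not the original notebook text) is: while the touch sensor is not pressed, keep moving forward; once it is pressed, stop.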
|
<ASSISTANT_TASK:>
Python Code:
from functions import connect, touch, forward, backward, stop, disconnect, next_notebook
from time import sleep
connect()
if touch():
backward()
sleep(0.2)
stop()
else:
forward()
sleep(0.2)
stop()
while not touch():
forward()
stop()
disconnect()
next_notebook('navigation')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load wave file and convert to mono if stereo
Step2: Uniform Quantization. The quantizer is given by
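The equation was lost in extraction; reconstructed from the implementation below, the mid-rise uniform quantizer with word length $w$ bits, $K = 2^w$ levels and step size $\Delta = 2 b_{\max}/K$ is
$$\bar{b}[k] = \mathrm{sign}(b[k])\,\Delta\,\left(\left\lfloor \frac{|b[k]|}{\Delta} \right\rfloor + \frac{1}{2}\right)$$
where the input $b[k]$ is first clipped to slightly below $\pm b_{\max}$.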
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
from scipy.io import wavfile
import IPython.display as ipd
from ipywidgets import interactive
import ipywidgets as widgets
# plotting options
font = {'size' : 20}
plt.rc('font', **font)
plt.rc('text', usetex=matplotlib.checkdep_usetex(True))
matplotlib.rc('figure', figsize=(18, 6) )
fs, data = wavfile.read('./audio/33711__acclivity__excessiveexposure.wav')
#fs, data = wavfile.read('./audio/E-Core - Pingouin-Banquise_45s.wav')
#fs, data = wavfile.read('./audio/KIMIKO ISHIZAKA - Goldberg Variations BWV 988 - 01 - Aria_45s.wav')
audiodata = np.array(data.astype(float))
# Convert audio to mono if is stereo
if audiodata.ndim > 1:
audiodata = np.array((audiodata[:,0] + audiodata[:,1]) / 2)
time = np.arange(len(audiodata)) / fs
def quantize(b, w):
# b_max from signal
b_max = np.max(np.abs(b))
# saturation and clipping (slightly less than maximum value to avoid numerical issues)
b_scaled = np.clip(b, -b_max*0.999, b_max*0.999)
K = 2**w
Delta = 2*b_max / K
b_bar = np.sign(b_scaled)*Delta*(np.floor(np.abs(b_scaled)/Delta)+0.5)
return b_bar
def show_and_play(w):
x = quantize(audiodata, w)
b_max = np.max(np.abs(audiodata))
Delta = 2*b_max / 2**w
q_levels = np.linspace(-(2**w - 1)*Delta/2, (2**w - 1)*Delta/2, 2**w)
ipd.display(ipd.Audio(x, rate=fs))
plt.figure(figsize=(12,8))
plt.subplot(2,1,1)
plt.plot(time, audiodata)
if w < 7:
plt.hlines(q_levels, time[0], time[-1], colors='k', linestyles='dotted', linewidth=1)
plt.xlim((time[0], time[-1]))
plt.ylim((-np.max(np.abs(audiodata)),np.max(np.abs(audiodata))))
plt.xlabel('Time $t$ (s)', fontsize=20)
plt.ylabel('$b[k]$', fontsize=20)
plt.title('Signal and quantization levels', fontsize=20)
plt.subplot(2,1,2)
plt.plot(time, x)
if w < 7:
plt.hlines(q_levels, time[0], time[-1], colors='k', linestyles='dotted', linewidth=1)
plt.xlim((time[0], time[-1]))
plt.ylim((-np.max(np.abs(audiodata)),np.max(np.abs(audiodata))))
plt.xlabel('Time $t$ (s)', fontsize=20)
plt.ylabel(r'$\bar{b}[k]$', fontsize=20)
plt.title('Quantized signal', fontsize=20)
plt.tight_layout()
#plt.savefig('audio_quantized_w%d.pdf' % w, bbox_inches='tight')
interactive_update = interactive(lambda w: show_and_play(w), \
w = widgets.IntSlider(min=1, max=12, step=1, value=4, continuous_update=False, description='w'))
output = interactive_update.children[-1]
output.layout.height = '650px'
interactive_update
# Write the quantizer indices to a file for w = 8
w = 8
b = audiodata
# b_max from signal
b_max = np.max(np.abs(b))
# saturation and clipping (slightly less than maximum value to avoid numerical issues)
b_scaled = np.clip(b, -b_max*0.999, b_max*0.999)
K = 2**w
Delta = 2*b_max / K
# generate a binary file of 8-bit integers
indices = (np.sign(b_scaled)*(np.floor(np.abs(b_scaled)/Delta) + 0.5) + K/2 + 0.5)
indices.astype('int8').tofile('indices.bin')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Initialize the random seed so the results are reproducible, below.
Step2: Output path and filenames.
Step6: Read the BGS basis templates.
Step7: Compute the NxN chi2 matrix (see the formula after this list).
Step9: Compute and plot the number of archetypes vs chi2 threshold.
Step11: Choose a chi2 threshold value then get the final set of archetypes.
Step12: Sort by Dn(4000).
Step17: Generate some QA plots.
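A note on the chi2 matrix of Step7 (a sketch of the underlying definition, an assumption based on the usual amplitude-marginalized fit rather than the documented behavior of compute_chi2): for each pair of spectra $f_i$ and $f_j$,
$$\chi^2_{ij} = \min_{a}\sum_{\lambda}\left[f_i(\lambda) - a\,f_j(\lambda)\right]^2, \qquad a_{ij} = \frac{\sum_{\lambda} f_i(\lambda)\, f_j(\lambda)}{\sum_{\lambda} f_j(\lambda)^2},$$
which is consistent with the call below returning both the chi2 matrix and the best-fit amplitudes.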
|
<ASSISTANT_TASK:>
Python Code:
import os
import numpy as np
import matplotlib.pyplot as plt
from desispec.io.util import write_bintable, makepath
from desisim.io import write_templates
from desisim.archetypes import compute_chi2, ArcheTypes
import multiprocessing
nproc = multiprocessing.cpu_count() // 2
plt.style.use('seaborn-talk')
%matplotlib inline
seed = 123
rand = np.random.RandomState(seed)
version = 'v1.0'
outdir = os.path.join(os.getenv('DESI_ROOT'), 'spectro', 'templates', 'archetypes', 'bgs', version)
print('Setting output directory to {}'.format(outdir))
os.makedirs(outdir, exist_ok=True)
chi2file = os.path.join(outdir, 'bgs_archetypes_chi2_{}.fits'.format(version))
archfile = os.path.join(outdir, 'bgs_archetypes_{}.fits'.format(version))
def _build_templates(args):
    """Filler function for the multiprocessing."""
build_templates(*args)
def build_templates(bgs, input_meta, verbose=False):
flux, _, meta = bgs.make_templates(input_meta=input_meta, novdisp=True,
nocolorcuts=True, verbose=verbose)
return [flux.astype('f4'), meta]
def read_and_normalize(verbose=False, nproc=1, minwave=1200, maxwave=2e4,
cdelt=0.2, nominal_rmag=18.0, nominal_vdisp=1000,
subset=False):
    """Read and normalize the full set of basis templates."""
from astropy.table import vstack
from desisim.templates import BGS
from desisim.io import empty_metatable
bgs = BGS(minwave=minwave, maxwave=maxwave, cdelt=cdelt)
bgs.normline = None # no emission line
nspec = len(bgs.basemeta)
if subset:
nspec = 1000
these = rand.choice(len(bgs.basemeta), nspec)
print('Selecting a subset of {} / {} templates!'.format(nspec, len(bgs.basemeta)))
else:
these = np.arange(nspec)
input_meta = empty_metatable(nmodel=nspec, objtype='BGS')
input_meta['TEMPLATEID'] = these
input_meta['REDSHIFT'] = 0.0
input_meta['MAG'] = nominal_rmag
input_meta['VDISP'] = nominal_vdisp
input_meta['SEED'] = rand.randint(2**32, size=nspec)
# Not sure why multiprocessing isn't working in this case.
if nproc > 1:
chunk = np.array_split(these, nproc)
tempargs = list()
for ii in range(nproc):
tempargs.append((bgs, input_meta[chunk[ii]], verbose))
pool = multiprocessing.Pool(nproc)
out = pool.map(_build_templates, tempargs)
flux = np.vstack(out[0])
meta = vstack(out[1])
else:
flux, meta = build_templates(bgs, input_meta, verbose)
nspec, npix = flux.shape
print('Generated {} rest-frame BGS spectra with {} pixels.'.format(nspec, npix))
return flux, bgs.wave, meta, bgs.basemeta[these]
%time flux, wave, meta, basemeta = read_and_normalize(nproc=1, cdelt=1.0, minwave=3600, maxwave=7000, subset=True)
nspec, npix = flux.shape
%time hiresflux, hireswave, _, _ = read_and_normalize(nproc=1, cdelt=0.2, minwave=1200, maxwave=2e4, subset=True)
_, hiresnpix = hiresflux.shape
def plot_subset(nplot=25, ncol=5):
Plot a random sampling of the basis templates.
nspec, npix = flux.shape
nrow = np.ceil(nplot / ncol).astype('int')
these = rand.choice(nspec, nplot, replace=False)
fig, ax = plt.subplots(nrow, ncol, figsize=(2.2*ncol, 2.2*nrow), sharey=True, sharex=True)
for thisax, indx in zip(ax.flat, these):
thisax.plot(wave, flux[indx, :])
thisax.text(0.95, 0.93, '{:0d}'.format(indx), ha='right',
va='top', transform=thisax.transAxes, fontsize=11)
thisax.xaxis.set_major_locator(plt.MaxNLocator(3))
fig.subplots_adjust(wspace=0.05, hspace=0.05)
plot_subset()
def write_chi2(chi2):
from astropy.io import fits
print('Writing {}'.format(chi2file))
hdu = fits.PrimaryHDU(chi2)
hdu.writeto(chi2file, overwrite=True)
%time chi2, amp = compute_chi2(flux)
write_chi2(chi2)
prec = 0.1
chi2min_nominal = npix*prec**2
print(chi2min_nominal, np.log10(chi2min_nominal)) # seems high...
with np.errstate(divide='ignore'):
logchi2 = np.log10(chi2)
logchi2[chi2 == 0] = -1
fig, ax = plt.subplots(1, 2, figsize=(12, 4))
im = ax[0].imshow(logchi2, origin='lower', interpolation='nearest',
vmin=-1.0, cmap='viridis')
ax[0].set_xlabel('Spectrum Number')
ax[0].set_ylabel('Spectrum Number')
plt.colorbar(im, label='$log_{10}(\chi^{2})$', ax=ax[0])
_ = ax[1].hist(logchi2.reshape(nspec * nspec), bins=30, range=(-1.2, np.max(logchi2)))
ax[1].set_ylabel('Number')
ax[1].set_xlabel('$log_{10}(\chi^{2}$)')
def narch_vs_chi2min(Arch):
    """Determine the number of archetypes vs chi2 threshold."""
cost = np.ones(nspec) # uniform cost
chi2min = np.logspace(1, 5, 10)
print(chi2min)
narch = np.zeros_like(chi2min)
for ii, cmin in enumerate(chi2min):
iarch = Arch.get_archetypes(chi2_thresh=cmin)
narch[ii] = len(iarch)
return narch, chi2min
def qa_narch_vs_chi2min():
fig, ax = plt.subplots()
ax.scatter(np.log10(chi2min), narch)
ax.set_xlabel('$log_{10}(\chi^{2})$ Threshold')
ax.set_ylabel('Number of Archetypes')
ax.axvline(x=np.log10(chi2min_nominal), color='red', ls='-')
ax.grid(True)
Arch = ArcheTypes(chi2)
narch, chi2min = narch_vs_chi2min(Arch)
qa_narch_vs_chi2min()
def write_archetypes():
    """ToDo: Write out the responsibility indices for each archetype."""
from astropy.table import Column
outmeta = meta[iarch]
outmeta.add_column(Column(name='RESPONSIBILITY', length=len(iarch), dtype='int8'))
outmeta['RESPONSIBILITY'] = resp
print('Writing {}'.format(archfile))
write_templates(archfile, hiresflux[iarch, :], hireswave, outmeta, objtype='BGS Archetypes')
chi2_thresh = 10**2.5
print('Choosing a log10(chi2) threshold value of {:.1f}.'.format(np.log10(chi2_thresh)))
_iarch, _resp, _respindx = Arch.get_archetypes(chi2_thresh=chi2_thresh, responsibility=True)
print('Generated {} archetypes.'.format(len(_iarch)))
srt = np.argsort(meta['D4000'][_iarch])
iarch = _iarch[srt]
resp = _resp[srt]
respindx = []
for ss in srt:
respindx.append(_respindx[ss])
write_archetypes()
def _markers():
d4000 = meta['D4000']
size = 110 * (1+(resp - resp.min()) / resp.ptp())
shade = (d4000[iarch] - d4000[iarch].min()) / d4000[iarch].ptp()
col = plt.cm.coolwarm(shade)
return size, col
def qa_responsibility():
    """Generate a color-color plot with the symbol size scaled by the responsibility."""
rz = -2.5 * np.log10(meta['FLUX_R'] / meta['FLUX_Z'])
d4000 = meta['D4000']
size, col = _markers()
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(11, 4), sharey=True)
ax1.scatter(rz[iarch], resp, c=col, marker='o', s=size,
edgecolor='k')
ax1.set_xlabel('r - z')
ax1.set_ylabel('Responsibility')
ax1.grid(True)
ax2.scatter(d4000[iarch], resp, c=col, marker='o', s=size,
edgecolor='k')
ax2.set_xlabel('$D_{n}(4000)$')
ax2.grid(True)
fig.subplots_adjust(wspace=0.05)
qa_responsibility()
def qa_colorcolor():
    """Generate a color-color plot with the symbol size scaled by the responsibility."""
gr = -2.5 * np.log10(meta['FLUX_G'] / meta['FLUX_R'])
rz = -2.5 * np.log10(meta['FLUX_R'] / meta['FLUX_Z'])
d4000 = meta['D4000']
size, col = _markers()
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))
ax1.scatter(rz, gr, s=30, c='lightgray', edgecolor='k')
ax1.scatter(rz[iarch], gr[iarch], c=col, marker='o', s=size,
alpha=1, edgecolor='k')
ax1.set_xlabel(r'$r - z$')
ax1.set_ylabel(r'$g - r$')
ax1.grid(True)
ax2.scatter(d4000, rz, s=30, c='lightgray', edgecolor='k')
ax2.scatter(d4000[iarch], rz[iarch], c=col, marker='o', s=size,
alpha=1, edgecolor='k')
ax2.set_xlabel('$D_{n}(4000)$')
ax2.set_ylabel(r'$g - r$')
ax2.grid(True)
fig.subplots_adjust(wspace=0.3)
qa_colorcolor()
def qa_archetypes(ncol=5, nfilter=11):
    """Plot the archetypes and the spectra for which they're responsible."""
from scipy.signal import medfilt
_, col = _markers()
narch = len(iarch)
nrow = np.ceil(narch / ncol).astype('int')
fig, ax = plt.subplots(nrow, ncol, figsize=(2.5*ncol, 2.5*nrow), sharey=True, sharex=True)
ww = (hireswave > 3000) * (hireswave < 1e4)
for jj, (thisax, indx, rindx, rr) in enumerate(zip(ax.flat, iarch, respindx, resp)):
if rr > 1:
for ii in rindx:
thisax.plot(hireswave[ww], hiresflux[ii, ww], color='lightgrey')
smoothflux = medfilt(hiresflux[indx, ww], 11)
thisax.plot(hireswave[ww], smoothflux, color=col[jj])
thisax.xaxis.set_major_locator(plt.MaxNLocator(2))
thisax.text(0.95, 0.93, '{:04d}\nResp={}'.format(indx, rr), ha='right',
va='top', transform=thisax.transAxes, fontsize=11)
fig.subplots_adjust(wspace=0.05, hspace=0.05)
qa_archetypes()
def qa_ages_colormag():
    """Generate a color-magnitude plot for the original AGES sample."""
Mr = basemeta['SDSS_UGRIZ_ABSMAG_Z01'][:, 2]
gr = basemeta['SDSS_UGRIZ_ABSMAG_Z01'][:, 1]-basemeta['SDSS_UGRIZ_ABSMAG_Z01'][:, 2]
size, col = _markers()
fig, ax = plt.subplots(figsize=(6, 4))
ax.scatter(Mr, gr, s=30, c='lightgray', edgecolor='k')
ax.scatter(Mr[iarch], gr[iarch], c=col, marker='o', s=size,
alpha=1, edgecolor='k')
ax.set_xlabel(r'$M_{0.1r}$')
ax.set_ylabel(r'$^{0.1}(g - r)$')
ax.set_xlim(-16, -23)
ax.set_ylim(0, 1.3)
ax.grid(True)
qa_ages_colormag()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: interactions diagram
|
<ASSISTANT_TASK:>
Python Code:
%%bash
sudo apt-get update -y
sudo apt-get install -y imagemagick
%%bash
convert -resize 10% ../mldp_cover.png logo_small.png
convert -resize 25% ../mldp_cover.png logo_medium.png
# convert -resize 65% ../mldp_cover.png -bordercolor green -border 5 logo_large.png
convert -resize 180% ../mldp_cover_color.jpg -bordercolor green -border 5 logo_large.png
%%bash
dot -Tpng connected-patterns.gv -o tmp.png
convert tmp.png logo_small.png -gravity north -geometry +50+10 -composite connected-patterns.png
rm tmp.png
def display_image(filename):
from IPython.display import HTML, display
html = '<img src="{}" style="float: left; margin-right: 10px;" />'.format(filename)
print(html)
display(HTML(html))
display_image('./connected-patterns.png')
%%bash
convert mldp_interactions.png logo_large.png \
-gravity center -geometry +0+0 -composite \
-bordercolor red -border 10 \
-caption "Machine Learning Design Patterns; by Lakshmanan, Robinson, Munn; O'Reilly 2020" \
machine-learning-design-patterns.png
display_image('./machine-learning-design-patterns.png')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Implementation
Step2: Execution
Step3: HTTP
|
<ASSISTANT_TASK:>
Python Code:
help('learning_lab.03_interface_configuration')
from importlib import import_module
script = import_module('learning_lab.03_interface_configuration')
from inspect import getsource
print(getsource(script.main))
print(getsource(script.demonstrate))
run ../learning_lab/03_interface_configuration.py
from basics.odl_http import http_history
from basics.http import http_history_to_html
from IPython.core.display import HTML
HTML(http_history_to_html(http_history()))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Methods
Step2: Parameters to pass to Google Drive
Step3: Unzip the data to a directory
|
<ASSISTANT_TASK:>
Python Code:
# imports
import requests
import zipfile
import os
# Methods to pull from google drive
def download_file_from_google_drive(id, destination):
URL = "https://docs.google.com/uc?export=download"
session = requests.Session()
response = session.get(URL, params = { 'id' : id }, stream = True)
token = get_confirm_token(response)
if token:
params = { 'id' : id, 'confirm' : token }
response = session.get(URL, params = params, stream = True)
save_response_content(response, destination)
def get_confirm_token(response):
for key, value in response.cookies.items():
if key.startswith('download_warning'):
return value
return None
def save_response_content(response, destination):
    CHUNK_SIZE = 32768 # iterate by chunks of data so we don't lose information
with open(destination, "wb") as f:
for chunk in response.iter_content(CHUNK_SIZE):
if chunk: # filter out keep-alive new chunks
f.write(chunk)
# download the zip
file_id = '0Bx1nyj-4aTk9RnNnS3JfRDZ6MVU' # the ID to the file on Google Drive
destination = 'public_data.zip'
# pass through the arguements
download_file_from_google_drive(file_id, destination)
# unzip the data
zip_ref = zipfile.ZipFile(destination, 'r')
zip_ref.extractall(os.getcwd())
zip_ref.close()
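# Illustrative check (an addition): confirm the archive unpacked. The folder
# layout depends on the zip's contents, so we simply list the working directory.
print(os.listdir(os.getcwd()))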
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In the block below, we check if we are running this notebook in the CNTK internal test machines by looking for environment variables defined there. We then select the right target device (GPU vs CPU) to test this notebook. In other cases, we use CNTK's default policy to use the best available device (GPU, if available, else CPU).
Step2: There are two training modes that we can choose from for this lab
Step4: Data generation
Step5: Data caching
Step7: Utility for data fetching
Step8: Understand the data format
Step10: LSTM network setup
Step11: Training
Step12: Time to start training.
Step13: A look at the loss curve shows how the model is converging
Step14: Let us evaluate our performance for the training, validation, and test datasets. We use mean squared error as our metric which might be a little simplistic. A method that would define a ratio of how many predictions have been inside a given tolerance would make a better measure.
Step15: Visualize the prediction
|
<ASSISTANT_TASK:>
Python Code:
from matplotlib import pyplot as plt
import math
import numpy as np
import os
import pandas as pd
import random
import time
import cntk as C
try:
from urllib.request import urlretrieve
except ImportError:
from urllib import urlretrieve
%matplotlib inline
# to make things reproduceable, seed random
np.random.seed(0)
C.cntk_py.set_fixed_random_seed(1)
C.cntk_py.force_deterministic_algorithms()
if 'TEST_DEVICE' in os.environ:
if os.environ['TEST_DEVICE'] == 'cpu':
C.device.try_set_default_device(C.device.cpu())
else:
C.device.try_set_default_device(C.device.gpu(0))
# Test for CNTK version
if not C.__version__ == "2.0":
raise Exception("this notebook was designed to work with 2.0. Current Version: " + C.__version__)
isFast = True
# we need around 2000 epochs to see good accuracy. For testing 100 epochs will do.
EPOCHS = 200 if isFast else 2000
def generate_solar_data(input_url, time_steps, normalize=1, val_size=0.1, test_size=0.1):
    """Generate sequences to feed to rnn based on data frame with solar panel data.
    The csv has the format: time, solar.current, solar.total
    (solar.current is the current output in Watt, solar.total is the total production
    for the day so far in Watt hours)
    """
    # try to find the data file locally. If it doesn't exist, download it.
cache_path = os.path.join("data", "iot")
cache_file = os.path.join(cache_path, "solar.csv")
if not os.path.exists(cache_path):
os.makedirs(cache_path)
if not os.path.exists(cache_file):
urlretrieve(input_url, cache_file)
print("downloaded data successfully from ", input_url)
else:
print("using cache for ", input_url)
df = pd.read_csv(cache_file, index_col="time", parse_dates=['time'], dtype=np.float32)
df["date"] = df.index.date
# normalize data
df['solar.current'] /= normalize
df['solar.total'] /= normalize
# group by day, find the max for a day and add a new column .max
grouped = df.groupby(df.index.date).max()
grouped.columns = ["solar.current.max", "solar.total.max", "date"]
# merge continuous readings and daily max values into a single frame
df_merged = pd.merge(df, grouped, right_index=True, on="date")
df_merged = df_merged[["solar.current", "solar.total",
"solar.current.max", "solar.total.max"]]
# we group by day so we can process a day at a time.
grouped = df_merged.groupby(df_merged.index.date)
per_day = []
for _, group in grouped:
per_day.append(group)
    # split the dataset into train, validation and test sets on day boundaries
val_size = int(len(per_day) * val_size)
test_size = int(len(per_day) * test_size)
next_val = 0
next_test = 0
result_x = {"train": [], "val": [], "test": []}
result_y = {"train": [], "val": [], "test": []}
# generate sequences a day at a time
for i, day in enumerate(per_day):
# if we have less than 8 datapoints for a day we skip over the
# day assuming something is missing in the raw data
total = day["solar.total"].values
if len(total) < 8:
continue
if i >= next_val:
current_set = "val"
next_val = i + int(len(per_day) / val_size)
elif i >= next_test:
current_set = "test"
next_test = i + int(len(per_day) / test_size)
else:
current_set = "train"
max_total_for_day = np.array(day["solar.total.max"].values[0])
for j in range(2, len(total)):
result_x[current_set].append(total[0:j])
result_y[current_set].append([max_total_for_day])
if j >= time_steps:
break
# make result_y a numpy array
for ds in ["train", "val", "test"]:
result_y[ds] = np.array(result_y[ds])
return result_x, result_y
# there are 14 lstm cells, 1 for each possible reading we get per day
TIMESTEPS = 14
# 20000 is the maximum total output in our dataset. We normalize all values with
# this so our inputs are between 0.0 and 1.0 range.
NORMALIZE = 20000
X, Y = generate_solar_data("https://www.cntk.ai/jup/dat/solar.csv",
TIMESTEPS, normalize=NORMALIZE)
# process batches of 10 days
BATCH_SIZE = TIMESTEPS * 10
def next_batch(x, y, ds):
    """Get the next batch for training."""
def as_batch(data, start, count):
return data[start:start + count]
for i in range(0, len(x[ds]), BATCH_SIZE):
yield as_batch(X[ds], i, BATCH_SIZE), as_batch(Y[ds], i, BATCH_SIZE)
X['train'][0:3]
Y['train'][0:3]
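# Illustrative sanity check (an addition, not part of the tutorial): how many
# sequences landed in each split, and the shape of the corresponding targets.
for ds in ["train", "val", "test"]:
    print(ds, len(X[ds]), Y[ds].shape)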
#Define the size of the internal state
H_DIMS = 14 # error 0.007
# H_DIMS = 7 # error 0.008
# H_DIMS = 28 # error 0.00862598
# dropout 0.2, test error: 0.007
# dropout 0.5, test error: 0.0083
# dropout 0.9, test error: 0.013
def create_model(x):
    """Create the model for time series prediction."""
with C.layers.default_options(initial_state = 0.1):
m = C.layers.Recurrence(C.layers.LSTM(H_DIMS))(x)
#m = C.layers.Recurrence(C.layers.RNNStep(H_DIMS))(x) # 0.028
m = C.sequence.last(m)
m = C.layers.Dropout(0.2)(m)
# m = C.layers.Dropout(0.5)(m)
# m = C.layers.Dropout(0.9)(m)
m = C.layers.Dense(1)(m)
return m
# input sequences
x = C.sequence.input_variable(1)
# create the model
z = create_model(x)
# expected output (label), also the dynamic axes of the model output
# is specified as the model of the label input
l = C.input_variable(1, dynamic_axes=z.dynamic_axes, name="y")
# the learning rate
learning_rate = 0.005
lr_schedule = C.learning_rate_schedule(learning_rate, C.UnitType.minibatch)
# loss and error function
loss = C.squared_error(z, l)
error = C.squared_error(z, l)
# loss = C.abs(z-l)
# error = C.abs(z-l)
# use adam optimizer
momentum_time_constant = C.momentum_as_time_constant_schedule(BATCH_SIZE / -math.log(0.9))
learner = C.fsadagrad(z.parameters,
lr = lr_schedule,
momentum = momentum_time_constant)
trainer = C.Trainer(z, (loss, error), [learner])
# training
loss_summary = []
start = time.time()
for epoch in range(0, EPOCHS):
for x_batch, l_batch in next_batch(X, Y, "train"):
trainer.train_minibatch({x: x_batch, l: l_batch})
if epoch % (EPOCHS / 10) == 0:
training_loss = trainer.previous_minibatch_loss_average
loss_summary.append(training_loss)
print("epoch: {}, loss: {:.4f}".format(epoch, training_loss))
print("Training took {:.1f} sec".format(time.time() - start))
plt.plot(loss_summary, label='training loss');
# evaluate the specified X and Y data on our model
def get_error(X,Y,labeltxt):
result = 0.0
numbatches = 0
for x1, y1 in next_batch(X, Y, labeltxt):
eval_error = trainer.test_minibatch({x : x1, l : y1})
result += eval_error
numbatches += 1
return float(result)/numbatches
# Print the training and validation errors
for labeltxt in ["train", "val"]:
print("Error for {}: {:.8f}".format(labeltxt, get_error(X, Y, labeltxt)))
# Print the test error
labeltxt = "test"
print("Error for {}: {:.8f}".format(labeltxt, get_error(X, Y, labeltxt)))
# predict
f, a = plt.subplots(2, 1, figsize=(12, 8))
for j, ds in enumerate(["val", "test"]):
results = []
for x_batch, _ in next_batch(X, Y, ds):
pred = z.eval({x: x_batch})
results.extend(pred[:, 0])
# because we normalized the input data we need to multiply the prediction
# with SCALER to get the real values.
a[j].plot((Y[ds] * NORMALIZE).flatten(), label=ds + ' raw');
a[j].plot(np.array(results) * NORMALIZE, label=ds + ' pred');
a[j].legend();
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The data we want to investigate is stored in the file 'fake-data.csv'. It is data that I have found somewhere. I am not sure whether this data is real or fake. Therefore, I won't discuss the attributes of the data. The point of the data is that it is a classification problem that cannot be solved with a linear decision boundary.
Step2: We extract the features from the data frame and convert them into a NumPy feature matrix.
Step3: We extract the target column and convert it into a NumPy array.
Step4: In order to plot the instances according to their class we divide the feature matrix $X$ into two parts. $\texttt{X_pass}$ contains those examples that have class $1$, while $\texttt{X_fail}$ contains those examples that have class $0$.
Step5: Let us plot the data.
Step6: We want to split the data into a training set and a test set.
Step7: We will split the data at a ratio of $4:1$, i.e. 80% of the data is used for training and 20% is held out for testing.
Step8: In order to build a logistic regression model, we first import scikit-learn's linear_model module.
Step9: The function $\texttt{logistic_regression}(\texttt{X_train}, \texttt{Y_train}, \texttt{X_test}, \texttt{Y_test})$ takes a feature matrix $\texttt{X_train}$ and a corresponding vector $\texttt{Y_train}$ and computes a logistic regression model $M$ that best fits these data. Then, the accuracy of the model is computed using the test data $\texttt{X_test}$ and $\texttt{Y_test}$.
Step10: We use this function to build a model for our data. Initially, we will take all the available data to create the model.
Step11: Given that there are only two classes, the accuracy of our first model is quite poor.
Step12: Clearly, pure logistic regression is not working for this example. The reason is, that a linear decision boundary is not able to separate the positive examples from the negative examples. Let us add polynomial features. This enables us to create more complex decision boundaries.
Step13: This seems to work better. Let us compute the decision boundary and plot it.
Step14: The decision boundary is now given by the following equation
Step15: Let us try to add quartic features.
Step16: Let us fit the quartic model.
Step17: The accuracy on the training set has increased, but we observe that the accuracy on the test set is actually not improving. Again, we proceed to plot the decision boundary.
Step18: Plotting the decision boundary starts to get tedious.
Step19: The decision boundary looks strange. Let's get bold and try to add features of a higher power.
Step20: In order to automatize the process, we define some auxiliary functions.
Step21: Let's check this out for $n=4$.
Step22: The function $\texttt{polynomial_grid}(n, M)$ takes a number $n$ and a model $M$. It returns a meshgrid that can be used to plot the decision boundary of the model.
Step23: The function $\texttt{plot_nth_degree_boundary}(n)$ creates a polynomial logistic regression model of degree $n$. It plots both the training data and the decision boundary.
Step24: Let us test this for the polynomial logistic regression model of degree $4$.
Step25: This seems to be the same shape that we have seen earlier. It looks like the function $\texttt{plot_nth_degree_boundary}(n)$ is working. Let's try higher degree polynomials.
Step26: The score on the training set has improved. What happens if we try still higher degrees?
Step27: We captured one more of the training examples. Let's get bold, we want a $100\%$ training accuracy.
Step28: The model is getting more complicated, but it is not getting better, as the accuracy on the test set has not improved.
Step29: Let us check whether regularization can help. Below, the regularization parameter prevents the decision boundary from becoming too wiggly and thus the accuracy on the test set can increase. The function below plots all the data.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
DF = pd.read_csv('fake-data.csv')
DF.head()
X = np.array(DF[['x','y']])
Y = np.array(DF['class'])
X_pass = X[Y == 1.0]
X_fail = X[Y == 0.0]
import matplotlib.pyplot as plt
import seaborn as sns
plt.figure(figsize=(15, 10))
sns.set(style='darkgrid')
plt.title('A Classification Problem')
plt.axvline(x=0.0, c='k')
plt.axhline(y=0.0, c='k')
plt.xlabel('x axis')
plt.ylabel('y axis')
plt.xticks(np.arange(-0.9, 1.1, step=0.1))
plt.yticks(np.arange(-0.8, 1.2, step=0.1))
plt.scatter(X_pass[:,0], X_pass[:,1], color='b')
plt.scatter(X_fail[:,0], X_fail[:,1], color='r')
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=1)
import sklearn.linear_model as lm
def logistic_regression(X_train, Y_train, X_test, Y_test, reg=10000):
M = lm.LogisticRegression(C=reg, tol=1e-6)
M.fit(X_train, Y_train)
train_score = M.score(X_train, Y_train)
yPredict = M.predict(X_test)
accuracy = np.sum(yPredict == Y_test) / len(Y_test)
return M, train_score, accuracy
M, score, accuracy = logistic_regression(X, Y, X, Y)
score, accuracy
ϑ0 = M.intercept_[0]
ϑ1, ϑ2 = M.coef_[0]
plt.figure(figsize=(15, 10))
sns.set(style='darkgrid')
plt.title('A Classification Problem')
plt.axvline(x=0.0, c='k')
plt.axhline(y=0.0, c='k')
plt.xlabel('x axis')
plt.ylabel('y axis')
plt.xticks(np.arange(-0.9, 1.1, step=0.1))
plt.yticks(np.arange(-0.8, 1.2, step=0.1))
plt.scatter(X_pass[:,0], X_pass[:,1], color='b')
plt.scatter(X_fail[:,0], X_fail[:,1], color='r')
H = np.arange(-0.8, 1.0, 0.05)
P = -(ϑ0 + ϑ1 * H)/ϑ2
plt.plot(H, P, color='green')
def extend(X):
n = len(X)
fx = np.reshape(X[:,0], (n, 1)) # extract first column
fy = np.reshape(X[:,1], (n, 1)) # extract second column
return np.hstack([fx, fy, fx*fx, fy*fy, fx*fy]) # stack everthing horizontally
X_train_quadratic = extend(X_train)
X_test_quadratic = extend(X_test)
M, score, accuracy = logistic_regression(X_train_quadratic, Y_train, X_test_quadratic, Y_test)
score, accuracy
ϑ0 = M.intercept_[0]
ϑ1, ϑ2, ϑ3, ϑ4, ϑ5 = M.coef_[0]
a = np.arange(-1.0, 1.0, 0.005)
b = np.arange(-1.0, 1.0, 0.005)
A, B = np.meshgrid(a,b)
A
B
Z = ϑ0 + ϑ1 * A + ϑ2 * B + ϑ3 * A * A + ϑ4 * B * B + ϑ5 * A * B
Z
plt.figure(figsize=(15, 10))
sns.set(style='darkgrid')
plt.title('A Classification Problem')
plt.axvline(x=0.0, c='k')
plt.axhline(y=0.0, c='k')
plt.xlabel('x axis')
plt.ylabel('y axis')
plt.xticks(np.arange(-0.9, 1.1, step=0.1))
plt.yticks(np.arange(-0.8, 1.2, step=0.1))
plt.scatter(X_pass[:,0], X_pass[:,1], color='b')
plt.scatter(X_fail[:,0], X_fail[:,1], color='r')
CS = plt.contour(A, B, Z, 0, colors='green')
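# Written out, the quadratic decision boundary plotted above is the zero set of
#   ϑ0 + ϑ1*x + ϑ2*y + ϑ3*x**2 + ϑ4*y**2 + ϑ5*x*y = 0
# which is exactly the expression assigned to Z on the meshgrid.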
from sklearn.preprocessing import PolynomialFeatures
quartic = PolynomialFeatures(4, include_bias=False)
X_train_quartic = quartic.fit_transform(X_train)
X_test_quartic = quartic.fit_transform(X_test)
print(quartic.get_feature_names(['x', 'y']))
M, score, accuracy = logistic_regression(X_train_quartic, Y_train, X_test_quartic, Y_test)
score, accuracy
ϑ0 = M.intercept_[0]
ϑ1, ϑ2, ϑ3, ϑ4, ϑ5, ϑ6, ϑ7, ϑ8, ϑ9, ϑ10, ϑ11, ϑ12, ϑ13, ϑ14 = M.coef_[0]
a = np.arange(-1.0, 1.0, 0.005)
b = np.arange(-1.0, 1.0, 0.005)
A, B = np.meshgrid(a,b)
Z = ϑ0 + ϑ1 * A + ϑ2 * B + \
ϑ3 * A**2 + ϑ4 * A * B + ϑ5 * B**2 + \
ϑ6 * A**3 + ϑ7 * A**2 * B + ϑ8 * A * B**2 + ϑ9 * B**3 + \
ϑ10 * A**4 + ϑ11 * A**3 * B + ϑ12 * A**2 * B**2 + ϑ13 * A * B**3 + ϑ14 * B**4
plt.figure(figsize=(15, 10))
sns.set(style='darkgrid')
plt.title('A Classification Problem')
plt.axvline(x=0.0, c='k')
plt.axhline(y=0.0, c='k')
plt.xlabel('x axis')
plt.ylabel('y axis')
plt.xticks(np.arange(-0.9, 1.1, step=0.1))
plt.yticks(np.arange(-0.8, 1.2, step=0.1))
plt.scatter(X_pass[:,0], X_pass[:,1], color='b')
plt.scatter(X_fail[:,0], X_fail[:,1], color='r')
CS = plt.contour(A, B, Z, 0, colors='green')
X_pass_train = X_train[Y_train == 1.0]
X_fail_train = X_train[Y_train == 0.0]
def polynomial(n):
sum = 'Θ[0]'
cnt = 0
for k in range(1, n+1):
for i in range(0, k+1):
cnt += 1
sum += f' + Θ[{cnt}] * A**{k-i} * B**{i}'
print('number of features:', cnt)
return sum
polynomial(4)
def polynomial_grid(n, M):
Θ = [M.intercept_[0]] + list(M.coef_[0])
a = np.arange(-1.0, 1.0, 0.005)
b = np.arange(-1.0, 1.0, 0.005)
A, B = np.meshgrid(a,b)
return eval(polynomial(n))
def plot_nth_degree_boundary(n, C=10000):
poly = PolynomialFeatures(n, include_bias=False)
X_train_poly = poly.fit_transform(X_train)
X_test_poly = poly.fit_transform(X_test)
M, score, accuracy = logistic_regression(X_train_poly, Y_train, X_test_poly, Y_test, C)
print('The accuracy on the training set is:', score)
print('The accuracy on the test set is:', accuracy)
Z = polynomial_grid(n, M)
plt.figure(figsize=(15, 10))
sns.set(style='darkgrid')
plt.title('A Classification Problem')
plt.axvline(x=0.0, c='k')
plt.axhline(y=0.0, c='k')
plt.xlabel('x axis')
plt.ylabel('y axis')
plt.xticks(np.arange(-0.9, 1.11, step=0.1))
plt.yticks(np.arange(-0.8, 1.21, step=0.1))
plt.scatter(X_pass_train[:,0], X_pass_train[:,1], color='b')
plt.scatter(X_fail_train[:,0], X_fail_train[:,1], color='r')
CS = plt.contour(A, B, Z, 0, colors='green')
plot_nth_degree_boundary(4)
plot_nth_degree_boundary(5)
plot_nth_degree_boundary(6)
plot_nth_degree_boundary(14)
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=2)
X_pass_train = X_train[Y_train == 1.0]
X_fail_train = X_train[Y_train == 0.0]
def plot_nth_degree_boundary_all(n, C):
poly = PolynomialFeatures(n, include_bias=False)
X_train_poly = poly.fit_transform(X_train)
X_test_poly = poly.fit_transform(X_test)
M, score, accuracy = logistic_regression(X_train_poly, Y_train, X_test_poly, Y_test, C)
print('The accuracy on the training set is:', score)
print('The accuracy on the test set is:', accuracy)
Z = polynomial_grid(n, M)
plt.figure(figsize=(15, 10))
sns.set(style='darkgrid')
plt.title('A Classification Problem')
plt.axvline(x=0.0, c='k')
plt.axhline(y=0.0, c='k')
plt.xlabel('x axis')
plt.ylabel('y axis')
plt.xticks(np.arange(-0.9, 1.11, step=0.1))
plt.yticks(np.arange(-0.8, 1.21, step=0.1))
plt.scatter(X_pass[:,0], X_pass[:,1], color='b')
plt.scatter(X_fail[:,0], X_fail[:,1], color='r')
CS = plt.contour(A, B, Z, 0, colors='green')
plot_nth_degree_boundary_all(14, 100.0)
plot_nth_degree_boundary_all(20, 100000.0)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The same principle can be applied to compute a volume.
Step2: The function to integrate
Step3: Random points
Step4: Numerical computation of the integral with Monte-Carlo
Step5: The actual integral value
Step6: The error ratio
Step7: Graphical illustration
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
t = np.linspace(0., 2. * np.pi, 100)
x = np.cos(t) + np.cos(2. * t)
y = np.sin(t)
N = 100
rand = np.array([np.random.uniform(low=-3, high=3, size=N), np.random.uniform(low=-3, high=3, size=N)]).T
fig, ax = plt.subplots(1, 1, figsize=(7, 7))
ax.plot(rand[:,0], rand[:,1], '.k')
ax.plot(x, y, "-r", linewidth=2)
ax.plot([-3, -3, 3, 3, -3], [-3, 3, 3, -3, -3], "-b", linewidth=2)
ax.set_axis_off()
ax.set_xlim([-4, 4])
ax.set_ylim([-4, 4])
plt.show()
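# A minimal sketch of the idea (an illustrative addition): the fraction of
# random points that fall inside the closed curve, times the box area
# (6 x 6 = 36), estimates the area enclosed by the curve. We use matplotlib's
# point-in-polygon test; with only N = 100 points the estimate is very rough.
from matplotlib.path import Path
curve = Path(np.column_stack([x, y]))
inside = curve.contains_points(rand)
print("Estimated enclosed area:", inside.mean() * 36.0)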
import sympy as sp
sp.init_printing()
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
def f(x):
return -x**2 + 3.
N = 100000 # The number of random points
x_lower_bound = -4.0
x_upper_bound = 4.0
y_lower_bound = -16.0
y_upper_bound = 16.0
random_points = np.array([np.random.uniform(low=x_lower_bound, high=x_upper_bound, size=N),
np.random.uniform(low=y_lower_bound, high=y_upper_bound, size=N)]).T
# Points between f and the abscissa
random_points_in_pos = np.array([p for p in random_points if 0 <= p[1] <= f(p[0])])
random_points_in_neg = np.array([p for p in random_points if 0 > p[1] >= f(p[0])])
ratio_pos = float(len(random_points_in_pos)) / float(N)
ratio_neg = float(len(random_points_in_neg)) / float(N)
print('Percentage of "positive" points between f and the abscissa: {:.2f}%'.format(ratio_pos * 100))
print('Percentage of "negative" points between f and the abscissa: {:.2f}%'.format(ratio_neg * 100))
s2 = (x_upper_bound - x_lower_bound) * (y_upper_bound - y_lower_bound)
print("Box surface:", s2)
s1 = ratio_pos * s2 - ratio_neg * s2
print("Function integral (numerical computation using Monte-Carlo):", s1)
x = sp.symbols("x")
integ = sp.Integral(f(x), (x, x_lower_bound, x_upper_bound))
sp.Eq(integ, integ.doit())
actual_s1 = float(integ.doit())
error = actual_s1 - s1
print("Error ratio = {:.6f}%".format(abs(error / actual_s1) * 100.))
fig, ax = plt.subplots(1, 1, figsize=(12, 8))
x_array = np.arange(x_lower_bound, x_upper_bound, 0.01)
y_array = f(x_array)
plt.axis([x_lower_bound, x_upper_bound, y_lower_bound, y_upper_bound])
plt.plot(random_points[:,0], random_points[:,1], ',k')
plt.plot(random_points_in_pos[:,0], random_points_in_pos[:,1], ',r')
plt.plot(random_points_in_neg[:,0], random_points_in_neg[:,1], ',r')
plt.hlines(y=0, xmin=x_lower_bound, xmax=x_upper_bound)
plt.plot(x_array, y_array, '-r', linewidth=2)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Our Taylor Series approximation of $e^2$ was calculated as 7.0. Let's compare our Taylor Series approximation to Python's math.exp() function. Python's math.exp() function raises $e$ to any power. In our case, we want to use math.exp(2) because we want to calculate $e^2$.
Step2: Our Taylor Series approximation 7.0 is not that far off the calculated value 7.389056... using Python's exp() function.
Step3: The result 7.0 is the same as the result we calculated when we wrote out each term of the Taylor Series individually.
Step4: The result is 7.38871.... Let's see how close that is to $e^2$ calculated with Python's exp() function.
Step5: The result is 7.38905.... We get closer to the value of $e^2$ when 10 terms are used in the Taylor Series compared to when 5 terms were used in the Taylor Series.
Step6: If we call our function func_e_to_2() with the input argument 10, the result is the same as when we ran the for loop 10 times.
Step7: The output of our func_e_to_2() function is 7.38871....
Step8: Let's calculate $e^2$ using 10 terms with our new func_e() function.
Step9: The result is 7.38871..., the same result as before.
Step10: The result is 143.68945.... Let's see how close this value is to Python's exp() function when we make the same calculation $e^5$.
Step11: The result is 148.41315.... The Taylor Series approximation calculated by our func_e() function is pretty close to the value calculated by Python's exp() function.
Step12: Note how the error decreases as we add terms to the Taylor Series. When the Taylor Series only has 1 term, the error is 147.41.... When 10 terms are used in the Taylor Series, the error goes down to 4.7237....
Step13: The output shows that it takes 12 terms in the Taylor Series to drop the error below 1.
Step14: Let's use our func_cos() function to estimate the cosine of 45 degrees. Note that the func_cos() function computes the cosine of an angle in radians. If we want to calculate the cosine of 45 degrees using our function, we first have to convert 45 degrees into radians. Luckily, Python's math module has a function called radians() that makes the angle conversion easy.
Step15: Using our func_cos() function and 5 terms in the Taylor Series approximation, we estimate the cosine of 45 degrees is 0.707106805.... Let's check our func_cos() function compared to Python's cos() function from the math module.
Step16: Using Python's cos() function, the cosine of 45 degrees returns 0.707106781... This value is very close to the approximation calculated using our func_cos() function.
Step17: We can use a for loop to see how much better adding additional terms to our Taylor Series approximation compares to Python's cos() function.
|
<ASSISTANT_TASK:>
Python Code:
import math
x = 2
e_to_2 = x**0/math.factorial(0) + x**1/math.factorial(1) + x**2/math.factorial(2) + x**3/math.factorial(3) + x**4/math.factorial(4)
print(e_to_2)
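# For reference, the five terms summed above truncate the Maclaurin series
#   e^x = sum_{n=0}^inf x**n/n!  ≈  1 + x + x**2/2! + x**3/3! + x**4/4!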
print(math.exp(2))
import math
x = 2
e_to_2 = 0
for i in range(5):
e_to_2 += x**i/math.factorial(i)
print(e_to_2)
import math
x = 2
e_to_2 = 0
for i in range(10):
e_to_2 += x**i/math.factorial(i)
print(e_to_2)
print(math.exp(2))
import math
def func_e_to_2(n):
x = 2
e_to_2 = 0
for i in range(n):
e_to_2 += x**i/math.factorial(i)
return e_to_2
out = func_e_to_2(10)
print(out)
import math
def func_e(x, n):
e_approx = 0
for i in range(n):
e_approx += x**i/math.factorial(i)
return e_approx
out = func_e(2,10)
print(out)
out = func_e(5,10)
print(out)
out = math.exp(5)
print(out)
import math
x = 5
for i in range(1,11):
e_approx = func_e(x,i)
e_exp = math.exp(x)
e_error = abs(e_approx - e_exp)
print(f'{i} terms: Taylor Series approx= {e_approx}, exp calc= {e_exp}, error = {e_error}')
import math
x = 5
for i in range(1,20):
e_approx = func_e(x,i)
e_exp = math.exp(x)
e_error = abs(e_approx - e_exp)
    if e_error < 1:
        break
# report only the final iteration, i.e. the first one whose error is below 1
print(f'{i} terms: Taylor Series approx= {e_approx}, exp calc= {e_exp}, error = {e_error}')
import math
def func_cos(x, n):
cos_approx = 0
for i in range(n):
coef = (-1)**i
num = x**(2*i)
denom = math.factorial(2*i)
cos_approx += ( coef ) * ( (num)/(denom) )
return cos_approx
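# For reference, func_cos() sums the first n terms of the Maclaurin series
#   cos(x) = sum_{i=0}^inf (-1)**i * x**(2i) / (2i)!
# which is exactly what coef, num and denom compute above.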
angle_rad = (math.radians(45))
out = func_cos(angle_rad,5)
print(out)
out = math.cos(angle_rad)
print(out)
import math
import numpy as np
import matplotlib.pyplot as plt
# if using a Jupyter notebook, include:
%matplotlib inline
angles = np.arange(-2*np.pi,2*np.pi,0.1)
p_cos = np.cos(angles)
t_cos = [func_cos(angle,3) for angle in angles]
fig, ax = plt.subplots()
ax.plot(angles,p_cos)
ax.plot(angles,t_cos)
ax.set_ylim([-5,5])
ax.legend(['cos() function','Taylor Series - 3 terms'])
plt.show()
import math
import numpy as np
import matplotlib.pyplot as plt
# if using a Jupyter notebook, include:
%matplotlib inline
angles = np.arange(-2*np.pi,2*np.pi,0.1)
p_cos = np.cos(angles)
fig, ax = plt.subplots()
ax.plot(angles,p_cos)
# add lines for between 1 and 6 terms in the Taylor Series
for i in range(1,6):
t_cos = [func_cos(angle,i) for angle in angles]
ax.plot(angles,t_cos)
ax.set_ylim([-7,4])
# set up legend
legend_lst = ['cos() function']
for i in range(1,6):
legend_lst.append(f'Taylor Series - {i} terms')
ax.legend(legend_lst, loc=3)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Ages of people living in Kelham Island
Step2: Predicted range of weights of a child
Step3: We now have a set of binned observations (for each age range and height range, the average weight of the simulated children in that bin) to fit the model to.
|
<ASSISTANT_TASK:>
Python Code:
#the observations
times = np.array([2.0,4.0, 6.0, 8.0])[:,None]
distances = np.array([2.0,8.0,18.0,32.0])[:,None]
#model configuration
kernel = GPy.kern.Integral(input_dim=1,variances=10.0)
m = GPy.models.GPRegression(times,distances,kernel)
m.optimize()
#m.plot_f()
#prediction for after five seconds
(speed, var) = m.predict_noiseless(np.array([[5.0]]))
print("After 5 seconds the robot was travelling at %0.2f m/s" % speed);
import numpy as np
bins = np.array([19,52,114,45,22,20])[None,:]
bin_ranges = np.array([0,10,20,30,40,60,100])
#The domain of each bin is set by pairs of values in the first two dimensions (see output).
#These could overlap!
X=np.vstack([bin_ranges[1:],bin_ranges[0:-1]]).T
print(X)
Y = bins.T
kernel = GPy.kern.Integral_Limits(input_dim=2, variances=0.1, lengthscale=10.0)
m = GPy.models.GPRegression(X,Y,kernel)
m.Gaussian_noise=0.001
m.optimize()
Xtest = np.arange(0,100,1.0) #for plotting
Xpred = np.array([Xtest,np.zeros(len(Xtest))])
Ypred,YpredCov = m.predict_noiseless(Xpred.T)
SE = np.sqrt(YpredCov)
plot_hist(bins.T,bin_ranges,0,alpha=0.2)
plt.plot(Xtest,Ypred,'r-',label='Mean')
plt.plot(Xtest,Ypred+SE*1.96,'r:',label='95% CI')
plt.plot(Xtest,Ypred-SE*1.96,'r:')
plt.title('Population density of people of different ages')
plt.xlabel('Age')
plt.ylabel('Density (Frequency)')
plt.ylim([0,15])
#generate 100 simulated children heights, weights and ages.
Nchildren = 100
ages = np.random.random(Nchildren)*3.0+3.0 #ages 3 to 6
heights = 70 + ages * 7.0
weights = 9.0 + (heights-70)*0.2
weights += np.random.normal(0,0.3,weights.shape)
heights += np.random.normal(0,3,weights.shape)
ages += np.random.normal(0,0.3,weights.shape)
#group into a histogram (this is often the sort of data we'll actually have)
spacing_age = 1.0
spacing_height = 5.0
area = spacing_age * spacing_height
inputs, simple_inputs, outputs = histogram_data(ages,heights,weights,spacing_age,spacing_height)
print(" Age Range Height Range Average Weight (rounded)")
np.hstack([inputs,np.round(outputs[:,None])])
outputmean = np.mean(outputs)
outputs -= outputmean
#we have twice as many dimensions as the actual data as each
#pair of dimensions is used to specify one dimension in the domain of each bin.
#the input matrix above is used, so dimension 0 and 1 specify the start
#and end of each bin in the 'age' direction. etc...
kernel = GPy.kern.Multidimensional_Integral_Limits(input_dim=4, variances=1.0, lengthscale=[8.0,4.0])
m = GPy.models.GPRegression(1.0*inputs,1.0*area*outputs[:,None],kernel)
m.optimize()
a = 4.0
h = 100.0
Xpred = []
test_heights = range(70,140)
for h in test_heights:
Xpred.append([a,0,h,0])
Xpred = np.array(Xpred)
weight, err = m.predict_noiseless(Xpred)
plt.plot(test_heights,weight+outputmean)
plt.plot(test_heights,weight+outputmean+np.sqrt(err),':')
plt.plot(test_heights,weight+outputmean-np.sqrt(err),':')
plt.xlabel('Height / cm')
plt.ylabel('Weight / kg')
plt.title('Results from Children Age, Height, Weight simulation')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: To collect information about model training times (both on CPU and GPU), one must set the profile option in the model configuration to True
Step2: To gather statistics about how long each action takes, we must set profile to True inside the run call
Step3: Pipeline profiling
Step4: Note that the elapsed_time attribute is created whether or not we set profile to True.
Step5: Note that there is detailed information about the exact methods that are called inside each of the actions. That is a lot of data, which can give us a precise understanding of which parts of the code are our bottlenecks.
Step6: Model profiling
Step7: There is an info property that, unsurprisingly, shows a lot of interesting details regarding model itself or the training process
Step8: As with the pipeline, there is a profile_info attribute, as well as a show_profile_info method. Depending on the type of the device used (CPU or GPU), the available timing details differ.
|
<ASSISTANT_TASK:>
Python Code:
import sys
sys.path.append("../../..")
from batchflow import B, V, W
from batchflow.opensets import MNIST
from batchflow.models.torch import ResNet18
dataset = MNIST()
model_config = {
'inputs/labels/classes': 10,
'loss': 'ce',
'profile': True,
}
pipeline = (dataset.train.p
.init_variable('loss_history', [])
.to_array(channels='first', dtype='float32')
.multiply(multiplier=1/255., preserve_type=False)
.init_model('dynamic', ResNet18,
'resnet', config=model_config)
.train_model('resnet',
B.images, B.labels,
fetches='loss',
save_to=V('loss_history', mode='a'))
)
BATCH_SIZE = 64
N_ITERS = 50
pipeline.run(BATCH_SIZE, n_iters=N_ITERS, bar=True, profile=True,
bar_desc=W(V('loss_history')[-1].format('Loss is {:7.7}')))
pipeline.elapsed_time
pipeline.profile_info.head()
# timings for each action
pipeline.show_profile_info(per_iter=False, detailed=False)
# for each action show 2 of the slowest methods, based on maximum `ncalls`
pipeline.show_profile_info(per_iter=False, detailed=True, sortby=('ncalls', 'max'), limit=2)
# timings for each action for each iter
pipeline.show_profile_info(per_iter=True, detailed=False,)
# for each iter each action show 3 of the slowest methods, based on maximum `ncalls`
pipeline.show_profile_info(per_iter=True, detailed=True, sortby='tottime', limit=3)
model = pipeline.m('resnet')
model.info
# one row for every operation inside model; limit at 5 rows
model.show_profile_info(per_iter=False, limit=5)
# for each iteration show 3 of the slowest operations
model.show_profile_info(per_iter=True, limit=3)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Querying a Series
Step2: The DataFrame Data Structure
Step3: Dataframe Indexing and Loading
Step4: Querying a DataFrame
Step5: Indexing Dataframes
Step6: Missing values
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
pd.Series?
animals = ['Tiger', 'Bear', 'Moose']
pd.Series(animals)
numbers = [1, 2, 3]
pd.Series(numbers)
animals = ['Tiger', 'Bear', None]
df = pd.Series(animals)
df['number_column'] = -99999
df
numbers = [1, 2, None]
pd.Series(numbers)
import numpy as np
np.nan == None
np.nan == np.nan
np.isnan(np.nan)
sports = {'Archery': 'Bhutan',
'Golf': 'Scotland',
'Sumo': 'Japan',
'Taekwondo': 'South Korea'}
s = pd.Series(sports)
s
s.index
s = pd.Series(['Tiger', 'Bear', 'Moose'], index=['India', 'America', 'Canada'])
s
sports = {'Archery': 'Bhutan',
'Golf': 'Scotland',
'Sumo': 'Japan',
'Taekwondo': 'South Korea'}
s = pd.Series(sports, index=['Golf', 'Sumo', 'Hockey'])
s
sports = {'Archery': 'Bhutan',
'Golf': 'Scotland',
'Sumo': 'Japan',
'Taekwondo': 'South Korea'}
s = pd.Series(sports)
s
s.iloc[3]
s.loc['Golf']
s[3]
s['Golf']
sports = {99: 'Bhutan',
100: 'Scotland',
101: 'Japan',
102: 'South Korea'}
s = pd.Series(sports)
s[0] #This won't call s.iloc[0] as one might expect, it generates an error instead
s = pd.Series([100.00, 120.00, 101.00, 3.00])
s
total = 0
for item in s:
total+=item
print(total)
import numpy as np
total = np.sum(s)
print(total)
#this creates a big series of random numbers
s = pd.Series(np.random.randint(0,1000,10000))
s.head()
len(s)
%%timeit -n 100
summary = 0
for item in s:
summary+=item
%%timeit -n 100
summary = np.sum(s)
s+=2 #adds two to each item in s using broadcasting
s.head()
for label, value in s.iteritems():
s.set_value(label, value+2)
s.head()
%%timeit -n 10
s = pd.Series(np.random.randint(0,1000,10000))
for label, value in s.iteritems():
s.loc[label]= value+2
%%timeit -n 10
s = pd.Series(np.random.randint(0,1000,10000))
s+=2
s = pd.Series([1, 2, 3])
s.loc['Animal'] = 'Bears'
s
original_sports = pd.Series({'Archery': 'Bhutan',
'Golf': 'Scotland',
'Sumo': 'Japan',
'Taekwondo': 'South Korea'})
cricket_loving_countries = pd.Series(['Australia',
'Barbados',
'Pakistan',
'England'],
index=['Cricket',
'Cricket',
'Cricket',
'Cricket'])
all_countries = original_sports.append(cricket_loving_countries)
original_sports
cricket_loving_countries
all_countries
all_countries.loc['Cricket']
import pandas as pd
purchase_1 = pd.Series({'Name': 'Chris',
'Item Purchased': 'Dog Food',
'Cost': 22.50})
purchase_2 = pd.Series({'Name': 'Kevyn',
'Item Purchased': 'Kitty Litter',
'Cost': 2.50})
purchase_3 = pd.Series({'Name': 'Vinod',
'Item Purchased': 'Bird Seed',
'Cost': 5.00})
df = pd.DataFrame([purchase_1, purchase_2, purchase_3], index=['Store 1', 'Store 1', 'Store 2'])
df.head()
df.loc['Store 2']
type(df.loc['Store 2'])
df.loc['Store 1']
df.loc['Store 1', 'Cost']
df.T
df.T.loc['Cost']
df['Cost']
df.loc['Store 1']['Cost']
df.loc[:,['Name', 'Cost']]
df[['Name', 'Cost']]
df['Name']
df.drop('Store 1')
df
copy_df = df.copy()
copy_df = copy_df.drop('Store 1')
copy_df
copy_df.drop?
del copy_df['Name']
copy_df
df['Location'] = None
df
costs = df['Cost']
costs
costs+=2
costs
df
!cat olympics.csv
df = pd.read_csv('olympics.csv')
df.head()
df = pd.read_csv('olympics.csv', index_col = 0, skiprows=1)
df.head()
df.columns
for col in df.columns:
if col[:2]=='01':
df.rename(columns={col:'Gold' + col[4:]}, inplace=True)
if col[:2]=='02':
df.rename(columns={col:'Silver' + col[4:]}, inplace=True)
if col[:2]=='03':
df.rename(columns={col:'Bronze' + col[4:]}, inplace=True)
if col[:1]=='№':
df.rename(columns={col:'#' + col[1:]}, inplace=True)
df.head()
df['Gold'] > 0
# boolean mask
only_gold = df.where(df['Gold'] > 0)
only_gold.head()
only_gold['Gold'].sum()
df['Gold'].count()
only_gold = only_gold.dropna()
only_gold.head()
only_gold = df[df['Gold'] > 0]
only_gold.head()
len(df[(df['Gold'] > 0) | (df['Gold.1'] > 0)])
df[(df['Gold.1'] > 0) & (df['Gold'] == 0)]
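# Extra illustration (an addition, not in the original lecture): masks also
# compose with negation. Countries that won no gold medals of either kind:
df[~((df['Gold'] > 0) | (df['Gold.1'] > 0))].head()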
df.head()
# if changing the index, the index column will be deleted, so create a new column for the old index (country)
df['country'] = df.index
df = df.set_index('Gold')
df.head()
df = df.reset_index()
df.head()
df = pd.read_csv('census.csv')
df.head()
df['SUMLEV'].unique()
df=df[df['SUMLEV'] == 50]
df.head()
columns_to_keep = ['STNAME',
'CTYNAME',
'BIRTHS2010',
'BIRTHS2011',
'BIRTHS2012',
'BIRTHS2013',
'BIRTHS2014',
'BIRTHS2015',
'POPESTIMATE2010',
'POPESTIMATE2011',
'POPESTIMATE2012',
'POPESTIMATE2013',
'POPESTIMATE2014',
'POPESTIMATE2015']
df = df[columns_to_keep]
df.head()
df = df.set_index(['STNAME', 'CTYNAME'])
df.head()
df.loc['Michigan', 'Washtenaw County']
df.loc[ [('Michigan', 'Washtenaw County'),
('Michigan', 'Wayne County')] ]
df = pd.read_csv('log.csv')
df
df.fillna?
df = df.set_index('time')
df = df.sort_index()
df
df = df.reset_index()
df = df.set_index(['time', 'user'])
df
df = df.fillna(method='ffill')
df.head()
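# For comparison (an illustrative addition): back-filling propagates the *next*
# valid observation backwards instead of the previous one.
df_bfill = pd.read_csv('log.csv').set_index('time').sort_index().fillna(method='bfill')
df_bfill.head()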
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The result is a NumPy Record Array where the fields of the array correspond to the properties of a VesselTubeSpatialObjectPoint.
Step2: The length of the array corresponds to the number of points that make up the tubes.
Step3: Individual points can be sliced, or views can be created on individual fields.
Step4: We can easily create a histogram of the radii or visualize the point positions.
|
<ASSISTANT_TASK:>
Python Code:
import os
import sys
from itk import tubes_from_file
tubes = tubes_from_file("data/Normal071-VascularNetwork.tre")
print(type(tubes))
print(tubes.dtype)
print(len(tubes))
print(tubes.shape)
print('Entire points 0, 2:')
print(tubes[:4:2])
print('\nPosition of points 0, 2')
print(tubes['PositionInWorldSpace'][:4:2])
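# Quick summary statistics over the radius field (an illustrative addition):
radii = tubes['RadiusInWorldSpace']
print('radius min/mean/max:', radii.min(), radii.mean(), radii.max())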
%pylab inline
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(16, 6))
ax = fig.add_subplot(1, 2, 1)
ax.hist(tubes['RadiusInWorldSpace'], bins=100)
ax.set_xlabel('Radius')
ax.set_ylabel('Count')
ax = fig.add_subplot(1, 2, 2, projection='3d')
subsample = 100
position = tubes['PositionInWorldSpace'][::subsample]
radius = tubes['RadiusInWorldSpace'][::subsample]
ax.scatter(position[:,0], position[:,1], position[:,2], s=(2*radius)**2)
ax.set_title('Point Positions')
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z');
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: For now, start from here...
Step2: Part 3 - Missing Values
Step3: Part 4 - Join & output
|
<ASSISTANT_TASK:>
Python Code:
with open("./cat_food.txt", 'r') as fi:
food = fi.read().splitlines()
# Example:
# target_cat = ['Restaurants', 'Food'] # to be continued...
target_cat = food
df = pd.read_pickle("./UC01_df_uc_open.p")
print df.shape
df = df[df.categories.apply(lambda x: not set(x).isdisjoint(set(target_cat)))]
print df.shape
# load data
# df = pd.read_pickle("./UC01_df_uc_open.p")
df.index = df.business_id.values
att = df.attributes
# extract attributes
a = att.apply(lambda x: yaml.load('['+','.join(x)+']'))
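# Illustrative peek at one parsed attribute list, to sanity-check the YAML
# round-trip (an addition; assumes the frame is non-empty).
print a.iloc[0]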
# find full-size attribute set
# if subattribute exists, use '_' to connect them.
att_all = set()
for row in a:
for i in row:
if not isinstance(i.values()[0], dict):
att_all.add(i.keys()[0])
else:
prefix = i.keys()[0]
for k in i.values()[0].iterkeys():
suffix = k
temp = prefix + '_' + suffix
att_all.add(temp)
len(att_all)
# create full-size attribute table
# index = business_id
# col = att_all
tab = pd.DataFrame(columns=att_all, index=df.index)
for ind in tab.index:
for j in a[ind]:
if not isinstance(j.values()[0], dict):
tab.loc[ind, j.keys()[0]] = j.values()[0]
else:
prefix = j.keys()[0]
for k, v in j.values()[0].iteritems():
suffix = k
temp = prefix + '_' + suffix
tab.loc[ind, temp] = v
print tab.shape[0]
tab.count(axis=0).sort_values(ascending=False)[1:20]
# 729 * 50% ≈ 365 -> 3 attributes -> RestaurantsPriceRange2 / BusinessParking / BikeParking
print tab.shape[1]
tab.count(axis=1).sort_values(ascending=False)[1:20]
pl.violinplot(tab.count(axis=1).values,vert=False)
# sort columns alphabetically (sort_index reorders the data together with the
# labels; assigning sorted labels to tab.columns would silently mislabel columns)
tab = tab.sort_index(axis=1)
# shape
print df.shape, tab.shape
# join two table
df_with_attribute = df.join(tab)
df_with_attribute.to_pickle("./UC02_df_uc_food_att.p")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Muki NN
|
<ASSISTANT_TASK:>
Python Code:
img_count = 0
def showimg(img):
muki_pr = np.zeros((500,500,3))
l =img.tolist()
count = 0
for x in range(500):
for y in range(500):
if l[count][0] >= .5:
muki_pr[y][x] = 1
else:
muki_pr[y][x] = 0
count += 1
plt.imshow(muki_pr)
def saveimg(fname,img):
muki_pr = np.zeros((500,500,3))
l =img.tolist()
count = 0
for x in range(500):
for y in range(500):
if l[count][0] >= .5:
muki_pr[y][x] = 1
else:
muki_pr[y][x] = 0
count += 1
plt.imsave(fname,muki_pr)
def read_muki():
img_data = np.random.randn(250000,2)
xy_data = []
import random
f = open('./muki.txt','rb')
count = 0
for line in f:
y,x,c = line.split()
xy_data.append([float(x),float(y)])
x = (float(x) )*100. + 250
y = (float(y) )*100. + 250
c = float(c)
if c >= 1:
img_data[count] = (1,0)
else:
img_data[count] = (0,1)
# img_data[count] = c
count = count + 1
return np.matrix(xy_data),img_data
xy_data,img_data = read_muki()
showimg(img_data)
print xy_data[:10]
print img_data[:10]
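# Illustrative class-balance check (an addition): the mean of the first
# one-hot column is the fraction of pixels labelled "on".
print np.mean(img_data[:, 0])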
from keras.models import Sequential
model = Sequential()
from keras.layers import Dense, Activation
from keras.optimizers import SGD
model.add(Dense(output_dim=128, input_dim=2))
model.add(Activation("relu"))
model.add(Dense(output_dim=128, input_dim=128))
model.add(Activation("relu"))
model.add(Dense(output_dim=8, input_dim=128))
model.add(Activation("relu"))
model.add(Dense(output_dim=2, input_dim=8))
model.add(Activation("softplus"))
model.compile(loss='mean_squared_error', optimizer=SGD(lr=0.05, momentum=0.9, nesterov=True), metrics=['accuracy'])
from IPython.display import SVG
from keras.utils.visualize_util import model_to_dot
SVG(model_to_dot(model).create(prog='dot', format='svg'))
for ii in range(50):
model.fit(xy_data, img_data, nb_epoch=5, batch_size=500)
result = model.predict(xy_data)
saveimg('./imgs/muki_relu_'+ str(ii) +'.png', result)
result = model.predict(xy_data)
showimg(result)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A function to simulate trajectories.
Step2: Simulation of a large number of events
Step3: In this case, the info we are interested in is the rate column - this has not had noise added to it. The rates are drawn from a normal distribution centred at 80 nt/s, with a standard deviation of 40 nt/s. Each row is implicitly one second, and the distance travelled in each time step is therefore numerically equal to the rate.
Step4: Plot the rate distribution and overlay the above pdf.
Step5: A key point to note here is that rates are implicitly time-weighted. If you were to fit line segments and make each segment an equal-contributor to the rate distribution, then the distribution would look quite different.
Step6: Not a very good fit. Distance weighting shifts the distribution to the right, and also narrows it. So what <i>is</i> a good fit?
Step7: This fit is OK for a guess, but how can we get there from first principles? Let's multiply by x and normalise, i.e. the same thing we did to the histogram.
Step8: Perfect, but so far this is merely a tautology
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib as mpl
import matplotlib.pyplot as plt
import random
import numpy as np
import beadpy
import pandas as pd
import math
%matplotlib inline
def trajectory_simulator(pre_duration = 250, #Mean event start time
pre_sigma = 50, #Sigma of event start time distribution
post_duration = 250, #The bead stays on for this long at the end of the trajectory
mean_duration = 100, #Mean event duration
min_duration = 10, #Minimum event duration
mean_rate = 500, #Mean rate (distance units/timestep)
rate_sigma = 50, #Sigma of the rate distribution
noise_sigma = 500, #Mean sigma for the bead movement
noise_sigma_sigma = 100, #Sigma of the noise sigma distribution
pause_prob = 0.001, #Probability of entering a pause in a given timestep
pause_duration_prob = 0.2, #Probability of remaining paused in a given timestep once a pause has begun.
                         rate_change_prob = 0.1, #Probability that the rate will change in a given timestep
DNA_length = 15000, #Length of the DNA - a hard limit on the event length
trajectory_number = 0):
length = int(np.random.exponential(mean_duration)) #Length is drawn from an exponential distribution.
while length < min_duration:
length = int(np.random.exponential(mean_duration)) #The length should be at least a certain value.
current_rate = 0
pre = int(np.random.normal(loc=pre_duration, scale = pre_sigma))
post = post_duration
rate = 0
ratesequence = [0]*pre
noise_sigmaval = int(np.random.normal(loc=noise_sigma, scale = noise_sigma_sigma))
position = [0]*pre
nucleotides = []
current_position = 0
for i in range(0,pre):
nucleotides.append(float(position[i]+np.random.normal(loc=0.0, scale = noise_sigmaval)))
for i in range(0,length):
randomnumber = random.random() #generate a random float between 0 and 1
if i == 0: #Start the event
rate = np.random.normal(loc=mean_rate, scale = rate_sigma)
elif not rate == 0: #When during an event/no pause.
if (randomnumber <= pause_prob): #Start a pause.
rate = 0
elif (randomnumber > pause_prob) & (randomnumber <= (pause_prob + rate_change_prob)): #Change the rate
rate = np.random.normal(loc=mean_rate, scale = rate_sigma)
else: #No rate change
rate = rate #just FYI!
elif (rate == 0) & (not i ==0): #When in a pause.
if (randomnumber < (1- pause_duration_prob)): #End the pause.
rate = np.random.normal(loc=mean_rate, scale = rate_sigma)
else:
rate = 0 #Continue the pause.
ratesequence.append(rate)
current_position = current_position + rate
position.append(current_position)
nucleotides.append(float(current_position+np.random.normal(loc=0.0, scale = noise_sigmaval)))
if current_position > DNA_length:
length = i
break
for i in range(0,post):
ratesequence.append(0)
position.append(current_position)
nucleotides.append(float(current_position+np.random.normal(loc=0.0, scale = noise_sigmaval)))
time = range(0,len(nucleotides))
results = pd.DataFrame({'time' : time,
'nucleotides' : nucleotides,
'rate' : ratesequence,
'position' : position})
results['trajectory'] = trajectory_number
return results
phi29results = pd.DataFrame()
for j in range(0,1000):
temp = trajectory_simulator(pre_duration = 300,
pre_sigma = 20,
post_duration = 250,
mean_duration = 100,
min_duration = 10,
mean_rate = 80,
rate_sigma = 40,
noise_sigma = 100,
noise_sigma_sigma = 20,
pause_prob = 0.1,
pause_duration_prob = 0.5,
rate_change_prob = 0.01,
DNA_length = 15000,
trajectory_number = j)
phi29results = phi29results.append(temp)
phi29results.head()
def pdf(sigma, mu, x_range):
x = x_range
y = []
for i in x:
y.append((1/(math.sqrt(2*3.14159*sigma**2)))*(math.exp(-1*(((i - (mu))**2) / (2*sigma**2)))))
return x,y
x,y = pdf(40, 80,np.arange(0,250,1.0))
plt.hist(phi29results.rate[phi29results.rate > 0], bins = 30, normed = True)
plt.plot(x,y,color="red", lw = 4)
plt.title("Time-weighted rate distribution")
plt.xlabel("Rate nt/s")
plt.ylabel("Time (fractional)")
plt.hist(phi29results.rate[phi29results.rate > 0], bins = 30, weights = phi29results.rate[phi29results.rate > 0], normed = True)
plt.plot(x,y,color="red", lw = 4)
plt.title("distance-weighted rate distribution")
plt.xlabel("Rate nt/s")
plt.ylabel("Nucleotides synthesised (fractional)")
x,y = pdf(35, 98,np.arange(0,250,1.0))
plt.hist(phi29results.rate[phi29results.rate > 0], bins = 30, weights = phi29results.rate[phi29results.rate > 0], normed = True)
plt.plot(x,y,color="red",lw=4)
plt.title("distance-weighted rate distribution")
plt.xlabel("Rate nt/s")
plt.ylabel("Nucleotides synthesised (fractional)")
x,y = pdf(40, 80,np.arange(0,250,1.0)) #regenerate the original pdf with sigma = 40 and mu = 80
a = []
for i in range(0,len(x)):
a.append(y[i]*i) #Multiply y by x
asum = sum(a)
z = []
for i in a:
z.append(i/asum) #Normalise
plt.plot(x,y,color="red")
plt.plot(x,z,color="green")
plt.hist(phi29results.rate[phi29results.rate > 0], bins = 30, weights = phi29results.rate[phi29results.rate > 0], normed = True)
plt.plot(x,z,color="green",lw=4)
plt.title("distance-weighted rate distribution")
plt.xlabel("Rate nt/s")
plt.ylabel("Nucleotides synthesised (fractional)")
def weighted_pdf(sigma, mu, x_range):
x = x_range
a = []
for i in x:
a.append(((i/(math.sqrt(2*3.14159*sigma**2)))*(math.exp(-1*(((i - (mu))**2) / (2*sigma**2))))))
#Note that x is now in the numerator of the first part of the Gaussian function.
y = []
for i in a:
y.append(i/sum(a))
return x,y
x,y = pdf(40,80,np.arange(0,250,1.0))
x, z = weighted_pdf(40,80,np.arange(0,250,1.0))
plt.plot(x,y,color="red")
plt.plot(x,z,color="green")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Image Classification
Step2: Explore the Data
Step5: Implement Preprocess Functions
Step8: One-hot encode
Step10: Randomize Data
Step12: Check Point
Step17: Build the network
Step20: Convolution and Max Pooling Layer
Step23: Flatten Layer
Step26: Fully-Connected Layer
Step29: Output Layer
Step32: Create Convolutional Model
Step35: Train the Neural Network
Step37: Show Stats
Step38: Hyperparameters
Step40: Train on a Single CIFAR-10 Batch
Step42: Fully Train the Model
Step45: Checkpoint
|
<ASSISTANT_TASK:>
Python Code:
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 2
sample_id = 10
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
def normalize(x):
    """Normalize a list of sample image data in the range of 0 to 1
    : x: List of image data. The image shape is (32, 32, 3)
    : return: Numpy array of normalized data
    """
mx = np.amax(x)
mn = np.amin(x)
return (x - mn) / (mx - mn)
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_normalize(normalize)
def one_hot_encode(x):
    """One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
    : x: List of sample Labels
    : return: Numpy array of one-hot encoded labels
    """
return np.eye(10)[x]
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_one_hot_encode(one_hot_encode)
"""DON'T MODIFY ANYTHING IN THIS CELL"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
"""DON'T MODIFY ANYTHING IN THIS CELL"""
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
import tensorflow as tf
def neural_net_image_input(image_shape):
    """Return a Tensor for a batch of image input
    : image_shape: Shape of the images
    : return: Tensor for image input.
    """
    return tf.placeholder(tf.float32, \
        shape=(None, image_shape[0], image_shape[1], image_shape[2]), name="x")
def neural_net_label_input(n_classes):
    """Return a Tensor for a batch of label input
    : n_classes: Number of classes
    : return: Tensor for label input.
    """
    return tf.placeholder(tf.float32, \
        shape=(None, n_classes), name="y")
def neural_net_keep_prob_input():
    """Return a Tensor for keep probability
    : return: Tensor for keep probability.
    """
    return tf.placeholder(tf.float32, \
        shape=(None), name="keep_prob")
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
    """Apply convolution then max pooling to x_tensor
    :param x_tensor: TensorFlow Tensor
    :param conv_num_outputs: Number of outputs for the convolutional layer
    :param conv_ksize: kernel size 2-D Tuple for the convolutional layer
    :param conv_strides: Stride 2-D Tuple for convolution
    :param pool_ksize: kernel size 2-D Tuple for pool
    :param pool_strides: Stride 2-D Tuple for pool
    : return: A tensor that represents convolution and max pooling of x_tensor
    """
# TODO: Implement Function
W = tf.Variable(tf.truncated_normal([conv_ksize[0], conv_ksize[1], x_tensor.shape[3].value, conv_num_outputs],
mean=0, stddev=0.1))
b = tf.Variable(tf.zeros(conv_num_outputs))
x = tf.nn.conv2d(x_tensor, W, strides=[1, conv_strides[0], conv_strides[1], 1], padding='SAME')
x = tf.nn.bias_add(x, b)
x = tf.nn.relu(x)
x = tf.nn.max_pool(x, \
ksize=[1, pool_ksize[0], pool_ksize[1], 1], \
strides=[1, pool_strides[0], pool_strides[1], 1], \
padding='SAME')
return x
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_con_pool(conv2d_maxpool)
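# Illustrative shape check (an addition, not part of the project template):
# with 'SAME' padding and a 2x2 pooling stride, each call halves the spatial
# dimensions while the channel count becomes conv_num_outputs.
_demo_in = tf.placeholder(tf.float32, shape=(None, 32, 32, 3))
_demo_out = conv2d_maxpool(_demo_in, 16, (4, 4), (1, 1), (2, 2), (2, 2))
print(_demo_out.get_shape())  # expected: (?, 16, 16, 16)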
def flatten(x_tensor):
    """Flatten x_tensor to (Batch Size, Flattened Image Size)
    : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
    : return: A tensor of size (Batch Size, Flattened Image Size).
    """
x = x_tensor.get_shape().as_list()
size = x[1] * x[2] * x[3]
return tf.reshape(x_tensor, [-1, size])
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_flatten(flatten)
def fully_conn(x_tensor, num_outputs):
    """Apply a fully connected layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of output that the new tensor should be.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
# TODO: Implement Function
W = tf.Variable(tf.truncated_normal([x_tensor.get_shape().as_list()[1], num_outputs], mean=0, stddev=0.1))
b = tf.Variable(tf.zeros(num_outputs))
x = tf.add(tf.matmul(x_tensor, W), b)
x = tf.nn.relu(x)
return x
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_fully_conn(fully_conn)
def output(x_tensor, num_outputs):
    """Apply an output layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of output that the new tensor should be.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
# TODO: Implement Function
W = tf.Variable(tf.truncated_normal([x_tensor.get_shape().as_list()[1], num_outputs], mean=0, stddev=0.1))
b = tf.Variable(tf.zeros(num_outputs))
x = tf.add(tf.matmul(x_tensor, W), b)
return x
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_output(output)
def conv_net(x, keep_prob):
    """Create a convolutional neural network model
    : x: Placeholder tensor that holds image data.
    : keep_prob: Placeholder tensor that holds dropout keep probability.
    : return: Tensor that represents logits
    """
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
conv_num_outputs = {
"layer1": 16,
"layer2": 64,
"layer3": 32,
}
conv_ksize = {
"layer1": (4,4),
"layer2": (4,4),
"layer3": (4,4),
}
conv_strides = {
"layer1": (1,1),
"layer2": (1,1),
"layer3": (1,1),
}
pool_ksize = {
"layer1": (2,2),
"layer2": (2,2),
"layer3": (2,2),
}
pool_strides = {
"layer1": (2,2),
"layer2": (2,2),
"layer3": (2,2),
}
x = conv2d_maxpool(x, conv_num_outputs["layer1"], conv_ksize["layer1"], conv_strides["layer1"], pool_ksize["layer1"], pool_strides["layer1"])
x = conv2d_maxpool(x, conv_num_outputs["layer2"], conv_ksize["layer2"], conv_strides["layer2"], pool_ksize["layer2"], pool_strides["layer2"])
x = conv2d_maxpool(x, conv_num_outputs["layer3"], conv_ksize["layer3"], conv_strides["layer3"], pool_ksize["layer3"], pool_strides["layer3"])
# TODO: Apply a Flatten Layer
# Function Definition from Above:
x = flatten(x)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
x = fully_conn(x, 16)
x = fully_conn(x, 32)
x = tf.nn.dropout(x, keep_prob)
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
x = output(x, 10)
# TODO: return output
return x
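# Shape trace (added, assuming the 32x32x3 CIFAR-10 input built below):
# conv/pool: 32x32x3 -> 16x16x16 -> 8x8x64 -> 4x4x32
# flatten: 512 -> fully connected: 16 -> 32 -> dropout -> logits: 10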
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that it can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
"""
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
"""
session.run(optimizer, feed_dict={
x: feature_batch,
y: label_batch,
keep_prob: keep_probability})
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_train_nn(train_neural_network)
def print_stats(session, feature_batch, label_batch, cost, accuracy):
"""
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
"""
loss = session.run(cost, feed_dict={
x: feature_batch,
y: label_batch,
keep_prob: 1.0})
validation_accuracy = session.run(accuracy, feed_dict={
x: valid_features,
y: valid_labels,
keep_prob: 1.0})
print('Loss: ' + str(loss) + ', Validation accuracy: ' + str(validation_accuracy))
# TODO: Tune Parameters
epochs = 15
batch_size = 128
keep_probability = 0.75
"""DON'T MODIFY ANYTHING IN THIS CELL"""
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
"""DON'T MODIFY ANYTHING IN THIS CELL"""
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
"""DON'T MODIFY ANYTHING IN THIS CELL"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
"""Test the saved model against the test dataset"""
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Assignments versus equations
Step2: Calling Functions
Step3: Later we will discuss how to build our own functions
Step4: The type function
Step5: Null values
Step6: When we fetch the value None in the interactive interpreter, no result is printed out.
Step7: We can check whether there is a result or not using the is operator
Step8: Converting values between types
Step9: Converting to and from ASCII values
Step10: Variables are not typed
Step11: Polymorphism
Step12: Conditional Statements and Indentation
Step13: Lists
Step14: Lists can contain different types of variable, even in the same list.
Step15: Mutable Datastructures
Step16: References
Step17: The above code creates two different references (named X and Y) to the same value [1, 2, 3]
Step18: State and identity
Step19: Iteration
Step20: To perform a statement a certain number of times, we can iterate over a list of the required size.
Step21: For loops with the range function
Step22: List Indexing
Step23: List Slicing
Step24: If we leave out the starting index it implies the beginning of the list
Step25: If we leave out the final index it implies the end of the list
Step26: Negative Indexing
Step27: Collections
Step28: Tuples are immutable
Step29: Sets
Step30: Alternatively we can write a set literal using the { and } brackets.
Step31: Sets are mutable
Step32: Duplicates are automatically removed
Step33: Sets are unordered
Step34: Operations on sets
Step35: Intersection
Step36: Difference $X - Y$
Step37: Dictionary
Step38: Defining new functions
Step39: Local Variables
Step40: Functional Programming
Step41: Mapping the elements of a collection
Step42: List Comprehensions
Step43: If we want a set instead of a list we can use a set comprehension
Step44: Cartesian product using list comprehensions
Step45: Cartesian products with other collections
Step46: Anonymous Function Literals
Step47: Filtering data
Step48: We can use both filter() and map() on other collections such as strings or sets.
Step49: Filtering using a list comprehension
|
<ASSISTANT_TASK:>
Python Code:
5 + 5
x = 5
y = 'Hello There'
z = 10.5
x + 5
x = 1
print ('The value of x is ', x)
x = 2.5
print ('Now the value of x is ', x)
x = 'hello there'
print ('Now it is ', x)
print (round(3.14))
help(round)
import builtins
dir(builtins)
print (y)
y + 5
type(1)
type('hello')
type(2.5)
type(True)
type(y)
result = None
print(result)
type(result)
result is None
x = 5
x is None
x = 100
print (type(x))
print (x)
y = float(x)
print (type(y))
print (y)
print (int(y))
print (ord('a'))
print (chr(97))
y = 'hello'
print ('The type of the value referred to by y is ', type(y))
y = 5.0
print ('And now the type of the value is ', type(y))
1//5
1/5
1.0 / 5.0
1 + 1
'a' + 'b'
'1' + '1'
x = 5
if x > 0:
print ('x is strictly positive')
print (x)
print ('finished.')
x = 0
if x > 0:
print ('x is strictly positive')
print (x)
print ('finished')
l = ['first', 'second', 'third']
print (l)
another_list = ['first', 'second', 'third', 1, 2, 3]
print (another_list)
l.append('fourth')
print (l)
X = [1, 2, 3]
Y = X
X.append(4)
print(X)
print(Y)
X = [1, 2]
Y = [1]
Y.append(2)
X == Y
X is Y
Y.append(3)
X
X = Y
X is Y
for i in l:
print (i)
for i in [0, 1, 2, 3]:
print ("Hello!")
for i in range(4):
print ("Hello!")
print (l[0])
print (l[1])
print (l[0:2])
print (l[:2])
print (l[2:])
print (l[-1])
print (l[:-1])
tuple1 = (50, 'hello')
print(tuple1)
print(tuple1[0])
type(tuple1)
tuple1.append(2)
tuple1[0] = 5
X = set([1, 2, 3, 3, 4])
print(X)
type(X)
X = {1, 2, 3, 4}
type(X)
X.add(5)
print(X)
X.add(5)
print(X)
X[0]
X = {1, 2, 3}
Y = {4, 5, 6}
X.union(Y)
X = {1, 2, 3, 4}
Y = {3, 4, 5}
X.intersection(Y)
X - Y
person = {'name':'sofian','gender':'male', 'age':17}
person['new'] = 'new'
print (person.keys())
print (person.values())
print (person['name'])
print (person)
def squared(x):
return x ** 2
print (squared(5))
def squared(x):
result = x ** 2
return result
print (squared(5))
print (result)
print(squared)
y = squared
print (y)
print (y(5))
list(map(squared, [1, 2, 3, 4]))
[squared(i) for i in [1, 2, 3, 4]]
{squared(i) for i in [1, 2, 3, 4]}
A = {'x', 'y', 'z'}
B = {1, 2, 3}
{(a,b) for a in A for b in B}
first_names = ('Steve', 'John', 'Peter')
surnames = ('Smith', 'Doe')
[(first_name, surname) for first_name in first_names for surname in surnames]
list(map(lambda x: x ** 2, [1, 2, 3, 4]))
list(filter(lambda x: x > 0, [-5, 2, 3, -10, 0, 1]))
list(map(ord, 'hello world'))
''.join(list(filter(lambda x: x != 'l' , 'hello world')))
data = [-5, 2, 3, -10, 0, 1]
[x for x in data if x > 0]
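# A comprehension can also filter and transform in a single pass; this is
# equivalent to applying map() over the output of filter():
[x ** 2 for x in data if x > 0]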
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's notice a few key things
Step2: Slicing
Step3: Exercises
Step4: Exercises
Step5: Dictionaries
Step6: Unlike with lists, we cannot access elements of the dictionary with slicing. We must instead use the keys. Let's see how to do this.
Step7: As we can see, the key returns us the value. This can be useful if you have a bunch of items that need to be paired together.
Step8: Exercises
Step9: Converting Between Containers
|
<ASSISTANT_TASK:>
Python Code:
list1=[1,2,3,4]
print(list1)
list2=["a","b","c","d"]
print(list2)
list3=[True, False, True]
print(list3)
list3=[1,2,"c","d"]
print(list3)
list4=[2>3, 2, 4, 3>2]
print(list4)
la=[1,2,3]
lb=["1","2","3"]
print(la+lb)
la.append(4)
print(la)
lb+=la
print(lb)
la=[1,2,3,4,5,6,7,8,9,10]
print(la[0])
print(la[3])
print(la[-1])
print(la[-3])
print(la[0:10:2])
print(la[3::2])
print(la[2:4:])
print(la[2:4])
t = {1,2,3,3,3,3,3,3,3,3,3,3,3,3,3,4,5}
print(t)
s = {1,4,5,7,8}
print(t-s)
print(s-t)
print(t^s)
print(t-s|s-t)
print(t^s==t-s|s-t)
a = (1,2,3,4,5)
b = ('a', 'b', 'c')
print(a)
print(b)
print(a+b)
#Let's say we have some students take a test and we want to store their scores
scores = {'Sally':89, 'Lucy':75, 'Jeff':45, 'Jose':96}
print(scores)
#However, we cannot combine two dictionaries with the + operator; the last print below raises a TypeError
scores2 = {'Devin':64, 'John':23, 'Jake':75}
print(scores2)
print(scores+scores2)
print(scores['Sally'])
print(scores2['John'])
print(scores.keys())
print(scores.values())
print('Devin' in scores2)
print(2 in a)
print('Hello World' not in scores)
a = [1,2,3]
b = (1,2,3)
c = {1,2,3}
d = {1:2,3:4}
print(list(b))
print(list(c))
print(list(d))
print(tuple(a))
print(tuple(c))
print(tuple(d))
print(set(a))
print(set(b))
print(set(d))
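# Note (added): converting a dict iterates over its keys by default; use
# .items() to keep the key/value pairs, and dict() can rebuild a mapping:
print(list(d.items()))
print(dict(d.items()))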
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Using Mathematica, I can find the 4*2 equations
Step2: I am going to substitute all density matrix elements using their corrosponding network expressions.
Step3: Minimization
Step4: Functions
Step5: Practice
|
<ASSISTANT_TASK:>
Python Code:
# This line configures matplotlib to show figures embedded in the notebook,
# instead of opening a new window for each figure. More about that later.
# If you are using an old version of IPython, try using '%pylab inline' instead.
%matplotlib inline
%load_ext snakeviz
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit
import matplotlib.pyplot as plt
from matplotlib.lines import Line2D
import timeit
import pandas as pd
import plotly.plotly as py
from plotly.graph_objs import *
import plotly.tools as tls
# hbar=1.054571726*10**(-34)
hbar=1
delm2E=1
lamb=1 ## lambda for neutrinos
lambb=1 ## lambda for anti neutrinos
gF=1
nd=1 ## number density
ndb=1 ## number density
omega=1
omegab=1
## Here are some matrices to be used
elM = np.array([[1,0],[0,0]])
bM = 1/2*np.array( [ [ - 0.38729833462,0.31622776601] , [0.31622776601,0.38729833462] ] )
## sqareroot of 2
sqrt2=np.sqrt(2)
#r11prime(t)
## The matrix eqn for neutrinos. Simplify the equation to the form A.X=0. Here I am only writing down the LHS.
## Eqn for r11'
# 1/2*( r21(t)*( bM12*delm2E - 2*sqrt2*gF*rb12(t) ) + r12(t) * ( -bM21*delm2E + 2*sqrt2*gF*rb21(t) ) - 1j*r11prime(t) )
## Eqn for r12'
# 1/2*( r22(t)* ( bM12 ) )
### wait a minute I don't actually need to write down this. I can just do this part in numpy.
def trigf(x):
#return 1/(1+np.exp(-x)) # Wrapping the activation here makes it easy to swap in functions other than expit(x).
return expit(x)
## The time derivative part
### Here are the initial conditions
init = np.array( [[1,0],[0,0]] )
initb = np.array([[0,0],[0,0]])
### For neutrinos
def rho(x,ti,initialCondition): # x is the input structure arrays, ti is a time point
v11,w11,u11,v12,w12,u12,v21,w21,u21,v22,w22,u22 = np.split(x,12)[:12]
elem11= np.sum(ti * v11 * trigf( ti*w11 +u11 ) )
elem12= np.sum(ti * v12 * trigf( ti*w12 +u12 ) )
elem21= np.sum(ti * v21 * trigf( ti*w21 +u21 ) )
elem22= np.sum(ti * v22 * trigf( ti*w22 +u22 ) )
return initialCondition + np.array([[ elem11 , elem12 ],[elem21, elem22]])
def rhob(xb,ti,initialConditionb): # x is the input structure arrays, ti is a time point
vb11,wb11,ub11,vb12,wb12,ub12,vb21,wb21,ub21,vb22,wb22,ub22 = np.split(xb,12)[:12]
elem11= np.sum(ti * vb11 * trigf( ti*wb11 +ub11 ) )
elem12= np.sum(ti * vb12 * trigf( ti*wb12 +ub12 ) )
elem21= np.sum(ti * vb21 * trigf( ti*wb21 +ub21 ) )
elem22= np.sum(ti * vb22 * trigf( ti*wb22 +ub22 ) )
return initialConditionb + np.array([[ elem11 , elem12 ],[elem21, elem22]])
## Test
xtemp=np.ones(120)
rho(xtemp,1,init)
## Define Hamiltonians for both
def hamil(x,xb,ti,initialCondition,initialConditionb):
return delm2E*bM + lamb*elM + sqrt2*gF*( rho(x,ti,initialCondition) - rhob(xb,ti,initialConditionb) )
def hamilb(x,xb,ti,initialCondition,initialConditionb):
return -delm2E*bM + lambb*elM + sqrt2*gF*( rho(x,ti,initialCondition) - rhob(xb,ti,initialConditionb) )
## The commutator
def comm(x,xb,ti,initialCondition,initialConditionb):
return np.dot(hamil(x,xb,ti,initialCondition,initialConditionb), rho(x,ti,initialCondition) ) - np.dot(rho(x,ti,initialCondition), hamil(x,xb,ti,initialCondition,initialConditionb) )
def commb(x,xb,ti,initialCondition,initialConditionb):
return np.dot(hamilb(x,xb,ti,initialCondition,initialConditionb), rhob(xb,ti,initialConditionb) ) - np.dot(rhob(xb,ti,initialConditionb), hamilb(x,xb,ti,initialCondition,initialConditionb) )
## Test
print "neutrino\n",comm(xtemp,xtemp,1,init,initb)
print "antineutrino\n",commb(xtemp,xtemp,0.5,init,initb)
## The COST of the eqn set
def costTi(x,xb,ti,initialCondition,initialConditionb):
v11,w11,u11,v12,w12,u12,v21,w21,u21,v22,w22,u22 = np.split(x,12)[:12]
vb11,wb11,ub11,vb12,wb12,ub12,vb21,wb21,ub21,vb22,wb22,ub22 = np.split(xb,12)[:12]
fvec11 = np.array(trigf(ti*w11 + u11) ) # This is a vector!!!
fvec12 = np.array(trigf(ti*w12 + u12) )
fvec21 = np.array(trigf(ti*w21 + u21) )
fvec22 = np.array(trigf(ti*w22 + u22) )
fvecb11 = np.array(trigf(ti*wb11 + ub11) ) # This is a vector!!!
fvecb12 = np.array(trigf(ti*wb12 + ub12) )
fvecb21 = np.array(trigf(ti*wb21 + ub21) )
fvecb22 = np.array(trigf(ti*wb22 + ub22) )
costi11= ( np.sum (v11*fvec11 + ti * v11* fvec11 * ( 1 - fvec11 ) * w11 ) + 1j* ( comm(x,xb,ti,initialCondition,initialConditionb)[0,0] ) )
costi12= ( np.sum (v12*fvec12 + ti * v12* fvec12 * ( 1 - fvec12 ) * w12 ) + 1j* ( comm(x,xb,ti,initialCondition,initialConditionb)[0,1] ) )
costi21= ( np.sum (v21*fvec21 + ti * v21* fvec21 * ( 1 - fvec21 ) * w21 ) + 1j* ( comm(x,xb,ti,initialCondition,initialConditionb)[1,0] ) )
costi22= ( np.sum (v22*fvec22 + ti * v22* fvec22 * ( 1 - fvec22 ) * w22 ) + 1j* ( comm(x,xb,ti,initialCondition,initialConditionb)[1,1] ) )
costbi11= ( np.sum (vb11*fvecb11 + ti * vb11* fvecb11 * ( 1 - fvecb11 ) * wb11 ) + 1j* ( commb(x,xb,ti,initialCondition,initialConditionb)[0,0] ) )
costbi12= ( np.sum (vb12*fvecb12 + ti * vb12* fvecb12 * ( 1 - fvecb12 ) * wb12 ) + 1j* ( commb(x,xb,ti,initialCondition,initialConditionb)[0,1] ) )
costbi21= ( np.sum (vb21*fvecb21 + ti * vb21* fvecb21 * ( 1 - fvecb21 ) * wb21 ) + 1j* ( commb(x,xb,ti,initialCondition,initialConditionb)[1,0] ) )
costbi22= ( np.sum (vb22*fvecb22 + ti * vb22* fvecb22 * ( 1 - fvecb22 ) * wb22 ) + 1j* ( commb(x,xb,ti,initialCondition,initialConditionb)[1,1] ) )
return (np.real(costi11))**2 + (np.real(costi12))**2+ (np.real(costi21))**2 + (np.real(costi22))**2 + (np.real(costbi11))**2 + (np.real(costbi12))**2 +(np.real(costbi21))**2 + (np.real(costbi22))**2 + (np.imag(costi11))**2 + (np.imag(costi12))**2+ (np.imag(costi21))**2 + (np.imag(costi22))**2 + (np.imag(costbi11))**2 + (np.imag(costbi12))**2 +(np.imag(costbi21))**2 + (np.imag(costbi22))**2
costTi(xtemp,xtemp,0,init,initb)
## Calculate the total cost
def cost(xtot,t,initialCondition,initialConditionb):
x,xb = np.split(xtot,2)[:2]
t = np.array(t)
costTotal = np.sum( costTList(x,xb,t,initialCondition,initialConditionb) )
return costTotal
def costTList(x,xb,t,initialCondition,initialConditionb): ## This is the function WITHOUT the square!!!
t = np.array(t)
costList = np.asarray([])
for temp in t:
tempElement = costTi(x,xb,temp,initialCondition,initialConditionb)
costList = np.append(costList, tempElement)
return np.array(costList)
ttemp = np.linspace(0,10)
print ttemp
ttemp = np.linspace(0,10)
print costTList(xtemp,xtemp,ttemp,init,initb)
print cost(xtemp,ttemp,init,initb)
tlin = np.linspace(0,0.5,3)
initGuess = np.ones(120)
# initGuess = np.random.rand(1,30)+2
costF = lambda x: cost(x,tlin,init,initb)
cost(initGuess,tlin,init,initb)
## %%snakeviz
# startCG = timeit.default_timer()
#costFResultCG = minimize(costF,initGuess,method="CG")
#stopCG = timeit.default_timer()
#print stopCG - startCG
#print costFResultCG
%%snakeviz
startSLSQP = timeit.default_timer()
costFResultSLSQP = minimize(costF,initGuess,method="SLSQP")
stopSLSQP = timeit.default_timer()
print stopSLSQP - startSLSQP
print costFResultSLSQP
costFResultSLSQP.get('x')
np.savetxt('./assets/homogen/optimize_ResultSLSQPT2120.txt', costFResultSLSQP.get('x'), delimiter = ',')
# costFResultSLSQPx = np.genfromtxt('./assets/homogen/optimize_ResultSLSQP.txt', delimiter = ',')
## The first element of neutrino density matrix
xresult = np.split(costFResultSLSQP.get('x'),2)[0]
xresultb = np.split(costFResultSLSQP.get('x'),2)[1]
#xresult = np.split(costFResultSLSQPx,2)[0]
#xresultb = np.split(costFResultSLSQPx,2)[1]
## print xresult11
plttlin=np.linspace(0,5,100)
pltdata11 = np.array([])
pltdata22 = np.array([])
for i in plttlin:
pltdata11 = np.append(pltdata11 ,rho(xresult,i,init)[0,0] )
print pltdata11
for i in plttlin:
pltdata22 = np.append(pltdata22 ,rho(xresult,i,init)[1,1] )
print pltdata22
print "----------------------------------------"
pltdatab11 = np.array([])
pltdatab22 = np.array([])
for i in plttlin:
pltdatab11 = np.append(pltdatab11 ,rhob(xresultb,i,initb)[0,0] )
print pltdatab11
for i in plttlin:
pltdatab22 = np.append(pltdatab22 ,rhob(xresultb,i,initb)[1,1] )
print pltdatab22
#np.savetxt('./assets/homogen/optimize_pltdatar11.txt', pltdata11, delimiter = ',')
#np.savetxt('./assets/homogen/optimize_pltdatar22.txt', pltdata22, delimiter = ',')
plt.figure(figsize=(20,12.36))
plt.ylabel('rho11')
plt.xlabel('Time')
plt.plot(plttlin,pltdata11,"b4-",label="rho11")
py.iplot_mpl(plt.gcf(),filename="HG-rho11")
# tls.embed("https://plot.ly/~emptymalei/73/")
plt.figure(figsize=(20,12.36))
plt.ylabel('rho22')
plt.xlabel('Time')
plt.plot(plttlin,pltdata22,"r4-",label="rho22")
py.iplot_mpl(plt.gcf(),filename="HG-rho22")
plt.figure(figsize=(20,12.36))
plt.ylabel('rhob11')
plt.xlabel('Time')
plt.plot(plttlin,pltdatab11,"b*-",label="rhob11")
py.iplot_mpl(plt.gcf(),filename="HG-rhob11")
plt.figure(figsize=(20,12.36))
plt.ylabel('rhob22')
plt.xlabel('Time')
plt.plot(plttlin,pltdatab22,"b*-",label="rhob22")
py.iplot_mpl(plt.gcf(),filename="HG-rhob22")
MMA_optmize_pltdata = np.genfromtxt('./assets/homogen/MMA_optmize_pltdata.txt', delimiter = ',')
plt.figure(figsize=(20,12.36))
plt.ylabel('MMArho11')
plt.xlabel('Time')
plt.plot(np.linspace(0,5,501),MMA_optmize_pltdata,"r-",label="MMArho11")
plt.plot(plttlin,pltdata11,"b4-",label="rho11")
py.iplot_mpl(plt.gcf(),filename="MMA-rho11")
xtemp1 = np.arange(4)
xtemp1.shape = (2,2)
print xtemp1
xtemp1[0,1]
np.dot(xtemp1,xtemp1)
xtemp1[0,1]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Part 1. Identify the Problem
Step2: Part 3. Parse, Mine, and Refine the data
Step3: Check for missing values and drop or impute
Step4: Wrangle the data to address any issues from above checks
Step5: Perform exploratory data analysis
Step6: Check and convert all data types to numerical
Step7: Part 4. Build a Model
Step8: Build a model
Step9: Evaluate the model
Step10: Part 5
Step11: Load Kaggle test data, make predictions using model, and generate submission file
Step12: Kaggle score
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
from sklearn import cross_validation
from sklearn import metrics
from sklearn import linear_model
import seaborn as sns
import matplotlib.pyplot as plt
sns.set(style="whitegrid", font_scale=1)
%matplotlib inline
# Load train data
# Given size of training data, I chose to use only 10% for speed reasons
# QUESTION - how can i randomize with python? i used sql to create the random sample below.
df_train = pd.read_csv("train_random10percent.csv")
# Check head
df_train.head()
# Load test data
df_test = pd.read_csv("test.csv")
# Check head. I noticed that I will have to drop certain columns so that test and train sets have the same features.
df_test.head()
# Given that I cannot use a significant number of variables from the train data, I created additional features using the mean.
# I grouped on product id since I will ultimately be predicting demand for each product.
df_train_mean = df_train.groupby('Producto_ID').mean().add_suffix('_mean').reset_index()
df_train_mean.head()
#from above, adding 2 additional features, the average sales units and the average demand
df_train2 = df_train.merge(df_train_mean[['Producto_ID','Venta_uni_hoy_mean', 'Demanda_uni_equil_mean']],how='inner',on='Producto_ID')
df_train2.sample(5)
# Adding features to the test set in order to match train set
df_test2 = df_test.merge(df_train_mean[['Producto_ID','Venta_uni_hoy_mean', 'Demanda_uni_equil_mean']],how='left',on='Producto_ID')
df_test2.head()
# Check columns
print "train dataset columns:"
print df_train2.columns.values
print
print "test dataset columns:"
print df_test2.columns.values
# Check counts
print "train dataset counts:"
print df_train2.count()
print
print "test dataset counts:"
print df_test2.count()
# Check counts for missing values in each column
print "train dataset missing values:"
print df_train2.isnull().sum()
print
print "test dataset missing values:"
print df_test2.isnull().sum()
# Drop columns not included in test dataset
df_train2 = df_train2.drop(['Venta_uni_hoy', 'Venta_hoy', 'Dev_uni_proxima', 'Dev_proxima'], axis=1)
# Check data
df_train2.head()
# Replace missing values in the test set with the mean (rather than dropping rows)
# Replace missing values for venta_uni_hoy_mean using mean
df_test2.loc[(df_test2['Venta_uni_hoy_mean'].isnull()), 'Venta_uni_hoy_mean'] = df_test2['Venta_uni_hoy_mean'].dropna().mean()
# Replace missing values for demand using mean
df_test2.loc[(df_test2['Demanda_uni_equil_mean'].isnull()), 'Demanda_uni_equil_mean'] = df_test2['Demanda_uni_equil_mean'].dropna().mean()
print "test dataset missing values:"
print df_test2.isnull().sum()
# Get summary statistics for data
df_train2.describe()
#RE RUN THIS LAST
# Get pair plot for data
sns.pairplot(df_train2)
#show demand by weeks
timing = pd.read_csv('train_random10percent.csv', usecols=['Semana','Demanda_uni_equil'])
print(timing['Semana'].value_counts())
plt.hist(timing['Semana'].tolist(), bins=7, color='blue')
plt.show()
#QUESTION - is this a time series problem since we are predicting demand for weeks 10 and 11? and beyond?
#Show box plot of demand by week
sns.factorplot(
x='Semana',
y='Demanda_uni_equil',
data=df_train2,
kind='box')
# Check data types
df_train.dtypes
# These are all numerical but are not continuous values, and therefore don't have relative significance to one another, except for week.
# However, creating dummy variables for all of these is too memory intensive. As such, we might have to explore using a random forest model
# in addition to the linear regression model.
#create cross validation sets
#set target variable name
target = 'Demanda_uni_equil'
#set X and y
X = df_train2.drop([target], axis=1)
y = df_train2[target]
# create separate training and test sets with 60/40 train/test split
X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size= .4, random_state=0)
#create linear regression object
lm = linear_model.LinearRegression()
#train the model using the training data
lm.fit(X_train,y_train)
# Check R^2 on test set
print "R^2: %0.3f" % lm.score(X_test,y_test)
# Check MSE on test set
#http://scikit-learn.org/stable/modules/classes.html#module-sklearn.metrics
print "MSE: %0.3f" % metrics.mean_squared_error(y_test, lm.predict(X_test))
# QUESTION - should I check this on the train set?
print "MSE: %0.3f" % metrics.mean_squared_error(y_train, lm.predict(X_train))
# Set target variable name
target = 'Demanda_uni_equil'
# Set X_train and y_train
X_train = df_train2.drop([target], axis=1)
y_train = df_train2[target]
# Build tuned model
#create linear regression object
lm = linear_model.LinearRegression()
#train the model using the training data
lm.fit(X_train,y_train)
# Score tuned model
print "R^2: %0.3f" % lm.score(X_train, y_train)
print "MSE: %0.3f" % metrics.mean_squared_error(y_train, lm.predict(X_train))
#create data frame for submission
df_sub = df_test2[['id']]
df_test2 = df_test2.drop('id', axis=1)
#predict using tuned model
df_sub['Demanda_uni_equil'] = lm.predict(df_test2)
df_sub.describe()
d = df_sub['Demanda_uni_equil']
d[d<0] = 0
df_sub.describe()
# Write submission file
df_sub.to_csv("mysubmission3.csv", index=False)
#notes
#want to try to use a classifier like random forest or logistic regression
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The class allows use to replace all layers after a given index in the model. In this example, we replace the last layer (a single softmax activation) with a series of 3 layers, followed by a final softmax activation.
Step2: We can also change the optimizer being used. Here we adjust the learning rate and momentum, and replace the previous optimizer.
Step3: Lastly, we can directly set any parameters we've exposed in the class. In this case, we have the number of epochs and batch size, along with a verbose output parameter.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import SGD
class SimpleNN(ContinuousTransform):
def init_func(self,target_df,X_train_df,y_train_df,X_test_df,y_test_df):
model=Sequential()
model.add(Dense(64, input_dim=784, init='uniform'))
model.add(Activation('tanh'))
model.add(Dropout(0.5))
model.add(Dense(64,init='uniform'))
model.add(Activation('softmax'))
self.opt = SGD(lr=0.1,decay=1e-6,momentum=0.9,nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer=self.opt)
self.model = model
self.target = target_df
self.nb_epoch = 2
self.batch_size = 16
self.verbose = 0
target_df.set_matrix(0)
def continuous_func(self,target_df,X_train_df,y_train_df,X_test_df,y_test_df):
self.model.fit(X_train_df.get_matrix(),
y_train_df.get_matrix(),
nb_epoch=self.nb_epoch,
batch_size=self.batch_size,
verbose=self.verbose)
score = self.model.evaluate(X_test_df.get_matrix(),
y_test_df.get_matrix(),
batch_size=self.batch_size,
verbose=self.verbose)
target_df.set_matrix(np.array([[score]]))
def add_layers(self,layers,idx):
self.target.stop()
while len(self.model.layers) > idx:
self.model.layers.pop()
for l in layers:
self.model.add(l)
self.model.compile(loss='categorical_crossentropy', optimizer=self.opt)
self.target.go()
def set_optimizer(self,opt):
self.target.stop()
self.opt = opt
self.model.compile(loss='categorical_crossentropy', optimizer=self.opt)
self.target.go()
df["output/","score/"] = SimpleNN(df["data/train/","input/raw/"],
df["data/train/","input/label/"],
df["data/test/","input/raw/"],
df["data/test/","input/label/"])
new_layers = [
Activation('tanh'),
Dropout(0.5),
Dense(10,init='uniform'),
Activation('softmax')
]
df["output/","score/"].T.add_layers(new_layers,4)
new_opt = SGD(lr=0.01,decay=1e-6,momentum=0.8,nesterov=True)
df["output/","score/"].T.set_optimizer(new_opt)
df["output/","score/"].T.nb_epoch = 4
df["output/","score/"].T.batch_size = 32
df["output/","score/"].T.verbose = 1
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Introduction
Step2: Note that we have computed two data matrices
Step3: Note, again, that we have computed both the sorted (${\bf X}_{2s}$) and the shuffled (${\bf X}_2$) versions of the dataset in the code above.
Step4: Spectral clustering algorithms are focused on connectivity
Step5: 2.3. Visualization
Step6: Despite the apparent randomness of the affinity matrix, it contains some hidden structure, that we can uncover by visualizing the affinity matrix computed with the sorted data matrix, ${\bf X}_s$.
Step7: Note that, despite their completely different appearance, both affinity matrices contain the same values, but with a different order of rows and columns.
Step8: 3. Affinity matrix and data graph
Step9: Note that, for this dataset, the graph connects edges from the same cluster only. Therefore, the number of diagonal blocks in $\overline{\bf K}_s$ is equal to the number of connected components in the graph.
Step10: Exercise 4
Step11: Exercise 5
Step12: Note that the position of 1's in eigenvectors ${\bf v}_i$ points out the samples in the $i$-th connected component. This suggest the following tentative clustering algorithm
Step13: 4.3. Non block diagonal matrices.
Step14: Note that, despite the eigenvector components can not be used as a straighforward cluster indicator, they are strongly informative of the clustering structure.
Step15: Complete step 2, 3 and 4, and draw a scatter plot of the samples in ${\bf Z}$
Step16: Complete step 5
Step17: Finally, complete step 6 and show, in a scatter plot, the result of the clustering algorithm
Step18: 5.2. Scikit-learn implementation.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
# use seaborn plotting defaults
import seaborn as sns; sns.set()
from sklearn.cluster import KMeans
from sklearn.datasets.samples_generator import make_blobs, make_circles
from sklearn.utils import shuffle
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.cluster import SpectralClustering
# For the graph representation
import networkx as nx
N = 300
nc = 4
Xs, ys = make_blobs(n_samples=N, centers=nc,
random_state=6, cluster_std=0.60, shuffle = False)
X, y = shuffle(Xs, ys, random_state=0)
plt.scatter(X[:, 0], X[:, 1], s=30);
plt.axis('equal')
plt.show()
X2s, y2s = make_circles(n_samples=N, factor=.5, noise=.05, shuffle=False)
X2, y2 = shuffle(X2s, y2s, random_state=0)
plt.scatter(X2[:, 0], X2[:, 1], s=30)
plt.axis('equal')
plt.show()
# <SOL>
est = KMeans(n_clusters=4)
clusters = est.fit_predict(X)
plt.scatter(X[:, 0], X[:, 1], c=clusters, s=30, cmap='rainbow')
plt.axis('equal')
clusters = est.fit_predict(X2)
plt.figure()
plt.scatter(X2[:, 0], X2[:, 1], c=clusters, s=30, cmap='rainbow')
plt.axis('equal')
plt.show()
# </SOL>
gamma = 0.5
K = rbf_kernel(X, X, gamma=gamma)
plt.imshow(K, cmap='hot')
plt.colorbar()
plt.title('RBF Affinity Matrix for gamma = ' + str(gamma))
plt.grid('off')
plt.show()
Ks = rbf_kernel(Xs, Xs, gamma=gamma)
plt.imshow(Ks, cmap='hot')
plt.colorbar()
plt.title('RBF Affinity Matrix for gamma = ' + str(gamma))
plt.grid('off')
plt.show()
t = 0.001
# Kt = <FILL IN> # Truncated affinity matrix
Kt = K*(K>t) # Truncated affinity matrix
# Kst = <FILL IN> # Truncated and sorted affinity matrix
Kst = Ks*(Ks>t) # Truncated and sorted affinity matrix
G = nx.from_numpy_matrix(Kt)
graphplot = nx.draw(G, X, node_size=40, width=0.5,)
plt.axis('equal')
plt.show()
Dst = np.diag(np.sum(Kst, axis=1))
Lst = Dst - Kst
# Next, we compute the eigenvalues of the matrix
w = np.linalg.eigvalsh(Lst)
plt.figure()
plt.plot(w, marker='.');
plt.title('Eigenvalues of the matrix')
plt.show()
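# Note (added): for a graph Laplacian, the multiplicity of the eigenvalue 0
# equals the number of connected components of the graph, so with 4 well-
# separated blobs the first 4 eigenvalues should be (numerically) zero.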
# <SOL>
print(np.linalg.norm(Lst.dot(np.ones((N,1)))))
for i in range(nc):
vi = (ys==i)
print(np.linalg.norm(Lst.dot(vi)))
# </SOL>
# <SOL>
Dt = np.diag(np.sum(Kt, axis=1))
Lt = Dt - Kt
print(np.linalg.norm(Lt.dot(np.ones((N,1)))))
for i in range(nc):
vi = (y==i)
print(np.linalg.norm(Lt.dot(vi)))
# </SOL>
wst, vst = np.linalg.eigh(Lst)
for n in range(nc):
plt.plot(vst[:,n], '.-')
plt.imshow(vst[:,:nc], aspect='auto')
plt.grid(False)
plt.title('Display of first 4 eigenvectors of Ks')
# <SOL>
D = np.diag(np.sum(K, axis=1))
L = D - K
w, v = np.linalg.eigh(L)
for n in range(nc):
plt.plot(v[:,n], '.-')
# </SOL>
# <SOL>
g = 20
t = 0.1
K2 = rbf_kernel(X2, X2, gamma=g)
K2t = K2*(K2>t)
G2 = nx.from_numpy_matrix(K2t)
graphplot = nx.draw(G2, X2, node_size=40, width=0.5)
plt.axis('equal')
plt.show()
# </SOL>
# <SOL>
D2t = np.diag(np.sum(K2t, axis=1))
L2t = D2t - K2t
w2t, v2t = np.linalg.eigh(L2t)
Z2t = v2t[:,0:2]
plt.scatter(Z2t[:,0], Z2t[:,1], s=20)
plt.show()
# </SOL>
est = KMeans(n_clusters=2)
clusters = est.fit_predict(Z2t)
plt.scatter(X2[:, 0], X2[:, 1], c=clusters, s=50, cmap='rainbow')
plt.axis('equal')
plt.show()
n_clusters = 4
gamma = .1 # Warning do not exceed gamma=100
SpClus = SpectralClustering(n_clusters=n_clusters,affinity='rbf',
gamma=gamma)
SpClus.fit(X)
plt.scatter(X[:, 0], X[:, 1], c=SpClus.labels_.astype(np.int), s=50,
cmap='rainbow')
plt.axis('equal')
plt.show()
nc = 2
gamma = 50 #Warning do not exceed gamma=300
SpClus = SpectralClustering(n_clusters=nc, affinity='rbf', gamma=gamma)
SpClus.fit(X2)
plt.scatter(X2[:, 0], X2[:, 1], c=SpClus.labels_.astype(np.int), s=50,
cmap='rainbow')
plt.axis('equal')
plt.show()
nc = 5
SpClus = SpectralClustering(n_clusters=nc, affinity='nearest_neighbors')
SpClus.fit(X2)
plt.scatter(X2[:, 0], X2[:, 1], c=SpClus.labels_.astype(np.int), s=50,
cmap='rainbow')
plt.axis('equal')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: It is also possible to show the areas in a dataframe by using the table attribute.
Step2: Like Niche, NicheDelta also has a write method, which takes a directory as an argument.
Step3: Creating deviation maps
Step4: The deviation maps can be plotted by specifying either mhw or mlw with the vegetation type, eg mhw_14 (to show the deviation between mhw and the required mhw for vegetation type 14).
Step5: Creating statistics per shape object
Step6: Using abiotic grids
Step7: Now it is possible to adapt the acidity and nutrient_level grids outside niche. For this demo, we will use some Python magic to make all nutrient levels one level lower. Note that there is no need to do this in Python, any other tool could be used as well. So if you don't understand this code - don't panic (and ignore the warning)!
Step8: Next we will create a new niche model using the same options as our previous full models, but we will also add the previously calculated acidity and nutrient level values as input, and run with the abiotic=True option. Note that we use the read_config_file method (and not run_config_file) because we still want to edit the configuration before running.
Step9: Overwriting standard code tables
Step10: Example of changed potential area of Caricion gracilis vegetation type because of the changes set in the vegetation code table
|
<ASSISTANT_TASK:>
Python Code:
import niche_vlaanderen as nv
import matplotlib.pyplot as plt
simple = nv.Niche()
simple.run_config_file("simple.yml")
full = nv.Niche()
full.run_config_file("full.yml")
delta = nv.NicheDelta(simple, full)
ax = delta.plot(7)
plt.show()
delta.table.head()
delta.write("comparison_output", overwrite_files=True)
dev = nv.Niche()
dev.set_input("mhw","../testcase/zwarte_beek/input/mhw.asc")
dev.set_input("mlw","../testcase/zwarte_beek/input/mhw.asc")
dev.set_input("soil_code","../testcase/zwarte_beek/input/soil_code.asc")
dev.run(deviation=True, full_model=False)
dev.plot("mlw")
dev.plot("mlw_14")
plt.show()
df = full.zonal_stats("../testcase/zwarte_beek/input/study_area_l72.geojson")
df
import niche_vlaanderen as nv
import matplotlib.pyplot as plt
full = nv.Niche()
full.run_config_file("full.yml")
full.write("output_abiotic", overwrite_files=True)
import rasterio
with rasterio.open("output_abiotic/full_nutrient_level.tif") as src:
nutrient_level = src.read(1)
profile = src.profile
nodata = src.nodatavals[0]
nutrient_level[nutrient_level != nodata] = nutrient_level[nutrient_level != nodata] -1
# we can not have nutrient level 0, so we set all places where this occurs to 1
nutrient_level[nutrient_level ==0 ] = 1
with rasterio.open("output_abiotic/adjusted_nutrient.tif", 'w', **profile) as dst:
dst.write(nutrient_level, 1)
adjusted = nv.Niche()
adjusted.read_config_file("full.yml")
adjusted.set_input("acidity", "output_abiotic/full_acidity.tif")
adjusted.set_input("nutrient_level", "output_abiotic/adjusted_nutrient.tif")
adjusted.name = "adjusted"
adjusted.run(abiotic=True)
adjusted.plot(7)
full.plot(7)
plt.show()
adjusted_ct = nv.Niche(ct_vegetation = "ct_vegetation_adj7.csv")
adjusted_ct.read_config_file("full.yml")
adjusted_ct.run()
adjusted_ct.plot(7)
full.plot(7)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Loading the General Inquirer.
Step2: The next stage is to translate the Inquirer. It begins as a table where word senses are row labels, and the Inquirer categories are columns (except for two columns at the beginning and two at the end). This is, by the way, the "basic spreadsheet" described at this site
Step3: Load model predictions about volumes
Step4: And get the wordcounts themselves
Step5: Now calculate the representation of each Inquirer category in each doc
Step6: Calculate correlations
Step7: Load expanded names of Inquirer categories
Step8: Print results
|
<ASSISTANT_TASK:>
Python Code:
# some standard modules
import csv, os, sys
from collections import Counter
import numpy as np
from scipy.stats import pearsonr
# now a module that I wrote myself, located
# a few directories up, in the software
# library for this repository
sys.path.append('../../lib')
import FileCabinet as filecab
# start by loading the dictionary
dictionary = set()
with open('../../lexicons/MainDictionary.txt', encoding = 'utf-8') as f:
reader = csv.reader(f, delimiter = '\t')
for row in reader:
word = row[0]
count = int(row[2])
if count < 10000:
continue
# that ignores very rare words
# we end up with about 42,700 common ones
else:
dictionary.add(word)
inquirer = dict()
suffixes = dict()
suffixes['verb'] = ['s', 'es', 'ed', 'd', 'ing']
suffixes['noun'] = ['s', 'es']
allinquirerwords = set()
with open('../../lexicons/inquirerbasic.csv', encoding = 'utf-8') as f:
reader = csv.DictReader(f)
fields = reader.fieldnames[2:-2]
for field in fields:
inquirer[field] = set()
for row in reader:
term = row['Entry']
if '#' in term:
parts = term.split('#')
word = parts[0].lower()
sense = int(parts[1].strip('_ '))
partialsense = True
else:
word = term.lower()
sense = 0
partialsense = False
if sense > 1:
continue
# we're ignoring uncommon senses
pos = row['Othtags']
if 'Noun' in pos:
pos = 'noun'
elif 'SUPV' in pos:
pos = 'verb'
forms = {word}
if pos == 'noun' or pos == 'verb':
for suffix in suffixes[pos]:
if word + suffix in dictionary:
forms.add(word + suffix)
if pos == 'verb' and word.rstrip('e') + suffix in dictionary:
forms.add(word.rstrip('e') + suffix)
for form in forms:
for field in fields:
if len(row[field]) > 1:
inquirer[field].add(form)
allinquirerwords.add(form)
print('Inquirer loaded')
print('Total of ' + str(len(allinquirerwords)) + " words.")
# the folder where wordcounts will live
# we're only going to load predictions
# that correspond to files located there
sourcedir = '../sourcefiles/'
docs = []
logistic = []
with open('../modeloutput/fullfiction.results.csv', encoding = 'utf-8') as f:
reader = csv.DictReader(f)
for row in reader:
genre = row['realclass']
docid = row['volid']
if not os.path.exists(sourcedir + docid + '.tsv'):
continue
docs.append(row['volid'])
logistic.append(float(row['logistic']))
logistic = np.array(logistic)
numdocs = len(docs)
assert numdocs == len(logistic)
print("We have information about " + str(numdocs) + " volumes.")
wordcounts = filecab.get_wordfreqs(sourcedir, '.tsv', docs)
# Initialize empty category vectors
categories = dict()
for field in fields:
categories[field] = np.zeros(numdocs)
# Now fill them
for i, doc in enumerate(docs):
ctcat = Counter()
allcats = 0
for word, count in wordcounts[doc].items():
if word in dictionary:
allcats += count
if word not in allinquirerwords:
continue
for field in fields:
if word in inquirer[field]:
ctcat[field] += count
for field in fields:
categories[field][i] = ctcat[field] / (allcats + 0.00000001)
# Laplacian smoothing there to avoid div by zero, among other things.
# notice that, since these are normalized freqs, we need to use a very small decimal
# If these are really normalized freqs, it may not matter very much
# that we divide at all. The denominator should always be 1, more or less.
# But I'm not 100% sure about that.
if i % 100 == 1:
print(i, allcats)
logresults = []
for inq_category in fields:
l = pearsonr(logistic, categories[inq_category])[0]
logresults.append((l, inq_category))
logresults.sort()
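# Note (added): pearsonr returns a (correlation, p-value) pair; only the
# correlation is kept above, so sorting orders categories from most negative
# to most positive association with the predicted fiction probability.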
short2long = dict()
with open('../../lexicons/long_inquirer_names.csv', encoding = 'utf-8') as f:
reader = csv.DictReader(f)
for row in reader:
short2long[row['short_name']] = row['long_name']
print('Printing the correlations of General Inquirer categories')
print('with the predicted probabilities of being fiction in allsubset2.csv:')
print()
print('First, top positive correlations: ')
print()
for prob, n in reversed(logresults[-15 : ]):
if n in short2long:
n = short2long[n]
if 'Laswell' in n:
continue
else:
print(str(prob) + '\t' + n)
print()
print('Now, negative correlations: ')
print()
for prob, n in logresults[0 : 15]:
if n in short2long:
n = short2long[n]
if 'Laswell' in n:
continue
else:
print(str(prob) + '\t' + n)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Query ICPW projects
Step2: 2. Get station list
Step3: 3. Get parameters
Step4: 4. Get chemistry data
Step5: So, there are 262 stations within the "core" ICPW projects, but 24 of these have no data whatsoever associated with them (listed above).
|
<ASSISTANT_TASK:>
Python Code:
# Connect to db
eng = nivapy.da.connect()
# Query projects
prj_grid = nivapy.da.select_resa_projects(eng)
prj_grid
prj_df = prj_grid.get_selected_df()
print(len(prj_df))
prj_df
# Get stations
stn_df = nivapy.da.select_resa_project_stations(prj_df, eng)
print(len(stn_df))
stn_df.head()
# Map
nivapy.spatial.quickmap(stn_df, popup='station_code')
# Select parameters
par_grid = nivapy.da.select_resa_station_parameters(stn_df,
'1970-01-01',
'2019-01-01',
eng)
par_grid
# Get selected pars
par_df = par_grid.get_selected_df()
par_df
# Get data
wc_df, dup_df = nivapy.da.select_resa_water_chemistry(stn_df,
par_df,
'1970-01-01',
'2019-01-01',
eng,
lod_flags=False,
drop_dups=True)
wc_df.head()
# How many stations have pH data
len(wc_df['station_code'].unique())
# Which stations do not have pH data?
all_stns = set(stn_df['station_code'].unique())
no_ph = list(all_stns - set(wc_df['station_code'].unique()))
no_ph_stns = stn_df.query('station_code in @no_ph').reset_index()
print(len(no_ph_stns))
no_ph_stns
# What data do these stations have?
par_grid2 = nivapy.da.select_resa_station_parameters(no_ph_stns,
'1970-01-01',
'2019-01-01',
eng)
par_grid2
# Most recent data
for idx, row in prj_df.iterrows():
# Get stations
cnt_stns = nivapy.da.select_resa_project_stations([row['project_id'],], eng)
# Get pH data
wc, dups = nivapy.da.select_resa_water_chemistry(cnt_stns,
[1,], # pH
'1970-01-01',
'2019-01-01',
eng,
lod_flags=False,
drop_dups=True)
# Print results
print(row['project_name'], '\t', len(cnt_stns), '\t', wc['sample_date'].max())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Toy Example
Step2: Let's generate the data that follows the distribution, and split the data into 90% for training and 10% for testing.
Step3: Model construction
Step4: Then, we specify the cross network (with 1 cross layer of size 3) and the ReLU-based DNN (with layer sizes [512, 256, 128])
Step5: Model training
Step6: Then, we define the number of epochs as well as the learning rate.
Step7: Alright, everything is ready now and let's compile and train the models. You could set verbose=True if you want to see how the model progresses.
Step8: Model evaluation
Step9: We see that the cross network achieved magnitudes lower RMSE than a ReLU-based DNN, with magnitudes fewer parameters. This suggests the efficiency of a cross network in learning feature crosses.
Step10: Darker colours represent stronger learned interactions - in this case, it's clear that the model learned that purchasing bananas and cookbooks together is important.
Step11: Next, we randomly split the data into 80% for training and 20% for testing.
Step12: Then, we create vocabulary for each feature.
Step13: Model construction
Step14: Model training
Step15: Let's define a function that runs a model multiple times and returns the model's RMSE mean and standard deviation out of multiple runs.
Step16: We set some hyper-parameters for the models. Note that these hyper-parameters are set globally for all the models for demonstration purpose. If you want to obtain the best performance for each model, or conduct a fair comparison among models, then we'd suggest you to fine-tune the hyper-parameters. Remember that the model architecture and optimization schemes are intertwined.
Step17: DCN (stacked). We first train a DCN model with a stacked structure, that is, the inputs are fed to a cross network followed by a deep network.
Step18: Low-rank DCN. To reduce the training and serving cost, we leverage low-rank techniques to approximate the DCN weight matrices. The rank is passed in through argument projection_dim; a smaller projection_dim results in a lower cost. Note that projection_dim needs to be smaller than (input size)/2 to reduce the cost. In practice, we've observed using low-rank DCN with rank (input size)/4 consistently preserved the accuracy of a full-rank DCN.
Step19: DNN. We train a same-sized DNN model as a reference.
Step20: We evaluate the model on test data and report the mean and standard deviation out of 5 runs.
Step21: We see that DCN achieved better performance than a same-sized DNN with ReLU layers. Moreover, the low-rank DCN was able to reduce parameters while maintaining the accuracy.
|
<ASSISTANT_TASK:>
Python Code:
# Installing the necessary libraries.
!pip install -q tensorflow-recommenders
!pip install -q --upgrade tensorflow-datasets
# Importing the necessary modules.
import pprint
%matplotlib inline
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
import tensorflow_recommenders as tfrs
def get_mixer_data(data_size=100_000, random_seed=42):
# We need to fix the random seed
# to make notebook runs repeatable.
rng = np.random.RandomState(random_seed)
country = rng.randint(200, size=[data_size, 1]) / 200.
bananas = rng.randint(24, size=[data_size, 1]) / 24.
coockbooks = rng.randint(6, size=[data_size, 1]) / 6.
x = np.concatenate([country, bananas, coockbooks], axis=1)
# # Create 1st-order terms.
y = 0.1 * country + 0.4 * bananas + 0.7 * coockbooks
# Create 1st-order terms.
y += 0.1 * country * bananas + 3.1 * bananas * coockbooks + (
0.1 * coockbooks * coockbooks)
return x, y
# Generating the data.
x, y = get_mixer_data()
num_train = 90000
train_x = x[:num_train]
train_y = y[:num_train]
eval_x = x[num_train:]
eval_y = y[num_train:]
class Model(tfrs.Model):
# TODO 1: Here is your code.
def __init__(self, model):
super().__init__()
self._model = model
self._logit_layer = tf.keras.layers.Dense(1)
self.task = tfrs.tasks.Ranking(
loss=tf.keras.losses.MeanSquaredError(),
metrics=[
tf.keras.metrics.RootMeanSquaredError("RMSE")
]
)
def call(self, x):
x = self._model(x)
return self._logit_layer(x)
def compute_loss(self, features, training=False):
x, labels = features
scores = self(x)
return self.task(
labels=labels,
predictions=scores,
)
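# Note (added): a single tfrs Cross layer computes x0 * (W x + b) + x, so it
# can express every pairwise (2nd-order) feature interaction directly -- the
# same structure the synthetic data above was generated with.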
# Specifying the cross network and ReLU based DNN.
crossnet = Model(tfrs.layers.dcn.Cross())
deepnet = Model(
tf.keras.Sequential([
tf.keras.layers.Dense(512, activation="relu"),
tf.keras.layers.Dense(256, activation="relu"),
tf.keras.layers.Dense(128, activation="relu")
])
)
# Preparing large set of elements for model training.
train_data = tf.data.Dataset.from_tensor_slices((train_x, train_y)).batch(1000)
eval_data = tf.data.Dataset.from_tensor_slices((eval_x, eval_y)).batch(1000)
epochs = 100
learning_rate = 0.4
# Compiling and training the model.
crossnet.compile(optimizer=tf.keras.optimizers.Adagrad(learning_rate))
crossnet.fit(train_data, epochs=epochs, verbose=False)
# Compiling and training the model.
deepnet.compile(optimizer=tf.keras.optimizers.Adagrad(learning_rate))
deepnet.fit(train_data, epochs=epochs, verbose=False)
crossnet_result = crossnet.evaluate(eval_data, return_dict=True, verbose=False)
print(f"CrossNet(1 layer) RMSE is {crossnet_result['RMSE']:.4f} "
f"using {crossnet.count_params()} parameters.")
deepnet_result = deepnet.evaluate(eval_data, return_dict=True, verbose=False)
print(f"DeepNet(large) RMSE is {deepnet_result['RMSE']:.4f} "
f"using {deepnet.count_params()} parameters.")
mat = crossnet._model._dense.kernel
features = ["country", "purchased_bananas", "purchased_cookbooks"]
plt.figure(figsize=(9,9))
im = plt.matshow(np.abs(mat.numpy()), cmap=plt.cm.Blues)
ax = plt.gca()
divider = make_axes_locatable(plt.gca())
cax = divider.append_axes("right", size="5%", pad=0.05)
plt.colorbar(im, cax=cax)
cax.tick_params(labelsize=10)
_ = ax.set_xticklabels([''] + features, rotation=45, fontsize=10)
_ = ax.set_yticklabels([''] + features, fontsize=10)
ratings = tfds.load("movie_lens/100k-ratings", split="train")
ratings = ratings.map(lambda x: {
"movie_id": x["movie_id"],
"user_id": x["user_id"],
"user_rating": x["user_rating"],
"user_gender": int(x["user_gender"]),
"user_zip_code": x["user_zip_code"],
"user_occupation_text": x["user_occupation_text"],
"bucketized_user_age": int(x["bucketized_user_age"]),
})
# TODO 2: Here is your code.
tf.random.set_seed(42)
shuffled = ratings.shuffle(100_000, seed=42, reshuffle_each_iteration=False)
train = shuffled.take(80_000)
test = shuffled.skip(80_000).take(20_000)
feature_names = ["movie_id", "user_id", "user_gender", "user_zip_code",
"user_occupation_text", "bucketized_user_age"]
vocabularies = {}
for feature_name in feature_names:
vocab = ratings.batch(1_000_000).map(lambda x: x[feature_name])
vocabularies[feature_name] = np.unique(np.concatenate(list(vocab)))
class DCN(tfrs.Model):
def __init__(self, use_cross_layer, deep_layer_sizes, projection_dim=None):
super().__init__()
self.embedding_dimension = 32
str_features = ["movie_id", "user_id", "user_zip_code",
"user_occupation_text"]
int_features = ["user_gender", "bucketized_user_age"]
self._all_features = str_features + int_features
self._embeddings = {}
# Compute embeddings for string features.
for feature_name in str_features:
vocabulary = vocabularies[feature_name]
self._embeddings[feature_name] = tf.keras.Sequential(
[tf.keras.layers.experimental.preprocessing.StringLookup(
vocabulary=vocabulary, mask_token=None),
tf.keras.layers.Embedding(len(vocabulary) + 1,
self.embedding_dimension)
])
# Compute embeddings for int features.
for feature_name in int_features:
vocabulary = vocabularies[feature_name]
self._embeddings[feature_name] = tf.keras.Sequential(
[tf.keras.layers.experimental.preprocessing.IntegerLookup(
vocabulary=vocabulary, mask_value=None),
tf.keras.layers.Embedding(len(vocabulary) + 1,
self.embedding_dimension)
])
if use_cross_layer:
self._cross_layer = tfrs.layers.dcn.Cross(
projection_dim=projection_dim,
kernel_initializer="glorot_uniform")
else:
self._cross_layer = None
self._deep_layers = [tf.keras.layers.Dense(layer_size, activation="relu")
for layer_size in deep_layer_sizes]
self._logit_layer = tf.keras.layers.Dense(1)
self.task = tfrs.tasks.Ranking(
loss=tf.keras.losses.MeanSquaredError(),
metrics=[tf.keras.metrics.RootMeanSquaredError("RMSE")]
)
def call(self, features):
# Concatenate embeddings
embeddings = []
for feature_name in self._all_features:
embedding_fn = self._embeddings[feature_name]
embeddings.append(embedding_fn(features[feature_name]))
x = tf.concat(embeddings, axis=1)
# Build Cross Network
if self._cross_layer is not None:
x = self._cross_layer(x)
# Build Deep Network
for deep_layer in self._deep_layers:
x = deep_layer(x)
return self._logit_layer(x)
def compute_loss(self, features, training=False):
labels = features.pop("user_rating")
scores = self(features)
return self.task(
labels=labels,
predictions=scores,
)
cached_train = train.shuffle(100_000).batch(8192).cache()
cached_test = test.batch(4096).cache()
def run_models(use_cross_layer, deep_layer_sizes, projection_dim=None, num_runs=5):
models = []
rmses = []
for i in range(num_runs):
model = DCN(use_cross_layer=use_cross_layer,
deep_layer_sizes=deep_layer_sizes,
projection_dim=projection_dim)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate))
models.append(model)
model.fit(cached_train, epochs=epochs, verbose=False)
metrics = model.evaluate(cached_test, return_dict=True)
rmses.append(metrics["RMSE"])
mean, stdv = np.average(rmses), np.std(rmses)
return {"model": models, "mean": mean, "stdv": stdv}
epochs = 8
learning_rate = 0.01
# TODO 3: Here is your code.
dcn_result = run_models(use_cross_layer=True,
deep_layer_sizes=[192, 192])
dcn_lr_result = run_models(use_cross_layer=True,
projection_dim=20,
deep_layer_sizes=[192, 192])
dnn_result = run_models(use_cross_layer=False,
deep_layer_sizes=[192, 192, 192])
print("DCN RMSE mean: {:.4f}, stdv: {:.4f}".format(
dcn_result["mean"], dcn_result["stdv"]))
print("DCN (low-rank) RMSE mean: {:.4f}, stdv: {:.4f}".format(
dcn_lr_result["mean"], dcn_lr_result["stdv"]))
print("DNN RMSE mean: {:.4f}, stdv: {:.4f}".format(
dnn_result["mean"], dnn_result["stdv"]))
model = dcn_result["model"][0]
mat = model._cross_layer._dense.kernel
features = model._all_features
block_norm = np.ones([len(features), len(features)])
dim = model.embedding_dimension
# Compute the norms of the blocks.
for i in range(len(features)):
for j in range(len(features)):
block = mat[i * dim:(i + 1) * dim,
j * dim:(j + 1) * dim]
block_norm[i,j] = np.linalg.norm(block, ord="fro")
plt.figure(figsize=(9,9))
im = plt.matshow(block_norm, cmap=plt.cm.Blues)
ax = plt.gca()
divider = make_axes_locatable(plt.gca())
cax = divider.append_axes("right", size="5%", pad=0.05)
plt.colorbar(im, cax=cax)
cax.tick_params(labelsize=10)
_ = ax.set_xticklabels([""] + features, rotation=45, ha="left", fontsize=10)
_ = ax.set_yticklabels([""] + features, fontsize=10)
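# Hedged follow-up (not part of the original notebook): identify the strongest
# learned pairwise interaction by masking the diagonal blocks and taking the
# argmax of the remaining Frobenius norms.
off_diag = block_norm.copy()
np.fill_diagonal(off_diag, 0.0)
i, j = np.unravel_index(np.argmax(off_diag), off_diag.shape)
print("Strongest feature cross:", features[i], "x", features[j])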
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Given a list of values, there are several ways to count the frequency of each value.
Step2: You can use a Python dictionary
Step3: You can use a Counter (which is a dictionary with additional methods)
Step4: Or you can use the Hist object provided by thinkstats2
Step5: Hist provides Freq, which looks up the frequency of a value.
Step6: You can also use the bracket operator, which does the same thing.
Step7: If the value does not appear, it has frequency 0.
Step8: The Values method returns the values
Step9: So you can iterate the values and their frequencies like this
Step10: Or you can use the Items method
Step11: thinkplot is a wrapper for matplotlib that provides functions that work with the objects in thinkstats2.
Step12: As an example, I'll replicate some of the figures from the book.
Step13: Here's the histogram of birth weights in pounds. Notice that Hist works with anything iterable, including a Pandas Series. The label attribute appears in the legend when you plot the Hist.
Step14: Before plotting the ages, I'll apply floor to round down
Step15: As an exercise, plot the histogram of pregnancy lengths (column prglngth).
Step16: Hist provides Smallest, which selects the lowest values and their frequencies.
Step17: Use Largest to display the longest pregnancy lengths.
Step18: From live births, we can select first babies and others using birthord, then compute histograms of pregnancy length for the two groups.
Step19: We can use width and align to plot two histograms side-by-side.
Step20: Series provides methods to compute summary statistics
Step21: Here are the mean and standard deviation
Step22: As an exercise, confirm that std is the square root of var
Step23: Here are the mean pregnancy lengths for first babies and others
Step24: And here's the difference (in weeks)
Step26: This function computes the Cohen effect size, which is the difference in means expressed in number of standard deviations
Step27: Compute the Cohen effect size for the difference in pregnancy length for first babies and others.
Step28: Exercises
Step29: For the next few exercises, we'll load the respondent file
Step30: Make a histogram of <tt>totincr</tt> the total income for the respondent's family. To interpret the codes see the codebook.
Step31: Make a histogram of <tt>ager</tt>, the respondent's age at the time of interview.
Step32: Make a histogram of <tt>numfmhh</tt>, the number of people in the respondent's household.
Step33: Make a histogram of <tt>parity</tt>, the number of children borne by the respondent. How would you describe this distribution?
Step34: Use Hist.Largest to find the largest values of <tt>parity</tt>.
Step35: Let's investigate whether people with higher income have higher parity. Keep in mind that in this study, we are observing different people at different times during their lives, so this data is not the best choice for answering this question. But for now let's take it at face value.
Step36: Find the largest parities for high income respondents.
Step37: Compare the mean <tt>parity</tt> for high income respondents and others.
Step38: Compute the Cohen effect size for this difference. How does it compare with the difference in pregnancy length for first babies and others?
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function, division
%matplotlib inline
import numpy as np
import nsfg
import first
t = [1, 2, 2, 3, 5]
hist = {}
for x in t:
hist[x] = hist.get(x, 0) + 1
hist
from collections import Counter
counter = Counter(t)
counter
import thinkstats2
hist = thinkstats2.Hist([1, 2, 2, 3, 5])
hist
hist.Freq(2)
hist[2]
hist[4]
hist.Values()
for val in sorted(hist.Values()):
print(val, hist[val])
for val, freq in hist.Items():
print(val, freq)
import thinkplot
thinkplot.Hist(hist)
thinkplot.Config(xlabel='value', ylabel='frequency')
preg = nsfg.ReadFemPreg()
live = preg[preg.outcome == 1]
hist = thinkstats2.Hist(live.birthwgt_lb, label='birthwgt_lb')
thinkplot.Hist(hist)
thinkplot.Config(xlabel='Birth weight (pounds)', ylabel='Count')
ages = np.floor(live.agepreg)
hist = thinkstats2.Hist(ages, label='agepreg')
thinkplot.Hist(hist)
thinkplot.Config(xlabel='years', ylabel='Count')
# Solution
hist = thinkstats2.Hist(live.prglngth, label='prglngth')
thinkplot.Hist(hist)
thinkplot.Config(xlabel='weeks', ylabel='Count')
for weeks, freq in hist.Smallest(10):
print(weeks, freq)
# Solution
for weeks, freq in hist.Largest(10):
print(weeks, freq)
firsts = live[live.birthord == 1]
others = live[live.birthord != 1]
first_hist = thinkstats2.Hist(firsts.prglngth, label='first')
other_hist = thinkstats2.Hist(others.prglngth, label='other')
width = 0.45
thinkplot.PrePlot(2)
thinkplot.Hist(first_hist, align='right', width=width)
thinkplot.Hist(other_hist, align='left', width=width)
thinkplot.Config(xlabel='weeks', ylabel='Count', xlim=[27, 46])
mean = live.prglngth.mean()
var = live.prglngth.var()
std = live.prglngth.std()
mean, std
# Solution
np.sqrt(var) == std
firsts.prglngth.mean(), others.prglngth.mean()
firsts.prglngth.mean() - others.prglngth.mean()
def CohenEffectSize(group1, group2):
    """Computes Cohen's effect size for two groups.

    group1: Series or DataFrame
    group2: Series or DataFrame

    returns: float if the arguments are Series;
             Series if the arguments are DataFrames
    """
diff = group1.mean() - group2.mean()
var1 = group1.var()
var2 = group2.var()
n1, n2 = len(group1), len(group2)
pooled_var = (n1 * var1 + n2 * var2) / (n1 + n2)
d = diff / np.sqrt(pooled_var)
return d
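# Quick sanity check (added sketch, not part of the original notebook): two groups
# whose means differ by exactly one pooled standard deviation should give d = 1.
import pandas as pd
g1 = pd.Series([0.0, 1.0, 2.0])
g2 = pd.Series([1.0, 2.0, 3.0])
print(CohenEffectSize(g2, g1))  # expected: 1.0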
# Solution
CohenEffectSize(firsts.prglngth, others.prglngth)
# Solution
firsts.totalwgt_lb.mean(), others.totalwgt_lb.mean()
# Solution
CohenEffectSize(firsts.totalwgt_lb, others.totalwgt_lb)
resp = nsfg.ReadFemResp()
# Solution
hist = thinkstats2.Hist(resp.totincr)
thinkplot.Hist(hist, label='totincr')
thinkplot.Config(xlabel='income (category)', ylabel='Count')
# Solution
hist = thinkstats2.Hist(resp.ager)
thinkplot.Hist(hist, label='ager')
thinkplot.Config(xlabel='age (years)', ylabel='Count')
# Solution
hist = thinkstats2.Hist(resp.numfmhh)
thinkplot.Hist(hist, label='numfmhh')
thinkplot.Config(xlabel='number of people', ylabel='Count')
# Solution
# This distribution is positive-valued and skewed to the right.
hist = thinkstats2.Hist(resp.parity)
thinkplot.Hist(hist, label='parity')
thinkplot.Config(xlabel='parity', ylabel='Count')
# Solution
hist.Largest(10)
# Solution
rich = resp[resp.totincr == 14]
hist = thinkstats2.Hist(rich.parity)
thinkplot.Hist(hist, label='parity')
thinkplot.Config(xlabel='parity', ylabel='Count')
# Solution
hist.Largest(10)
# Solution
notrich = resp[resp.totincr < 14]
rich.parity.mean(), notrich.parity.mean()
# Solution
# This effect is about 10 times stronger than the difference in pregnancy length.
# But remembering the design of the study, we should not make too much of this
# apparent effect.
CohenEffectSize(rich.parity, notrich.parity)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The recommended way to print in Python is the built-in print() function
Step2: Notice that each call to print() produces its output on a new line, and that these calls are not stored in the Out[] list
Step3: Python versions
Step4: Now we can use print(pi) even in Python 2, thanks to the __future__ import; the old style would be print pi
Step5: Control structures
Step6: Python Data Structures
Step7: Math and Numeric Arrays
|
<ASSISTANT_TASK:>
Python Code:
# setting a variable
a = 1.23
# although just writing the variable will show its value, this is not the
# recommended way, because per cell only the last expression is displayed
# and stored in the Out[] list that the notebook maintains
a
a+1
print(a)
print(type(a))
a="1.23"
print(a,type(a))
# checking the value of the variable
a
from __future__ import print_function
pi = 3.1415
print("pi=",pi)
print("pi=%15.10f" % pi)
# for reference, here is the old style of printing in python2
# print "pi=",pi
n = 1
if n > 0:
print("yes, n>0")
else:
print("not")
for i in [2,4,n,6]:
print("i=",i)
print("oulala, after this for loop, i=",i)
n = 10
while n>0:
# n = n - 2
print("whiling",n)
n = n - 2
print("last n",n)
a1 = [1,2,3,4]
a2 = range(1,5)
print(a1)
print(a2)
a2 = ['a',1,'cccc']
print(a1)
print(a2)
a3 = range(12,20,2)
print(a3)
a1=range(3)
a2=range(1,4)
print(a1,a2)
list(a1) + list(a2)  # in Python 3, range objects cannot be concatenated with +, so convert to lists first
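# Brief aside (added sketch, not part of the original notebook): in Python 3,
# range() returns a lazy range object rather than the list it returned in Python 2.
print(type(range(3)))   # <class 'range'>
print(list(range(3)))   # [0, 1, 2]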
import math
import numpy as np
math.pi
np.pi
# %matplotlib inline
import matplotlib.pyplot as plt
a=np.arange(0,1,0.01)
b = a*a
c = np.sqrt(a)
plt.plot(a,b,'-bo',label='b')
plt.plot(a,c,'-ro',label='c')
plt.legend()
plt.plot(a,a+1)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Baseline is static, a straight line for each input - Train (reduced)
Step2: Baseline is static, a straight line for each input - Train Full
|
<ASSISTANT_TASK:>
Python Code:
# MyBaseline and the npz_* dataset paths are assumed to be defined earlier in the notebook
bltest = MyBaseline(npz_path=npz_test)
bltest.getMSE()
bltest.renderMSEs()
plt.show()
bltest.getHuberLoss()
bltest.renderHuberLosses()
plt.show()
bltest.get_dtw()
bltest.renderRandomTargetVsPrediction()
plt.show()
cur_baseline = MyBaseline(npz_path=npz_train_reduced)
cur_baseline.getMSE()
cur_baseline.renderMSEs()
plt.show()
cur_baseline.getHuberLoss()
cur_baseline.renderHuberLosses()
plt.show()
cur_baseline.get_dtw()
cur_baseline.renderRandomTargetVsPrediction()
plt.show()
cur_baseline = MyBaseline(npz_path=npz_train)
cur_baseline.getMSE()
cur_baseline.renderMSEs()
plt.show()
cur_baseline.getHuberLoss()
cur_baseline.renderHuberLosses()
plt.show()
cur_baseline.get_dtw()
cur_baseline.renderRandomTargetVsPrediction()
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This notebook extends the general equilibrium farm distribution models found in the geqfarm module to also include the possibility that the 'smallholder' sector is represented by a commune, which we model with the historical Russian Mir in mind.
Step2: Model Scenarios
Step3: Useful math preliminaries
Step4: The Economy has default resource endowment
Step5: So as expected, the efficient (competitive) resource allocation has every farmer operating a farm of equal unit size. The smallhold_eq method gives the competitive (efficient) allocation for a subeconomy with XE = (TE,LE) endowments and skill distribution s. It returns a named tuple with equilibrium factor prices and X=[T,L] allocations.
Step6: Since everyone started out with the same skill $z_i$ and the same DRS production function, and land and labor are assumed perfectly tradable, it makes sense that in equilibrium everyone would operate a farm of the same size.
Step7: Let's create a little heterogeneity in the commune
Step8: The DE scenario (Latifundia)
Step9: Thus far we've said nothing of the ownership of land or labor. Let's assume every household has one unit of labor but that the 'landlord' class (which WLOG we index to be the last skill group s[-1]) owns fraction $\theta$ of the land. Assuming a uniform distribution of households across skills, every skill group has Lbar/N households, and so there are that many landlords who act as a single cartel.
Step10: In the example above the 'landlord' farmer was in every way the same as the other farmers, the only difference being that he owned more land (fraction $\theta$ of the total). He had the same skill parameter as every other farmer, so in an efficient equilibrium his operational farm size should be the same as every other farmer's. The plot above shows how monopoly power (which rises with $\theta$) allows the monopolist to distort the economy -- he withholds land from the lease market to drive up the land rental rate and, since this deprives the 'fringe' of farmers of land, lowers the marginal product of labor on each smallholder farm, increasing the smallholder labor supply to the market, which pushes down the wage. Hence we see how at higher levels of $\theta$ the landlord expands the size of his estate and exercises monopsony power over wages.
Step11: Let's recalculate the new equilibria under the different scenarios.
Step12: Given that he is more skilled than before, the landlord's efficient scale of production has increased. This lowers the cost of being big. Interestingly, at low $\theta$ this leads the landlord to hire less land and labor ...
Step14: NEXT STEPS
|
<ASSISTANT_TASK:>
Python Code:
%reload_ext watermark
%watermark -u -n -t
import sys
sys.path.append('../')
from geqfarm import *
import numpy as np
from matplotlib import pyplot as plt
import seaborn as sns
plt.rcParams["figure.figsize"] = (10, 8)
np.set_printoptions(precision=4)
%matplotlib inline
%load_ext autoreload
%autoreload 2
N = 5
E = Economy(N) # an instance takes N length as parameter
s = np.ones(N) # skill distribution in the economy
E.ALPHA = 0.5
E.GAMMA = 0.90
E.TBAR,E.LBAR
Xc = E.smallhold_eq([E.TBAR,E.LBAR],s)
Xc
N = 5
mir = Economy(N)
mir.s
mir.s = np.array([0.85, 0.85, 0.95, 0.95, 1.0])
zhat = mir.s[0:N-1].sum()/(N-1)
zR = mir.s[-1]
zhat, zR
tau = (zhat/zR)**(1/(1-mir.GAMMA))
tau
mir.GAMMA
(Xrc,Xr,wc,wr) = scene_print(E,10, detail=True)
factor_plot(E,Xrc,Xr)
TLratio_plot(E,Xrc,Xr)
E.s[-1]=1.10
(Xrc,Xr,wc,wr) = scene_print(E,10,detail=True)
factor_plot(E,Xrc,Xr)
TLratio_plot(E,Xrc,Xr)
class MirEconomy(Economy):
    """Subclass of Economy, but with Mir rules in the subeconomy."""
def __init__(self, N): # constructor to set initial parameters.
super(MirEconomy, self).__init__(N) # inherit properties
# if None supplied use defaults
mir = MirEconomy(N)
mir.print_params()
DE = Economy(5)
DE.smallhold_eq([E.TBAR, E.LBAR], s, analytic=True)
(Xrc, Xr, wc, wr) = scene_print(DE, 10, detail=True)
factor_plot(E, Xrc, Xr)
DD = MirEconomy(5)
print(DD.prodn(np.array([10, 10]), np.array([1, 1.05])))
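# Hedged extra step (not part of the original notebook): since MirEconomy
# inherits smallhold_eq from Economy, we can compute its competitive
# allocation and compare it directly with the baseline economy above.
Xmir = DD.smallhold_eq([DD.TBAR, DD.LBAR], DD.s)
print(Xmir)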
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Family
Step7: 1.4. Basic Approximations
Step8: 2. Key Properties --> Resolution
Step9: 2.2. Canonical Horizontal Resolution
Step10: 2.3. Range Horizontal Resolution
Step11: 2.4. Number Of Vertical Levels
Step12: 2.5. High Top
Step13: 3. Key Properties --> Timestepping
Step14: 3.2. Timestep Shortwave Radiative Transfer
Step15: 3.3. Timestep Longwave Radiative Transfer
Step16: 4. Key Properties --> Orography
Step17: 4.2. Changes
Step18: 5. Grid --> Discretisation
Step19: 6. Grid --> Discretisation --> Horizontal
Step20: 6.2. Scheme Method
Step21: 6.3. Scheme Order
Step22: 6.4. Horizontal Pole
Step23: 6.5. Grid Type
Step24: 7. Grid --> Discretisation --> Vertical
Step25: 8. Dynamical Core
Step26: 8.2. Name
Step27: 8.3. Timestepping Type
Step28: 8.4. Prognostic Variables
Step29: 9. Dynamical Core --> Top Boundary
Step30: 9.2. Top Heat
Step31: 9.3. Top Wind
Step32: 10. Dynamical Core --> Lateral Boundary
Step33: 11. Dynamical Core --> Diffusion Horizontal
Step34: 11.2. Scheme Method
Step35: 12. Dynamical Core --> Advection Tracers
Step36: 12.2. Scheme Characteristics
Step37: 12.3. Conserved Quantities
Step38: 12.4. Conservation Method
Step39: 13. Dynamical Core --> Advection Momentum
Step40: 13.2. Scheme Characteristics
Step41: 13.3. Scheme Staggering Type
Step42: 13.4. Conserved Quantities
Step43: 13.5. Conservation Method
Step44: 14. Radiation
Step45: 15. Radiation --> Shortwave Radiation
Step46: 15.2. Name
Step47: 15.3. Spectral Integration
Step48: 15.4. Transport Calculation
Step49: 15.5. Spectral Intervals
Step50: 16. Radiation --> Shortwave GHG
Step51: 16.2. ODS
Step52: 16.3. Other Flourinated Gases
Step53: 17. Radiation --> Shortwave Cloud Ice
Step54: 17.2. Physical Representation
Step55: 17.3. Optical Methods
Step56: 18. Radiation --> Shortwave Cloud Liquid
Step57: 18.2. Physical Representation
Step58: 18.3. Optical Methods
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Step60: 20. Radiation --> Shortwave Aerosols
Step61: 20.2. Physical Representation
Step62: 20.3. Optical Methods
Step63: 21. Radiation --> Shortwave Gases
Step64: 22. Radiation --> Longwave Radiation
Step65: 22.2. Name
Step66: 22.3. Spectral Integration
Step67: 22.4. Transport Calculation
Step68: 22.5. Spectral Intervals
Step69: 23. Radiation --> Longwave GHG
Step70: 23.2. ODS
Step71: 23.3. Other Flourinated Gases
Step72: 24. Radiation --> Longwave Cloud Ice
Step73: 24.2. Physical Reprenstation
Step74: 24.3. Optical Methods
Step75: 25. Radiation --> Longwave Cloud Liquid
Step76: 25.2. Physical Representation
Step77: 25.3. Optical Methods
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Step79: 27. Radiation --> Longwave Aerosols
Step80: 27.2. Physical Representation
Step81: 27.3. Optical Methods
Step82: 28. Radiation --> Longwave Gases
Step83: 29. Turbulence Convection
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Step85: 30.2. Scheme Type
Step86: 30.3. Closure Order
Step87: 30.4. Counter Gradient
Step88: 31. Turbulence Convection --> Deep Convection
Step89: 31.2. Scheme Type
Step90: 31.3. Scheme Method
Step91: 31.4. Processes
Step92: 31.5. Microphysics
Step93: 32. Turbulence Convection --> Shallow Convection
Step94: 32.2. Scheme Type
Step95: 32.3. Scheme Method
Step96: 32.4. Processes
Step97: 32.5. Microphysics
Step98: 33. Microphysics Precipitation
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Step100: 34.2. Hydrometeors
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Step102: 35.2. Processes
Step103: 36. Cloud Scheme
Step104: 36.2. Name
Step105: 36.3. Atmos Coupling
Step106: 36.4. Uses Separate Treatment
Step107: 36.5. Processes
Step108: 36.6. Prognostic Scheme
Step109: 36.7. Diagnostic Scheme
Step110: 36.8. Prognostic Variables
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Step112: 37.2. Cloud Inhomogeneity
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Step114: 38.2. Function Name
Step115: 38.3. Function Order
Step116: 38.4. Convection Coupling
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Step118: 39.2. Function Name
Step119: 39.3. Function Order
Step120: 39.4. Convection Coupling
Step121: 40. Observation Simulation
Step122: 41. Observation Simulation --> Isscp Attributes
Step123: 41.2. Top Height Direction
Step124: 42. Observation Simulation --> Cosp Attributes
Step125: 42.2. Number Of Grid Points
Step126: 42.3. Number Of Sub Columns
Step127: 42.4. Number Of Levels
Step128: 43. Observation Simulation --> Radar Inputs
Step129: 43.2. Type
Step130: 43.3. Gas Absorption
Step131: 43.4. Effective Radius
Step132: 44. Observation Simulation --> Lidar Inputs
Step133: 44.2. Overlap
Step134: 45. Gravity Waves
Step135: 45.2. Sponge Layer
Step136: 45.3. Background
Step137: 45.4. Subgrid Scale Orography
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Step139: 46.2. Source Mechanisms
Step140: 46.3. Calculation Method
Step141: 46.4. Propagation Scheme
Step142: 46.5. Dissipation Scheme
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Step144: 47.2. Source Mechanisms
Step145: 47.3. Calculation Method
Step146: 47.4. Propagation Scheme
Step147: 47.5. Dissipation Scheme
Step148: 48. Solar
Step149: 49. Solar --> Solar Pathways
Step150: 50. Solar --> Solar Constant
Step151: 50.2. Fixed Value
Step152: 50.3. Transient Characteristics
Step153: 51. Solar --> Orbital Parameters
Step154: 51.2. Fixed Reference Date
Step155: 51.3. Transient Method
Step156: 51.4. Computation Method
Step157: 52. Solar --> Insolation Ozone
Step158: 53. Volcanos
Step159: 54. Volcanos --> Volcanoes Treatment
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'messy-consortium', 'sandbox-3', 'atmos')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
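# Illustrative pattern (hypothetical value, for demonstration only): each
# property below is documented by selecting it with DOC.set_id(...) and then
# supplying a value with DOC.set_value(...), e.g.
#     DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
#     DOC.set_value("MyAtmosModel v1.0")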
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
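# Example of a completed property (values below are hypothetical, for illustration only):
# DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# DOC.set_value("stratospheric aerosols optical thickness")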
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Converting to a list
Step2: Set comprehension
Step3: Standard operations
Step4: Removing an element
Step5: Size of a set
Step6: Iterating over a set
Step7: Operations on two sets
Step8: Difference
Step9: Intersection
Step10: Symmetric difference
Step11: Comparison
Step12: Dictionary (dict)
Step13: Standard operations
Step14: Updating an element
Step15: Removing an element
Step16: Accessing elements
Step17: Size of a dictionary
Step18: Iterating over a dictionary
Step19: Usage example
|
<ASSISTANT_TASK:>
Python Code:
a = set()
a.add(3) # Add an element, O(1)
b = {5}
print(a, b)
c = {4, 4, 2, 6} # Duplicate elements are stored only once
print(c) # Element order is not guaranteed
lst = [8, 1, 3, 3, 8]
a = set(lst) # Convert the list to a set (duplicates are dropped)
print(a)
print(list(a)) # Convert the set back to a list
a = {i ** 2 for i in range(10)}
print(a)
b = {4, 5}
print(b)
print(5 in b)
if 8 in b:
print("YES")
else:
print("NO")
a = {1, 2, 4, 5, 6}
a.discard(5)
print(a)
a.remove(4)
print(a)
a.discard(8000) # Silently ignored if the element is absent
print(a)
a.remove(8000) # Raises KeyError if the element is absent
print(a)
b = {4, 5}
print(len(b))
a = {1, 2, 6, 9, 3}
for elem in a:
print(elem, end=" ")
print()
while a:
    elem = a.pop() # Removes and returns an arbitrary element
print(elem, end=" ")
a = {2, 3, 4}
b = {5, 2}
print(a | b)
a = {2, 3, 4}
b = {5, 2}
print(a - b)
a = {2, 3, 4}
b = {5, 2}
print(a & b)
a = {2, 3, 4}
b = {5, 2}
print(a ^ b)
a = {2, 3, 4}
b = {3, 4}
# Comparisons (subset / superset)
print(a < b)
print(a > b)
print(a >= b)
print(a == b)
a = {}
b = {1: 3, "4": 8}
c = dict()
d = dict(((1, 2), ("a", 13)))
e = dict(one=1, two=2)
print(a, b, c, d, e, sep="\n", end="\n\n")
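# A dict comprehension, mirroring the set comprehension shown earlier (sketch):
squares = {i: i ** 2 for i in range(5)}
print(squares)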
a = {}
a['hello'] = 'world' # Add a key, O(1)
print(a)
a = {'one': 1, 'two': 2}
a['one'] = 8 # Update a value, O(1)
print(a)
a = {'one': 1, 'two': 2}
a.pop('two') # Remove a key, O(1)
print(a)
b = {1: 3, '4': 8, 'a': 13}
print(b[1], b['a'], b.get('4')) # Existing keys
print(b.get('8'), b.get('three', 0)) # Missing keys, with a default value for get()
print(b[0]) # b[0] raises KeyError
a = {"a": 18, "b": 10, "c": 13}
print(len(a))
a = {"a": 18, "b": 10, "c": 13}
for x, y in a.items():
print("%s: %s" % (x, y))
print()
for x in a.keys():
print(x, end=" ")
print()
for x in a.values():
print(x, end=" ")
print()
to_sort = ["5", "2", "2", "2", "1", "1", "1", "0", "3", "3", "2", "2", "5", "5", "5", "3", "1"]
count_sort = {}
for elem in to_sort:
count_sort[elem] = count_sort.get(elem, 0) + 1
print(count_sort)
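# Alternative (sketch): the standard library's collections.Counter does the same counting.
from collections import Counter
print(Counter(to_sort))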
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The environment
Step2: Running dynamic presentations
Step5: To close this instances press control 'c' in the ipython notebook terminal console
Step6: About me
Step7: \newpage
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import YouTubeVideo
YouTubeVideo('F4rFuIb1Ie4')
## PDF output using pandoc
import os
### Export this notebook as markdown
commandLineSyntax = 'ipython nbconvert --to markdown 20150916_OGC_Reuse_under_licence.ipynb'
print (commandLineSyntax)
os.system(commandLineSyntax)
### Export this notebook and the document header as PDF using Pandoc
commandLineSyntax = 'pandoc -f markdown -t latex -N -V geometry:margin=1in DocumentHeader.md 20150916_OGC_Reuse_under_licence.md --filter pandoc-citeproc --latex-engine=xelatex --toc -o 20150916_OGC_Reuse_under_licence.pdf '
os.system(commandLineSyntax)
### Remove cruft from the pdf
commandLineSyntax = 'pdftk 20150916_OGC_Reuse_under_licence.pdf cat 1-4 15-end output outfile_p12-15.pdf'
os.system(commandLineSyntax)
%install_ext https://raw.githubusercontent.com/rasbt/python_reference/master/ipython_magic/watermark.py
%load_ext watermark
%watermark -a "Anthony Beck" -d -v -m -g
#List of installed conda packages
!conda list
#List of installed pip packages
!pip list
!ipython nbconvert 20150916_OGC_Reuse_under_licence.ipynb --to slides --post serve
#Future proof python 2
from __future__ import print_function #For python3 print syntax
from __future__ import division
# def
import IPython.core.display
# A function to collect user input - ipynb_input(varname='username', prompt='What is your username')
def ipynb_input(varname, prompt=''):
    """Prompt user for input and assign string val to given variable name."""
    js_code = ("""
        var value = prompt("{prompt}","");
        var py_code = "{varname} = '" + value + "'";
        IPython.notebook.kernel.execute(py_code);
    """).format(prompt=prompt, varname=varname)
return IPython.core.display.Javascript(js_code)
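# Example usage (sketch): in a classic Jupyter/IPython notebook this opens a JS prompt
# in the browser and assigns the response to the variable `username`.
# ipynb_input('username', 'What is your username')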
# inline
%pylab inline
from IPython.display import YouTubeVideo
YouTubeVideo('jUzGF401vLc')
from IPython.display import YouTubeVideo
YouTubeVideo('tkRB5Rp1_W4')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Loading a csv file into a Pandas DataFrame
Step2: release_dates.csv
Step3: titles
Step4: df.head(n)
Step5: df.tail(n)
Step6: How many records are in the dataset?
Step7: What are the possible values for the type column?
Step8: df[col].unique()
Step9: How many actors and actresses are in the dataset?
Step10: Column operations
Step11: Arithmetic operations
Step12: Comparisons
Step13: Filtering
Step14: By columns
Step15: By null or non-null values
Step16: By boolean vector
Step17: Filling null values
Step18: By column
Step19: How many actors performed in each year?
Step20: What was the difference between the number of actors and actresses who performed in each decade?
Step21: Dates
Step22: What percentage of films were released on a Friday?
Step23: Merge
Step24: What are the name and year of the oldest film?
Step25: How many films are from 1960?
Step26: How many films are from each year of the 1970s?
Step27: How many films have been released from the year you were born until today?
Step28: What are the names of the films up to 1906?
Step29: What are the 15 most common film names?
Step30: In how many films did Judi Dench act?
Step31: List the films in which Judi Dench acted as actor number 1, sorted by year.
Step32: List the actors in the 1972 version of Sleuth in order of rank n.
Step33: Which actors acted the most in 1985?
Step34: SciKit Learn (http
Step35: iris.csv
Step36: Training a Decision Tree model
Step37: Saving the model
Step38: Loading the model
Step39: Prediction for test cases
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import matplotlib
%matplotlib inline
%%time
cast = pd.DataFrame.from_csv('data/cast.csv', index_col=None, encoding='utf-8')
%%time
release_dates = pd.read_csv('data/release_dates.csv', index_col=None,
parse_dates=['date'], infer_datetime_format=True)
cast.columns
titles = cast[['title', 'year']].drop_duplicates().reset_index(drop=True)
titles.head()
cast.head()
release_dates.head()
cast.tail()
release_dates.tail()
len(cast), len(release_dates)
cast['type']
cast.type.head()
c = 'type'
cast[c].head() #cast.c.head() não vai funcionar!
cast['type'].unique()
cast['type'].value_counts()
h = cast.head()
h
h.year // 10 * 10 # Decade
h
h.year > 2000
cast[cast.character == 'Macduff Child']
h[['title', 'year']]
h[h.n.isnull()]
h[h.n.notnull()]
h
h[[True, False, True, False, False]]
h.year > 2000
h[h.year > 2000]
h[(h.year > 2000) & (h.year < 2016)] # & for 'and', | for 'or'
h.fillna(0)
h.n.fillna(0)
cast.year.value_counts()#.head(10)
cast.year.value_counts().plot()
cast.year.value_counts().sort_index()#.head()
cast.year.value_counts().sort_index().plot()
bins = pd.np.arange(1880, 2040, 2)
cast.year.hist(bins=bins)
g = cast.groupby([cast.year // 10 * 10, 'type']).size()
g
u = g.unstack()
u
a = u['actor'] - u['actress']
a
a.plot()
release_dates.head()
release_dates.date.dt.year.head()
release_dates.date.dt.dayofweek.head() # Monday=0, Sunday=6
len(release_dates[release_dates.date.dt.dayofweek == 4])*100/len(release_dates)
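# Equivalent view (sketch): the share of releases on each weekday (Friday is index 4).
release_dates.date.dt.dayofweek.value_counts(normalize=True).sort_index()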
cast.head()
release_dates.head()
c = cast[cast.name == 'Ellen Page']
c = c.merge(release_dates)
c.head()
titles.sort_values('year').head(1)
len(titles[titles.year == 1960])
for y in range(1970, 1980):
print(y, (titles.year == y).sum())
titles[titles.year // 10 == 197].year.value_counts().sort_index()
titles.groupby('year').size().loc[1970:1979]
birth = 1990
len(titles[(titles.year >= birth) & (titles.year <= 2016)])
titles[titles.year <= 1906][['title']]
titles.year.min()
titles.set_index('year').sort_index().loc[1894:1906]
titles.title.value_counts().head(15)
len(cast[cast.name == 'Judi Dench'])
c = cast
c = c[c.name == 'Judi Dench']
c = c[c.n == 1]
c.sort_values('year')
c = cast
c = c[c.name == 'Judi Dench']
c
c = cast
c = c[c.title == 'Sleuth']
c = c[c.year == 1972]
c.sort_values('n')
cast[cast.year == 1985].name.value_counts().head(10)
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix
from sklearn.cross_validation import train_test_split
import pickle
import time
time1=time.strftime('%Y-%m-%d_%H-%M-%S')
iris = pd.DataFrame.from_csv('iris.csv', index_col=None, encoding='utf-8')
iris.columns
target_data = iris['species']
features = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width']
feature_data = iris[features]
features_train, features_test, target_train, target_test = train_test_split(feature_data, target_data, test_size=0.33, random_state=42)
dt = DecisionTreeClassifier()
dt = dt.fit(features_train, target_train)
with open('iris-dt_'+time1, 'bw') as f:
pickle.dump(dt, f)
with open('iris-dt_'+time1, 'br') as f:
    dt = pickle.load(f)  # reassign the loaded model so it is actually used below
predictions = dt.predict(features_test)
confusion_matrix(target_test, predictions)
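# Additional metric (sketch): overall accuracy on the held-out test set.
from sklearn.metrics import accuracy_score
print(accuracy_score(target_test, predictions))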
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The code follows the same pattern for the other professions as well
Step2: Required qualifications
Step3: The approach is the same for searching for all the other skills as well
|
<ASSISTANT_TASK:>
Python Code:
from BeautifulSoup import *
import requests
import re  # required by the re.findall calls below
url = "https://careercenter.am/ccidxann.php"
response = requests.get(url)
page = response.text
soup = BeautifulSoup(page)
tables = soup.findAll("table")
my_table = tables[0]
rows = my_table.findAll('tr')
data_list = []
for i in rows:
columns = i.findAll('td')
for j in columns:
data_list.append(j.text)
even = []
for i in range(len(data_list)):
if i%2 == 0:
even.append(data_list[i])
list_manager=[]
for i in even:
manager=re.findall('.*\s*(?:M|m)anager\s*.*',i)
if len(manager)==1:
list_manager.append(manager)
for i in range(len(list_manager)):
list_manager[i]=list_manager[i][0]
len(list_manager)
list_marketing=[]
for i in even:
marketing=re.findall('.*\s*(?:M|m)arketing\s*.*',i)
if len(marketing)==1:
list_marketing.append(marketing)
for i in range(len(list_marketing)):
list_marketing[i]=list_marketing[i][0]
len(list_marketing )
a_tags = my_table.findAll('a')
list_link = []
for i in a_tags:
    d = i.get('href')
    list_link.append(d)
new_list = list_link[1:]  # skip the first link, which is not a job posting
new_link=[]
for i in new_list:
m='https://careercenter.am/'+i
new_link.append(m)
list_soup=[]
for i in range(0,len(new_link)):
response=requests.get(new_link[i])
page=response.text
soup=BeautifulSoup(page)
list_soup.append(soup)
exp_list=[]
for i in list_soup:
m=i.findAll('p')
for j in m:
exp_list.append(j.text)
list_exp=[]
for i in exp_list:
exp=re.findall('[0-9]\syears\sof.*(?:E|e)xperience\s*',i)
if len(exp)==1:
list_exp.append(exp)
for i in range(len(list_exp)):
list_exp[i]=list_exp[i][0]
list_years=[]
for i in list_exp:
exp=re.findall('[0-9]',i)
if len(exp)==1:
list_years.append(exp)
for i in range(len(list_years)):
list_years[i]=list_years[i][0]
int_list_years=[]
for i in list_years:
m=int(i)
int_list_years.append(m)
sum=0
for j in int_list_years:
sum=sum+j
mean=sum/len(int_list_years)
print 'Average experience: ' + str(mean)
sum=0
for j in int_list_years:
sum=sum+j
mean=sum/len(new_link)
print 'Overall average experience: ' + str(mean)
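# Note (sketch): in Python 2 the division above truncates to an integer;
# cast to float for the exact average.
print 'Exact overall average:', float(sum) / len(new_link)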
list_english=[]
for i in exp_list:
exp=re.findall('\s*English\s*[a-z]*\s*[a-z]*',i)
if len(exp)==1:
list_english.append(exp)
for i in range(len(list_english)):
list_english[i]=list_english[i][0]
len(list_english)
print 'Jobs requiring English: ' + str(len(list_english))
list_russian=[]
for i in exp_list:
exp=re.findall('\s*Russian\s*[a-z]*\s*[a-z]*',i)
if len(exp)==1:
list_russian.append(exp)
print 'Jobs requiring Russian: ' + str(len(list_russian))
exp_company=[]
for i in list_soup:
m=i.findAll('p',attrs={'align':'center'})
for j in m:
exp_company.append(j.text)
list_zangi=[]
for i in exp_company:
exp=re.findall('Zangi',i)
if len(exp)==1:
list_zangi.append(exp)
print 'Job openings announced by Zangi: ' + str(len(list_zangi))
list_llc=[]
for i in exp_company:
exp=re.findall('.*\s*Tech[a-z]*',i)
if len(exp)==1:
list_llc.append(exp)
print '"Tech" անունով ընկերությունների կողմից հայատարարված աշխատատեղեր՝ ' + str(len(list_llc))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Often, when you deal with quantities that represent the sum of an indicator's values over each day or each working day, it makes sense to divide the whole series by the number of days in the period before forecasting. For example, if you divide the series of milk production per cow by the number of days in a month, the resulting value changes more smoothly, and it is easier to build a forecasting model for it.
Step2: For the series of average daily milk per cow from the previous question, let's use the Dickey-Fuller test to choose the order of differencing at which the series becomes stationary.
Step3: For the stationary series from the previous question, plot the autocorrelation function.
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import division
import numpy as np
import pandas as pd
import statsmodels.api as sm
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
#Reading milk data
milk = pd.read_csv('monthly-milk-production.csv', ';', index_col=['month'], parse_dates=['month'], dayfirst=True)
milk.info()
milk.head()
_ = plt.plot(milk.index, milk.values)
sm.tsa.stattools.adfuller(milk.values.flatten())
milk['daily'] = milk.milk.values.flatten() / milk.index.days_in_month
_ = plt.plot(milk.index, milk.daily)
milk.daily.values.sum()
milk['daily_diff1'] = milk.daily - milk.daily.shift(1)  # first differences
_ = plt.plot(milk.index, milk.daily_diff1)
sm.tsa.stattools.adfuller(milk.daily_diff1.dropna())
milk['daily_diff12'] = milk.daily - milk.daily.shift(12)  # seasonal (lag-12) differences
_ = plt.plot(milk.index, milk.daily_diff12)
sm.tsa.stattools.adfuller(milk.daily_diff12.dropna())
milk['daily_diff12_1'] = milk.daily_diff12 - milk.daily_diff12.shift(1)  # seasonal plus first differences
_ = plt.plot(milk.index, milk.daily_diff12_1)
sm.tsa.stattools.adfuller(milk.daily_diff12_1.dropna())
sm.graphics.tsa.plot_acf(milk.daily_diff12_1.dropna().values.squeeze(), lags=50)
sm.graphics.tsa.plot_pacf(milk.daily_diff12_1.dropna().values.squeeze(), lags=50);
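# Sketch: fit a seasonal ARIMA and inspect its AIC. The (p,d,q)(P,D,Q,s) orders here
# are illustrative assumptions, not values read off the plots above.
sarima = sm.tsa.statespace.SARIMAX(milk.daily, order=(1, 1, 1),
                                   seasonal_order=(1, 1, 1, 12)).fit()
print(sarima.aic)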
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Characterize Parameter Space
Step2: Suggested Changes
Step3: Characterize Data Space
Step4: Solve Problem
Step5: Store Data for Retrieval in other Jupyter Notebooks
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import bet.calculateP.simpleFunP as simpleFunP
import bet.calculateP.calculateP as calculateP
import bet.sample as samp
import bet.sampling.basicSampling as bsam
from myModel import my_model
from IPython.display import Image
sampler = bsam.sampler(my_model)
# Initialize 3-dimensional input parameter sample set object
input_samples = samp.sample_set(2)
# Set parameter domain
input_samples.set_domain(np.repeat([[0.0, 1.0]], 2, axis=0))
# Generate samples on the parameter space
randomSampling = True
if randomSampling is True:
input_samples = sampler.random_sample_set('random', input_samples, num_samples=1E3)
else:
input_samples = sampler.regular_sample_set(input_samples, num_samples_per_dim=[30, 30])
MC_assumption = True
# Estimate volumes of Voronoi cells associated with the parameter samples
if MC_assumption is False:
input_samples.estimate_volume(n_mc_points=1E5)
else:
input_samples.estimate_volume_mc()
# Create the discretization object using the input samples
my_discretization = sampler.compute_QoI_and_create_discretization(input_samples,
savefile = 'Validation_discretization.txt.gz')
num_samples_discretize_D = 1
num_iid_samples = 1E5
Partition_set = samp.sample_set(2)
Monte_Carlo_set = samp.sample_set(2)
Partition_set.set_domain(np.repeat([[0.0, 1.0]], 2, axis=0))
Monte_Carlo_set.set_domain(np.repeat([[0.0, 1.0]], 2, axis=0))
Partition_discretization = sampler.create_random_discretization('random',
Partition_set,
num_samples=num_samples_discretize_D)
Monte_Carlo_discretization = sampler.create_random_discretization('random',
Monte_Carlo_set,
num_samples=num_iid_samples)
# Compute the simple function approximation to the distribution on the data space
simpleFunP.user_partition_user_distribution(my_discretization,
Partition_discretization,
Monte_Carlo_discretization)
# Calculate probabilities
calculateP.prob(my_discretization)
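# Optional (sketch, assuming BET's plotP utilities): visualize 2D marginal probabilities.
import bet.postProcess.plotP as plotP
(bins, marginals2D) = plotP.calculate_2D_marginal_probs(input_samples, nbins=20)
plotP.plot_2D_marginal_probs(marginals2D, bins, input_samples, filename="validation")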
%store my_discretization
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
Step2: Now we'll add datasets
Step3: Run Compute
Step4: If we then try to access the model, we see that there is instead a single parameter that is a placeholder - this parameter stores information on how to check the progress of the run_compute job and how to load the resulting model once it's complete
Step5: Re-attaching to a Job
Step6: If we want, we can even save the Bundle and load it later to retrieve the results. In this case where the job is being run in a different Python thread but on the same machine, you cannot, however, exit Python or restart your machine.
Step7: And at any point we can choose to "re-attach". If the job isn't yet complete, we'll be in a wait loop until it is. Once the job is complete, the new model will be loaded and accessible.
|
<ASSISTANT_TASK:>
Python Code:
!pip install -I "phoebe>=2.1,<2.2"
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
b.add_dataset('lc', times=np.linspace(0,20,501))
b.run_compute(detach=True, model='mymodel')
b['mymodel']
b['mymodel'].status
b.save('test_detach.bundle')
b = phoebe.Bundle.open('test_detach.bundle')
b['mymodel'].status
b['mymodel'].attach()
b['mymodel']
axs, artists = b['mymodel'].plot()
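# Accessing the synthetic fluxes directly (sketch; assumes PHOEBE's auto-generated
# dataset tag 'lc01' for the light curve added above):
print(b['fluxes@lc01@mymodel'])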
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Content under Creative Commons Attribution license CC-BY 4.0, code under MIT license (c)2014 M.Z. Jorisch
Step3: The Kepler_eqn function uses the eccentricity, $e$, and mean anomaly, $M$, and then uses the iterative method to spit out our value of $E$.
Step5: Now that we have a function to solve for the orbital trajectory of an elliptical orbit (which Mars and Jupiter both have) we have to create a function for the disturbing acceleration caused by the third body, Jupiter.
Step6: Now that we have our Kepler function, our elliptical orbit function, and our acceleration function, we can set up our initial conditions. After that we will be able to get our final radial and velocity vectors for the osculating and perturbed orbits in addition to Jupiter's orbit.
Step7: Next we will have a for-loop that will give us our solution at every time step for each of our orbits. In this case, we will use a final time of 4000 days. Each time step will be one day (in seconds), and we will be able to see the trajectories over that time period.
Step8: We mentioned earlier that sometimes the term $1 - \frac{r_osc ^3}{r^3}$ can cause issues because towards the beginning of orbit the two terms are approximately equal and can cause the solution to blow up, but this can be solved as follows
Step9: Looking at our first plot we can see that there is a change in Mars' orbit due to the gravitational pull from Jupiter as it flies by!
|
<ASSISTANT_TASK:>
Python Code:
from matplotlib import pyplot
import numpy
from numpy import linalg
%matplotlib inline
from matplotlib import rcParams
rcParams['font.family'] = 'serif'
rcParams['font.size'] = 16
def Kepler_eqn(e, M):
    """
    Takes the eccentricity and mean anomaly of an orbit to solve Kepler's equation
Parameters:
----------
e : float
eccentricity of orbit
M : float
Mean anomaly of orbit
Returns:
-------
E : float
Eccentric anomaly
E = M + e * numpy.sin(M) # eccentric anomoaly
fofE = E - e * numpy.sin(E) - M #eccentric anomaly as a function of E
fdotE = 1 - e * numpy.cos(E) #derivative with respect to E of fofE
dE = - fofE / fdotE # change in E
Enew = E + dE
tolerance = 1e-2
while abs(fofE) > tolerance:
E = M + e * numpy.sin(Enew)
        fofE = E - e * numpy.sin(E) - M
fdotE = 1 - e * numpy.cos(E)
dE = - fofE / fdotE
Enew = E + dE
return E
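# Quick sanity check (sketch): for a circular orbit (e = 0) the eccentric anomaly
# should equal the mean anomaly.
print(Kepler_eqn(0.0, 1.0))  # expected: 1.0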
#Based on code from Ashish Tewari
def ellip_orb(a, Period, mu, e, t0, r0, v0, t):
    """
    Calculates the orbital position for an elliptical orbit
Parameters:
----------
a : float
Semi-major axis
Period : float
Period of planetary orbit
mu : float
Gravitational parameter
t0 : float
Initial time t = 0
r0 : array of float
Initial positional array
v0 : array of float
Initial velocity array
t : float
time
Returns:
-------
r : array of float
Array of radius at each time t
v : array of float
        Array of velocity at each time t
    """
r0_norm = numpy.linalg.norm(r0) # Normalized initial radius
v0_norm = numpy.linalg.norm(v0) # Normalized initial velocity
    alpha = numpy.dot(r0, v0) # Scalar product r0·v0 used in the Lagrangian coefficients
theta0 = numpy.pi # Initial true anomaly
n = 2 * numpy.pi / (Period) # Mean motion, given the period
E0 = 2 * numpy.arctan(numpy.sqrt((1 - e) / (1 + e)) * numpy.tan(0.5 * theta0)) # Initial eccentric anomaly
tau = t0 + (- E0 + e * numpy.sin(E0)) / n # t - tau is time since Perigee
M = n * (t - tau) # Mean anomaly
E = Kepler_eqn(e, M) # Eccentric anomaly found through Kepler's equation
    r_leg = a * (1 - e * numpy.cos(E)) # Radius used for the Lagrangian coefficients
    # Lagrangian (F and G) coefficients
    F = 1 + a * (numpy.cos(E - E0) - 1) / r0_norm
G = a * alpha * (1 - numpy.cos(E - E0)) / mu + r0_norm * numpy.sqrt(a / mu) * numpy.sin(E - E0)
F_dot = - numpy.sqrt(mu * a) * (numpy.sin(E - E0)) / (r_leg * r0_norm)
G_dot = 1 + a * (numpy.cos(E - E0) - 1) / r_leg
r = numpy.zeros_like(r0)
v = numpy.zeros_like(v0)
r = F * r0 + G * v0 # Radial value of orbit for specified time
v = F_dot * r0 + G_dot * v0 # Velocity value of orbit for specified time
return r, v
def acceleration_d(m1, m2, m3, r, r3):
    """
    Calculates the acceleration due to the disturbing orbit
Parameters:
----------
m1 : float
Mass of central body
m2 : float
Mass of second body
m3 : float
Mass of third (disturbing) body
r : array of float
Radial distance between body two and one
r3: array of float
Radial distance between body three and one
Returns:
-------
a_d : array of float
        Acceleration due to the disturbing orbit
    """
a_d = numpy.zeros((2, 1))
G = 6.674e-11 # Gravitational constant
r13 = r3 # Radial distance between Jupiter and the Sun
r23 = r - r3 # Radial distance between Jupiter and Mars
r23_norm = numpy.linalg.norm(r23) # Normalized radius between Jupiter and Mars
r13_norm = numpy.linalg.norm(r13) # Normalized radius between Jupiter and the Sun
a_d = (((1 / m2) * ((G* m2 * m3)/ (r23_norm ** 3))) * r23) - (((1 / m1) * ((G * m1 * m3) / (r13_norm ** 3))) * r13)
return a_d
mu3 = 1.2669e17 # Standard gravitational parameter of Jupiter in m^3 / s^2
m3 = 1.8983e27 # Mass of Jupiter in kg
e3 = .0489 # Eccentricity of Jupiter
a3 = 778000000. # Semi-major Axis of Jupiter in km
Period3 = 4332.589 * 3600 * 24 # Period of Jupiter Orbit in seconds
mu = 4.2828e13 # Standard gravitational parameter of Mars in m^3 / s^2
m2 = 6.4174e23 # Mass of Mars in kg
e = .0934 # Eccentricity of Mars
a = 228000000. # Semi-major Axis of Mars in km
Period = 686.980 * 3600 * 24 # Period of Mars Orbit in seconds
mu1 = 1.3271e20 # Standard gravitational parameters of the Sun in m^3 / s^2
m1 = 1.989e30 # Mass of the Sun in kg
dt = 24 * 3600 # Time step
tfinal = 4000 * dt # Final time
N = int(tfinal / dt) + 1 # Number of time steps
t = numpy.linspace(0,tfinal,N) # Time array
r0 = numpy.array([228000000., 0.]) # Initial radius of Mars
v0 = numpy.array([-21.84, -10.27]) # Initial velocity
r3_0 = numpy.array([778000000., 0.]) # Initial radius of Jupiter
v3_0 = numpy.array([-13.04, -.713]) # Initial velocity of Jupiter
# Set arrays for radial and velocity components
r = numpy.empty((N, 2))
v = numpy.empty((N, 2))
gamma = numpy.empty((N, 2))
delta = numpy.empty((N, 2))
r_n = numpy.empty((N, 2))
v_n = numpy.empty((N, 2))
a_d = numpy.empty((N, 2))
r_osc = numpy.empty((N, 2))
r_osc_n = numpy.empty((N, 2))
v_osc = numpy.empty((N, 2))
v_osc_n = numpy.empty((N, 2))
r3_n = numpy.empty((N, 2))
for i,ts in enumerate(t):
delta = numpy.zeros_like(r0)
gamma = numpy.zeros_like(r0)
r_osc, v_osc = ellip_orb(a, Period, mu1, e, t[0], r0, v0, ts) # Trajectory of the osculating orbit of Mars
r_osc_norm = numpy.linalg.norm(r_osc) # Normalized osculating orbit of Mars
r0_norm = numpy.linalg.norm(r0) # Normalized initial orbit of Mars
r3, v3 = ellip_orb(a3, Period3, mu3, e3, t[0], r3_0, v3_0, ts) # Trajectory of Jupiter
a_d = acceleration_d(m1, m2, m3, r_osc, r3) # Acceleration due to Jupiter
gamma = mu3 * (dt) * ((1 - (r_osc_norm / r0_norm) ** 3) / r_osc_norm ** 3) + a_d * (dt) # Difference in velocity between osculating orbit and perturbed
delta = gamma * (dt) # Difference between osculating orbit and perturbed orbit radius
r = r_osc + delta # Perturbed orbital radius
v = v_osc + gamma # Perturbed orbital velocity
r_osc_n[i,:] = r_osc # Value of osculating orbital radius for every time step
v_osc_n[i,:] = v_osc # Value of osculating orbital velocity for every time step
r3_n[i,:] = r3 # Value of Jupiter's radius for every time step
r_n[i,:] = r # Value of the perturbed orbital radius for every time step
v_n[i,:] = v # Value of the perturbed orbital velocity for every time step
x = numpy.linspace(t[0], t[-1], N)
pyplot.figure(figsize = (10,10))
pyplot.grid(True)
pyplot.xlabel(r'X Distance (km)', fontsize = 18)
pyplot.ylabel(r'Y Distance (km)', fontsize = 18)
pyplot.title('Trajectory of Osc vs Perturbed Orbit, Flight Time = %.2f days' %(tfinal / dt), fontsize=14)
pyplot.plot(r_n[:,0], r_n[:,1])
pyplot.plot(r_osc_n[:,0], r_osc_n[:,1])
pyplot.legend(['Perturbed Orbit', 'Osculating Orbit']);
pyplot.figure(figsize = (10,10))
pyplot.grid(True)
pyplot.xlabel(r'X Distance (km)', fontsize = 18)
pyplot.ylabel(r'Y Distance (km)', fontsize = 18)
pyplot.title('Trajectory of Osc, Perturbed and Jupiter Orbit, Flight Time = %.2f days' %(tfinal / dt), fontsize=14)
pyplot.plot(r_n[:,0], r_n[:,1])
pyplot.plot(r_osc_n[:,0], r_osc_n[:,1])
pyplot.plot(r3_n[:,0], r3_n[:,1])
pyplot.legend(['Perturbed Orbit', 'Osculating Orbit', 'Jupiter\'s Orbit']);
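# Closest approach (sketch): minimum Mars-Jupiter separation over the simulated period.
sep = numpy.linalg.norm(r_n - r3_n, axis=1)
print('Minimum Mars-Jupiter separation: %.3e km' % sep.min())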
from IPython.core.display import HTML
css_file = '../numericalmoocstyle.css'
HTML(open(css_file, "r").read())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Which three years of the 1930s saw the most films released?
Step2: Plot the number of films that have been released each decade over the history of cinema.
Step3: Plot the number of "Hamlet" films made each decade.
Step4: Plot the number of "Rustler" characters in each decade of the history of film.
Step5: Plot the number of "Hamlet" characters each decade.
Step6: What are the 11 most common character names in movie history?
Step7: Who are the 10 people most often credited as "Herself" in film history?
Step8: Who are the 10 people most often credited as "Himself" in film history?
Step9: Which actors or actresses appeared in the most movies in the year 1945?
Step10: Which actors or actresses appeared in the most movies in the year 1985?
Step11: Plot how many roles Mammootty has played in each year of his career.
Step12: What are the 10 most frequent roles that start with the phrase "Patron in"?
Step13: What are the 10 most frequent roles that start with the word "Science"?
Step14: Plot the n-values of the roles that Judi Dench has played over her career.
Step15: Plot the n-values of Cary Grant's roles through his career.
Step16: Plot the n-value of the roles that Sidney Poitier has acted over the years.
Step17: How many leading (n=1) roles were available to actors, and how many to actresses, in the 1950s?
Step18: How many supporting (n=2) roles were available to actors, and how many to actresses, in the 1950s?
|
<ASSISTANT_TASK:>
Python Code:
titles.title.value_counts().head(10)
titles[(titles["year"]>=1930) & (titles["year"]<1940)].title.value_counts().head(3)
t = titles
(t.year // 10 * 10).value_counts().sort_index().plot(kind='bar')
t = titles[titles.title == 'Hamlet']
(t.year // 10 * 10).value_counts().sort_index().plot(kind='bar')
t = cast[cast.character == 'Rustler']
(t.year // 10 * 10).value_counts().sort_index().plot(kind='bar')
t = cast[cast.character == 'Hamlet']
(t.year // 10 * 10).value_counts().sort_index().plot(kind='bar')
cast.character.value_counts().head(11)
cast[cast.character == "Herself"].name.value_counts().head(10)
cast[cast.character == "Himself"].name.value_counts().head(10)
cast[cast.year == 1945].name.value_counts().head(1)
cast[cast.year == 1985].name.value_counts().head(1)
# (cast[cast.name == "Mammootty"].character // 1 * 1).value_counts().sort_index().plot(kind='bar')
cast[cast.name == 'Mammootty'].year.value_counts().sort_index().plot()
cast[cast.character.str.startswith('Patron in')].character.value_counts().head(10)
cast[cast.character.str.startswith("Science")].character.head(10)
char = cast[cast.name == "Judi Dench"].sort("year")
char = char[char.n.notnull()]
char.plot(x="year",y ="n", kind="scatter")
char = cast[cast.name == "Cary Grant"].sort("year")
char = char[char.n.notnull()]
char.plot(x="year",y ="n", kind="scatter")
char = cast[cast["name"]== "Sidney Poitier"]
char = char[char.n.notnull()]
char.plot(x="year",y="n",kind="scatter")
c = cast
c = c[c.year // 10 == 195]
c = c[(c.n == 1) ]
c.type.value_counts()
c = cast
c = c[c.year // 10 == 195]
c = c[(c.n == 2) ]
c.type.value_counts()
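# Combined view (sketch): leading vs. supporting roles by type in the 1950s.
c = cast
c = c[c.year // 10 == 195]
c[c.n.isin([1, 2])].groupby(['n', 'type']).size()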
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <br><br><br><br>
Step2: Executing commands on the remote target
Step3: Example of framework configuration on the remote target
Step4: Create a big/LITTLE partition using CGroups
Step5: <br><br><br><br>
Step6: <br><br><br><br>
Step7: Example of collected energy data
Step8: Example of platform description
Step9: <br><br><br><br>
Step10: Using the TRAPpy Trace Plotter
Step11: Example of Trace Analysis
Step12: <br><br><br><br>
Step13: Example of Behavioral Analysis
Step14: Get tasks behaviors
Step15: Check for expected behaviros
Step16: Examples of Data analysis
Step17: What are the relative residency on different OPPs?
Step18: Example of Custom Plotting
|
<ASSISTANT_TASK:>
Python Code:
import logging
reload(logging)
logging.basicConfig(
format='%(asctime)-9s %(levelname)-8s: %(message)s',
datefmt='%I:%M:%S')
# Enable logging at INFO level
logging.getLogger().setLevel(logging.INFO)
# Execute this cell to enable verbose SSH commands
logging.getLogger('ssh').setLevel(logging.DEBUG)
# Other python modules required by this notebook
import json
import os
# Setup a target configuration
conf = {
# Target is localhost
"platform" : 'linux',
"board" : "juno",
# Login credentials
"host" : "192.168.0.1",
"username" : "root",
"password" : "",
# Binary tools required to run this experiment
# These tools must be present in the tools/ folder for the architecture
"tools" : ['rt-app', 'taskset', 'trace-cmd'],
# Comment the following line to force rt-app calibration on your target
"rtapp-calib" : {
"0": 355, "1": 138, "2": 138, "3": 355, "4": 354, "5": 354
},
# FTrace events end buffer configuration
"ftrace" : {
"events" : [
"sched_switch",
"sched_wakeup",
"sched_wakeup_new",
"sched_contrib_scale_f",
"sched_load_avg_cpu",
"sched_load_avg_task",
"sched_tune_config",
"sched_tune_tasks_update",
"sched_tune_boostgroup_update",
"sched_tune_filter",
"sched_boost_cpu",
"sched_boost_task",
"sched_energy_diff",
"cpu_frequency",
"cpu_capacity",
],
"buffsize" : 10240
},
# Where results are collected
"results_dir" : "LisaInANutshell",
# Devlib module required (or not required)
'modules' : [ "cpufreq", "cgroups" ],
#"exclude_modules" : [ "hwmon" ],
}
# Support to access the remote target
from env import TestEnv
# Initialize a test environment using:
# the provided target configuration (my_target_conf)
# the provided test configuration (my_test_conf)
te = TestEnv(conf)
target = te.target
print "DONE"
# Enable Energy-Aware scheduler
target.execute("echo ENERGY_AWARE > /sys/kernel/debug/sched_features");
# Check which sched_feature are enabled
sched_features = target.read_value("/sys/kernel/debug/sched_features");
print "sched_features:"
print sched_features
# It's possible also to run custom script
# my_script = target.get_installed()
# target.execute(my_script)
target.cpufreq.set_all_governors('sched');
# Check which governor is enabled on each CPU
enabled_governors = target.cpufreq.get_all_governors()
print enabled_governors
cpuset = target.cgroups.controller('cpuset')
# Configure a big partition
cpuset_bigs = cpuset.cgroup('/big')
cpuset_bigs.set(cpus=te.target.bl.bigs, mems=0)
# Configure a LITTLE partition
cpuset_littles = cpuset.cgroup('/LITTLE')
cpuset_littles.set(cpus=te.target.bl.littles, mems=0)
# Dump the configuraiton of each controller
cgroups = cpuset.list_all()
for cgname in cgroups:
cgroup = cpuset.cgroup(cgname)
attrs = cgroup.get()
cpus = attrs['cpus']
print '{}:{:<15} cpus: {}'.format(cpuset.kind, cgroup.name, cpus)
# RTApp configurator for generation of PERIODIC tasks
from wlgen import RTA, Periodic, Ramp
# Light workload
light = Periodic(
duty_cycle_pct = 10,
duration_s = 3,
period_ms = 32,
)
# Ramp workload
ramp = Ramp(
start_pct=10,
end_pct=60,
delta_pct=20,
time_s=0.5,
period_ms=16
)
# Heavy workload
heavy = Periodic(
duty_cycle_pct=60,
duration_s=3,
period_ms=16
)
# Composed workload
lrh_task = light + ramp + heavy
# Create a new RTApp workload generator using the calibration values
# reported by the TestEnv module
rtapp = RTA(target, 'test', calibration=te.calibration())
# Configure this RTApp instance to:
rtapp.conf(
# 1. generate a "profile based" set of tasks
kind = 'profile',
# 2. define the "profile" of each task
params = {
# 3. Composed task
'task_lrh': lrh_task.get(),
},
#loadref='big',
loadref='LITTLE',
run_dir=target.working_directory
);
# Inspect the JSON file used to run the application
with open('./test_00.json', 'r') as fh:
rtapp_json = json.load(fh)
logging.info('Generated RTApp JSON file:')
print json.dumps(rtapp_json, indent=4, sort_keys=True)
def execute(te, wload, res_dir):
logging.info('# Setup FTrace')
te.ftrace.start()
logging.info('## Start energy sampling')
te.emeter.reset()
logging.info('### Start RTApp execution')
wload.run(out_dir=res_dir)
logging.info('## Read energy consumption: %s/energy.json', res_dir)
nrg, nrg_file = te.emeter.report(out_dir=res_dir)
logging.info('# Stop FTrace')
te.ftrace.stop()
trace_file = os.path.join(res_dir, 'trace.dat')
logging.info('# Save FTrace: %s', trace_file)
te.ftrace.get_trace(trace_file)
logging.info('# Save platform description: %s/platform.json', res_dir)
plt, plt_file = te.platform_dump(res_dir)
logging.info('# Report collected data:')
logging.info(' %s', res_dir)
!tree {res_dir}
return nrg, nrg_file, plt, plt_file, trace_file
nrg, nrg_file, plt, plt_file, trace_file = execute(te, rtapp, te.res_dir)
import pandas as pd
df = pd.DataFrame(list(nrg.iteritems()),
columns=['Cluster', 'Energy'])
df = df.set_index('Cluster')
df
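# Quick visualization (sketch) of the per-cluster energy readings above.
df.plot(kind='bar');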
# Show the collected platform description
with open(os.path.join(te.res_dir, 'platform.json'), 'r') as fh:
platform = json.load(fh)
print json.dumps(platform, indent=4)
logging.info('LITTLE cluster max capacity: %d',
platform['nrg_model']['little']['cpu']['cap_max'])
# Let's look at the trace using kernelshark...
trace_file = te.res_dir + '/trace.dat'
!kernelshark {trace_file} 2>/dev/null
# Suport for FTrace events parsing and visualization
import trappy
# NOTE: The interactive trace visualization is available only if you run
# the workload to generate a new trace-file
trappy.plotter.plot_trace(trace_file)
# Load the LISA::Trace parsing module
from trace import Trace
# Define which event we are interested into
trace = Trace(te.platform, te.res_dir, [
"sched_switch",
"sched_load_avg_cpu",
"sched_load_avg_task",
"sched_boost_cpu",
"sched_boost_task",
"cpu_frequency",
"cpu_capacity",
])
# Let's have a look at the set of events collected from the trace
ftrace = trace.ftrace
logging.info("List of events identified in the trace:")
for event in ftrace.class_definitions.keys():
logging.info(" %s", event)
# Trace events are converted into tables, let's have a look at one
# of such tables
df = trace.data_frame.trace_event('sched_load_avg_task')
df.head()
# Simple selection of events based on conditional values
#df[df.comm == 'task_lrh'].head()
# Simple selection of specific signals
#df[df.comm == 'task_lrh'][['util_avg']].head()
# Simple statistics reporting
#df[df.comm == 'task_lrh'][['util_avg']].describe()
# Signals can be easily plot using the ILinePlotter
trappy.ILinePlot(
# FTrace object
ftrace,
# Signals to be plotted
signals=[
'sched_load_avg_cpu:util_avg',
'sched_load_avg_task:util_avg'
],
# # Generate one plot for each value of the specified column
# pivot='cpu',
# # Generate only plots which satisfy these filters
# filters={
# 'comm': ['task_lrh'],
# 'cpu' : [0,5]
# },
# Formatting style
per_line=2,
drawstyle='steps-post',
marker = '+'
).view()
from bart.sched.SchedMultiAssert import SchedAssert
# Create an object to get/assert scheduling pbehaviors
sa = SchedAssert(ftrace, te.topology, execname='task_lrh')
# Check the residency of a task on the LITTLE cluster
print "Task residency [%] on LITTLE cluster:",\
sa.getResidency(
"cluster",
te.target.bl.littles,
percent=True
)
# Check on which CPU the task start its execution
print "Task initial CPU:",\
sa.getFirstCpu()
import operator
# Define the time window where we want focus our assertions
start_s = sa.getStartTime()
little_residency_window = (start_s, start_s + 10)
# Defined the expected task residency
EXPECTED_RESIDENCY_PCT=99
result = sa.assertResidency(
"cluster",
te.target.bl.littles,
EXPECTED_RESIDENCY_PCT,
operator.ge,
window=little_residency_window,
percent=True
)
print "Task running {} [%] of its time on LITTLE? {}"\
.format(EXPECTED_RESIDENCY_PCT, result)
result = sa.assertFirstCpu(te.target.bl.bigs)
print "Task starting on a big CPU? {}".format(result)
# Focus on sched_switch events
df = ftrace.sched_switch.data_frame
# # Select only interesting columns
# df = df.ix[:,'next_comm':'prev_state']
# # Group sched_switch event by task switching into the CPU
# df = df.groupby('next_pid').describe(include=['object'])
# df = df.unstack()
# # Sort sched_switch events by number of time a task switch into the CPU
# df = df['next_comm'].sort_values(by=['count'], ascending=False)
df.head()
# # Get topmost task name and PID
# most_switching_pid = df.index[1]
# most_switching_task = df.values[1][2]
# task_name = "{}:{}".format(most_switching_pid, most_switching_task)
# # Print result
# logging.info("The most swithing task is: [%s]", task_name)
# Focus on cpu_frequency events for CPU0
df = ftrace.cpu_frequency.data_frame
df = df[df.cpu == 0]
# # Compute the residency on each OPP before switching to the next one
# df.loc[:,'start'] = df.index
# df.loc[:,'delta'] = (df['start'] - df['start'].shift()).fillna(0).shift(-1)
# # Group by frequency and sum-up the deltas
# freq_residencies = df.groupby('frequency')['delta'].sum()
# logging.info("Residency time per OPP:")
# df = pd.DataFrame(freq_residencies)
df.head()
# # Compute the relative residency time
# tot = sum(freq_residencies)
# #df = df.apply(lambda delta : 100*delta/tot)
# for f in freq_residencies.index:
# logging.info("Freq %10dHz : %5.1f%%", f, 100*freq_residencies[f]/tot)
# Plot residency time
import matplotlib.pyplot as plt
# Enable generation of Notebook emebedded plots
%matplotlib inline
fig, axes = plt.subplots(1, 1, figsize=(16, 5));
df.plot(kind='bar', ax=axes);
from perf_analysis import PerfAnalysis
# Full analysis function
def analysis(t_min=None, t_max=None):
test_dir = te.res_dir
platform_json = '{}/platform.json'.format(test_dir)
trace_file = '{}/trace.dat'.format(test_dir)
# Load platform description data
with open(platform_json, 'r') as fh:
platform = json.load(fh)
# Load RTApp Performance data
pa = PerfAnalysis(test_dir)
logging.info("Loaded performance data for tasks: %s", pa.tasks())
# Load Trace data
#events = my_tests_conf['ftrace']['events']
events = [
"sched_switch",
"sched_contrib_scale_f",
"sched_load_avg_cpu",
"sched_load_avg_task",
"cpu_frequency",
"cpu_capacity",
]
trace = Trace(platform, test_dir, events, tasks=pa.tasks())
# Define time ranges for all the temporal plots
trace.setXTimeRange(t_min, t_max)
# Tasks performances plots
for task in pa.tasks():
pa.plotPerf(task)
# Tasks plots
trace.analysis.tasks.plotTasks()
# Cluster and CPUs plots
trace.analysis.frequency.plotClusterFrequencies()
analysis()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
np.random.seed(1)
df = pd.DataFrame({
'A' : ['one', 'one', 'two', 'three'] * 6,
'B' : ['A', 'B', 'C'] * 8,
'C' : ['foo', 'foo', 'foo', 'bar', 'bar', 'bar'] * 4,
'D' : np.random.randn(24),
'E' : np.random.randn(24)
})
def g(df):
return pd.pivot_table(df, values=['D','E'], index=['B'], aggfunc={'D':np.max, 'E':np.min})
result = g(df.copy())
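# The same aggregation spelled with groupby (sketch), for comparison with pivot_table:
df.groupby('B').agg({'D': 'max', 'E': 'min'})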
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The script is a bit silly, but it illustrates several aspects of Python syntax.
Step2: It is also possible to continue expressions on the next line within parentheses, without using the "\" marker
Step3: Most Python style guides recommend the second version of line continuation (within parentheses) to the first (use of the "\" marker).
Step4: Abusing this flexibility can lead to issues with code readibility – in fact, abusing white space is often one of the primary means of intentionally obfuscating code (which some people do for sport).
Step5: They can also be used to indicate that a function is being called.
Step6: Some functions can be called with no arguments at all, in which case the opening and closing parentheses still must be used to indicate a function evaluation.
|
<ASSISTANT_TASK:>
Python Code:
# set the midpoint
midpoint = 5
# make two empty lists
lower = []; upper = []
# split the numbers into lower and upper
for i in range(10):
if (i < midpoint):
lower.append(i)
else:
upper.append(i)
print("lower:", lower)
print("upper:", upper)
x = 1 + 2 + 3 + 4 +\
5 + 6 + 7 + 8
x = (1 + 2 + 3 + 4 +
5 + 6 + 7 + 8)
x=1+2
x = 1 + 2
x             =        1    +    2
2 * (3 + 4)
print('first value:', 1)
print('second value:', 2)
L = [4,2,3,1]
L.sort()
print(L)
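# For contrast (sketch): sorted() returns a new list and leaves the original unchanged.
M = [4, 2, 3, 1]
print(sorted(M), M)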
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Zip Code Data Comparison - WBEZ, Current Data
Step2: Community Area Breakdown for 2009-2015
|
<ASSISTANT_TASK:>
Python Code:
flood_comm_top = flood_comm_sum.sort_values(by='Count Calls', ascending=False)[:20]
flood_comm_top.plot(kind='bar',x='Community Area',y='Count Calls')
# WBEZ zip data
wbez_zip = pd.read_csv('wbez_flood_311_zip.csv')
wbez_zip_top = wbez_zip.sort_values(by='number_of_311_calls',ascending=False)[:20]
wbez_zip_top.plot(kind='bar',x='zip_code',y='number_of_311_calls')
flood_zip_df = pd.read_csv('311_data/flood_calls_311_zip.csv')
flood_zip_df['Created Date'] = pd.to_datetime(flood_zip_df['Created Date'])
flood_zip_df = flood_zip_df.set_index(pd.DatetimeIndex(flood_zip_df['Created Date']))
# Setting to same time frame as WBEZ
flood_zip_df = flood_zip_df['2009-01-01':'2015-12-30']
flood_zip_df = flood_zip_df[flood_zip_df.columns.values[1:]]
flood_zip_stack = pd.DataFrame(flood_zip_df.stack()).reset_index()
flood_zip_stack = flood_zip_stack.rename(columns={'level_0':'Created Date','level_1':'Zip Code',0:'Count Calls'})
flood_zip_sum = flood_zip_stack.groupby(['Zip Code'])['Count Calls'].sum()
flood_zip_sum = flood_zip_sum.reset_index()
flood_zip_sum.head()
flood_zip_top = flood_zip_sum.sort_values(by='Count Calls', ascending=False)[:20]
flood_zip_top.plot(kind='bar',x='Zip Code',y='Count Calls')
fig, axs = plt.subplots(1,2)
plt.rcParams["figure.figsize"] = [15, 5]
wbez_zip_top.plot(title='WBEZ Data', ax=axs[0], kind='bar',x='zip_code',y='number_of_311_calls')
flood_zip_top.plot(title='FOIA Data', ax=axs[1], kind='bar',x='Zip Code',y='Count Calls')
flood_comm_time = flood_comm_stack_df.copy()
flood_comm_time['Date'] = pd.to_datetime(flood_comm_time['Date'])
flood_comm_time = flood_comm_time.set_index(pd.DatetimeIndex(flood_comm_time['Date']))
flood_comm_time = flood_comm_time['2009-01-01':'2015-12-30']
flood_comm_time_sum = flood_comm_time.groupby(['Community Area'])['Count Calls'].sum()
flood_comm_time_sum = flood_comm_time_sum.reset_index()
flood_comm_time_sum.head()
flood_comm_time_top = flood_comm_time_sum.sort_values(by='Count Calls', ascending=False)[:20]
flood_comm_time_top.plot(kind='bar',x='Community Area',y='Count Calls')
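# Share of all 311 flood calls accounted for by the top 20 community areas (sketch):
top_share = flood_comm_time_top['Count Calls'].sum() / float(flood_comm_time_sum['Count Calls'].sum())
print('Top-20 share of calls: {:.1%}'.format(top_share))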
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Likewise, as before, we need to create an event path function and a severity level function.
Step2: Next, we need to make a subclass or an instance of visisc.EventSelectionQuery. This class uses the <a href="http
Step3: Given that we have the query class, we can now create and open a query selection dialog where it is possible to customize the labels for source classes and the severity levels.
Step4: To open the window, we can then make the call. However, similarly to previous visualization examples, we have to run it outside the Jupyter notebook by calling ipython directly.
|
<ASSISTANT_TASK:>
Python Code:
import pyisc;
import visisc;
import numpy as np
import datetime
from scipy.stats import poisson, norm, multivariate_normal
%matplotlib wx
%gui wx
n_sources = 10
n_source_classes = 10
n_events = 100
num_of_normal_days = 200
num_of_anomalous_days = 10
data = None
days_list = [num_of_normal_days, num_of_anomalous_days]
dates = []
for state in [0,1]: # normal, anomalous data
num_of_days = days_list[state]
for k in range(n_source_classes):
for i in range(n_sources):
data0 = None
for j in range(n_events):
if state == 0:# Normal
po_dist = poisson(int((10+2*(n_source_classes-k))*(float(j)/n_events/2+0.75))) # from 0.75 to 1.25
else: # anomalous
po_dist = poisson(int((20+2*(n_source_classes-k))*(float(j)/n_events+0.5))) # from 0.5 to 1.5
tmp = po_dist.rvs(num_of_days)
if data0 is None:
data0 = tmp
else:
data0 = np.c_[data0,tmp]
tmp = np.c_[
[k*n_sources+i] * (num_of_days), # Sources
[k] * (num_of_days), # Source classes
[ # Timestamp
                datetime.date(2015, 2, 24) + datetime.timedelta(d)
for d in np.array(range(num_of_days)) + (0 if state==0 else num_of_normal_days)
],
[1] * (num_of_days), # Measurement period
data0, # Event frequency counts
]
if data is None:
data = tmp
else:
data = np.r_[
tmp,
data
]
# Column index into the data
source_column = 0
class_column = 1
date_column = 2
period_column = 3
first_event_column = 4
last_event_column = first_event_column + n_events
event_names = ["event_%i"%i for i in range(n_events)]
def event_path(x): # Returns a list of strings with 3 elements
    return ["Type_%i"%(x//N) for N in [50, 10]]+[event_names[x]]
def severity_level(x): # returns 3 different severity levels: 0, 1, 2
    return x % 3
class MySelectionQuery(visisc.EventSelectionQuery):
def __init__(self):
        self.list_of_source_ids = [i for i in range(n_sources*n_source_classes)]
        # Below: a list of pairs with id and name, where the name is shown in the GUI while the id is put into the selection.
self.list_of_source_classes = [(i, "class_%i"%i) for i in range(n_source_classes)]
self.list_of_event_names = event_names
        # Below: a list of pairs with id and name, where the name is shown in the GUI while the id is put into the selection.
self.list_of_event_severity_levels = [(i, "Level %i"%i) for i in range(3)]
self.period_start_date = data.T[date_column].min()
self.period_end_date = data.T[date_column].max()
def execute_query(self):
query = self
query.selected_list_of_source_ids = query.list_of_source_ids
data_query = np.array(
[
data[i] for i in range(len(data)) if
data[i][source_column] in query.selected_list_of_source_ids and
data[i][class_column] in query.selected_list_of_source_classes and
data[i][date_column] >= query.period_start_date and
data[i][date_column] <= query.period_end_date
]
)
event_columns = [first_event_column+event_names.index(e) for e in query.selected_list_of_event_names
if severity_level(first_event_column+event_names.index(e)) in query.selected_list_of_event_severity_levels]
model = visisc.EventDataModel.hierarchical_model(
event_columns=event_columns,
get_event_path = event_path,
get_severity_level = severity_level,
num_of_severity_levels=3
)
data_object = model.data_object(
data_query,
source_column = source_column,
class_column = class_column,
period_column=period_column,
date_column=date_column
)
anomaly_detector = model.fit_anomaly_detector(data_object,poisson_onesided=True)
vis = visisc.EventVisualization(model, 13.8,
start_day=query.period_end_date,# yes confusing, start day in the EventVisualization is backward looking
precompute_cache=True) # Precompute all anomaly calculation in order to speed up visualization.
query = MySelectionQuery()
dialog = visisc.EventSelectionDialog(
query,
source_class_label="Select Machine Types",
severity_level_label="Select Event Severity Types"
)
!ipython --matplotlib=wx --gui=wx -i visISC_query_dialog_example.py
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: What np.histogram returns is the count of the number of pixels with values in a given range. In the example above, the cameraman image has 10618 pixels with values between 0 and 12.75. We can visualize the histogram with matplotlib's plot function.
Step2: Note that the visualization above is not very adequate, since the histogram is not a continuous function. A better way to visualize the histogram is with a bar chart. And, on the x-axis, we want to show the computed gray-level value (bin center) rather than the limits of each range (bin edges), as returned by np.histogram.
Step3: Histogram vs. location information
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.image as mpimg
f = mpimg.imread('../data/cameraman.tif')
print(f.min(), f.max())
%matplotlib inline
import matplotlib.pyplot as plt
plt.imshow(f, cmap = 'gray')
plt.colorbar()
nbins = 20
h, bin_edges = np.histogram(f, nbins,(0,255))
print('h=\n',h)
print('bins=\n',bin_edges)
plt.plot(h)
w=255./nbins
bin_centers = bin_edges[1:]-(w/2)
plt.bar(bin_centers, h, width=w)
plt.title('Image histogram')
f = mpimg.imread('../data/cameraman.tif')
fs = f.copy()
np.random.shuffle(fs) # shuffles the image pixels, leaving them in a different order
plt.figure()
plt.imshow(f, cmap = 'gray')
plt.colorbar()
plt.figure()
plt.imshow(fs, cmap = 'gray')
plt.colorbar()
h, bins = np.histogram(f)
plt.figure()
plt.plot(h)
plt.title('Original image histogram')
hshuffle, binshuffle = np.histogram(fs)
plt.figure()
plt.plot(hshuffle)
plt.title('Shuffled image histogram')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Note
Step2: Include the input file that contains all input parameters needed for all components. This file can either be a Python dictionary or a text file that can be converted into a Python dictionary. If a text file is provided, it will be converted to a Python dictionary. Here we use an existing text file prepared for this exercise.
Step3: Instantiate Landlab components to simulate corresponding attributes. In this example, we shall demonstrate the use of seasonal rainfall and PFT-specific potential evapotranspiration. The instantiated objects are
Step4: Let's look at the initial organization of PFTs
Step5: Specify an approximate number of years for the model to run.
Step6: Create empty arrays to store spatio-temporal data over multiple iterations. The captured data can be used for plotting model outputs.
Step7: To reduce computational overhead, we shall create a lookup array of plant-specific PET values for each day of the year, as well as slope and aspect grids.
Step8: Specify current_time (in years). current_time is the current time in the simulation.
Step9: The loop below couples the components introduced above in a for loop until all "n" storms are generated. Time is advanced by the soil moisture object based on storm and interstorm durations that are estimated by the storm generator object. The ecohydrologic model is run on each storm, whereas the cellular automaton vegetation component is run once every year.
Step10: Time_Consumed is an optional variable that gives information about computer running time
Step11: Save the outputs using numpy.save(). These files have '.nc' extension, which can be loaded using numpy.load().
Step12: Let's look at the outputs.
Step13: If you run this model for around 900 years, you will observe patterns of PFTs. For example, you will find more trees on north facing slopes and mostly shrubs and grass on south facing slopes, as shown below
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
%matplotlib inline
import time
import numpy as np
from landlab.io import read_esri_ascii
from landlab import RasterModelGrid as rmg
from landlab import load_params
from Ecohyd_functions_DEM import (
Initialize_,
Empty_arrays,
Create_PET_lookup,
Save_,
Plot_,
)
(grid, elevation) = read_esri_ascii("DEM_10m.asc") # Read the DEM
grid1 = rmg((5, 4), xy_spacing=(5.0, 5.0)) # Representative grid
InputFile = "Inputs_Vegetation_CA_DEM.txt"
data = load_params(InputFile) # Creates dictionary that holds the inputs
PD_D, PD_W, Rad, Rad_PET, PET_Tree, PET_Shrub, PET_Grass, SM, VEG, vegca = Initialize_(
data, grid, grid1, elevation
)
import matplotlib.pyplot as plt
import matplotlib as mpl
cmap = mpl.colors.ListedColormap(["green", "red", "black", "white", "red", "black"])
bounds = [-0.5, 0.5, 1.5, 2.5, 3.5, 4.5, 5.5]
norm = mpl.colors.BoundaryNorm(bounds, cmap.N)
description = "green: grass; red: shrub; black: tree; white: bare"
plt.figure(101)
grid.imshow(
"vegetation__plant_functional_type",
at="cell",
cmap=cmap,
grid_units=("m", "m"),
norm=norm,
limits=[0, 5],
allow_colorbar=False,
)
plt.figtext(0.2, 0.0, description, weight="bold", fontsize=10)
n_years = 50 # Approx number of years for model to run
# Calculate approximate number of storms per year
fraction_wet = (data["doy__end_of_monsoon"] - data["doy__start_of_monsoon"]) / 365.0
fraction_dry = 1 - fraction_wet
no_of_storms_wet = (
8760 * (fraction_wet) / (data["mean_interstorm_wet"] + data["mean_storm_wet"])
)
no_of_storms_dry = (
8760 * (fraction_dry) / (data["mean_interstorm_dry"] + data["mean_storm_dry"])
)
n = int(n_years * (no_of_storms_wet + no_of_storms_dry))
P, Tb, Tr, Time, VegType, PET_, Rad_Factor, EP30, PET_threshold = Empty_arrays(
n, n_years, grid, grid1
)
Create_PET_lookup(
Rad, PET_Tree, PET_Shrub, PET_Grass, PET_, Rad_Factor, EP30, Rad_PET, grid
)
# # Represent current time in years
current_time = 0 # Start from first day of Jan
# Keep track of run time for simulation—optional
Start_time = time.time()  # Recording time taken for simulation (time.clock was removed in Python 3.8)
# declaring few variables that will be used in storm loop
time_check = 0.0 # Buffer to store current_time at previous storm
yrs = 0 # Keep track of number of years passed
WS = 0.0 # Buffer for Water Stress
Tg = 365 # Growing season in days
# # Run storm Loop
for i in range(0, n):
# # Update objects
# Calculate Day of Year (DOY)
Julian = int(np.floor((current_time - np.floor(current_time)) * 365.0))
# Generate seasonal storms
# for Dry season
if Julian < data["doy__start_of_monsoon"] or Julian > data["doy__end_of_monsoon"]:
PD_D.update()
P[i] = PD_D.get_storm_depth()
Tr[i] = PD_D.get_precipitation_event_duration()
Tb[i] = PD_D.get_interstorm_event_duration()
# Wet Season—Jul to Sep—NA Monsoon
else:
PD_W.update()
P[i] = PD_W.get_storm_depth()
Tr[i] = PD_W.get_precipitation_event_duration()
Tb[i] = PD_W.get_interstorm_event_duration()
# Spatially distribute PET and its 30-day-mean (analogous to degree day)
grid["cell"]["surface__potential_evapotranspiration_rate"] = (
np.choose(grid["cell"]["vegetation__plant_functional_type"], PET_[Julian])
) * Rad_Factor[Julian]
grid["cell"]["surface__potential_evapotranspiration_30day_mean"] = (
np.choose(grid["cell"]["vegetation__plant_functional_type"], EP30[Julian])
) * Rad_Factor[Julian]
# Assign spatial rainfall data
grid["cell"]["rainfall__daily_depth"] = P[i] * np.ones(grid.number_of_cells)
# Update soil moisture component
current_time = SM.update(current_time, Tr=Tr[i], Tb=Tb[i])
# Decide whether its growing season or not
if Julian != 364:
if EP30[Julian + 1, 0] > EP30[Julian, 0]:
PET_threshold = 1
# 1 corresponds to ETThresholdup (begin growing season)
else:
PET_threshold = 0
# 0 corresponds to ETThresholddown (end growing season)
# Update vegetation component
VEG.update(PETthreshold_switch=PET_threshold, Tb=Tb[i], Tr=Tr[i])
# Update yearly cumulative water stress data
WS += (grid["cell"]["vegetation__water_stress"]) * Tb[i] / 24.0
# Record time (optional)
Time[i] = current_time
# Cellular Automata
if (current_time - time_check) >= 1.0:
if yrs % 5 == 0:
print("Elapsed time = {time} years".format(time=yrs))
VegType[yrs] = grid["cell"]["vegetation__plant_functional_type"]
grid["cell"]["vegetation__cumulative_water_stress"] = WS / Tg
vegca.update()
SM.initialize()
VEG.initialize()
time_check = current_time
WS = 0
yrs += 1
VegType[yrs] = grid["cell"]["vegetation__plant_functional_type"]
Final_time = time.time()
Time_Consumed = (Final_time - Start_time) / 60.0 # in minutes
print("Time_consumed = {time} minutes".format(time=Time_Consumed))
# # Saving
sim = "VegCA_DEM_26Jul16_"
# Save_(sim, Tb, Tr, P, VegType, yrs, Time_Consumed, Time)
Plot_(grid, VegType, yrs, yr_step=10)
from IPython.display import Image
Image(filename="presentation.png")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Notice the diverse behavior of the crowd workers. While most annotated each word individually, the worker on row 2 annotated a chunk of the sentence together as a single phrase. Also, when no answer was picked by the worker, the value in the cell is [NONE].
Step2: Now we can pre-process the data and run the CrowdTruth metrics
Step3: Removing stopwords from Media Units and Annotations
Step4: In the new configuration class ConfigDimRed, we apply the function we just built to both the column that contains the media unit text (inputColumns[2]), and the column containing the crowd annotations (outputColumns[0])
Step5: Now we can pre-process the data and run the CrowdTruth metrics
Step6: Effect on CrowdTruth metrics
Step7: The red line in the plot runs through the diagonal. All sentences above the line have a higher sentence quality score when the stopwords were removed.
Step8: The quality of the majority of workers has also increased in the configuration where we removed the stopwords. However, because of the inter-linked nature of the CrowdTruth quality metrics, the annotations of these workers now have a greater weight when calculating the sentence quality score. So the stopword removal process had the effect of removing some of the noise in the annotations and therefore increasing the quality scores, but also of amplifying the true ambiguity in the sentences.
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
test_data = pd.read_csv("../data/event-text-highlight.csv")
test_data["tagged_events"][0:30]
import crowdtruth
from crowdtruth.configuration import DefaultConfig
class Config(DefaultConfig):
inputColumns = ["doc_id", "sentence_id", "events", "events_count", "original_sententce", "processed_sentence", "tokens"]
outputColumns = ["tagged_events"]
open_ended_task = True
annotation_separator = ","
remove_empty_rows = False
def processJudgments(self, judgments):
# build annotation vector just from words
judgments[self.outputColumns[0]] = judgments[self.outputColumns[0]].apply(
lambda x: str(x).replace(' ',self.annotation_separator))
# normalize vector elements
judgments[self.outputColumns[0]] = judgments[self.outputColumns[0]].apply(
lambda x: str(x).replace('[',''))
judgments[self.outputColumns[0]] = judgments[self.outputColumns[0]].apply(
lambda x: str(x).replace(']',''))
judgments[self.outputColumns[0]] = judgments[self.outputColumns[0]].apply(
lambda x: str(x).replace('"',''))
judgments[self.outputColumns[0]] = judgments[self.outputColumns[0]].apply(
lambda x: str(x).replace(',,,',','))
judgments[self.outputColumns[0]] = judgments[self.outputColumns[0]].apply(
lambda x: str(x).replace(',,',','))
return judgments
data_with_stopwords, config_with_stopwords = crowdtruth.load(
file = "../data/event-text-highlight.csv",
config = Config()
)
processed_results_with_stopwords = crowdtruth.run(
data_with_stopwords,
config_with_stopwords
)
import nltk
from nltk.corpus import stopwords
import string

# nltk.download('stopwords')  # may be needed on the first run if the corpus is missing
stopword_set = set(stopwords.words('english'))
stopword_set.update(['s'])
def remove_stop_words(words_string, sep):
'''
words_string: string containing all words
sep: separator character for the words in words_string
'''
words_list = words_string.split(sep)
corrected_words_list = ""
for word in words_list:
if word not in stopword_set:
if corrected_words_list != "":
corrected_words_list += sep
corrected_words_list += word
return corrected_words_list
import pandas as pd
class ConfigDimRed(Config):
def processJudgments(self, judgments):
judgments = Config.processJudgments(self, judgments)
# remove stopwords from input sentence
for idx in range(len(judgments[self.inputColumns[2]])):
judgments.at[idx, self.inputColumns[2]] = remove_stop_words(
judgments[self.inputColumns[2]][idx], " ")
for idx in range(len(judgments[self.outputColumns[0]])):
judgments.at[idx, self.outputColumns[0]] = remove_stop_words(
judgments[self.outputColumns[0]][idx], self.annotation_separator)
if judgments[self.outputColumns[0]][idx] == "":
judgments.at[idx, self.outputColumns[0]] = self.none_token
return judgments
data_without_stopwords, config_without_stopwords = crowdtruth.load(
file = "../data/event-text-highlight.csv",
config = ConfigDimRed()
)
processed_results_without_stopwords = crowdtruth.run(
data_without_stopwords,
config_without_stopwords
)
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.scatter(
processed_results_with_stopwords["units"]["uqs"],
processed_results_without_stopwords["units"]["uqs"],
)
plt.plot([0, 1], [0, 1], 'red', linewidth=1)
plt.title("Sentence Quality Score")
plt.xlabel("with stopwords")
plt.ylabel("without stopwords")
plt.scatter(
processed_results_with_stopwords["workers"]["wqs"],
processed_results_without_stopwords["workers"]["wqs"],
)
plt.plot([0, 0.8], [0, 0.8], 'red', linewidth=1)
plt.title("Worker Quality Score")
plt.xlabel("with stopwords")
plt.ylabel("without stopwords")
data_with_stopwords["units"]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create profile points
Step2: Create guide curve points
Step3: Build profiles curves
Step4: Check
Step5: Result
Step6: Visualize the result
|
<ASSISTANT_TASK:>
Python Code:
import tigl3.curve_factories
import tigl3.surface_factories
from OCC.gp import gp_Pnt
from OCC.Display.SimpleGui import init_display
import numpy as np
# list of points on NACA2412 profile
px = [1.000084, 0.975825, 0.905287, 0.795069, 0.655665, 0.500588, 0.34468, 0.203313, 0.091996, 0.022051, 0.0, 0.026892, 0.098987, 0.208902, 0.346303, 0.499412, 0.653352, 0.792716, 0.90373, 0.975232, 0.999916]
py = [0.001257, 0.006231, 0.019752, 0.03826, 0.057302, 0.072381, 0.079198, 0.072947, 0.054325, 0.028152, 0.0, -0.023408, -0.037507, -0.042346, -0.039941, -0.033493, -0.0245, -0.015499, -0.008033, -0.003035, -0.001257]
points_c1 = np.array([pnt for pnt in zip(px, [0.]*len(px), py)]) * 2.
points_c2 = np.array([pnt for pnt in zip(px, [0]*len(px), py)])
points_c3 = np.array([pnt for pnt in zip(px, py, [0.]*len(px))]) * 0.2
# shift sections to their correct position
# second curve at y = 7
points_c2 += np.array([1.0, 7, 0])
# third curve at y = 7.5
points_c3[:, 1] *= -1
points_c3 += np.array([1.7, 7.8, 1.0])
# upper trailing edge points
te_up_points = np.array([points_c1[0,:], points_c2[0,:], points_c3[0,:]])
# leading edge points.
le_points = np.array([
points_c1[10,:], # First profile LE
[0.35, 2., -0.1],# Additional point to control LE shape
[0.7, 5., -0.2], # Additional point to control LE shape
points_c2[10,:], # Second profile LE
points_c3[10,:], # Third profile LE
])
# lower trailing edge points
te_lo_points = np.array([points_c1[-1,:], points_c2[-1,:], points_c3[-1,:]])
profile_1 = tigl3.curve_factories.interpolate_points(points_c1)
profile_2 = tigl3.curve_factories.interpolate_points(points_c2)
profile_3 = tigl3.curve_factories.interpolate_points(points_c3)
# Lets define also the parameters of the points to control the shape of the guide curve
# This is optional, but can improve the result
te_up = tigl3.curve_factories.interpolate_points(te_up_points, [0, 0.65, 1.])
le = tigl3.curve_factories.interpolate_points(le_points, [0., 0.25, 0.55, 0.8, 1.0])
te_lo = tigl3.curve_factories.interpolate_points(te_lo_points, [0, 0.65, 1.])
# start up the gui
display, start_display, add_menu, add_function_to_menu = init_display()
# make tesselation more accurate
display.Context.SetDeviationCoefficient(0.0001)
# draw the curve
display.DisplayShape(profile_1)
display.DisplayShape(profile_2)
display.DisplayShape(profile_3)
display.DisplayShape(te_up)
display.DisplayShape(le)
display.DisplayShape(te_lo)
# also draw the guide curve points
for point in le_points:
display.DisplayShape(gp_Pnt(*point), update=False)
for point in te_up_points:
display.DisplayShape(gp_Pnt(*point), update=False)
for point in te_lo_points:
display.DisplayShape(gp_Pnt(*point), update=False)
# match content to screen and start the event loop
display.FitAll()
start_display()
surface = tigl3.surface_factories.interpolate_curve_network([profile_1, profile_2, profile_3],
[te_up, le, te_lo])
# start up the gui
display, start_display, add_menu, add_function_to_menu = init_display()
# make tesselation more accurate
display.Context.SetDeviationCoefficient(0.00007)
# draw the curve
display.DisplayShape(profile_1)
display.DisplayShape(profile_2)
display.DisplayShape(profile_3)
display.DisplayShape(te_up)
display.DisplayShape(le)
display.DisplayShape(te_lo)
display.DisplayShape(surface)
# match content to screen and start the event loop
display.FitAll()
start_display()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Generated some initial 2D data
Step2: Define the labels and shuffle the data
Step3: We'll get back to this later, but the following are test inputs that we'll use to evaluate the model
Step4: Again, define the placeholders, variables, model, and cost function
Step5: Train the softmax classification model
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
learning_rate = 0.01
training_epochs = 1000
num_labels = 3
batch_size = 100
x1_label0 = np.random.normal(1, 1, (100, 1))
x2_label0 = np.random.normal(1, 1, (100, 1))
x1_label1 = np.random.normal(5, 1, (100, 1))
x2_label1 = np.random.normal(4, 1, (100, 1))
x1_label2 = np.random.normal(8, 1, (100, 1))
x2_label2 = np.random.normal(0, 1, (100, 1))
plt.scatter(x1_label0, x2_label0, c='r', marker='o', s=60)
plt.scatter(x1_label1, x2_label1, c='g', marker='x', s=60)
plt.scatter(x1_label2, x2_label2, c='b', marker='_', s=60)
plt.show()
xs_label0 = np.hstack((x1_label0, x2_label0))
xs_label1 = np.hstack((x1_label1, x2_label1))
xs_label2 = np.hstack((x1_label2, x2_label2))
xs = np.vstack((xs_label0, xs_label1, xs_label2))
labels = np.matrix([[1., 0., 0.]] * len(x1_label0) + [[0., 1., 0.]] * len(x1_label1) + [[0., 0., 1.]] * len(x1_label2))
arr = np.arange(xs.shape[0])
np.random.shuffle(arr)
xs = xs[arr, :]
labels = labels[arr, :]
test_x1_label0 = np.random.normal(1, 1, (10, 1))
test_x2_label0 = np.random.normal(1, 1, (10, 1))
test_x1_label1 = np.random.normal(5, 1, (10, 1))
test_x2_label1 = np.random.normal(4, 1, (10, 1))
test_x1_label2 = np.random.normal(8, 1, (10, 1))
test_x2_label2 = np.random.normal(0, 1, (10, 1))
test_xs_label0 = np.hstack((test_x1_label0, test_x2_label0))
test_xs_label1 = np.hstack((test_x1_label1, test_x2_label1))
test_xs_label2 = np.hstack((test_x1_label2, test_x2_label2))
test_xs = np.vstack((test_xs_label0, test_xs_label1, test_xs_label2))
test_labels = np.matrix([[1., 0., 0.]] * 10 + [[0., 1., 0.]] * 10 + [[0., 0., 1.]] * 10)
train_size, num_features = xs.shape
X = tf.placeholder("float", shape=[None, num_features])
Y = tf.placeholder("float", shape=[None, num_labels])
W = tf.Variable(tf.zeros([num_features, num_labels]))
b = tf.Variable(tf.zeros([num_labels]))
y_model = tf.nn.softmax(tf.matmul(X, W) + b)
cost = -tf.reduce_sum(Y * tf.log(y_model))
train_op = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
correct_prediction = tf.equal(tf.argmax(y_model, 1), tf.argmax(Y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
with tf.Session() as sess:
tf.global_variables_initializer().run()
for step in range(training_epochs * train_size // batch_size):
offset = (step * batch_size) % train_size
batch_xs = xs[offset:(offset + batch_size), :]
batch_labels = labels[offset:(offset + batch_size)]
err, _ = sess.run([cost, train_op], feed_dict={X: batch_xs, Y: batch_labels})
if step % 100 == 0:
            print(step, err)
W_val = sess.run(W)
print('w', W_val)
b_val = sess.run(b)
print('b', b_val)
print("accuracy", accuracy.eval(feed_dict={X: test_xs, Y: test_labels}))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Learn about the Model Input
Step2: Exercise 1
Step3: With the cat command you can print the contents of one of the two input files that HydroTrend uses.
Step4: ## <font color = green> Assignment 1 </font>
Step5: <font color = green> Assignment 2 </font>
Step6: <font color = green> Assignment 3 </font>
Step7: HydroTrend Exercise 2
Step8: <font color = green> Assignment 5 </font>
Step9: <font color = green> Assignment 6 </font>
Step10: <font color = green> Assignment 7 </font>
Step11: Exercise 3
Step12: <font color = green> Bonus Assignment 9 </font>
Step13: <font color = green> Bonus Assignment 10 </font>
|
<ASSISTANT_TASK:>
Python Code:
# To start, import numpy and matplotlib.
import matplotlib.pyplot as plt
import numpy as np
# Then we import the package
import pymt.models
hydrotrend = pymt.models.Hydrotrend()
import pymt
pymt.__version__
# Get basic information about the HydroTrend model
help(hydrotrend)
# Set up Hydrotrend model by indicating the number of years to run
config_file, config_folder = hydrotrend.setup("_hydrotrendBC", run_duration=100)
cat _hydrotrendBC/HYDRO0.HYPS
cat _hydrotrendBC/HYDRO.IN
#In pymt one can always find out what output a model generates by using the .output_var_names method.
hydrotrend.output_var_names
# Now we initialize the model with the configure file and in the configure folder
hydrotrend.initialize(config_file, config_folder)
# these attributes list the time parameters: when, for how long, and at what timestep the model simulation will run
hydrotrend.start_time, hydrotrend.time, hydrotrend.end_time, hydrotrend.time_step, hydrotrend.time_units
# this code declares numpy arrays for several important parameters we want to save.
n_days = int(hydrotrend.end_time)
q = np.empty(n_days) #river discharge at the outlet
qs = np.empty(n_days)# sediment load at the outlet
cs = np.empty(n_days) # suspended sediment concentration for different grainsize classes at the outlet
qb = np.empty(n_days) # bedload at the outlet
# here we have coded up the time loop using i as the index
# we update the model one timestep at a time, until we reach the end time
# for each time step we also get the values of the output parameters we wish to save
for i in range(n_days):
hydrotrend.update()
q[i] = hydrotrend.get_value("channel_exit_water__volume_flow_rate")
qs[i] = hydrotrend.get_value("channel_exit_water_sediment~suspended__mass_flow_rate")
cs[i] = hydrotrend.get_value("channel_exit_water_sediment~suspended__mass_concentration")
qb[i] = hydrotrend.get_value("channel_exit_water_sediment~bedload__mass_flow_rate")
# We can plot the simulated output timeseries of Hydrotrend, for example the river discharge
plt.plot(q)
plt.title('HydroTrend simulation of 100 year river discharge, Waiapaoa River')
plt.ylabel('river discharge in m3/sec')
plt.show()
# Or you can plot a subset of the simulated daily timeseries using the index
#for example the first year
plt.plot(q[0:365], 'black')
# compare with the last year
plt.plot(q[-366:-1],'grey')
plt.title('HydroTrend simulation of first and last year discharge, Waiapaoa River')
plt.show()
# Of course, it is important to calculate statistical properties of the simulated parameters
print(q.mean())
hydrotrend.get_var_units("channel_exit_water__volume_flow_rate")
# your code goes here
# here you can calculate the maximum river discharge.
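# A minimal sketch of one possible solution (assumes q holds the daily
# discharge timeseries computed above, in m3/s):
q_max = q.max()
print('Maximum river discharge: {:.1f} m3/s'.format(q_max))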
# your code to determine which day and which year encompass the maximum discharge go here
# Hint: you will want to determine the index of this day first; look into numpy.argmax and numpy.argmin
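# A possible sketch using numpy.argmax (assumes 365-day model years):
i_max = np.argmax(q)
print('Max discharge on day {} of year {}'.format(i_max % 365, i_max // 365))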
# as a sanity check you can see whether the plot y-axis seems to go up to the maximum you had calculated in the previous step
# as a sanity check you can look in the plot of all the years to see whether the timing your code predicts is correct
# type your explanation about the 100 year flood here.
# your code goes here
# you will have to sum all days of the individual years, to get the annual loads, then calculate the mean over the 100 years.
# one possible trick is to use the .reshape() method
# plot a graph of the 100 years timeseries of the total annual loads
# take the mean over the 100 years
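# A possible sketch using the .reshape() trick (assumes n_days == 100 * 365
# and qs in kg/s; multiplying by 86400 converts daily-mean kg/s to kg per day):
qs_annual = qs.reshape(100, 365).sum(axis=1) * 86400.
plt.plot(qs_annual)
plt.ylabel('annual sediment load in kg/yr')
print('Mean annual load: {:.3g} kg/yr'.format(qs_annual.mean()))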
#your evaluation of the sediment load of the Waiapaoa River and its comparison to the Mississippi River goes here.
#Hint: use the following paper to read about the Mississippi sediment load (Blum, M, Roberts, H., 2009. Drowning of the Mississippi Delta due to insufficient sediment supply and global sea-level rise, Nature Geoscience).
# Set up a new run of the Hydrotrend model
# Create a new config file a different folder for input and output files, indicating the number of years to run, and specify the change in mean annual temparture parameter
hydrotrendHT = pymt.models.Hydrotrend()
config_file, config_folder = hydrotrendHT.setup("_hydrotrendhighT", run_duration=50, change_in_mean_annual_temperature=0.08)
# intialize the new simulation
hydrotrendHT.initialize(config_file, config_folder)
# the code for the timeloop goes here
# I use the abbreviation HT for 'High Temperature' scenario
n_days = int(hydrotrendHT.end_time)
q_HT = np.empty(n_days) #river discharge at the outlet
qs_HT = np.empty(n_days)# sediment load at the outlet
cs_HT = np.empty(n_days) # suspended sediment concentration for different grainsize classes at the outlet
qb_HT = np.empty(n_days) # bedload at the outlet
for i in range(n_days):
hydrotrendHT.update()
q_HT[i] = hydrotrendHT.get_value("channel_exit_water__volume_flow_rate")
qs_HT[i] = hydrotrendHT.get_value("channel_exit_water_sediment~suspended__mass_flow_rate")
cs_HT[i] = hydrotrendHT.get_value("channel_exit_water_sediment~suspended__mass_concentration")
qb_HT[i] = hydrotrendHT.get_value("channel_exit_water_sediment~bedload__mass_flow_rate")
# your code that prints out the mean river discharge, the mean sediment load and the mean bedload goes here
# print out these same parameters for the basecase for comparison
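# A possible sketch comparing this scenario with the 100-year basecase
# (units as reported by get_var_units above):
print('High-T run: Q = {:.1f}, Qs = {:.1f}, Qb = {:.1f}'.format(q_HT.mean(), qs_HT.mean(), qb_HT.mean()))
print('Basecase  : Q = {:.1f}, Qs = {:.1f}, Qb = {:.1f}'.format(q.mean(), qs.mean(), qb.mean()))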
# type your answers here
# Set up a new run of the Hydrotrend model
# Create a new config file indicating the number of years to run, and specify the change in mean annual precipitation parameter
# initialize the new simulation
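# A possible sketch, mirroring the temperature scenario above. NOTE: the
# setup keyword below is an assumption by analogy with
# change_in_mean_annual_temperature; check hydrotrend's parameter list to
# confirm the exact name.
hydrotrendHP = pymt.models.Hydrotrend()
config_file, config_folder = hydrotrendHP.setup(
    "_hydrotrendhighP", run_duration=50,
    change_in_mean_annual_precipitation=0.4)  # hypothetical keyword and value
hydrotrendHP.initialize(config_file, config_folder)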
# your code for the timeloop goes here
## your code that prints out the mean river discharge, the mean sediment load and the mean bedload goes here
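# A possible sketch (assumes q_HP, qs_HP, qb_HP were filled in the timeloop above):
print('High-P run: Q = {:.1f}, Qs = {:.1f}, Qb = {:.1f}'.format(q_HP.mean(), qs_HP.mean(), qb_HP.mean()))
print('Basecase  : Q = {:.1f}, Qs = {:.1f}, Qb = {:.1f}'.format(q.mean(), qs.mean(), qb.mean()))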
#type your answer here
# your explanation goes here; can you list two reasons why this factor would be unsuitable or would fall short?
# Set up a new 50 year of the Hydrotrend model
# Create a new directory, and a config file indicating the number of years to run, and specify different reservoir parameters
# initialize the new simulation
# your code for the timeloop and update loop goes here
# plot a bar graph comparing Q mean, Qs mean, Qmax, Qs Max, Qb mean and Qbmax for the basecase run and the reservoir run
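# A possible sketch of the bar chart (assumes q_res, qs_res, qb_res were
# collected from the reservoir run the same way as the basecase arrays):
labels = ['Q mean', 'Qs mean', 'Q max', 'Qs max', 'Qb mean', 'Qb max']
base = [q.mean(), qs.mean(), q.max(), qs.max(), qb.mean(), qb.max()]
res = [q_res.mean(), qs_res.mean(), q_res.max(), qs_res.max(), qb_res.mean(), qb_res.max()]
idx = np.arange(len(labels))
plt.bar(idx - 0.2, base, width=0.4, label='basecase')
plt.bar(idx + 0.2, res, width=0.4, label='reservoir')
plt.xticks(idx, labels)
plt.legend()
plt.show()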
# Describe how such a reservoir affects the water and sediment load at the coast (i.e. downstream of the reservoir)?
# write a short motivation and description of your scenario
# make a 2 panel plot using the subplot functionality of matplotlib
# One panel would show the hypsometry of the Waiapaoa and the other panel the hypsometry of your selected river basin
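# A possible sketch of the two-panel figure (the hyps_* arrays are hypothetical
# placeholders for hypsometry data read from the corresponding .HYPS files):
fig, axs = plt.subplots(1, 2, figsize=(12, 4))
axs[0].plot(hyps_area_waiapaoa, hyps_elev_waiapaoa)
axs[0].set_title('Waiapaoa hypsometry')
axs[1].plot(hyps_area_new, hyps_elev_new)
axs[1].set_title('New basin hypsometry')
plt.show()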
# Set up a new 50 year of the Hydrotrend model
# Create a new directory for this different basin
# initialize the new simulation
# your code for the timeloop and update loop goes here
# plot a line graph comparing Q mean, Qs mean, for the basecase run and the new river basin run
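# A possible sketch (q_new and qs_new are hypothetical arrays from the new basin run):
plt.plot(['basecase', 'new basin'], [q.mean(), q_new.mean()], 'o-', label='Q mean')
plt.plot(['basecase', 'new basin'], [qs.mean(), qs_new.mean()], 's-', label='Qs mean')
plt.legend()
plt.show()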
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We first draw a set of inputs from the parameter space according to the distributions in the table above and compute the function's value at each of these points.
Step2: The following cell instantiates a subspace object and computes the active subspace by fitting a global linear model (indicated by the sstype='OLS' flag). The nboot=100 flag says to use 100 bootstrap replicates to estimate variability in the eigenvalue and subspace error estimates.
Step3: We plot estimated eigenvalues, subspace errors, and a sufficient summary plot (a plot of the active variable vs data values).
Step4: We now make the same plots with a global quadratic model (sstype='QPHD').
Step5: We now use local linear models to approximate gradients and make the same plots as above.
Step6: We see tight trends in the summary plots in each case, which look very similar between the 3 approaches to approximating gradients. Because these trends look so similar between the approaches, it would likely not make much difference which one is used for constructing a response surface for this particular application, though the global linear model does not allow for higher-dimensional subspaces, so you would need to use a global quadratic model or local linear models if you needed a higher-dimensional space. The consistency between these approaches is not guaranteed; you should explore each for your application and decide which fits your needs the best.
Step7: The summary plot looks very similar to the plots created by approximating gradients with input/output pairs. This is also not guaranteed; having exact gradients will often result in tighter trends in summary plots because you don't have the additional layer of error from approximating gradients.
Step8: We can see from the $R^2$ value and the graph that the response surface is a good fit. We'll demonstrate how to use the response surface to estimate the average and minimum of the quantity of interest. We must first construct an active variable domain (requiring a computed subspace object) and an active variable map (requiring a domain object).
Step9: Now we show how to average the quantity of interest against the joint distribution of input parameters using the response surface, and compare this estimate to a Monte Carlo estimate using the full input space.
Step10: We see very close answers between these methods, which we would expect based on the good fit of the response surface. The Monte Carlo method takes significantly longer to run despite our model being a fairly simple algebraic equation; for models that take many minutes or hours to run, accurate Monte Carlo (or other traditional integration methods) may be infeasible. We now demonstrate the use of the response surface in minimization and compare the response surface result with the results of a scipy optimizer on the full input space.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import active_subspaces as ac
import numpy as np
import matplotlib.pyplot as plt
from wing_functions import *
M = 1000 #This is the number of data points to use
#Sample the input space according to the distributions in the table above
Sw = np.random.uniform(150, 200, (M, 1))
Wfw = np.random.uniform(220, 300, (M, 1))
A = np.random.uniform(6, 10, (M, 1))
L = np.random.uniform(-10, 10, (M, 1))
q = np.random.uniform(16, 45, (M, 1))
l = np.random.uniform(.5, 1, (M, 1))
tc = np.random.uniform(.08, .18, (M, 1))
Nz = np.random.uniform(2.5, 6, (M, 1))
Wdg = np.random.uniform(1700, 2500, (M, 1))
Wp = np.random.uniform(.025, .08, (M, 1))
#The input matrix
x = np.hstack((Sw, Wfw, A, L, q, l, tc, Nz, Wdg, Wp))
#The function's output
f = wing(x)
#Upper and lower limits for inputs
#(note: as written, ub holds the smaller values and lb the larger ones; the
# normalization below is self-consistent, so the analysis is unaffected)
ub = np.array([150, 220, 6, -10, 16, .5, .08, 2.5, 1700, .025]).reshape((1, 10))
lb = np.array([200, 300, 10, 10, 45, 1, .18, 6, 2500, .08]).reshape((1, 10))
#We normalize the inputs to the interval [-1, 1]:
XX = 2.*(x - lb)/(ub - lb) - 1.0
#Instantiate a subspace object
ss = ac.subspaces.Subspaces()
#Compute the subspace with a global linear model (sstype='OLS') and 100 bootstrap replicates
ss.compute(X=XX, f=f, nboot=100, sstype='OLS')
#This plots the eigenvalues (ss.eigenvals) with bootstrap ranges (ss.e_br)
ac.utils.plotters.eigenvalues(ss.eigenvals, ss.e_br)
#This plots subspace errors with bootstrap ranges (all contained in ss.sub_br)
ac.utils.plotters.subspace_errors(ss.sub_br)
#This makes sufficient summary plots with the active variables (XX.dot(ss.W1)) and output (f)
ac.utils.plotters.sufficient_summary(XX.dot(ss.W1), f)
ss.compute(X=XX, f=f, nboot=100, sstype='QPHD')
ac.utils.plotters.eigenvalues(ss.eigenvals, ss.e_br)
ac.utils.plotters.subspace_errors(ss.sub_br)
ac.utils.plotters.sufficient_summary(XX.dot(ss.W1), f)
df = ac.gradients.local_linear_gradients(XX, f) #Estimated gradients using local linear models
ss.compute(df=df, nboot=100)
ac.utils.plotters.eigenvalues(ss.eigenvals, ss.e_br)
ac.utils.plotters.subspace_errors(ss.sub_br)
ac.utils.plotters.sufficient_summary(XX.dot(ss.W1), f)
df = wing_grad(x) #Analytically computed gradients
#Chain Rule
df *= (ub - lb)/2.
#Additional chain rule factor for going from degrees to radians
df[:,3] *= np.pi/180.
ss.compute(df=df, nboot=100)
ac.utils.plotters.eigenvalues(ss.eigenvals, ss.e_br)
ac.utils.plotters.subspace_errors(ss.sub_br)
ac.utils.plotters.sufficient_summary(XX.dot(ss.W1), f)
#quadratic polynomial approximation
RS = ac.utils.response_surfaces.PolynomialApproximation(2)
#Train the surface with active variable values (y = XX.dot(ss.W1)) and function values (f)
y = XX.dot(ss.W1)
RS.train(y, f)
print('The R^2 value of the response surface is {:.4f}'.format(RS.Rsqr))
#Plot the data and response surface prediction
plt.figure(figsize=(7, 7))
y0 = np.linspace(-2, 2, 200)
plt.plot(y, f, 'bo', y0, RS.predict(y0[:,None])[0], 'k-', linewidth=2, markersize=8)
plt.grid(True)
plt.xlabel('Active Variable Value', fontsize=18)
plt.ylabel('Output', fontsize=18)
avdom = ac.domains.BoundedActiveVariableDomain(ss)
avmap = ac.domains.BoundedActiveVariableMap(avdom)
#import time for runtime comparison
import time
tb_as = time.time()
#Approximate the average of the quantity of interest by integrating the response surface
int_I = ac.integrals.av_integrate(lambda x: RS.predict(x)[0], avmap, 1000)
print('The estimated average using the response surface is {:.2f}'.format(int_I))
te_as = time.time()
tb_mc = time.time()
#Approximate the average using simple Monte Carlo with M replicates
M = int(1e7)
X = np.random.uniform(-1, 1, (M, 10))
X = lb + (ub - lb)/2.*(X + 1)
MC_f = wing(X)
MC_I = MC_f.sum()/M
MCSE = MC_f.std(ddof=1)/np.sqrt(M)
print('The estimated average using standard Monte Carlo is {:.2f} with 95% CI ({:.2f}, {:.2f})'
      .format(MC_I, MC_I - 1.96*MCSE, MC_I + 1.96*MCSE))
te_mc = time.time()
print('Active subspaces took {:.1f} seconds and Monte Carlo took {:.1f} seconds'
      .format(te_as - tb_as, te_mc - tb_mc))
#Minimize using the response surface
ystar, fstar = ac.optimizers.av_minimize(lambda x: RS.predict(x.reshape(1,1))[0], avdom)
print('The response surface optimization results are:\n ystar = {:.4f}, fstar = {:.4f}\n'
      .format(ystar[0,0], fstar[0,0]))
#Minimize using scipy
import scipy.optimize as opt
#wing_n takes normalized inputs while wing takes unnormalized inputs
wing_n = lambda x: wing(lb + (ub - lb)/2.*(x + 1))
scipy_res = opt.minimize(wing_n, np.zeros(10), bounds=list(zip(-1*np.ones(10), np.ones(10))))
print('The scipy optimization results are:')
print('xstar = {}, fstar = {:.4f}'.format(scipy_res['x'], scipy_res['fun'][0,0]))
print('xstar*W1 = {:.4f}'.format(scipy_res['x'].reshape((1, 10)).dot(ss.W1)[0,0]))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Retrieving training and test data
Step2: Visualize the training data
Step3: Building the network
Step4: Training the network
Step5: Testing
Step6: BUT WAIT!
Step7: Column zero has 9020 zeros and 980 ones. So about one tenth of our 10000 images was a zero.
Step8: It has over 1000 ones. So there are more ones than zeros in the testY set.
Step9: As a check, adding up all the one-hot one's, we should get a total of 10000
Step10: Now let's look at our predictions in the same way
Step11: And what about our accuracy test?
Step12: But what about the other columns besides zero?
|
<ASSISTANT_TASK:>
Python Code:
# Import Numpy, TensorFlow, TFLearn, and MNIST data
import numpy as np
import tensorflow as tf
import tflearn
import tflearn.datasets.mnist as mnist
# Retrieve the training and test data
trainX, trainY, testX, testY = mnist.load_data(one_hot=True)
# Visualizing the data
import matplotlib.pyplot as plt
%matplotlib inline
# Function for displaying a training image by it's index in the MNIST set
def show_digit(index):
label = trainY[index].argmax(axis=0)
# Reshape 784 array into 28x28 image
image = trainX[index].reshape([28,28])
plt.title('Training data, index: %d, Label: %d' % (index, label))
plt.imshow(image, cmap='gray_r')
plt.show()
# Display the first (index 0) training image
show_digit(0)
# Define the neural network
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
#### Your code ####
# Include the input layer, hidden layer(s), and set how you want to train the model
net = tflearn.input_data([None, 784])
net = tflearn.fully_connected(net, 128, activation='ReLU')
net = tflearn.fully_connected(net, 32, activation='ReLU')
net = tflearn.fully_connected(net, 10, activation='softmax')
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
# This model assumes that your network is named "net"
model = tflearn.DNN(net)
return model
# Build the model
model = build_model()
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=100, n_epoch=8)
# Compare the labels that our model predicts with the actual labels
predictions = (np.array(model.predict(testX))[:,0] >= 0.5).astype(np.int_)
# Calculate the accuracy, which is the percentage of times the predicated labels matched the actual labels
test_accuracy = np.mean(predictions == testY[:,0], axis=0)
# Print out the result
print("Test accuracy: ", test_accuracy)
np.bincount(testY[:,0].astype(np.int_))
np.bincount(testY[:,1].astype(np.int_))
for i in range(10):
print(i, np.bincount(testY[:,i].astype(np.int_)))
980 + \
1135+ \
1032+ \
1010+ \
982 + \
892 + \
958 + \
1028+ \
974 + \
1009
for i in range(10):
print(i, np.bincount((np.array(model.predict(testX))[:,i] >= 0.5).astype(np.int_)))
np.bincount(predictions == testY[:,0])
for i in range(10):
print(i, np.bincount((np.array(model.predict(testX))[:,i] >= 0.5).astype(np.int_) == testY[:,i]))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Chemistry Scheme Scope
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables Form
Step9: 1.6. Number Of Tracers
Step10: 1.7. Family Approach
Step11: 1.8. Coupling With Chemical Reactivity
Step12: 2. Key Properties --> Software Properties
Step13: 2.2. Code Version
Step14: 2.3. Code Languages
Step15: 3. Key Properties --> Timestep Framework
Step16: 3.2. Split Operator Advection Timestep
Step17: 3.3. Split Operator Physical Timestep
Step18: 3.4. Split Operator Chemistry Timestep
Step19: 3.5. Split Operator Alternate Order
Step20: 3.6. Integrated Timestep
Step21: 3.7. Integrated Scheme Type
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
Step23: 4.2. Convection
Step24: 4.3. Precipitation
Step25: 4.4. Emissions
Step26: 4.5. Deposition
Step27: 4.6. Gas Phase Chemistry
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Step30: 4.9. Photo Chemistry
Step31: 4.10. Aerosols
Step32: 5. Key Properties --> Tuning Applied
Step33: 5.2. Global Mean Metrics Used
Step34: 5.3. Regional Metrics Used
Step35: 5.4. Trend Metrics Used
Step36: 6. Grid
Step37: 6.2. Matches Atmosphere Grid
Step38: 7. Grid --> Resolution
Step39: 7.2. Canonical Horizontal Resolution
Step40: 7.3. Number Of Horizontal Gridpoints
Step41: 7.4. Number Of Vertical Levels
Step42: 7.5. Is Adaptive Grid
Step43: 8. Transport
Step44: 8.2. Use Atmospheric Transport
Step45: 8.3. Transport Details
Step46: 9. Emissions Concentrations
Step47: 10. Emissions Concentrations --> Surface Emissions
Step48: 10.2. Method
Step49: 10.3. Prescribed Climatology Emitted Species
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Step51: 10.5. Interactive Emitted Species
Step52: 10.6. Other Emitted Species
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
Step54: 11.2. Method
Step55: 11.3. Prescribed Climatology Emitted Species
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Step57: 11.5. Interactive Emitted Species
Step58: 11.6. Other Emitted Species
Step59: 12. Emissions Concentrations --> Concentrations
Step60: 12.2. Prescribed Upper Boundary
Step61: 13. Gas Phase Chemistry
Step62: 13.2. Species
Step63: 13.3. Number Of Bimolecular Reactions
Step64: 13.4. Number Of Termolecular Reactions
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Step67: 13.7. Number Of Advected Species
Step68: 13.8. Number Of Steady State Species
Step69: 13.9. Interactive Dry Deposition
Step70: 13.10. Wet Deposition
Step71: 13.11. Wet Oxidation
Step72: 14. Stratospheric Heterogeneous Chemistry
Step73: 14.2. Gas Phase Species
Step74: 14.3. Aerosol Species
Step75: 14.4. Number Of Steady State Species
Step76: 14.5. Sedimentation
Step77: 14.6. Coagulation
Step78: 15. Tropospheric Heterogeneous Chemistry
Step79: 15.2. Gas Phase Species
Step80: 15.3. Aerosol Species
Step81: 15.4. Number Of Steady State Species
Step82: 15.5. Interactive Dry Deposition
Step83: 15.6. Coagulation
Step84: 16. Photo Chemistry
Step85: 16.2. Number Of Reactions
Step86: 17. Photo Chemistry --> Photolysis
Step87: 17.2. Environmental Conditions
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ncc', 'noresm2-mm', 'atmoschem')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.
Step2: We'll train an autoencoder with these images by flattening them into 784-dimensional vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder, with a single ReLU hidden layer. This layer will be used as the compressed representation. The encoder is then the input layer plus the hidden layer, and the decoder is the hidden layer plus the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.
Step3: Training
Step4: Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss.
Step5: Checking out the results
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
# Size of the encoding layer (the hidden layer)
encoding_dim = 32 # feel free to change this value
image_size = mnist.train.images.shape[1]
inputs_ = tf.placeholder(tf.float32, (None, image_size), name = 'inputs')
targets_ = tf.placeholder(tf.float32, (None, image_size), name = 'targets')
# Output of hidden layer
encoded = tf.layers.dense(inputs_, encoding_dim, activation = tf.nn.relu)
# Output layer logits
logits = tf.layers.dense(encoded, image_size, activation = None)
# Sigmoid output from logits
decoded = tf.nn.sigmoid(logits, name = 'output')
# Sigmoid cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels = targets_, logits = logits)
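# (sigmoid_cross_entropy_with_logits applies the sigmoid internally,
# which is why the raw logits, not `decoded`, are passed in here)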
# Mean of the loss
cost = tf.reduce_mean(loss)
# Adam optimizer
opt = tf.train.AdamOptimizer(0.001).minimize(cost)
# Create the session
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
feed = {inputs_: batch[0], targets_: batch[0]}
batch_cost, _ = sess.run([cost, opt], feed_dict=feed)
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed, compressed = sess.run([decoded, encoded], feed_dict={inputs_: in_imgs})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
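# A minimal sketch (not part of the original notebook) of inspecting the
# 32-dim codes fetched above; `compressed` has shape (10, 32), so each
# code can be viewed as a small 4x8 image:
fig2, axes2 = plt.subplots(nrows=1, ncols=10, figsize=(20, 2))
for code, ax in zip(compressed, axes2):
    ax.imshow(code.reshape((4, 8)), cmap='Greys_r')
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)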
sess.close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In SQL, databases hold tables, tables contain rows, and each row is made up of a number of columns. In NoSQL, databases hold collections, collections hold documents, and each document is made up of a number of fields. Documents are the NoSQL version of rows and are written in JSON, so we can write them as Python dictionaries. Above we are using the database "test_database" and the collection "user". This is the collection we will insert our documents into.
Step2: Above we created a document with name, location, and interests fields. These fields take strings or lists of strings, but it is also possible to give them ints, floats, embedded documents, and many other types.
Step3: insert_many() can be used for bulk inserts. Again note that the first user has an extra field not declared by any of the other users.
Step4: While we know that each user has included their name, if we try to iterate over a field that is not present in all documents, such as the occupation field, we will get an error. Code such as below should be used to prevent an error being thrown.
Step5: Bob has gotten a job and wants to update his profile. users.find_one({"name"
Step6: It turns out that "Elliot" is actually a bot account made with the purpose of spamming the other users. remove({"name"
Step7: An example of how to delete data from a user is shown below. We pull Mike's data out of the collection and then pop off the occupation field.
Step8: Here we print off the ObjectId for Mike. This gives us a string, which we cannot use directly when searching the collection. To do so we must convert it back into an ObjectId by importing ObjectId from bson.objectid (BSON is binary JSON, the format MongoDB stores documents in). Below we get Mike's ObjectId as a string, convert it, search the collection for his document, update it, and then search for the new entry, all using his unique ObjectId.
Step9: PyMongo allows us to leverage other great Python modules such as pandas. Here we import pandas, convert the cursor returned by find() into a list, and then turn that into a new DataFrame. All missing values are replaced with NaN, and we can use pandas' fillna() to replace these with suitable values.
|
<ASSISTANT_TASK:>
Python Code:
from pymongo import MongoClient
client = MongoClient() #connects to the running mongod instance
users = client.test_database.user #creates database "test_database" and collection "user" if they do not already exist
users.remove({}) #Making sure that the collection is empty before I start
user = {"name" : "Bob",
"location" : "Ireland",
"interests" : ["Java", "python"]}
users.insert_one(user) #insert user into users collection as a document. insert() can also be used.
users.find() #find() returns a cursor
users.find_one()
user = {"name" : "Ted",
"interests" : ["data science", "R"]} #Note that this document does not contain the same fields (columns) as the one above
users.insert_one(user)
for user in users.find(): #use the cursor to return all documents in the collection
    print(user) #the order the fields will be printed in can't be guaranteed
#Insert many users at once
new_users = [{"name" : "Mike",
"occupation" : "Data Scientist",
"interests" : ["data science", "machine learning", "python", "R"]},
{"name" : "Elliot",
"interests" : ["programming"]}]
users.insert_many(new_users)
for user in users.find():
print user["name"] #print out just the users name
#printing the occupation for all users will throw an error as only one document contains this information
for user in users.find():
    try:
        print(user["occupation"])
    except KeyError: #raised only for documents missing the field
        pass
#find returns a Cursor object, not the matching document itself
print(users.find({"name" : "Bob"})) #using find_one() instead would return the first user found whose name is Bob.
update_user = users.find_one({"name" : "Bob"})
update_user["occupation"] = "programmer"
users.update({"name" : "Bob"}, update_user)
print(users.find_one({"name" : "Bob"}))
users.remove({"name" : "Elliot"})
users.find_one({"name" : "Elliot"})
update_user = users.find_one({"name" : "Mike"})
update_user.pop("occupation")
print(update_user)
print(update_user["_id"])
from bson.objectid import ObjectId
id_as_string = str(update_user["_id"]) #str() gives the 24-character hex string form
print(users.find_one({"_id" : ObjectId(id_as_string)}))
users.update({"_id" : ObjectId(id_as_string)}, update_user)
print("")
print(users.find_one({"_id" : ObjectId(id_as_string)}))
import pandas as pd
df = pd.DataFrame(list(users.find()))
print(df.head())
df = df.fillna("None")
print(df)
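# A minimal sketch of a filtered query with a projection (the second
# argument to find() selects which fields come back); this assumes the
# documents inserted above are still in the collection:
for user in users.find({"interests" : "python"}, {"name" : 1, "_id" : 0}):
    print(user)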
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Define a sine wave
Step2: Plot the sine wave
Step3: Define a function that allows a user to vary the wavenumber of the sine wave
Step4: For example
Step5: Use a slider widget to call sine_plotter and interactively change the wave number of the sine wave
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from ipywidgets import interact
x = np.linspace(0, 1, 101)
k = 2
f = np.sin(2*np.pi * k * x)
plt.plot(x, f)
def sine_plotter(wave_number):
plt.plot(x, np.sin(2*np.pi * x * wave_number), 'r')
sine_plotter(5)
interact(sine_plotter, wave_number=(1, 10, 0.5))
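# Equivalent, using an explicit slider widget instead of the (min, max, step)
# tuple shorthand; a sketch assuming the standard ipywidgets FloatSlider:
from ipywidgets import FloatSlider
interact(sine_plotter, wave_number=FloatSlider(min=1, max=10, step=0.5, value=2))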
<END_TASK>
|