| Unnamed: 0 (int64, 0 to 15.9k) | cleaned_code (string, 67 to 124k chars, nullable) | cleaned_prompt (string, 168 to 30.3k chars, nullable) |
|---|---|---|
6,600
|
<ASSISTANT_TASK:>
Python Code:
print("This is some printed text,\n"
"with a nicely formatted output.")
Image('example.jpg',height=400)
plt.scatter(np.random.rand(10), np.random.rand(10),
label='data label')
plt.ylabel(r'a y label with latex $\alpha$')
plt.legend();
df = pd.DataFrame(np.random.rand(3,4),columns=['a','b','c','d'])
df.a = [r'$\delta$','x','y']
df.b = ['l','m','n']
df.set_index(['a','b'])
df.round(3)
Latex('$$ a = b+c $$')
y = sym.Function('y')
n = sym.symbols(r'\alpha')
f = y(n)-2*y(n-1)-5*y(n-2)
sym.rsolve(f,y(n),[1,4])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Markdown
Step2: Images and Figures
Step3: Displaying a plot with its code
Step4: Tables (with pandas)
Step5: Equations (with ipython or sympy)
Step6: The plotting code for a sympy equation
|
6,601
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import os, sys
import re, json
from urllib.parse import urlparse
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.preprocessing import Imputer, FunctionTransformer
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.preprocessing import StandardScaler, LabelEncoder, MinMaxScaler
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.externals import joblib
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import ENGLISH_STOP_WORDS
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import chi2, SelectKBest
from sklearn.linear_model import LogisticRegression
from sklearn.cross_validation import KFold
from nltk.stem.snowball import SnowballStemmer
from nltk.stem import WordNetLemmatizer
from nltk import word_tokenize
import xgboost as xgb
import warnings
warnings.filterwarnings('ignore')
basepath = os.path.expanduser('~/Desktop/src/Stumbleupon_classification_challenge/')
sys.path.append(os.path.join(basepath, 'src'))
np.random.seed(4)
from data import load_datasets
from models import train_test_split, cross_val_scheme
# Initialize Stemmer
sns = SnowballStemmer(language='english')
train, test, sample_sub = load_datasets.load_dataset()
train['is_news'] = train.is_news.fillna(-999)
test['is_news'] = test.is_news.fillna(-999)
def extract_top_level_domain(url):
parsed_url = urlparse(url)
top_level = parsed_url[1].split('.')[-1]
return top_level
def get_tlds(urls):
return np.array([extract_top_level_domain(url) for url in urls])
train['tlds'] = get_tlds(train.url)
test['tlds'] = get_tlds(test.url)
ohe = pd.get_dummies(list(train.tlds) + list(test.tlds))
train = pd.concat((train, ohe.iloc[:len(train)]), axis=1)
test = pd.concat((test, ohe.iloc[len(train):]), axis=1)
class NumericalFeatures(BaseEstimator, TransformerMixin):
@staticmethod
def url_depth(url):
parsed_url = urlparse(url)
path = parsed_url.path
return len(list(filter(lambda x: len(x)> 0, path.split('/'))))
@staticmethod
def get_url_depths(urls):
return np.array([NumericalFeatures.url_depth(url) for url in urls])
def __init__(self, numerical_features):
self.features = numerical_features
def fit(self, X, y=None):
return self
def transform(self, df):
df['url_depth'] = self.get_url_depths(df.url)
numeric_features = self.features + ['url_depth']
df_numeric = df[numeric_features]
return df_numeric
params = {
'test_size': 0.2,
'random_state': 2,
'stratify': train.is_news
}
itrain, itest = train_test_split.tr_ts_split(len(train), **params)
X_train = train.iloc[itrain]
X_test = train.iloc[itest]
y_train = train.iloc[itrain].label
y_test = train.iloc[itest].label
numeric_features = list(train.select_dtypes(exclude=['object']).columns[1:])
numeric_features.remove('label')
pipeline = Pipeline([
('feature_extractor', NumericalFeatures(numeric_features)),
('imputer', Imputer(strategy='mean')),
('scaler', StandardScaler()),
('model', xgb.XGBClassifier(learning_rate=.08, max_depth=6))
])
pipeline.fit(X_train, y_train)
# cross validation
params = {
'n_folds': 5,
'shuffle': True,
'random_state': 3
}
scores, mean_score, std_score = cross_val_scheme.cv_scheme(pipeline, X_train, y_train, train.iloc[itrain].is_news, **params)
print('CV Scores: %s'%(scores))
print('Mean CV Score: %f'%(mean_score))
print('Std CV Score: %f'%(std_score))
y_preds = pipeline.predict_proba(X_test)[:, 1]
print('ROC AUC score on the test set ', roc_auc_score(y_test, y_preds))
joblib.dump(pipeline, os.path.join(basepath, 'data/processed/pipeline_numeric/pipeline_numeric.pkl'))
train = joblib.load(os.path.join(basepath, 'data/processed/train_raw_content.pkl'))
test = joblib.load(os.path.join(basepath, 'data/processed/test_raw_content.pkl'))
train_json = list(map(json.loads, train.boilerplate))
test_json = list(map(json.loads, test.boilerplate))
train['boilerplate'] = train_json
test['boilerplate'] = test_json
def get_component(boilerplate, key):
""" Get value for a particular key in the boilerplate json;
if present return the value, else return an empty string.
boilerplate: list of boilerplate text in json format
key: key for which we want to fetch the value, e.g. body, title and url """
return np.array([bp[key] if key in bp and bp[key] else u'' for bp in boilerplate])
train['body_bp'] = get_component(train.boilerplate, 'body')
test['body_bp'] = get_component(test.boilerplate, 'body')
train['title_bp'] = get_component(train.boilerplate, 'title')
test['title_bp'] = get_component(test.boilerplate, 'title')
train['url_component'] = get_component(train.boilerplate, 'url')
test['url_component'] = get_component(test.boilerplate, 'url')
class LemmaTokenizer(object):
def __init__(self):
self.wnl = WordNetLemmatizer()
def __call__(self, doc):
return [self.wnl.lemmatize(t) for t in word_tokenize(doc)]
class VarSelect(BaseEstimator, TransformerMixin):
def __init__(self, keys):
self.keys = keys
def fit(self, X, y=None):
return self
def transform(self, df):
return df[self.keys]
class StemTokenizer(object):
def __init__(self):
self.sns = sns
def __call__(self, doc):
return [self.sns.stem(t) for t in word_tokenize(doc)]
def remove_non_alphanumeric(df):
return df.replace(r'[^A-Za-z0-9]+', ' ', regex=True)
strip_non_words = FunctionTransformer(remove_non_alphanumeric, validate=False)
# Lemma Tokenizer
pipeline_lemma = Pipeline([
('strip', strip_non_words),
('union', FeatureUnion([
('body', Pipeline([
('var', VarSelect(keys='body_bp')),
('tfidf', TfidfVectorizer(strip_accents='unicode', tokenizer=LemmaTokenizer(),
ngram_range=(1, 2), min_df=3, sublinear_tf=True)),
('svd', TruncatedSVD(n_components=100))
])),
('title', Pipeline([
('var', VarSelect(keys='title_bp')),
('tfidf', TfidfVectorizer(strip_accents='unicode', tokenizer=LemmaTokenizer(),
ngram_range=(1, 2), min_df=3, sublinear_tf=True)),
('svd', TruncatedSVD(n_components=100))
])),
('url', Pipeline([
('var', VarSelect(keys='url_component')),
('tfidf', TfidfVectorizer(strip_accents='unicode', tokenizer=LemmaTokenizer(),
ngram_range=(1,2), min_df=3, sublinear_tf=True)),
('svd', TruncatedSVD(n_components=50))
]))
])),
('scaler', MinMaxScaler()),
('selection', SelectKBest(chi2, k=100)),
('model', LogisticRegression())
])
params = {
'test_size': 0.2,
'random_state': 2,
'stratify': train.is_news
}
itrain, itest = train_test_split.tr_ts_split(len(train), **params)
features = ['url_component', 'body_bp', 'title_bp']
X_train = train.iloc[itrain][features]
X_test = train.iloc[itest][features]
y_train = train.iloc[itrain].label
y_test = train.iloc[itest].label
pipeline_lemma.fit(X_train, y_train)
y_preds = pipeline_lemma.predict_proba(X_test)[:, 1]
print('AUC score on unseen examples is: ', roc_auc_score(y_test, y_preds))
# train on full dataset
X = train[features]
y = train.label
pipeline_lemma.fit(X, y)
# save this model to disk
joblib.dump(pipeline_lemma, os.path.join(basepath, 'data/processed/pipeline_boilerplate_lemma/model_lemma.pkl'))
# Stemming Tokenizer
pipeline_stemming = Pipeline([
('strip', strip_non_words),
('union', FeatureUnion([
('body', Pipeline([
('var', VarSelect(keys='body_bp')),
('tfidf', TfidfVectorizer(strip_accents='unicode', tokenizer=StemTokenizer(),
ngram_range=(1, 2), min_df=3, sublinear_tf=True)),
('svd', TruncatedSVD(n_components=100))
])),
('title', Pipeline([
('var', VarSelect(keys='title_bp')),
('tfidf', TfidfVectorizer(strip_accents='unicode', tokenizer=StemTokenizer(),
ngram_range=(1, 2), min_df=3, sublinear_tf=True)),
('svd', TruncatedSVD(n_components=100))
])),
('url', Pipeline([
('var', VarSelect(keys='url_component')),
('tfidf', TfidfVectorizer(strip_accents='unicode', tokenizer=StemTokenizer(),
ngram_range=(1,2), min_df=3, sublinear_tf=True)),
('svd', TruncatedSVD(n_components=50))
]))
])),
('scaler', MinMaxScaler()),
('selection', SelectKBest(chi2, k=100)),
('model', LogisticRegression())
])
params = {
'test_size': 0.2,
'random_state': 2,
'stratify': train.is_news
}
itrain, itest = train_test_split.tr_ts_split(len(train), **params)
features = ['url_component', 'body_bp', 'title_bp']
X_train = train.iloc[itrain][features]
X_test = train.iloc[itest][features]
y_train = train.iloc[itrain].label
y_test = train.iloc[itest].label
pipeline_stemming.fit(X_train, y_train)
y_preds = pipeline_stemming.predict_proba(X_test)[:, 1]
print('AUC score on unseen examples is: ', roc_auc_score(y_test, y_preds))
# train on full dataset
X = train[features]
y = train.label
pipeline_stemming.fit(X, y)
# save this model to disk
joblib.dump(pipeline_stemming, os.path.join(basepath, 'data/processed/pipeline_boilerplate_stem/model_stem.pkl'))
class Blending(object):
def __init__(self, models):
self.models = models # dict
def predict(self, X, X_test, y=None):
cv = KFold(len(X), n_folds=3, shuffle=True, random_state=10)
dataset_blend_train = np.zeros((X.shape[0], len(self.models.keys())))
dataset_blend_test = np.zeros((X_test.shape[0], len(self.models.keys())))
for index, key in enumerate(self.models.keys()):
dataset_blend_test_index = np.zeros((X_test.shape[0], len(cv)))
model = self.models[key][1]
feature_list = self.models[key][0]
print('Training model of type: ', key)
for i, (itrain, itest) in enumerate(cv):
Xtr = X.iloc[itrain][feature_list]
ytr = y.iloc[itrain]
Xte = X.iloc[itest][feature_list]
yte = y.iloc[itest]
# fit the base model on this fold before producing out-of-fold predictions
model.fit(Xtr, ytr)
y_preds = model.predict_proba(Xte)[:, 1]
dataset_blend_train[itest, index] = y_preds
dataset_blend_test_index[:, i] = model.predict_proba(X_test[feature_list])[:, 1]
dataset_blend_test[:, index] = dataset_blend_test_index.mean(1)
print('\nBlending')
clf = LogisticRegression()
clf.fit(dataset_blend_train, y)
y_submission = clf.predict_proba(dataset_blend_test)[:, 1]
y_submission = (y_submission - y_submission.min()) / (y_submission.max() - y_submission.min())
return y_submission
def stem_tokens(x):
return ' '.join([sns.stem(word) for word in word_tokenize(x)])
def preprocess_string(s):
return stem_tokens(s)
class Weights(BaseEstimator, TransformerMixin):
def __init__(self, weight):
self.weight = weight
def fit(self, X, y=None):
return self
def transform(self, X):
return self.weight * X
# load all the models from the disk
# pipeline_numeric = joblib.load(os.path.join(basepath, 'data/processed/pipeline_numeric/pipeline_numeric.pkl'))
# pipeline_lemma = joblib.load(os.path.join(basepath, 'data/processed/pipeline_boilerplate_lemma/model_lemma.pkl'))
# pipeline_stemming = joblib.load(os.path.join(basepath, 'data/processed/pipeline_boilerplate_stem/model_stem.pkl'))
pipeline_raw = joblib.load(os.path.join(basepath, 'data/processed/pipeline_raw/model_raw.pkl'))
numeric_features = list(train.select_dtypes(exclude=['object']).columns[1:]) + ['url']
numeric_features.remove('label')
boilerplate_features = ['body_bp', 'title_bp', 'url_component']
raw_features = ['body', 'title', 'h1', 'h2', 'h3', 'h4', 'span', 'a', 'label_',\
'meta-title', 'meta-description', 'li']
models = {
# 'numeric': [numeric_features, pipeline_numeric],
'boilerplate_lemma': [boilerplate_features, pipeline_lemma],
'boilerplate_stem': [boilerplate_features, pipeline_stemming],
'boilerplate_raw': [raw_features, pipeline_raw]
}
params = {
'test_size': 0.2,
'random_state': 2,
'stratify': train.is_news
}
itrain, itest = train_test_split.tr_ts_split(len(train), **params)
features = list(boilerplate_features) + list(raw_features)
X_train = train.iloc[itrain][features]
X_test = train.iloc[itest][features]
y_train = train.iloc[itrain].label
y_test = train.iloc[itest].label
blend = Blending(models)
y_blend = blend.predict(X_train, X_test, y_train)
print('AUC score after blending ', roc_auc_score(y_test, y_blend))
X = train[features]
X_test = test[features]
y = train.label
assert X.shape[1] == X_test.shape[1]
blend = Blending(models)
predictions = blend.predict(X, X_test, y)
sample_sub['label'] = predictions
sample_sub.to_csv(os.path.join(basepath, 'submissions/blend_3.csv'), index=False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Text Features based on the boiler plate
Step2: Split into training and test sets.
Step3: Load Textual Features Prepared from raw content
Step5: Text features from Boilerplate
Step6: Pipeline involving Stemming
Step7: Blending
Step8: Train on full dataset.
Step9: Submissions
|
6,602
|
<ASSISTANT_TASK:>
Python Code:
# you would normally install eppy by doing
# python setup.py install
# or
# pip install eppy
# or
# easy_install eppy
# if you have not done so, uncomment the following three lines
import sys
# pathnameto_eppy = 'c:/eppy'
pathnameto_eppy = '../'
sys.path.append(pathnameto_eppy)
from eppy import modeleditor
from eppy.modeleditor import IDF
fname1 = "../eppy/resources/idffiles/V_7_2/smallfile.idf"
try:
idf1 = IDF(fname1)
except Exception, e:
raise e
iddfile = "../eppy/resources/iddfiles/Energy+V7_2_0.idd"
IDF.setiddname(iddfile)
idf1 = IDF(fname1)
try:
IDF.setiddname("anotheridd.idd")
except Exception, e:
raise e
from eppy import modeleditor
from eppy.modeleditor import IDF
iddfile = "../eppy/resources/iddfiles/Energy+V7_2_0.idd"
fname1 = "../eppy/resources/idffiles/V_7_2/smallfile.idf"
# IDF.setiddname(iddfile)# idd ws set further up in this page
idf1 = IDF(fname1)
building = idf1.idfobjects['building'.upper()][0]
print building
print building.getrange("Loads_Convergence_Tolerance_Value")
print building.checkrange("Loads_Convergence_Tolerance_Value")
building.Loads_Convergence_Tolerance_Value = 0.6
from eppy.bunch_subclass import RangeError
try:
print building.checkrange("Loads_Convergence_Tolerance_Value")
except RangeError, e:
raise e
print building.fieldnames
for fieldname in building.fieldnames:
print "%s = %s" % (fieldname, building[fieldname])
from eppy.bunch_subclass import RangeError
for fieldname in building.fieldnames:
try:
building.checkrange(fieldname)
print "%s = %s #-in range" % (fieldname, building[fieldname],)
except RangeError as e:
print "%s = %s #-****OUT OF RANGE****" % (fieldname, building[fieldname],)
# some initial steps
from eppy.modeleditor import IDF
iddfile = "../eppy/resources/iddfiles/Energy+V7_2_0.idd"
# IDF.setiddname(iddfile) # Has already been set
# - Let us first open a file from the disk
fname1 = "../eppy/resources/idffiles/V_7_2/smallfile.idf"
idf_fromfilename = IDF(fname1) # initialize the IDF object with the file name
idf_fromfilename.printidf()
# - now let us open a file from the disk differently
fname1 = "../eppy/resources/idffiles/V_7_2/smallfile.idf"
fhandle = open(fname1, 'r') # open the file for reading and assign it a file handle
idf_fromfilehandle = IDF(fhandle) # initialize the IDF object with the file handle
idf_fromfilehandle.printidf()
# So IDF object can be initialized with either a file name or a file handle
# - How do I create a blank new idf file
idftxt = "" # empty string
from StringIO import StringIO
fhandle = StringIO(idftxt) # we can make a file handle of a string
idf_emptyfile = IDF(fhandle) # initialize the IDF object with the file handle
idf_emptyfile.printidf()
# - The string does not have to be blank
idftxt = "VERSION, 7.3;" # Not an emplty string. has just the version number
fhandle = StringIO(idftxt) # we can make a file handle of a string
idf_notemptyfile = IDF(fhandle) # initialize the IDF object with the file handle
idf_notemptyfile.printidf()
# - give it a file name
idf_notemptyfile.idfname = "notemptyfile.idf"
# - Save it to the disk
idf_notemptyfile.save()
txt = open("notemptyfile.idf", 'r').read()# read the file from the disk
print txt
import os
os.remove("notemptyfile.idf")
# making a blank idf object
blankstr = ""
from StringIO import StringIO
idf = IDF(StringIO(blankstr))
newobject = idf.newidfobject("material".upper()) # the key for the object type has to be in upper case
# .upper() makes it upper case
print newobject
newobject.Name = "Shiny new material object"
print newobject
anothermaterial = idf.newidfobject("material".upper())
anothermaterial.Name = "Lousy material"
thirdmaterial = idf.newidfobject("material".upper())
thirdmaterial.Name = "third material"
print thirdmaterial
print idf.idfobjects["MATERIAL"]
idf.popidfobject('MATERIAL', 1) # first material is '0', second is '1'
print idf.idfobjects['MATERIAL']
firstmaterial = idf.idfobjects['MATERIAL'][-1]
idf.removeidfobject(firstmaterial)
print idf.idfobjects['MATERIAL']
onlymaterial = idf.idfobjects["MATERIAL"][0]
idf.copyidfobject(onlymaterial)
print idf.idfobjects["MATERIAL"]
gypboard = idf.newidfobject('MATERIAL', Name="G01a 19mm gypsum board",
Roughness="MediumSmooth",
Thickness=0.019,
Conductivity=0.16,
Density=800,
Specific_Heat=1090)
print gypboard
print idf.idfobjects["MATERIAL"]
interiorwall = idf.newidfobject("CONSTRUCTION", Name="Interior Wall",
Outside_Layer="G01a 19mm gypsum board",
Layer_2="Shiny new material object",
Layer_3="G01a 19mm gypsum board")
print interiorwall
modeleditor.rename(idf, "MATERIAL", "G01a 19mm gypsum board", "peanut butter")
print interiorwall
idf.printidf()
defaultmaterial = idf.newidfobject("MATERIAL",
Name='with default')
print defaultmaterial
nodefaultmaterial = idf.newidfobject("MATERIAL",
Name='Without default',
defaultvalues=False)
print nodefaultmaterial
from eppy import modeleditor
from eppy.modeleditor import IDF
iddfile = "../eppy/resources/iddfiles/Energy+V7_2_0.idd"
fname1 = "../eppy/resources/idffiles/V_7_2/box.idf"
# IDF.setiddname(iddfile)
idf = IDF(fname1)
surfaces = idf.idfobjects["BuildingSurface:Detailed".upper()]
surface = surfaces[0]
print "area = %s" % (surface.area, )
print "tilt = %s" % (surface.tilt, )
print "azimuth = %s" % (surface.azimuth, )
zones = idf.idfobjects["ZONE"]
zone = zones[0]
area = modeleditor.zonearea(idf, zone.Name)
volume = modeleditor.zonevolume(idf, zone.Name)
print "zone area = %s" % (area, )
print "zone volume = %s" % (volume, )
idf1.printidf()
import eppy.json_functions as json_functions
json_str = {"idf.VERSION..Version_Identifier":8.5,
"idf.SIMULATIONCONTROL..Do_Zone_Sizing_Calculation": "No",
"idf.SIMULATIONCONTROL..Do_System_Sizing_Calculation": "No",
"idf.SIMULATIONCONTROL..Do_Plant_Sizing_Calculation": "No",
"idf.BUILDING.Empire State Building.North_Axis": 52,
"idf.BUILDING.Empire State Building.Terrain": "Rural",
}
json_functions.updateidf(idf1, json_str)
idf1.printidf()
json_str = {"idf.BUILDING.Taj.Terrain": "Rural",}
json_functions.updateidf(idf1, json_str)
idf1.idfobjects['building'.upper()]
# of course, you are creating an invalid E+ file. But we are just playing here.
# first way
json_str = {"idf.BUILDING.Taj.with.dot.Terrain": "Rural",}
json_functions.updateidf(idf1, json_str)
# second way (put the name in single quotes)
json_str = {"idf.BUILDING.'Another.Taj.with.dot'.Terrain": "Rural",}
json_functions.updateidf(idf1, json_str)
idf1.idfobjects['building'.upper()]
from eppy import modeleditor
from eppy.modeleditor import IDF
iddfile = "../eppy/resources/iddfiles/Energy+V7_2_0.idd"
fname = "../eppy/resources/idffiles/V_7_2/smallfile.idf"
IDF.setiddname(iddfile)
idf = IDF(fname)
from eppy.easyopen import easyopen
fname = './eppy/resources/idffiles/V8_8/smallfile.idf'
idf = easyopen(fname)
thefans = idf.idfobjects['Fan:VariableVolume'.upper()]
thefan = thefans[0]
bhp = thefan.fanpower_bhp
watts = thefan.fanpower_watts
cfm = thefan.fan_maxcfm
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: When things go wrong in your eppy script, you get "Errors and Exceptions".
Step2: Now let us open file fname1 without setting the idd file
Step3: OK. It does not let you do that and it raises an exception
Step4: That worked without raising an exception
Step5: Excellent!! It raised the exception we were expecting.
Step6: Let us set these values outside the range and see what happens
Step7: So the Range Check works
Step8: So let us use this
Step9: Now let us test if the values are in the legal range. We know that "Loads_Convergence_Tolerance_Value" is out of range
Step10: You see, we caught the out of range value
Step11: It did not print anything. Why should it? It was empty.
Step12: Aha !
Step13: Let us confirm that the file was saved to disk
Step14: Yup ! that file was saved. Let us delete it since we were just playing
Step15: Deleting, copying/adding and making new idfobjects
Step16: To make and add a new idfobject object, we use the function IDF.newidfobject(). We want to make an object of type "MATERIAL"
Step17: Let us give this a name, say "Shiny new material object"
Step18: Let us look at all the "MATERIAL" objects
Step19: As we can see there are three MATERIAL idfobjects. They are
Step20: You can see that the second material is gone ! Now let us remove the first material, but do it using a different function
Step21: So we have two ways of deleting an idf object
Step22: So now we have a copy of the material. You can use this method to copy idf objects from other idf files too.
Step23: newidfobject() also fills in the default values like "Thermal Absorptance", "Solar Absorptance", etc.
Step24: Renaming an idf object
Step25: to rename gypboard and have that name change in all the places we call modeleditor.rename(idf, key, oldname, newname)
Step26: Now we have "peanut butter" everywhere. At least where we need it. Let us look at the entire idf file, just to be sure
Step27: Turn off default values
Step28: But why would you want to turn it off.
Step29: Can we do the same for zones ?
Step30: Not as slick, but still pretty easy
Step31: Compare the first printidf() and the second printidf().
Step32: What if your object name had a dot . in it? Will the json_function get confused?
Step33: Note: When you use the json update function
Step34: You have to find the IDD file on your hard disk.
Step35: For this to work,
|
6,603
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd, numpy as np, scipy.sparse as sp
import os, gc, regex as re, time as tm
import matplotlib.pyplot as plt
%matplotlib inline
DATADIR = os.path.realpath( os.path.join( ".", "data", "proj02" ) )
raw_dblp_file = os.path.join( DATADIR, "dblp_2000.csv.gz" )
cached_dblp_file = os.path.join( DATADIR, "dblp_2000.ppdf" )
cached_author_index = os.path.join( DATADIR, "dblp_2000_authors.txt" )
## Return a mask of elements of v found in a: optimal for numeric arrays
def match( a, v, return_indices = False ) :
index = np.argsort( a )
## Get insertion indices
sorted_index = np.searchsorted( a, v, sorter = index )
## Truncate the indices by the length of a
index = np.take( index, sorted_index, mode = "clip" )
mask = a[ index ] == v
## return
if return_indices :
return mask, index[ mask ]
return mask
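## Quick illustrative check of match() (values chosen here for demonstration, not from the original notebook):
## match(np.array([1, 2, 3]), np.array([2, 5])) -> array([ True, False])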
## Convert the edgelist into sparse matrix
def to_sparse_coo( u, v, shape, dtype = np.int32 ) :
## Create a COOrdinate sparse matrix from the given ij-indices
assert( len( u ) == len( v ) )
return sp.coo_matrix( (
np.ones( len( u ) + len( v ), dtype = dtype ), (
np.concatenate( ( u, v ) ), np.concatenate( ( v, u ) ) )
), shape = shape )
## Remember: when converting COO to CSR/CSC the duplicate coordinate
## entries are summed!
## Create cache if necessary
tick = tm.time( )
if not os.path.exists( cached_dblp_file ) :
## Load the csv file into a dataframe
dblp = pd.read_csv( raw_dblp_file, # nrows = 10000,
## On-the-fly decompression
compression = "gzip", header = None, quoting = 0,
## Assign column headers
names = [ 'author1', 'author2', 'year', ], encoding = "utf-8" )
## Finish
tock = tm.time( )
print "Raw DBLP read in %.3f sec." % ( tock - tick, )
## Map author names to ids
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder( ).fit( np.concatenate( (
dblp["author1"].values, dblp["author2"].values, ) ) )
dblp_author_index = le.classes_
for col in [ 'author1', 'author2', ] :
dblp[ col ] = le.transform( dblp[ col ] )
## Cache
dblp.to_pickle( cached_dblp_file )
with open( cached_author_index, "w" ) as out :
for label in le.classes_ :
out.write( label.strip( ).encode( "utf-8" ) + "\n" )
del dblp, le
## Finish
tick = tm.time( )
print "Preprocessing took %.3f sec." % ( tick - tock, )
else :
## Load the database from pickled format
dblp = pd.read_pickle( cached_dblp_file )
## Read the dictionary of authors
with open( cached_author_index, "r" ) as author_index :
dblp_author_index = [ line.decode( "utf-8" ) for line in author_index ]
## Report
tock = tm.time( )
print "DBLP loaded in %.3f sec." % ( tock - tick, )
pre2010 = dblp[ dblp.year <= 2010 ].copy( )
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder( ).fit( np.concatenate( ( pre2010[ "author1" ].values, pre2010[ "author2" ].values ) ) )
pre2010_values = le.classes_
## Recode the edge data
for col in [ 'author1', 'author2', ] :
pre2010[ col ] = le.transform( pre2010[ col ] )
pre2010_adj = to_sparse_coo(
pre2010[ "author1" ].values, pre2010[ "author2" ].values,
shape = 2 * [ len( le.classes_ ) ] )
## Eliminate duplicates by converting them into ones
pre2010_adj = pre2010_adj.tocsr( )
pre2010_adj.data = np.ones_like( pre2010_adj.data )
post2010 = dblp[ dblp.year > 2010 ]
common_vertices = np.intersect1d( pre2010_values,
np.union1d( post2010[ "author1" ].values, post2010[ "author2" ].values ) )
post2010 = post2010[ (
match( common_vertices, post2010[ "author1" ].values ) &
match( common_vertices, post2010[ "author2" ].values ) ) ]
del common_vertices
for col in [ 'author1', 'author2', ] :
post2010[ col ] = le.transform( post2010[ col ] )
## The adjacency matrix
post2010_adj = sp.coo_matrix( ( np.ones( post2010.shape[0], dtype = np.bool ),
( post2010[ "author1" ].values, post2010[ "author2" ].values )
), shape = pre2010_adj.shape ).tolil( )
post2010_adj[ pre2010_adj.nonzero( ) ] = 0
post2010_adj = post2010_adj.tocsr( )
post2010_adj.data = np.ones_like( post2010_adj.data )
print post2010_adj.__repr__()
print pre2010_adj.__repr__( )
positive = np.append( *( c.reshape((-1, 1)) for c in post2010_adj.nonzero( ) ), axis = 1 )
## Generate a sample of vertex pairs with no edge in both periods
negative = np.random.choice( pre2010_adj.shape[ 0 ], size = ( 2 * positive.shape[0], positive.shape[1] ) )
E = np.vstack( ( positive, negative ) )
y = np.vstack( ( np.ones( ( positive.shape[ 0 ], 1 ), dtype = np.float ),
np.zeros( ( negative.shape[ 0 ], 1 ), dtype = np.float ) ) )
def phi_degree( edges, A ) :
deg = A.sum( axis = 1 ).astype( np.float )
return np.append( deg[ edges[ :, 0 ] ], deg[ edges[ :, 1 ] ], axis = 1 )
def __sparse_sandwich( edges, A, W = None ) :
AA = A.dot( A.T ) if W is None else A.dot( W ).dot( A.T )
result = AA[ edges[:,0], edges[:,1] ]
del AA ; gc.collect( 0 ) ; gc.collect( 1 ) ; gc.collect( 2 )
return result.reshape(-1, 1)
def phi_adamic_adar( edges, A ) :
inv_log_deg = 1.0 / np.log( A.sum( axis = 1 ).getA1( ) )
inv_log_deg[ np.isinf( inv_log_deg ) ] = 0
result = __sparse_sandwich( edges, A, sp.diags( inv_log_deg, 0 ) )
del inv_log_deg ; gc.collect( 0 ) ; gc.collect( 1 ) ; gc.collect( 2 )
return result
def phi_common_neighbours( edges, A ) :
return __sparse_sandwich( edges, A )
def __sparse_pagerank( A, beta = 0.85, one = None, niter = 1000, rel_eps = 1e-6, verbose = True ) :
## Initialize the iterations
one = one if one is not None else np.ones( ( 1, A.shape[ 0 ] ), dtype = np.float )
one = sp.csr_matrix( one / one.sum( axis = 1 ) )
## Get the out-degree
out = np.asarray( A.sum( axis = 1 ).getA1( ), dtype = np.float )
## Obtain the mask of dangling vertices
dangling = np.where( out == 0.0 )[ 0 ]
## Correct the out-degree for sink nodes
out[ dangling ] = 1.0
## Just one iteration: all dangling nodes add to the importance of all vertices.
pi = np.full( ( one.shape[0], A.shape[0] ), 1.0 / A.shape[ 0 ], dtype = np.float )
## If there are no dangling vertices then use simple iterations
kiter, status = 0, -1
## Make a stochastic matrix
P = sp.diags( 1.0 / out, 0, dtype = np.float ).dot( A ).tocsc( )
while kiter < niter :
## make a copy of the current ranking estimates
pi_last = pi.copy( )
## Use sparse inplace operations for speed. First the random walk part
pi *= beta ; pi *= P
## Now the teleportation ...
pi += ( 1 - beta ) * one
## ... and dangling vertices part
if len( dangling ) > 0 :
pi += beta * one.multiply( np.sum( pi_last[ :, dangling ], axis = 1 ).reshape( ( -1, 1 ) ) )
## Normalize
pi /= np.sum( pi, axis = 1 )
if np.sum( np.abs( pi - pi_last ) ) <= one.shape[0] * rel_eps * np.sum( np.abs( pi_last ) ) :
status = 0
break
## Next iteration
kiter += 1
if kiter % 10 == 0 :
print kiter
return pi, status, kiter
## The global pagerank score
def phi_gpr( edges, A, verbose = True ) :
pi, s, k = __sparse_pagerank( A, one = None, verbose = verbose )
return np.concatenate( ( pi[ :, edges[ :, 0 ] ], pi[ :,edges[ :, 1 ] ] ), axis = 0 ).T
def phi_ppr( edges, A ) :
pass #result = np.empty( edges.shape, dtype = np.float )
# return __sparse_sandwich( edges, A )
tick = tm.time()
phi_12 = phi_degree( E, pre2010_adj )
tock = tm.time()
print "Vertex degree computed in %.3f sec." % ( tock - tick, )
tick = tm.time()
phi_3 = phi_adamic_adar( E, pre2010_adj )
tock = tm.time()
print "Adamic/adar computed in %.3f sec." % ( tock - tick, )
tick = tm.time()
phi_4 = phi_common_neighbours( E, pre2010_adj )
tock = tm.time()
print "Common neighbours computed in %.3f sec." % ( tock - tick, )
tick = tm.time()
phi_56 = phi_gpr( E, pre2010_adj, verbose = False )
tock = tm.time()
print "Global pagerank computed in %.3f sec." % ( tock - tick, )
# tick = tm.time()
# phi_78 = phi_ppr( E, pre2010_adj )  # phi_ppr above is only a stub, so skip it for now
# tock = tm.time()
# print "Personalized pagerank computed in %.3f sec." % ( tock - tick, )
# tick = tm.time()
# phi_5 = phi_shortest_paths( E, pre2010_adj )
# tock = tm.time()
# print "Shortest paths computed in %.3f sec." % ( tock - tick, )
X = np.hstack( ( phi_12, phi_3, phi_4, phi_56 ) ) #, phi_78 ) )
from sklearn.cross_validation import train_test_split
X_modelling, X_main, y_modelling, y_main = train_test_split( X, y.ravel( ), train_size = 0.20 )
from sklearn.grid_search import GridSearchCV
from sklearn.cross_validation import cross_val_score
classifiers = list( )
from sklearn.linear_model import LogisticRegression
LR_grid = GridSearchCV( LogisticRegression( ), cv = 10, verbose = 1,
param_grid = { "C" : np.logspace( -2, 2, num = 5 ) }, n_jobs = -1
).fit( X_modelling, y_modelling )
classifiers.append( ( "Logistic", LR_grid.best_estimator_ ) )
from sklearn.lda import LDA
from sklearn.qda import QDA
classifiers.append( ( "LDA", LDA( ) ) )
classifiers.append( ( "QDA", QDA( ) ) )
from sklearn.ensemble import RandomForestClassifier
RF_grid = GridSearchCV( RandomForestClassifier( n_estimators = 50 ), cv = 10, verbose = 1,
param_grid = { "max_depth" : [ 3, 5, 15, 30 ] }, n_jobs = -1
).fit( X_modelling, y_modelling )
classifiers.append( ( "RandomForest", RF_grid.best_estimator_ ) )
from sklearn.ensemble import AdaBoostClassifier
classifiers.append( ( "AdaBoost", AdaBoostClassifier( n_estimators = 50 ) ) )
from sklearn.tree import DecisionTreeClassifier
tree_grid = GridSearchCV( DecisionTreeClassifier( criterion = "gini" ), cv = 10, verbose = 1,
param_grid = { "max_depth" : [ 3, 5, 15, 30 ] }, n_jobs = -1
).fit( X_modelling, y_modelling )
classifiers.append( ( "Tree", tree_grid.best_estimator_ ) )
from sklearn.neighbors import KNeighborsClassifier
knn_grid = GridSearchCV( KNeighborsClassifier( ), cv = 10, verbose = 50,
param_grid = { "n_neighbors" : [ 2, 3, 5, 15, 30 ] }, n_jobs = -1
).fit( X_modelling, y_modelling )
classifiers.append( ( "k-NN", knn_grid.best_estimator_ ) )
from sklearn.svm import SVC
from sklearn.linear_model import SGDClassifier
X_train, X_test, y_train, y_test = train_test_split( X_main, y_main, train_size = 0.20 )
subsample = np.random.permutation( X_train.shape[ 0 ] )#[ : 50000 ]
X_train_subsample, y_train_subsample = X_train[ subsample ], y_train[ subsample ]
results = dict()
for name, clf in classifiers :
tick = tm.time( )
results[name] = cross_val_score( clf, X_train_subsample, y_train_subsample, n_jobs = -1, verbose = 1, cv = 10 )
tock = tm.time( )
print "k-fold crossvalidation for %s took %.3f sec." % ( name, tock - tick, )
k_fold_frame = pd.DataFrame( results )
# k_fold_frame.append( k_fold_frame.apply( np.average ), ignore_index = True )
k_fold_frame.apply( np.average )
fitted_classifiers = [ ( name, clf.fit( X_train, y_train ) ) for name, clf in classifiers ]
print np.sum( y_test == 0, dtype = np.float ), np.sum( y_test == 1, dtype = np.float )
fig = plt.figure( figsize = ( 9,9 ) )
ax = fig.add_subplot( 111 )
ax.set_ylabel( "True positive" ) ; ax.set_xlabel( "False positive" )
for name, clf in fitted_classifiers[:-1] :
theta = clf.predict_proba( X_test )
## The response variable is 0-1 coded therefore it is easy to compute the true- and false- positive counts
i = np.argsort( theta[:,1], axis = 0 )[::-1]
tp, fp = np.cumsum( y_test[ i ] == 1 ), np.cumsum( y_test[ i ] == 0 )
## Plot the ROC curve
ax.plot( fp / np.sum( y_test == 0, dtype = np.float ), tp / np.sum( y_test == 1, dtype = np.float ), label = name )
A = np.tensordot( np.vstack( ( y_test, 1-y_test ) ).T, np.log( np.clip( theta, 1e-15, 1-1e-15 ) ), ( 0, 0 ) )
logloss = -np.sum( np.diag( A ) / y_main.shape[ 0 ] )
print name, logloss
ax.legend()
plt.show( )
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Define some helper functions
Step2: A handy procedure for converting an $(v_{ij})$ list into a sparse matrix.
Step3: Load the DBLP dataset, making a cached copy if required.
Step4: Now split the DBLP dataset in two periods
Step5: Reencode the vertices of the pre-2010 graph in a less wasteful format. Use sklearn's LabelEncoder() to this end.
Step6: Convert the edge list data into a sparse matrix
Step7: Find the vertices of the pre 2010 period that are in post-2010
Step8: Remove completely new vertices from post 2010 data
Step9: Map the post 2010 vertices to pre 2010 vertices and construct the adjacency matrix.
Step10: Leave only those edges in the post 2010 dataset, which had not existed during 2000-2010.
Step11: Eliminate duplicate edges and transform into a CSR format
Step12: Here we have two aligned symmetric adjacency matrices
Step13: All edges of the post2010 graph are included and considered to be positive examples
Step14: Now a slightly harder part is to generate an adequate number of negative examples, so that the final training sample would be balanced.
Step15: Compile the final training dataset.
Step16: So, finally, we got ourselves a training set of edges with 2
Step17: It turns out that at least two edge features can be constructed via a so-called "sandwich" matrix (its matrix form is sketched after this step list).
Step18: The next feature is the Adamic/Adar score
Step19: Yet another potential feature is the so-called personalized PageRank. Basically it is the same PageRank score, but with the ability to teleport only to a single node (the iteration is sketched after this step list).
Step20: Now the feature extractors themselves
Step21: Computing the features
Step22: Adamic/Adar metric
Step23: Common neighbours
Step24: Global Pagerank
Step25: Rooted (personalized) pagerank
Step26: Compute all-pairs shortest paths
Step27: Collect all features into a numpy matrix
Step28: Having computed all the features, lets make a subsample so that the classfification would run faster.
Step29: Attach SciKit's grid search and x-validation modules.
Step30: We are going to analyze many classifiers at once.
Step31: Logistic Regression
Step32: Linear and Quadratic Discriminant Analysis
Step33: Decision tree classifiers
Step34: Boosted tree (AdaBoost)
Step35: Simple tree
Step36: $k$-Nearest Neighbours
Step37: Support Vector Machine classification
Step38: Testing
Step39: Subsample the train dataset
Step40: Run tests
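In matrix form (a sketch read off the __sparse_sandwich and __sparse_pagerank helpers above; $A$ is the adjacency matrix, $\Gamma(w)$ the neighbour set of vertex $w$, and $d_w = |\Gamma(w)|$): the common-neighbour count is $\mathrm{CN}(u,v) = (A A^{\top})_{uv}$ and the Adamic/Adar score is $\mathrm{AA}(u,v) = (A\,W\,A^{\top})_{uv} = \sum_{w \in \Gamma(u) \cap \Gamma(v)} 1/\log d_w$ with $W = \mathrm{diag}(1/\log d_w)$. Ignoring the dangling-node correction, each PageRank power-iteration step is $\pi \leftarrow \beta\,\pi P + (1-\beta)\,e$ with $P = D^{-1}A$, where the teleport vector $e$ is uniform for the global score and concentrated on a single root vertex for the rooted (personalized) variant.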
|
6,604
|
<ASSISTANT_TASK:>
Python Code:
print "print out some values of the observation 'TOTAL'"
for name, person in data_dict.iteritems():
if name == 'TOTAL':
print person
salary = []
for name, person in data_dict.iteritems():
if float(person['salary']) > 0:
salary.append(float(person['salary']))
print "the sum of salary of all other persons is: ",np.sum(salary)/2
# Let's remove this TOTAL record.
data_dict.pop('TOTAL')
# There is a also a record which belongs to "THE TRAVEL AGENCY IN THE PARK".
# This is not a person and hence should be removed.
data_dict.pop("THE TRAVEL AGENCY IN THE PARK")
# No of records after removal of TOTAL & THE TRAVEL AGENCY IN THE PARK
print "No of records after removal of TOTAL: ", len(data_dict)
### Task 3: Create new feature(s)
### Store to my_dataset for easy export below.
my_dataset = data_dict
print "we create two new features here 'to_poi_message_ratio' and 'from_poi_message_ratio' "
for person in my_dataset.values():
person['to_poi_message_ratio'] = 0
person['from_poi_message_ratio'] = 0
if float(person['from_messages']) > 0:
person['to_poi_message_ratio'] = float(person['from_this_person_to_poi'])/float(person['from_messages'])
if float(person['to_messages']) > 0:
person['from_poi_message_ratio'] = float(person['from_poi_to_this_person'])/float(person['to_messages'])
features_list.extend(['to_poi_message_ratio', 'from_poi_message_ratio'])
### Extract features and labels from dataset for local testing
data = featureFormat(my_dataset, features_list)
labels, features = targetFeatureSplit(data)
### Task 4: Try a varity of classifiers
### Please name your classifier clf for easy export below.
### Note that if you want to do PCA or other multi-stage operations,
### you'll need to use Pipelines. For more info:
### http://scikit-learn.org/stable/modules/pipeline.html
# Provided to give you a starting point. Try a variety of classifiers.
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
clf = GaussianNB()
clf = DecisionTreeClassifier(min_samples_split=6, random_state=10)
test_classifier(clf, my_dataset, features_list)
#clf = ensemble.RandomForestClassifier(criterion='gini', n_estimators=14, max_depth=7,
# max_features=None, random_state=42, min_samples_split=1)
#clf = AdaBoostClassifier(algorithm='SAMME')
#params = dict(reduce_dim__n_components=[1, 2, 3], tree__min_samples_split=[2, 4, 6, 8 10])
#clf = GridSearchCV(clf, param_grid=params, n_jobs=-1, scoring='recall')
#test_classifier(clf, my_dataset, features_list)
### Task 5: Tune your classifier to achieve better than .3 precision and recall
### using our testing script. Check the tester.py script in the final project
### folder for details on the evaluation method, especially the test_classifier
### function. Because of the small size of the dataset, the script uses
### stratified shuffle split cross validation. For more info:
### http://scikit-learn.org/stable/modules/generated/sklearn.cross_validation.StratifiedShuffleSplit.html
# Example starting point. Try investigating other evaluation techniques!
from sklearn.cross_validation import train_test_split
features_train, features_test, labels_train, labels_test = \
train_test_split(features, labels, test_size=0.3, random_state=42)
### Task 6: Dump your classifier, dataset, and features_list so anyone can
### check your results. You do not need to change anything below, but make sure
### that the version of poi_id.py that you submit can be run on its own and
### generates the necessary .pkl files for validating your results.
dump_classifier_and_data(clf, my_dataset, features_list)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We see that the sum of all salaries matches the salary recorded against the "TOTAL" record in the dataset.
|
6,605
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
rides[:24*10].plot(x='dteday', y='cnt')
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
# Save the last 21 days
test_data = data[-21*24:]
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
# Hold out the last 60 days of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.input_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes**-0.5,
(self.output_nodes, self.hidden_nodes))
self.lr = learning_rate
# Hidden layer activation function is the sigmoid function f(x) = 1/(1 + exp(-x))
self.activation_function = lambda x: 1/ (1 + np.exp(-x))
self.activation_derivative = lambda x: x * (1 - x)
# Output layer activation function is f(x) = x
self.output_activation_function = lambda x: x
self.output_activation_derivative = lambda x: 1
def train(self, inputs_list, targets_list):
# Convert inputs list to 2d array
inputs = np.array(inputs_list, ndmin=2).T
targets = np.array(targets_list, ndmin = 2).T
### Forward pass ###
# signals into hidden layer
hidden_inputs = np.dot(self.weights_input_to_hidden, inputs)
# signals from hidden layer
hidden_outputs = self.activation_function(hidden_inputs)
# signals into final output layer
final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs)
# signals from final output layer
final_outputs = self.output_activation_function(final_inputs)
### Backward pass ###
# Output layer error is the difference between desired target and actual output.
error = targets - final_outputs
output_errors = error * self.output_activation_derivative(final_inputs)
# errors (back-)propagated to the hidden layer
hidden_errors = np.dot(output_errors, self.weights_hidden_to_output)
# hidden layer gradients
hidden_grad = self.activation_derivative(hidden_outputs)
# update hidden-to-output weights with gradient descent step
self.weights_hidden_to_output += self.lr * np.dot(output_errors, hidden_outputs.T)
# update input-to-hidden weights with gradient descent step
self.weights_input_to_hidden += self.lr * np.dot(hidden_errors.T * hidden_grad, inputs.T)
def run(self, inputs_list):
# Run a forward pass through the network
inputs = np.array(inputs_list, ndmin=2).T
# signals into hidden layer
hidden_inputs = np.dot(self.weights_input_to_hidden, inputs)
# signals from hidden layer
hidden_outputs = self.activation_function(hidden_inputs)
# signals into final output layer
final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs)
# signals from final output layer
final_outputs = self.output_activation_function(final_inputs)
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
import sys
### Set the hyperparameters here ###
epochs = 6000
learning_rate = 0.01
hidden_nodes = 28
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for e in range(epochs):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
for record, target in zip(train_features.ix[batch].values,
train_targets.ix[batch]['cnt']):
network.train(record, target)
# Printing out the training progress
train_loss = MSE(network.run(train_features), train_targets['cnt'].values)
val_loss = MSE(network.run(val_features), val_targets['cnt'].values)
sys.stdout.write("\rProgress: " + str(100 * e/float(epochs))[:4] \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
plt.ylim(ymax=0.5)
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features)*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.ix[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
import unittest
inputs = [0.5, -0.2, 0.1]
targets = [0.4]
test_w_i_h = np.array([[0.1, 0.4, -0.3],
[-0.2, 0.5, 0.2]])
test_w_h_o = np.array([[0.3, -0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328, -0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, 0.39775194, -0.29887597],
[-0.20185996, 0.50074398, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load and prepare the data
Step2: Checking out the data
Step3: Dummy variables
Step4: Scaling target variables
Step5: Splitting the data into training, testing, and validation sets
Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
Step7: Time to build the network
Step8: Training the network
Step9: Check out your predictions
Step10: Thinking about your results
|
6,606
|
<ASSISTANT_TASK:>
Python Code:
api_key = 'PASTE_ALCHEMY_API_KEY_HERE'
import requests
import os
import pandas as pd
from datetime import datetime
hacker_news_api_base_url = 'https://hacker-news.firebaseio.com/v0/'
hacker_news_feature_url_item = 'item/'
hacker_news_feature_url_topstories = 'topstories'
hacker_news_api_parameters = '.json?print=pretty'
def get_story_for_id(story_id):
''' Queries the Hacker News API for story information about for the given story_id. '''
story_request_url = hacker_news_api_base_url + hacker_news_feature_url_item + unicode(story_id) + hacker_news_api_parameters
story = requests.get(story_request_url).json()
return story
def get_story_details(story):
''' Filter relevant story information from the given Hacker News API story object. '''
# remove descendants from story (e.g. comments), because we don't use them
if 'kids' in story: del story['kids']
# encode text field content as ascii (work around IPython defect https://github.com/ipython/ipython/issues/6799)
if 'title' in story: story['title'] = story['title'].encode('ascii', 'ignore')
if 'text' in story: story['text'] = story['text'].encode('ascii', 'ignore')
if 'url' in story: story['url'] = story['url'].encode('ascii', 'ignore')
return story
def get_all_story_details(story_ids):
''' Queries Hacker News API for relevant story information for given list of story_ids. '''
all_story_details = []
for story_id in story_ids:
all_story_details.append(get_story_details(get_story_for_id(story_id)))
return all_story_details
current_top_500_stories_url = hacker_news_api_base_url + hacker_news_feature_url_topstories + hacker_news_api_parameters
current_top_500_stories = requests.get(current_top_500_stories_url).json()
current_top_500_stories
story_ids_file_name = 'hacker_news_story_ids.pickle'
def update_saved_story_ids(story_ids, story_ids_file_name):
''' Read story ids from disk, merge with given story_ids, and save back to disk. '''
file_story_ids = []
try:
file_story_ids = pd.read_pickle(story_ids_file_name)
except IOError as err:
# file for story ids does not yet exist, move on
pass
merged_story_ids = set(file_story_ids).union(set(story_ids))
pd.Series(list(merged_story_ids)).to_pickle(story_ids_file_name)
return merged_story_ids
story_ids_up_until_today = update_saved_story_ids(current_top_500_stories, story_ids_file_name)
len(story_ids_up_until_today)
all_story_details = get_all_story_details(list(story_ids_up_until_today))
# optionally, comment the first line and uncomment the two lines below to use a subset of stories to work with to reduce subsequent API requests against AlchemyAPI
# top_10_stories = list(story_ids_up_until_today)[0:10]
# all_story_details = get_all_story_details(top_10_stories)
stories_df = pd.DataFrame.from_dict(all_story_details)
stories_df.head(5)
alchemy_api_base_url = 'http://access.alchemyapi.com/calls/url/'
alchemy_api_parameters = '?apikey=' + api_key + '&outputMode=json&url='
alchemy_feature_url_concepts = "URLGetRankedConcepts"
def get_concepts_for_url(story_url, story_urls_and_concepts):
''' Query AlchemyAPI concept tagging for given url and add result to given story_urls_and_concepts dictionary. '''
if story_url in story_urls_and_concepts:
# attempt to get concepts for story url from disk
concepts = story_urls_and_concepts.get(story_url)
else:
# no concepts available on disk for story url, query AlchemyAPI for concepts and add save for future use
request_url = alchemy_api_base_url + alchemy_feature_url_concepts + alchemy_api_parameters + story_url
concepts = requests.get(request_url).json()
story_urls_and_concepts[story_url] = concepts
return concepts
story_urls_and_concepts = {}
test_url = 'http://www.cnn.com/2009/CRIME/01/13/missing.pilot/index.html'
get_concepts_for_url(test_url, story_urls_and_concepts)
story_urls_and_concepts_file_name = 'story_urls_and_concepts.pickle'
try:
story_urls_and_concepts = pd.read_pickle(story_urls_and_concepts_file_name)
except IOError as err:
# file for story urls and concepts does not yet exist, move on
story_urls_and_concepts = {}
pass
def get_concepts_for_id(story_id, all_concepts_dicts, story_urls_and_concepts):
''' Extracts concepts for given story_id and aggregates story popularity information. '''
print "Querying concepts for story " + unicode(story_id) + "..."
request_url = hacker_news_api_base_url + hacker_news_feature_url_item + unicode(story_id) + hacker_news_api_parameters
print(request_url)
story = requests.get(request_url).json()
# ignore "Ask HN" and job posts, only consider actual stories
if story.get('type') == 'story':
# make sure story has url that links to article
if story.get('url') is not None:
# extract concepts using AlchemyAPI
concept_result = get_concepts_for_url(story.get('url'), story_urls_and_concepts)
if concept_result['status'] == 'OK':
concepts = concept_result.get('concepts')
for concept in concepts:
# check, if we previously encountered the concept in another article
concept_dict = {}
concept_text = concept.get('text')
# ignore concepts with low score
if (float(concept.get('relevance')) > 0.6):
concept_dict['occurs'] = 1
concept_dict['relevance'] = concept.get('relevance')
concept_dict['ids'] = [story_id]
concept_dict['score'] = story.get('score')
concept_dict['descendants'] = story.get('descendants')
concept_dict['links'] = [story.get('url')]
if concept_text in all_concepts_dicts:
# merge additional concept info with already existing concept info
# add up the scores and number of descendants by concept
already_existing_concept = all_concepts_dicts.get(concept_text)
already_existing_concept['occurs'] = already_existing_concept['occurs'] + 1
already_existing_concept['score'] = already_existing_concept['score'] + story.get('score')
already_existing_concept['descendants'] = already_existing_concept['descendants'] + story.get('descendants')
already_existing_concept['links'] = already_existing_concept['links'] + concept_dict['links']
already_existing_concept['ids'] = already_existing_concept['ids'] + concept_dict['ids']
else:
all_concepts_dicts[concept_text] = concept_dict
return all_concepts_dicts
all_concepts_dicts = {}
test_story_id = 9226497
all_concepts_dicts = get_concepts_for_id(test_story_id, all_concepts_dicts, story_urls_and_concepts)
print all_concepts_dicts
len(story_urls_and_concepts)
len(stories_df)
story_counter = 1
for story_id in story_ids_up_until_today:
# optionally, comment the line above and uncomment the line below to limit requests to 10 stories
# for story_id in top_10_stories:
all_concepts_dicts = get_concepts_for_id(story_id, all_concepts_dicts, story_urls_and_concepts)
print 'Done. ' + unicode(story_counter) + ' stories queried.'
story_counter = story_counter + 1
import pickle
with open(story_urls_and_concepts_file_name, 'wb') as story_urls_and_concepts_file:
pickle.dump(story_urls_and_concepts, story_urls_and_concepts_file)
all_concepts_df = pd.DataFrame.from_dict(all_concepts_dicts, orient='index')
all_concepts_sorted_by_score_df = all_concepts_df.sort(columns='score', ascending=False)
all_concepts_sorted_by_score_df
all_concepts_sorted_by_descendants_df = all_concepts_df.sort(columns='descendants', ascending=False)
all_concepts_sorted_by_descendants_df
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import required Python libraries
Step2: Query the current 500 most popular Hacker News stories
Step3: Define Hacker News API helper functions
Step4: Query stories
Step5: Take a look at what we got to make sure we have a list of story ids
Step6: Save the current 500 most popular Hacker News stories
Step7: Query story details
Step8: Now, query the details and show a sample of the first five stories (If you happen to hit JSON errors, try running the cell again, as these seem to happen intermittently)
Step9: Tag concepts of Hacker News stories using AlchemyAPI
Step10: Define AlchemyAPI helper functions
Step11: Let's test the function by running it against a test_url pointing to an article from cnn.com
Step12: You should see a JSON document containing a list of concepts extracted from the website at the given url. Each concept is identified by its text and is assigned a relevance score, which measures how confident AlchemyAPI is that the website is talking about this concept. Based on the identified concept, the JSON also contains links to the publicly available knowledge bases DBPedia and Yago. Feel free to test AlchemyAPI concept tagging for articles that you're interested in by replacing the test_url.
Step13: Now that we can extract concepts for articles at a given url we need to extract the concepts for a Hacker News story_id. For each identified concept we need to keep track how often and from what story it was extracted. We need to aggregate popularity measures like score and number of descendants from the stories.
Step14: Let's test the get_concepts_for_id helper function by providing it a valid Hacker News story id
Step15: You should see a dictionary of concepts with links to stories, score and descendant information. Feel free to enter different test_story_ids.
Step16: Save the story_urls_and_concepts dictionary to disk for future use. The dictionary was created and updated while iterating through all story_ids and is valuable at this point because it contains the concepts returned by AlchemyAPI, which only allows a limited number of requests per day. Without saving story_urls_and_concepts to disk, we would hit the 1,000-requests-per-day limit after just a few days of collecting story ids.
Step17: Evaluate the result
Step18: Most discussed stories by number of comments
|
6,607
|
<ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
import numpy as np
print(tf.__version__)
a = 1.
b = 2.
c = a + b
print(c)
a = tf.constant(1.)
b = tf.constant(2.)
c = tf.add(a, b)
print(c)
with tf.Session() as sess:
result = sess.run(c)
print(result)
a = np.array([5, 3, 8])
b = np.array([3, -1, 2])
c = np.add(a, b)
print(c)
a = tf.constant([5, 3, 8])
b = tf.constant([3, -1, 2])
c = tf.add(a, b)
print(c)
with tf.Session() as sess:
result = sess.run(c)
print(result)
a = tf.placeholder(dtype=tf.int32, shape=(None,))
b = tf.placeholder(dtype=tf.int32, shape=(None,))
c = tf.add(a, b)
with tf.Session() as sess:
result1 = sess.run(c, feed_dict={a: [3, 4, 5], b: [-1, 2, 3]})
result2 = sess.run(c, feed_dict={a: [1, 2, 3], b: [3, 2, 1]})
print(result1)
print(result2)
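# Illustrative extra (not in the original notebook): because a and b were declared with
# shape (None,), the same graph also accepts vectors of a different length.
with tf.Session() as sess:
    result3 = sess.run(c, feed_dict={a: [1, 2, 3, 4, 5], b: [5, 4, 3, 2, 1]})
    print(result3)  # [6 6 6 6 6]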
v = tf.Variable([1, 2])
assign_op = tf.assign(v, [2, 3])
init_op = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init_op)
print(sess.run(v))
sess.run(assign_op)
print(sess.run(v))
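# Illustrative extra (assumed, not part of the original notebook): variables can also be
# updated incrementally, e.g. as a step counter, using tf.assign_add.
counter = tf.Variable(0, name='counter')
increment_op = tf.assign_add(counter, 1)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(3):
        sess.run(increment_op)
    print(sess.run(counter))  # -> 3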
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Hello TensorFlow
Step2: As expected, the answer 3.0 is printed.
Step3: Notice that a Tensor object is displayed instead of a value.
Step4: Matrix operations
Step5: TensorFlow
Step6: TensorFlow + placeholder
Step7: A typical usage pattern is to define the data fed in during training as placeholders.
Step8: tf.Variable must be initialized first, so we run init_op.
|
6,608
|
<ASSISTANT_TASK:>
Python Code:
G = cf.load_seventh_grader_network()
len(G.nodes())
# Who is represented in the network?
list(G.nodes())[0:5]
len(G.nodes())
# len(G)
# Who is connected to who in the network?
# list(G.edges())[0:5]
list(G.edges())[0:5]
len(G.edges())
# Let's get a list of nodes with their attributes.
list(G.nodes(data=True))[0:5]
# G.nodes(data=True)
# NetworkX will return a list of tuples in the form (node_id, attribute_dictionary)
from collections import Counter
mf_counts = Counter([d['gender']
for n, d in G.nodes(data=True)])
def test_answer(mf_counts):
assert mf_counts['female'] == 17
assert mf_counts['male'] == 12
test_answer(mf_counts)
list(G.edges(data=True))[0:5]
# Answer
counts = [d['count'] for n1, n2, d in G.edges(data=True)]
maxcount = max(counts)
def test_maxcount(maxcount):
assert maxcount == 3
test_maxcount(maxcount)
# Answer
G.add_node(30, gender='male')
G.add_node(31, gender='female')
# This network is directed, so each friendship is added in both directions explicitly
# (in an undirected graph the reverse edge would be redundant).
G.add_edge(30, 31, count=3)
G.add_edge(31, 30, count=3)
G.add_edge(30, 7, count=3)
G.add_edge(7, 30, count=3)
G.add_edge(31, 7, count=3)
G.add_edge(7, 31, count=3)
def test_graph_integrity(G):
assert 30 in G.nodes()
assert 31 in G.nodes()
assert G.nodes[30]['gender'] == 'male'
assert G.nodes[31]['gender'] == 'female'
assert G.has_edge(30, 31)
assert G.has_edge(30, 7)
assert G.has_edge(31, 7)
assert G.edges[30, 7]['count'] == 3
assert G.edges[7, 30]['count'] == 3
assert G.edges[31, 7]['count'] == 3
assert G.edges[7, 31]['count'] == 3
assert G.edges[30, 31]['count'] == 3
assert G.edges[31, 30]['count'] == 3
print('All tests passed.')
test_graph_integrity(G)
unrequitted_friendships = []
for n1, n2 in G.edges():
if not G.has_edge(n2, n1):
unrequitted_friendships.append((n1, n2))
assert len(unrequitted_friendships) == 124
len([(n1, n2) for n1, n2 in G.edges() if not G.has_edge(n2, n1)])
links = ((n1, n2) for n1, n2, d in G.edges(data=True))
reverse_links = ((n2, n1) for n1, n2, d in G.edges(data=True))
len(list(set(links) - set(reverse_links)))
nx.draw(G)
nx.draw(G, with_labels=True)
from nxviz import MatrixPlot
m = MatrixPlot(G)
m.draw()
plt.show()
from nxviz import ArcPlot
a = ArcPlot(G, node_color='gender', node_grouping='gender')
a.draw()
from nxviz import CircosPlot
c = CircosPlot(G, node_color='gender', node_grouping='gender')
c.draw()
plt.savefig('images/seventh.png', dpi=300)
from hiveplot import HivePlot
nodes = dict()
nodes['male'] = [n for n,d in G.nodes(data=True) if d['gender'] == 'male']
nodes['female'] = [n for n,d in G.nodes(data=True) if d['gender'] == 'female']
edges = dict()
edges['group1'] = G.edges(data=True)
nodes_cmap = dict()
nodes_cmap['male'] = 'blue'
nodes_cmap['female'] = 'red'
edges_cmap = dict()
edges_cmap['group1'] = 'black'
h = HivePlot(nodes, edges, nodes_cmap, edges_cmap)
h.draw()
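# Illustrative extra (not part of the original exercise): a quick look at who receives the
# most nominations overall, by summing the 'count' attribute over each student's incoming edges.
from collections import Counter
incoming = Counter()
for n1, n2, d in G.edges(data=True):
    incoming[n2] += d['count']
incoming.most_common(5)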
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Basic Network Statistics
Step2: Exercise
Step3: Let's now figure out who is connected to who in the network
Step4: Exercise
Step5: Concept
Step6: Exercise
Step7: Edges can also store attributes in their attribute dictionary.
Step8: In this synthetic social network, the number of times the left student indicated that the right student was their favourite is stored in the "count" variable.
Step9: Exercise
Step10: Verify that you have added in the edges and nodes correctly by running the following cell.
Step11: Exercise (break-time)
Step12: In a previous session at ODSC East 2018, a few other class participants provided the following solutions.
Step13: This one by @end0 is a unique one involving sets.
Step14: Tests
Step15: If the network is small enough to visualize, and the node labels are small enough to fit in a circle, then you can use the with_labels=True argument.
Step16: However, note that if the number of nodes in the graph gets really large, node-link diagrams can begin to look like massive hairballs. This is undesirable for graph visualization.
Step17: Arc Plot
Step18: Circos Plot
Step19: This visualization helps us highlight nodes that are poorly connected, and others that are strongly connected.
|
6,609
|
<ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
# TODO: Implement Function
return None, None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
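# One possible sketch (illustrative, not the graded solution). Assumption: the target
# vocabulary contains an '<EOS>' token that should be appended to every target sentence.
def text_to_ids_sketch(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
    source_id_text = [[source_vocab_to_int[word] for word in line.split()]
                      for line in source_text.split('\n')]
    target_id_text = [[target_vocab_to_int[word] for word in line.split()] + [target_vocab_to_int['<EOS>']]
                      for line in target_text.split('\n')]
    return source_id_text, target_id_text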
DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
import problem_unittests as tests
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
from tensorflow.python.layers.core import Dense
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
def model_inputs():
Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.
:return: Tuple (input, targets, learning rate, keep probability, target sequence length,
max target sequence length, source sequence length)
# TODO: Implement Function
return None, None, None, None, None, None, None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
def process_decoder_input(target_data, target_vocab_to_int, batch_size):
Preprocess target data for encoding
:param target_data: Target Placehoder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
# TODO: Implement Function
return None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_encoding_input(process_decoder_input)
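# Illustrative sketch (assumes a '<GO>' token exists in target_vocab_to_int, like the other
# special tokens): drop the last token of each target sequence and prepend the GO id.
def process_decoder_input_sketch(target_data, target_vocab_to_int, batch_size):
    ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
    return tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1)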
from imp import reload
reload(tests)
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size,
encoding_embedding_size):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:param source_sequence_length: a list of the lengths of each sequence in the batch
:param source_vocab_size: vocabulary size of source data
:param encoding_embedding_size: embedding size of source data
:return: tuple (RNN output, RNN state)
# TODO: Implement Function
return None, None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
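# Illustrative sketch of one way to build the encoder with TF 1.x contrib APIs
# (embed_sequence, LSTMCell, DropoutWrapper, MultiRNNCell); not the graded solution.
def encoding_layer_sketch(rnn_inputs, rnn_size, num_layers, keep_prob,
                          source_sequence_length, source_vocab_size,
                          encoding_embedding_size):
    embed = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size)
    def make_cell():
        cell = tf.contrib.rnn.LSTMCell(rnn_size)
        return tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)
    stacked_cell = tf.contrib.rnn.MultiRNNCell([make_cell() for _ in range(num_layers)])
    outputs, state = tf.nn.dynamic_rnn(stacked_cell, embed,
                                       sequence_length=source_sequence_length, dtype=tf.float32)
    return outputs, state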
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
target_sequence_length, max_summary_length,
output_layer, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_summary_length: The length of the longest sequence in the batch
:param output_layer: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing training logits and sample_id
# TODO: Implement Function
return None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
end_of_sequence_id, max_target_sequence_length,
vocab_size, output_layer, batch_size, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param max_target_sequence_length: Maximum length of target sequences
:param vocab_size: Size of decoder/target vocabulary
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_layer: Function to apply the output layer
:param batch_size: Batch size
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing inference logits and sample_id
# TODO: Implement Function
return None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
def decoding_layer(dec_input, encoder_state,
target_sequence_length, max_target_sequence_length,
rnn_size,
num_layers, target_vocab_to_int, target_vocab_size,
batch_size, keep_prob, decoding_embedding_size):
Create decoding layer
:param dec_input: Decoder input
:param encoder_state: Encoder state
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_target_sequence_length: Maximum length of target sequences
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param target_vocab_size: Size of target vocabulary
:param batch_size: The size of the batch
:param keep_prob: Dropout keep probability
:param decoding_embedding_size: Decoding embedding size
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
# TODO: Implement Function
return None, None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
def seq2seq_model(input_data, target_data, keep_prob, batch_size,
source_sequence_length, target_sequence_length,
max_target_sentence_length,
source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size,
rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param source_sequence_length: Sequence Lengths of source sequences in the batch
:param target_sequence_length: Sequence Lengths of target sequences in the batch
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Encoder embedding size
:param dec_embedding_size: Decoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
# TODO: Implement Function
return None, None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
# Number of Epochs
epochs = None
# Batch Size
batch_size = None
# RNN Size
rnn_size = None
# Number of Layers
num_layers = None
# Embedding Size
encoding_embedding_size = None
decoding_embedding_size = None
# Learning Rate
learning_rate = None
# Dropout Keep Probability
keep_probability = None
display_step = None
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_target_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs()
#sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]),
targets,
keep_prob,
batch_size,
source_sequence_length,
target_sequence_length,
max_target_sequence_length,
len(source_vocab_to_int),
len(target_vocab_to_int),
encoding_embedding_size,
decoding_embedding_size,
rnn_size,
num_layers,
target_vocab_to_int)
training_logits = tf.identity(train_logits.rnn_output, name='logits')
inference_logits = tf.identity(inference_logits.sample_id, name='predictions')
masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
training_logits,
targets,
masks)
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
DON'T MODIFY ANYTHING IN THIS CELL
def pad_sentence_batch(sentence_batch, pad_int):
Pad sentences with <PAD> so that each sentence of a batch has the same length
max_sentence = max([len(sentence) for sentence in sentence_batch])
return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]
def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):
Batch targets, sources, and the lengths of their sentences together
for batch_i in range(0, len(sources)//batch_size):
start_i = batch_i * batch_size
# Slice the right amount for the batch
sources_batch = sources[start_i:start_i + batch_size]
targets_batch = targets[start_i:start_i + batch_size]
# Pad
pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))
pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))
# Need the lengths for the _lengths parameters
pad_targets_lengths = []
for target in pad_targets_batch:
pad_targets_lengths.append(len(target))
pad_source_lengths = []
for source in pad_sources_batch:
pad_source_lengths.append(len(source))
yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths
DON'T MODIFY ANYTHING IN THIS CELL
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1])],
'constant')
return np.mean(np.equal(target, logits))
# Split data to training and validation sets
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = source_int_text[:batch_size]
valid_target = target_int_text[:batch_size]
(valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source,
valid_target,
batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>']))
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate(
get_batches(train_source, train_target, batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>'])):
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
target_sequence_length: targets_lengths,
source_sequence_length: sources_lengths,
keep_prob: keep_probability})
if batch_i % display_step == 0 and batch_i > 0:
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch,
source_sequence_length: sources_lengths,
target_sequence_length: targets_lengths,
keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_sources_batch,
source_sequence_length: valid_sources_lengths,
target_sequence_length: valid_targets_lengths,
keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits)
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
# TODO: Implement Function
return None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
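# Illustrative sketch (assumes '<UNK>' is one of the special tokens in vocab_to_int):
# lower-case the sentence, split on whitespace, and fall back to the UNK id for unseen words.
def sentence_to_seq_sketch(sentence, vocab_to_int):
    return [vocab_to_int.get(word, vocab_to_int['<UNK>']) for word in sentence.lower().split()]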
translate_sentence = 'he saw a old yellow truck .'
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('predictions:0')
target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')
source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,
target_sequence_length: [len(translate_sentence)*2]*batch_size,
source_sequence_length: [len(translate_sentence)]*batch_size,
keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in translate_logits]))
print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits])))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Language Translation
Step3: Explore the Data
Step6: Implement Preprocessing Function
Step8: Preprocess all the data and save it
Step10: Check Point
Step12: Check the Version of TensorFlow and Access to GPU
Step15: Build the Neural Network
Step18: Process Decoder Input
Step21: Encoding
Step24: Decoding - Training
Step27: Decoding - Inference
Step30: Build the Decoding Layer
Step33: Build the Neural Network
Step34: Neural Network Training
Step36: Build the Graph
Step40: Batch and pad the source and target sequences
Step43: Train
Step45: Save Parameters
Step47: Checkpoint
Step50: Sentence to Sequence
Step52: Translate
|
6,610
|
<ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
x = tf.placeholder(tf.float32, shape=[None, 784])
y_ = tf.placeholder(tf.float32, shape=[None, 10])
W = tf.Variable(tf.zeros([784,10]))
b = tf.Variable(tf.zeros([10]))
y = tf.matmul(x,W) + b
cross_entropy = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
pred = tf.argmax(y,axis=1)
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)
sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
for _ in range(1000):
batch = mnist.train.next_batch(100)
train_step.run(feed_dict={x: batch[0], y_: batch[1]})
%matplotlib inline
import matplotlib.pyplot as plt
number = mnist.train.next_batch(1)[0]
plt.imshow(number.reshape(28,28))
sess.run(pred,feed_dict={x: number})[0]
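# Illustrative extra (not in the original notebook): fetch an image together with its label
# so the prediction can be compared against the ground truth.
images, labels = mnist.train.next_batch(1)
plt.imshow(images[0].reshape(28, 28))
print('prediction:', sess.run(pred, feed_dict={x: images})[0],
      'truth:', labels[0].argmax())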
EXPORT_PATH = "/tmp/models"
VERSION=1
from tensorflow.contrib.session_bundle import exporter
saver = tf.train.Saver(sharded=True)
model_exporter = exporter.Exporter(saver)
model_exporter.init(
sess.graph.as_graph_def(),
named_graph_signatures={
'inputs': exporter.generic_signature({'x': x}),
'outputs': exporter.generic_signature({'pred': pred})})
model_exporter.export(EXPORT_PATH, tf.constant(VERSION), sess)
!ls -lhR /tmp/models
!tail -n2 /tmp/models/model.log
import numpy as np
import cPickle as pickle
import requests
def test_flask_client(x):
URL = "http://localhost:8915/model_prediction"
s = pickle.dumps({"x":x}, protocol=0)
DATA = {"model_name": "default",
"input": requests.utils.quote(s)}
r = requests.get(URL, data=DATA)
return r.json()
%matplotlib inline
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)
number = mnist.train.next_batch(1)[0]
plt.imshow(number.reshape(28,28))
test_flask_client(number)
int(test_flask_client(number)["outputs"]["pred"]["int64Val"][0])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Additionally, we declare the pred op, which computes the actual prediction (the index of the most likely class).
Step2: We download training examples and train the model.
Step3: Let's make sure that everything is working as expected. We have our number.
Step4: Make sure that our model can correctly predict it
Step5: Now we want to save this model, and serve it with TensorFlow Serving. We define the path where we store the weights and the model version.
Step6: And here we are saving the actual weights.
Step7: Let's make sure the weights were saved correctly
Step8: Services
Step9: REST request
Step10: Let's make sure our training data still makes sense
Step11: Running prediction
Step12: It's easy to extract the actual prediction value from here
|
6,611
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cnrm-cerfacs', 'sandbox-3', 'land')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
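# Illustrative example only (commented out, values depend on the model being documented):
# for multi-valued enumerations each selected choice is passed to DOC.set_value in turn, e.g.
# DOC.set_value("water")
# DOC.set_value("energy")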
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Description
Step7: 1.4. Land Atmosphere Flux Exchanges
Step8: 1.5. Atmospheric Coupling Treatment
Step9: 1.6. Land Cover
Step10: 1.7. Land Cover Change
Step11: 1.8. Tiling
Step12: 2. Key Properties --> Conservation Properties
Step13: 2.2. Water
Step14: 2.3. Carbon
Step15: 3. Key Properties --> Timestepping Framework
Step16: 3.2. Time Step
Step17: 3.3. Timestepping Method
Step18: 4. Key Properties --> Software Properties
Step19: 4.2. Code Version
Step20: 4.3. Code Languages
Step21: 5. Grid
Step22: 6. Grid --> Horizontal
Step23: 6.2. Matches Atmosphere Grid
Step24: 7. Grid --> Vertical
Step25: 7.2. Total Depth
Step26: 8. Soil
Step27: 8.2. Heat Water Coupling
Step28: 8.3. Number Of Soil layers
Step29: 8.4. Prognostic Variables
Step30: 9. Soil --> Soil Map
Step31: 9.2. Structure
Step32: 9.3. Texture
Step33: 9.4. Organic Matter
Step34: 9.5. Albedo
Step35: 9.6. Water Table
Step36: 9.7. Continuously Varying Soil Depth
Step37: 9.8. Soil Depth
Step38: 10. Soil --> Snow Free Albedo
Step39: 10.2. Functions
Step40: 10.3. Direct Diffuse
Step41: 10.4. Number Of Wavelength Bands
Step42: 11. Soil --> Hydrology
Step43: 11.2. Time Step
Step44: 11.3. Tiling
Step45: 11.4. Vertical Discretisation
Step46: 11.5. Number Of Ground Water Layers
Step47: 11.6. Lateral Connectivity
Step48: 11.7. Method
Step49: 12. Soil --> Hydrology --> Freezing
Step50: 12.2. Ice Storage Method
Step51: 12.3. Permafrost
Step52: 13. Soil --> Hydrology --> Drainage
Step53: 13.2. Types
Step54: 14. Soil --> Heat Treatment
Step55: 14.2. Time Step
Step56: 14.3. Tiling
Step57: 14.4. Vertical Discretisation
Step58: 14.5. Heat Storage
Step59: 14.6. Processes
Step60: 15. Snow
Step61: 15.2. Tiling
Step62: 15.3. Number Of Snow Layers
Step63: 15.4. Density
Step64: 15.5. Water Equivalent
Step65: 15.6. Heat Content
Step66: 15.7. Temperature
Step67: 15.8. Liquid Water Content
Step68: 15.9. Snow Cover Fractions
Step69: 15.10. Processes
Step70: 15.11. Prognostic Variables
Step71: 16. Snow --> Snow Albedo
Step72: 16.2. Functions
Step73: 17. Vegetation
Step74: 17.2. Time Step
Step75: 17.3. Dynamic Vegetation
Step76: 17.4. Tiling
Step77: 17.5. Vegetation Representation
Step78: 17.6. Vegetation Types
Step79: 17.7. Biome Types
Step80: 17.8. Vegetation Time Variation
Step81: 17.9. Vegetation Map
Step82: 17.10. Interception
Step83: 17.11. Phenology
Step84: 17.12. Phenology Description
Step85: 17.13. Leaf Area Index
Step86: 17.14. Leaf Area Index Description
Step87: 17.15. Biomass
Step88: 17.16. Biomass Description
Step89: 17.17. Biogeography
Step90: 17.18. Biogeography Description
Step91: 17.19. Stomatal Resistance
Step92: 17.20. Stomatal Resistance Description
Step93: 17.21. Prognostic Variables
Step94: 18. Energy Balance
Step95: 18.2. Tiling
Step96: 18.3. Number Of Surface Temperatures
Step97: 18.4. Evaporation
Step98: 18.5. Processes
Step99: 19. Carbon Cycle
Step100: 19.2. Tiling
Step101: 19.3. Time Step
Step102: 19.4. Anthropogenic Carbon
Step103: 19.5. Prognostic Variables
Step104: 20. Carbon Cycle --> Vegetation
Step105: 20.2. Carbon Pools
Step106: 20.3. Forest Stand Dynamics
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
Step109: 22.2. Growth Respiration
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
Step111: 23.2. Allocation Bins
Step112: 23.3. Allocation Fractions
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
Step115: 26. Carbon Cycle --> Litter
Step116: 26.2. Carbon Pools
Step117: 26.3. Decomposition
Step118: 26.4. Method
Step119: 27. Carbon Cycle --> Soil
Step120: 27.2. Carbon Pools
Step121: 27.3. Decomposition
Step122: 27.4. Method
Step123: 28. Carbon Cycle --> Permafrost Carbon
Step124: 28.2. Emitted Greenhouse Gases
Step125: 28.3. Decomposition
Step126: 28.4. Impact On Soil Properties
Step127: 29. Nitrogen Cycle
Step128: 29.2. Tiling
Step129: 29.3. Time Step
Step130: 29.4. Prognostic Variables
Step131: 30. River Routing
Step132: 30.2. Tiling
Step133: 30.3. Time Step
Step134: 30.4. Grid Inherited From Land Surface
Step135: 30.5. Grid Description
Step136: 30.6. Number Of Reservoirs
Step137: 30.7. Water Re Evaporation
Step138: 30.8. Coupled To Atmosphere
Step139: 30.9. Coupled To Land
Step140: 30.10. Quantities Exchanged With Atmosphere
Step141: 30.11. Basin Flow Direction Map
Step142: 30.12. Flooding
Step143: 30.13. Prognostic Variables
Step144: 31. River Routing --> Oceanic Discharge
Step145: 31.2. Quantities Transported
Step146: 32. Lakes
Step147: 32.2. Coupling With Rivers
Step148: 32.3. Time Step
Step149: 32.4. Quantities Exchanged With Rivers
Step150: 32.5. Vertical Grid
Step151: 32.6. Prognostic Variables
Step152: 33. Lakes --> Method
Step153: 33.2. Albedo
Step154: 33.3. Dynamics
Step155: 33.4. Dynamic Lake Extent
Step156: 33.5. Endorheic Basins
Step157: 34. Lakes --> Wetlands
|
6,612
|
<ASSISTANT_TASK:>
Python Code:
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
from __future__ import print_function
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [5e4, 5e5, 5e6]
results = {}
best_val = -1
best_svm = None
pass
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
pass
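# A possible sketch of the search (an assumption, not the official solution); it
# assumes LinearSVM exposes train()/predict() with the signature used earlier in
# the assignment (learning_rate, reg, num_iters).
for lr in learning_rates:
    for reg in regularization_strengths:
        svm = LinearSVM()
        svm.train(X_train_feats, y_train, learning_rate=lr, reg=reg, num_iters=1500)
        train_accuracy = np.mean(svm.predict(X_train_feats) == y_train)
        val_accuracy = np.mean(svm.predict(X_val_feats) == y_val)
        results[(lr, reg)] = (train_accuracy, val_accuracy)
        if val_accuracy > best_val:
            best_val = val_accuracy
            best_svm = svm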
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
pass
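# A possible sketch (an assumption, not the official solution); it assumes
# TwoLayerNet exposes train()/predict() with the signature used earlier in the
# assignment (learning_rate, learning_rate_decay, reg, num_iters, batch_size).
best_val_acc = -1
for lr in [1e-1, 5e-1, 1.0]:
    for reg in [1e-4, 1e-3, 1e-2]:
        candidate = TwoLayerNet(input_dim, hidden_dim, num_classes)
        candidate.train(X_train_feats, y_train, X_val_feats, y_val,
                        num_iters=1500, batch_size=200,
                        learning_rate=lr, learning_rate_decay=0.95, reg=reg)
        val_acc = (candidate.predict(X_val_feats) == y_val).mean()
        if val_acc > best_val_acc:
            best_val_acc = val_acc
            best_net = candidate
            net = candidate  # the test cell below evaluates `net`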
################################################################################
# END OF YOUR CODE #
################################################################################
# Run your neural net classifier on the test set. You should be able to
# get more than 55% accuracy.
test_acc = (net.predict(X_test_feats) == y_test).mean()
print(test_acc)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load data
Step2: Extract Features
Step3: Train SVM on features
Step4: Inline question 1
|
6,613
|
<ASSISTANT_TASK:>
Python Code:
class Ball:
def __init__(self, start_position, start_velocity, radius):
self.position = start_position
self.velocity = start_velocity
self.radius = radius
def move(self, time_step):
self.position[0] += self.velocity[0] * time_step
self.position[1] += self.velocity[1] * time_step
self.velocity[0] += 0
self.velocity[1] -= 9.81 * time_step
if self.position[1] <= 0.0:
self.velocity[1] *= -1
tennis_ball = Ball([0, 1], [0.2, 0], 0.033)
time_step = 0.04
number_of_steps = 1500
trajectory_numerical = [] # this is an empty list
trajectory_time = [] # also an empty list to store the time
for step in range(number_of_steps):
trajectory_numerical.append(tennis_ball.position[:]) # append adds something at the end of a list
trajectory_time.append(step*time_step)
if tennis_ball.position[1] <= 0.0: break # stop the calculation here
tennis_ball.move(time_step)
# don't worry about the code below too much; it just creates an animation
# of your calculated positions.
from animation import animate
%matplotlib notebook
ani = animate(trajectory_numerical, time_step*100/0.04)
# Initialisation
start_position=[0, 1] # start position of the ball (as in the numeric example)
start_velocity=[0.2, 0] # start velocity of the ball (as in the numeric example)
# Analytical solution
trajectory_analytical = [] # an empty list to store the analytic results
for point_in_time in trajectory_time: # iterate of the timepoints
position_x = start_position[0] + start_velocity[0] * point_in_time # fill in here
position_y = start_position[1] - 0.5 * 9.81 * point_in_time**2 # fill in here
trajectory_analytical.append([position_x,position_y]) # update trajectory
# Error calculation
error = []
error_x_sq=error_y_sq=0
for i in range(len(trajectory_analytical)):
diff_x=trajectory_analytical[i][0]-trajectory_numerical[i][0]
diff_y=trajectory_analytical[i][1]-trajectory_numerical[i][1]
error_x_sq+=diff_x**2
error_y_sq+=diff_y**2
error.append([error_x_sq,error_y_sq])
# Plotting
from animation import plot
%matplotlib inline
plot(trajectory_time, trajectory_numerical,trajectory_analytical,error)
class Ball:
def __init__(self, start_position, start_velocity, radius):
self.position = start_position
self.velocity = start_velocity
self.radius = radius
def move(self, time_step):
self.position[0] += self.velocity[0] * time_step
self.position[1] += self.velocity[1] * time_step - 9.81 / 2.0 * time_step**2
self.velocity[0] += 0
self.velocity[1] -= 9.81 * time_step
if self.position[1] <= 0.0:
self.velocity[1] *= -1
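# Note: this second version of Ball.move uses the second-order position update
# y += v*dt - (g/2)*dt**2 (the next term of the Taylor expansion), which is why its
# trajectory follows the analytical solution more closely than the first version above.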
tennis_ball = Ball([0, 1], [0.2, 0], 0.033)
time_step = 0.04
number_of_steps = 1500
trajectory_numerical_2nd = [] # this is an empty list
trajectory_time = [] # also an empty list to store the time
for step in range(number_of_steps):
trajectory_numerical_2nd.append(tennis_ball.position[:]) # append adds something at the end of a list
trajectory_time.append(step*time_step)
if tennis_ball.position[1] <= 0.0: break # stop the calculation here
tennis_ball.move(time_step)
# don't worry about the code below too much; it just creates an animation
# of your calculated positions.
from animation import animate
%matplotlib notebook
#ani = animate(trajectory_numerical, time_step*100/0.04)
trajectory_analytical = [] # an empty list to store the analytic results
start_position=[0, 1]
start_velocity=[0.2, 0]
for point_in_time in trajectory_time:
position_x = start_position[0] + start_velocity[0] * point_in_time
position_y = start_position[1] - 0.5 * 9.81 * point_in_time**2
trajectory_analytical.append([position_x,position_y])
error = []
error_x_sq=error_y_sq=0
for i in range(len(trajectory_analytical)):
diff_x=trajectory_analytical[i][0]-trajectory_numerical_2nd[i][0]
diff_y=trajectory_analytical[i][1]-trajectory_numerical_2nd[i][1]
error_x_sq+=diff_x**2
error_y_sq+=diff_y**2
error.append([error_x_sq,error_y_sq])
from animation import plot
%matplotlib inline
plot(trajectory_time, trajectory_numerical_2nd,trajectory_analytical,error)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Analytical solution
Step2: As you can see there is a difference between the numerical and analytical solution.
|
6,614
|
<ASSISTANT_TASK:>
Python Code:
import os
import glob
import itertools
import nestly
%load_ext rpy2.ipython
%load_ext pushnote
%%R
library(ggplot2)
library(dplyr)
library(tidyr)
library(gridExtra)
## min G+C cutoff
min_GC = 13.5
## max G+C cutoff
max_GC = 80
## max 13C shift in BD
max_13C_shift_in_BD = 0.036
min_BD = min_GC/100.0 * 0.098 + 1.66
max_BD = max_GC/100.0 * 0.098 + 1.66
max_BD = max_BD + max_13C_shift_in_BD
print 'Min BD: {}'.format(min_BD)
print 'Max BD: {}'.format(max_BD)
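# Worked check of the GC-to-BD conversion used above (buoyant density in g/ml):
# min_BD = 13.5/100 * 0.098 + 1.66 ~ 1.673
# max_BD = 80/100 * 0.098 + 1.66 + 0.036 ~ 1.774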
# paths
workDir = '/home/nick/notebook/SIPSim/dev/bac_genome1147/'
buildDir = os.path.join(workDir, 'atomIncorp_taxaIncorp_MW-HR-SIP_postSpar')
dataDir = os.path.join(workDir, 'atomIncorp_taxaIncorp')
if not os.path.isdir(buildDir):
os.makedirs(buildDir)
%cd $buildDir
# making an experimental design file for qSIP
x = range(1,7)
y = ['control', 'treatment']
expDesignFile = os.path.join(buildDir, 'qSIP_exp_design.txt')
with open(expDesignFile, 'wb') as outFH:
for i,z in itertools.izip(x,itertools.cycle(y)):
line = '\t'.join([str(i),z])
outFH.write(line + '\n')
!head $expDesignFile
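# Expected contents of the design file (derived from the loop above; tab-separated):
# 1 control
# 2 treatment
# 3 control
# 4 treatment
# 5 control
# 6 treatment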
# building tree structure
nest = nestly.Nest()
# varying params
nest.add('percIncorp', [0, 15, 25, 50, 100])
nest.add('percTaxa', [1, 5, 10, 25, 50])
nest.add('rep', range(1,11))
## set params
nest.add('abs', ['1e9'], create_dir=False)
nest.add('np', [10], create_dir=False)
nest.add('Monte_rep', [100000], create_dir=False)
nest.add('subsample_dist', ['lognormal'], create_dir=False)
nest.add('subsample_mean', [9.432], create_dir=False)
nest.add('subsample_scale', [0.5], create_dir=False)
nest.add('subsample_min', [10000], create_dir=False)
nest.add('subsample_max', [30000], create_dir=False)
nest.add('min_BD', [min_BD], create_dir=False)
nest.add('max_BD', [max_BD], create_dir=False)
nest.add('DBL_scaling', [0.5], create_dir=False)
nest.add('bandwidth', [0.8], create_dir=False)
nest.add('heavy_BD_min', [1.71], create_dir=False)
nest.add('heavy_BD_max', [1.75], create_dir=False)
nest.add('topTaxaToPlot', [100], create_dir=False)
nest.add('padj', [0.1], create_dir=False)
nest.add('log2', [0.25], create_dir=False)
nest.add('occurs', ['0.0,0.05,0.1,0.15,0.2,0.25,0.3,0.35,0.4,0.45,0.5'], create_dir=False)
### input/output files
nest.add('buildDir', [buildDir], create_dir=False)
nest.add('exp_design', [expDesignFile], create_dir=False)
# building directory tree
nest.build(buildDir)
# bash file to run
bashFile = os.path.join(buildDir, 'SIPSimRun.sh')
files = !find . -name "*.json"
dirs = [os.path.split(x)[0] for x in files]
srcFiles = ['OTU_abs1e9_PCR_sub_w.txt', 'OTU_abs1e9_PCR_sub_meta.txt', 'BD-shift_stats.txt']
for d in dirs:
for f in srcFiles:
f1 = os.path.join(dataDir, d, f)
f2 = os.path.join(buildDir, d, f)
cmd = 'cp -f {} {}'.format(f1, f2)
!$cmd
bashFileTmp = os.path.splitext(bashFile)[0] + '_HRSIP_multi.sh'
bashFileTmp
%%writefile $bashFileTmp
#!/bin/bash
# phyloseq
## making phyloseq object from OTU table
SIPSimR phyloseq_make \
OTU_abs{abs}_PCR_sub_w.txt \
-s OTU_abs{abs}_PCR_sub_meta.txt \
> OTU_abs{abs}_PCR_sub.physeq
## HR SIP pipeline
SIPSimR phyloseq_DESeq2 \
--log2 {log2} \
--hypo greater \
--cont 1,3,5 \
--treat 2,4,6 \
--occur_heavy {occurs} \
-w 1.71-1.75 \
--all OTU_abs1e9_PCR_sub_MW1_all.txt \
OTU_abs{abs}_PCR_sub.physeq \
> OTU_abs1e9_PCR_sub_MW1_DESeq2
SIPSimR phyloseq_DESeq2 \
--log2 {log2} \
--hypo greater \
--cont 1,3,5 \
--treat 2,4,6 \
--occur_heavy {occurs} \
-w 1.71-1.78 \
--all OTU_abs1e9_PCR_sub_MW2_all.txt \
OTU_abs{abs}_PCR_sub.physeq \
> OTU_abs1e9_PCR_sub_MW2_DESeq2
SIPSimR phyloseq_DESeq2 \
--log2 {log2} \
--hypo greater \
--cont 1,3,5 \
--treat 2,4,6 \
--occur_heavy {occurs} \
-w 1.69-1.74,1.73-1.78 \
--all OTU_abs1e9_PCR_sub_MW3_all.txt \
OTU_abs{abs}_PCR_sub.physeq \
> OTU_abs1e9_PCR_sub_MW3_DESeq2
SIPSimR phyloseq_DESeq2 \
--log2 {log2} \
--hypo greater \
--cont 1,3,5 \
--treat 2,4,6 \
--occur_heavy {occurs} \
-w 1.70-1.73,1.72-1.75,1.74-1.77 \
--all OTU_abs1e9_PCR_sub_MW4_all.txt \
OTU_abs{abs}_PCR_sub.physeq \
> OTU_abs1e9_PCR_sub_MW4_DESeq2
SIPSimR phyloseq_DESeq2 \
--log2 {log2} \
--hypo greater \
--cont 1,3,5 \
--treat 2,4,6 \
--occur_heavy {occurs} \
-w 1.69-1.73,1.72-1.76,1.75-1.79 \
--all OTU_abs1e9_PCR_sub_MW5_all.txt \
OTU_abs{abs}_PCR_sub.physeq \
> OTU_abs1e9_PCR_sub_MW5_DESeq2
!chmod 777 $bashFileTmp
!cd $workDir; \
nestrun --template-file $bashFileTmp -d $buildDir --log-file HR-SIP_multi.log -j 10
%pushnote postSpar MW-HR-SIP complete
bashFileTmp = os.path.splitext(bashFile)[0] + '_cMtx.sh'
bashFileTmp
%%writefile $bashFileTmp
#!/bin/bash
# HR-SIP multiple 'heavy' BD windows
SIPSimR DESeq2_confuseMtx \
--libs 2,4,6 \
--padj {padj} \
-o DESeq2_MW1-cMtx \
BD-shift_stats.txt \
OTU_abs1e9_PCR_sub_MW1_DESeq2
SIPSimR DESeq2_confuseMtx \
--libs 2,4,6 \
--padj {padj} \
-o DESeq2_MW2-cMtx \
BD-shift_stats.txt \
OTU_abs1e9_PCR_sub_MW2_DESeq2
SIPSimR DESeq2_confuseMtx \
--libs 2,4,6 \
--padj {padj} \
-o DESeq2_MW3-cMtx \
BD-shift_stats.txt \
OTU_abs1e9_PCR_sub_MW3_DESeq2
SIPSimR DESeq2_confuseMtx \
--libs 2,4,6 \
--padj {padj} \
-o DESeq2_MW4-cMtx \
BD-shift_stats.txt \
OTU_abs1e9_PCR_sub_MW4_DESeq2
SIPSimR DESeq2_confuseMtx \
--libs 2,4,6 \
--padj {padj} \
-o DESeq2_MW5-cMtx \
BD-shift_stats.txt \
OTU_abs1e9_PCR_sub_MW5_DESeq2
!chmod 777 $bashFileTmp
!cd $workDir; \
nestrun --template-file $bashFileTmp -d $buildDir --log-file cMtx.log -j 10
def agg_cMtx(prefix):
# all data
#!nestagg delim \
# -d $buildDir \
# -k percIncorp,percTaxa,rep \
# -o $prefix-cMtx_data.txt \
# --tab \
# $prefix-cMtx_data.txt
# overall
x = prefix + '-cMtx_overall.txt'
!nestagg delim \
-d $buildDir \
-k percIncorp,percTaxa,rep \
-o $x \
--tab \
$x
# by class
x = prefix + '-cMtx_byClass.txt'
!nestagg delim \
-d $buildDir \
-k percIncorp,percTaxa,rep \
-o $x \
--tab \
$x
agg_cMtx('DESeq2_MW1')
agg_cMtx('DESeq2_MW2')
agg_cMtx('DESeq2_MW3')
agg_cMtx('DESeq2_MW4')
agg_cMtx('DESeq2_MW5')
%pushnote postSpar MW-HR-SIP run complete!
F = os.path.join(buildDir, '*-cMtx_byClass.txt')
files = glob.glob(F)
files
%%R -i files
df_byClass = list()
for (f in files){
ff = strsplit(f, '/') %>% unlist
fff = ff[length(ff)]
df_byClass[[fff]] = read.delim(f, sep='\t')
}
df_byClass = do.call(rbind, df_byClass)
df_byClass$file = gsub('\\.[0-9]+$', '', rownames(df_byClass))
df_byClass$method = gsub('-cMtx.+', '', df_byClass$file)
rownames(df_byClass) = 1:nrow(df_byClass)
df_byClass %>% head(n=3)
%%R
# renaming method
rename = data.frame(method = c('DESeq2_MW1', 'DESeq2_MW2', 'DESeq2_MW3', 'DESeq2_MW4', 'DESeq2_MW5'),
method_new = c('1.71-1.75',
'1.71-1.78',
'1.69-1.74,1.73-1.78',
'1.70-1.73,1.72-1.75,1.74-1.77',
'1.69-1.73,1.72-1.76,1.75-1.79'))
df_byClass = inner_join(df_byClass, rename, c('method'='method')) %>%
select(-method) %>%
rename('method' = method_new)
df_byClass$method = factor(df_byClass$method, levels=rename$method_new %>% as.vector)
df_byClass %>% head(n=3)
%%R -w 800 -h 550
# summarize by SIPSim rep & library rep
df_byClass.s = df_byClass %>%
group_by(method, percIncorp, percTaxa, variables) %>%
summarize(mean_value = mean(values),
sd_value = sd(values))
# plotting
ggplot(df_byClass.s, aes(variables, mean_value, color=method,
ymin=mean_value-sd_value,
ymax=mean_value+sd_value)) +
geom_pointrange(alpha=0.8, size=0.2) +
labs(y='Value') +
facet_grid(percTaxa ~ percIncorp) +
theme_bw() +
theme(
text = element_text(size=16),
axis.title.x = element_blank(),
axis.text.x = element_text(angle=45, hjust=1)
)
%%R -w 850 -h 600
# summarize by SIPSim rep & library rep
vars = c('Balanced Accuracy', 'Sensitivity', 'Specificity')
df_byClass.s.f = df_byClass.s %>%
filter(variables %in% vars)
# plotting
ggplot(df_byClass.s.f, aes(variables, mean_value, fill=method,
ymin=mean_value-sd_value,
ymax=mean_value+sd_value)) +
#geom_pointrange(alpha=0.8, size=0.2) +
geom_bar(stat='identity', position='dodge', width=0.8) +
geom_errorbar(stat='identity', position='dodge', width=0.8) +
scale_y_continuous(breaks=seq(0, 1, 0.2)) +
scale_fill_discrete('"Heavy" BD window(s)') +
facet_grid(percTaxa ~ percIncorp) +
theme_bw() +
theme(
text = element_text(size=16),
axis.title.x = element_blank(),
axis.text.x = element_text(angle=45, hjust=1),
axis.title.y = element_blank()
)
%%R -h 250 -w 650
df_byClass.sf = df_byClass %>%
filter(variables == 'Specificity')
max_val = max(df_byClass.sf$values, na.rm=TRUE)
ggplot(df_byClass.sf, aes(values)) +
geom_histogram() +
scale_y_log10() +
labs(x='Specificity') +
theme_bw() +
theme(
text = element_text(size=16)
)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: BD min/max
Step2: Nestly
Step3: Nestly params
Step4: Copying input files
Step5: Multi-window HR-SIP
Step6: Making confusion matrices
Step7: Aggregating the confusion matrix data
Step8: --End of simulation--
Step9: Checking that specificity is not always 1 (perfect)
|
6,615
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import matplotlib.image as mpimg
import cv2
%%bash
ls -l | grep .tiff
img = mpimg.imread('Lab_3_DIP.tiff')
plt.figure(figsize=(15,10))
plt.imshow(img)
plt.figure(figsize=(20,20))
kernel = np.ones((5,5),np.uint8)
erosion = cv2.erode(img,kernel,iterations = 1)
dilation = cv2.dilate(erosion,kernel,iterations = 1)
plt.subplot(1,3,1),
plt.imshow(img),
plt.title('img')
plt.subplot(1,3,2),
plt.imshow(erosion),
plt.title('erosion(img, 1)')
plt.subplot(1,3,3),
plt.imshow(dilation),
plt.title('dilate(erosion(img, 1), 1)')
plt.figure(figsize=(20,30))
kernel = np.ones((5,5),np.uint8)
erosion = cv2.erode(img,kernel,iterations = 1)
dilation = cv2.dilate(erosion,kernel,iterations = 1)
plt.subplot(4,3,1),
plt.imshow(img),
plt.title('img')
plt.subplot(4,3,2),
plt.imshow(erosion),
plt.title('erosion(img, 1)')
plt.subplot(4,3,3),
plt.imshow(dilation),
plt.title('dilate(erosion(img, 1), 1)')
erosion2 = cv2.erode(img,kernel,iterations = 2)
erosion3 = cv2.erode(img,kernel,iterations = 3)
erosion4 = cv2.erode(img,kernel,iterations = 4)
plt.subplot(4,3,4),
plt.imshow(erosion2),
plt.title('erosion(img, 2)')
plt.subplot(4,3,5),
plt.imshow(erosion3),
plt.title('erosion(img, 3)')
plt.subplot(4,3,6),
plt.imshow(erosion4),
plt.title('erosion(img, 4)')
dilation2 = cv2.dilate(img,kernel,iterations = 2)
dilation3 = cv2.dilate(img,kernel,iterations = 3)
dilation4 = cv2.dilate(img,kernel,iterations = 4)
plt.subplot(4,3,7),
plt.imshow(dilation2),
plt.title('dilate(img, 2)')
plt.subplot(4,3,8),
plt.imshow(dilation3),
plt.title('dilate(img, 3)')
plt.subplot(4,3,9),
plt.imshow(dilation4),
plt.title('dilate(img, 4)')
dil_1_ero_2 = cv2.dilate(
cv2.erode(img,kernel,iterations = 2)
,kernel,iterations = 1
)
dil_2_ero_1 = cv2.dilate(
cv2.erode(img,kernel,iterations = 1)
,kernel,iterations = 2
)
dil_2_ero_2 = cv2.dilate(
cv2.erode(img,kernel,iterations = 2)
,kernel,iterations = 2
)
plt.subplot(4,3,10),
plt.imshow(dil_1_ero_2),
plt.title('dilate(erosion(img, 2), 1)')
plt.subplot(4,3,11),
plt.imshow(dil_2_ero_1),
plt.title('dilate(erosion(img, 1), 2)')
plt.subplot(4,3,12),
plt.imshow(dil_2_ero_2),
plt.title('dilate(erosion(img, 2), 2)')
# plt.tight_layout()
plt.subplots_adjust(wspace=0, hspace=0.1)
plt.show()
plt.figure(figsize=(70,40))
plt.subplot(1,3,1),
plt.imshow(dilation),
plt.title('dilate(erosion(img, 1), 1)')
plt.subplot(1,3,2),
plt.imshow(dil_2_ero_1),
plt.title('dilate(erosion(img, 1), 2)')
plt.subplot(1,3,3),
plt.imshow(dil_2_ero_2),
plt.title('dilate(erosion(img, 2), 2)')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Erosion / dilation steps
|
6,616
|
<ASSISTANT_TASK:>
Python Code:
import argparse
import os
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
import scipy.io
import scipy.misc
import numpy as np
import pandas as pd
import PIL
import tensorflow as tf
from keras import backend as K
from keras.layers import Input, Lambda, Conv2D
from keras.models import load_model, Model
from yolo_utils import read_classes, read_anchors, generate_colors, preprocess_image, draw_boxes, scale_boxes
from yad2k.models.keras_yolo import yolo_head, yolo_boxes_to_corners, preprocess_true_boxes, yolo_loss, yolo_body
%matplotlib inline
# GRADED FUNCTION: yolo_filter_boxes
def yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold = .6):
    """
    Filters YOLO boxes by thresholding on object and class confidence.

    Arguments:
    box_confidence -- tensor of shape (19, 19, 5, 1)
    boxes -- tensor of shape (19, 19, 5, 4)
    box_class_probs -- tensor of shape (19, 19, 5, 80)
    threshold -- real value, if [ highest class probability score < threshold], then get rid of the corresponding box

    Returns:
    scores -- tensor of shape (None,), containing the class probability score for selected boxes
    boxes -- tensor of shape (None, 4), containing (b_x, b_y, b_h, b_w) coordinates of selected boxes
    classes -- tensor of shape (None,), containing the index of the class detected by the selected boxes

    Note: "None" is here because you don't know the exact number of selected boxes, as it depends on the threshold.
    For example, the actual output size of scores would be (10,) if there are 10 boxes.
    """
# Step 1: Compute box scores
### START CODE HERE ### (≈ 1 line)
    box_scores = box_class_probs * box_confidence
### END CODE HERE ###
# Step 2: Find the box_classes thanks to the max box_scores, keep track of the corresponding score
### START CODE HERE ### (≈ 2 lines)
box_classes = K.argmax(box_scores, axis = -1)
box_class_scores = K.max(box_scores, axis=-1)
### END CODE HERE ###
# Step 3: Create a filtering mask based on "box_class_scores" by using "threshold". The mask should have the
# same dimension as box_class_scores, and be True for the boxes you want to keep (with probability >= threshold)
### START CODE HERE ### (≈ 1 line)
filtering_mask = box_class_scores > threshold
### END CODE HERE ###
# Step 4: Apply the mask to scores, boxes and classes
### START CODE HERE ### (≈ 3 lines)
scores = tf.boolean_mask(box_class_scores, filtering_mask)
boxes = tf.boolean_mask(boxes, filtering_mask)
classes = tf.boolean_mask(box_classes, filtering_mask)
### END CODE HERE ###
return scores, boxes, classes
with tf.Session() as test_a:
box_confidence = tf.random_normal([19, 19, 5, 1], mean=1, stddev=4, seed = 1)
boxes = tf.random_normal([19, 19, 5, 4], mean=1, stddev=4, seed = 1)
box_class_probs = tf.random_normal([19, 19, 5, 80], mean=1, stddev=4, seed = 1)
scores, boxes, classes = yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold = 0.5)
print("scores[2] = " + str(scores[2].eval()))
print("boxes[2] = " + str(boxes[2].eval()))
print("classes[2] = " + str(classes[2].eval()))
print("scores.shape = " + str(scores.shape))
print("boxes.shape = " + str(boxes.shape))
print("classes.shape = " + str(classes.shape))
# GRADED FUNCTION: iou
def iou(box1, box2):
    """
    Implement the intersection over union (IoU) between box1 and box2

    Arguments:
    box1 -- first box, list object with coordinates (x1, y1, x2, y2)
    box2 -- second box, list object with coordinates (x1, y1, x2, y2)
    """
# Calculate the (y1, x1, y2, x2) coordinates of the intersection of box1 and box2. Calculate its Area.
### START CODE HERE ### (≈ 5 lines)
xi1 = box1[0] if box1[0] > box2[0] else box2[0]
yi1 = box1[1] if box1[1] > box2[1] else box2[1]
xi2 = box1[2] if box1[2] < box2[2] else box2[2]
yi2 = box1[3] if box1[3] < box2[3] else box2[3]
    inter_area = max(xi2 - xi1, 0) * max(yi2 - yi1, 0)  # zero when the boxes do not overlap
### END CODE HERE ###
# Calculate the Union area by using Formula: Union(A,B) = A + B - Inter(A,B)
### START CODE HERE ### (≈ 3 lines)
box1_area = abs(box1[2]-box1[0])*abs(box1[1]-box1[3])
box2_area = abs(box2[2]-box2[0])*abs(box2[1]-box2[3])
union_area = box1_area+box2_area-inter_area
### END CODE HERE ###
# compute the IoU
### START CODE HERE ### (≈ 1 line)
iou = inter_area/ union_area
### END CODE HERE ###
return iou
box1 = (2, 1, 4, 3)
box2 = (1, 2, 3, 4)
print("iou = " + str(iou(box1, box2)))
# GRADED FUNCTION: yolo_non_max_suppression
def yolo_non_max_suppression(scores, boxes, classes, max_boxes = 10, iou_threshold = 0.5):
    """
    Applies Non-max suppression (NMS) to set of boxes

    Arguments:
    scores -- tensor of shape (None,), output of yolo_filter_boxes()
    boxes -- tensor of shape (None, 4), output of yolo_filter_boxes() that have been scaled to the image size (see later)
    classes -- tensor of shape (None,), output of yolo_filter_boxes()
    max_boxes -- integer, maximum number of predicted boxes you'd like
    iou_threshold -- real value, "intersection over union" threshold used for NMS filtering

    Returns:
    scores -- tensor of shape (, None), predicted score for each box
    boxes -- tensor of shape (4, None), predicted box coordinates
    classes -- tensor of shape (, None), predicted class for each box

    Note: The "None" dimension of the output tensors has obviously to be less than max_boxes. Note also that this
    function will transpose the shapes of scores, boxes, classes. This is made for convenience.
    """
max_boxes_tensor = K.variable(max_boxes, dtype='int32') # tensor to be used in tf.image.non_max_suppression()
K.get_session().run(tf.variables_initializer([max_boxes_tensor])) # initialize variable max_boxes_tensor
# Use tf.image.non_max_suppression() to get the list of indices corresponding to boxes you keep
### START CODE HERE ### (≈ 1 line)
    nms_indices = tf.image.non_max_suppression(boxes, scores, max_output_size=max_boxes_tensor, iou_threshold=iou_threshold)
### END CODE HERE ###
# Use K.gather() to select only nms_indices from scores, boxes and classes
### START CODE HERE ### (≈ 3 lines)
scores = K.gather(scores, nms_indices)
boxes = K.gather(boxes, nms_indices)
classes = K.gather(classes, nms_indices)
### END CODE HERE ###
return scores, boxes, classes
with tf.Session() as test_b:
scores = tf.random_normal([54,], mean=1, stddev=4, seed = 1)
boxes = tf.random_normal([54, 4], mean=1, stddev=4, seed = 1)
classes = tf.random_normal([54,], mean=1, stddev=4, seed = 1)
scores, boxes, classes = yolo_non_max_suppression(scores, boxes, classes)
print("scores[2] = " + str(scores[2].eval()))
print("boxes[2] = " + str(boxes[2].eval()))
print("classes[2] = " + str(classes[2].eval()))
print("scores.shape = " + str(scores.eval().shape))
print("boxes.shape = " + str(boxes.eval().shape))
print("classes.shape = " + str(classes.eval().shape))
# GRADED FUNCTION: yolo_eval
def yolo_eval(yolo_outputs, image_shape = (720., 1280.), max_boxes=10, score_threshold=.6, iou_threshold=.5):
    """
    Converts the output of YOLO encoding (a lot of boxes) to your predicted boxes along with their scores, box coordinates and classes.

    Arguments:
    yolo_outputs -- output of the encoding model (for image_shape of (608, 608, 3)), contains 4 tensors:
                    box_confidence: tensor of shape (None, 19, 19, 5, 1)
                    box_xy: tensor of shape (None, 19, 19, 5, 2)
                    box_wh: tensor of shape (None, 19, 19, 5, 2)
                    box_class_probs: tensor of shape (None, 19, 19, 5, 80)
    image_shape -- tensor of shape (2,) containing the input shape, in this notebook we use (608., 608.) (has to be float32 dtype)
    max_boxes -- integer, maximum number of predicted boxes you'd like
    score_threshold -- real value, if [ highest class probability score < threshold], then get rid of the corresponding box
    iou_threshold -- real value, "intersection over union" threshold used for NMS filtering

    Returns:
    scores -- tensor of shape (None, ), predicted score for each box
    boxes -- tensor of shape (None, 4), predicted box coordinates
    classes -- tensor of shape (None,), predicted class for each box
    """
### START CODE HERE ###
# Retrieve outputs of the YOLO model (≈1 line)
box_confidence, box_xy, box_wh, box_class_probs = yolo_outputs
# Convert boxes to be ready for filtering functions
boxes = yolo_boxes_to_corners(box_xy, box_wh)
# Use one of the functions you've implemented to perform Score-filtering with a threshold of score_threshold (≈1 line)
scores, boxes, classes = yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold = score_threshold)
# Scale boxes back to original image shape.
boxes = scale_boxes(boxes, image_shape)
# Use one of the functions you've implemented to perform Non-max suppression with a threshold of iou_threshold (≈1 line)
scores, boxes, classes = yolo_non_max_suppression(scores, boxes, classes, max_boxes = max_boxes, iou_threshold = iou_threshold)
### END CODE HERE ###
return scores, boxes, classes
with tf.Session() as test_b:
yolo_outputs = (tf.random_normal([19, 19, 5, 1], mean=1, stddev=4, seed = 1),
tf.random_normal([19, 19, 5, 2], mean=1, stddev=4, seed = 1),
tf.random_normal([19, 19, 5, 2], mean=1, stddev=4, seed = 1),
tf.random_normal([19, 19, 5, 80], mean=1, stddev=4, seed = 1))
scores, boxes, classes = yolo_eval(yolo_outputs)
print("scores[2] = " + str(scores[2].eval()))
print("boxes[2] = " + str(boxes[2].eval()))
print("classes[2] = " + str(classes[2].eval()))
print("scores.shape = " + str(scores.eval().shape))
print("boxes.shape = " + str(boxes.eval().shape))
print("classes.shape = " + str(classes.eval().shape))
sess = K.get_session()
class_names = read_classes("model_data/coco_classes.txt")
anchors = read_anchors("model_data/yolo_anchors.txt")
image_shape = (720., 1280.)
yolo_model = load_model("model_data/yolo.h5")
yolo_model.summary()
yolo_outputs = yolo_head(yolo_model.output, anchors, len(class_names))
scores, boxes, classes = yolo_eval(yolo_outputs, image_shape)
def predict(sess, image_file):
    """
    Runs the graph stored in "sess" to predict boxes for "image_file". Prints and plots the predictions.

    Arguments:
    sess -- your tensorflow/Keras session containing the YOLO graph
    image_file -- name of an image stored in the "images" folder.

    Returns:
    out_scores -- tensor of shape (None, ), scores of the predicted boxes
    out_boxes -- tensor of shape (None, 4), coordinates of the predicted boxes
    out_classes -- tensor of shape (None, ), class index of the predicted boxes

    Note: "None" actually represents the number of predicted boxes, it varies between 0 and max_boxes.
    """
# Preprocess your image
image, image_data = preprocess_image("images/" + image_file, model_image_size = (608, 608))
# Run the session with the correct tensors and choose the correct placeholders in the feed_dict.
# You'll need to use feed_dict={yolo_model.input: ... , K.learning_phase(): 0})
### START CODE HERE ### (≈ 1 line)
out_scores, out_boxes, out_classes=sess.run(
[scores, boxes, classes],
feed_dict={
yolo_model.input: image_data,
K.learning_phase(): 0
})### END CODE HERE ###
# Print predictions info
print('Found {} boxes for {}'.format(len(out_boxes), image_file))
# Generate colors for drawing bounding boxes.
colors = generate_colors(class_names)
# Draw bounding boxes on the image file
draw_boxes(image, out_scores, out_boxes, out_classes, class_names, colors)
# Save the predicted bounding box on the image
image.save(os.path.join("out", image_file), quality=90)
# Display the results in the notebook
output_image = scipy.misc.imread(os.path.join("out", image_file))
imshow(output_image)
return out_scores, out_boxes, out_classes
out_scores, out_boxes, out_classes = predict(sess, "test.jpg")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Important Note
Step4: Expected Output
Step6: Expected Output
Step8: Expected Output
Step9: Expected Output
Step10: 3.1 - Defining classes, anchors and image shape.
Step11: 3.2 - Loading a pretrained model
Step12: This loads the weights of a trained YOLO model. Here's a summary of the layers your model contains.
Step13: Note
Step14: You added yolo_outputs to your graph. This set of 4 tensors is ready to be used as input by your yolo_eval function.
Step16: 3.5 - Run the graph on an image
Step17: Run the following cell on the "test.jpg" image to verify that your function is correct.
|
6,617
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
df = pd.read_csv('atlas-taggings.csv')
df.head(10)
articles = df[df.tagged_type == 'Article']
articles.head()
def get_tag(x):
return x.split('/')[2]
#changing this function to get_tag_name() in module.
articles.tag_url = articles.tag_url.apply(get_tag)
articles.head()
test = pd.get_dummies(articles.tag_url)
test.head()
articles = articles.join(test)
articles.drop(['tag_id','tag_url','tagged_type','tagged_id'],axis=1,inplace=True)
articles.head()
unique_articles = articles.groupby('tagged_url').sum() #made into func
unique_articles = unique_articles.reset_index()
unique_articles = unique_articles.set_index('tagged_url')
#now we need the pageviews and have to map the URLs to Page Titles
pageviews = pd.read_csv('output_articles_performance.csv',header=None,names=['url','published','pageviews'])
pageviews.head()
#In the future I should import the module and run it here instead of grabbing.
pageviews.url = ['www.atlasobscura.com/articles/' + x for x in pageviews.url]
pageviews.head()
pageviews.describe()
pageviews.set_index('url',inplace=True)
article_set = unique_articles.join(pageviews)
article_set.head()
article_set.reset_index()
article_set['upper_quartile'] = [1 if x > 10000 else 0 for x in article_set.pageviews]
article_set.pageviews.plot(kind='hist', bins=100,title='Page View Distribution, All Content')
article_set['published'] = pd.to_datetime(article_set['published'])
article_set
article_set['year'] = pd.DatetimeIndex(article_set['published']).year
ax = article_set.boxplot(column='pageviews',by='year',figsize=(6,6),showfliers=False)
ax.set(title='PV distribution by year',ylabel='pageviews')
yearly = article_set.set_index('published').resample('M').mean().plot(y='pageviews')
yearly.set(title='Total Pageviews By Month of Article Publication')
time_series = pd.read_csv('time-series.csv')
type(time_series)
time_series = time_series.drop('Unnamed: 0',axis=1)
time_series = time_series.T
time_series.columns
time_series['total'] = time_series.sum(axis=1)
time_series.head()
time_series['days_to_90p']= [(time_series.iloc[x].expanding().sum() > time_series.iloc[x].total*.90).argmax() \
for x in range(len(time_series))]
time_series.reset_index(inplace=True)
time_series.head(1)
time_series['index'] = ['www.atlasobscura.com/articles/' + x for x in time_series['index']]
time_series.set_index('index',inplace=True)
time_series = time_series.join(pageviews.published)
time_series.head(5)
time_series['published'] = pd.to_datetime(time_series.published)
time_series['year_pub'] = pd.DatetimeIndex(time_series['published']).year
time_series.boxplot(column='days_to_90p',by='year_pub')
time_series.year_pub.value_counts(dropna=False)
time_series[['days_to_90p','total','year_pub']].corr()
#I DON'T KNOW WHY THIS WON'T WORK
time_series['30-day-PVs'] = [time_series.fillna(value=0).iloc[x,0:31].sum() for x in range(len(time_series))]
time_series['7-day-PVs'] = [time_series.fillna(value=0).iloc[x,0:8].sum() for x in range(len(time_series))]
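# A vectorized alternative sketch (an assumption: the first columns of time_series are
# the daily pageview columns in day order; adjust the slices if other columns come first):
# time_series['30-day-PVs'] = time_series.iloc[:, 0:31].fillna(0).sum(axis=1)
# time_series['7-day-PVs'] = time_series.iloc[:, 0:8].fillna(0).sum(axis=1)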
total_tagged= pd.DataFrame(data=article_set.sum(),columns = ['num_tagged'])
total_tagged.sort_values('num_tagged',ascending=False,inplace=True)
total_tagged.drop('pageviews',axis=0,inplace=True)
total_tagged[total_tagged.num_tagged >= 10].count()
total_tagged[total_tagged.num_tagged <=5].index
#tag_analysis = article_set.drop(total_tagged[total_tagged.num_tagged < 5].index,axis=1)
#I'm resetting tag_analysis to contain all tags so I can manipulate later whenever I want. It makes it more clear.
tag_analysis = article_set
print tag_analysis.shape
tag_analysis.head()
tag_analysis.tail()
tag_analysis.to_csv('tag_analysis_ready.csv')
total_tagged.head(30)
print total_tagged.shape
from sklearn.preprocessing import PolynomialFeatures
poly = PolynomialFeatures(interaction_only=True)
poly_df = pd.DataFrame(poly.fit_transform(tag_analysis.fillna(0).drop(['published','pageviews','upper_quartile','year'],axis=1)))
poly.n_output_features_
total_tagged.ix['extra-mile']
regular_features = ['places-you-can-no-longer-go','100-wonders','extra-mile','video-wonders','news','features','columns',
'found','animals','fleeting-wonders','visual','other-capitals-of-the-world','video','art','list','objects-of-intrigue',
'maps','morbid-monday','female-explorers','naturecultures']
total_tagged[total_tagged.num_tagged >10].shape
interactions = pd.DataFrame()
for item in regular_features:
for column in tag_analysis.drop(['published','pageviews','upper_quartile','year'],axis=1).drop(
total_tagged[total_tagged.num_tagged < 10].index,axis=1).columns:
interactions[(item + '_' + column)] = tag_analysis[item] + tag_analysis[column]
#Just sum the row and column and then turn any 2s into 1s and 1s into zeros.
def correct_values(x):
if x == 2.0:
return 1
elif x == 1.0:
return 0
else:
return 0
for item in interactions.columns:
interactions[item] = interactions[item].apply(correct_values)
interactions.head(2)
tagged_total = pd.DataFrame(data =interactions.sum(),columns=['num_tagged'])
tagged_total = tagged_total.sort_values('num_tagged',ascending=False)
identity_tags = tagged_total[0:26].index
interactions = interactions.drop(identity_tags,axis=1)
tagged_total = pd.DataFrame(data =interactions.sum(),columns=['num_tagged'])
tagged_total = tagged_total.sort_values('num_tagged',ascending=False)
tagged_total.head(10)
#DO I WANT TO DROP THE EMPTY COLUMNS?
#for item in interactions.columns:
# if interactions[item].sum == 0:
# interactions = interactions.drop(item,axis=1)
interactions.head(10)
interactions = interactions.join(pageviews)
#drop empty cols
def drop_zero_cols(df):
for item in df.columns:
if df[item].sum() == 0:
df = df.drop(item,axis=1)
else:
continue
return df
interactions = drop_zero_cols(interactions.fillna(0).drop(['published','pageviews'],axis=1))
interactions = interactions.join(pageviews)
interactions.head(1)
interaction_totals = pd.DataFrame(interactions.sum().sort_values(ascending=False),columns=['num_tagged'])
interaction_totals[interaction_totals.num_tagged < 4].shape
interactions_analysis = interactions.drop(interaction_totals[interaction_totals.num_tagged < 4].index,axis=1)
interactions_analysis.head()
#Check whether number of Aggregated stories published per day has an impact on average/total Day 0 - 1 traffic.
from sklearn import linear_model
from sklearn import metrics
from sklearn import cross_validation
interactions_analysis['upper_quartile'] = [1 if x > 10000 else 0 for x in interactions.pageviews]
interactions_analysis['twenty_thousand'] = [1 if x > 20000 else 0 for x in interactions.pageviews]
y = interactions_analysis.upper_quartile
X = interactions_analysis.drop(['pageviews','published','upper_quartile','twenty_thousand'],axis=1)
kf = cross_validation.KFold(len(interactions_analysis),n_folds=5)
scores = []
for train_index, test_index in kf:
lr = linear_model.LogisticRegression().fit(X.iloc[train_index],y.iloc[train_index])
scores.append(lr.score(X.iloc[test_index],y.iloc[test_index]))
print "average accuracy for LogisticRegression is", np.mean(scores)
print "average of the set is: ", np.mean(y)
interactions_lr_scores = lr.predict_proba(X)[:,1]
print metrics.roc_auc_score(y,interactions_lr_scores)
interactions_probabilities = pd.DataFrame(zip(X.columns,interactions_lr_scores),columns=['tags','probabilities'])
interactions_probabilities.sort_values('probabilities',ascending=False)
interaction_totals.head(2)
def split_tag(x):
return x.split('_')[1]
interactions_probabilities = interactions_probabilities.reset_index()
interactions_probabilities['subtag'] = interactions_probabilities.tags.apply(split_tag)
interactions_probabilities = interactions_probabilities.sort_values(['tags','probabilities'],ascending=[1, 0])
interactions_probabilities = interactions_probabilities.set_index('tags').join(interaction_totals)
interactions_probabilities
interactions_probabilities['pageviews'] = [sum(interactions['pageviews'][interactions[item]==1]) for item in interactions_probabilities.tags]
interactions_probabilities['mean-PVs'] = interactions_probabilities['pageviews'] // interactions_probabilities['num_tagged']
regular_features
interactions_probabilities[interactions_probabilities.tags.str.contains('features')==True].sort_values('mean-PVs',
ascending = False)
interactions_probabilities.sort_values('probabilities',ascending = False)
np.mean(interactions.pageviews)
#I took the dashes out. Have to add back for this
fix_regular_features = [x.replace(' ','-') for x in regular_features]
fig,axes=plt.subplots(figsize=(10,10))
for item, name in enumerate(fix_regular_features):
interactions.plot(x=interactions['pageviews'][interactions.columns.str.contains(name)==True],kind='box',ax=item)
plt.show()
#doublecheck my work on pageviews vs num-published
pub_volume = tag_analysis[['published','pageviews']]
pub_volume['num_pubbed'] = 1
pub_volume['published'] = pd.to_datetime(pub_volume.published)
pub_volume = pub_volume.set_index('published')
pub_volume.head(10)
pub_volume = pub_volume.resample('M').sum().dropna()
pub_volume['year'] = pub_volume.index.year
pub_volume[pub_volume.index.year >=2015].corr()
pub_volume[pub_volume.index.year >=2015].plot(kind='scatter',x='num_pubbed',y='pageviews')
import seaborn as sns
ax = sns.regplot(x='num_pubbed',y='pageviews',data=pub_volume)
#doublecheck my work on pageviews vs num-published
pub_volume = time_series[['published','7-day-PVs']]
pub_volume['num_pubbed'] = 1
pub_volume['published'] = pd.to_datetime(pub_volume.published)
pub_volume = pub_volume.set_index('published')
pub_volume.head(10)
num_holder = pub_volume.resample('D').sum().dropna().drop('7-day-PVs',axis=1)
pub_volume = pub_volume.resample('D').sum().dropna().drop('num_pubbed',axis=1)
pub_volume = pub_volume.join(num_holder)
pub_volume['year'] = pub_volume.index.year
pub_volume[pub_volume.index.year >=2015].corr()
pub_volume[pub_volume.index >='2016-01-01'].plot(kind='scatter',x='num_pubbed',y='7-day-PVs',title='7-Day PVs')
import seaborn as sns
ax = sns.regplot(x='num_pubbed',y='7-day-PVs',data=pub_volume)
simplereach = pd.read_csv('simplereach-tags.csv')
simplereach.head(1)
simplereach = simplereach.set_index('Tag')
total_tagged2 = total_tagged
total_tagged2.head(4)
total_tagged2.index = [x.replace('-',' ') for x in total_tagged.index]
simplereach = simplereach.join(total_tagged2)
simplereach['mean-PVs'] = simplereach['Page Views'] // simplereach['num_tagged']
simplereach['mean-shares'] = simplereach['Facebook Shares'] // simplereach['num_tagged']
simplereach = simplereach[['mean-PVs','mean-shares','num_tagged']]
simplereach[simplereach['num_tagged'] > 5].sort_values('mean-PVs',ascending=False)
#regular_features = [x.replace('-',' ') for x in regular_features]
simplereach.ix[regular_features].sort_values('mean-PVs',ascending=False)
from sklearn import linear_model
from sklearn import metrics
tag_analysis.fillna(value=0,inplace=True)
y = tag_analysis.upper_quartile
X = tag_analysis.drop(['pageviews','published','upper_quartile'],axis=1)
from sklearn import cross_validation
kf = cross_validation.KFold(len(tag_analysis),n_folds=5)
scores = []
for train_index, test_index in kf:
lr = linear_model.LogisticRegression().fit(X.iloc[train_index],y.iloc[train_index])
scores.append(lr.score(X.iloc[test_index],y.iloc[test_index]))
print "average accuracy for LogisticRegression is", np.mean(scores)
print "average of the set is: ", np.mean(y)
lr_scores = lr.predict_proba(X)[:,1]
print metrics.roc_auc_score(y,lr_scores)
lr_scores
coefficients = pd.DataFrame(zip(X.columns,lr.coef_[0]),columns=['tags','coefficients'])
probabilities = pd.DataFrame(zip(X.columns,lr_scores),columns=['tags','probabilities'])
probabilities.sort_values('probabilities',ascending=False)
coefficients.sort_values('coefficients',ascending=False)
tag_analysis[tag_analysis['100-wonders'] ==1].describe()
tag_analysis.head()
from sklearn.grid_search import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier
params = {'n_neighbors': [x for x in range(2,200,1)],
'weights': ['distance','uniform']}
gs = GridSearchCV(estimator=KNeighborsClassifier(),param_grid=params,n_jobs=8,cv=10)
gs.fit(X,y)
print gs.best_params_
print gs.best_score_
print type(gs.best_estimator_)
knn = gs.best_estimator_.fit(X,y)
knn_scores = knn.predict_proba(X)[:,1]
print np.mean(knn_scores)
print np.mean(lr_scores)
knn_probabilities = pd.DataFrame(zip(X.columns,knn_scores),columns=['tags','probabilities'])
knn_probabilities.sort_values('probabilities',ascending=False)
print 'knn', metrics.roc_auc_score(y,knn_scores)
print 'lr', metrics.roc_auc_score(y,lr_scores)
probabilities = probabilities.set_index('tags')
probabilities = probabilities.join(total_tagged)
probabilities.to_csv('tag-probabilities-logisticregression.csv')
from sklearn.ensemble import RandomForestClassifier
params = {'max_depth': np.arange(20,100,2),
'min_samples_leaf': np.arange(90,200,2),
          'n_estimators': [20]}  # GridSearchCV expects an iterable of candidate values for each parameter
gs1 = GridSearchCV(RandomForestClassifier(),param_grid=params, cv=10, scoring='roc_auc',n_jobs=8,verbose=1)
gs1.fit(X,y)
print gs1.best_params_
print gs1.best_score_
rf = gs1.best_estimator_  # reuse the best estimator found by the grid search
rf.fit(X,y)
probs = rf.predict_proba(X)[:,1]
print rf.score(X,y)
print metrics.roc_auc_score(y,probs)
probs = pd.DataFrame(zip(X.columns,probs),columns=['tags','probabilities'])
probs.sort_values('probabilities',ascending=False)
tag_analysis2 = article_set.drop(total_tagged[total_tagged.num_tagged < 15].index,axis=1)
tag_analysis2['ten_thousand'] = [1 if x > 10000 else 0 for x in tag_analysis2.pageviews]
tag_analysis2.fillna(value=0,inplace=True)
y2 = tag_analysis2.ten_thousand
X2 = tag_analysis2.drop(['pageviews','published','upper_quartile','ten_thousand'],axis=1)  # drop the non-numeric published column, as in the earlier model
kf2 = cross_validation.KFold(len(tag_analysis2),n_folds=5)
scores2 = []
for train_index, test_index in kf2:
lr2 = linear_model.LogisticRegression().fit(X2.iloc[train_index],y2.iloc[train_index])
scores2.append(lr2.score(X2.iloc[test_index],y2.iloc[test_index]))
print "average accuracy for LogisticRegression is", np.mean(scores2)
print "average of the set is: ", np.mean(y2)
print tag_analysis2.shape
print y2.shape
print X2.shape
lr_scores2 = lr2.predict_proba(X2)[:,1]
lr2_probs = pd.DataFrame(zip(X2.columns,lr_scores2),columns=['tags','probabilities'])
lr2_probs.sort_values('probabilities',ascending=False)
metrics.roc_auc_score(y2,lr2.predict_proba(X2)[:,1])
lr2_probs = lr2_probs.set_index('tags')
lr2_probs = lr2_probs.join(total_tagged)
plt.figure(figsize=(10,10))
plt.scatter(lr2_probs.num_tagged,lr2_probs.probabilities)
plt.show()
lr2_probs = lr2_probs.sort_values('probabilities',ascending=False)
lr2_probs = lr2_probs.reset_index()
lr2_probs.to_csv('min15tags_min10000pvs.csv')
lr2_probs.shape
lr2_probs
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First, import the table of tag-article mappings from our SQL db
Step2: We only care about the articles for this analysis. Place entries are outside scope.
Step3: Extract the tag name from the tag's URL
Step4: Create a tag_url column that just has the tag's name
Step5: Get dummies for each tag
Step6: Join the dummies back to the main dataframe
Step7: De-dupe articles but maintain the tagging data using groupby and sum
Step8: Using a csv generated by a script I wrote that queries Google Analytics for pageviews per article from publish date to n-days post-publication, import pageview data and join it to the tag/article DataFrame
Step9: Set the pageviews index to the url column to make joining easy
Step10: Reset index
Step11: Articles published more recently have, on average, received much more traffic than older articles (this reflects growth and heavier distribution of the newer content). The drop in the mean as we move into 2016 is an artifact of the article's lifecycle not being complete.
Step12: Let's import the time-series I created with a python script that asks GA for the daily time-series of Pageviewsof each article from publication date forward two years.
Step13: It was easier to collect the data from GA by looping over the columns in my original dataframe, but having each row be an article record is easier to work with now, so we transpose.
Step14: Let's determine how many days post-publication it takes for an article to collect 90% of total pageviews.
Step15: Now let's look at the number of articles per tag (we will later join the two DataFrames above into one)
Step16: Now I'm going to try this with the time series 30days pvs
Step17: Let's check average performance when just looking at Simplereach Tag data
Step18: Let's run some regression analysis on our tag_analysis DataFrame
Step19: Now let's try it with KNN
Step20: Let's check the roc_auc scores for both the knn and logistic regression models.
Step21: Looks like they give similar scores, but the scores are easily manipulated by changing the threshold for the number of articles per tag and by changing the threshold for "success" (currently set at > 10,000 Pageviews).
Step22: Now let's try RandomForest
Step23: Let's try the Logistic Model but with more tags
|
6,618
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister
from qiskit import Aer, execute
from qiskit.quantum_info import Pauli, state_fidelity, basis_state, process_fidelity
q0 = QuantumRegister(2, 'q0')
c0 = ClassicalRegister(2, 'c0')
q1 = QuantumRegister(2, 'q1')
c1 = ClassicalRegister(2, 'c1')
q_test = QuantumRegister(2, 'q0')
print(q0.name)
print(q0.size)
q0==q0
q0==q_test
q0==q1
circ = QuantumCircuit(q0, q1)
circ.x(q0[1])
circ.x(q1[0])
circ.draw()
circ2 = QuantumCircuit()
circ2.add_register(q0)
circ2.add_register(q1)
circ2.x(q0[1])
circ2.x(q1[0])
circ2.draw()
from copy import deepcopy
q3 = QuantumRegister(2, 'q3')
circ3 = deepcopy(circ)
circ3.add_register(q3)
circ3.draw()
meas = QuantumCircuit(q0, q1, c0, c1)
meas.measure(q0, c0)
meas.measure(q1, c1)
qc = circ + meas
qc.draw()
meas2 = QuantumCircuit()
meas2.add_register(q0)
meas2.add_register(q1)
meas2.add_register(c0)
meas2.add_register(c1)
meas2.measure(q0, c0)
meas2.measure(q1, c1)
qc2 = circ2 + meas2
qc2.draw()
circ4 = QuantumCircuit(q1)
circ4.x(q1)
circ4.draw()
circ5 = QuantumCircuit(q3)
circ5.h(q3)
circ5.draw()
(circ4+circ5).draw()
circ4 += circ5
circ4.draw()
circ.draw()
backend_sim = Aer.get_backend('statevector_simulator')
result = execute(circ, backend_sim).result()
state = result.get_statevector(circ)
print(state)
state_fidelity(basis_state('0110', 4), state)
backend_sim = Aer.get_backend('unitary_simulator')
result = execute(circ, backend_sim).result()
unitary = result.get_unitary(circ)
process_fidelity(Pauli(label='IXXI').to_matrix(), unitary)
qc.draw()
backend_sim = Aer.get_backend('qasm_simulator')
result = execute(qc, backend_sim).result()
counts = result.get_counts(qc)
print(counts)
backend_sim = Aer.get_backend('statevector_simulator')
result = execute(circ2, backend_sim).result()
states = result.get_statevector(circ2)
backend_sim = Aer.get_backend('qasm_simulator')
result = execute(qc2, backend_sim).result()
counts = result.get_counts(qc2)
backend_sim = Aer.get_backend('unitary_simulator')
result = execute(circ2, backend_sim).result()
unitary = result.get_unitary(circ2)
print(counts)
state_fidelity(basis_state('0110', 4), state)
process_fidelity(Pauli(label='IXXI').to_matrix(), unitary)
q = QuantumRegister(6)
circuit = QuantumCircuit(q)
circuit.h(q[0])
circuit.ccx(q[0], q[1], q[2])
circuit.cx(q[1], q[3])
circuit.x(q)
circuit.h(q[2])
circuit.h(q[3])
circuit.draw()
# total number of operations in the circuit. no unrolling is done.
circuit.size()
# depth of circuit (number of ops on the critical path)
circuit.depth()
# number of qubits in the circuit
circuit.width()
# a breakdown of operations by type
circuit.count_ops()
# number of unentangled subcircuits in this circuit.
# each subcircuit can in principle be executed on a different quantum processor!
circuit.num_tensor_factors()
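# Small cross-check (illustrative sketch, not part of the original tutorial): since the
# prepared state has no superposition, every shot of the measured circuit should land on
# the single outcome '01 10' (the space-separated register format described above).
# `counts` is the dictionary returned by get_counts() earlier in this notebook.
assert set(counts) == {'01 10'}, counts
print("all shots collapsed to:", list(counts))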
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Quantum and Classical Registers
Step2: The name is optional. If not given, Qiskit will name it $qi$ where $i$ is an integer counting from 0. The name and size can be returned using the following
Step3: You can test whether the registers are the same or different.
Step4: Quantum Circuits
Step5: is the same as
Step6:
Step7:
Step8: It even works when the circuits have different registers. Let's start by making two new circuits
Step9: The new register is added to the circuit
Step10: We have also overloaded += to the QuantumCircuit object
Step11: Outcomes of Quantum Circuits
Step12: Qubit register $Q_0$ is prepared in the state $|10\rangle$ and $Q_1$ is in the state $|01\rangle$, giving a total state $|0110\rangle$ ($Q_1\otimes Q_0$).
Step13: To check the fidelity of this state with the basis_state in Qiskit Terra you can use
Step14: We can also use Qiskit Terra to make the unitary operator representing the circuit (provided there are no measurements). This will be a $16\otimes16$ matrix equal to $I\otimes X\otimes X\otimes I$. To check this is correct we can use the Pauli class and the process_fidelity function.
Step15: To map the information of the quantum state to the classical world, we have to use the example with measurements, qc
Step16: This will map the quantum state to the classical world and since the state has no superpositions it will be deterministic and equal to '01 10'. Here a space is used to separate the registers.
Step17: To show that it does not matter how you add the registers we run the same as above on the second example circuit
Step18: Counting circuit resources
|
6,619
|
<ASSISTANT_TASK:>
Python Code:
#!pip install -I "phoebe>=2.4,<2.5"
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
b.add_dataset('lc', compute_times=phoebe.linspace(0,1,101), dataset='lc01')
b.add_dataset('mesh', compute_times=[0.25], dataset='mesh01')
b.run_compute(model='original_model')
print(b.filter(qualifier='distortion_method', context='compute'))
print(b.get_parameter(qualifier='distortion_method', component='secondary', context='compute'))
b.set_value('distortion_method', component='secondary', value='none')
print(b.filter(qualifier='pblum*'))
b.run_compute(model='distortion_method_none')
_ = b.plot(kind='lc', legend=True, show=True)
_ = b.plot(kind='lc', model='distortion_method_none', show=True)
_ = b.plot(kind='mesh', model='distortion_method_none', show=True)
_ = b.plot(kind='mesh', model='original_model', show=True)
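# Rough numerical comparison of the two models (an illustrative sketch, assuming the
# usual b.get_value access to synthetic fluxes in the 'model' context of PHOEBE 2.x).
fluxes_orig = b.get_value(qualifier='fluxes', dataset='lc01', model='original_model', context='model')
fluxes_none = b.get_value(qualifier='fluxes', dataset='lc01', model='distortion_method_none', context='model')
print("max |flux difference| between the models:", np.max(np.abs(fluxes_orig - fluxes_none)))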
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As always, let's do imports and initialize a logger and a new bundle.
Step2: Adding Datasets
Step3: Distortion Method
Step4: IMPORTANT NOTE
Step5: Plotting
Step6: And if we plot the exposed meshes, we'll see that no mesh was created for the secondary component when setting distortion_method to 'none'.
|
6,620
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mohc', 'ukesm1-0-mmh', 'atmos')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Family
Step7: 1.4. Basic Approximations
Step8: 2. Key Properties --> Resolution
Step9: 2.2. Canonical Horizontal Resolution
Step10: 2.3. Range Horizontal Resolution
Step11: 2.4. Number Of Vertical Levels
Step12: 2.5. High Top
Step13: 3. Key Properties --> Timestepping
Step14: 3.2. Timestep Shortwave Radiative Transfer
Step15: 3.3. Timestep Longwave Radiative Transfer
Step16: 4. Key Properties --> Orography
Step17: 4.2. Changes
Step18: 5. Grid --> Discretisation
Step19: 6. Grid --> Discretisation --> Horizontal
Step20: 6.2. Scheme Method
Step21: 6.3. Scheme Order
Step22: 6.4. Horizontal Pole
Step23: 6.5. Grid Type
Step24: 7. Grid --> Discretisation --> Vertical
Step25: 8. Dynamical Core
Step26: 8.2. Name
Step27: 8.3. Timestepping Type
Step28: 8.4. Prognostic Variables
Step29: 9. Dynamical Core --> Top Boundary
Step30: 9.2. Top Heat
Step31: 9.3. Top Wind
Step32: 10. Dynamical Core --> Lateral Boundary
Step33: 11. Dynamical Core --> Diffusion Horizontal
Step34: 11.2. Scheme Method
Step35: 12. Dynamical Core --> Advection Tracers
Step36: 12.2. Scheme Characteristics
Step37: 12.3. Conserved Quantities
Step38: 12.4. Conservation Method
Step39: 13. Dynamical Core --> Advection Momentum
Step40: 13.2. Scheme Characteristics
Step41: 13.3. Scheme Staggering Type
Step42: 13.4. Conserved Quantities
Step43: 13.5. Conservation Method
Step44: 14. Radiation
Step45: 15. Radiation --> Shortwave Radiation
Step46: 15.2. Name
Step47: 15.3. Spectral Integration
Step48: 15.4. Transport Calculation
Step49: 15.5. Spectral Intervals
Step50: 16. Radiation --> Shortwave GHG
Step51: 16.2. ODS
Step52: 16.3. Other Flourinated Gases
Step53: 17. Radiation --> Shortwave Cloud Ice
Step54: 17.2. Physical Representation
Step55: 17.3. Optical Methods
Step56: 18. Radiation --> Shortwave Cloud Liquid
Step57: 18.2. Physical Representation
Step58: 18.3. Optical Methods
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Step60: 20. Radiation --> Shortwave Aerosols
Step61: 20.2. Physical Representation
Step62: 20.3. Optical Methods
Step63: 21. Radiation --> Shortwave Gases
Step64: 22. Radiation --> Longwave Radiation
Step65: 22.2. Name
Step66: 22.3. Spectral Integration
Step67: 22.4. Transport Calculation
Step68: 22.5. Spectral Intervals
Step69: 23. Radiation --> Longwave GHG
Step70: 23.2. ODS
Step71: 23.3. Other Flourinated Gases
Step72: 24. Radiation --> Longwave Cloud Ice
Step73: 24.2. Physical Reprenstation
Step74: 24.3. Optical Methods
Step75: 25. Radiation --> Longwave Cloud Liquid
Step76: 25.2. Physical Representation
Step77: 25.3. Optical Methods
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Step79: 27. Radiation --> Longwave Aerosols
Step80: 27.2. Physical Representation
Step81: 27.3. Optical Methods
Step82: 28. Radiation --> Longwave Gases
Step83: 29. Turbulence Convection
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Step85: 30.2. Scheme Type
Step86: 30.3. Closure Order
Step87: 30.4. Counter Gradient
Step88: 31. Turbulence Convection --> Deep Convection
Step89: 31.2. Scheme Type
Step90: 31.3. Scheme Method
Step91: 31.4. Processes
Step92: 31.5. Microphysics
Step93: 32. Turbulence Convection --> Shallow Convection
Step94: 32.2. Scheme Type
Step95: 32.3. Scheme Method
Step96: 32.4. Processes
Step97: 32.5. Microphysics
Step98: 33. Microphysics Precipitation
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Step100: 34.2. Hydrometeors
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Step102: 35.2. Processes
Step103: 36. Cloud Scheme
Step104: 36.2. Name
Step105: 36.3. Atmos Coupling
Step106: 36.4. Uses Separate Treatment
Step107: 36.5. Processes
Step108: 36.6. Prognostic Scheme
Step109: 36.7. Diagnostic Scheme
Step110: 36.8. Prognostic Variables
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Step112: 37.2. Cloud Inhomogeneity
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Step114: 38.2. Function Name
Step115: 38.3. Function Order
Step116: 38.4. Convection Coupling
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Step118: 39.2. Function Name
Step119: 39.3. Function Order
Step120: 39.4. Convection Coupling
Step121: 40. Observation Simulation
Step122: 41. Observation Simulation --> Isscp Attributes
Step123: 41.2. Top Height Direction
Step124: 42. Observation Simulation --> Cosp Attributes
Step125: 42.2. Number Of Grid Points
Step126: 42.3. Number Of Sub Columns
Step127: 42.4. Number Of Levels
Step128: 43. Observation Simulation --> Radar Inputs
Step129: 43.2. Type
Step130: 43.3. Gas Absorption
Step131: 43.4. Effective Radius
Step132: 44. Observation Simulation --> Lidar Inputs
Step133: 44.2. Overlap
Step134: 45. Gravity Waves
Step135: 45.2. Sponge Layer
Step136: 45.3. Background
Step137: 45.4. Subgrid Scale Orography
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Step139: 46.2. Source Mechanisms
Step140: 46.3. Calculation Method
Step141: 46.4. Propagation Scheme
Step142: 46.5. Dissipation Scheme
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Step144: 47.2. Source Mechanisms
Step145: 47.3. Calculation Method
Step146: 47.4. Propagation Scheme
Step147: 47.5. Dissipation Scheme
Step148: 48. Solar
Step149: 49. Solar --> Solar Pathways
Step150: 50. Solar --> Solar Constant
Step151: 50.2. Fixed Value
Step152: 50.3. Transient Characteristics
Step153: 51. Solar --> Orbital Parameters
Step154: 51.2. Fixed Reference Date
Step155: 51.3. Transient Method
Step156: 51.4. Computation Method
Step157: 52. Solar --> Insolation Ozone
Step158: 53. Volcanos
Step159: 54. Volcanos --> Volcanoes Treatment
|
6,621
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
email_list_lst=[('Omar','Bayramov','omarbayramov@hotmail.com',1),
('Ali','Aliyev','alialiyev@example.com',0),
('Dmitry','Vladimirov','v.dmitry@koala.kl',1),
('Donald','Trump','grabthat@pussycatdolls.com',1),
('Rashid','Maniyev','rashid.maniyev@exponential.az',1),
]
email_list_lst_cln=['f_name','l_name','email_adrs','a_status',]
df=pd.DataFrame.from_records(email_list_lst, columns=email_list_lst_cln)
df
email_list_lst=[('f_name', ['Omar', 'Ali', 'Dmitry', 'Donald', 'Rashid',]),
('l_name', ['Bayramov', 'Aliyev', 'Vladimirov', 'Trump', 'Maniyev',]),
('email_adrs', ['omarbayramov@hotmail.com', 'alialiyev@example.com', 'v.dmitry@koala.kl', 'grabthat@pussycatdolls.com', 'rashid.maniyev@exponential.az',]),
('a_status', [1, 0, 1, 1, 1,]),
]
df = pd.DataFrame.from_items(email_list_lst)  # note: from_items was removed in newer pandas; pd.DataFrame.from_dict(dict(email_list_lst)) is the modern equivalent
df
email_list=[{
'f_name' : 'Omar',
'l_name': 'Bayramov',
'email_adrs' : 'omarbayramov@hotmail.com',
'a_status' : 1
},
{'f_name' : 'Ali', 'l_name': 'Aliyev', 'email_adrs':'alialiyev@example.com', 'a_status' : 0},
{'f_name': 'Dmitry', 'l_name': 'Vladimirov', 'email_adrs':'v.dmitry@koala.kl', 'a_status':1},
{'f_name': 'Donald', 'l_name': 'Trump', 'email_adrs':'grabthat@pussycatdolls.com', 'a_status':1},
{'f_name': 'Rashid', 'l_name': 'Maniyev', 'email_adrs':'rashid.maniyev@exponential.az', 'a_status':1},
]
df=pd.DataFrame(email_list,)
df
df=df[['f_name','l_name','email_adrs','a_status',]]
df
email_list_dct={'f_name': ['Omar', 'Ali', 'Dmitry', 'Donald', 'Rashid',],
'l_name': ['Bayramov', 'Aliyev', 'Vladimirov', 'Trump', 'Maniyev',],
'email_adrs': ['omarbayramov@hotmail.com', 'alialiyev@example.com', 'v.dmitry@koala.kl', 'grabthat@pussycatdolls.com', 'rashid.maniyev@exponential.az',],
'a_status': [1, 0, 1, 1, 1,],
}
df = pd.DataFrame.from_dict(email_list_dct)
df=df[['f_name','l_name','email_adrs','a_status',]]
df
df = pd.read_excel('https://raw.githubusercontent.com/limpapud/datasets/master/Tutorial_datasets/excel_to_dataframe.xlsx',
sheet_name='data_for_ttrl')
df
df = pd.read_csv('https://raw.githubusercontent.com/limpapud/datasets/master/Tutorial_datasets/csv_to_dataframe.csv',
delimiter=',')
df
df = pd.read_json('https://raw.githubusercontent.com/limpapud/datasets/master/Tutorial_datasets/json_to_dataframe.json')
df = df[['f_name','l_name','email_adrs','a_status',]]
df
import sqlalchemy
from sqlalchemy import create_engine
import sqlite3
engine = create_engine('sqlite:///C:/Users/omarbayramov/Documents/GitHub/datasets/Tutorial_datasets/sql_to_dataframe.db')
con=engine.connect()
a=con.execute('SELECT * FROM emails')
data=a.fetchall()
a.close()
data
df=pd.DataFrame(data, columns=['f_name','l_name','email_adrs','a_status',])
df
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The data that will form the basis of the DataFrame is divided, by source, into internal and external forms
Step2: Next, we assign a list consisting of the column names to the email_list_lst_cln variable.
Step3: Finally, we pass the email_list_lst and email_list_lst_cln values to the DataFrame's "from_records" function, build a table from the email_list_lst values with the email_list_lst_cln columns, and then display the table.
Step4: List
Step5: Dictionary
Step6: As you can see here, although the data made it into the DataFrame, the columns are sorted alphabetically rather than in the order we wanted. To get around this, we must either specify the column names and their order up front through the columns parameter, as when creating the DataFrame above, or reorder the columns afterwards with the command shown below.
Step7: Dictionary
Step8: Let's create the table and reorder the columns
Step9: External data sources outside Python
Step10: CSV
Step11: JSON
Step12: SQL
Step13: Let's create an engine for querying and point it to the database file.
Step14: Let's create a connection and query all rows from the emails table stored in the database.
Step15: Once the data has been queried, let's "read" the rows with the fetchall function, assign them to the data variable, and finally close the database connection.
Step16: Does the structure of the data we obtained look familiar? If you look carefully, it is the same list-of-tuples structure we first became familiar with
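As a closing note (a sketch, not part of the original notebook), pandas can also run the query and build the DataFrame in a single call with read_sql; the engine is the SQLAlchemy engine created above, and the emails table is the one used in this example.
import pandas as pd
df_sql = pd.read_sql('SELECT * FROM emails', con=engine)
df_sql.head()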
|
6,622
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
from __future__ import print_function
print(plt.style.available)
# Let's choose one style. And while we are at it, define thicker lines and big graphic sizes
plt.style.use('bmh')
plt.rcParams['lines.linewidth'] = 1.5
plt.rcParams['figure.figsize'] = (15, 5)
import numpy as np
x = np.arange( -10, 11 )
y = x*x
plt.plot(x,y)
plt.xlabel('x');
plt.ylabel('x square');
plt.plot(x,y,'ro-');
# Create a figure object
fig = plt.figure()
# Add a graph to the figure. We get an axes object
ax = fig.add_subplot(1, 1, 1) # specify (nrows, ncols, axnum)
# Create two vectors: x, y
x = np.linspace(0, 10, 1000)
y = np.sin(x)
# Plot those vectors on the axes we have
ax.plot(x, y)
# Add another plot to the same axes
y2 = np.cos(x)
ax.plot(x, y2)
# Modify the axes
ax.set_ylim(-1.5, 1.5)
# Add labels
ax.set_xlabel("$x$")
ax.set_ylabel("$f(x)$")
ax.set_title("Sinusoids")
# Add a legend
ax.legend(['sine', 'cosine']);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now we need to import the library in our notebook. There are a number of different ways to do it, depending on what part of matplotlib we want to import and how it should be imported into the namespace. This is one of the most common ones; it means that we will use the plt. prefix to refer to the Matplotlib API
Step2: Matplotlib allows extensive customization of the graph aspect. Some of these customizations come together in "styles". Let's see which styles are available
Step3: Simple plots
Step4: And we plot it
Step5: We can extensively alter the aspect of the plot. For instance, we can add markers and change color
Step6: Matplotlib syntax
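As a small addition to the syntax discussion (a sketch, not taken from the notebook), the same object-oriented pattern is usually written with plt.subplots, which creates the figure and the axes in one call:
import numpy as np
import matplotlib.pyplot as plt
fig, ax = plt.subplots()               # figure and axes in a single call
x = np.linspace(0, 10, 1000)
ax.plot(x, np.sin(x), label='sine')
ax.plot(x, np.cos(x), label='cosine')
ax.legend()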
|
6,623
|
<ASSISTANT_TASK:>
Python Code:
the_number_four = 4
print(the_number_four)
x = 4
x = 5
print(x)
the_number_four = 4
print("theNumberFour")
print(the_number_five)
the_number_five = 5
the_number_five = 5
print(the_number_five)
a = 4
a = b # throws a NameError, b is not defined!
# So what should we do if we want b to equal a? Well, we would have to write:
a = 4
b = a
# and now both a and b should equal 4
print(a)
print(b)
# and lastly, a and b are both numbers, so a + b is that same as 4 + 4 (more on numbers later).
print(4 + 4)
print(a + b)
a = 5
a = a + 5
print(a)
a = 5
b = 5
a = a + b # 5 + 5 = 10 ('a' therefore now equals 10, not 5)
a = a + b # 10 + 5 = 15
print(a)
long_variable_name = 10
long_variable_name = long_variable_name + 10
# is the same as:
long_variable_name += 10
a = 5
a2 = 5
a += 5
a2 = a2 + 5
print(a, a2) # <--- They are both the same, see!?
# "updating" works for strings too:
s = "hello"
s += " "
s += "world"
print(s)
# my_name code goes here...
# kitchen_utensil code goes here...
# copy & paste the above print statement
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: QUESTION
Step2: So what is going on here? Well, Python first defines 'x' as equal to 4. In the very next line Python says "oh, x is equal to 5 now", and it "forgets" the old value of 4. In short, there were never two versions of 'x' here. Rather, there is one 'x': it used to equal 4 and now it equals 5.
Step3: So...what happened? Well, when we use quotation marks Python interprets that as a string. But without quotes, Python looks at the_number_four and says
Step4: You may recall in the very first lecture we got a name error when we tried to call Print() when we actually meant print(). We got a name error in this case too, but the cause is a bit different. What happened here is that Python executes code sequentially line by line, thus line 1 is executed before line 2, and so on.
Step5: Don't confuse assignment with 'Equals'
Step6: Incrementing Variables
Step7: In the example above we set 'a' to the value 5. In the next line we assign 'a' to the value ('a' + 5). And since 'a' is 5, Python interprets this as 5 + 5, which is of course 10. Here is another example
Step8: The final point I'd like to make on this is that, since this concept is super-useful, the designers of Python thought it would be a good idea to be able to do this sort of calculation using just two characters
Step9: 'a += b' functions EXACTLY the same as 'a = a + b'; the former is simply a short-cut that means you can do less typing, which can be useful if something has a long_variable_name. (A short sketch of the other augmented assignment operators follows this list.)
Step10: Homework
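The sketch referenced in Step9 (illustrative, not part of the lecture code): the same shorthand exists for the other arithmetic operators.
a = 10
a -= 3    # same as a = a - 3  -> 7
a *= 2    # same as a = a * 2  -> 14
a /= 7    # same as a = a / 7  -> 2.0
print(a)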
|
6,624
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
%%time
Employees = pd.read_excel('/home/data/AdventureWorks/Employees.xls')
%%time
Territory = pd.read_excel('/home/data/AdventureWorks/SalesTerritory.xls')
%%time
Customers = pd.read_excel('/home/data/AdventureWorks/Customers.xls')
%%time
Orders = pd.read_excel('/home/data/AdventureWorks/ItemsOrdered.xls')
# SQL version of the filter (kept as a plain string so the cell stays valid Python; the variable name is just for reference).
employee_filter_sql = """
SELECT e.EmployeeID, e.Title, e.FirstName, e.LastName, e.VacationHours, e.SickLeaveHours
FROM dbo.Employees AS e
WHERE e.VacationHours < 60
;
"""
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Filtering (with)
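A hedged sketch of the pandas equivalent of the SQL filter shown above, assuming the Employees spreadsheet loaded earlier uses the same column names as the SQL query:
low_vacation = Employees.loc[Employees['VacationHours'] < 60,
                             ['EmployeeID', 'Title', 'FirstName', 'LastName',
                              'VacationHours', 'SickLeaveHours']]
low_vacation.head()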
|
6,625
|
<ASSISTANT_TASK:>
Python Code:
from pymldb import Connection
mldb = Connection("http://localhost/")
mldb.put('/v1/procedures/import_bench_train_1m', {
"type": "import.text",
"params": {
"dataFileUrl": "https://s3.amazonaws.com/benchm-ml--main/train-1m.csv",
"outputDataset":"bench_train_1m",
"runOnCreation": True
}
})
mldb.put('/v1/procedures/import_bench_test', {
"type": "import.text",
"params": {
"dataFileUrl": "https://s3.amazonaws.com/benchm-ml--main/test.csv",
"outputDataset":"bench_test",
"runOnCreation": True
}
})
print "Datasets loaded."
mldb.put('/v1/procedures/benchmark', {
"type": "classifier.experiment",
"params": {
"experimentName": "benchm_ml",
"inputData":
select
{* EXCLUDING(dep_delayed_15min)} as features,
dep_delayed_15min = 'Y' as label
from bench_train_1m
,
"testingDataOverride":
select
{* EXCLUDING(dep_delayed_15min)} as features,
dep_delayed_15min = 'Y' as label
from bench_test
,
"configuration": {
"type": "bagging",
"num_bags": 100,
"validation_split": 0,
"weak_learner": {
"type": "decision_tree",
"max_depth": 20,
"random_feature_propn": 0.3
}
},
"modelFileUrlPattern": "file:///mldb_data/models/benchml_$runid.cls",
"mode": "boolean"
}
})
print "Ready to go!"
import time
start_time = time.time()
result = mldb.post('/v1/procedures/benchmark/runs')
run_time = time.time() - start_time
auc = result.json()["status"]["folds"][0]["resultsTest"]["auc"]
print "\n\nAUC = %0.10f, time = %0.4f\n\n" % (auc, run_time)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Next we create the datasets directly from the remote files.
Step4: Now we create the experimental setup.
Step5: Finally, we run the experiment inside a timing block. On an otherwise-unloaded AWS EC2 r3.8xlarge instance (32 cores, 240GB of RAM) it takes around 20 seconds to reach an AUC of more than 0.74.
|
6,626
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
! pip install -q tensorflow-model-optimization
import tensorflow as tf
from tensorflow import keras
import numpy as np
import tempfile
import zipfile
import os
# Load MNIST dataset
mnist = keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# Normalize the input image so that each pixel value is between 0 to 1.
train_images = train_images / 255.0
test_images = test_images / 255.0
# Define the model architecture.
model = keras.Sequential([
keras.layers.InputLayer(input_shape=(28, 28)),
keras.layers.Reshape(target_shape=(28, 28, 1)),
keras.layers.Conv2D(filters=12, kernel_size=(3, 3), activation=tf.nn.relu),
keras.layers.MaxPooling2D(pool_size=(2, 2)),
keras.layers.Flatten(),
keras.layers.Dense(10)
])
# Train the digit classification model
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
model.fit(
train_images,
train_labels,
validation_split=0.1,
epochs=10
)
_, baseline_model_accuracy = model.evaluate(
test_images, test_labels, verbose=0)
print('Baseline test accuracy:', baseline_model_accuracy)
_, keras_file = tempfile.mkstemp('.h5')
print('Saving model to: ', keras_file)
tf.keras.models.save_model(model, keras_file, include_optimizer=False)
import tensorflow_model_optimization as tfmot
cluster_weights = tfmot.clustering.keras.cluster_weights
CentroidInitialization = tfmot.clustering.keras.CentroidInitialization
clustering_params = {
'number_of_clusters': 16,
'cluster_centroids_init': CentroidInitialization.LINEAR
}
# Cluster a whole model
clustered_model = cluster_weights(model, **clustering_params)
# Use smaller learning rate for fine-tuning clustered model
opt = tf.keras.optimizers.Adam(learning_rate=1e-5)
clustered_model.compile(
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=opt,
metrics=['accuracy'])
clustered_model.summary()
# Fine-tune model
clustered_model.fit(
train_images,
train_labels,
batch_size=500,
epochs=1,
validation_split=0.1)
_, clustered_model_accuracy = clustered_model.evaluate(
test_images, test_labels, verbose=0)
print('Baseline test accuracy:', baseline_model_accuracy)
print('Clustered test accuracy:', clustered_model_accuracy)
final_model = tfmot.clustering.keras.strip_clustering(clustered_model)
_, clustered_keras_file = tempfile.mkstemp('.h5')
print('Saving clustered model to: ', clustered_keras_file)
tf.keras.models.save_model(final_model, clustered_keras_file,
include_optimizer=False)
clustered_tflite_file = '/tmp/clustered_mnist.tflite'
converter = tf.lite.TFLiteConverter.from_keras_model(final_model)
tflite_clustered_model = converter.convert()
with open(clustered_tflite_file, 'wb') as f:
f.write(tflite_clustered_model)
print('Saved clustered TFLite model to:', clustered_tflite_file)
def get_gzipped_model_size(file):
# It returns the size of the gzipped model in bytes.
import os
import zipfile
_, zipped_file = tempfile.mkstemp('.zip')
with zipfile.ZipFile(zipped_file, 'w', compression=zipfile.ZIP_DEFLATED) as f:
f.write(file)
return os.path.getsize(zipped_file)
print("Size of gzipped baseline Keras model: %.2f bytes" % (get_gzipped_model_size(keras_file)))
print("Size of gzipped clustered Keras model: %.2f bytes" % (get_gzipped_model_size(clustered_keras_file)))
print("Size of gzipped clustered TFlite model: %.2f bytes" % (get_gzipped_model_size(clustered_tflite_file)))
converter = tf.lite.TFLiteConverter.from_keras_model(final_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_quant_model = converter.convert()
_, quantized_and_clustered_tflite_file = tempfile.mkstemp('.tflite')
with open(quantized_and_clustered_tflite_file, 'wb') as f:
f.write(tflite_quant_model)
print('Saved quantized and clustered TFLite model to:', quantized_and_clustered_tflite_file)
print("Size of gzipped baseline Keras model: %.2f bytes" % (get_gzipped_model_size(keras_file)))
print("Size of gzipped clustered and quantized TFlite model: %.2f bytes" % (get_gzipped_model_size(quantized_and_clustered_tflite_file)))
def eval_model(interpreter):
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
# Run predictions on every image in the "test" dataset.
prediction_digits = []
for i, test_image in enumerate(test_images):
if i % 1000 == 0:
print('Evaluated on {n} results so far.'.format(n=i))
# Pre-processing: add batch dimension and convert to float32 to match with
# the model's input data format.
test_image = np.expand_dims(test_image, axis=0).astype(np.float32)
interpreter.set_tensor(input_index, test_image)
# Run inference.
interpreter.invoke()
# Post-processing: remove batch dimension and find the digit with highest
# probability.
output = interpreter.tensor(output_index)
digit = np.argmax(output()[0])
prediction_digits.append(digit)
print('\n')
# Compare prediction results with ground truth labels to calculate accuracy.
prediction_digits = np.array(prediction_digits)
accuracy = (prediction_digits == test_labels).mean()
return accuracy
interpreter = tf.lite.Interpreter(model_content=tflite_quant_model)
interpreter.allocate_tensors()
test_accuracy = eval_model(interpreter)
print('Clustered and quantized TFLite test_accuracy:', test_accuracy)
print('Clustered TF test accuracy:', clustered_model_accuracy)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Weight clustering in Keras example
Step2: Train a tf.keras model for MNIST without clustering
Step3: Evaluate the baseline model and save it for later usage
Step4: Fine-tune the pre-trained model with clustering
Step5: Fine-tune the model and evaluate the accuracy against baseline
Step6: For this example, there is minimal loss in test accuracy after clustering, compared to the baseline.
Step7: Create 6x smaller models from clustering
Step8: Then, create compressible models for TFLite. You can convert the clustered model to a format that's runnable on your targeted backend. TensorFlow Lite is an example you can use to deploy to mobile devices.
Step9: Define a helper function to actually compress the models via gzip and measure the zipped size.
Step10: Compare and see that the models are 6x smaller from clustering
Step11: Create an 8x smaller TFLite model from combining weight clustering and post-training quantization
Step12: See the persistence of accuracy from TF to TFLite
Step13: You evaluate the model, which has been clustered and quantized, and then see the accuracy from TensorFlow persists to the TFLite backend.
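As an optional sanity check (a sketch, not part of the original notebook), you can count the distinct values in each weight tensor of the stripped model; clustered kernels should collapse to at most the 16 centroids configured above.
import numpy as np
for layer in final_model.layers:
    for weight in layer.get_weights():
        print(layer.name, weight.shape, 'unique values:', len(np.unique(weight)))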
|
6,627
|
<ASSISTANT_TASK:>
Python Code:
from sklearn.datasets import load_digits
digits = load_digits()
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed; model_selection is its current home
X_train, X_test, y_train, y_test = train_test_split(digits.data,
digits.target)
X_train.shape
from sklearn.svm import LinearSVC
svm = LinearSVC(C=0.1)
svm.fit(X_train, y_train)
print(svm.predict(X_test))
svm.score(X_train, y_train)
svm.score(X_test, y_test)
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier(n_estimators=50)
rf.fit(X_train, y_train)
rf.score(X_test, y_test)
%load https://raw.githubusercontent.com/scikit-learn/scikit-learn/master/examples/classification/plot_classifier_comparison.py
# %load solutions/train_iris.py
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Really Simple API
Step2: 1) Instantiate an object and set the parameters
Step3: 2) Fit the model
Step4: 3) Apply / evaluate
Step5: And again
Step6: Exercises
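For the exercises, the same three-step pattern (instantiate, fit, evaluate) carries over to any estimator; a minimal sketch with KNeighborsClassifier (my choice of estimator, not one used above):
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=5)   # 1) instantiate an object and set the parameters
knn.fit(X_train, y_train)                   # 2) fit the model
print(knn.score(X_test, y_test))            # 3) apply / evaluate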
|
6,628
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
GMAT_PATH = '/home/daniel/GMAT/R2018a/bin/GMAT-R2018a'
import numpy as np
import matplotlib.pyplot as plt
from astropy.time import Time
import subprocess
# Larger figure size
fig_size = [10, 6]
plt.rcParams['figure.figsize'] = fig_size
gmat_script_template = """
%----------------------------------------
%---------- Spacecraft
%----------------------------------------
Create Spacecraft DSLWP_B;
DSLWP_B.DateFormat = UTCModJulian;
DSLWP_B.Epoch = '{epoch}';
DSLWP_B.CoordinateSystem = EarthFixed;
DSLWP_B.DisplayStateType = Cartesian;
DSLWP_B.X = {x};
DSLWP_B.Y = {y};
DSLWP_B.Z = {z};
DSLWP_B.VX = {vx};
DSLWP_B.VY = {vy};
DSLWP_B.VZ = {vz};
DSLWP_B.DryMass = 45;
DSLWP_B.DragArea = 0.25;
DSLWP_B.SRPArea = 0.25;
%----------------------------------------
%---------- ForceModels
%----------------------------------------
Create ForceModel LunaProp_ForceModel;
LunaProp_ForceModel.CentralBody = Luna;
LunaProp_ForceModel.PrimaryBodies = {{Luna}};
LunaProp_ForceModel.PointMasses = {{Earth, Jupiter, Mars, Neptune, Saturn, Sun, Uranus, Venus}};
LunaProp_ForceModel.Drag = None;
LunaProp_ForceModel.SRP = On;
LunaProp_ForceModel.RelativisticCorrection = On;
LunaProp_ForceModel.ErrorControl = RSSStep;
LunaProp_ForceModel.GravityField.Luna.Degree = 10;
LunaProp_ForceModel.GravityField.Luna.Order = 10;
LunaProp_ForceModel.GravityField.Luna.StmLimit = 100;
LunaProp_ForceModel.GravityField.Luna.PotentialFile = 'LP165P.cof';
LunaProp_ForceModel.GravityField.Luna.TideModel = 'None';
%----------------------------------------
%---------- Propagators
%----------------------------------------
Create Propagator LunaProp;
LunaProp.FM = LunaProp_ForceModel;
LunaProp.Type = PrinceDormand78;
LunaProp.InitialStepSize = 1;
LunaProp.Accuracy = 1e-13;
LunaProp.MinStep = 0;
LunaProp.MaxStep = 600;
%----------------------------------------
%---------- Coordinate Systems
%----------------------------------------
Create CoordinateSystem LunaInertial;
LunaInertial.Origin = Luna;
LunaInertial.Axes = BodyInertial;
%----------------------------------------
%---------- Subscribers
%----------------------------------------
Create OrbitView LunaOrbitView;
GMAT LunaOrbitView.SolverIterations = None;
GMAT LunaOrbitView.UpperLeft = [ 0.1801470588235294 0.04190751445086705 ];
GMAT LunaOrbitView.Size = [ 0.9926470588235294 0.9552023121387283 ];
GMAT LunaOrbitView.RelativeZOrder = 126;
GMAT LunaOrbitView.Maximized = true;
GMAT LunaOrbitView.Add = {{DSLWP_B, Earth, Luna, Sun}};
GMAT LunaOrbitView.CoordinateSystem = LunaInertial;
GMAT LunaOrbitView.DrawObject = [ true true true true ];
GMAT LunaOrbitView.DataCollectFrequency = 1;
GMAT LunaOrbitView.UpdatePlotFrequency = 50;
GMAT LunaOrbitView.NumPointsToRedraw = 0;
GMAT LunaOrbitView.ShowPlot = true;
GMAT LunaOrbitView.MaxPlotPoints = 20000;
GMAT LunaOrbitView.ShowLabels = true;
GMAT LunaOrbitView.ViewPointReference = Luna;
GMAT LunaOrbitView.ViewPointVector = [ 30000 0 0 ];
GMAT LunaOrbitView.ViewDirection = Luna;
GMAT LunaOrbitView.ViewScaleFactor = 1;
GMAT LunaOrbitView.ViewUpCoordinateSystem = LunaInertial;
GMAT LunaOrbitView.ViewUpAxis = Z;
GMAT LunaOrbitView.EclipticPlane = Off;
GMAT LunaOrbitView.XYPlane = On;
GMAT LunaOrbitView.WireFrame = Off;
GMAT LunaOrbitView.Axes = On;
GMAT LunaOrbitView.Grid = Off;
GMAT LunaOrbitView.SunLine = Off;
GMAT LunaOrbitView.UseInitialView = On;
GMAT LunaOrbitView.StarCount = 7000;
GMAT LunaOrbitView.EnableStars = On;
GMAT LunaOrbitView.EnableConstellations = Off;
Create ReportFile OrbitReport;
OrbitReport.Filename = '/home/daniel/jupyter_notebooks/dslwp/OrbitReport_{label}.txt';
OrbitReport.Add = {{DSLWP_B.UTCModJulian, DSLWP_B.Luna.SMA, DSLWP_B.Luna.ECC, DSLWP_B.LunaInertial.INC, DSLWP_B.LunaInertial.RAAN, DSLWP_B.LunaInertial.AOP, DSLWP_B.Luna.MA, DSLWP_B.Luna.TA}};
OrbitReport.WriteHeaders = false;
OrbitReport.WriteReport = true;
%----------------------------------------
%---------- Mission Sequence
%----------------------------------------
BeginMissionSequence;
Toggle OrbitReport Off
If DSLWP_B.UTCModJulian <= {start}
Propagate LunaProp(DSLWP_B) {{DSLWP_B.UTCModJulian = {start}}}
Else
Propagate BackProp LunaProp(DSLWP_B) {{DSLWP_B.UTCModJulian = {start}}}
EndIf
Toggle OrbitReport On
Propagate LunaProp(DSLWP_B) {{DSLWP_B.UTCModJulian = {end}}}
"""
mjd_unixtimestamp_offset = 10587.5
seconds_in_day = 3600 * 24
def mjd2unixtimestamp(m):
return (m - mjd_unixtimestamp_offset) * seconds_in_day
def unixtimestamp2mjd(u):
return u / seconds_in_day + mjd_unixtimestamp_offset
unixtimestamp2mjd(1528607994)
def load_tracking_file(path):
ncols = 7
data = np.fromfile(path, sep=' ', count=ncols)
return data
def load_orbit_file(path):
ncols = 8
data = np.fromfile(path, sep=' ')
return data.reshape((data.size // ncols, ncols))
utc = 0
sma = 1
ecc = 2
inc = 3
raan = 4
aop = 5
ma = 6
ta = 7
def gmat_propagate_tracking(track, start, end, label = '', do_not_close = False):
data = {'label' : label, 'start' : start, 'end' : end}
data['epoch'] = unixtimestamp2mjd(track[0])
data['x'], data['y'], data['z'] = track[1:4]
data['vx'], data['vy'], data['vz'] = track[4:7]
SCRIPT_PATH = '/tmp/gmat.script'
with open(SCRIPT_PATH, 'w') as f:
f.write(gmat_script_template.format(**data))
subprocess.call([GMAT_PATH, '-r', SCRIPT_PATH] + (['-x'] if not do_not_close else []))
#parts = ['20180526', '20180528', '20180529', '20180531', '20180601', '20180602', '20180603', '20180607', '20180609']
#parts = ['20180602', '20180603', '20180607', '20180609', '20180610', '20180615', '20180619', '20180622']
#parts = ['20180610', '20180615', '20180619', '20180622']
parts = ['20180629', '20180714', '20180727a']
parts = ['20180727a']
parts = ['20180622']
parts = ['20180727a', '20180803', '20180812', '20180814', '20180816', '20180818']
parts = ['20180812', '20180912', '20180914', '20180916', '20180930', '20181004', '20181006']
parts = ['20180812', '20180912', '20180914', '20180916', '20180930', '20181004', '20181006']
parts = ['20181006', '20181008', '20181010', '20181013', '20181015', '20181017', '20181019', '20181021']
parts = ['20181019']
parts = ['20190317']
parts = ['20190426']
parts = ['20190520']
parts = ['20190603']
parts = ['20190630']
for part in parts:
tracking = load_tracking_file('tracking_files/program_tracking_dslwp-b_{}.txt'.format(part))
gmat_propagate_tracking(tracking, start = '28560', end = '28570', label = part)
fig1 = plt.figure(figsize = [15,8], facecolor='w')
fig2 = plt.figure(figsize = [15,8], facecolor='w')
fig3 = plt.figure(figsize = [15,8], facecolor='w')
fig4 = plt.figure(figsize = [15,8], facecolor='w')
fig5 = plt.figure(figsize = [15,8], facecolor='w')
fig6 = plt.figure(figsize = [15,8], facecolor='w')
sub1 = fig1.add_subplot(111)
sub2 = fig2.add_subplot(111)
sub3 = fig3.add_subplot(111)
sub4 = fig4.add_subplot(111)
sub5 = fig5.add_subplot(111)
sub6 = fig6.add_subplot(111)
for part in parts:
orbit = load_orbit_file('OrbitReport_{}.txt'.format(part))
t = Time(mjd2unixtimestamp(orbit[:,utc]), format='unix')
sub1.plot(t.datetime, orbit[:,sma])
sub2.plot(t.datetime, orbit[:,ma])
sub3.plot(t.datetime, orbit[:,ecc])
sub4.plot(t.datetime, orbit[:,aop])
sub5.plot(t.datetime, orbit[:,inc])
sub6.plot(orbit[:,utc], orbit[:,raan])
sub1.legend(parts)
sub2.legend(parts)
sub3.legend(parts)
sub4.legend(parts)
sub5.legend(parts)
sub6.legend(parts)
sub1.set_xlabel('UTC time')
sub2.set_xlabel('UTC time')
sub3.set_xlabel('UTC time')
sub4.set_xlabel('UTC time')
sub5.set_xlabel('UTC time')
sub6.set_xlabel('UTC time')
sub1.set_ylabel('SMA (km)')
sub2.set_ylabel('MA (deg)')
sub3.set_ylabel('ECC')
sub4.set_ylabel('AOP (deg)')
sub5.set_ylabel('INC (deg)')
sub6.set_ylabel('RAAN (deg)')
sub1.set_title('Semi-major axis')
sub2.set_title('Mean anomaly')
sub3.set_title('Eccentricity')
sub4.set_title('Argument of periapsis')
sub5.set_title('Inclination')
sub6.set_title('Right ascension of ascending node');
plt.figure(figsize = [15,8], facecolor='w')
for part in parts:
orbit = load_orbit_file('OrbitReport_{}.txt'.format(part))
t = Time(mjd2unixtimestamp(orbit[:,utc]), format='unix')
plt.plot(t[:200].datetime, orbit[:200,ma])
plt.legend(parts)
plt.xlabel('UTC time')
plt.ylabel('MA (deg)')
plt.title('Mean anomaly');
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set this to the path of your GMAT executable
Step3: The GMAT script template contains fields ready to be filled in using Python's format() function.
Step4: Conversion between UNIX timestamp (used by the tracking files) and GMAT Modified Julian Day.
Step5: Utility function to load the first row from a tracking file and to load the Keplerian state report from GMAT.
Step6: Keys for each of the columns in the orbit (Keplerian state) report.
Step7: The function below takes the data from a tracking file, generates a GMAT script and executes it. GMAT is closed automatically after the script has run unless do_not_close is set to True. This can be useful to examine the simulation output in more detail.
Step8: Load the cartesian state from each tracking file, propagate the orbit and write a Keplerian state report using GMAT.
Step9: Plot the orbital parameters which vary significantly between different tracking files.
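A small follow-on sketch (not in the original notebook) that summarizes the same orbit reports numerically instead of plotting them, using the helpers and column indices defined above:
for part in parts:
    orbit = load_orbit_file('OrbitReport_{}.txt'.format(part))
    print(part, 'SMA range (km): %.1f - %.1f' % (orbit[:, sma].min(), orbit[:, sma].max()))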
|
6,629
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
! pip install -q tensorflow-model-optimization
import tensorflow as tf
import numpy as np
import tensorflow_model_optimization as tfmot
%load_ext tensorboard
import tempfile
input_shape = [20]
x_train = np.random.randn(1, 20).astype(np.float32)
y_train = tf.keras.utils.to_categorical(np.random.randn(1), num_classes=20)
def setup_model():
model = tf.keras.Sequential([
tf.keras.layers.Dense(20, input_shape=input_shape),
tf.keras.layers.Flatten()
])
return model
def setup_pretrained_weights():
model = setup_model()
model.compile(
loss=tf.keras.losses.categorical_crossentropy,
optimizer='adam',
metrics=['accuracy']
)
model.fit(x_train, y_train)
_, pretrained_weights = tempfile.mkstemp('.tf')
model.save_weights(pretrained_weights)
return pretrained_weights
def get_gzipped_model_size(model):
# Returns size of gzipped model, in bytes.
import os
import zipfile
_, keras_file = tempfile.mkstemp('.h5')
model.save(keras_file, include_optimizer=False)
_, zipped_file = tempfile.mkstemp('.zip')
with zipfile.ZipFile(zipped_file, 'w', compression=zipfile.ZIP_DEFLATED) as f:
f.write(keras_file)
return os.path.getsize(zipped_file)
setup_model()
pretrained_weights = setup_pretrained_weights()
base_model = setup_model()
base_model.load_weights(pretrained_weights) # optional but recommended.
model_for_pruning = tfmot.sparsity.keras.prune_low_magnitude(base_model)
model_for_pruning.summary()
# Create a base model
base_model = setup_model()
base_model.load_weights(pretrained_weights) # optional but recommended for model accuracy
# Helper function uses `prune_low_magnitude` to make only the
# Dense layers train with pruning.
def apply_pruning_to_dense(layer):
if isinstance(layer, tf.keras.layers.Dense):
return tfmot.sparsity.keras.prune_low_magnitude(layer)
return layer
# Use `tf.keras.models.clone_model` to apply `apply_pruning_to_dense`
# to the layers of the model.
model_for_pruning = tf.keras.models.clone_model(
base_model,
clone_function=apply_pruning_to_dense,
)
model_for_pruning.summary()
print(base_model.layers[0].name)
# Use `prune_low_magnitude` to make the `Dense` layer train with pruning.
i = tf.keras.Input(shape=(20,))
x = tfmot.sparsity.keras.prune_low_magnitude(tf.keras.layers.Dense(10))(i)
o = tf.keras.layers.Flatten()(x)
model_for_pruning = tf.keras.Model(inputs=i, outputs=o)
model_for_pruning.summary()
# Use `prune_low_magnitude` to make the `Dense` layer train with pruning.
model_for_pruning = tf.keras.Sequential([
tfmot.sparsity.keras.prune_low_magnitude(tf.keras.layers.Dense(20, input_shape=input_shape)),
tf.keras.layers.Flatten()
])
model_for_pruning.summary()
class MyDenseLayer(tf.keras.layers.Dense, tfmot.sparsity.keras.PrunableLayer):
def get_prunable_weights(self):
# Prune bias also, though that usually harms model accuracy too much.
return [self.kernel, self.bias]
# Use `prune_low_magnitude` to make the `MyDenseLayer` layer train with pruning.
model_for_pruning = tf.keras.Sequential([
tfmot.sparsity.keras.prune_low_magnitude(MyDenseLayer(20, input_shape=input_shape)),
tf.keras.layers.Flatten()
])
model_for_pruning.summary()
# Define the model.
base_model = setup_model()
base_model.load_weights(pretrained_weights) # optional but recommended for model accuracy
model_for_pruning = tfmot.sparsity.keras.prune_low_magnitude(base_model)
log_dir = tempfile.mkdtemp()
callbacks = [
tfmot.sparsity.keras.UpdatePruningStep(),
# Log sparsity and other metrics in Tensorboard.
tfmot.sparsity.keras.PruningSummaries(log_dir=log_dir)
]
model_for_pruning.compile(
loss=tf.keras.losses.categorical_crossentropy,
optimizer='adam',
metrics=['accuracy']
)
model_for_pruning.fit(
x_train,
y_train,
callbacks=callbacks,
epochs=2,
)
#docs_infra: no_execute
%tensorboard --logdir={log_dir}
# Define the model.
base_model = setup_model()
base_model.load_weights(pretrained_weights) # optional but recommended for model accuracy
model_for_pruning = tfmot.sparsity.keras.prune_low_magnitude(base_model)
# Boilerplate
loss = tf.keras.losses.categorical_crossentropy
optimizer = tf.keras.optimizers.Adam()
log_dir = tempfile.mkdtemp()
unused_arg = -1
epochs = 2
batches = 1 # example is hardcoded so that the number of batches cannot change.
# Non-boilerplate.
model_for_pruning.optimizer = optimizer
step_callback = tfmot.sparsity.keras.UpdatePruningStep()
step_callback.set_model(model_for_pruning)
log_callback = tfmot.sparsity.keras.PruningSummaries(log_dir=log_dir) # Log sparsity and other metrics in Tensorboard.
log_callback.set_model(model_for_pruning)
step_callback.on_train_begin() # run pruning callback
for _ in range(epochs):
log_callback.on_epoch_begin(epoch=unused_arg) # run pruning callback
for _ in range(batches):
step_callback.on_train_batch_begin(batch=unused_arg) # run pruning callback
with tf.GradientTape() as tape:
logits = model_for_pruning(x_train, training=True)
loss_value = loss(y_train, logits)
grads = tape.gradient(loss_value, model_for_pruning.trainable_variables)
optimizer.apply_gradients(zip(grads, model_for_pruning.trainable_variables))
step_callback.on_epoch_end(batch=unused_arg) # run pruning callback
#docs_infra: no_execute
%tensorboard --logdir={log_dir}
# Define the model.
base_model = setup_model()
base_model.load_weights(pretrained_weights) # optional but recommended for model accuracy
model_for_pruning = tfmot.sparsity.keras.prune_low_magnitude(base_model)
_, keras_model_file = tempfile.mkstemp('.h5')
# Checkpoint: saving the optimizer is necessary (include_optimizer=True is the default).
model_for_pruning.save(keras_model_file, include_optimizer=True)
# Deserialize model.
with tfmot.sparsity.keras.prune_scope():
loaded_model = tf.keras.models.load_model(keras_model_file)
loaded_model.summary()
# Define the model.
base_model = setup_model()
base_model.load_weights(pretrained_weights) # optional but recommended for model accuracy
model_for_pruning = tfmot.sparsity.keras.prune_low_magnitude(base_model)
# Typically you train the model here.
model_for_export = tfmot.sparsity.keras.strip_pruning(model_for_pruning)
print("final model")
model_for_export.summary()
print("\n")
print("Size of gzipped pruned model without stripping: %.2f bytes" % (get_gzipped_model_size(model_for_pruning)))
print("Size of gzipped pruned model with stripping: %.2f bytes" % (get_gzipped_model_size(model_for_export)))
base_model = setup_model()
# For using intrinsics on a CPU with 128-bit registers, together with 8-bit
# quantized weights, a 1x16 block size is nice because the block perfectly
# fits into the register.
pruning_params = {'block_size': [1, 16]}
model_for_pruning = tfmot.sparsity.keras.prune_low_magnitude(base_model, **pruning_params)
model_for_pruning.summary()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A comprehensive guide to pruning
Step2: Define the model
Step3: Prune some layers (Sequential and Functional)
Step4: In this example the layer type is used to decide what to prune, but the easiest way to prune specific layers is to set their name property and look that name up in clone_function.
Step5: More readable, but potentially worse model accuracy
Step6: Sequential example
Step7: Pruning a custom Keras layer, or changing which parts of a layer are pruned
Step8: Train the model
Step9: Users who are not using Colab can view the previous run of this notebook on TensorBoard.dev.
Step10: Users who are not using Colab can view the previous run of this notebook on TensorBoard.dev.
Step11: The above applies in general. The following code is only needed for the HDF5 model format (it is not needed for HDF5 weights or other formats).
Step12: Deploy the pruned model
Step13: Hardware-specific optimizations
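As a quick check to go with the deployment and hardware sections (a sketch, not from the guide itself), you can inspect how sparse the exported model's weights actually are:
import numpy as np
for weight in model_for_export.get_weights():
    print(weight.shape, 'zero fraction: %.2f' % np.mean(weight == 0))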
|
6,630
|
<ASSISTANT_TASK:>
Python Code:
import theano
import theano.tensor as T
# cf. https://github.com/lisa-lab/DeepLearningTutorials/blob/c4db2098e6620a0ac393f291ec4dc524375e96fd/code/logistic_sgd.py
import cPickle, gzip, numpy
import os
os.getcwd()
os.listdir( os.getcwd() )
f = gzip.open('./Data/mnist.pkl.gz')
train_set, valid_set, test_set = cPickle.load(f)
f.close()
type(train_set), type(valid_set), type(test_set)
type(train_set[0]), type(train_set[1])
def shared_dataset(data_xy):
"""Function that loads the dataset into shared variables
The reason we store our dataset in shared variables is to allow
Theano to copy it into the GPU memory (when code is run on GPU).
Since copying data into the GPU is slow, copying a minibatch every time
is needed (the default behavior if the data is not in a shared
variable) would lead to a large decrease in performance.
"""
data_x, data_y = data_xy
shared_x = theano.shared(numpy.asarray(data_x, dtype=theano.config.floatX))
shared_y = theano.shared(numpy.asarray(data_y, dtype=theano.config.floatX))
# When storing data on the GPU it has to be stored as floats
# therefore we will store the labels as ``floatX`` as well
# (``shared_y`` does exactly that). But during our computations
# we need them as ints (we use labels as index, and if they are
# floats it doesn't make sense) therefore instead of returning
# ``shared_y`` we will have to cast it to int. This little hack
# lets us get around this issue
return shared_x, T.cast(shared_y, 'int32')
test_set_x, test_set_y = shared_dataset(test_set)
valid_set_x, valid_set_y = shared_dataset(valid_set)
train_set_x, train_set_y = shared_dataset(train_set)
batch_size = 500 # size of the minibatch
# accessing the third minibatch of the training set
data = train_set_x[2 * batch_size: 3 * batch_size]
label = train_set_y[2 * batch_size: 3 * batch_size]
dir(train_set_x)
os.listdir("../DeepLearningTutorials/code")
import subprocess
subprocess.call(['python','../DeepLearningTutorials/code/logistic_sgd.py'])
subprocess.call(['THEANO_FLAGS=device=gpu,floatX=float32 python',
'../DeepLearningTutorials/code/logistic_sgd.py'])
execfile('../DeepLearningTutorials/code/logistic_sgd_b.py')
os.listdir( '../' )
import sklearn
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: cf. 3.2 Datasets, 3.2.1 MNIST Dataset
Step3: GPU note
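A hedged sketch of the GPU note: instead of prefixing the command line, the same flags can be set from Python before Theano is imported (the flag values here mirror the subprocess call above):
import os
os.environ.setdefault('THEANO_FLAGS', 'device=gpu,floatX=float32')  # must happen before `import theano`
import theano
print(theano.config.device, theano.config.floatX)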
|
6,631
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import graphviz
import lingam
from lingam.utils import print_causal_directions, print_dagc, make_dot
print([np.__version__, pd.__version__, graphviz.__version__, lingam.__version__])
np.set_printoptions(precision=3, suppress=True)
np.random.seed(0)
x3 = np.random.uniform(size=1000)
x0 = 3.0*x3 + np.random.uniform(size=1000)
x2 = 6.0*x3 + np.random.uniform(size=1000)
x1 = 3.0*x0 + 2.0*x2 + np.random.uniform(size=1000)
x5 = 4.0*x0 + np.random.uniform(size=1000)
x4 = 8.0*x0 - 1.0*x2 + np.random.uniform(size=1000)
X1 = pd.DataFrame(np.array([x0, x1, x2, x3, x4, x5]).T ,columns=['x0', 'x1', 'x2', 'x3', 'x4', 'x5'])
X1.head()
m = np.array([[0.0, 0.0, 0.0, 3.0, 0.0, 0.0],
[3.0, 0.0, 2.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 6.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[8.0, 0.0,-1.0, 0.0, 0.0, 0.0],
[4.0, 0.0, 0.0, 0.0, 0.0, 0.0]])
make_dot(m)
x3 = np.random.uniform(size=1000)
x0 = 3.5*x3 + np.random.uniform(size=1000)
x2 = 6.5*x3 + np.random.uniform(size=1000)
x1 = 3.5*x0 + 2.5*x2 + np.random.uniform(size=1000)
x5 = 4.5*x0 + np.random.uniform(size=1000)
x4 = 8.5*x0 - 1.5*x2 + np.random.uniform(size=1000)
X2 = pd.DataFrame(np.array([x0, x1, x2, x3, x4, x5]).T ,columns=['x0', 'x1', 'x2', 'x3', 'x4', 'x5'])
X2.head()
m = np.array([[0.0, 0.0, 0.0, 3.5, 0.0, 0.0],
[3.5, 0.0, 2.5, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 6.5, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[8.5, 0.0,-1.5, 0.0, 0.0, 0.0],
[4.5, 0.0, 0.0, 0.0, 0.0, 0.0]])
make_dot(m)
X_list = [X1, X2]
model = lingam.MultiGroupDirectLiNGAM()
model.fit(X_list)
model.causal_order_
print(model.adjacency_matrices_[0])
make_dot(model.adjacency_matrices_[0])
print(model.adjacency_matrices_[1])
make_dot(model.adjacency_matrices_[1])
X_all = pd.concat([X1, X2])
print(X_all.shape)
model_all = lingam.DirectLiNGAM()
model_all.fit(X_all)
model_all.causal_order_
make_dot(model_all.adjacency_matrix_)
p_values = model.get_error_independence_p_values(X_list)
print(p_values[0])
print(p_values[1])
results = model.bootstrap(X_list, n_sampling=100)
cdc = results[0].get_causal_direction_counts(n_directions=8, min_causal_effect=0.01)
print_causal_directions(cdc, 100)
cdc = results[1].get_causal_direction_counts(n_directions=8, min_causal_effect=0.01)
print_causal_directions(cdc, 100)
dagc = results[0].get_directed_acyclic_graph_counts(n_dags=3, min_causal_effect=0.01)
print_dagc(dagc, 100)
dagc = results[1].get_directed_acyclic_graph_counts(n_dags=3, min_causal_effect=0.01)
print_dagc(dagc, 100)
prob = results[0].get_probabilities(min_causal_effect=0.01)
print(prob)
causal_effects = results[0].get_total_causal_effects(min_causal_effect=0.01)
df = pd.DataFrame(causal_effects)
labels = [f'x{i}' for i in range(X1.shape[1])]
df['from'] = df['from'].apply(lambda x : labels[x])
df['to'] = df['to'].apply(lambda x : labels[x])
df
df.sort_values('effect', ascending=False).head()
df[df['to']=='x1'].head()
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
%matplotlib inline
from_index = 3
to_index = 0
plt.hist(results[0].total_effects_[:, to_index, from_index])
from_index = 3 # index of x3
to_index = 1 # index of x0
pd.DataFrame(results[0].get_paths(from_index, to_index))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Test data
Step2: We create a list variable that contains two datasets.
Step3: Causal Discovery
Step4: Using the causal_order_ properties, we can see the causal ordering as a result of the causal discovery.
Step5: Also, using the adjacency_matrix_ properties, we can see the adjacency matrix as a result of the causal discovery. As you can see from the following, DAG in each dataset is correctly estimated.
Step6: To compare, we run DirectLiNGAM with single dataset concatenating two datasets.
Step7: You can see that the causal structure cannot be estimated correctly for a single dataset.
Step8: Independence between error variables
Step9: Bootstrapping
Step10: Causal Directions
Step11: Directed Acyclic Graphs
Step12: Probability
Step13: Total Causal Effects
Step14: We can easily perform sorting operations with pandas.DataFrame.
Step15: And with pandas.DataFrame, we can easily filter by keywords. The following code extracts the causal direction towards x1.
Step16: Because it holds the raw data of the total causal effect (the original data for calculating the median), it is possible to draw a histogram of the values of the causal effect, as shown below.
Step17: Bootstrap Probability of Path
|
6,632
|
<ASSISTANT_TASK:>
Python Code:
# Ensure the right version of Tensorflow is installed.
!pip install tensorflow==2.6 --user
!pip install gym==0.12.5 --user
import gym
import numpy as np
import random
env = gym.make('FrozenLake-v0', is_slippery=False)
state = env.reset()
env.render()
print(state)
def print_state(state, done):
statement = "Still Alive!"
if done:
statement = "Cocoa Time!" if state == 15 else "Game Over!"
print(state, "-", statement)
#0 left
#1 down
#2 right
#3 up
# Uncomment to reset the game
#env.reset()
action = 2 # Change me, please!
state, _, done, _ = env.step(action)
env.render()
print_state(state, done)
def play_game(actions):
state = env.reset()
step = 0
done = False
while not done and step < len(actions):
action = actions[step]
state, _, done, _ = env.step(action)
env.render()
step += 1
print_state(state, done)
actions = [1, 1, 2, 2, 1, 2] # Replace with your favorite path.
play_game(actions)
LAKE = np.array([[0, 0, 0, 0],
[0, -1, 0, -1],
[0, 0, 0, -1],
[-1, 0, 0, 1]])
LAKE_WIDTH = len(LAKE[0])
LAKE_HEIGHT = len(LAKE)
DISCOUNT = .9 # Change me to be a value between 0 and 1.
current_values = np.zeros_like(LAKE)
print("env.observation_space -", env.observation_space)
print("env.observation_space.n -", env.observation_space.n)
print("env.action_space -", env.action_space)
print("env.action_space.n -", env.action_space.n)
STATE_SPACE = env.observation_space.n
ACTION_SPACE = env.action_space.n
STATE_RANGE = range(STATE_SPACE)
ACTION_RANGE = range(ACTION_SPACE)
def get_neighbor_value(state_x, state_y, values, action):
Returns the value of a state's neighbor.
Args:
state_x (int): The state's horizontal position, 0 is the lake's left.
state_y (int): The state's vertical position, 0 is the lake's top.
values (float array): The current iteration's state values.
policy (int): Which action to check the value for.
Returns:
The corresponding action's value.
left = [state_y, state_x-1]
down = [state_y+1, state_x]
right = [state_y, state_x+1]
up = [state_y-1, state_x]
actions = [left, down, right, up]
direction = actions[action]
check_x = direction[1]
check_y = direction[0]
is_boulder = check_y < 0 or check_y >= LAKE_HEIGHT \
or check_x < 0 or check_x >= LAKE_WIDTH
value = values[state_y, state_x]
if not is_boulder:
value = values[check_y, check_x]
return value
def get_state_coordinates(state_number):
state_x = state_number % LAKE_WIDTH
state_y = state_number // LAKE_HEIGHT
return state_x, state_y
def get_max_neighbor(state_number, values):
Finds the maximum valued neighbor for a given state.
Args:
state_number (int): the state to find the max neighbor for
state_values (float array): the respective value of each state for
each cell of the lake.
Returns:
max_value (float): the value of the maximum neighbor.
policy (int): the action to take to move towards the maximum neighbor.
state_x, state_y = get_state_coordinates(state_number)
# No policy or best value yet
best_policy = -1
max_value = -np.inf
# If the cell has something other than 0, it's a terminal state.
if LAKE[state_y, state_x]:
return LAKE[state_y, state_x], best_policy
for action in ACTION_RANGE:
neighbor_value = get_neighbor_value(state_x, state_y, values, action)
if neighbor_value > max_value:
max_value = neighbor_value
best_policy = action
return max_value, best_policy
def iterate_value(current_values):
Finds the future state values for an array of current states.
Args:
current_values (int array): the value of current states.
Returns:
next_values (int array): The value of states based on future states.
next_policies (int array): The recommended action to take in a state.
next_values = []
next_policies = []
for state in STATE_RANGE:
value, policy = get_max_neighbor(state, current_values)
next_values.append(value)
next_policies.append(policy)
next_values = np.array(next_values).reshape((LAKE_HEIGHT, LAKE_WIDTH))
return next_values, next_policies
next_values, next_policies = iterate_value(current_values)
next_values
np.array(next_policies).reshape((LAKE_HEIGHT ,LAKE_WIDTH))
current_values = DISCOUNT * next_values
current_values
next_values, next_policies = iterate_value(current_values)
print("Value")
print(next_values)
print("Policy")
print(np.array(next_policies).reshape((4,4)))
current_values = DISCOUNT * next_values
def play_game(policy):
state = env.reset()
step = 0
done = False
while not done:
action = policy[state] # This line is new.
state, _, done, _ = env.step(action)
env.render()
step += 1
print_state(state, done)
play_game(next_policies)
env = gym.make('FrozenLake-v0', is_slippery=True)
state = env.reset()
env.render()
play_game(next_policies)
def find_future_values(current_values, current_policies):
Finds the next set of future values based on the current policy.
next_values = []
for state in STATE_RANGE:
current_policy = current_policies[state]
state_x, state_y = get_state_coordinates(state)
# If the cell has something other than 0, it's a terminal state.
value = LAKE[state_y, state_x]
if not value:
value = get_neighbor_value(
state_x, state_y, current_values, current_policy)
next_values.append(value)
return np.array(next_values).reshape((LAKE_HEIGHT, LAKE_WIDTH))
def find_best_policy(next_values):
Finds the best policy given a value mapping.
next_policies = []
for state in STATE_RANGE:
state_x, state_y = get_state_coordinates(state)
# No policy or best value yet
max_value = -np.inf
best_policy = -1
if not LAKE[state_y, state_x]:
for policy in ACTION_RANGE:
neighbor_value = get_neighbor_value(
state_x, state_y, next_values, policy)
if neighbor_value > max_value:
max_value = neighbor_value
best_policy = policy
next_policies.append(best_policy)
return next_policies
def iterate_policy(current_values, current_policies):
Finds the future state values for an array of current states.
Args:
current_values (int array): the value of current states.
current_policies (int array): a list where each cell is the recommended
action for the state matching its index.
Returns:
next_values (int array): The value of states based on future states.
next_policies (int array): The recommended action to take in a state.
next_values = find_future_values(current_values, current_policies)
next_policies = find_best_policy(next_values)
return next_values, next_policies
def get_locations(state_x, state_y, policy):
left = [state_y, state_x-1]
down = [state_y+1, state_x]
right = [state_y, state_x+1]
up = [state_y-1, state_x]
directions = [left, down, right, up]
num_actions = len(directions)
gumdrop_right = (policy - 1) % num_actions
gumdrop_left = (policy + 1) % num_actions
locations = [gumdrop_left, policy, gumdrop_right]
return [directions[location] for location in locations]
def get_neighbor_value(state_x, state_y, values, policy):
Returns the value of a state's neighbor.
Args:
state_x (int): The state's horizontal position, 0 is the lake's left.
state_y (int): The state's vertical position, 0 is the lake's top.
values (float array): The current iteration's state values.
policy (int): Which action to check the value for.
Returns:
The corresponding action's value.
locations = get_locations(state_x, state_y, policy)
location_chance = 1.0 / len(locations)
total_value = 0
for location in locations:
check_x = location[1]
check_y = location[0]
is_boulder = check_y < 0 or check_y >= LAKE_HEIGHT \
or check_x < 0 or check_x >= LAKE_WIDTH
value = values[state_y, state_x]
if not is_boulder:
value = values[check_y, check_x]
total_value += location_chance * value
return total_value
current_values = np.zeros_like(LAKE)
policies = np.random.choice(ACTION_RANGE, size=STATE_SPACE)
np.array(policies).reshape((4,4))
next_values, policies = iterate_policy(current_values, policies)
print("Value")
print(next_values)
print("Policy")
print(np.array(policies).reshape((4,4)))
current_values = DISCOUNT * next_values
play_game(policies)
new_row = np.zeros((1, env.action_space.n))
q_table = np.copy(new_row)
q_map = {0: 0}
def print_q(q_table, q_map):
print("mapping")
print(q_map)
print("q_table")
print(q_table)
print_q(q_table, q_map)
def get_action(q_map, q_table, state_row, random_rate):
Find max-valued actions and randomly select from them.
if random.random() < random_rate:
return random.randint(0, ACTION_SPACE-1)
action_values = q_table[state_row]
max_indexes = np.argwhere(action_values == action_values.max())
max_indexes = np.squeeze(max_indexes, axis=-1)
action = np.random.choice(max_indexes)
return action
def update_q(q_table, new_state_row, reward, old_value):
Returns an updated Q-value based on the Bellman Equation.
learning_rate = .1 # Change to be between 0 and 1.
future_value = reward + DISCOUNT * np.max(q_table[new_state_row])
return old_value + learning_rate * (future_value - old_value)
def play_game(q_table, q_map, random_rate, render=False):
state = env.reset()
step = 0
done = False
while not done:
state_row = q_map[state]
action = get_action(q_map, q_table, state_row, random_rate)
new_state, _, done, _ = env.step(action)
#Add new state to table and mapping if it isn't there already.
if new_state not in q_map:
q_map[new_state] = len(q_table)
q_table = np.append(q_table, new_row, axis=0)
new_state_row = q_map[new_state]
reward = -.01 #Encourage exploration.
if done:
reward = 1 if new_state == 15 else -1
current_q = q_table[state_row, action]
q_table[state_row, action] = update_q(
q_table, new_state_row, reward, current_q)
step += 1
if render:
env.render()
print_state(new_state, done)
state = new_state
return q_table, q_map
# Run to refresh the q_table.
random_rate = 1
q_table = np.copy(new_row)
q_map = {0: 0}
q_table, q_map = play_game(q_table, q_map, random_rate, render=True)
print_q(q_table, q_map)
for _ in range(1000):
q_table, q_map = play_game(q_table, q_map, random_rate)
random_rate = random_rate * .99
print_q(q_table, q_map)
random_rate
q_table, q_map = play_game(q_table, q_map, 0, render=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: NOTE
Step2: There are four methods from Gym that are going to be useful to us in order to save the gumdrop.
Step3: If we print the state we'll get 0. This is telling us which square we're in. Each square is labeled from 0 to 15 from left to right, top to bottom, like this
Step4: We can make a simple print function to let us know whether it's game won, game over, or game on.
Step5: We can control the gumdrop ourselves with the step method. Run the below cell over and over again trying to move from the starting position to the goal. Good luck!
Step6: Were you able to reach the hot chocolate? If so, great job! There are multiple paths through the maze. One solution is [1, 1, 2, 2, 1, 2]. Let's loop through our actions in order to get used to interacting with the environment programmatically.
Step7: Nice, so we know how to get through the maze, but how do we teach that to the gumdrop? It's just some bytes in an android phone. It doesn't have our human insight.
Step8: The Gym environment class has a handy property for finding the number of states in an environment called observation_space. In our case, there are 16 integer states, so it will label it as "Discrete". Similarly, action_space will tell us how many actions are available to the agent.
Step10: We'll need some sort of function to figure out what the best neighboring cell is. The below function takes a cell of the lake, looks at the current value mapping (to be called with current_values), and sees what the value of the adjacent state is corresponding to the given action.
Step12: But this doesn't find the best action, and the gumdrop is going to need that if it wants to greedily get off the lake. The get_max_neighbor function we've defined below takes a number corresponding to a cell as state_number and the same value mapping as get_neighbor_value.
Step14: Now, let's write our value iteration code. We'll write a function that carries out one step of the iteration by checking each state and finding its maximum neighbor. The values will be reshaped so that they're in the form of the lake, but the policy will stay as a list of ints. This way, when Gym returns a state, all we need to do is look at the corresponding index in the policy list to tell our agent where to go.
Step15: This is what our values look like after one step. Right now, it just looks like the lake. That's because we started with an array of zeros for current_values, and the terminal states of the lake were loaded in.
Step16: And this is what our policy looks like reshaped into the form of the lake. The -1's are terminal states. Right now, the agent will move left in any non-terminal state, because it sees all of those states as equal. Remember, if the gumdrop is along the leftmost side of the lake, and tries to move left, it will slip on a boulder and return to the same position.
Step17: There's one last step to apply the Bellman Equation, the discount! We'll multiply our next states by the discount and set that to our current_values. One loop done!
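As a minimal, self-contained sketch of that discounting step (the array contents and the DISCOUNT value below are illustrative assumptions, not the notebook's actual variables):
import numpy as np
DISCOUNT = 0.9  # assumed discount factor, for illustration only
next_values = np.array([[0., 0., 0., 0.],
                        [0., 0., 0., 0.],
                        [0., 0., 0., 1.],
                        [0., 0., 1., 1.]])
# Multiply the next-step values by the discount and adopt them as the current estimates.
current_values = DISCOUNT * next_values
print(current_values)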
Step18: Run the below cell over and over again to see how our values change with each iteration. It should be complete after six iterations when the values no longer change. The policy will also change as the values are updated.
Step19: Have a completed policy? Let's see it in action! We'll update our play_game function to instead take our list of policies. That way, we can start in a random position and still get to the end.
Step20: Phew! Good job, team! The gumdrop made it out alive. So what became of our gumdrop hero? Well, the next day, it was making another snowman and fell onto an even more slippery and deadly lake. Doh! Turns out this story is part of a trilogy. Feel free to move onto the next section after your own sip of cocoa, coffee, tea, or poison of choice.
Step21: Hmm, looks the same as before. Let's try applying our old policy and see what happens.
Step23: Was there a game over? There's a small chance that the gumdrop made it to the end, but it's much more likely that it accidentally slipped and fell into a hole. Oh no! We can try repeatedly testing the above code cell over and over again, but it might take a while. In fact, this is a similar roadblock Bellman and his colleagues faced.
Step25: After we've calculated our new values, then we'll update the policy (and not the values) based on the maximum neighbor. If there's no change in the policy, then we're done. The below is very similar to our get_max_neighbor function. Can you see the differences?
Step27: To complete the Policy Iteration algorithm, we'll combine the two functions above. Conceptually, we'll be alternating between updating our value function and updating our policy function.
Step28: Next, let's modify the get_neighbor_value function to now include the slippery ice. Remember the P in the Bellman Equation above? It stands for the probability of ending up in a new state given the current state and action taken. That is, we'll take a weighted sum of the values of all possible states based on our chances to be in those states.
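A hedged, stand-alone illustration of that weighted sum (the probabilities and cell values below are made up for the example):
import numpy as np
possible_values = np.array([0.0, 0.5, 0.9])  # values of the cells the agent could slide into
probabilities = np.array([1/3, 1/3, 1/3])    # chance of ending up in each of them
expected_value = np.dot(probabilities, possible_values)  # probability-weighted neighbor value
print(expected_value)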
Step30: Then, we can add it to get_neighbor_value to find the weighted value of all the possible states the gumdrop can end up in.
Step31: For Policy Iteration, we'll start off with a random policy if only because the Gumdrop doesn't know any better yet. We'll reset our current values while we're at it.
Step32: As before with Value Iteration, run the cell below multiple times until the policy no longer changes. It should only take 2-3 clicks compared to Value Iteration's 6.
Step33: Hmm, does this work? Let's see! Run the cell below to watch the gumdrop slip its way to victory.
Step34: So what was the learned strategy here? The gumdrop learned to hug the left wall of boulders until it was down far enough to make a break for the exit. Instead of heading directly for it though, it took advantage of actions that did not have a hole of death in them. Patience is a virtue!
Step36: Our new get_action function will help us read the q_table and find the best action.
Step38: Here, we'll define how the q_table gets updated. We'll apply the Bellman Equation as before, but since there is so much luck involved between slipping and random actions, we'll update our q_table as a weighted average between the old_value we're updating and the future_value based on the best action in the next state. That way, there's a little bit of memory between old and new experiences.
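A generic sketch of such a blended update (this is not the notebook's exact update_q function; the names and the learning-rate value are assumptions made for illustration):
def blended_q_update(old_value, reward, best_future_value, learning_rate=0.1, discount=0.9):
    # Weighted average between the old belief and the new Bellman target.
    target = reward + discount * best_future_value
    return (1 - learning_rate) * old_value + learning_rate * target
print(blended_q_update(old_value=0.2, reward=-0.01, best_future_value=0.5))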
Step39: We'll update our play_game function to take our table and mapping, and at the end, we'll return any updates to them. Once we observe new states, we'll check our mapping and add them to the table if space isn't allocated for them already.
Step40: Ok, time to shine, gumdrop emoji! Let's do one simulation and see what happens.
Step41: Unless the gumdrop was incredibly lucky, chances were, it fell in some death water. Q-learning is markedly different from Value Iteration or Policy Iteration in that it attempts to simulate how an animal learns in unknown situations. Since the layout of the lake is unknown to the Gumdrop, it doesn't know which states are death holes, and which ones are safe. Because of this, it's going to make many mistakes before it can start making successes.
Step42: Cats have nine lives, our Gumdrop lived a thousand! Moment of truth. Can it get out of the lake now that it matters?
|
6,633
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from ttim import *
drawdown = np.loadtxt('data/oudekorendijk_h30.dat')
tobs = drawdown[:,0] / 60 / 24
robs = 30
Q = 788
ml = ModelMaq(kaq=60, z=(-18, -25), Saq=1e-4, tmin=1e-5, tmax=1)
w = Well(ml, xw=0, yw=0, rw=0.1, tsandQ=[(0, 788)], layers=0)
ml.solve()
np.random.seed(2)
hobs = ml.head(robs, 0, tobs)[0] + 0.05 * np.random.randn(len(tobs))
cal = Calibrate(ml)
cal.set_parameter(name='kaq0', initial=100)
cal.set_parameter(name='Saq0', initial=1e-3)
cal.series(name='obs1', x=robs, y=0, layer=0, t=tobs, h=hobs)
cal.fit()
cal.parameters
print('rmse:', cal.rmse())
hm = ml.head(robs, 0, tobs, 0)
plt.semilogx(tobs, hobs, '.k')
plt.semilogx(tobs, hm[0], 'r')
print('covariance matrix')
print(cal.fitresult.covar)
cal = Calibrate(ml)
cal.set_parameter(name='kaq0', initial=100)
cal.set_parameter(name='Saq0', initial=1e-3)
cal.series(name='obs1', x=robs, y=0, layer=0, t=tobs, h=hobs)
cal.fit_least_squares(report=True)
ml = ModelMaq(kaq=[10., 10.], z=(-10, -16, -18, -25), c=[10.], Saq=[0.1, 1e-4], tmin=1e-5, tmax=1)
w = Well(ml, xw=0, yw=0, rw=0.1, tsandQ=[(0, 788)], layers=1)
ml.solve()
hobs0 = ml.head(robs, 0, tobs, layers=[0])[0]
hobs1 = ml.head(robs, 0, tobs, layers=[1])[0]
cal.parameters
cal = Calibrate(ml)
cal.set_parameter(name='kaq0_1', initial=20., pmin=0., pmax=30.) # layers 0 and 1 have the same k-value
cal.set_parameter(name='Saq0', initial=1e-3, pmin=1e-5, pmax=0.2)
cal.set_parameter(name='Saq1', initial=1e-3, pmin=1e-5, pmax=0.2)
cal.set_parameter(name='c1', initial=1., pmin=0.1, pmax=200.)
cal.series(name='obs0', x=robs, y=0, layer=0, t=tobs, h=hobs0)
cal.series(name='obs1', x=robs, y=0, layer=1, t=tobs, h=hobs1)
cal.fit(report=False)
display(cal.parameters)
plt.semilogx(tobs, hobs0, '.C0', label="obs layer 0")
plt.semilogx(tobs, hobs1, '.C1', label="obs layer 1")
hm = ml.head(robs, 0, tobs)
plt.semilogx(tobs, hm[0], 'C0', label="modelled head layer 0")
plt.semilogx(tobs, hm[1], 'C1', label="modelled head layer 1")
plt.legend(loc="best")
tobs2 = np.hstack((tobs, np.arange(0.61, 1, 0.01)))
ml = ModelMaq(kaq=60, z=(-18, -25), Saq=1e-4, tmin=1e-5, tmax=1)
w = Well(ml, xw=0, yw=0, rw=0.3, res=0.02, tsandQ=[(0, 788), (0.6, 0)], layers=0)
ml.solve()
np.random.seed(2)
hobs2 = w.headinside(tobs2)[0] + 0.05 * np.random.randn(len(tobs2))
cal = Calibrate(ml)
cal.set_parameter(name='kaq0', initial=100)
cal.set_parameter(name='Saq0', initial=1e-3)
cal.set_parameter_by_reference(name='res', parameter=w.res[:], initial=0.05)
cal.seriesinwell(name='obs1', element=w, t=tobs2, h=hobs2)
cal.fit()
hm = w.headinside(tobs2)
plt.semilogx(tobs2, hobs2, '.k')
plt.semilogx(tobs2, hm[0], 'r')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Use observation times from Oude Korendijk
Step2: Generate data
Step3: See if TTim can recover the aquifer parameters
Step4: Fit with scipy.least_squares (not recommended)
Step5: Calibrate parameters in multiple layers
Step6: Generate data for head measured in well
|
6,634
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import graphviz
import lingam
from lingam.utils import print_causal_directions, print_dagc, make_dot
import warnings
warnings.filterwarnings('ignore')
print([np.__version__, pd.__version__, graphviz.__version__, lingam.__version__])
np.set_printoptions(precision=3, suppress=True)
X = pd.read_csv('nonlinear_data.csv')
m = np.array([
[0, 0, 0, 0, 0],
[1, 0, 0, 0, 0],
[1, 1, 0, 0, 0],
[0, 1, 1, 0, 0],
[0, 0, 0, 1, 0]])
dot = make_dot(m)
# Save pdf
dot.render('dag')
# Save png
dot.format = 'png'
dot.render('dag')
dot
from sklearn.ensemble import RandomForestRegressor
reg = RandomForestRegressor(max_depth=4, random_state=0)
model = lingam.RESIT(regressor=reg)
model.fit(X)
model.causal_order_
model.adjacency_matrix_
make_dot(model.adjacency_matrix_)
import warnings
warnings.filterwarnings('ignore', category=UserWarning)
n_sampling = 100
model = lingam.RESIT(regressor=reg)
result = model.bootstrap(X, n_sampling=n_sampling)
cdc = result.get_causal_direction_counts(n_directions=8, min_causal_effect=0.01, split_by_causal_effect_sign=True)
print_causal_directions(cdc, n_sampling)
dagc = result.get_directed_acyclic_graph_counts(n_dags=3, min_causal_effect=0.01, split_by_causal_effect_sign=True)
print_dagc(dagc, n_sampling)
prob = result.get_probabilities(min_causal_effect=0.01)
print(prob)
from_index = 0 # index of x0
to_index = 3 # index of x3
pd.DataFrame(result.get_paths(from_index, to_index))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Test data
Step2: Causal Discovery
Step3: Using the causal_order_ properties, we can see the causal ordering as a result of the causal discovery. x2 and x3, which have latent confounders as parents, are stored in a list without causal ordering.
Step4: Also, using the adjacency_matrix_ properties, we can see the adjacency matrix as a result of the causal discovery. The coefficients between variables with latent confounders are np.nan.
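As an illustrative aside (the small matrix below is made up rather than taken from the model's output), np.nan entries in an adjacency matrix can be located with NumPy:
import numpy as np
adj = np.array([[0.0, np.nan],
                [np.nan, 0.0]])
print(np.argwhere(np.isnan(adj)))  # row/column indices of the nan coefficients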
Step5: We can draw a causal graph by utility function.
Step6: Bootstrapping
Step7: Causal Directions
Step8: We can check the result by utility function.
Step9: Directed Acyclic Graphs
Step10: We can check the result by utility function.
Step11: Probability
Step12: Bootstrap Probability of Path
|
6,635
|
<ASSISTANT_TASK:>
Python Code:
import os
import matplotlib.pyplot as plt
import numpy as np
import openmc
%matplotlib inline
# Instantiate some elements
elements = {}
for elem in ['H', 'O', 'U', 'Zr', 'Gd', 'B', 'C', 'Fe']:
elements[elem] = openmc.Element(elem)
materials = {}
# Fuel
materials['Fuel'] = openmc.Material(name='Fuel')
materials['Fuel'].set_density('g/cm3', 10.32)
materials['Fuel'].add_element(elements['O'], 2)
materials['Fuel'].add_element(elements['U'], 1, enrichment=3.)
# Gadolinia bearing fuel
materials['Gad'] = openmc.Material(name='Gad')
materials['Gad'].set_density('g/cm3', 10.23)
materials['Gad'].add_element(elements['O'], 2)
materials['Gad'].add_element(elements['U'], 1, enrichment=3.)
materials['Gad'].add_element(elements['Gd'], .02)
# Zircaloy
materials['Zirc2'] = openmc.Material(name='Zirc2')
materials['Zirc2'].set_density('g/cm3', 6.55)
materials['Zirc2'].add_element(elements['Zr'], 1)
# Boiling Water
materials['Water'] = openmc.Material(name='Water')
materials['Water'].set_density('g/cm3', 0.6)
materials['Water'].add_element(elements['H'], 2)
materials['Water'].add_element(elements['O'], 1)
# Boron Carbide for the Control Rods
materials['B4C'] = openmc.Material(name='B4C')
materials['B4C'].set_density('g/cm3', 0.7 * 2.52)
materials['B4C'].add_element(elements['B'], 4)
materials['B4C'].add_element(elements['C'], 1)
# Steel
materials['Steel'] = openmc.Material(name='Steel')
materials['Steel'].set_density('g/cm3', 7.75)
materials['Steel'].add_element(elements['Fe'], 1)
# Instantiate a Materials object
materials_file = openmc.Materials(materials.values())
# Export to "materials.xml"
materials_file.export_to_xml()
# Set constants for the problem and assembly dimensions
fuel_rad = 0.53213
clad_rad = 0.61341
Np = 8
pin_pitch = 1.6256
length = float(Np + 2) * pin_pitch
assembly_width = length - 2. * pin_pitch
rod_thick = 0.47752 / 2. + 0.14224
rod_span = 7. * pin_pitch
surfaces = {}
# Create boundary planes to surround the geometry
surfaces['Global x-'] = openmc.XPlane(x0=0., boundary_type='reflective')
surfaces['Global x+'] = openmc.XPlane(x0=length, boundary_type='reflective')
surfaces['Global y-'] = openmc.YPlane(y0=0., boundary_type='reflective')
surfaces['Global y+'] = openmc.YPlane(y0=length, boundary_type='reflective')
# Create cylinders for the fuel and clad
surfaces['Fuel Radius'] = openmc.ZCylinder(R=fuel_rad)
surfaces['Clad Radius'] = openmc.ZCylinder(R=clad_rad)
surfaces['Assembly x-'] = openmc.XPlane(x0=pin_pitch)
surfaces['Assembly x+'] = openmc.XPlane(x0=length - pin_pitch)
surfaces['Assembly y-'] = openmc.YPlane(y0=pin_pitch)
surfaces['Assembly y+'] = openmc.YPlane(y0=length - pin_pitch)
# Set surfaces for the control blades
surfaces['Top Blade y-'] = openmc.YPlane(y0=length - rod_thick)
surfaces['Top Blade x-'] = openmc.XPlane(x0=pin_pitch)
surfaces['Top Blade x+'] = openmc.XPlane(x0=rod_span)
surfaces['Left Blade x+'] = openmc.XPlane(x0=rod_thick)
surfaces['Left Blade y-'] = openmc.YPlane(y0=length - rod_span)
surfaces['Left Blade y+'] = openmc.YPlane(y0=9. * pin_pitch)
# Set regions for geometry building
regions = {}
regions['Global'] = \
(+surfaces['Global x-'] & -surfaces['Global x+'] &
+surfaces['Global y-'] & -surfaces['Global y+'])
regions['Assembly'] = \
(+surfaces['Assembly x-'] & -surfaces['Assembly x+'] &
+surfaces['Assembly y-'] & -surfaces['Assembly y+'])
regions['Fuel'] = -surfaces['Fuel Radius']
regions['Clad'] = +surfaces['Fuel Radius'] & -surfaces['Clad Radius']
regions['Water'] = +surfaces['Clad Radius']
regions['Top Blade'] = \
(+surfaces['Top Blade y-'] & -surfaces['Global y+']) & \
(+surfaces['Top Blade x-'] & -surfaces['Top Blade x+'])
regions['Top Steel'] = \
(+surfaces['Global x-'] & -surfaces['Top Blade x-']) & \
(+surfaces['Top Blade y-'] & -surfaces['Global y+'])
regions['Left Blade'] = \
(+surfaces['Left Blade y-'] & -surfaces['Left Blade y+']) & \
(+surfaces['Global x-'] & -surfaces['Left Blade x+'])
regions['Left Steel'] = \
(+surfaces['Left Blade y+'] & -surfaces['Top Blade y-']) & \
(+surfaces['Global x-'] & -surfaces['Left Blade x+'])
regions['Corner Blade'] = \
regions['Left Steel'] | regions['Top Steel']
regions['Water Fill'] = \
regions['Global'] & ~regions['Assembly'] & \
~regions['Top Blade'] & ~regions['Left Blade'] &\
~regions['Corner Blade']
universes = {}
cells = {}
for name, mat, in zip(['Fuel Pin', 'Gd Pin'],
[materials['Fuel'], materials['Gad']]):
universes[name] = openmc.Universe(name=name)
cells[name] = openmc.Cell(name=name)
cells[name].fill = mat
cells[name].region = regions['Fuel']
universes[name].add_cell(cells[name])
cells[name + ' Clad'] = openmc.Cell(name=name + ' Clad')
cells[name + ' Clad'].fill = materials['Zirc2']
cells[name + ' Clad'].region = regions['Clad']
universes[name].add_cell(cells[name + ' Clad'])
cells[name + ' Water'] = openmc.Cell(name=name + ' Water')
cells[name + ' Water'].fill = materials['Water']
cells[name + ' Water'].region = regions['Water']
universes[name].add_cell(cells[name + ' Water'])
universes['Hole'] = openmc.Universe(name='Hole')
cells['Hole'] = openmc.Cell(name='Hole')
cells['Hole'].fill = materials['Water']
universes['Hole'].add_cell(cells['Hole'])
# Create fuel assembly Lattice
universes['Assembly'] = openmc.RectLattice(name='Assembly')
universes['Assembly'].pitch = (pin_pitch, pin_pitch)
universes['Assembly'].lower_left = [pin_pitch, pin_pitch]
f = universes['Fuel Pin']
g = universes['Gd Pin']
h = universes['Hole']
lattices = [[f, f, f, f, f, f, f, f],
[f, f, f, f, f, f, f, f],
[f, f, f, g, f, g, f, f],
[f, f, g, h, h, f, g, f],
[f, f, f, h, h, f, f, f],
[f, f, g, f, f, f, g, f],
[f, f, f, g, f, g, f, f],
[f, f, f, f, f, f, f, f]]
# Store the array of lattice universes
universes['Assembly'].universes = lattices
cells['Assembly'] = openmc.Cell(name='Assembly')
cells['Assembly'].fill = universes['Assembly']
cells['Assembly'].region = regions['Assembly']
# The top portion of the blade, poisoned with B4C
cells['Top Blade'] = openmc.Cell(name='Top Blade')
cells['Top Blade'].fill = materials['B4C']
cells['Top Blade'].region = regions['Top Blade']
# The left portion of the blade, poisoned with B4C
cells['Left Blade'] = openmc.Cell(name='Left Blade')
cells['Left Blade'].fill = materials['B4C']
cells['Left Blade'].region = regions['Left Blade']
# The top-left corner portion of the blade, with no poison
cells['Corner Blade'] = openmc.Cell(name='Corner Blade')
cells['Corner Blade'].fill = materials['Steel']
cells['Corner Blade'].region = regions['Corner Blade']
# Water surrounding all other cells and our assembly
cells['Water Fill'] = openmc.Cell(name='Water Fill')
cells['Water Fill'].fill = materials['Water']
cells['Water Fill'].region = regions['Water Fill']
# Create root Universe
universes['Root'] = openmc.Universe(name='root universe', universe_id=0)
universes['Root'].add_cells([cells['Assembly'], cells['Top Blade'],
cells['Corner Blade'], cells['Left Blade'],
cells['Water Fill']])
universes['Root'].plot(center=(length / 2., length / 2., 0.),
pixels=(500, 500), width=(length, length),
color_by='material',
colors={materials['Fuel']: (1., 0., 0.),
materials['Gad']: (1., 1., 0.),
materials['Zirc2']: (0.5, 0.5, 0.5),
materials['Water']: (0.0, 0.0, 1.0),
materials['B4C']: (0.0, 0.0, 0.0),
materials['Steel']: (0.4, 0.4, 0.4)})
# Create Geometry and set root universe
geometry = openmc.Geometry(universes['Root'])
# Export to "geometry.xml"
geometry.export_to_xml()
# OpenMC simulation parameters
batches = 1000
inactive = 20
particles = 1000
# Instantiate a Settings object
settings_file = openmc.Settings()
settings_file.batches = batches
settings_file.inactive = inactive
settings_file.particles = particles
settings_file.output = {'tallies': False}
settings_file.verbosity = 4
# Create an initial uniform spatial source distribution over fissionable zones
bounds = [pin_pitch, pin_pitch, 10, length - pin_pitch, length - pin_pitch, 10]
uniform_dist = openmc.stats.Box(bounds[:3], bounds[3:], only_fissionable=True)
settings_file.source = openmc.source.Source(space=uniform_dist)
# Export to "settings.xml"
settings_file.export_to_xml()
# Instantiate a 2-group EnergyGroups object
groups = openmc.mgxs.EnergyGroups()
groups.group_edges = np.array([0., 0.625, 20.0e6])
# Initialize a 2-group Isotropic MGXS Library for OpenMC
iso_mgxs_lib = openmc.mgxs.Library(geometry)
iso_mgxs_lib.energy_groups = groups
# Specify multi-group cross section types to compute
iso_mgxs_lib.mgxs_types = ['total', 'absorption', 'nu-fission', 'fission',
'nu-scatter matrix', 'multiplicity matrix', 'chi']
# Instantiate a tally Mesh
mesh = openmc.Mesh()
mesh.type = 'regular'
mesh.dimension = [10, 10]
mesh.lower_left = [0., 0.]
mesh.upper_right = [length, length]
# Specify a "mesh" domain type for the cross section tally filters
iso_mgxs_lib.domain_type = "mesh"
# Specify the mesh over which to compute multi-group cross sections
iso_mgxs_lib.domains = [mesh]
# Set the scattering format to histogram and then define the number of bins
# Avoid a warning that corrections don't make sense with histogram data
iso_mgxs_lib.correction = None
# Set the histogram data
iso_mgxs_lib.scatter_format = 'histogram'
iso_mgxs_lib.histogram_bins = 11
# Let's repeat all of the above for an angular MGXS library so we can gather
# that in the same continuous-energy calculation
angle_mgxs_lib = openmc.mgxs.Library(geometry)
angle_mgxs_lib.energy_groups = groups
angle_mgxs_lib.mgxs_types = ['total', 'absorption', 'nu-fission', 'fission',
'nu-scatter matrix', 'multiplicity matrix', 'chi']
angle_mgxs_lib.domain_type = "mesh"
angle_mgxs_lib.domains = [mesh]
angle_mgxs_lib.correction = None
angle_mgxs_lib.scatter_format = 'histogram'
angle_mgxs_lib.histogram_bins = 11
# Set the angular bins to 8
angle_mgxs_lib.num_azimuthal = 8
# Check the libraries - if no errors are raised, then the library is satisfactory.
iso_mgxs_lib.check_library_for_openmc_mgxs()
angle_mgxs_lib.check_library_for_openmc_mgxs()
# Construct all tallies needed for the multi-group cross section library
iso_mgxs_lib.build_library()
angle_mgxs_lib.build_library()
# Create a "tallies.xml" file for the MGXS Library
tallies_file = openmc.Tallies()
iso_mgxs_lib.add_to_tallies_file(tallies_file, merge=True)
angle_mgxs_lib.add_to_tallies_file(tallies_file, merge=True)
# Instantiate tally Filter
mesh_filter = openmc.MeshFilter(mesh)
# Instantiate the Tally
tally = openmc.Tally(name='mesh tally')
tally.filters = [mesh_filter]
tally.scores = ['fission']
# Add tally to collection
tallies_file.append(tally, merge=True)
# Export all tallies to a "tallies.xml" file
tallies_file.export_to_xml()
# Run OpenMC
openmc.run()
# Move the StatePoint File
ce_spfile = './statepoint_ce.h5'
os.rename('statepoint.' + str(batches) + '.h5', ce_spfile)
# Move the Summary file
ce_sumfile = './summary_ce.h5'
os.rename('summary.h5', ce_sumfile)
# Load the statepoint file, but not the summary file, as it is a different filename than expected.
sp = openmc.StatePoint(ce_spfile, autolink=False)
su = openmc.Summary(ce_sumfile)
sp.link_with_summary(su)
# Initialize MGXS Library with OpenMC statepoint data
iso_mgxs_lib.load_from_statepoint(sp)
angle_mgxs_lib.load_from_statepoint(sp)
# Allow the API to create our Library, materials, and geometry file
iso_mgxs_file, materials_file, geometry_file = iso_mgxs_lib.create_mg_mode()
# Tell the materials file what we want to call the multi-group library
materials_file.cross_sections = 'mgxs.h5'
# Write our newly-created files to disk
iso_mgxs_file.export_to_hdf5('mgxs.h5')
materials_file.export_to_xml()
geometry_file.export_to_xml()
# Set the energy mode
settings_file.energy_mode = 'multi-group'
# Export to "settings.xml"
settings_file.export_to_xml()
# Create a "tallies.xml" file for the MGXS Library
tallies_file = openmc.Tallies()
# Add our fission rate mesh tally
tallies_file.add_tally(tally)
# Export to "tallies.xml"
tallies_file.export_to_xml()
geometry_file.root_universe.plot(center=(length / 2., length / 2., 0.),
pixels=(300, 300), width=(length, length),
color_by='material')
# Execute the Isotropic MG OpenMC Run
openmc.run()
# Move the StatePoint File
iso_mg_spfile = './statepoint_mg_iso.h5'
os.rename('statepoint.' + str(batches) + '.h5', iso_mg_spfile)
# Move the Summary file
iso_mg_sumfile = './summary_mg_iso.h5'
os.rename('summary.h5', iso_mg_sumfile)
# Let's repeat for the angle-dependent case
angle_mgxs_lib.load_from_statepoint(sp)
angle_mgxs_file, materials_file, geometry_file = angle_mgxs_lib.create_mg_mode()
angle_mgxs_file.export_to_hdf5()
# Execute the angle-dependent OpenMC Run
openmc.run()
# Load the isotropic statepoint file
iso_mgsp = openmc.StatePoint(iso_mg_spfile, autolink=False)
iso_mgsum = openmc.Summary(iso_mg_sumfile)
iso_mgsp.link_with_summary(iso_mgsum)
# Load the angle-dependent statepoint file
angle_mgsp = openmc.StatePoint('statepoint.' + str(batches) + '.h5')
ce_keff = sp.k_combined
iso_mg_keff = iso_mgsp.k_combined
angle_mg_keff = angle_mgsp.k_combined
# Find eigenvalue bias
iso_bias = 1.0E5 * (ce_keff[0] - iso_mg_keff[0])
angle_bias = 1.0E5 * (ce_keff[0] - angle_mg_keff[0])
print('Isotropic to CE Bias [pcm]: {0:1.1f}'.format(iso_bias))
print('Angle to CE Bias [pcm]: {0:1.1f}'.format(angle_bias))
sp_files = [sp, iso_mgsp, angle_mgsp]
titles = ['Continuous-Energy', 'Isotropic Multi-Group',
'Angle-Dependent Multi-Group']
fiss_rates = []
fig = plt.figure(figsize=(12, 6))
for i, (case, title) in enumerate(zip(sp_files, titles)):
# Get our mesh tally information
mesh_tally = case.get_tally(name='mesh tally')
fiss_rates.append(mesh_tally.get_values(scores=['fission']))
# Reshape the array
fiss_rates[-1].shape = mesh.dimension
# Normalize the fission rates
fiss_rates[-1] /= np.mean(fiss_rates[-1])
# Set 0s to NaNs so they show as white
fiss_rates[-1][fiss_rates[-1] == 0.] = np.nan
fig = plt.subplot(1, len(titles), i + 1)
# Plot only the fueled regions
plt.imshow(fiss_rates[-1][1:-1, 1:-1], cmap='jet', origin='lower',
vmin=0.4, vmax=4.)
plt.title(title + '\nFission Rates')
# Calculate and plot the ratios of MG to CE for each of the 2 MG cases
ratios = []
fig, axes = plt.subplots(figsize=(12, 6), nrows=1, ncols=2)
for i, (case, title, axis) in enumerate(zip(sp_files[1:], titles[1:], axes.flat)):
# Get our ratio relative to the CE (in fiss_ratios[0])
ratios.append(np.divide(fiss_rates[i + 1], fiss_rates[0]))
# Plot only the fueled regions
im = axis.imshow(ratios[-1][1:-1, 1:-1], cmap='bwr', origin='lower',
vmin = 0.9, vmax = 1.1)
axis.set_title(title + '\nFission Rates Relative\nto Continuous-Energy')
# Add a color bar
fig.subplots_adjust(right=0.8)
cbar_ax = fig.add_axes([0.85, 0.15, 0.05, 0.7])
fig.colorbar(im, cax=cbar_ax)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We will be running a rodded 8x8 assembly with Gadolinia fuel pins. Let's create all the elemental data we would need for this case.
Step2: With the elements we defined, we will now create the materials we will use later.
Step3: We can now create a Materials object that can be exported to an actual XML file.
Step4: Now let's move on to the geometry. The first step is to define some constants which will be used to set our dimensions and then we can start creating the surfaces and regions for the problem, the 8x8 lattice, the rods and the control blade.
Step5: With the surfaces defined, we can now construct regions with these surfaces before we use those to create cells
Step6: We will begin building the 8x8 assembly. To do that we will have to build the cells and universe for each pin type (fuel, gadolinia-fuel, and water).
Step7: Let's use this pin information to create our 8x8 assembly.
Step8: So far we have the rods and water within the assembly, but we still need the control blade and the water which fills the rest of the space. We will create those cells now.
Step9: OpenMC requires that there is a "root" universe. Let us create our root universe and fill it with the cells just defined.
Step10: What do you do after you create your model? Check it! We will use the plotting capabilities of the Python API to do this for us.
Step11: Looks pretty good to us!
Step12: With the geometry and materials finished, we now just need to define simulation parameters, including how to run the model and what we want to learn from the model (i.e., define the tallies). We will start with our simulation parameters in the next block.
Step13: Create an MGXS Library
Step14: Next, we will instantiate an openmc.mgxs.Library for the energy groups with the problem geometry. This library will use the default setting of isotropically-weighting the multi-group cross sections.
Step15: Now, we must specify to the Library which types of cross sections to compute. OpenMC's multi-group mode can accept isotropic flux-weighted cross sections or angle-dependent cross sections, as well as supporting anisotropic scattering represented by either Legendre polynomials, histogram, or tabular angular distributions.
Step16: Now we must specify the type of domain over which we would like the Library to compute multi-group cross sections. The domain type corresponds to the type of tally filter to be used in the tallies created to compute multi-group cross sections. At the present time, the Library supports "material", "cell", "universe", and "mesh" domain types.
Step17: Now we will set the scattering treatment that we wish to use.
Step18: Ok, we made our isotropic library with histogram-scattering!
Step19: Now that our libraries have been setup, let's make sure they contain the types of cross sections which meet the needs of OpenMC's multi-group solver. Note that this step is done automatically when writing the Multi-Group Library file later in the process (as part of the mgxs_lib.write_mg_library()), but it is a good practice to also run this before spending all the time running OpenMC to generate the cross sections.
Step20: Lastly, we use our two Library objects to construct the tallies needed to compute all of the requested multi-group cross sections in each domain.
Step21: The tallies within the libraries can now be exported to a "tallies.xml" input file for OpenMC.
Step22: In addition, we instantiate a fission rate mesh tally for eventual comparison of results.
Step23: Time to run the calculation and get our results!
Step24: To make the files available and not be over-written when running the multi-group calculation, we will now rename the statepoint and summary files.
Step25: Tally Data Processing
Step26: In addition to the statepoint file, our simulation also created a summary file which encapsulates information about the materials and geometry. This is necessary for the openmc.mgxs.Library to properly process the tally data. We first create a Summary object and link it with the statepoint. Normally this would not need to be performed, but since we have renamed our summary file to avoid conflicts with the Multi-Group calculation's summary file, we will load this in explicitly.
Step27: The statepoint is now ready to be analyzed. To create our libraries we simply have to load the tallies from the statepoint into each Library and our MGXS objects will compute the cross sections for us under-the-hood.
Step28: The next step will be to prepare the input for OpenMC to use our newly created multi-group data.
Step29: Next, we can make the changes we need to the settings file.
Step30: Let's clear up the tallies file so it doesn't include all the extra tallies for re-generating a multi-group library
Step31: Before running the calculation let's look at our meshed model. It might not be interesting, but let's take a look anyways.
Step32: So, we see a 10x10 grid with a different color for every material, sounds good!
Step33: Before we go to the angle-dependent case, let's save the StatePoint and Summary files so they don't get over-written
Step34: Angle-Dependent Multi-Group OpenMC Calculation
Step35: At this point, the problem is set up and we can run the multi-group calculation.
Step36: Results Comparison
Step37: Eigenvalue Comparison
Step38: Let's compare the eigenvalues in units of pcm
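As a small arithmetic illustration (the two k-effective values below are made up), a difference of 1e-5 in k corresponds to 1 pcm, so the bias is simply the eigenvalue difference scaled by 1e5:
k_ce, k_mg = 1.10000, 1.09950
print(round(1.0e5 * (k_ce - k_mg), 1), 'pcm')  # 50.0 pcm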
Step39: We see a large reduction in error by switching to the usage of angle-dependent multi-group cross sections!
Step40: With this colormap, dark blue is the lowest power and dark red is the highest power.
|
6,636
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import networkx as nx
K_5=nx.complete_graph(5)
nx.draw(K_5)
def complete_deg(n):
    """Return the integer valued degree matrix D for the complete graph K_n."""
    a = np.identity(n, dtype=int)
return (n-1)*a
D = complete_deg(5)
assert D.shape==(5,5)
assert D.dtype==np.dtype(int)
assert np.all(D.diagonal()==4*np.ones(5))
assert np.all(D-np.diag(D.diagonal())==np.zeros((5,5),dtype=int))
def complete_adj(n):
    """Return the integer valued adjacency matrix A for the complete graph K_n."""
    a = np.ones((n,n), dtype=int)
    b = np.identity(n, dtype=int)
return (a-b)
A = complete_adj(5)
assert A.shape==(5,5)
assert A.dtype==np.dtype(int)
assert np.all(A+np.eye(5,dtype=int)==np.ones((5,5),dtype=int))
L = D-A
np.linalg.eigvals(L)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Complete graph Laplacian
Step3: The Laplacian Matrix is a matrix that is extremely important in graph theory and numerical analysis. It is defined as $L=D-A$, where $D$ is the degree matrix and $A$ is the adjacency matrix. For the purpose of this problem you don't need to understand the details of these matrices, although their definitions are relatively simple.
Step5: The adjacency matrix for $K_n$ is an $n \times n$ matrix with zeros along the diagonal and ones everywhere else. Write a function to compute the adjacency matrix for $K_n$ using NumPy.
Step6: Use NumPy to explore the eigenvalues or spectrum of the Laplacian L of $K_n$. What patterns do you notice as $n$ changes? Create a conjecture about the general Laplace spectrum of $K_n$.
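A small illustrative exploration (not part of the assigned code) that builds L = D - A directly and prints the spectrum for several n:
import numpy as np
for n in range(2, 7):
    D = (n - 1) * np.identity(n, dtype=int)
    A = np.ones((n, n), dtype=int) - np.identity(n, dtype=int)
    print(n, np.round(np.linalg.eigvals(D - A), 6))
# Pattern: one eigenvalue is 0 and the remaining n-1 eigenvalues all equal n.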
|
6,637
|
<ASSISTANT_TASK:>
Python Code:
combined["g_minus_r"] = combined.gcmodel_mag - combined.rcmodel_mag
combined["r_minus_i"] = combined.rcmodel_mag - combined.icmodel_mag
combined["i_minus_z"] = combined.icmodel_mag - combined.zcmodel_mag
combined["z_minus_y"] = combined.zcmodel_mag - combined.ycmodel_mag
mask = np.isfinite(combined["g_minus_r"]) & np.isfinite(combined["r_minus_i"]) \
& np.isfinite(combined["i_minus_z"]) & np.isfinite(combined["z_minus_y"]) \
& np.isfinite(combined["icmodel_mag"]) \
& (~combined.gcmodel_flux_flags) & (~combined.rcmodel_flux_flags) \
& (~combined.icmodel_flux_flags) & (~combined.zcmodel_flux_flags) \
& (~combined.ycmodel_flux_flags)
combined = combined[mask]
low_z = (combined.photo_z < .15)
low_mass = (combined.log_mass > 8) & (combined.log_mass < 9)
combined["low_z_low_mass"] = (low_z & low_mass)
combined.low_z_low_mass.mean()
target_ids = pd.read_csv("../narrowband_deconfusion/target_galaxies-HSC_ids.csv")
target_ids.head()
contaminant_ids = pd.read_csv("../narrowband_deconfusion/contaminant_galaxies-HSC_ids.csv")
contaminant_ids.head()
targets_path = pathlib.Path.home() / "dwarfz" \
/ "galaxies_narrowband" \
/ "target"
targets_path.mkdir(parents=True, exist_ok=True)
contaminants_path = pathlib.Path.home() / "dwarfz" \
/ "galaxies_narrowband" \
/ "contaminant"
contaminants_path.mkdir(parents=True, exist_ok=True)
google_dir = pathlib.Path.home() / "Google Drive" \
/ "HSC_COSMOS_galaxies" \
/ "galaxies"
predownloaded_ids = {int(image_dir.name)
for image_dir in google_dir.iterdir()
if image_dir.is_dir()}
i = 0
for (_, _, HSC_id) in target_ids.itertuples():
if HSC_id in predownloaded_ids:
image_dir = google_dir / str(HSC_id)
new_dir = targets_path / image_dir.name
if not new_dir.is_dir():
new_dir.symlink_to(image_dir, target_is_directory=True)
i += 1
print("symlinked {} galaxies".format(i))
j = 0
for (_, _, HSC_id) in contaminant_ids.itertuples():
if HSC_id in predownloaded_ids:
image_dir = google_dir / str(HSC_id)
new_dir = contaminants_path / image_dir.name
if not new_dir.is_dir():
new_dir.symlink_to(image_dir, target_is_directory=True)
j += 1
print("symlinked {} galaxies".format(j))
# num galaxies remaining to download
target_ids.shape[0] + contaminant_ids.shape[0] - i - j
target_ids_to_download = set(target_ids.HSC_id) - predownloaded_ids
contaminant_ids_to_download = set(contaminant_ids.HSC_id) - predownloaded_ids
ids_to_download = target_ids_to_download | contaminant_ids_to_download
galaxy_coords = combined[["catalog_2_ids", "ra", "dec"]]
galaxy_coords = galaxy_coords.rename(columns={"catalog_2_ids":"HSC_index"})
galaxy_coords = galaxy_coords.set_index("HSC_index")
galaxy_coords = galaxy_coords.loc[ids_to_download]
galaxy_coords.head()
width = "20asec"
filters = ["HSC-G", "HSC-R", "HSC-I", "HSC-Z", "HSC-Y"]
rerun = "pdr1_deep"
quarry_input_dir = pathlib.Path("galaxy_images_training") \
/ "quarry_input_files"
quarry_input_dir.mkdir(exist_ok=True)
quarry_name_format = "tmp_quarry_{:>04d}.txt"
batch_i = 0
files_in_batch = 0
max_files_per_batch = 1000
tmp_filename = quarry_input_dir / quarry_name_format.format(batch_i)
f = open(tmp_filename, mode="w")
print("#? ra dec filter sw sh rerun", file=f)
print_formatter = " {galaxy.ra:.6f}deg {galaxy.dec:.6f}deg {filter} {width} {width} {rerun} # {galaxy.Index}"
for galaxy in galaxy_coords.itertuples():
for filter in filters:
print(print_formatter.format(galaxy=galaxy,
width=width,
filter=filter,
rerun=rerun),
file=f)
files_in_batch += 1
if files_in_batch == max_files_per_batch:
f.close()
files_in_batch = 0
batch_i += 1
tmp_filename = quarry_input_dir / quarry_name_format.format(batch_i)
f = open(tmp_filename, mode="w")
print("#? ra dec filter sw sh rerun", file=f)
f.close()
!head -n 10 $tmp_filename
!wc -l $tmp_filename
!ls galaxy_images_training/quarry_input_files/ | wc -l
!ls -lh galaxy_images_training/quarry_input_files/ | head -n 10
filenames = sorted(quarry_input_dir.iterdir())
min_batch_number_to_pull = 1
max_batch_number_to_pull = 100
new_data_dir = targets_path.parent / "staging"
new_data_dir.mkdir(exist_ok=True)
for i, filename in enumerate(filenames):
if i < min_batch_number_to_pull:
continue
if i >= max_batch_number_to_pull:
break
print("Currently processing file: {}".format(os.path.basename(filename)), end="\r", flush=True)
os.system(("curl -k --netrc-file galaxy_images_training/curl_netrc "
"https://hsc-release.mtk.nao.ac.jp/das_quarry/cgi-bin/quarryImage "
"--form list=@{filename} "
"| tar -xvf -").format(filename=filename))
arch_dirs = list(pathlib.Path.cwd().glob("arch-*"))
assert(len(arch_dirs)==1)
arch_dir = arch_dirs[0]
with open(filename, "r") as f:
_ = f.readline() # skip header
line_number = 1 # 1 indexed, and then also with header
for line in f:
line_number += 1
HSC_id = int(line.split("#")[-1].strip())
HSC_dir = new_data_dir / str(HSC_id)
HSC_dir.mkdir(exist_ok=True)
image_filenames = list(arch_dir.glob(
str(line_number) + "-cutout-HSC-?-????-pdr1_deep.fits"
))
if len(image_filenames) == 0:
continue
elif len(image_filenames) >1:
raise RuntimeError("Too many files for line {} id {}".format(
line_number, HSC_id,
))
image_filename = image_filenames[0]
# rename with HSC id and move to within `new_data_dir`
image_filename.rename(
HSC_dir / image_filename.name.replace(
"{}-cutout".format(line_number),
"{}-cutout".format(HSC_id),
)
)
arch_dir.rmdir()
staging_dir = google_dir.parent / "staging"
num_removed = 0
for staged_dir in staging_dir.iterdir():
if not staged_dir.is_dir(): continue
num_images = len({*staged_dir.glob("*.fits")})
if num_images>5:
raise ValueError("{} has {} fits files".format(staged_dir, num_images))
elif num_images < 5:
print("too few images in {} (n={}); removing".format(
staged_dir,
num_images,
))
num_removed += 1
send2trash.send2trash(str(staged_dir))
else:
staged_dir.rename(staged_dir.parent.parent / "galaxies" / staged_dir.name)
num_removed
pre_linked_ids = {int(path.name) for path in contaminants_path.iterdir() if path.is_dir()}
pre_linked_ids |= {int(path.name) for path in targets_path.iterdir() if path.is_dir()}
len(pre_linked_ids)
narrowband_ids = set(target_ids.HSC_id) | set(contaminant_ids.HSC_id)
len(narrowband_ids)
all_downloaded_ids = {int(path.name) for path in google_dir.iterdir()
if path.is_dir()}
len(all_downloaded_ids)
num_to_link = 0
already_linked = 0
missing = 0
for HSC_id in narrowband_ids:
if HSC_id in pre_linked_ids:
already_linked += 1
if HSC_id not in all_downloaded_ids:
missing += 1
if HSC_id in target_ids.HSC_id.values:
class_path = targets_path
elif HSC_id in contaminant_ids.HSC_id.values:
class_path = contaminants_path
else:
raise ValueError("HSC id {} in neither targets nor contaminants".format(HSC_id))
image_dir = google_dir / str(HSC_id)
new_dir = class_path / image_dir.name
if not new_dir.is_dir():
# new_dir.symlink_to(image_dir, target_is_directory=True)
pass
num_to_link += 1
print("just linked: ", num_to_link)
print("previously linked: ", already_linked)
print("missing: ", missing)
for path in targets_path.iterdir():
if not path.is_dir():
continue
HSC_id = int(path.name)
if HSC_id not in target_ids.HSC_id.values:
raise ValueError("HSC id {} should not be in target path".format(HSC_id))
for path in contaminants_path.iterdir():
if not path.is_dir():
continue
HSC_id = int(path.name)
if HSC_id not in contaminant_ids.HSC_id.values:
raise ValueError("HSC id {} should not be in contaminant path".format(HSC_id))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Filter out bad data
Step2: Create classification labels
Step3: Load the IDs of the narrowband population
Step4: Setup locations of images
Step5: Copy over pre-downloaded images
Step6: Get the images from the quarry
Step7: Make the request via curl
Step8: Remove incomplete dirs, then transfer to main google drive directory
Step9: Now link these new images to the project data directory
Step10: Check that the directory contents are correct
|
6,638
|
<ASSISTANT_TASK:>
Python Code:
from SeisCL import SeisCL
seis = SeisCL()
import numpy as np
import matplotlib.pyplot as plt
seis.csts['ND'] = 3 # Number of dimension
seis.csts['N'] = np.array([300, 300, 300]) # Grid size [NZ, NX, NY]
seis.csts['dt'] = dt = 0.25e-03 # Time step size
seis.csts['dh'] = dh = 2 # Grid spatial spacing
seis.csts['NT'] = NT = 800 # Number of time steps
seis.csts['FDORDER'] = 4 # Order of the finite difference
seis.csts['abs_type'] = 1 # Absorbing boundary type: 1: CPML, 2: Absorbing layer of Cerjan
seis.csts['nab'] = nab = 16 # Width in grid points of the absorbing layer
vp = 3500
vs = 2000
rho = 2000
taup = 0.2
taus = 0.2
vp_a = np.zeros(seis.csts['N']) + vp
vs_a = np.zeros(seis.csts['N']) + vs
rho_a = np.zeros(seis.csts['N']) + rho
taup_a = np.zeros(seis.csts['N']) + taup
taus_a = np.zeros(seis.csts['N']) + taus
model_dict = {"vp": vp_a, "rho": rho_a, "vs": vs_a, "taup": taup_a, "taus": taus_a}
seis.csts['f0'] = f0 = 20
seis.csts['FL'] = np.array([f0])
seis.csts['L'] = 1 #Number of attenuation mechanism (L=0 elastic)
N = seis.csts['N'][0]
nbuf = seis.csts['FDORDER'] * 2
sx = (nab + nbuf) * dh
sy = N // 2 * dh
sz = N // 2 * dh
offmin = 5 * dh
offmax = (N - nab - nbuf) * dh - sx
gx = np.arange(sx + offmin, sx + offmax, dh)
gy = gx * 0 + N // 2 * dh
gz = gx * 0 + N // 2 * dh
offsets = gx-sx
seis.src_pos_all = np.stack([[sx], [sy], [sz], [0], [2]], axis=0)
gsid = np.zeros(np.shape(gx))
gid = np.arange(0, len(gz))
# Why the last 3 variables?
seis.rec_pos_all = np.stack([gx, gy, gz, gsid, gid, gx * 0 + 2,
gx * 0, gx * 0], axis=0)
seis.csts['seisout'] = 1
seis.set_forward(gsid, model_dict, withgrad=False)
seis.execute()
datafd = seis.read_data()
datafd = datafd[-1]
from analytical.viscoelastic import viscoelastic_3D, ricker_wavelet
src = ricker_wavelet(f0, 2*NT, dt)
rec_pos = []
for ii in range(0,len(gx)):
rec_pos.append([offsets[ii], gy[ii]-sy, gz[ii]-sz])
analytic = viscoelastic_3D(vp, vs, rho, taup, taus,
f0, seis.csts['FL'],
dt, rec_pos, src)
analytic = analytic[-1][:NT, :]
datafd = datafd / np.sqrt(np.sum(datafd ** 2))
analytic = analytic / np.sqrt(np.sum(analytic ** 2))
def compare_data(data_fd, analytic, offset, dt, testname):
    """Compare the analytical and finite difference solution."""
err = np.sqrt(np.sum( (data_fd - analytic)**2) / np.sum(analytic**2))
#Plot with shot fd, shot ana, diff
clip = 0.1
vmax = np.max(data_fd) * clip
vmin = -vmax
extent=[np.min(offset), np.max(offset), (data_fd.shape[0]-1)*dt, 0]
fig, ax = plt.subplots(1, 3, figsize=[12, 6])
plt.suptitle(testname)
ax[0].imshow(data_fd, aspect='auto', vmax=vmax, vmin=vmin,
extent=extent, interpolation='bilinear',
cmap=plt.get_cmap('Greys'))
ax[0].set_title("FD solution", fontsize=16, fontweight='bold')
ax[0].set_xlabel("offset (m)")
ax[0].set_ylabel("time (s)")
ax[1].imshow(analytic, aspect='auto', vmax=vmax, vmin=vmin,
extent=extent, interpolation='bilinear',
cmap=plt.get_cmap('Greys'))
ax[1].set_title("Analytic Solution", fontsize=16, fontweight='bold')
ax[1].set_xlabel("offset (m)")
ax[2].imshow(data_fd - analytic, aspect='auto', vmax=vmax, vmin=vmin,
extent=extent, interpolation='bilinear',
cmap=plt.get_cmap('Greys'))
ax[2].set_title("Difference", fontsize=16, fontweight='bold')
ax[2].set_xlabel("offset (m)")
plt.tight_layout(h_pad=2, w_pad=2, pad=3)
plt.show()
compare_data(datafd, analytic, offsets, dt, r'Viscoelastic 3D, $\tau = 0.2$')
taup = 0
taus = 0
seis.csts['L'] = 0
seis.csts['FL'] = np.array([])
taup_a = np.zeros(seis.csts['N']) + taup
taus_a = np.zeros(seis.csts['N']) + taus
model_dict = {"vp": vp_a, "rho": rho_a, "vs": vs_a, "taup": taup_a, "taus": taus_a}
seis.set_forward(gsid, model_dict, withgrad=False)
seis.execute()
datafd = seis.read_data()
datafd = datafd[-1]
analytic = viscoelastic_3D(vp, vs, rho, taup, taus,
f0, seis.csts['FL'],
dt, rec_pos, src)
analytic = analytic[-1][:NT, :]
datafd = datafd / np.sqrt(np.sum(datafd ** 2))
analytic = analytic / np.sqrt(np.sum(analytic ** 2))
compare_data(datafd, analytic, offsets, dt, 'Elastic modelization')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Simulation constants
Step2: Boundary layers
Step3: Model parameters
Step4: Sources
Step5: Simulation
Step6: The result of SeisCL is a .out file which is read with seis.read_data(). Output data is a (NT, N-2*(nab-nbuf)) array which...
Step8: Visualization and comparison
Step9: Elastic modeling
|
6,639
|
<ASSISTANT_TASK:>
Python Code:
# !gcloud compute tpus list --zone=YOUR_ZONE_HERE_SUCH_AS_us-central1-b
# tpu_ip_address='10.1.2.3'
# tpu_cores=8
# # TPU configuration
# %env XRT_TPU_CONFIG=tpu_worker;0;$tpu_ip_address:8470
# # Use bfloat16
# %env XLA_USE_BF16=1
!pip install -U pytorch-lightning --quiet
from pytorch_lightning.utilities.xla_device_utils import XLADeviceUtils
if XLADeviceUtils.tpu_device_exists():
import torch_xla # noqa: F401
import torch
import torch.nn as nn
from torch.nn import functional as F
from torch.utils.data import DataLoader, Dataset, random_split
import pandas as pd
from google.cloud import storage
from pytorch_lightning.core import LightningModule, LightningDataModule
from pytorch_lightning.metrics.functional import accuracy
from pytorch_lightning.trainer.trainer import Trainer
_ = !nproc
tpu_cores = tpu_cores if 'tpu_cores' in vars() else 0
num_cpus = int(_[0])
num_gpus = torch.cuda.device_count()
device = torch.device('cuda') if num_gpus else 'cpu'
print(f'Device: {device}')
print(f'CPUs: {num_cpus}')
print(f'GPUs: {num_gpus}')
print(f'TPUs: {tpu_cores}')
# Public bucket holding data for samples
BUCKET = 'cloud-samples-data'
# Path to the directory inside the public bucket containing the sample data
BUCKET_PATH = 'ai-platform/sonar/'
# Sample data file
FILE = 'sonar.all-data'
bucket = storage.Client().bucket(BUCKET)
blob = bucket.blob(BUCKET_PATH + FILE)
blob.download_to_filename(FILE)
class SonarDataset(Dataset):
def __init__(self, csv_file):
self.dataframe = pd.read_csv(csv_file, header=None)
def __len__(self):
return len(self.dataframe)
def __getitem__(self, idx):
# When iterating through the dataset get the features and targets
features = self.dataframe.iloc[idx, :-1].values.astype(dtype='float64')
# Convert the targets to binary values:
# R = rock --> 0
# M = mine --> 1
target = self.dataframe.iloc[idx, -1:].values
if target[0] == 'R':
target[0] = 0
elif target[0] == 'M':
target[0] = 1
target = target.astype(dtype='float64')
# Load the data as a tensor
data = {'features': torch.from_numpy(features),
'target': target}
return data
class SonarDataModule(LightningDataModule):
def __init__(self, bucket=BUCKET, bucket_path=BUCKET_PATH, file=FILE, batch_size=32, num_workers=0):
super().__init__()
self.batch_size = batch_size
self.num_workers = num_workers
self.bucket = bucket
self.bucket_path = bucket_path
self.file = file
def prepare_data(self):
# Public bucket holding the data
bucket = storage.Client().bucket(self.bucket)
# Path to the data inside the public bucket
blob = bucket.blob(self.bucket_path + self.file)
# Download the data
blob.download_to_filename(self.file)
def setup(self, stage=None):
# Load the data
sonar_dataset = SonarDataset(self.file)
# Create indices for the split
dataset_size = len(sonar_dataset)
test_size = int(0.2 * dataset_size) # Use a test_split of 0.2
val_size = int(0.2 * dataset_size) # Use a test_split of 0.2
train_size = dataset_size - test_size - val_size
# Assign train/test/val datasets for use in dataloaders
self.sonar_train, self.sonar_val, self.sonar_test = random_split(sonar_dataset, [train_size, val_size, test_size])
def train_dataloader(self):
return DataLoader(self.sonar_train, batch_size=self.batch_size, num_workers=self.num_workers)
def val_dataloader(self):
return DataLoader(self.sonar_val, batch_size=self.batch_size, num_workers=self.num_workers)
def test_dataloader(self):
return DataLoader(self.sonar_test, batch_size=self.batch_size, num_workers=self.num_workers)
dm = SonarDataModule(num_workers=num_cpus)
class SonarModel(LightningModule):
def __init__(self):
super().__init__()
# Define PyTorch model
self.model = nn.Sequential(
nn.Linear(60, 60),
nn.ReLU(),
nn.Dropout(p=0.2),
nn.Linear(60, 30),
nn.ReLU(),
nn.Dropout(p=0.2),
nn.Linear(30, 1),
nn.Sigmoid()
)
def forward(self, x):
return self.model(x.float())
def training_step(self, batch, batch_idx):
x, y = batch['features'].float(), batch['target'].float()
y_hat = self(x)
loss = F.binary_cross_entropy(y_hat, y)
return loss
def validation_step(self, batch, batch_idx):
x, y = batch['features'].float(), batch['target'].float()
y_hat = self(x)
loss = F.binary_cross_entropy(y_hat, y)
# Binarize the output
y_hat_binary = y_hat.round()
acc = accuracy(y_hat_binary, y.int())
# Log metrics for TensorBoard
self.log('val_loss', loss, prog_bar=True)
self.log('val_acc', acc, prog_bar=True)
return loss
def test_step(self, batch, batch_idx):
# Reuse validation step
return self.validation_step(batch, batch_idx)
def configure_optimizers(self):
return torch.optim.SGD(self.parameters(), lr=0.01, momentum=0.5, nesterov=False)
model = SonarModel()
epochs = 100
if tpu_cores:
trainer = Trainer(tpu_cores=tpu_cores, max_epochs=epochs)
elif num_gpus:
trainer = Trainer(gpus=num_gpus, accelerator='dp', max_epochs=epochs)
else:
trainer = Trainer(max_epochs=epochs)
trainer.fit(model, dm)
trainer.test(datamodule=dm)
torch.save(model.state_dict(), 'model.pt')
model.load_state_dict(torch.load('model.pt'))
model.eval()
rock_feature = torch.tensor([[3.6800e-02, 4.0300e-02, 3.1700e-02, 2.9300e-02, 8.2000e-02, 1.3420e-01,
1.1610e-01, 6.6300e-02, 1.5500e-02, 5.0600e-02, 9.0600e-02, 2.5450e-01,
1.4640e-01, 1.2720e-01, 1.2230e-01, 1.6690e-01, 1.4240e-01, 1.2850e-01,
1.8570e-01, 1.1360e-01, 2.0690e-01, 2.1900e-02, 2.4000e-01, 2.5470e-01,
2.4000e-02, 1.9230e-01, 4.7530e-01, 7.0030e-01, 6.8250e-01, 6.4430e-01,
7.0630e-01, 5.3730e-01, 6.6010e-01, 8.7080e-01, 9.5180e-01, 9.6050e-01,
7.7120e-01, 6.7720e-01, 6.4310e-01, 6.7200e-01, 6.0350e-01, 5.1550e-01,
3.8020e-01, 2.2780e-01, 1.5220e-01, 8.0100e-02, 8.0400e-02, 7.5200e-02,
5.6600e-02, 1.7500e-02, 5.8000e-03, 9.1000e-03, 1.6000e-02, 1.6000e-02,
8.1000e-03, 7.0000e-03, 1.3500e-02, 6.7000e-03, 7.8000e-03, 6.8000e-03]], dtype=torch.float64, device=device)
rock_prediction = model(rock_feature)
mine_feature = torch.tensor([[5.9900e-02, 4.7400e-02, 4.9800e-02, 3.8700e-02, 1.0260e-01, 7.7300e-02,
8.5300e-02, 4.4700e-02, 1.0940e-01, 3.5100e-02, 1.5820e-01, 2.0230e-01,
2.2680e-01, 2.8290e-01, 3.8190e-01, 4.6650e-01, 6.6870e-01, 8.6470e-01,
9.3610e-01, 9.3670e-01, 9.1440e-01, 9.1620e-01, 9.3110e-01, 8.6040e-01,
7.3270e-01, 5.7630e-01, 4.1620e-01, 4.1130e-01, 4.1460e-01, 3.1490e-01,
2.9360e-01, 3.1690e-01, 3.1490e-01, 4.1320e-01, 3.9940e-01, 4.1950e-01,
4.5320e-01, 4.4190e-01, 4.7370e-01, 3.4310e-01, 3.1940e-01, 3.3700e-01,
2.4930e-01, 2.6500e-01, 1.7480e-01, 9.3200e-02, 5.3000e-02, 8.1000e-03,
3.4200e-02, 1.3700e-02, 2.8000e-03, 1.3000e-03, 5.0000e-04, 2.2700e-02,
2.0900e-02, 8.1000e-03, 1.1700e-02, 1.1400e-02, 1.1200e-02, 1.0000e-02]], dtype=torch.float64, device=device)
mine_prediction = model(mine_feature)
print('Result Values: (Rock: 0) - (Mine: 1)\n')
print(f'Rock Prediction:\n\t{"Rock" if rock_prediction <= 0.5 else "Mine"} - {rock_prediction.item()}')
print(f'Mine Prediction:\n\t{"Rock" if mine_prediction <= 0.5 else "Mine"} - {mine_prediction.item()}')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Update TPU configuration
Step2: Set TPU environment variables
Step3: Install and import packages
Step4: Environment configuration
Step5: Download data
Step6: Define the PyTorch Dataset
Step7: Define a data processing module
Step8: Define a model
Step9: Train and evaluate the model
Step10: Save and load a trained model
Step11: Predict with the model
|
6,640
|
<ASSISTANT_TASK:>
Python Code:
!mkdir followers
!mkdir following
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.action_chains import ActionChains
# These modules are also used by the scraping loops below.
import time
import numpy as np
import pandas as pd
from bs4 import BeautifulSoup
#make sure the path is correct for your chromedriver. can place next to it
# driver = webdriver.Chrome('./chromedriver.exe')
driver = webdriver.Chrome('C:/Program Files/chromedriver/chromedriver.exe')
#opens Chrome, maximize window
driver.get("https://www.twitter.com");
driver.maximize_window()
# sign in
users=['AraKovacs',
'AndreaBocskor',
'katka_cseh',
'AndorDeli',
'dajcstomi',
'DobrevKlara',
'donath_anna',
'GyongyosiMarton',
'GyoriEniko',
'BalazsHidveghi',
'HolvenyiGyorgy',
'JarokaLivia',
'adamkosamep',
'drcsabamolnar',
'sandor_ronai',
'toth_edina',
'trocsanyi',
'istvan_ujhelyi',
'vinczelorant',
'IuliuWinkler'] # put all the user IDs here
# for user in users[1:]:
# print(user)
# for path in ['followers','following']:
# url='https://twitter.com/'+user+'/'+path
# driver.get(url)
# time.sleep(2) # pause for 2 seconds
# actions = ActionChains(driver)
# #press PageDown key on the page 50 times - adjust as necessary
# for i in range(300):
# time.sleep(0.3+np.random.uniform(5)/10.0) # pause for a random time between scrolls
# actions.send_keys(Keys.PAGE_DOWN)
# actions.perform()
# #copy everything in page
# time.sleep(2)
# actions.key_down(Keys.CONTROL).send_keys("a").key_up(Keys.CONTROL).perform();
# time.sleep(2)
# actions.key_down(Keys.CONTROL).send_keys("c").key_up(Keys.CONTROL).perform();
# #get data from clipboard
# win32clipboard.OpenClipboard()
# data = win32clipboard.GetClipboardData()
# win32clipboard.CloseClipboard()
# #save to file
# open(path+'/'+user+'.txt','wb').write(data.encode('utf8'))
# #repeat for following
# time.sleep(3)
# #repeat for next user
# time.sleep(3)
for user in users[2:]:
print(user)
for path in ['followers','following']:
names=[]
ids=[]
descs=[]
url='https://twitter.com/'+user+'/'+path
driver.get(url)
        time.sleep(2) # pause for 2 seconds
actions = ActionChains(driver)
#press PageDown key on the page XX times - adjust as necessary
for i in range(300):
try:
                time.sleep(0.3+np.random.uniform(5)/10.0) # pause for a random time between scrolls
actions.send_keys(Keys.PAGE_DOWN)
actions.perform()
#copy everything in page
time.sleep(2)
soup=BeautifulSoup(driver.page_source, 'html.parser')
links=[i for i in
soup.find('div',{'aria-label':'Timeline: '+path.capitalize()}).find('div').findAll('div',{'class':'css-1dbjc4n r-1iusvr4 r-16y2uox'})]
names+=[i.findChildren("div" , recursive=False)[0].findChildren("span")[1].text for i in links]
ids+=[i.findChildren("div" , recursive=False)[0].findChildren("span")[2].text for i in links]
desc=[]
for i in links:
try:
desc.append(i.findChildren("div" , recursive=False)[1].text)
except:
desc.append('')
descs+=desc
except:
print(user,i)
#compile df
df=pd.DataFrame([names,ids,descs]).T.drop_duplicates()
df.columns=['name','id','desc']
#save to file
df.to_csv(path+'/'+user+'.csv',encoding='utf8')
#repeat for following
time.sleep(3)
#repeat for next user
time.sleep(3)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Make a new, throwaway Twitter account with a throwaway email address from here and a throwaway phone number from here (UK numbers typically work)
|
6,641
|
<ASSISTANT_TASK:>
Python Code:
x = 2
y = 2.0
x = 2
y = 3
z = x / y
type(z)
x = 2
y = 3
z = x * y
type(z)
x = 2.5
y = 3.5
z = x * y
type(z)
x = 2.5
y = 3.5
z = x * y
print("Float z:\t{}\nInteger z:\t{}".format(z, int(z)))
x = "this is a string"
type(x)
x = "some string"
y = "another string"
z = x + " " + y
print(z)
s = "2"
t = "divisor"
x = s / t
s = "2"
x = int(s)
print("x = {} and has type {}.".format(x, type(x)))
x = 2
s = str(x)
print("s = {} and has type {}.".format(s, type(s)))
s = "Some string with WORDS"
print(s.upper()) # make all the letters uppercase
print(s.lower()) # make all the letters lowercase
s1 = " python "
s2 = " python"
s3 = "python "
print("|" + s1.strip() + "|")
print("|" + s2.strip() + "|")
print("|" + s3.strip() + "|")
s = "some string"
t = 'this also works'
s = "some string"
len(s)
x = 2
y = 2
x == y
s1 = "a string"
s2 = "a string"
s1 == s2
s3 = "another string"
s1 == s3
x = 1
y = 2
x < y
x > y
x = 2
y = 3
x <= y
x = 3
x <= y
x = 3.00001
x <= y
s1 = "some string"
s2 = "another string"
s1 > s2
s1 = "Some string"
s1 > s2
# Adds two numbers that are initially strings by converting them to an int and a float,
# then converting the final result to an int and storing it in the variable x.
x = int(int("1345") + float("31.5"))
print(x)
str_length = len("some string")
x = 5
x += 10
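# The statement above is shorthand: "x += 10" means exactly "x = x + 10".
# The other arithmetic operators have the same augmented forms.
y = 12
y -= 2   # same as y = y - 2, so y is now 10
y *= 3   # same as y = y * 3, so y is now 30
y /= 4   # same as y = y / 4, so y is now 7.5 (division always yields a float)
print(y)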
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: It's easy to determine the name of the variable; in this case, the name is $x$. It can be a bit more complicated to determine the type of the variable, as it depends on the value the variable is storing. In this case, it's storing the number 2. Since there's no decimal point on the number, we call this number an integer, or int for short.
Step2: In this example, since y is assigned a value of 2.0, it is referred to as a floating-point variable, or float for short. It doesn't matter that the decimal is 0; internally, Python sees the explicit presence of a decimal and treats the variable y as having type float.
Step3: In this case, we've defined two variables x and y and assigned them integer values, so they are both of type int. However, we've used them both in a division operation and assigned the result to a variable named z. If we were to check the type of z, what type do you think it would be?
Step4: How does that happen? Shouldn't an operation involving two ints produce an int? In general, yes it does. However, in cases where a decimal number is outputted, Python implicitly "promotes" the variable storing the result. This is known as casting, and it can take two forms
Step5: Explicit casting, on the other hand, is a little trickier. In this case, it's you the programmer who are making explicit (hence the name) what type you want your variables to be. Python has a couple special built-in functions for performing explicit casting on variables, and they're named what you would expect
Step6: Whoa! What's going on here?
Step7: Unlike numerical types like ints and floats, you can't really perform arithmetic operations on strings, with one exception
Step8: The + operator, when applied to strings, is called string concatenation. This means, quite literally, that it glues or concatenates two strings together to create a new string. In this case, we took the string in x, concatenated it to an empty space " ", and concatenated that again to the string in y, storing the whole thing in a final string z.
Step9: Casting, however, is alive and well with strings. In particular, if you know the string you're working with is a string representation of a number, you can cast it from a string to a numeric type
Step10: And back again
Step11: Strings also have some useful methods that numeric types don't for doing some basic text processing.
Step12: A very useful method that will come in handy later in the course when we do some text processing is strip(). Often when you're reading text from a file and splitting it into tokens, you're left with strings that have leading or trailing whitespace
Step13: Anyone who looked at these three strings would say they're the same, but the whitespace before and after the word python in each of them results in Python treating them each as unique. Thankfully, we can use the strip method
Step14: You can also delimit strings using either single-quotes or double-quotes. Either is fine and largely depends on your preference.
Step15: Python also has a built-in method len() that can be used to return the length of a string. The length is simply the number of individual characters (including any whitespace) in the string.
Step16: Variable comparisons and Boolean types
Step17: Hooray! The == sign is the equality comparison operator, and it will return True or False depending on whether or not the two values are exactly equal. This works for strings as well
Step18: We can also ask if variables are less than or greater than each other, using the < and > operators, respectively.
Step19: In a small twist of relative magnitude comparisons, we can also ask if something is less than or equal to or greater than or equal to some other value. To do this, in addition to the comparison operators < or >, we also add an equal sign
Step20: Interestingly, these operators also work for strings. Be careful, though
Step21: Part 2
Step22: Comments are important to good coding style and should be used often for clarification. However, even more preferable to the liberal use of comments is a good variable naming convention. For instance, instead of naming a variable "x" or "y" or "c", give it a name that describes its purpose.
Step23: I could've used a comment to explain how this variable was storing the length of the string, but by naming the variable itself in terms of what it was doing, I don't even need such a comment. It's self-evident from the name itself what this variable is doing.
|
6,642
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'dwd', 'sandbox-3', 'ocnbgchem')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
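# Optional helper sketch: filling dozens of properties one cell at a time is repetitive,
# and every cell above follows the same DOC.set_id(...) / DOC.set_value(...) pattern,
# so the same work can be driven from a plain dict. The property id below is one that
# appears above; the value is purely illustrative and not an answer for this model.
def fill_properties(doc, values_by_property):
    for property_id, values in values_by_property.items():
        doc.set_id(property_id)
        for value in values:
            doc.set_value(value)

# Example usage (kept commented out so nothing is set accidentally):
# fill_properties(DOC, {
#     'cmip6.ocnbgchem.key_properties.model_type': ['NPZD'],  # illustrative value only
# })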
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Type
Step7: 1.4. Elemental Stoichiometry
Step8: 1.5. Elemental Stoichiometry Details
Step9: 1.6. Prognostic Variables
Step10: 1.7. Diagnostic Variables
Step11: 1.8. Damping
Step12: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Step13: 2.2. Timestep If Not From Ocean
Step14: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Step15: 3.2. Timestep If Not From Ocean
Step16: 4. Key Properties --> Transport Scheme
Step17: 4.2. Scheme
Step18: 4.3. Use Different Scheme
Step19: 5. Key Properties --> Boundary Forcing
Step20: 5.2. River Input
Step21: 5.3. Sediments From Boundary Conditions
Step22: 5.4. Sediments From Explicit Model
Step23: 6. Key Properties --> Gas Exchange
Step24: 6.2. CO2 Exchange Type
Step25: 6.3. O2 Exchange Present
Step26: 6.4. O2 Exchange Type
Step27: 6.5. DMS Exchange Present
Step28: 6.6. DMS Exchange Type
Step29: 6.7. N2 Exchange Present
Step30: 6.8. N2 Exchange Type
Step31: 6.9. N2O Exchange Present
Step32: 6.10. N2O Exchange Type
Step33: 6.11. CFC11 Exchange Present
Step34: 6.12. CFC11 Exchange Type
Step35: 6.13. CFC12 Exchange Present
Step36: 6.14. CFC12 Exchange Type
Step37: 6.15. SF6 Exchange Present
Step38: 6.16. SF6 Exchange Type
Step39: 6.17. 13CO2 Exchange Present
Step40: 6.18. 13CO2 Exchange Type
Step41: 6.19. 14CO2 Exchange Present
Step42: 6.20. 14CO2 Exchange Type
Step43: 6.21. Other Gases
Step44: 7. Key Properties --> Carbon Chemistry
Step45: 7.2. PH Scale
Step46: 7.3. Constants If Not OMIP
Step47: 8. Tracers
Step48: 8.2. Sulfur Cycle Present
Step49: 8.3. Nutrients Present
Step50: 8.4. Nitrous Species If N
Step51: 8.5. Nitrous Processes If N
Step52: 9. Tracers --> Ecosystem
Step53: 9.2. Upper Trophic Levels Treatment
Step54: 10. Tracers --> Ecosystem --> Phytoplankton
Step55: 10.2. Pft
Step56: 10.3. Size Classes
Step57: 11. Tracers --> Ecosystem --> Zooplankton
Step58: 11.2. Size Classes
Step59: 12. Tracers --> Disolved Organic Matter
Step60: 12.2. Lability
Step61: 13. Tracers --> Particules
Step62: 13.2. Types If Prognostic
Step63: 13.3. Size If Prognostic
Step64: 13.4. Size If Discrete
Step65: 13.5. Sinking Speed If Prognostic
Step66: 14. Tracers --> Dic Alkalinity
Step67: 14.2. Abiotic Carbon
Step68: 14.3. Alkalinity
|
6,643
|
<ASSISTANT_TASK:>
Python Code:
import pandas
import graphistry
try:
from urllib.parse import unquote # Python 3
except ImportError:
from urllib import unquote # Python 2
# To specify Graphistry account & server, use:
# graphistry.register(api=3, username='...', password='...', protocol='https', server='hub.graphistry.com')
# For more options, see https://github.com/graphistry/pygraphistry#configure
url = 'http://www.secrepo.com/self.logs/2015/access.log.2015-02-14.gz'
def parseApacheLogs(filename):
fields = ['host', 'identity', 'user', 'time_part1', 'time_part2', 'cmd_path_proto',
'http_code', 'response_bytes', 'referer', 'user_agent', 'unknown']
data = pandas.read_csv(url, compression='gzip', sep=' ', header=None, names=fields, na_values=['-'])
# Pandas' parser mistakenly splits the date into two columns, so we must concatenate them
time = data.time_part1 + data.time_part2
time_trimmed = time.map(lambda s: s.strip('[]').split('-')[0]) # Drop the timezone for simplicity
data['time'] = pandas.to_datetime(time_trimmed, format='%d/%b/%Y:%H:%M:%S')
# Split column `cmd_path_proto` into three columns, and decode the URL (ex: '%20' => ' ')
data['command'], data['path'], data['protocol'] = zip(*data['cmd_path_proto'].str.split().tolist())
data['path'] = data['path'].map(lambda s: unquote(s))
# Drop the fixed columns and any empty ones
data1 = data.drop(['time_part1', 'time_part2', 'cmd_path_proto'], axis=1)
return data1.dropna(axis=1, how='all')
logs = parseApacheLogs(url)
logs[:3]
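# A quick sanity check on the parsed frame before any graphing (uses `logs` from above):
# the busiest client hosts and the distribution of HTTP status codes.
print(logs['host'].value_counts().head(10))
print(logs['http_code'].value_counts())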
def host2pathGraph(logs):
def getEdgeTable(logs):
edges = logs.copy()
# Color edges by HTTP result code
http_code_to_color = {code: color for color, code in enumerate(edges['http_code'].unique())}
edges['ecolor'] = edges['http_code'].map(lambda code: http_code_to_color[code])
return edges
def getNodeTable(edges):
nodes0 = logs['host'].to_frame('nodeid')
nodes0['pcolor'] = 96000
nodes1 = logs['path'].to_frame('nodeid')
nodes1['pcolor'] = 96001
return pandas.concat([nodes0, nodes1], ignore_index=True).drop_duplicates()
edges = getEdgeTable(logs)
nodes = getNodeTable(edges)
return (edges, nodes)
g = graphistry.bind(source='host', destination='path', node='nodeid', \
edge_color='ecolor', point_color='pcolor')
g.plot(*host2pathGraph(logs))
#Bundle edges into a Pandas group when they share the same attributes like 'host' and 'path'
grouped_logs = logs.groupby(['host', 'path', 'user_agent', 'command', 'protocol', 'http_code'])
# Make dataframes count, min_time, max_time, and referer that are indexed by the groupby keys.
count = grouped_logs.size().to_frame('count')
min_time = grouped_logs['time'].agg('min').to_frame('time (min)')
max_time = grouped_logs['time'].agg('max').to_frame('time (max)')
referer = grouped_logs['referer'].agg(pandas.Series.mode).apply(lambda x: str(x)).to_frame('referer')
# Join into one table based on the same groupby keys
# We remove the indexes (via reset_index) since we do not need them anymore.
summary = count.join([min_time, max_time, referer]).reset_index()
summary[:3]
g.plot(*host2pathGraph(summary))
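# Another lens on the same table: keeping only error responses (assuming 4xx/5xx codes
# mark failed or suspicious requests) usually gives a much smaller, easier-to-read graph.
error_summary = summary[summary['http_code'] >= 400]
print(len(error_summary), 'error rows out of', len(summary))
# g.plot(*host2pathGraph(error_summary))  # uncomment to plot only the failing requests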
def path2pathGraph(summary):
host2path = summary[['host', 'path']].copy()
host2path['path'] = host2path['path'].map(lambda p: p.split('?')[0])
sessions = pandas.merge(host2path, host2path, on='host').drop_duplicates()
host2color = {host: 265000 + index for index, host in enumerate(sessions.host.unique())}
sessions['ecolor'] = sessions['host'].map(lambda x: host2color[x])
return sessions
sessionEdges = path2pathGraph(summary)
sessionEdges[:3]
graphistry.bind(source='path_x', destination='path_y', edge_color='ecolor').plot(sessionEdges)
graphistry.bind(source='host', destination='user_agent').plot(summary)
hg = graphistry.hypergraph(
logs,
entity_types=['host', 'path', 'referer', 'user_agent'],
direct=True,
opts={
'EDGES': {
'host': ['path', 'user_agent'],
'user_agent': ['path'],
'referer': ['path']
},
'CATEGORIES': {
'url': ['path', 'referer']
}
})
hg['graph'].plot()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Download+Parse Apache Logs to Create a Pandas Dataframe
Step2: Graph connecting Hosts to URLs
Step3: Graph connecting Hosts to URLs
Step4: Plot. For an even cleaner view, in the visualization, try using a histogram filter to only show nodes with a degree of 100 or less.
Step5: Switching Lenses
Step6: Explore In-Tool for Deeper Insights
Step7: Multiple views, simultaneously
|
6,644
|
<ASSISTANT_TASK:>
Python Code:
!pip install matplotlib==3.2.2
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import tensorflow_datasets as tfds
import tensorflow_hub as hub
#@title Helper function for displaying examples
def plot(examples, predictions=None):
# Get the images, labels, and optionally predictions
images = examples['image']
labels = examples['label']
batch_size = len(images)
if predictions is None:
predictions = batch_size * [None]
# Configure the layout of the grid
x = np.ceil(np.sqrt(batch_size))
y = np.ceil(batch_size / x)
fig = plt.figure(figsize=(x * 6, y * 7))
for i, (image, label, prediction) in enumerate(zip(images, labels, predictions)):
# Render the image
ax = fig.add_subplot(x, y, i+1)
ax.imshow(image, aspect='auto')
ax.grid(False)
ax.set_xticks([])
ax.set_yticks([])
# Display the label and optionally prediction
x_label = 'Label: ' + name_map[class_names[label]]
if prediction is not None:
x_label = 'Prediction: ' + name_map[class_names[prediction]] + '\n' + x_label
ax.xaxis.label.set_color('green' if label == prediction else 'red')
ax.set_xlabel(x_label)
plt.show()
dataset, info = tfds.load('cassava', with_info=True)
info
# Extend the cassava dataset classes with 'unknown'
class_names = info.features['label'].names + ['unknown']
# Map the class names to human readable names
name_map = dict(
cmd='Mosaic Disease',
cbb='Bacterial Blight',
cgm='Green Mite',
cbsd='Brown Streak Disease',
healthy='Healthy',
unknown='Unknown')
print(len(class_names), 'classes:')
print(class_names)
print([name_map[name] for name in class_names])
def preprocess_fn(data):
image = data['image']
# Normalize [0, 255] to [0, 1]
image = tf.cast(image, tf.float32)
image = image / 255.
# Resize the images to 224 x 224
image = tf.image.resize(image, (224, 224))
data['image'] = image
return data
batch = dataset['validation'].map(preprocess_fn).batch(25).as_numpy_iterator()
examples = next(batch)
plot(examples)
classifier = hub.KerasLayer('https://tfhub.dev/google/cropnet/classifier/cassava_disease_V1/2')
probabilities = classifier(examples['image'])
predictions = tf.argmax(probabilities, axis=-1)
plot(examples, predictions)
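# Beyond the argmax label, it helps to see how confident the classifier is.
# tf.math.top_k returns the three highest-probability classes per image
# (uses `probabilities`, `class_names` and `name_map` from the cells above).
top_probs, top_idx = tf.math.top_k(probabilities, k=3)
for i in range(3):  # first three images only, to keep the output short
    readable = [(name_map[class_names[j]], round(float(p), 3))
                for j, p in zip(top_idx[i].numpy(), top_probs[i].numpy())]
    print('image', i, '->', readable)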
#@title Parameters {run: "auto"}
DATASET = 'cassava' #@param {type:"string"} ['cassava', 'beans', 'i_naturalist2017']
DATASET_SPLIT = 'test' #@param {type:"string"} ['train', 'test', 'validation']
BATCH_SIZE = 32 #@param {type:"integer"}
MAX_EXAMPLES = 1000 #@param {type:"integer"}
def label_to_unknown_fn(data):
data['label'] = 5 # Override label to unknown.
return data
# Preprocess the examples and map the image label to unknown for non-cassava datasets.
ds = tfds.load(DATASET, split=DATASET_SPLIT).map(preprocess_fn).take(MAX_EXAMPLES)
dataset_description = DATASET
if DATASET != 'cassava':
ds = ds.map(label_to_unknown_fn)
dataset_description += ' (labels mapped to unknown)'
ds = ds.batch(BATCH_SIZE)
# Calculate the accuracy of the model
metric = tf.keras.metrics.Accuracy()
for examples in ds:
probabilities = classifier(examples['image'])
predictions = tf.math.argmax(probabilities, axis=-1)
labels = examples['label']
metric.update_state(labels, predictions)
print('Accuracy on %s: %.2f' % (dataset_description, metric.result().numpy()))
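# A per-class view of the same evaluation (this re-iterates `ds`, so it roughly doubles
# the runtime). Rows are true classes, columns are predicted classes, in the order of
# `class_names`.
all_labels, all_predictions = [], []
for examples in ds:
    probabilities = classifier(examples['image'])
    all_predictions.append(tf.math.argmax(probabilities, axis=-1))
    all_labels.append(examples['label'])
confusion = tf.math.confusion_matrix(
    tf.concat(all_labels, axis=0),
    tf.concat(all_predictions, axis=0),
    num_classes=len(class_names))
print(confusion.numpy())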
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The dataset
Step2: Look at the dataset info to check details such as the description, citation, and number of examples.
Step3: The cassava dataset contains images of healthy cassava leaves and of leaves with four different diseases. The model can predict any of these classes, and when it is not confident in its prediction it predicts a sixth class, "unknown".
Step4: Before sending the data to the model, a little preprocessing is needed. The model expects 224 x 224 images with RGB channel values in [0, 1], so normalize and resize the images.
Step5: Let's look at a few examples from the dataset.
Step6: The model
Step7: Evaluation and robustness
|
6,645
|
<ASSISTANT_TASK:>
Python Code:
class Module(object):
def __init__ (self):
self.output = None
self.gradInput = None
self.training = True
Basically, you can think of a module as a black box
which can process `input` data and produce `output` data.
This is like applying a function which is called `forward`:
output = module.forward(input)
The module should be able to perform a backward pass: to differentiate the `forward` function.
Moreover, it should be able to differentiate it if it is a part of a chain (chain rule).
The latter implies there is a gradient from previous step of a chain rule.
gradInput = module.backward(input, gradOutput)
def forward(self, input):
Takes an input object, and computes the corresponding output of the module.
return self.updateOutput(input)
def backward(self,input, gradOutput):
Performs a backpropagation step through the module, with respect to the given input.
This includes
- computing a gradient w.r.t. `input` (it is needed for further backprop),
- computing a gradient w.r.t. parameters (to update parameters while optimizing).
self.updateGradInput(input, gradOutput)
self.accGradParameters(input, gradOutput)
return self.gradInput
def updateOutput(self, input):
Computes the output using the current parameter set of the class and input.
This function returns the result which is stored in the `output` field.
Make sure to both store the data in `output` field and return it.
# The easiest case:
# self.output = input
# return self.output
pass
def updateGradInput(self, input, gradOutput):
Computing the gradient of the module with respect to its own input.
This is returned in `gradInput`. Also, the `gradInput` state variable is updated accordingly.
The shape of `gradInput` is always the same as the shape of `input`.
Make sure to both store the gradients in `gradInput` field and return it.
# The easiest case:
# self.gradInput = gradOutput
# return self.gradInput
pass
def accGradParameters(self, input, gradOutput):
Computing the gradient of the module with respect to its own parameters.
No need to override if module has no parameters (e.g. ReLU).
pass
def zeroGradParameters(self):
Zeroes `gradParams` variable if the module has params.
pass
def getParameters(self):
Returns a list with its parameters.
If the module does not have parameters return empty list.
return []
def getGradParameters(self):
Returns a list with gradients with respect to its parameters.
If the module does not have parameters return empty list.
return []
def training(self):
Sets training mode for the module.
Training and testing behaviour differs for Dropout, BatchNorm.
self.training = True
def evaluate(self):
Sets evaluation mode for the module.
Training and testing behaviour differs for Dropout, BatchNorm.
self.training = False
def __repr__(self):
Pretty printing. Should be overridden in every module if you want
to have readable description.
return "Module"
class Sequential(Module):
This class implements a container, which processes `input` data sequentially.
`input` is processed by each module (layer) in self.modules consecutively.
The resulting array is called `output`.
def __init__ (self):
super(Sequential, self).__init__()
self.modules = []
def add(self, module):
Adds a module to the container.
self.modules.append(module)
def updateOutput(self, input):
Basic workflow of FORWARD PASS:
y_0 = module[0].forward(input)
y_1 = module[1].forward(y_0)
...
output = module[n-1].forward(y_{n-2})
Just write a little loop.
# Your code goes here. ################################################
return self.output
def backward(self, input, gradOutput):
Workflow of BACKWARD PASS:
g_{n-1} = module[n-1].backward(y_{n-2}, gradOutput)
g_{n-2} = module[n-2].backward(y_{n-3}, g_{n-1})
...
g_1 = module[1].backward(y_0, g_2)
gradInput = module[0].backward(input, g_1)
!!!
To each module you need to provide the input that the module saw during the forward pass;
it is used while computing gradients.
Make sure that the input for the `i-th` layer is the output of `module[i-1]` (just the same input as in the forward pass)
and NOT the `input` to this Sequential module.
!!!
# Your code goes here. ################################################
return self.gradInput
def zeroGradParameters(self):
for module in self.modules:
module.zeroGradParameters()
def getParameters(self):
Should gather all parameters in a list.
return [x.getParameters() for x in self.modules]
def getGradParameters(self):
Should gather all gradients w.r.t parameters in a list.
return [x.getGradParameters() for x in self.modules]
def __repr__(self):
string = "".join([str(x) + '\n' for x in self.modules])
return string
def __getitem__(self,x):
return self.modules.__getitem__(x)
class Linear(Module):
A module which applies a linear transformation
A common name is fully-connected layer, InnerProductLayer in caffe.
The module should work with 2D input of shape (n_samples, n_feature).
def __init__(self, n_in, n_out):
super(Linear, self).__init__()
# This is a nice initialization
stdv = 1./np.sqrt(n_in)
self.W = np.random.uniform(-stdv, stdv, size = (n_out, n_in))
self.b = np.random.uniform(-stdv, stdv, size = n_out)
self.gradW = np.zeros_like(self.W)
self.gradb = np.zeros_like(self.b)
def updateOutput(self, input):
# Your code goes here. ################################################
return self.output
def updateGradInput(self, input, gradOutput):
# Your code goes here. ################################################
return self.gradInput
def accGradParameters(self, input, gradOutput):
# Your code goes here. ################################################
pass
def zeroGradParameters(self):
self.gradW.fill(0)
self.gradb.fill(0)
def getParameters(self):
return [self.W, self.b]
def getGradParameters(self):
return [self.gradW, self.gradb]
def __repr__(self):
s = self.W.shape
q = 'Linear %d -> %d' %(s[1],s[0])
return q
class SoftMax(Module):
def __init__(self):
super(SoftMax, self).__init__()
def updateOutput(self, input):
# start with normalization for numerical stability
self.output = np.subtract(input, input.max(axis=1, keepdims=True))
# Your code goes here. ################################################
return self.output
def updateGradInput(self, input, gradOutput):
# Your code goes here. ################################################
return self.gradInput
def __repr__(self):
return "SoftMax"
class BatchMeanSubtraction(Module):
def __init__(self, alpha = 0.):
super(BatchMeanSubtraction, self).__init__()
self.alpha = alpha
self.old_mean = None
def updateOutput(self, input):
# Your code goes here. ################################################
return self.output
def updateGradInput(self, input, gradOutput):
# Your code goes here. ################################################
return self.gradInput
def __repr__(self):
return "BatchMeanNormalization"
class Dropout(Module):
def __init__(self, p=0.5):
super(Dropout, self).__init__()
self.p = p
self.mask = None
def updateOutput(self, input):
# Your code goes here. ################################################
return self.output
def updateGradInput(self, input, gradOutput):
# Your code goes here. ################################################
return self.gradInput
def __repr__(self):
return "Dropout"
class ReLU(Module):
def __init__(self):
super(ReLU, self).__init__()
def updateOutput(self, input):
self.output = np.maximum(input, 0)
return self.output
def updateGradInput(self, input, gradOutput):
self.gradInput = np.multiply(gradOutput , input > 0)
return self.gradInput
def __repr__(self):
return "ReLU"
class LeakyReLU(Module):
def __init__(self, slope = 0.03):
super(LeakyReLU, self).__init__()
self.slope = slope
def updateOutput(self, input):
# Your code goes here. ################################################
return self.output
def updateGradInput(self, input, gradOutput):
# Your code goes here. ################################################
return self.gradInput
def __repr__(self):
return "LeakyReLU"
class ELU(Module):
def __init__(self, alpha = 1.0):
super(ELU, self).__init__()
self.alpha = alpha
def updateOutput(self, input):
# Your code goes here. ################################################
return self.output
def updateGradInput(self, input, gradOutput):
# Your code goes here. ################################################
return self.gradInput
def __repr__(self):
return "ELU"
class SoftPlus(Module):
def __init__(self):
super(SoftPlus, self).__init__()
def updateOutput(self, input):
# Your code goes here. ################################################
return self.output
def updateGradInput(self, input, gradOutput):
# Your code goes here. ################################################
return self.gradInput
def __repr__(self):
return "SoftPlus"
class Criterion(object):
def __init__ (self):
self.output = None
self.gradInput = None
def forward(self, input, target):
Given an input and a target, compute the loss function
associated to the criterion and return the result.
For consistency this function should not be overrided,
all the code goes in `updateOutput`.
return self.updateOutput(input, target)
def backward(self, input, target):
Given an input and a target, compute the gradients of the loss function
associated to the criterion and return the result.
For consistency this function should not be overrided,
all the code goes in `updateGradInput`.
return self.updateGradInput(input, target)
def updateOutput(self, input, target):
Function to override.
return self.output
def updateGradInput(self, input, target):
Function to override.
return self.gradInput
def __repr__(self):
Pretty printing. Should be overridden in every module if you want
to have readable description.
return "Criterion"
class MSECriterion(Criterion):
def __init__(self):
super(MSECriterion, self).__init__()
def updateOutput(self, input, target):
self.output = np.sum(np.power(input - target,2)) / input.shape[0]
return self.output
def updateGradInput(self, input, target):
self.gradInput = (input - target) * 2 / input.shape[0]
return self.gradInput
def __repr__(self):
return "MSECriterion"
class ClassNLLCriterion(Criterion):
def __init__(self):
a = super(ClassNLLCriterion, self)
super(ClassNLLCriterion, self).__init__()
def updateOutput(self, input, target):
# Use this trick to avoid numerical errors
eps = 1e-15
input_clamp = np.clip(input, eps, 1 - eps)
# Your code goes here. ################################################
return self.output
def updateGradInput(self, input, target):
# Use this trick to avoid numerical errors
input_clamp = np.maximum(1e-15, np.minimum(input, 1 - 1e-15) )
# Your code goes here. ################################################
return self.gradInput
def __repr__(self):
return "ClassNLLCriterion"
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step12: Module is an abstract class which defines the fundamental methods necessary for training a neural network. You do not need to change anything here, just read the comments.
Step19: Sequential container
Step21: Layers
Step22: This one is probably the hardest, but like the others it only takes about 5 lines of code in total (see the formulas collected at the end of this list).
Step23: One of the most significant recent ideas to impact neural networks is batch normalization. The idea is simple, yet effective.
Step24: Implement dropout. The idea and implementation are really simple.
Step25: Activation functions
Step26: Implement Leaky Rectified Linear Unit. Experiment with the slope.
Step27: Implement Exponential Linear Units activations.
Step28: Implement SoftPlus activations. Notice how closely they resemble ReLU.
Step34: Criterions
Step35: The MSECriterion, which is basic L2 norm usually used for regression, is implemented here for you.
Step36: Your task is to implement the ClassNLLCriterion. It should implement multiclass log loss. Although there is a sum over y (target) in that formula, most of those terms are zero, so in practice the loss reduces to the log-probability of the true class for each example (the exact formula is written out below).
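For reference, here are the standard formulas behind these two pieces, written out as a sketch (not the assignment's official solution), using the same batch-averaging convention as the MSECriterion above; $p$ denotes the SoftMax probabilities and $y$ the targets (one-hot in this convention):

$$\text{softmax}(x)_k = \frac{e^{x_k}}{\sum_j e^{x_j}}, \qquad \frac{\partial\,\text{softmax}(x)_k}{\partial x_j} = \text{softmax}(x)_k\,\bigl(\delta_{kj} - \text{softmax}(x)_j\bigr)$$

$$L(p, y) = -\frac{1}{N}\sum_{i=1}^{N}\sum_{c} y_{i,c}\,\log p_{i,c}, \qquad \frac{\partial L}{\partial p_{i,c}} = -\frac{y_{i,c}}{N\,p_{i,c}}$$

Here $\delta_{kj}$ is 1 when $k = j$ and 0 otherwise, and $N$ is the batch size.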
|
6,646
|
<ASSISTANT_TASK:>
Python Code:
PROJECT_ID = "[your-project-id]"
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
!gcloud config set project $PROJECT_ID
REGION = 'us-central1'
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
BUCKET_NAME = "[your-bucket-name]"
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "[your-bucket-name]":
BUCKET_NAME = PROJECT_ID + "_xai_flowers_" + TIMESTAMP
! gsutil mb -l $REGION gs://$BUCKET_NAME
# Install explainable-ai-sdk
!pip install explainable-ai-sdk
import io
import os
import random
import numpy as np
import PIL
import tensorflow as tf
from matplotlib import pyplot as plt
from base64 import b64encode
# should be >= 2.1
print("Tensorflow version " + tf.__version__)
if tf.__version__ < "2.1":
raise Exception("TF 2.1 or greater is required")
AUTO = tf.data.experimental.AUTOTUNE
print("AUTO", AUTO)
import explainable_ai_sdk
GCS_PATTERN = 'gs://flowers-public/tfrecords-jpeg-192x192-2/*.tfrec'
IMAGE_SIZE = [192, 192]
BATCH_SIZE = 32
VALIDATION_SPLIT = 0.19
CLASSES = ['daisy', 'dandelion', 'roses', 'sunflowers', 'tulips'] # do not change, maps to the labels in the data (folder names)
# Split data files between training and validation
filenames = tf.io.gfile.glob(GCS_PATTERN)
random.shuffle(filenames)
split = int(len(filenames) * VALIDATION_SPLIT)
training_filenames = filenames[split:]
validation_filenames = filenames[:split]
print("Pattern matches {} data files. Splitting dataset into {} training files and {} validation files".format(len(filenames), len(training_filenames), len(validation_filenames)))
validation_steps = int(3670 // len(filenames) * len(validation_filenames)) // BATCH_SIZE
steps_per_epoch = int(3670 // len(filenames) * len(training_filenames)) // BATCH_SIZE
print("With a batch size of {}, there will be {} batches per training epoch and {} batch(es) per validation run.".format(BATCH_SIZE, steps_per_epoch, validation_steps))
# @title display utilities [RUN ME]
def dataset_to_numpy_util(dataset, N):
dataset = dataset.batch(N)
if tf.executing_eagerly():
# In eager mode, iterate in the Dataset directly.
for images, labels in dataset:
numpy_images = images.numpy()
numpy_labels = labels.numpy()
break
else:
# In non-eager mode, must get the TF node that
# yields the next item and run it in a tf.Session.
get_next_item = dataset.make_one_shot_iterator().get_next()
with tf.Session() as ses:
numpy_images, numpy_labels = ses.run(get_next_item)
return numpy_images, numpy_labels
def title_from_label_and_target(label, correct_label):
label = np.argmax(label, axis=-1) # one-hot to class number
correct_label = np.argmax(correct_label, axis=-1) # one-hot to class number
correct = (label == correct_label)
return "{} [{}{}{}]".format(CLASSES[label], str(correct), ', shoud be ' if not correct else '',
CLASSES[correct_label] if not correct else ''), correct
def display_one_flower(image, title, subplot, red=False):
plt.subplot(subplot)
plt.axis('off')
plt.imshow(image)
plt.title(title, fontsize=16, color='red' if red else 'black')
return subplot + 1
def display_9_images_from_dataset(dataset):
subplot = 331
plt.figure(figsize=(13, 13))
images, labels = dataset_to_numpy_util(dataset, 9)
for i, image in enumerate(images):
title = CLASSES[np.argmax(labels[i], axis=-1)]
subplot = display_one_flower(image, title, subplot)
if i >= 8:
break
plt.tight_layout()
plt.subplots_adjust(wspace=0.1, hspace=0.1)
plt.show()
def display_9_images_with_predictions(images, predictions, labels):
subplot = 331
plt.figure(figsize=(13, 13))
for i, image in enumerate(images):
title, correct = title_from_label_and_target(predictions[i], labels[i])
subplot = display_one_flower(image, title, subplot, not correct)
if i >= 8:
break
plt.tight_layout()
plt.subplots_adjust(wspace=0.1, hspace=0.1)
plt.show()
def display_training_curves(training, validation, title, subplot):
if subplot % 10 == 1: # set up the subplots on the first call
plt.subplots(figsize=(10, 10), facecolor='#F0F0F0')
plt.tight_layout()
ax = plt.subplot(subplot)
ax.set_facecolor('#F8F8F8')
ax.plot(training)
ax.plot(validation)
ax.set_title('model ' + title)
ax.set_ylabel(title)
ax.set_xlabel('epoch')
ax.legend(['train', 'valid.'])
def read_tfrecord(example):
features = {
"image": tf.io.FixedLenFeature([], tf.string), # tf.string means bytestring
"class": tf.io.FixedLenFeature([], tf.int64), # shape [] means scalar
"one_hot_class": tf.io.VarLenFeature(tf.float32),
}
example = tf.io.parse_single_example(example, features)
image = tf.image.decode_jpeg(example['image'], channels=3)
image = tf.cast(image, tf.float32) / 255.0 # convert image to floats in [0, 1] range
image = tf.reshape(image, [*IMAGE_SIZE, 3]) # explicit size will be needed for TPU
one_hot_class = tf.sparse.to_dense(example['one_hot_class'])
one_hot_class = tf.reshape(one_hot_class, [5])
return image, one_hot_class
def load_dataset(filenames):
# Read data from TFRecords
dataset = tf.data.Dataset.from_tensor_slices(filenames)
dataset = dataset.interleave(tf.data.TFRecordDataset, cycle_length=16, num_parallel_calls=AUTO) # faster
dataset = dataset.map(read_tfrecord, num_parallel_calls=AUTO)
return dataset
display_9_images_from_dataset(load_dataset(training_filenames))
def get_batched_dataset(filenames):
dataset = load_dataset(filenames)
dataset = dataset.cache() # This dataset fits in RAM
dataset = dataset.repeat()
dataset = dataset.shuffle(2048)
dataset = dataset.batch(BATCH_SIZE)
dataset = dataset.prefetch(AUTO) # prefetch next batch while training (autotune prefetch buffer size)
# For proper ordering of map/batch/repeat/prefetch, see Dataset performance guide: https://www.tensorflow.org/guide/performance/datasets
return dataset
def get_training_dataset():
return get_batched_dataset(training_filenames)
def get_validation_dataset():
return get_batched_dataset(validation_filenames)
some_flowers, some_labels = dataset_to_numpy_util(load_dataset(validation_filenames), 8 * 20)
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dense, GlobalAveragePooling2D, BatchNormalization
from tensorflow.keras.optimizers import Adam
model = Sequential([
# Stem
Conv2D(kernel_size=3, filters=16, padding='same', activation='relu', input_shape=[*IMAGE_SIZE, 3]),
BatchNormalization(),
Conv2D(kernel_size=3, filters=32, padding='same', activation='relu'),
BatchNormalization(),
MaxPooling2D(pool_size=2),
# Conv Group
Conv2D(kernel_size=3, filters=64, padding='same', activation='relu'),
BatchNormalization(),
MaxPooling2D(pool_size=2),
Conv2D(kernel_size=3, filters=96, padding='same', activation='relu'),
BatchNormalization(),
MaxPooling2D(pool_size=2),
# Conv Group
Conv2D(kernel_size=3, filters=128, padding='same', activation='relu'),
BatchNormalization(),
MaxPooling2D(pool_size=2),
Conv2D(kernel_size=3, filters=128, padding='same', activation='relu'),
BatchNormalization(),
# 1x1 Reduction
Conv2D(kernel_size=1, filters=32, padding='same', activation='relu'),
BatchNormalization(),
# Classifier
GlobalAveragePooling2D(),
Dense(5, activation='softmax')
])
model.compile(
optimizer=Adam(lr=0.005, decay=0.98),
loss='categorical_crossentropy',
metrics=['accuracy'])
model.summary()
EPOCHS = 20 # Train for 60 epochs for higher accuracy, 20 should get you ~75%
with tf.device("/GPU:0"):
history = model.fit(get_training_dataset(), steps_per_epoch=steps_per_epoch, epochs=EPOCHS,
validation_data=get_validation_dataset(), validation_steps=validation_steps)
# Randomize the input so that you can execute multiple times to change results
permutation = np.random.permutation(8 * 20)
some_flowers, some_labels = (some_flowers[permutation], some_labels[permutation])
predictions = model.predict(some_flowers, batch_size=16)
evaluations = model.evaluate(some_flowers, some_labels, batch_size=16)
print(np.array(CLASSES)[np.argmax(predictions, axis=-1)].tolist())
print('[val_loss, val_acc]', evaluations)
display_9_images_with_predictions(some_flowers, predictions, some_labels)
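# A quick per-class breakdown of the local predictions above (uses `predictions` and
# `some_labels` from the previous cells; both cover the same 160 validation images).
predicted_classes = np.argmax(predictions, axis=-1)
true_classes = np.argmax(some_labels, axis=-1)
for class_index, class_name in enumerate(CLASSES):
    mask = true_classes == class_index
    if mask.sum() == 0:
        continue
    class_accuracy = np.mean(predicted_classes[mask] == true_classes[mask])
    print('%-12s accuracy: %.2f (%d images)' % (class_name, class_accuracy, mask.sum()))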
export_path = 'gs://' + BUCKET_NAME + '/explanations/mymodel'
def _preprocess(bytes_input):
decoded = tf.io.decode_jpeg(bytes_input, channels=3)
decoded = tf.image.convert_image_dtype(decoded, tf.float32)
resized = tf.image.resize(decoded, size=(192, 192))
return resized
@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
def preprocess_fn(bytes_inputs):
with tf.device("cpu:0"):
decoded_images = tf.map_fn(_preprocess, bytes_inputs, dtype=tf.float32)
return {"numpy_inputs": decoded_images} # User needs to make sure the key matches model's input
m_call = tf.function(model.call).get_concrete_function([tf.TensorSpec(shape=[None, 192, 192, 3], dtype=tf.float32, name="numpy_inputs")])
@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
def serving_fn(bytes_inputs):
images = preprocess_fn(bytes_inputs)
prob = m_call(**images)
return prob
tf.saved_model.save(model, export_path, signatures={
'serving_default': serving_fn,
'xai_preprocess': preprocess_fn, # Required for XAI
'xai_model': m_call # Required for XAI
})
! saved_model_cli show --dir $export_path --all
loaded = tf.saved_model.load(export_path)
input_name = list(loaded.signatures['xai_model'].structured_input_signature[1].keys())[0]
print(input_name)
output_name = list(loaded.signatures['xai_model'].structured_outputs.keys())[0]
print(output_name)
preprocess_name = list(loaded.signatures['xai_preprocess'].structured_input_signature[1].keys())[0]
print(preprocess_name)
from explainable_ai_sdk.metadata.tf.v2 import SavedModelMetadataBuilder
# We want to explain 'xai_model' signature.
builder = SavedModelMetadataBuilder(export_path, signature_name='xai_model')
random_baseline = np.random.rand(192, 192, 3)
builder.set_image_metadata(
'numpy_inputs',
input_baselines=[random_baseline.tolist()])
builder.save_metadata(export_path)
import datetime
MODEL = 'flowers' + TIMESTAMP
print(MODEL)
# Create the model if it doesn't exist yet (you only need to run this once)
! gcloud ai-platform models create $MODEL --enable-logging --region=$REGION
# Each time you create a version the name should be unique
IG_VERSION = 'v_ig'
! gcloud beta ai-platform versions create $IG_VERSION \
--model $MODEL \
--origin $export_path \
--runtime-version 2.3 \
--framework TENSORFLOW \
--python-version 3.7 \
--machine-type n1-standard-4 \
--explanation-method integrated-gradients \
--num-integral-steps 25 \
--region=$REGION
# Make sure the IG model deployed correctly. State should be `READY` in the following log
! gcloud ai-platform versions describe $IG_VERSION --model $MODEL --region=$REGION
# Each time you create a version the name should be unique
XRAI_VERSION = 'v_xrai'
# Create the XRAI version with gcloud
! gcloud beta ai-platform versions create $XRAI_VERSION \
--model $MODEL \
--origin $export_path \
--runtime-version 2.3 \
--framework TENSORFLOW \
--python-version 3.7 \
--machine-type n1-standard-4 \
--explanation-method xrai \
--num-integral-steps 25 \
--region=$REGION
# Make sure the XRAI model deployed correctly. State should be `READY` in the following log
! gcloud ai-platform versions describe $XRAI_VERSION --model $MODEL --region $REGION
# Download test flowers from public bucket
! mkdir flowers
! gsutil -m cp gs://flowers_model/test_flowers/* ./flowers
# Resize the images to what your model is expecting (192,192)
test_filenames = []
for i in os.listdir('flowers'):
img_path = 'flowers/' + i
with PIL.Image.open(img_path) as ex_img:
resize_img = ex_img.resize([192, 192])
resize_img.save(img_path)
test_filenames.append(img_path)
# Prepare your images to send to your Cloud model
instances = []
for image_path in test_filenames:
img_bytes = tf.io.read_file(image_path)
b64str = b64encode(img_bytes.numpy()).decode('utf-8')
instances.append({preprocess_name: {'b64': b64str}})
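# Optional local sanity check (uses `model`, `test_filenames` and `CLASSES` from above):
# run the in-memory Keras model on the same resized test images, so the cloud
# explanations requested below can be compared against a known local prediction.
local_batch = np.stack([
    np.asarray(PIL.Image.open(p).convert('RGB'), dtype=np.float32) / 255.0
    for p in test_filenames])
local_predictions = model.predict(local_batch)
for p, probs in zip(test_filenames, local_predictions):
    print(p, '->', CLASSES[int(np.argmax(probs))])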
# IG EXPLANATIONS
remote_ig_model = explainable_ai_sdk.load_model_from_ai_platform(project=PROJECT_ID,
model=MODEL,
version=IG_VERSION,
region=REGION)
ig_response = remote_ig_model.explain(instances)
for response in ig_response:
response.visualize_attributions()
# XRAI EXPLANATIONS
remote_xrai_model = explainable_ai_sdk.load_model_from_ai_platform(project=PROJECT_ID,
model=MODEL,
version=XRAI_VERSION,
region=REGION)
xrai_response = remote_xrai_model.explain(instances)
for response in xrai_response:
response.visualize_attributions()
for i, response in enumerate(ig_response):
attr = response.get_attribution()
baseline_score = attr.baseline_score
predicted_score = attr.example_score
print('Baseline score: ', baseline_score)
print('Predicted score: ', predicted_score)
print('Predicted - Baseline: ', predicted_score - baseline_score, '\n')
# Convert your baseline from above to a base64 string
rand_test_img = PIL.Image.fromarray((random_baseline * 255).astype('uint8'))
buffer = io.BytesIO()
rand_test_img.save(buffer, format="PNG")
new_image_string = b64encode(np.asarray(buffer.getvalue())).decode("utf-8")
# Preview it
plt.imshow(rand_test_img)
sanity_check_img = {preprocess_name: {'b64': new_image_string}}
# Sanity Check explanations EXPLANATIONS
sanity_check_response = remote_ig_model.explain([sanity_check_img])
sanity_check_response[0].visualize_attributions()
attr = sanity_check_response[0].get_attribution()
baseline_score = attr.baseline_score
example_score = attr.example_score
print(abs(baseline_score - example_score))
# Delete model version resource
! gcloud ai-platform versions delete $IG_VERSION --quiet --model $MODEL
! gcloud ai-platform versions delete $XRAI_VERSION --quiet --model $MODEL
# Delete model resource
! gcloud ai-platform models delete $MODEL --quiet
# Delete Cloud Storage objects that were created
! gsutil -m rm -r gs://$BUCKET_NAME
# Copyright 2021 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Region
Step2: Timestamp
Step3: Create a Cloud Storage bucket
Step4: Only if your bucket doesn't already exist
Step5: Import libraries
Step6: Download and preprocess the data
Step7: The following cell contains some image visualization utility functions. This code isn't essential to training or deploying the model.
Step8: Read images and labels from TFRecords
Step9: Use the visualization utility function provided earlier to preview flower images with their labels.
Step10: Create training and validation datasets
Step11: Build, train, and evaluate the model
Step12: Train the model
Step13: Visualize local predictions
Step14: Export the model as a TF 2.x SavedModel
Step15: Get input and output signatures
Step16: You can get the signatures of your model's input and output layers by reloading the model into memory, and querying it for the signatures corresponding to each layer.
Step17: Generate explanation metadata
Step18: Deploy model to AI Explanations
Step19: Create explainable model versions
Step20: Deploy an XRAI model
Step21: Get predictions and explanations
Step22: Format your explanation request
Step23: Send the explanations request and visualize
Step24: Check explanations and baselines
Step25: Explain the baseline image
Step26: Send the explanation request for the baseline image. (To check a baseline image for XRAI, change IG_VERSION to XRAI_VERSION below.)
Step27: Visualize the explanation for your random baseline image, highlighting the pixels that contributed to the prediction
Step28: The difference between your model's predicted score and the baseline score for this image should be close to 0. Run the following cell to confirm. If there is a difference between these two values, try increasing the number of integral steps used when you deploy your model.
Step29: Cleaning up
Step30: If your Cloud Storage bucket doesn't contain any other objects and you would like to delete it, run gsutil rm -r gs
|
6,647
|
<ASSISTANT_TASK:>
Python Code:
print 10*100/10+5.75-5.5
2.0/3
print 4*(6+5)
print 4*6+5
print 4+6*5
print 3+1.5+4
print 2**(0.5)
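# (Added sketch) One way to check the type questions above with code — note that this
# exercise assumes a Python 2 interpreter, where print is a statement:
print type(2.0/3)        # <type 'float'>
print type(4*6+5)        # <type 'int'>
print type(3 + 1.5 + 4)  # <type 'float'>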
s = 'hello'
# Print out 'e' using indexing
print s[1]
# Code here
s ='hello'
# Reverse the string using indexing
print s[::-1]
print s[:3:-1]
# Code here
s ='hello'
# Print out the
print s[4]
print s[-1]
# Code here
a = list([0,0,0])
print a
a = list([0,0])
print a
a.append(0)
print a
l = [1,2,[3,4,'hello']]
l[2][2] = 'goodbye'
print l
l = [3,4,5,5,6,1]
print l
l.sort()
print l
l = [3,4,5,5,6,1]
print sorted(l)
print l
d = {'simple_key':'hello'}
# Grab 'hello'
print d['simple_key']
d = {'k1':{'k2':'hello'}}
# Grab 'hello'
print d['k1']['k2']
# Getting a little tricker
d = {'k1':[{'nest_key':['this is deep',['hello']]}]}
# Grab hello
print d['k1'][0]['nest_key'][1][0]
# This will be hard and annoying!
d = {'k1':[1,2,{'k2':['this is tricky',{'tough':[1,2,['hello']]}]}]}
print d['k1'][2]['k2'][1]['tough'][2][0]
l = [1,2,2,33,4,4,11,22,3,3,2]
set(l)
# Answer before running cell
2 > 3
# Answer before running cell
3 <= 2
# Answer before running cell
3 == 2.0
# Answer before running cell
3.0 == 3
# Answer before running cell
4**0.5 != 2
# two nested lists
l_one = [1,2,[3,4]]
l_two = [1,2,{'k1':4}]
#True or False?
l_one[2][0] >= l_two[2]['k1']
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Explain what the cell below will produce and why. Can you change it so the answer is correct?
Step2: Answer these 3 questions without typing code. Then type code to check your answer.
Step3: What is the type of the result of the expression 3 + 1.5 + 4?
Step4: Strings
Step5: Reverse the string 'hello' using indexing
Step6: Given the string hello, give two methods of producing the letter 'o' using indexing.
Step7: Lists
Step8: Reassign 'hello' in this nested list to say 'goodbye' item in this list
Step9: Sort the list below
Step10: Dictionaries
Step11: Can you sort a dictionary? Why or why not?
Step12: Booleans
Step13: Final Question
|
6,648
|
<ASSISTANT_TASK:>
Python Code:
import requests
import json
import random
import getpass
#import couchdb
import pickle
import getpass
#!flask/bin/python
#from flask import Flask, jsonify
myusr = getpass.getuser()
print(myusr)
#couch = couchdb.Server()
with open('/home/{}/prn.pickle'.format(myusr), 'rb') as handle:
prnlis = pickle.load(handle)
#db = couch.create('redtube')
#db = couch['redtube']
payload = {'output' : 'json', 'data' : 'redtube.Videos.searchVideos', 'page' : 1}
getprn = requests.get('http://api.redtube.com/', params = payload)
daprn = getprn.json()
levid = len(daprn['videos'])
porndick = dict()
#for lev in range(0, levid):
# print(daprn['videos'][lev]['video'])
# prntit = (daprn['videos'][lev]['video']['title'])
# prnnow = prntit.replace(' ', '-')
# prnlow = prnnow.lower()
# print(prnlow)
# try:
# somelis = list()
# for dapr in daprn['videos'][lev]['video']['tags']:
# print(dapr['tag_name'])
# somelis.append(dapr['tag_name'])
# porndick.update({daprn['videos'][lev]['video']['video_id'] : {'tags' : ", ".join(str(x) for x in somelis)}})
#db.save(porndick)
#try:
# db = couch.create(prnlow)
#except PreconditionFailed:
# db = couch[prnlow]
#db.save({daprn['videos'][lev]['video']['video_id'] : {'tags' : ", ".join(str(x) for x in somelis)}})
# except KeyError:
# continue
#for i in db:
# print(i)
#db.save(porndick)
#for i in db:
# print(db[i])
#print(pornd['tags'])
#loaPrn = json.loads(getPrn.text)
#print loaUrl
lenvid = len(daprn[u'videos'])
lenvid
#aldic = dict()
with open('/home/{}/prn3.pickle'.format(myusr), 'rb') as handles:
aldic = pickle.load(handles)
import shutil
for napn in range(0, lenvid):
print(daprn[u'videos'][napn]['video']['url'])
print(daprn[u'videos'][napn]['video']['title'])
try:
letae = len(daprn[u'videos'][napn]['video']['tags'])
tagna = (daprn[u'videos'][napn]['video']['tags'])
reqbru = requests.get('http://api.giphy.com/v1/gifs/translate?s={}&api_key=dc6zaTOxFJmzC'.format(tagna))
brujsn = reqbru.json()
print(brujsn['data']['images']['fixed_width']['url'])
gurl = (brujsn['data']['images']['fixed_width']['url'])
gslug = (brujsn['data']['slug'])
#fislg = gslug.repl
try:
somelis = list()
# 'lev' came from a commented-out loop above and is undefined here; use the current index 'napn'
for dapr in daprn['videos'][napn]['video']['tags']:
print(dapr['tag_name'])
somelis.append(dapr['tag_name'])
porndick.update({daprn['videos'][napn]['video']['video_id'] : {'tags' : ", ".join(str(x) for x in somelis)}})
except KeyError:
continue
aldic.update({gslug : gurl})
#print(gurl)
'''
with open('/home/pi/redtube/posts/{}.meta'.format(gslug), 'w') as blmet:
blmet.write('.. title: ' + glug + ' \n' + '.. slug: ' + nameofblogpost + ' \n' + '.. date: ' + str(nowtime) + ' \n' + '.. tags: ' + tagblog + '\n' + '.. link:\n.. description:\n.. type: text')
response = requests.get(gurl, stream=True)#
response
with open('/home/pi/redtube/galleries/{}.gif'.format(gslug), 'wb') as out_file:
shutil.copyfileobj(response.raw, out_file)
del response
tan = tagna.replace(' ', '-')
tanq = tan.lower()
print(tanq)
'''
except KeyError:
continue
with open('/home/{}/prn.pickle'.format(myusr), 'wb') as handle:
pickle.dump(porndick, handle, protocol=pickle.HIGHEST_PROTOCOL)
with open('/home/{}/prn3.pickle'.format(myusr), 'wb') as handle:
pickle.dump(aldic, handle, protocol=pickle.HIGHEST_PROTOCOL)
#db.save(aldic)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Requests and json are the two main modules used for this. Random can also be handy
Step2: Convert it into readable text that you can work with
|
6,649
|
<ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
import numpy as np
import os
import time
path_to_file = tf.keras.utils.get_file('shakespeare.txt', 'https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt')
# TODO 1
# Read, then decode for py2 compat.
text = open(path_to_file, 'rb').read().decode(encoding='utf-8')  # TODO 1 (one possible completion)
# length of text is the number of characters in it
print(f'Length of text: {len(text)} characters')
# Take a look at the first 250 characters in text
print(text[:250])
# The unique characters in the file
vocab = sorted(set(text))
print(f'{len(vocab)} unique characters')
example_texts = ['abcdefg', 'xyz']
# Split the text into tokens
chars = tf.strings.unicode_split(example_texts, input_encoding='UTF-8')  # TODO 2 (one possible completion)
chars
ids_from_chars = tf.keras.layers.StringLookup(
vocabulary=list(vocab), mask_token=None)
ids = ids_from_chars(chars)
ids
chars_from_ids = tf.keras.layers.StringLookup(
vocabulary=ids_from_chars.get_vocabulary(), invert=True, mask_token=None)
chars = chars_from_ids(ids)
chars
tf.strings.reduce_join(chars, axis=-1).numpy()
def text_from_ids(ids):
return tf.strings.reduce_join(chars_from_ids(ids), axis=-1)
all_ids = ids_from_chars(tf.strings.unicode_split(text, 'UTF-8'))
all_ids
ids_dataset = tf.data.Dataset.from_tensor_slices(all_ids)
for ids in ids_dataset.take(10):
print(chars_from_ids(ids).numpy().decode('utf-8'))
seq_length = 100
examples_per_epoch = len(text)//(seq_length+1)
# Convert the individual characters to sequences
sequences = ids_dataset.batch(seq_length+1, drop_remainder=True)  # TODO 3 (one possible completion)
for seq in sequences.take(1):
print(chars_from_ids(seq))
for seq in sequences.take(5):
print(text_from_ids(seq).numpy())
def split_input_target(sequence):
input_text = sequence[:-1]
target_text = sequence[1:]
return input_text, target_text
split_input_target(list("Tensorflow"))
dataset = sequences.map(split_input_target)
for input_example, target_example in dataset.take(1):
print("Input :", text_from_ids(input_example).numpy())
print("Target:", text_from_ids(target_example).numpy())
# Batch size
BATCH_SIZE = 64
# Buffer size to shuffle the dataset
# (TF data is designed to work with possibly infinite sequences,
# so it doesn't attempt to shuffle the entire sequence in memory. Instead,
# it maintains a buffer in which it shuffles elements).
BUFFER_SIZE = 10000
dataset = (
dataset
.shuffle(BUFFER_SIZE)
.batch(BATCH_SIZE, drop_remainder=True)
.prefetch(tf.data.experimental.AUTOTUNE))
dataset
# Length of the vocabulary in chars
vocab_size = len(vocab)
# The embedding dimension
embedding_dim = 256
# Number of RNN units
rnn_units = 1024
class MyModel(tf.keras.Model):
def __init__(self, vocab_size, embedding_dim, rnn_units):
super().__init__(self)
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
self.gru = tf.keras.layers.GRU(rnn_units,
return_sequences=True,
return_state=True)
self.dense = tf.keras.layers.Dense(vocab_size)
def call(self, inputs, states=None, return_state=False, training=False):
x = inputs
x = self.embedding(x, training=training)
if states is None:
states = self.gru.get_initial_state(x)
x, states = self.gru(x, initial_state=states, training=training)
x = self.dense(x, training=training)
if return_state:
return x, states
else:
return x
model = MyModel(
    # Be sure the vocabulary size matches the `StringLookup` layers.
    # TODO 4 (one possible completion):
    vocab_size=len(ids_from_chars.get_vocabulary()),
    embedding_dim=embedding_dim,
    rnn_units=rnn_units)
for input_example_batch, target_example_batch in dataset.take(1):
example_batch_predictions = model(input_example_batch)
print(example_batch_predictions.shape, "# (batch_size, sequence_length, vocab_size)")
model.summary()
sampled_indices = tf.random.categorical(example_batch_predictions[0], num_samples=1)
sampled_indices = tf.squeeze(sampled_indices, axis=-1).numpy()
sampled_indices
print("Input:\n", text_from_ids(input_example_batch[0]).numpy())
print()
print("Next Char Predictions:\n", text_from_ids(sampled_indices).numpy())
loss = tf.losses.SparseCategoricalCrossentropy(from_logits=True)
example_batch_mean_loss = loss(target_example_batch, example_batch_predictions)
print("Prediction shape: ", example_batch_predictions.shape, " # (batch_size, sequence_length, vocab_size)")
print("Mean loss: ", example_batch_mean_loss)
tf.exp(example_batch_mean_loss).numpy()
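# (Added sketch) Quick sanity check described in the notebook text: for an untrained
# model, the exponential of the mean loss should be roughly the vocabulary size.
print("exp(mean loss):", tf.exp(example_batch_mean_loss).numpy(),
      "vocab size:", len(ids_from_chars.get_vocabulary()))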
model.compile(optimizer='adam', loss=loss)
# Directory where the checkpoints will be saved
checkpoint_dir = './training_checkpoints'
# Name of the checkpoint files
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt_{epoch}")  # TODO 5 (one possible completion)
checkpoint_callback = tf.keras.callbacks.ModelCheckpoint(
filepath=checkpoint_prefix,
save_weights_only=True)
EPOCHS = 10
history = model.fit(dataset, epochs=EPOCHS, callbacks=[checkpoint_callback])
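# (Added sketch) If training is interrupted, the weights saved by the checkpoint
# callback can be restored before continuing; tf.train.latest_checkpoint returns the
# most recent checkpoint prefix in the directory, or None if there is none yet.
latest = tf.train.latest_checkpoint(checkpoint_dir)
if latest is not None:
    model.load_weights(latest)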
class OneStep(tf.keras.Model):
def __init__(self, model, chars_from_ids, ids_from_chars, temperature=1.0):
super().__init__()
self.temperature = temperature
self.model = model
self.chars_from_ids = chars_from_ids
self.ids_from_chars = ids_from_chars
# Create a mask to prevent "[UNK]" from being generated.
skip_ids = self.ids_from_chars(['[UNK]'])[:, None]
sparse_mask = tf.SparseTensor(
# Put a -inf at each bad index.
values=[-float('inf')]*len(skip_ids),
indices=skip_ids,
# Match the shape to the vocabulary
dense_shape=[len(ids_from_chars.get_vocabulary())])
self.prediction_mask = tf.sparse.to_dense(sparse_mask)
@tf.function
def generate_one_step(self, inputs, states=None):
# Convert strings to token IDs.
input_chars = tf.strings.unicode_split(inputs, 'UTF-8')
input_ids = self.ids_from_chars(input_chars).to_tensor()
# Run the model.
# predicted_logits.shape is [batch, char, next_char_logits]
predicted_logits, states = self.model(inputs=input_ids, states=states,
return_state=True)
# Only use the last prediction.
predicted_logits = predicted_logits[:, -1, :]
predicted_logits = predicted_logits/self.temperature
# Apply the prediction mask: prevent "[UNK]" from being generated.
predicted_logits = predicted_logits + self.prediction_mask
# Sample the output logits to generate token IDs.
predicted_ids = tf.random.categorical(predicted_logits, num_samples=1)
predicted_ids = tf.squeeze(predicted_ids, axis=-1)
# Convert from token ids to characters
predicted_chars = self.chars_from_ids(predicted_ids)
# Return the characters and model state.
return predicted_chars, states
one_step_model = OneStep(model, chars_from_ids, ids_from_chars)
start = time.time()
states = None
next_char = tf.constant(['ROMEO:'])
result = [next_char]
# Generate the text
for n in range(1000):
    # TODO 6 (one possible completion, mirroring the loop used later in this notebook):
    next_char, states = one_step_model.generate_one_step(next_char, states=states)
    result.append(next_char)
result = tf.strings.join(result)
end = time.time()
print(result[0].numpy().decode('utf-8'), '\n\n' + '_'*80)
print('\nRun time:', end - start)
start = time.time()
states = None
next_char = tf.constant(['ROMEO:', 'ROMEO:', 'ROMEO:', 'ROMEO:', 'ROMEO:'])
result = [next_char]
for n in range(1000):
next_char, states = one_step_model.generate_one_step(next_char, states=states)
result.append(next_char)
result = tf.strings.join(result)
end = time.time()
print(result, '\n\n' + '_'*80)
print('\nRun time:', end - start)
tf.saved_model.save(one_step_model, 'one_step')
one_step_reloaded = tf.saved_model.load('one_step')
states = None
next_char = tf.constant(['ROMEO:'])
result = [next_char]
for n in range(100):
next_char, states = one_step_reloaded.generate_one_step(next_char, states=states)
result.append(next_char)
print(tf.strings.join(result)[0].numpy().decode("utf-8"))
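# (Added sketch) OneStep also takes a `temperature` argument; values below 1.0 make
# the sampling more conservative, values above 1.0 make it more random. For example:
cooler_model = OneStep(model, chars_from_ids, ids_from_chars, temperature=0.5)
states = None
next_char = tf.constant(['ROMEO:'])
result = [next_char]
for n in range(100):
    next_char, states = cooler_model.generate_one_step(next_char, states=states)
    result.append(next_char)
print(tf.strings.join(result)[0].numpy().decode('utf-8'))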
class CustomTraining(MyModel):
@tf.function
def train_step(self, inputs):
inputs, labels = inputs
with tf.GradientTape() as tape:
predictions = self(inputs, training=True)
loss = self.loss(labels, predictions)
grads = tape.gradient(loss, model.trainable_variables)
self.optimizer.apply_gradients(zip(grads, model.trainable_variables))
return {'loss': loss}
model = CustomTraining(
vocab_size=len(ids_from_chars.get_vocabulary()),
embedding_dim=embedding_dim,
rnn_units=rnn_units)
model.compile(optimizer = tf.keras.optimizers.Adam(),
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
model.fit(dataset, epochs=1)
EPOCHS = 10
mean = tf.metrics.Mean()
for epoch in range(EPOCHS):
start = time.time()
mean.reset_states()
for (batch_n, (inp, target)) in enumerate(dataset):
logs = model.train_step([inp, target])
mean.update_state(logs['loss'])
if batch_n % 50 == 0:
template = f"Epoch {epoch+1} Batch {batch_n} Loss {logs['loss']:.4f}"
print(template)
# saving (checkpoint) the model every 5 epochs
if (epoch + 1) % 5 == 0:
model.save_weights(checkpoint_prefix.format(epoch=epoch))
print()
print(f'Epoch {epoch+1} Loss: {mean.result().numpy():.4f}')
print(f'Time taken for 1 epoch {time.time() - start:.2f} sec')
print("_"*80)
model.save_weights(checkpoint_prefix.format(epoch=epoch))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Download the Shakespeare dataset
Step2: Read the data
Step3: Process the text
Step4: Now create the tf.keras.layers.StringLookup layer
Step5: It converts from tokens to character IDs
Step6: Since the goal of this tutorial is to generate text, it will also be important to invert this representation and recover human-readable strings from it. For this you can use tf.keras.layers.StringLookup(..., invert=True).
Step7: This layer recovers the characters from the vectors of IDs, and returns them as a tf.RaggedTensor of characters
Step8: You can use tf.strings.reduce_join to join the characters back into strings.
Step9: The prediction task
Step10: The batch method lets you easily convert these individual characters to sequences of the desired size.
Step11: It's easier to see what this is doing if you join the tokens back into strings
Step12: For training you'll need a dataset of (input, label) pairs, where input and label are sequences and, at each timestep, the label is the input shifted by one character
Step13: Create training batches
Step14: Build The Model
Step15: For each character the model looks up the embedding, runs the GRU one timestep with the embedding as input, and applies the dense layer to generate logits predicting the log-likelihood of the next character
Step16: In the above example the sequence length of the input is 100 but the model can be run on inputs of any length
Step17: To get actual predictions from the model you need to sample from the output distribution, to get actual character indices. This distribution is defined by the logits over the character vocabulary.
Step18: This gives us, at each timestep, a prediction of the next character index
Step19: Decode these to see the text predicted by this untrained model
Step20: Train the model
Step21: A newly initialized model shouldn't be too sure of itself, the output logits should all have similar magnitudes. To confirm this you can check that the exponential of the mean loss is approximately equal to the vocabulary size. A much higher loss means the model is sure of its wrong answers, and is badly initialized
Step22: Configure the training procedure using the tf.keras.Model.compile method. Use tf.keras.optimizers.Adam with default arguments and the loss function.
Step23: Configure checkpoints
Step24: Execute the training
Step25: Generate text
Step26: Run it in a loop to generate some text. Looking at the generated text, you'll see the model knows when to capitalize, make paragraphs and imitates a Shakespeare-like writing vocabulary. With the small number of training epochs, it has not yet learned to form coherent sentences.
Step27: The easiest thing you can do to improve the results is to train it for longer (try EPOCHS = 30).
Step28: Export the generator
Step29: Advanced
Step30: The above implementation of the train_step method follows Keras' train_step conventions. This is optional, but it allows you to change the behavior of the train step and still use keras' Model.compile and Model.fit methods.
Step31: Or if you need more control, you can write your own complete custom training loop
|
6,650
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
%matplotlib inline
class Circle(object):
def __init__(self,radius=3,color='blue'):
self.radius=radius
self.color=color
def add_radius(self,r):
self.radius=self.radius+r
return(self.radius)
def drawCircle(self):
plt.gca().add_patch(plt.Circle((0, 0), radius=self.radius, fc=self.color))
plt.axis('scaled')
plt.show()
RedCircle=Circle(10,'red')
dir(RedCircle)
RedCircle.radius
RedCircle.color
RedCircle.radius=1
RedCircle.radius
RedCircle.drawCircle()
print('Radius of object:',RedCircle.radius)
RedCircle.add_radius(2)
print('Radius of object of after applying the method add_radius(2):',RedCircle.radius)
RedCircle.add_radius(5)
print('Radius of object of after applying the method add_radius(5):',RedCircle.radius)
BlueCircle=Circle(radius=100)
BlueCircle.radius
BlueCircle.color
BlueCircle.drawCircle()
class Rectangle(object):
def __init__(self,width=2,height =3,color='r'):
self.height=height
self.width=width
self.color=color
def drawRectangle(self):
import matplotlib.pyplot as plt
plt.gca().add_patch(plt.Rectangle((0, 0),self.width, self.height ,fc=self.color))
plt.axis('scaled')
plt.show()
SkinnyBlueRectangle= Rectangle(2,10,'blue')
SkinnyBlueRectangle.height
SkinnyBlueRectangle.width
SkinnyBlueRectangle.color
SkinnyBlueRectangle.drawRectangle()
FatYellowRectangle = Rectangle(20,5,'yellow')
FatYellowRectangle.height
FatYellowRectangle.width
FatYellowRectangle.color
FatYellowRectangle.drawRectangle()
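# (Added sketch) The same pattern extends to new methods; for example, a subclass of
# Rectangle that can report its own area:
class RectangleWithArea(Rectangle):
    def area(self):
        return self.width * self.height

GreenRectangle = RectangleWithArea(4, 5, 'green')
print(GreenRectangle.area())  # 20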
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The first step in creating your own class is to use the class keyword, then the name of the class as shown in Figure 4. In this course the class parent will always be object
Step2: Creating an instance of a class Circle
Step3: We can use the dir command to get a list of the object's methods. Many of them are default Python methods.
Step4: We can look at the data attributes of the object
Step5: We can change the object's data attributes
Step6: We can draw the object by using the method drawCircle()
Step7: We can increase the radius of the circle by applying the method add_radius(). Let increases the radius by 2 and then by 5
Step8: Let’s create a blue circle. As the default colour is blue, all we have to do is specify what the radius is
Step9: As before we can access the attributes of the instance of the class by using the dot notation
Step10: We can draw the object by using the method drawCircle()
Step11: Compare the x and y axis of the figure to the figure for RedCircle; they are different.
Step12: Let’s create the object SkinnyBlueRectangle of type Rectangle. Its width will be 2 and height will be 3, and the colour will be blue
Step13: As before we can access the attributes of the instance of the class by using the dot notation
Step14: We can draw the object
Step15: Let’s create the object “FatYellowRectangle” of type Rectangle
Step16: We can access the attributes of the instance of the class by using the dot notation
Step17: We can draw the object
|
6,651
|
<ASSISTANT_TASK:>
Python Code:
# Evaluate this cell to identifiy your form
from dkrz_forms import form_widgets, form_handler, checks
form_infos = form_widgets.show_selection()
# Evaluate this cell to generate your personal form instance
form_info = form_infos[form_widgets.FORMS.value]
sf = form_handler.init_form(form_info)
form = sf.sub.entity_out.report
form.myattribute = "myinformation"
form.mycomment =".."
form.anything_you_like_tag = ".."
form_handler.save_form(sf,"..my comment..") # edit my comment info
form_handler.email_form_info(sf)
form_handler.form_submission(sf)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Edit form information
Step2: Save your form
Step3: Send an email to me to complete the form later on
Step4: officially submit your form
|
6,652
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from IPython.display import display, HTML
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (14.0, 8.0)
import numpy as np
from dtocean_core import start_logging
from dtocean_core.core import Core
from dtocean_core.menu import ModuleMenu, ProjectMenu, ThemeMenu
from dtocean_core.pipeline import Tree
def html_list(x):
message = "<ul>"
for name in x:
message += "<li>{}</li>".format(name)
message += "</ul>"
return message
def html_dict(x):
message = "<ul>"
for name, status in x.iteritems():
message += "<li>{}: <b>{}</b></li>".format(name, status)
message += "</ul>"
return message
# Bring up the logger
start_logging()
new_core = Core()
project_menu = ProjectMenu()
module_menu = ModuleMenu()
theme_menu = ThemeMenu()
pipe_tree = Tree()
project_title = "DTOcean"
new_project = project_menu.new_project(new_core, project_title)
options_branch = pipe_tree.get_branch(new_core, new_project, "System Type Selection")
variable_id = "device.system_type"
my_var = options_branch.get_input_variable(new_core, new_project, variable_id)
my_var.set_raw_interface(new_core, "Wave Floating")
my_var.read(new_core, new_project)
project_menu.initiate_pipeline(new_core, new_project)
names_modules = module_menu.get_available(new_core, new_project)
message1 = html_list(names_modules)
HTML(message1)
names_themes = theme_menu.get_available(new_core, new_project)
message2= html_list(names_themes)
HTML(message2)
module_name = 'Hydrodynamics'
module_menu.activate(new_core, new_project, module_name)
theme_name = 'Environmental Impact Assessment (Experimental)'
theme_menu.activate(new_core, new_project, theme_name)
hydro_branch = pipe_tree.get_branch(new_core, new_project, 'Hydrodynamics')
input_status = hydro_branch.get_input_status(new_core, new_project)
message = html_dict(input_status)
HTML(message)
environmental_branch = pipe_tree.get_branch(new_core, new_project, 'Environmental Impact Assessment (Experimental)')
input_status = environmental_branch.get_input_status(new_core, new_project)
message = html_dict(input_status)
HTML(message)
project_menu.initiate_dataflow(new_core, new_project)
%run test_data/inputs_wp2_wave.py
hydro_branch.read_test_data(new_core,
new_project,
"test_data/inputs_wp2_wave.pkl")
%run test_data/inputs_environmental.py
environmental_branch.read_test_data(new_core,
new_project,
"test_data/inputs_environmental.pkl")
plant_rated_power = 100.
variable_id = "project.rated_power"
my_var = hydro_branch.get_input_variable(new_core, new_project, variable_id)
my_var.set_raw_interface(new_core, plant_rated_power)
my_var.read(new_core, new_project)
can_execute = module_menu.is_executable(new_core, new_project, module_name)
display(can_execute)
can_execute_theme = theme_menu.is_executable(new_core, new_project, theme_name)
display(can_execute_theme)
input_status = hydro_branch.get_input_status(new_core, new_project)
message = html_dict(input_status)
HTML(message)
input_status = environmental_branch.get_input_status(new_core, new_project)
message = html_dict(input_status)
HTML(message)
module_menu.execute_current(new_core, new_project)
global_eis = new_core.get_data_value(new_project, "project.global_eis")
meta = new_core.get_metadata("project.global_eis")
name = meta.title
message_one = "<p><b>{}:</b> {}</p>".format(name, global_eis)
HTML(message_one)
annual_energy_per_dev_value = new_core.get_data_value(new_project,
"project.annual_energy_per_device")
meta = new_core.get_metadata("project.annual_energy_per_device")
chart_values = np.array(annual_energy_per_dev_value.values())
plt.bar(range(len(annual_energy_per_dev_value)),
chart_values,
align='center')
plt.xticks(range(len(annual_energy_per_dev_value)),
annual_energy_per_dev_value.keys())
plt.title(meta.title)
plt.ylabel(meta.units[0])
plt.tight_layout()
# plt.savefig('annual_energy_per_device.png')
plt.show()
layout_value = new_core.get_data_value(new_project, "project.layout")
layout_meta = new_core.get_metadata("project.layout")
x = []
y = []
for coords in layout_value.itervalues():
x.append(coords.x)
y.append(coords.y)
fig = plt.figure()
ax1 = fig.add_subplot(1,1,1, axisbg='lightskyblue')
ax1.plot(x,y,'k+', mew=2, markersize=10)
plt.title(layout_meta.title)
plt.axis('equal')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create the core, menus and pipeline tree
Step2: Create a new project
Step3: Set the device type
Step4: Initiate the pipeline
Step5: Discover available modules and themes
Step6: Activate a module and a Theme
Step7: Check the status of the module and theme inputs
Step8: Initiate the dataflow
Step9: Load test data
Step10: Use the interface to enter the variable value
Step11: Check if the module can be executed
Step12: Execute the current module
Step13: Examine the results
Step14: Plotting some graphs
Step15: Plotting the Layout
|
6,653
|
<ASSISTANT_TASK:>
Python Code:
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt

df = load_rossi()
df['age_strata'] = pd.cut(df['age'], np.arange(0, 80, 5))
df = df.drop('age', axis=1)
cph = CoxPHFitter()
cph.fit(df, 'week', 'arrest', strata=['age_strata', 'wexp'])
cph.print_summary()
cph.plot();
r = cph.compute_residuals(df, 'martingale')
r.head()
r.plot.scatter(
x='week', y='martingale', c=np.where(r['arrest'], '#008fd5', '#fc4f30'),
alpha=0.75
)
r = cph.compute_residuals(df, 'deviance')
r.head()
r.plot.scatter(
x='week', y='deviance', c=np.where(r['arrest'], '#008fd5', '#fc4f30'),
alpha=0.75
)
r = r.join(df.drop(['week', 'arrest'], axis=1))
plt.scatter(r['prio'], r['deviance'], color=np.where(r['arrest'], '#008fd5', '#fc4f30'))
r = cph.compute_residuals(df, 'delta_beta')
r.head()
r = r.join(df[['week', 'arrest']])
r.head()
plt.scatter(r['week'], r['prio'], color=np.where(r['arrest'], '#008fd5', '#fc4f30'))
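# (Added sketch) compute_residuals supports further residual kinds as well; assuming
# the installed lifelines version provides them, Schoenfeld-type residuals are a
# common follow-up for checking the proportional-hazards assumption.
r = cph.compute_residuals(df, 'scaled_schoenfeld')
r.head()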
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Martingale residuals
Step2: Deviance residuals
|
6,654
|
<ASSISTANT_TASK:>
Python Code:
%install_ext http://raw.github.com/jrjohansson/ipython-circuitikz/master/circuitikz.py
%reload_ext circuitikz
%%circuitikz filename=squid dpi=125
\begin{circuitikz}[scale=1]
\draw ( 0, 0) [short, *-] node[anchor=south] {$\Phi_J$} to (0, -1);
% right
\draw ( 0, -1) to (2, -1) to node[anchor=west] {$\Phi_{J}^2$} (2, -2) to (3, -2)
to [barrier, l=$E_J^2$] (3, -4) to (2, -4)to (2, -5) to (0, -5) node[ground] {};
\draw ( 2, -2) to (1, -2) to [capacitor, l=$C_J^2$] (1, -4) to (1, -4) to (2, -4);
% left
\draw ( 0, -1) to (-2, -1) to node[anchor=west] {$\Phi_{J}^1$} (-2, -2) to (-3, -2)
to [capacitor, l=$C_J^1$] (-3, -4) to (-2, -4) to (-2, -5) to (0, -5);
\draw (-2, -2) to (-1, -2) to [barrier, l=$E_J^1$] (-1, -4) to (-1, -4) to (-2, -4);
\end{circuitikz}
%%circuitikz filename=tm dpi=150
\begin{circuitikz}[scale=1.25]
\draw (-1,0) node[anchor=east] {} to [short, *-*] (1,0);
\draw (-1,2) node[anchor=east] {} to [inductor, *-*, l=$\Delta x L$] (1,2);
\draw (-1,0) to [open, l=$\cdots$] (-1,2);
\draw (3, 0) to (1, 0) to [capacitor, l=$\Delta x C$, *-*] (1, 2) to [inductor, *-*, l=$\Delta x L$] (3, 2);
\draw (5, 0) to (3, 0) to [capacitor, l=$\Delta x C$, *-*] (3, 2) to [inductor, *-*, l=$\Delta x L$] (5, 2);
\draw (7, 0) to (5, 0) to [capacitor, l=$\Delta x C$, *-*] (5, 2) to [inductor, *-*, l=$\Delta x L$] (7, 2);
\draw (9, 0) to (7, 0) to [capacitor, l=$\Delta x C$, *-*] (7, 2) to [inductor, *-*, l=$\Delta x L$] (9, 2);
\draw (9,0) node[anchor=east] {} to [short, *-*] (9,0);
\draw (10,0) to [open, l=$\cdots$] (10,2);
\end{circuitikz}
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the extension
Step2: Example
Step3: Example
|
6,655
|
<ASSISTANT_TASK:>
Python Code:
import krisk.plot as kk
import pandas as pd
# Use this when you want to nbconvert the notebook (used by nbviewer)
from krisk import init_notebook; init_notebook()
df = pd.read_csv('../krisk/tests/data/gapminderDataFiveYear.txt', sep='\t').sample(50)
p = kk.bar(df,'year',c='continent',stacked=True)
p
p.set_theme('vintage')
p.set_theme('dark')
p.set_theme('macarons')
p.set_theme('infographic')
p.set_theme('roma')
p.set_theme('shine')
pallete = ['Navy','#FF0000','rgb(205,92,92)', '#65c3bf','hsl(60, 100%, 87%)']
p.set_color(background='Aqua', palette=pallete)
import seaborn as sns
palette_sns1 = sns.color_palette('muted').as_hex()
p.set_color(palette=palette_sns1)
palette_sns2 = sns.color_palette('YlGnBu').as_hex()
p.set_color(palette=palette_sns2)
import colorlover as cl
cl2 = cl.to_hsl( cl.scales['3']['div']['RdYlBu'] )
p.set_color(palette=cl2)
import bokeh.palettes as bp
import bokeh.colors as bc
p.set_color(background=bc.aliceblue.to_hex(),palette=bp.PuBuGn6)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Themes
Step2: Vintage
Step3: Dark
Step4: Macarons
Step5: Infographic
Step6: Roma
Step7: Shine
Step8: Colors (Palette and Background)
Step9: You also can using existing palettes provided by visualization libraries you already know. Here I will use libraries like Seaborn, Colorlover, and Bokeh.
Step10: Seaborn also nicely integrate colormap from matplotlib
Step11: Colorlover
Step12: Bokeh
|
6,656
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function, division
% matplotlib inline
import warnings
warnings.filterwarnings('ignore')
import math
import numpy as np
from thinkbayes2 import Pmf, Cdf, Suite, Joint
import thinkplot
# Solution
from scipy.stats import poisson
poisson.pmf(3, 2.9)
# Solution
from thinkbayes2 import EvalPoissonPmf
EvalPoissonPmf(3, 2.9)
# Solution
from thinkbayes2 import MakePoissonPmf
pmf = MakePoissonPmf(2.9, high=10)
thinkplot.Hist(pmf)
thinkplot.Config(xlabel='Number of goals',
ylabel='PMF',
xlim=[-0.5, 10.5])
# Solution
pmf = MakePoissonPmf(2.9, high=30)
total = pmf + pmf + pmf
thinkplot.Hist(total)
thinkplot.Config(xlabel='Number of goals',
ylabel='PMF',
xlim=[-0.5, 22.5])
total[9]
# Solution
EvalPoissonPmf(9, 3 * 2.9)
# Solution
from thinkbayes2 import MakeExponentialPmf
pmf = MakeExponentialPmf(lam=2.6, high=2.5)
thinkplot.Pdf(pmf)
thinkplot.Config(xlabel='Time between goals',
ylabel='PMF')
# Solution
from scipy.stats import expon
expon.cdf(1/3, scale=1/2.6)
# Solution
from thinkbayes2 import EvalExponentialCdf
EvalExponentialCdf(1/3, 2.6)
# Solution
1 - EvalExponentialCdf(1, 2.6)
# Solution
EvalPoissonPmf(0, 2.6)
from thinkbayes2 import MakeNormalPmf
from thinkbayes2 import EvalPoissonPmf
class Hockey(Suite):
"""Represents hypotheses about the scoring rate for a team."""
def __init__(self, label=None):
"""Initializes the Hockey object.
label: string
"""
mu = 2.8
sigma = 0.3
pmf = MakeNormalPmf(mu, sigma, num_sigmas=4, n=101)
Suite.__init__(self, pmf, label=label)
def Likelihood(self, data, hypo):
"""Computes the likelihood of the data under the hypothesis.
Evaluates the Poisson PMF for lambda and k.
hypo: goal scoring rate in goals per game
data: goals scored in one game
"""
lam = hypo
k = data
like = EvalPoissonPmf(k, lam)
return like
suite1 = Hockey('bruins')
suite2 = Hockey('canucks')
thinkplot.PrePlot(num=2)
thinkplot.Pdf(suite1)
thinkplot.Pdf(suite2)
thinkplot.Config(xlabel='Goals per game',
ylabel='Probability')
suite1.UpdateSet([0, 2, 8, 4])
suite2.UpdateSet([1, 3, 1, 0])
thinkplot.PrePlot(num=2)
thinkplot.Pdf(suite1)
thinkplot.Pdf(suite2)
thinkplot.Config(xlabel='Goals per game',
ylabel='Probability')
suite1.Mean(), suite2.Mean()
from thinkbayes2 import MakeMixture
from thinkbayes2 import MakePoissonPmf
def MakeGoalPmf(suite, high=10):
"""Makes the distribution of goals scored, given distribution of lam.
suite: distribution of goal-scoring rate
high: upper bound
returns: Pmf of goals per game
"""
metapmf = Pmf()
for lam, prob in suite.Items():
pmf = MakePoissonPmf(lam, high)
metapmf.Set(pmf, prob)
mix = MakeMixture(metapmf, label=suite.label)
return mix
goal_dist1 = MakeGoalPmf(suite1)
goal_dist2 = MakeGoalPmf(suite2)
thinkplot.PrePlot(num=2)
thinkplot.Pmf(goal_dist1)
thinkplot.Pmf(goal_dist2)
thinkplot.Config(xlabel='Goals',
ylabel='Probability',
xlim=[-0.7, 11.5])
goal_dist1.Mean(), goal_dist2.Mean()
diff = goal_dist1 - goal_dist2
p_win = diff.ProbGreater(0)
p_loss = diff.ProbLess(0)
p_tie = diff.Prob(0)
print('Prob win, loss, tie:', p_win, p_loss, p_tie)
from thinkbayes2 import MakeExponentialPmf
def MakeGoalTimePmf(suite):
"""Makes the distribution of time until the first goal.
suite: distribution of goal-scoring rate
returns: Pmf of time until the first goal (in games)
"""
metapmf = Pmf()
for lam, prob in suite.Items():
pmf = MakeExponentialPmf(lam, high=2.5, n=1001)
metapmf.Set(pmf, prob)
mix = MakeMixture(metapmf, label=suite.label)
return mix
time_dist1 = MakeGoalTimePmf(suite1)
time_dist2 = MakeGoalTimePmf(suite2)
thinkplot.PrePlot(num=2)
thinkplot.Pmf(time_dist1)
thinkplot.Pmf(time_dist2)
thinkplot.Config(xlabel='Games until goal',
ylabel='Probability')
time_dist1.Mean(), time_dist2.Mean()
p_win_in_overtime = time_dist1.ProbLess(time_dist2)
p_adjust = time_dist1.ProbEqual(time_dist2)
p_win_in_overtime += p_adjust / 2
print('p_win_in_overtime', p_win_in_overtime)
p_win_overall = p_win + p_tie * p_win_in_overtime
print('p_win_overall', p_win_overall)
# Solution
suite1.Update(0)
suite2.Update(0)
time_dist1 = MakeGoalTimePmf(suite1)
time_dist2 = MakeGoalTimePmf(suite2)
p_win_in_overtime = time_dist1.ProbLess(time_dist2)
p_adjust = time_dist1.ProbEqual(time_dist2)
p_win_in_overtime += p_adjust / 2
print('p_win_in_overtime', p_win_in_overtime)
p_win_overall = p_win + p_tie * p_win_in_overtime
print('p_win_overall', p_win_overall)
from thinkbayes2 import MakeGammaPmf
xs = np.linspace(0, 8, 101)
pmf = MakeGammaPmf(xs, 1.3)
thinkplot.Pdf(pmf)
thinkplot.Config(xlabel='Goals per game')
pmf.Mean()
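# (Added sketch) One possible way to attack the gamma-prior exercise: reuse the
# Poisson likelihood from Hockey, but initialize the suite from the gamma Pmf above.
class GammaHockey(Suite):
    def __init__(self, prior, label=None):
        Suite.__init__(self, prior, label=label)
    def Likelihood(self, data, hypo):
        return EvalPoissonPmf(data, hypo)

suite3 = GammaHockey(pmf, label='gamma prior')
suite3.UpdateSet([0, 2, 8, 4])
suite3.Mean()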
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Warm-up exercises
Step2: Exercise
Step3: Exercise
Step4: Exercise
Step8: The Boston Bruins problem
Step9: Now we can initialize a suite for each team
Step10: Here's what the priors look like
Step11: And we can update each suite with the scores from the first 4 games.
Step13: To predict the number of goals scored in the next game we can compute, for each hypothetical value of $\lambda$, a Poisson distribution of goals scored, then make a weighted mixture of Poissons
Step14: Here's what the results look like.
Step15: Now we can compute the probability that the Bruins win, lose, or tie in regulation time.
Step17: If the game goes into overtime, we have to compute the distribution of t, the time until the first goal, for each team. For each hypothetical value of $\lambda$, the distribution of t is exponential, so the predictive distribution is a mixture of exponentials.
Step18: Here's what the predictive distributions for t look like.
Step19: In overtime the first team to score wins, so the probability of winning is the probability of generating a smaller value of t
Step20: Finally, we can compute the overall chance that the Bruins win, either in regulation or overtime.
Step21: Exercises
Step22: Exercise
|
6,657
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import time
import pandas
import random
import numpy
import matplotlib.pyplot as plt
import seaborn; seaborn.set_style('whitegrid')
import itertools
from pomegranate import *
random.seed(0)
numpy.random.seed(0)
numpy.set_printoptions(suppress=True)
%load_ext watermark
%watermark -m -n -p numpy,scipy,pomegranate
X = numpy.random.normal([5, 7], [1.5, 0.4], size=(1000, 2))
d1 = MultivariateGaussianDistribution.blank(2)
d1
d1.summarize(X[:250])
d1.summarize(X[250:500])
d1.summarize(X[500:750])
d1.summarize(X[750:])
d1.summaries
d1.from_summaries()
d1
MultivariateGaussianDistribution.from_samples(X)
X = numpy.concatenate([numpy.random.normal(0, 1, size=(5000, 10)), numpy.random.normal(1, 1, size=(7500, 10))])
n = X.shape[0]
idx = numpy.arange(n)
numpy.random.shuffle(idx)
X = X[idx]
# First we initialize our model on some small chunk of data.
model = GeneralMixtureModel.from_samples(MultivariateGaussianDistribution, 2, X[:200], max_iterations=1, init='first-k')
# The base performance on the data set.
base_logp = model.log_probability(X).sum()
from tqdm import tqdm_notebook as tqdm
# Now we write our own iterator. This outer loop will be the number of times we iterate---hard coded to 5 in this case.
for iteration in tqdm(range(5)):
# This internal loop goes over chunks from the data set. We're just loading chunks of a fixed size iteratively
# until we've seen the entire data set.
for i in range(10):
model.summarize(X[i * (n // 10):(i+1) * (n //10)])
# Now we've seen the entire data set and summarized it. We can update the parameters now.
model.from_summaries()
base_logp, model.log_probability(X).sum()
model = GeneralMixtureModel.from_samples(MultivariateGaussianDistribution, 2, X[:200], max_iterations=1, init='first-k')
base_logp = model.log_probability(X).sum()
model.fit(X, max_iterations=5)
base_logp, model.log_probability(X).sum()
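# (Added sketch) The same summarize / from_summaries pattern handles chunks of any
# size, e.g. whatever batches arrive from disk or a stream:
for chunk in numpy.array_split(X, 7):
    model.summarize(chunk)
model.from_summaries()
model.log_probability(X).sum()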
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Training a Probability Distribution
Step2: Then we can make a blank distribution with 2 dimensions. This is equivalent to filling in the mean and standard deviation with dummy values that will be overwritten, and don't affect the calculation.
Step3: Now let's summarize through a few batches of data.
Step4: Now that we've seen the entire data set let's use the from_summaries method to update the parameters.
Step5: And what do we get if we learn directly from the data?
Step6: The exact same model.
Step7: First we have to initialize our model. We can do that either by hand to some value we think is good, or by fitting to the first chunk of data, anticipating that it will be a decent representation of the remainder. We can also calculate the log probability of the data set now to see how much we improved.
Step8: How well did our model do on the data originally, and how well does it do now?
Step9: Looks like a decent improvement.
|
6,658
|
<ASSISTANT_TASK:>
Python Code:
# DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/orlando_furioso.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
#text = text[81:]
# Need to clean out all numbers and substitute Italian tokens not present in English
view_sentence_range = (0, 10)
# DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
words_ordered = sorted(set(text))
# TODO: Implement Function
vocab_to_int = {word: index for index, word in enumerate(words_ordered)}
int_to_vocab = {index: word for index, word in enumerate(words_ordered)}
return vocab_to_int, int_to_vocab
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
token_dict = dict()
token_dict['.'] = "||Period||"
token_dict[','] = "||Comma||"
token_dict['"'] = "||Quotation_Mark||"
token_dict[';'] = "||Semicolon||"
token_dict['!'] = "||Exclamation_Mark||"
token_dict['?'] = "||Question_Mark||"
token_dict['('] = "||Left_Parentheses||"
token_dict[')'] = "||Right_Parentheses||"
token_dict['--'] = "||Dash||"
token_dict['\n'] = "||Return||"
return token_dict
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
# DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
# DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
# DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
def get_inputs():
"""
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
"""
# TODO: Implement Function
inputs = tf.placeholder(tf.int32, shape=(None, None), name="input")
targets = tf.placeholder(tf.int32, shape=(None,None), name="targets")
learning_rate = tf.placeholder(tf.float32, name="learning_rate")
return inputs, targets, learning_rate
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
def get_init_cell(batch_size, rnn_size):
"""
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
"""
# TODO: Implement Function
lstm_layers = 2 #Need to pass test?! (otherwise final_state shape will be wrong)
lstm = tf.contrib.rnn.BasicLSTMCell(num_units=rnn_size)
cell = tf.contrib.rnn.MultiRNNCell([lstm] * lstm_layers)
initial_state = cell.zero_state(batch_size, tf.float32)
# print(initial_state)
initial_state = tf.identity(initial_state, name="initial_state")
return cell, initial_state
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
def get_embed(input_data, vocab_size, embed_dim):
"""
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
"""
# TODO: Implement Function
embeddings = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
embed = tf.nn.embedding_lookup(embeddings, ids=input_data)
return embed
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
def build_rnn(cell, inputs):
"""
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
"""
# TODO: Implement Function
#print(cell)
#print(inputs)
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
final_state = tf.identity(final_state, name="final_state")
# Shape is lstm_layers x 2 (inputs and targets) x None (batch_size) x lstm_units
#print(final_state)
return outputs, final_state
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
"""
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
"""
# TODO: Implement Function
embed = get_embed(input_data, vocab_size, embed_dim=embed_dim)
# outputs shape is batch_size x seq_len x lstm_units
outputs, final_state = build_rnn(cell, inputs=embed)
#print(outputs.shape)
logits = tf.contrib.layers.fully_connected(outputs, vocab_size, activation_fn=None)
# logits shape is batch_size x seq_len x vocab_size
#print(logits.shape)
return logits, final_state
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
def get_batches(int_text, batch_size, seq_length):
"""
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
"""
#print("Batch_size: " + str(batch_size))
#print("Seq length: " + str(seq_length))
# Consider that targets is shifted by 1
num_batches = len(int_text)//(batch_size * seq_length + 1)
#print("Num batches: " + str(num_batches))
#print("Text length: " + str(len(int_text)))
batches = np.zeros(shape=(num_batches, 2, batch_size, seq_length), dtype=np.int32)
#print(batches.shape)
# TODO: Add a smarter check
for batch_index in range(0, num_batches):
for in_batch_index in range(0, batch_size):
start_x = (batch_index * seq_length) + (seq_length * num_batches * in_batch_index)
start_y = start_x + 1
x = int_text[start_x : start_x + seq_length]
y = int_text[start_y : start_y + seq_length]
#print("batch_index: " + str(batch_index))
#print("in_batch_index: " + str(in_batch_index))
#print("start_x: " + str(start_x))
#print(x)
batches[batch_index][0][in_batch_index] = np.asarray(x)
batches[batch_index][1][in_batch_index] = np.asarray(y)
#print(batches)
return batches
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
# FINAL LOSS: - Seq length 20, LR 0.001, Epochs 200
# Number of Epochs
num_epochs = 200
# Batch Size
batch_size = 64
# RNN Size
rnn_size = 256
# Embedding Dimension Size
embed_dim = 300
# Sequence Length
seq_length = 20
# Learning Rate
learning_rate = 0.001
# Show stats for every n number of batches
show_every_n_batches = 99
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
# DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
# probs shape is batch_size x seq_len x vocab_size
probs = tf.nn.softmax(logits, name='probs')
#print(probs.shape)
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients]
train_op = optimizer.apply_gradients(capped_gradients)
# DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
# x and y shapes are batch_size x seq_len
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
# DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
# DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
def get_tensors(loaded_graph):
"""
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
"""
# TODO: Implement Function
return loaded_graph.get_tensor_by_name("input:0"), loaded_graph.get_tensor_by_name("initial_state:0"), \
loaded_graph.get_tensor_by_name("final_state:0"), loaded_graph.get_tensor_by_name("probs:0")
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
def pick_word(probabilities, int_to_vocab):
"""
Pick the next word in the generated text
:param probabilities: Probabilities of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
"""
# TODO: Implement Function
return int_to_vocab[np.argmax(probabilities)]
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
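# (Added sketch, not part of the original solution) np.argmax always returns the
# single most likely word, which can make generated scripts repetitive; sampling
# from the predicted distribution is a common alternative:
def pick_word_sampled(probabilities, int_to_vocab):
    idx = np.random.choice(len(probabilities), p=probabilities)
    return int_to_vocab[idx]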
#print(vocab_to_int)
gen_length = 200
prime_word = 'perché mi piace'
prime_word = str.lower(prime_word)
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = prime_word.split()
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TV Script Generation
Step3: Explore the Data
Step6: Implement Preprocessing Functions
Step9: Tokenize Punctuation
Step11: Preprocess all the data and save it
Step13: Check Point
Step15: Build the Neural Network
Step18: Input
Step21: Build RNN Cell and Initialize
Step24: Word Embedding
Step27: Build RNN
Step30: Build the Neural Network
Step33: Batches
Step35: Neural Network Training
Step37: Build the Graph
Step39: Train
Step41: Save Parameters
Step43: Checkpoint
Step46: Implement Generate Functions
Step49: Choose Word
Step51: Generate TV Script
|
6,659
|
<ASSISTANT_TASK:>
Python Code:
import MySQLdb
#Enter the values for you database connection
dsn_database = "verein" # e.g. "MySQLdbtest"
dsn_hostname = "localhost" # e.g.: "mydbinstance.xyz.us-east-1.rds.amazonaws.com"
dsn_port = 3306 # e.g. 3306
dsn_uid = "steinam" # e.g. "user1"
dsn_pwd = "steinam" # e.g. "Password123"
conn = MySQLdb.connect(host=dsn_hostname, port=dsn_port, user=dsn_uid, passwd=dsn_pwd, db=dsn_database)
conn.query("DROP TABLE IF EXISTS Cars")
conn.query("CREATE TABLE Cars(Id INTEGER PRIMARY KEY, Name VARCHAR(20), Price INT)")
conn.query("INSERT INTO Cars VALUES(1,'Audi',52642)")
conn.query("INSERT INTO Cars VALUES(2,'Mercedes',57127)")
conn.query("INSERT INTO Cars VALUES(3,'Skoda',9000)")
conn.query("INSERT INTO Cars VALUES(4,'Volvo',29000)")
conn.query("INSERT INTO Cars VALUES(5,'Bentley',350000)")
conn.query("INSERT INTO Cars VALUES(6,'Citroen',21000)")
conn.query("INSERT INTO Cars VALUES(7,'Hummer',41400)")
conn.query("INSERT INTO Cars VALUES(8,'Volkswagen',21600)")
cursor=conn.cursor()
cursor.execute("SELECT * FROM Cars")
cursor.fetchone()
print("\nShow me the records:\n")
rows = cursor.fetchall()
import pprint
pprint.pprint(rows)
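# Aside (not part of the original notebook): for dynamic values it is safer to let the
# driver handle quoting/escaping with a parameterized query, for example:
cursor.execute("INSERT INTO Cars VALUES(%s, %s, %s)", (9, 'Toyota', 30000))
conn.commit()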
conn.close()
%load_ext sql
%sql mysql://steinam:steinam@localhost/verein
%sql select * from spieler;
result = _
print(result[3])
%sql describe strafen;
result = %sql SELECT Betrag, spielernr from strafen
%matplotlib inline
result.bar()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Database Connection Properties and Connect to the database
Step12: Let's create a sample table and insert some data into it.
Step14: conn.cursor() returns a cursor object; you can use this cursor to perform queries.
Step15: Now you can print out the result set using pretty print
Step16: At the end, we close the connection
Step17: Using ext sql
|
6,660
|
<ASSISTANT_TASK:>
Python Code:
# from terminal or command window
pip install virtualenv
# from terminal or command window
cd my_project_folder
virtualenv my_project
# from terminal or command window
virtualenv -p /usr/bin/python2.7 my_project
# from terminal or command window
export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python2.7
# from terminal or command window
source my_project/bin/activate
# from terminal or command window
pip install requests
# from terminal or command window
deactivate
# from terminal or command window
rm -rf my_project
# from terminal or command window
pip freeze > requirements.txt
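# A requirements.txt generated this way is just a plain list of pinned packages,
# one per line, for example (illustrative contents only):
# numpy==1.23.0
# requests==2.28.1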
# from terminal or command window
pip install -r requirements.txt
# from terminal or command window
python3 -m venv tutorial-env
# from terminal or command window
tutorial-env\Scripts\activate.bat
# from terminal or command window
source tutorial-env/bin/activate
# from terminal or command window
source ~/envs/tutorial-env/bin/activate
(tutorial-env) $ python
Python 3.5.1 (default, May 6 2016, 10:59:36)
...
>>> import sys
>>> sys.path
['', '/usr/local/lib/python35.zip', ...,
'~/envs/tutorial-env/lib/python3.5/site-packages']
>>>
# from terminal or command window
conda update conda
# from terminal or command window
conda search "^python$"
# from terminal or command window
conda create -n yourenvname python=x.x anaconda
# from terminal or command window
source activate yourenvname
# from terminal or command window
conda info -e
# from terminal or command window
conda install -n yourenvname [package]
# from terminal or command window
source deactivate
# from terminal or command window
conda remove -n yourenvname --all
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Basic Usage
Step2: virtualenv my_project will create a folder in the current directory which will contain the Python executable files, and a copy of the pip library which you can use to install other packages. The name of the virtual environment (in this case, it was my_project) can be anything; omitting the name will place the files in the current directory instead.
Step3: or change the interpreter globally with an env variable in ~/.bashrc
Step4: To begin using the virtual environment, it needs to be activated
Step5: The name of the current virtual environment will now appear on the left of the prompt (e.g. (my_project)Your-Computer
Step6: If you are done working in the virtual environment for the moment, you can deactivate it
Step7: This puts you back to the system’s default Python interpreter with all its installed libraries.
Step8: After a while, though, you might end up with a lot of virtual environments littered across your system, and it's possible you'll forget their names or where they were placed.
Step9: This will create a requirements.txt file, which contains a simple list of all the packages in the current environment, and their respective versions. You can see the list of installed packages without the requirements format using “pip list”. Later it will be easier for a different developer (or you, if you need to re-create the environment) to install the same packages using the same versions
Step10: This can help ensure consistency across installations, across deployments, and across developers.
Step11: This will create the tutorial-env directory if it doesn’t exist, and also create directories inside it containing a copy of the Python interpreter, the standard library, and various supporting files.
Step12: On Unix or MacOS, run
Step13: (This script is written for the bash shell. If you use the csh or fish shells, there are alternate activate.csh and activate.fish scripts you should use instead.)
Step14: Virtual environments with Anaconda
Step15: To search your anaconda distribution (or perhaps even your computer) for the available python versions (the ones already installed)
Step16: Create a new virtual environment
Step17: Activate the new virtual environment
Step18: Activating a conda environment modifies the PATH and shell variables to point to the specific isolated Python set-up you created. The command prompt will change to indicate which conda environment you are currently in by prepending (yourenvname). To see a list of all your environments, use the command
Step19: Install additional Python packages to a virtual environment.
Step20: Deactivate your virtual environment.
Step21: Delete a no longer needed virtual environment
|
6,661
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
import math
import numpy as np
import tensorflow as tf
import time
from datasets import dataset_utils
# Main slim library
slim = tf.contrib.slim
def regression_model(inputs, is_training=True, scope="deep_regression"):
"""Creates the regression model.
Args:
inputs: A node that yields a `Tensor` of size [batch_size, dimensions].
is_training: Whether or not we're currently training the model.
scope: An optional variable_op scope for the model.
Returns:
predictions: 1-D `Tensor` of shape [batch_size] of responses.
end_points: A dict of end points representing the hidden layers.
"""
with tf.variable_scope(scope, 'deep_regression', [inputs]):
end_points = {}
# Set the default weight _regularizer and acvitation for each fully_connected layer.
with slim.arg_scope([slim.fully_connected],
activation_fn=tf.nn.relu,
weights_regularizer=slim.l2_regularizer(0.01)):
# Creates a fully connected layer from the inputs with 32 hidden units.
net = slim.fully_connected(inputs, 32, scope='fc1')
end_points['fc1'] = net
# Adds a dropout layer to prevent over-fitting.
net = slim.dropout(net, 0.8, is_training=is_training)
# Adds another fully connected layer with 16 hidden units.
net = slim.fully_connected(net, 16, scope='fc2')
end_points['fc2'] = net
# Creates a fully-connected layer with a single hidden unit. Note that the
# layer is made linear by setting activation_fn=None.
predictions = slim.fully_connected(net, 1, activation_fn=None, scope='prediction')
end_points['out'] = predictions
return predictions, end_points
with tf.Graph().as_default():
# Dummy placeholders for arbitrary number of 1d inputs and outputs
inputs = tf.placeholder(tf.float32, shape=(None, 1))
outputs = tf.placeholder(tf.float32, shape=(None, 1))
# Build model
predictions, end_points = regression_model(inputs)
# Print name and shape of each tensor.
print "Layers"
for k, v in end_points.iteritems():
print 'name = {}, shape = {}'.format(v.name, v.get_shape())
# Print name and shape of parameter nodes (values not yet initialized)
print "\n"
print "Parameters"
for v in slim.get_model_variables():
print 'name = {}, shape = {}'.format(v.name, v.get_shape())
def produce_batch(batch_size, noise=0.3):
xs = np.random.random(size=[batch_size, 1]) * 10
ys = np.sin(xs) + 5 + np.random.normal(size=[batch_size, 1], scale=noise)
return [xs.astype(np.float32), ys.astype(np.float32)]
x_train, y_train = produce_batch(200)
x_test, y_test = produce_batch(200)
plt.scatter(x_train, y_train)
def convert_data_to_tensors(x, y):
inputs = tf.constant(x)
inputs.set_shape([None, 1])
outputs = tf.constant(y)
outputs.set_shape([None, 1])
return inputs, outputs
# The following snippet trains the regression model using a sum_of_squares loss.
ckpt_dir = '/tmp/regression_model/'
with tf.Graph().as_default():
tf.logging.set_verbosity(tf.logging.INFO)
inputs, targets = convert_data_to_tensors(x_train, y_train)
# Make the model.
predictions, nodes = regression_model(inputs, is_training=True)
# Add the loss function to the graph.
loss = slim.losses.sum_of_squares(predictions, targets)
# The total loss is the user's loss plus any regularization losses.
total_loss = slim.losses.get_total_loss()
# Specify the optimizer and create the train op:
optimizer = tf.train.AdamOptimizer(learning_rate=0.005)
train_op = slim.learning.create_train_op(total_loss, optimizer)
# Run the training inside a session.
final_loss = slim.learning.train(
train_op,
logdir=ckpt_dir,
number_of_steps=5000,
save_summaries_secs=5,
log_every_n_steps=500)
print("Finished training. Last batch loss:", final_loss)
print("Checkpoint saved in %s" % ckpt_dir)
with tf.Graph().as_default():
inputs, targets = convert_data_to_tensors(x_train, y_train)
predictions, end_points = regression_model(inputs, is_training=True)
# Add multiple loss nodes.
sum_of_squares_loss = slim.losses.sum_of_squares(predictions, targets)
absolute_difference_loss = slim.losses.absolute_difference(predictions, targets)
# The following two ways to compute the total loss are equivalent
regularization_loss = tf.add_n(slim.losses.get_regularization_losses())
total_loss1 = sum_of_squares_loss + absolute_difference_loss + regularization_loss
# Regularization Loss is included in the total loss by default.
# This is good for training, but not for testing.
total_loss2 = slim.losses.get_total_loss(add_regularization_losses=True)
init_op = tf.initialize_all_variables()
with tf.Session() as sess:
sess.run(init_op) # Will initialize the parameters with random weights.
total_loss1, total_loss2 = sess.run([total_loss1, total_loss2])
print('Total Loss1: %f' % total_loss1)
print('Total Loss2: %f' % total_loss2)
print('Regularization Losses:')
for loss in slim.losses.get_regularization_losses():
print(loss)
print('Loss Functions:')
for loss in slim.losses.get_losses():
print(loss)
with tf.Graph().as_default():
inputs, targets = convert_data_to_tensors(x_test, y_test)
# Create the model structure. (Parameters will be loaded below.)
predictions, end_points = regression_model(inputs, is_training=False)
# Make a session which restores the old parameters from a checkpoint.
sv = tf.train.Supervisor(logdir=ckpt_dir)
with sv.managed_session() as sess:
inputs, predictions, targets = sess.run([inputs, predictions, targets])
plt.scatter(inputs, targets, c='r');
plt.scatter(inputs, predictions, c='b');
plt.title('red=true, blue=predicted')
with tf.Graph().as_default():
inputs, targets = convert_data_to_tensors(x_test, y_test)
predictions, end_points = regression_model(inputs, is_training=False)
# Specify metrics to evaluate:
names_to_value_nodes, names_to_update_nodes = slim.metrics.aggregate_metric_map({
'Mean Squared Error': slim.metrics.streaming_mean_squared_error(predictions, targets),
'Mean Absolute Error': slim.metrics.streaming_mean_absolute_error(predictions, targets)
})
# Make a session which restores the old graph parameters, and then run eval.
sv = tf.train.Supervisor(logdir=ckpt_dir)
with sv.managed_session() as sess:
metric_values = slim.evaluation.evaluation(
sess,
num_evals=1, # Single pass over data
eval_op=names_to_update_nodes.values(),
final_op=names_to_value_nodes.values())
names_to_values = dict(zip(names_to_value_nodes.keys(), metric_values))
for key, value in names_to_values.iteritems():
print('%s: %f' % (key, value))
import tensorflow as tf
from datasets import dataset_utils
url = "http://download.tensorflow.org/data/flowers.tar.gz"
flowers_data_dir = '/tmp/flowers'
if not tf.gfile.Exists(flowers_data_dir):
tf.gfile.MakeDirs(flowers_data_dir)
dataset_utils.download_and_uncompress_tarball(url, flowers_data_dir)
from datasets import flowers
import tensorflow as tf
slim = tf.contrib.slim
with tf.Graph().as_default():
dataset = flowers.get_split('train', flowers_data_dir)
data_provider = slim.dataset_data_provider.DatasetDataProvider(
dataset, common_queue_capacity=32, common_queue_min=1)
image, label = data_provider.get(['image', 'label'])
with tf.Session() as sess:
with slim.queues.QueueRunners(sess):
for i in xrange(4):
np_image, np_label = sess.run([image, label])
height, width, _ = np_image.shape
class_name = name = dataset.labels_to_names[np_label]
plt.figure()
plt.imshow(np_image)
plt.title('%s, %d x %d' % (name, height, width))
plt.axis('off')
plt.show()
def my_cnn(images, num_classes, is_training): # is_training is not used...
with slim.arg_scope([slim.max_pool2d], kernel_size=[3, 3], stride=2):
net = slim.conv2d(images, 64, [5, 5])
net = slim.max_pool2d(net)
net = slim.conv2d(net, 64, [5, 5])
net = slim.max_pool2d(net)
net = slim.flatten(net)
net = slim.fully_connected(net, 192)
net = slim.fully_connected(net, num_classes, activation_fn=None)
return net
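# Rough shape walk-through for a 28x28x3 input (illustrative, assuming the slim
# defaults of SAME padding/stride 1 for conv2d and VALID padding for max_pool2d):
# conv 5x5 -> 28x28x64, pool -> 13x13x64, conv 5x5 -> 13x13x64, pool -> 6x6x64,
# flatten -> 2304, fc -> 192, fc -> num_classes.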
import tensorflow as tf
with tf.Graph().as_default():
# The model can handle any input size because the first layer is convolutional.
# The size of the model is determined when image_node is first passed into the my_cnn function.
# Once the variables are initialized, the size of all the weight matrices is fixed.
# Because of the fully connected layers, this means that all subsequent images must have the same
# input size as the first image.
batch_size, height, width, channels = 3, 28, 28, 3
images = tf.random_uniform([batch_size, height, width, channels], maxval=1)
# Create the model.
num_classes = 10
logits = my_cnn(images, num_classes, is_training=True)
probabilities = tf.nn.softmax(logits)
# Initialize all the variables (including parameters) randomly.
init_op = tf.initialize_all_variables()
with tf.Session() as sess:
# Run the init_op, evaluate the model outputs and print the results:
sess.run(init_op)
probabilities = sess.run(probabilities)
print('Probabilities Shape:')
print(probabilities.shape) # batch_size x num_classes
print('\nProbabilities:')
print(probabilities)
print('\nSumming across all classes (Should equal 1):')
print(np.sum(probabilities, 1)) # Each row sums to 1
from preprocessing import inception_preprocessing
import tensorflow as tf
slim = tf.contrib.slim
def load_batch(dataset, batch_size=32, height=299, width=299, is_training=False):
"""Loads a single batch of data.
Args:
dataset: The dataset to load.
batch_size: The number of images in the batch.
height: The size of each image after preprocessing.
width: The size of each image after preprocessing.
is_training: Whether or not we're currently training or evaluating.
Returns:
images: A Tensor of size [batch_size, height, width, 3], image samples that have been preprocessed.
images_raw: A Tensor of size [batch_size, height, width, 3], image samples that can be used for visualization.
labels: A Tensor of size [batch_size], whose values range between 0 and dataset.num_classes.
"""
data_provider = slim.dataset_data_provider.DatasetDataProvider(
dataset, common_queue_capacity=32,
common_queue_min=8)
image_raw, label = data_provider.get(['image', 'label'])
# Preprocess image for usage by Inception.
image = inception_preprocessing.preprocess_image(image_raw, height, width, is_training=is_training)
# Preprocess the image for display purposes.
image_raw = tf.expand_dims(image_raw, 0)
image_raw = tf.image.resize_images(image_raw, [height, width])
image_raw = tf.squeeze(image_raw)
# Batch it up.
images, images_raw, labels = tf.train.batch(
[image, image_raw, label],
batch_size=batch_size,
num_threads=1,
capacity=2 * batch_size)
return images, images_raw, labels
from datasets import flowers
# This might take a few minutes.
train_dir = '/tmp/tfslim_model/'
print('Will save model to %s' % train_dir)
with tf.Graph().as_default():
tf.logging.set_verbosity(tf.logging.INFO)
dataset = flowers.get_split('train', flowers_data_dir)
images, _, labels = load_batch(dataset)
# Create the model:
logits = my_cnn(images, num_classes=dataset.num_classes, is_training=True)
# Specify the loss function:
one_hot_labels = slim.one_hot_encoding(labels, dataset.num_classes)
slim.losses.softmax_cross_entropy(logits, one_hot_labels)
total_loss = slim.losses.get_total_loss()
# Create some summaries to visualize the training process:
tf.scalar_summary('losses/Total Loss', total_loss)
# Specify the optimizer and create the train op:
optimizer = tf.train.AdamOptimizer(learning_rate=0.01)
train_op = slim.learning.create_train_op(total_loss, optimizer)
# Run the training:
final_loss = slim.learning.train(
train_op,
logdir=train_dir,
number_of_steps=1, # For speed, we just do 1 epoch
save_summaries_secs=1)
print('Finished training. Final batch loss %d' % final_loss)
from datasets import flowers
# This might take a few minutes.
with tf.Graph().as_default():
tf.logging.set_verbosity(tf.logging.DEBUG)
dataset = flowers.get_split('train', flowers_data_dir)
images, _, labels = load_batch(dataset)
logits = my_cnn(images, num_classes=dataset.num_classes, is_training=False)
predictions = tf.argmax(logits, 1)
# Define the metrics:
names_to_values, names_to_updates = slim.metrics.aggregate_metric_map({
'eval/Accuracy': slim.metrics.streaming_accuracy(predictions, labels),
'eval/Recall@5': slim.metrics.streaming_recall_at_k(logits, labels, 5),
})
print('Running evaluation Loop...')
checkpoint_path = tf.train.latest_checkpoint(train_dir)
metric_values = slim.evaluation.evaluate_once(
master='',
checkpoint_path=checkpoint_path,
logdir=train_dir,
eval_op=names_to_updates.values(),
final_op=names_to_values.values())
names_to_values = dict(zip(names_to_values.keys(), metric_values))
for name in names_to_values:
print('%s: %f' % (name, names_to_values[name]))
from datasets import dataset_utils
url = "http://download.tensorflow.org/models/inception_v1_2016_08_28.tar.gz"
checkpoints_dir = '/tmp/checkpoints'
if not tf.gfile.Exists(checkpoints_dir):
tf.gfile.MakeDirs(checkpoints_dir)
dataset_utils.download_and_uncompress_tarball(url, checkpoints_dir)
import numpy as np
import os
import tensorflow as tf
import urllib2
from datasets import imagenet
from nets import inception
from preprocessing import inception_preprocessing
slim = tf.contrib.slim
batch_size = 3
image_size = inception.inception_v1.default_image_size
with tf.Graph().as_default():
url = 'https://upload.wikimedia.org/wikipedia/commons/7/70/EnglishCockerSpaniel_simon.jpg'
image_string = urllib2.urlopen(url).read()
image = tf.image.decode_jpeg(image_string, channels=3)
processed_image = inception_preprocessing.preprocess_image(image, image_size, image_size, is_training=False)
processed_images = tf.expand_dims(processed_image, 0)
# Create the model, use the default arg scope to configure the batch norm parameters.
with slim.arg_scope(inception.inception_v1_arg_scope()):
logits, _ = inception.inception_v1(processed_images, num_classes=1001, is_training=False)
probabilities = tf.nn.softmax(logits)
init_fn = slim.assign_from_checkpoint_fn(
os.path.join(checkpoints_dir, 'inception_v1.ckpt'),
slim.get_model_variables('InceptionV1'))
with tf.Session() as sess:
init_fn(sess)
np_image, probabilities = sess.run([image, probabilities])
probabilities = probabilities[0, 0:]
sorted_inds = [i[0] for i in sorted(enumerate(-probabilities), key=lambda x:x[1])]
plt.figure()
plt.imshow(np_image.astype(np.uint8))
plt.axis('off')
plt.show()
names = imagenet.create_readable_names_for_imagenet_labels()
for i in range(5):
index = sorted_inds[i]
print('Probability %0.2f%% => [%s]' % (probabilities[index], names[index]))
# Note that this may take several minutes.
import os
from datasets import flowers
from nets import inception
from preprocessing import inception_preprocessing
slim = tf.contrib.slim
image_size = inception.inception_v1.default_image_size
def get_init_fn():
"""Returns a function run by the chief worker to warm-start the training."""
checkpoint_exclude_scopes=["InceptionV1/Logits", "InceptionV1/AuxLogits"]
exclusions = [scope.strip() for scope in checkpoint_exclude_scopes]
variables_to_restore = []
for var in slim.get_model_variables():
excluded = False
for exclusion in exclusions:
if var.op.name.startswith(exclusion):
excluded = True
break
if not excluded:
variables_to_restore.append(var)
return slim.assign_from_checkpoint_fn(
os.path.join(checkpoints_dir, 'inception_v1.ckpt'),
variables_to_restore)
train_dir = '/tmp/inception_finetuned/'
with tf.Graph().as_default():
tf.logging.set_verbosity(tf.logging.INFO)
dataset = flowers.get_split('train', flowers_data_dir)
images, _, labels = load_batch(dataset, height=image_size, width=image_size)
# Create the model, use the default arg scope to configure the batch norm parameters.
with slim.arg_scope(inception.inception_v1_arg_scope()):
logits, _ = inception.inception_v1(images, num_classes=dataset.num_classes, is_training=True)
# Specify the loss function:
one_hot_labels = slim.one_hot_encoding(labels, dataset.num_classes)
slim.losses.softmax_cross_entropy(logits, one_hot_labels)
total_loss = slim.losses.get_total_loss()
# Create some summaries to visualize the training process:
tf.scalar_summary('losses/Total Loss', total_loss)
# Specify the optimizer and create the train op:
optimizer = tf.train.AdamOptimizer(learning_rate=0.01)
train_op = slim.learning.create_train_op(total_loss, optimizer)
# Run the training:
final_loss = slim.learning.train(
train_op,
logdir=train_dir,
init_fn=get_init_fn(),
number_of_steps=2)
print('Finished training. Last batch loss %f' % final_loss)
import numpy as np
import tensorflow as tf
from datasets import flowers
from nets import inception
slim = tf.contrib.slim
image_size = inception.inception_v1.default_image_size
batch_size = 3
with tf.Graph().as_default():
tf.logging.set_verbosity(tf.logging.INFO)
dataset = flowers.get_split('train', flowers_data_dir)
images, images_raw, labels = load_batch(dataset, height=image_size, width=image_size)
# Create the model, use the default arg scope to configure the batch norm parameters.
with slim.arg_scope(inception.inception_v1_arg_scope()):
logits, _ = inception.inception_v1(images, num_classes=dataset.num_classes, is_training=True)
probabilities = tf.nn.softmax(logits)
checkpoint_path = tf.train.latest_checkpoint(train_dir)
init_fn = slim.assign_from_checkpoint_fn(
checkpoint_path,
slim.get_variables_to_restore())
with tf.Session() as sess:
with slim.queues.QueueRunners(sess):
sess.run(tf.initialize_local_variables())
init_fn(sess)
np_probabilities, np_images_raw, np_labels = sess.run([probabilities, images_raw, labels])
for i in xrange(batch_size):
image = np_images_raw[i, :, :, :]
true_label = np_labels[i]
predicted_label = np.argmax(np_probabilities[i, :])
predicted_name = dataset.labels_to_names[predicted_label]
true_name = dataset.labels_to_names[true_label]
plt.figure()
plt.imshow(image.astype(np.uint8))
plt.title('Ground Truth: [%s], Prediction [%s]' % (true_name, predicted_name))
plt.axis('off')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Creating your first neural network with TF-Slim
Step3: Let's create the model and examine its structure.
Step4: Let's create some 1d regression data.
Step5: Let's fit the model to the data
Step6: Training with multiple loss functions.
Step7: Let's load the saved model and use it for prediction.
Step8: Let's compute various evaluation metrics on the test set.
Step9: Reading Data with TF-Slim
Step10: Display some of the data.
Step11: Convolutional neural nets (CNNs).
Step12: Apply the model to some randomly generated images.
Step14: Train the model on the Flowers dataset.
Step15: Evaluate some metrics.
Step16: Using pre-trained models
Step17: Apply Pre-trained model to Images.
Step19: Fine-tune the model on a different set of labels.
Step20: Apply fine tuned model to some images.
|
6,662
|
<ASSISTANT_TASK:>
Python Code:
import sys
sys.path.append('..')
import os
import numpy as np
import pypmj as jpy
jpy.import_jcmwave('/path/to/your/JCMsuite/installation/directory')
project = jpy.JCMProject('../projects/scattering/mie/mie2D')
mie_keys = {'constants' :{}, # <-- can be anything, but is not looped over and not stored in the HDF5 store
'parameters': {}, # <-- everything that needs to be stored, but is not in layout.jcmt
'geometry': {'radius':np.linspace(0.3, 0.5, 40)}} # <-- same as before, but layout.jcmt-relevant
simuset = jpy.SimulationSet(project, mie_keys)
simuset.make_simulation_schedule()
def read_scs(pp):
results = {} #must be a dict
results['SCS'] = pp[0]['ElectromagneticFieldEnergyFlux'][0][0].real
return results
# simuset.resource_manager.resources['localhost'].set_m_n(4,2)
simuset.run(processing_func=read_scs)
%matplotlib inline
data = simuset.get_store_data().sort_values(by='radius')
data.plot(x='radius', y='SCS', title='Results of the simulation')
simuset.write_store_data_to_file() # default is results.csv in the storage folder
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Specify the path to your JCMsuite installation directory here. You can skip this later if you have a configuration file.
Step2: Prepare
Step3: Set the keys that are necessary to translate the JCM template files.
Step4: Initialize a SimulationSet. We ignore the configured storage here and save everything in the current working dir.
Step5: Make a schedule
Step6: Define a processing function
Step7: Solve
Step8: Run your simulations. The results will be appended to the HDF5 store.
Step9: Plot
Step10: Write the data to CSV.
|
6,663
|
<ASSISTANT_TASK:>
Python Code:
# Imports reconstructed so the cell runs standalone (the original notebook likely relied on a %pylab-style setup)
from pylab import *
import numpy as np
from scipy.constants import mu_0
a = 140e-3 /2 # inner conductor radius
b = 230e-3 /2 # outer conductor radius
def coax_electric_field(rho, V, a, b):
"""Returns the electric field in a coaxial line."""
return 1/rho*V/log(b/a)
rho = linspace(a, b, 101)
V = 1 # V
plot(rho/a, coax_electric_field(rho, V, a, b)/V)
xlabel('rho/a')
ylabel('E_{rho} / V')
Ep = 3.0e6 # V/m
Vp = Ep*a*log(b/a)
Vp
Z0 = 30
Pd = Vp**2 / (2*Z0)
print(Pd/1e6)
x = linspace(1, 10, 1001)
y = log(x)/x**2
plot(x, y, lw=2)
xlabel('b/a')
axvline(x=exp(1/2), color='r')
grid(True)
title('$\ln(b/a) / (b/a)^2$')
print('The maximum is for x=', x[argmax(y)])
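# Analytic cross-check (not in the original notebook): d/dx[ln(x)/x^2] = (1 - 2 ln x)/x^3,
# which vanishes at x = exp(1/2) ~ 1.649, in agreement with the numerical argmax above.
import sympy as sp
u_sym = sp.symbols('u', positive=True)
print(sp.solve(sp.diff(sp.log(u_sym)/u_sym**2, u_sym), u_sym))  # -> [exp(1/2)]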
def coax_char_impedance(a, b, eps_r=1, mu_r=1):
"""Returns the characteristic impedance of a coaxial line"""
return 377/(2*pi)*sqrt(mu_r/eps_r)*log(b/a)
coax_char_impedance(1, 1.65)
30/50 * 1/10
(sqrt(6/100) - sqrt(1/10))/sqrt(1/10)
20*log10(sqrt(50/30)*10**(-80/20))
S21 = -80 # Example of attenuation factor [dB]
V_line = 55e3 # Voltage in the transmission line [V]
V_probe = V_line * 10**(S21/20) # voltage at probe in [V]
print('Voltage at probe: {} V'.format(V_probe))
Z0 = 50 # Characteristic Impedance of the probe cable and amplifier [Ohm]
Power_probe = 10*np.log10(V_probe**2 / (2*Z0) * 1000) # power at the probe in [dBm]
print('Power at probe: {} dBm'.format(Power_probe))
from scipy.optimize import minimize
def fun(u):
return abs(1+1/u-log(u))
sol = minimize(fun, x0=3)
print(sol.x)
u = linspace(0.1, 10, 501)
test = 1+1/u-log(u)
plot(u, test, lw=2)
xlabel('b/a')
title('$1+1/(b/a)-ln(b/a)$')
#u_0 = u[argmin(abs(test))]
axhline(y=0, color='k')
grid()
axvline(x=sol.x, color='r')
a = 200e-3
coax_char_impedance(a, 3.6*a)
def coax_maximum_power(a, b, Emax=3.0e6):
# deduces the maximum electric field from voltage breakdown value
Vp = Emax*a*log(b/a)
Z0 = coax_char_impedance(a, b)
return Vp**2 / (2*Z0)
def coax_alpha(a, b, f, sigma=6e7):
# sheet resistance [Ohm]
Rs = sqrt(2*pi*f*mu_0/(2*sigma))
# Resistance per meter [Ohm/m]
R = Rs/(2*pi)*(1/a + 1/b)
# Characteristic Impedance [Ohm]
Z0 = coax_char_impedance(a, b)
# Attenuation [1/m]
return R/(2*Z0)
a = linspace(20e-3, 450e-3) # inner conductor radius
b = 500e-3
f = 60e6
Z0 = coax_char_impedance(a, b)
alpha = coax_alpha(a, b, f)
alpha_norm = alpha/min(alpha)
max_pow = coax_maximum_power(a, b)
max_pow_norm = max_pow/max(max_pow)
semilogx(Z0, alpha_norm, lw=2)
semilogx(Z0, max_pow_norm, '--', lw=2)
legend(('Attenuation', 'Power handling'), fontsize=14, loc='best')
grid(True, which='major')
grid(True, which='minor', axis='x')
xlabel('$Z_0$ [$\Omega$]', fontsize=14)
ylabel('Normalized Values', fontsize=14)
axis([10, 200, 0, 2])
ax=gca()
ax.annotate('Maximum at $30\Omega$', xy=(30,1), xytext=(15,0.60),
color='g', arrowprops=dict(facecolor='g', width=5),
fontsize=14)
ax.annotate('Maximum at $77\Omega$', xy=(77,1), xytext=(50,1.5),
color='b', arrowprops=dict(facecolor='b', width=5),
fontsize=14)
freq, reB2overB1, imB2overB1 = loadtxt('ICRH_Probe-Calibration/SLIDING_BF.ASC', skiprows=14, delimiter=';', unpack=True)
B2overB1 = reB2overB1 + 1j*imB2overB1
B2overB1_dB = 20*log10(abs(B2overB1))
plot(freq/1e6, B2overB1_dB)
xlabel('f [MHz]')
ylabel('$b_2/b_1$ [dB]')
grid()
xlim(min(freq/1e6), max(freq/1e6))
semilogx(freq, B2overB1_dB)
xlabel('f [Hz]')
ylabel('$b_2/b_1$ [dB]')
grid()
xlim(min(freq), max(freq))
freq_log = log10(freq)
# fits the curve with a linear polynom y=a*x+b
a,b=polyfit(freq_log, B2overB1_dB, deg=1)
print(a,b)
semilogx(freq, B2overB1_dB, lw=2)
semilogx(freq, a*freq_log + b, '--r', lw=3)
grid()
grid(which='minor', axis='x')
xlim(min(freq), max(freq))
xlabel('f [Hz]')
ylabel('$b_2/b_1$ [dB]')
f = 62.5e6
G = B2overB1_dB + 20*log10(f/freq)
G_mean = mean(G)
G_std = std(G)
plot(G)
axhline(y=G_mean, color='r')
grid()
ylabel('G [dB]')
xlabel('measurement points')
print(G_mean, G_std)
# Band sweep
CEA1_freq, CEA1_reB2overB1, CEA1_imB2overB1 = loadtxt('ICRH_Probe-Calibration/CEA_BF1.ASC', skiprows=14, delimiter=';', unpack=True)
CEA1_B2overB1_dB = 20*log10(abs(CEA1_reB2overB1 + 1j*CEA1_imB2overB1))
# Point sweep
CEA2_freq, CEA2_reB2overB1, CEA2_imB2overB1 = loadtxt('ICRH_Probe-Calibration/CEA_BF2.ASC', skiprows=14, delimiter=';', unpack=True)
CEA2_B2overB1_dB = 20*log10(abs(CEA2_reB2overB1 + 1j*CEA2_imB2overB1))
DUT1_freq, DUT1_reB2overB1, DUT1_imB2overB1 = loadtxt('ICRH_Probe-Calibration/DUT_BF1.ASC', skiprows=14, delimiter=';', unpack=True)
DUT1_B2overB1_dB = 20*log10(abs(DUT1_reB2overB1 + 1j*DUT1_imB2overB1))
DUT2_freq, DUT2_reB2overB1, DUT2_imB2overB1 = loadtxt('ICRH_Probe-Calibration/DUT_BF2.ASC', skiprows=14, delimiter=';', unpack=True)
DUT2_B2overB1_dB = 20*log10(abs(DUT2_reB2overB1 + 1j*DUT2_imB2overB1))
semilogx(CEA1_freq, CEA1_B2overB1_dB)
semilogx(CEA2_freq, CEA2_B2overB1_dB)
grid()
grid(which='minor', axis='x')
xlim(min(CEA1_freq), max(CEA1_freq))
xlabel('f [Hz]')
ylabel('$b_2/b_1$ [dB]')
title('CEA probe')
a,b = polyfit(CEA1_freq[1000:], CEA1_B2overB1_dB[1000:], deg=1)
print(a,b)
a,b = polyfit(CEA2_freq, CEA2_B2overB1_dB, deg=1)
print(a,b)
plot(CEA1_freq, CEA1_B2overB1_dB)
plot(CEA1_freq[500:], CEA1_B2overB1_dB[500:])
def moving_average(a, n=3) :
ret = np.cumsum(a, dtype=float)
ret[n:] = ret[n:] - ret[:-n]
return ret[n - 1:] / n
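# Quick sanity check (illustrative): moving_average(np.array([1., 2., 3., 4., 5.]), n=3)
# returns array([2., 3., 4.]), i.e. the mean of each 3-point window.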
import matplotlib.pylab as pylab
pylab.rcParams['figure.figsize'] = 20, 10
f = 62.5e6
G_DUT1 = DUT1_B2overB1_dB + 20*log10(f/DUT1_freq)
G_DUT2 = DUT2_B2overB1_dB + 20*log10(f/DUT2_freq)
G_CEA1 = CEA1_B2overB1_dB + 20*log10(f/CEA1_freq)
G_CEA2 = CEA2_B2overB1_dB + 20*log10(f/CEA2_freq)
subplot(121)
plot(G_CEA1)
plot(G_CEA2)
grid()
ylabel('G [dB]')
xlabel('measurement points')
title('CEA-side probe')
plot(moving_average((G_CEA1+G_CEA2)/2,10), lw=2)
subplot(122)
plot(G_DUT1)
plot(G_DUT2)
grid()
ylabel('G [dB]')
xlabel('measurement points')
title('DUT-side probe')
plot(moving_average((G_DUT1+G_DUT2)/2,10), lw=2)
# average for the last 1000 points
# Standard error calculation to determine the error bar
N=500
CEA_avg = mean(moving_average((G_CEA1[N:]+G_CEA2[N:])/2,10))
DUT_avg = mean(moving_average((G_DUT1[N:]+G_DUT2[N:])/2,10))
DUT_standard_error = std((G_DUT1[N:]+G_DUT2[N:])/2) / sqrt(len(G_DUT1[N:]))
CEA_standard_error = std((G_CEA1[N:]+G_CEA2[N:])/2) / sqrt(len(G_CEA1[N:]))
print('CEA probe average extrapoled gain : G={}+/-{} dB'.format(CEA_avg, CEA_standard_error))
print('DUT probe average extrapoled gain : G={}+/-{} dB'.format(DUT_avg, DUT_standard_error))
R = 50
C1 = 1e-15 # unknown
Cs = C1 * (10**(211/20) -1) # fit from the sliding probe linear fit (-211 dB at omega=0)
f = logspace(0, 9, 301)
Z_Cs = 1/(1j*2*pi*f*Cs)
Z_C1 = 1/(1j*2*pi*f*C1)
G = (R + Z_Cs)/(R + Z_Cs + Z_C1)
G_dB = 20*log10(abs(G))
20*log10( 1/Cs / (1/Cs + 1/C1) )
semilogx(f, G_dB, lw=2)
grid()
grid(which='minor', axis='x')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Power Handling
Step2: This is maximum for $\rho=a$. Let us suppose that the peak voltage $V_p$ (or equivalently the maximum RF power $P_p$ the line can handle) is set by the breakdown voltage of the insulator. The electric field strength at which breakdown occurs depends on the respective geometries of the insulator and the electrodes with which the electric field is applied. In the case of air, the dielectric strength is $E_p$=3.0 MV/m.
Step3: One gets a peak voltage of 104 kV. In terms of power, for a 30 Ohm characteristic impedance, this leads to
Step4: A maximum power of 181 MW.
Step6: The second term is maximum when $b/a$ is around 1.65.
Step7: This maximum corresponds to an impedance of
Step8: Probe equations
Step9: The voltage in the rigid line is thus given from the [0-10V] signal voltage by
Step10: As we want to measure voltages between 5 and 55 kV, using the previous expressions leads to
Step11: The latter equation is satisfied for $b/a=3.6$, which gives an optimum characteristic impedance of 77 Ohm
Step12: Optimum?
Step13: Low Frequency Measurements
Step14: Let's plot the results versus the frequency
Step15: Let's do the same on a log scale
Step16: The plot looks pretty linear. Let's fit a linear trend to see how it evolves with respect to the log10 of the frequency
Step17: One can clearly see that the $b_2/b_1$ ratio increases linearly with respect to frequency decades (a decade is a factor of 10 difference between two numbers). More precisely, it increases by 20dB per decade.
Step18: The gain at 60 MHz extrapolated from low frequency measurement is thus G=-52.99 dB $\pm$ 0.25 dB
Step19: A little bit of theory
|
6,664
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'messy-consortium', 'emac-2-53-vol', 'toplevel')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 2. Key Properties --> Flux Correction
Step7: 3. Key Properties --> Genealogy
Step8: 3.2. CMIP3 Parent
Step9: 3.3. CMIP5 Parent
Step10: 3.4. Previous Name
Step11: 4. Key Properties --> Software Properties
Step12: 4.2. Code Version
Step13: 4.3. Code Languages
Step14: 4.4. Components Structure
Step15: 4.5. Coupler
Step16: 5. Key Properties --> Coupling
Step17: 5.2. Atmosphere Double Flux
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Step19: 5.4. Atmosphere Relative Winds
Step20: 6. Key Properties --> Tuning Applied
Step21: 6.2. Global Mean Metrics Used
Step22: 6.3. Regional Metrics Used
Step23: 6.4. Trend Metrics Used
Step24: 6.5. Energy Balance
Step25: 6.6. Fresh Water Balance
Step26: 7. Key Properties --> Conservation --> Heat
Step27: 7.2. Atmos Ocean Interface
Step28: 7.3. Atmos Land Interface
Step29: 7.4. Atmos Sea-ice Interface
Step30: 7.5. Ocean Seaice Interface
Step31: 7.6. Land Ocean Interface
Step32: 8. Key Properties --> Conservation --> Fresh Water
Step33: 8.2. Atmos Ocean Interface
Step34: 8.3. Atmos Land Interface
Step35: 8.4. Atmos Sea-ice Interface
Step36: 8.5. Ocean Seaice Interface
Step37: 8.6. Runoff
Step38: 8.7. Iceberg Calving
Step39: 8.8. Endoreic Basins
Step40: 8.9. Snow Accumulation
Step41: 9. Key Properties --> Conservation --> Salt
Step42: 10. Key Properties --> Conservation --> Momentum
Step43: 11. Radiative Forcings
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Step45: 12.2. Additional Information
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Step47: 13.2. Additional Information
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Step49: 14.2. Additional Information
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Step51: 15.2. Additional Information
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Step53: 16.2. Additional Information
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Step55: 17.2. Equivalence Concentration
Step56: 17.3. Additional Information
Step57: 18. Radiative Forcings --> Aerosols --> SO4
Step58: 18.2. Additional Information
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Step60: 19.2. Additional Information
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Step62: 20.2. Additional Information
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Step64: 21.2. Additional Information
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Step66: 22.2. Aerosol Effect On Ice Clouds
Step67: 22.3. Additional Information
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Step69: 23.2. Aerosol Effect On Ice Clouds
Step70: 23.3. RFaci From Sulfate Only
Step71: 23.4. Additional Information
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Step73: 24.2. Additional Information
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Step77: 25.4. Additional Information
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Step81: 26.4. Additional Information
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Step83: 27.2. Additional Information
Step84: 28. Radiative Forcings --> Other --> Land Use
Step85: 28.2. Crop Change Only
Step86: 28.3. Additional Information
Step87: 29. Radiative Forcings --> Other --> Solar
Step88: 29.2. Additional Information
|
6,665
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
pdf = pd.DataFrame({
'x1': ['a','a','b','b', 'b', 'c'],
'x2': ['apple', 'orange', 'orange','orange', 'peach', 'peach'],
'x3': [1, 1, 2, 2, 2, 4],
'x4': [2.4, 2.5, 3.5, 1.4, 2.1,1.5],
'y1': [1, 0, 1, 0, 0, 1],
'y2': ['yes', 'no', 'no', 'yes', 'yes', 'yes']
})
df = spark.createDataFrame(pdf)
df.show()
from pyspark.ml.feature import StringIndexer
# build indexer
string_indexer = StringIndexer(inputCol='x1', outputCol='indexed_x1')
# learn the model
string_indexer_model = string_indexer.fit(df)
# transform the data
df_stringindexer = string_indexer_model.transform(df)
# resulting df
df_stringindexer.show()
df_ohe = df.select('x1')
df_ohe.show()
df_x1_indexed = StringIndexer(inputCol='x1', outputCol='indexed_x1').fit(df_ohe).transform(df_ohe)
df_x1_indexed.show()
from pyspark.ml.linalg import DenseVector, SparseVector, DenseMatrix, SparseMatrix
x = [SparseVector(3, {0: 1.0}).toArray()] + \
[SparseVector(3, {1: 1.0}).toArray()] + \
[SparseVector(3, {2: 1.0}).toArray()]
import numpy as np
np.array(x)
from pyspark.ml.feature import OneHotEncoder
OneHotEncoder(inputCol='indexed_x1', outputCol='encoded_x1').transform(df_x1_indexed).show()
OneHotEncoder(dropLast=False, inputCol='indexed_x1', outputCol='encoded_x1').transform(df_x1_indexed).show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: StringIndexer
Step2: From the result above, we can see that (a, b, c) in column x1 are converted to (1.0, 0.0, 2.0). They are ordered by their frequencies in column x1.
Step3: StringIndex column 'x1'
Step4: 'x1' has three categories
Step5: The obtained matrix is exactly the matrix that we would use to represent our categorical variable in a statistical class.
Step6: Specify to not drop the last category
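The indexing and encoding steps can also be chained with a Pipeline so both are fit and applied in one pass; a minimal sketch reusing the df and column names from the code above:
from pyspark.ml import Pipeline
indexer = StringIndexer(inputCol='x1', outputCol='indexed_x1')
encoder = OneHotEncoder(dropLast=False, inputCol='indexed_x1', outputCol='encoded_x1')
Pipeline(stages=[indexer, encoder]).fit(df).transform(df).select('x1', 'encoded_x1').show()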
|
6,666
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import statsmodels.api as sm
p = 5
groups_var = 1
level1_var = 2
level2_var = 3
resid_var = 4
n_groups = 100
group_size = 20
level1_size = 10
level2_size = 5
n = n_groups * group_size * level1_size * level2_size
xmat = np.random.normal(size=(n, p))
groups_ix = np.kron(np.arange(n // group_size), np.ones(group_size)).astype(np.int)
level1_ix = np.kron(np.arange(n // level1_size), np.ones(level1_size)).astype(np.int)
level2_ix = np.kron(np.arange(n // level2_size), np.ones(level2_size)).astype(np.int)
groups_re = np.sqrt(groups_var) * np.random.normal(size=n // group_size)
level1_re = np.sqrt(level1_var) * np.random.normal(size=n // level1_size)
level2_re = np.sqrt(level2_var) * np.random.normal(size=n // level2_size)
y = groups_re[groups_ix] + level1_re[level1_ix] + level2_re[level2_ix]
y += np.sqrt(resid_var) * np.random.normal(size=n)
df = pd.DataFrame(xmat, columns=["x%d" % j for j in range(p)])
df["y"] = y + xmat[:, 0] - xmat[:, 3]
df["groups_ix"] = groups_ix
df["level1_ix"] = level1_ix
df["level2_ix"] = level2_ix
cs = sm.cov_struct.Nested()
dep_fml = "0 + level1_ix + level2_ix"
m = sm.GEE.from_formula("y ~ x0 + x1 + x2 + x3 + x4", cov_struct=cs,
dep_data=dep_fml, groups="groups_ix", data=df)
r = m.fit()
r.cov_struct.summary()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set the number of covariates.
Step2: These parameters define the population variance for each level of grouping.
Step3: Set the number of groups
Step4: Set the number of observations at each level of grouping. Here, everything is balanced, i.e. within a level every group has the same size.
Step5: Calculate the total sample size.
Step6: Construct the design matrix.
Step7: Construct labels showing which group each observation belongs to at each level.
Step8: Simulate the random effects.
Step9: Simulate the response variable.
Step10: Put everything into a dataframe.
Step11: Fit the model.
Step12: The estimated covariance parameters should be similar to groups_var, level1_var, etc. as defined above.
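The np.kron trick used above to build the nested group labels is easiest to see on a tiny example (illustrative only):
import numpy as np
print(np.kron(np.arange(2), np.ones(3)).astype(int))  # [0 0 0 1 1 1]: two groups of size 3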
|
6,667
|
<ASSISTANT_TASK:>
Python Code:
import ipyrad.analysis as ipa
import ipyparallel as ipp
import toytree
ipyclient = ipp.Client()
len(ipyclient)
locifile = "./analysis-ipyrad/pedic_outfiles/pedic.loci"
newick = "./analysis-raxml/RAxML_bestTree.pedic"
## parse the newick tree, re-root it, and plot it.
tre = toytree.tree(newick=newick)
tre.root(wildcard="prz")
tre.draw(node_labels=True, node_size=10);
## store rooted tree back into a newick string.
newick = tre.tree.write()
## create a baba object linked to a data file and newick tree
bb = ipa.baba(data=locifile, newick=newick)
## generate all possible abba-baba tests meeting a set of constraints
bb.generate_tests_from_tree(
constraint_dict={
"p4": ["32082_przewalskii", "33588_przewalskii"],
"p3": ["33413_thamno"],
})
## run all tests linked to bb
bb.run(ipyclient)
## save the results table to a csv file
bb.results_table.to_csv("bb.abba-baba.csv", sep="\t")
## show the results table in notebook
bb.results_table
## plot the results, showing here some plotting options.
bb.plot(height=900,
width=600,
pct_tree_y=0.1,
ewidth=2,
alpha=4.,
style_test_labels={"font-size":"10px"},
);
## create an initial object linked to your data in 'locifile'
aa = ipa.baba(data=locifile)
## create two other copies
bb = aa.copy()
cc = aa.copy()
## print these objects
print aa
print bb
print cc
aa.tests = {
"p4": ["32082_przewalskii", "33588_przewalskii"],
"p3": ["29154_superba"],
"p2": ["33413_thamno"],
"p1": ["40578_rex"],
}
bb.tests = {
"p4": ["32082_przewalskii", "33588_przewalskii"],
"p3": ["30686_cyathophylla"],
"p2": ["33413_thamno"],
"p1": ["40578_rex"],
}
cc.tests = [
{
"p4": ["32082_przewalskii", "33588_przewalskii"],
"p3": ["41954_cyathophylloides"],
"p2": ["33413_thamno"],
"p1": ["40578_rex"],
},
{
"p4": ["32082_przewalskii", "33588_przewalskii"],
"p3": ["41478_cyathophylloides"],
"p2": ["33413_thamno"],
"p1": ["40578_rex"],
},
]
## print params for object aa
aa.params
## set the mincov value as a dictionary for object bb
bb.params.mincov = {"p4":2, "p3":1, "p2":1, "p1":1}
bb.params
## run tests for each of our objects
aa.run(ipyclient)
bb.run(ipyclient)
cc.run(ipyclient)
## you can sort the results by Z-score
cc.results_table.sort_values(by="Z", ascending=False)
## save the table to a file
cc.results_table.to_csv("cc.abba-baba.csv")
## show the results in notebook
cc.results_table
## create a new 'copy' of your baba object and attach a treefile
dd = bb.copy()
dd.newick = newick
## generate all possible tests
dd.generate_tests_from_tree()
## a dict of constraints
constraint_dict={
"p4": ["32082_przewalskii", "33588_przewalskii"],
"p3": ["40578_rex", "35855_rex"],
}
## generate tests with contraints
dd.generate_tests_from_tree(
constraint_dict=constraint_dict,
constraint_exact=False,
)
## 'exact' contrainst are even more constrained
dd.generate_tests_from_tree(
constraint_dict=constraint_dict,
constraint_exact=True,
)
## run the dd tests
dd.run(ipyclient)
dd.plot(height=500, pct_tree_y=0.2, alpha=4., tree_style='c');
dd.results_table
## path to a locifile created by ipyrad
locifile = "./analysis-ipyrad/pedicularis_outfiles/pedicularis.loci"
## path to an unrooted tree inferred with tetrad
newick = "./analysis-tetrad/tutorial.tree"
print cc.results_table
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Connect to cluster
Step2: Load in your .loci data file and a tree hypothesis
Step3: Short tutorial
Step4: Look at the results
Step5: Plotting and interpreting results
Step6: Full Tutorial
Step7: Linking tests to the baba object
Step8: Other parameters
Step9: Running the tests
Step10: The results table
Step11: Auto-generating tests
Step12: More about input file paths (i/o)
Step13: Interpreting results
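Since results_table behaves like a pandas DataFrame (it supports sort_values and to_csv above), one simple way to interpret the output is to keep only tests whose Z-score exceeds a chosen cutoff; a hedged sketch, with 3 as an arbitrary threshold:
sig = cc.results_table[cc.results_table["Z"].abs() > 3]
print(sig.sort_values(by="Z", ascending=False))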
|
6,668
|
<ASSISTANT_TASK:>
Python Code:
import math as mt
def findPermutation(string , k ) :
has =[False for i in range(26 ) ]
cnt = 0
for i in range(len(string ) ) :
if(has[ord(string[i ] ) - ord(' a ' ) ] == False ) :
cnt += 1
has[ord(string[i ] ) - ord(' a ' ) ] = True
ans = 1
for i in range(2 , cnt + 1 ) :
ans *= i
for i in range(cnt - k , 1 , - 1 ) :
ans //= i
return ans
string = "geeksforgeeks "
k = 4
print(findPermutation(string , k ) )
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
6,669
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pickle as pkl
import matplotlib.pyplot as plt
import numpy as np
from scipy.io import loadmat
import tensorflow as tf
!mkdir data
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
data_dir = 'data/'
if not isdir(data_dir):
raise Exception("Data directory doesn't exist!")
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(data_dir + "train_32x32.mat"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Training Set') as pbar:
urlretrieve(
'http://ufldl.stanford.edu/housenumbers/train_32x32.mat',
data_dir + 'train_32x32.mat',
pbar.hook)
if not isfile(data_dir + "test_32x32.mat"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Testing Set') as pbar:
urlretrieve(
'http://ufldl.stanford.edu/housenumbers/test_32x32.mat',
data_dir + 'test_32x32.mat',
pbar.hook)
trainset = loadmat(data_dir + 'train_32x32.mat')
testset = loadmat(data_dir + 'test_32x32.mat')
idx = np.random.randint(0, trainset['X'].shape[3], size=36)
fig, axes = plt.subplots(6, 6, sharex=True, sharey=True, figsize=(5,5),)
for ii, ax in zip(idx, axes.flatten()):
ax.imshow(trainset['X'][:,:,:,ii], aspect='equal')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
plt.subplots_adjust(wspace=0, hspace=0)
def scale(x, feature_range=(-1, 1)):
# scale to (0, 1)
x = ((x - x.min())/(255 - x.min()))
# scale to feature_range
min, max = feature_range
x = x * (max - min) + min
return x
class Dataset:
def __init__(self, train, test, val_frac=0.5, shuffle=False, scale_func=None):
split_idx = int(len(test['y'])*(1 - val_frac))
self.test_x, self.valid_x = test['X'][:,:,:,:split_idx], test['X'][:,:,:,split_idx:]
self.test_y, self.valid_y = test['y'][:split_idx], test['y'][split_idx:]
self.train_x, self.train_y = train['X'], train['y']
self.train_x = np.rollaxis(self.train_x, 3)
self.valid_x = np.rollaxis(self.valid_x, 3)
self.test_x = np.rollaxis(self.test_x, 3)
if scale_func is None:
self.scaler = scale
else:
self.scaler = scale_func
self.shuffle = shuffle
def batches(self, batch_size):
if self.shuffle:
idx = np.arange(len(dataset.train_x))
np.random.shuffle(idx)
self.train_x = self.train_x[idx]
self.train_y = self.train_y[idx]
n_batches = len(self.train_y)//batch_size
for ii in range(0, len(self.train_y), batch_size):
x = self.train_x[ii:ii+batch_size]
y = self.train_y[ii:ii+batch_size]
yield self.scaler(x), y
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, (None, *real_dim), name='input_real')
inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
return inputs_real, inputs_z
def generator(z, output_dim, reuse=False, alpha=0.2, training=True):
    with tf.variable_scope('generator', reuse=reuse):
        # First fully connected layer, reshaped to a 4x4x512 feature map
        x1 = tf.layers.dense(z, 4*4*512)
        x1 = tf.reshape(x1, (-1, 4, 4, 512))
        x1 = tf.layers.batch_normalization(x1, training=training)
        x1 = tf.maximum(alpha * x1, x1)  # leaky ReLU
        # Upsample 4x4x512 -> 8x8x256 -> 16x16x128 (one standard DCGAN layout)
        x2 = tf.layers.conv2d_transpose(x1, 256, 5, strides=2, padding='same')
        x2 = tf.layers.batch_normalization(x2, training=training)
        x2 = tf.maximum(alpha * x2, x2)
        x3 = tf.layers.conv2d_transpose(x2, 128, 5, strides=2, padding='same')
        x3 = tf.layers.batch_normalization(x3, training=training)
        x3 = tf.maximum(alpha * x3, x3)
        # Output layer, 32x32x3
        logits = tf.layers.conv2d_transpose(x3, output_dim, 5, strides=2, padding='same')
        out = tf.tanh(logits)
        return out
def discriminator(x, reuse=False, alpha=0.2):
    with tf.variable_scope('discriminator', reuse=reuse):
        # Input layer is 32x32x3; downsample with strided convolutions (standard DCGAN choice)
        x1 = tf.layers.conv2d(x, 64, 5, strides=2, padding='same')    # 16x16x64
        x1 = tf.maximum(alpha * x1, x1)
        x2 = tf.layers.conv2d(x1, 128, 5, strides=2, padding='same')  # 8x8x128
        x2 = tf.layers.batch_normalization(x2, training=True)
        x2 = tf.maximum(alpha * x2, x2)
        # Flatten and classify real vs. fake
        flat = tf.reshape(x2, (-1, 8*8*128))
        logits = tf.layers.dense(flat, 1)
        out = tf.sigmoid(logits)
        return out, logits
def model_loss(input_real, input_z, output_dim, alpha=0.2):
    """Get the loss for the discriminator and generator
    :param input_real: Images from the real dataset
    :param input_z: Z input
    :param out_channel_dim: The number of channels in the output image
    :return: A tuple of (discriminator loss, generator loss)
    """
g_model = generator(input_z, output_dim, alpha=alpha)
d_model_real, d_logits_real = discriminator(input_real, alpha=alpha)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True, alpha=alpha)
d_loss_real = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_model_real)))
d_loss_fake = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_model_fake)))
g_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_model_fake)))
d_loss = d_loss_real + d_loss_fake
return d_loss, g_loss
def model_opt(d_loss, g_loss, learning_rate, beta1):
    """Get optimization operations
    :param d_loss: Discriminator loss Tensor
    :param g_loss: Generator loss Tensor
    :param learning_rate: Learning Rate Placeholder
    :param beta1: The exponential decay rate for the 1st moment in the optimizer
    :return: A tuple of (discriminator training operation, generator training operation)
    """
# Get weights and bias to update
t_vars = tf.trainable_variables()
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
g_vars = [var for var in t_vars if var.name.startswith('generator')]
# Optimize
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=g_vars)
return d_train_opt, g_train_opt
class GAN:
def __init__(self, real_size, z_size, learning_rate, alpha=0.2, beta1=0.5):
tf.reset_default_graph()
self.input_real, self.input_z = model_inputs(real_size, z_size)
self.d_loss, self.g_loss = model_loss(self.input_real, self.input_z,
real_size[2], alpha=alpha)
self.d_opt, self.g_opt = model_opt(self.d_loss, self.g_loss, learning_rate, beta1)
def view_samples(epoch, samples, nrows, ncols, figsize=(5,5)):
fig, axes = plt.subplots(figsize=figsize, nrows=nrows, ncols=ncols,
sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.axis('off')
img = ((img - img.min())*255 / (img.max() - img.min())).astype(np.uint8)
ax.set_adjustable('box-forced')
im = ax.imshow(img, aspect='equal')
plt.subplots_adjust(wspace=0, hspace=0)
return fig, axes
def train(net, dataset, epochs, batch_size, print_every=10, show_every=100, figsize=(5,5)):
saver = tf.train.Saver()
sample_z = np.random.uniform(-1, 1, size=(72, z_size))
samples, losses = [], []
steps = 0
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for x, y in dataset.batches(batch_size):
steps += 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(net.d_opt, feed_dict={net.input_real: x, net.input_z: batch_z})
_ = sess.run(net.g_opt, feed_dict={net.input_z: batch_z, net.input_real: x})
if steps % print_every == 0:
# At the end of each epoch, get the losses and print them out
train_loss_d = net.d_loss.eval({net.input_z: batch_z, net.input_real: x})
train_loss_g = net.g_loss.eval({net.input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
if steps % show_every == 0:
gen_samples = sess.run(
generator(net.input_z, 3, reuse=True, training=False),
feed_dict={net.input_z: sample_z})
samples.append(gen_samples)
_ = view_samples(-1, samples, 6, 12, figsize=figsize)
plt.show()
saver.save(sess, './checkpoints/generator.ckpt')
with open('samples.pkl', 'wb') as f:
pkl.dump(samples, f)
return losses, samples
real_size = (32,32,3)
z_size = 100
learning_rate = 0.001
batch_size = 64
epochs = 1
alpha = 0.01
beta1 = 0.9
# Create the network
net = GAN(real_size, z_size, learning_rate, alpha=alpha, beta1=beta1)
# Load the data and train the network here
dataset = Dataset(trainset, testset)
losses, samples = train(net, dataset, epochs, batch_size, figsize=(10,5))
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator', alpha=0.5)
plt.plot(losses.T[1], label='Generator', alpha=0.5)
plt.title("Training Losses")
plt.legend()
_ = view_samples(-1, samples, 6, 12, figsize=(10,5))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Getting the data
Step2: These SVHN files are .mat files typically used with Matlab. However, we can load them in with scipy.io.loadmat which we imported above.
Step3: Here I'm showing a small sample of the images. Each of these is 32x32 with 3 color channels (RGB). These are the real images we'll pass to the discriminator and what the generator will eventually fake.
Step4: Here we need to do a bit of preprocessing to get the images into a form where we can pass batches to the network. First off, we need to rescale the images to a range of -1 to 1, since the output of our generator is also in that range. We also have a set of test and validation images which could be used if we're trying to identify the numbers in the images. (A quick numeric check of this rescaling is sketched after this step list.)
Step5: Network Inputs
Step6: Generator
Step7: Discriminator
Step9: Model Loss
Step11: Optimizers
Step12: Building the model
Step13: Here is a function for displaying generated images.
Step14: And another function we can use to train our network. Notice when we call generator to create the samples to display, we set training to False. That's so the batch normalization layers will use the population statistics rather than the batch statistics. Also notice that we set the net.input_real placeholder when we run the generator's optimizer. The generator doesn't actually use it, but we'd get an error without it because of the tf.control_dependencies block we created in model_opt.
Step15: Hyperparameters
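As a quick numeric check of the scale() helper described in Step4 (a minimal sketch reusing the function defined in the code above):
img = np.array([0, 127, 255], dtype=np.uint8)
print(scale(img))  # expect roughly [-1., 0., 1.]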
|
6,670
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
from pymt import models
cem, waves = models.Cem(), models.Waves()
waves.get_output_var_names()
cem.get_input_var_names()
args = cem.setup(number_of_rows=100, number_of_cols=200, grid_spacing=200.)
cem.initialize(*args)
args = waves.setup()
waves.initialize(*args)
def plot_coast(spacing, z):
import matplotlib.pyplot as plt
xmin, xmax = 0., z.shape[1] * spacing[1] * 1e-3
ymin, ymax = 0., z.shape[0] * spacing[0] * 1e-3
plt.imshow(z, extent=[xmin, xmax, ymin, ymax], origin='lower', cmap='ocean')
plt.colorbar().ax.set_ylabel('Water Depth (m)')
plt.xlabel('Along shore (km)')
plt.ylabel('Cross shore (km)')
grid_id = cem.get_var_grid('sea_water__depth')
spacing = cem.get_grid_spacing(grid_id)
shape = cem.get_grid_shape(grid_id)
z = np.empty(shape)
cem.get_value('sea_water__depth', out=z)
plot_coast(spacing, z)
qs = np.zeros_like(z)
qs[0, 100] = 750
cem.get_var_units('land_surface_water_sediment~bedload__mass_flow_rate')
waves.set_value('sea_shoreline_wave~incoming~deepwater__ashton_et_al_approach_angle_asymmetry_parameter', .3)
waves.set_value('sea_shoreline_wave~incoming~deepwater__ashton_et_al_approach_angle_highness_parameter', .7)
cem.set_value("sea_surface_water_wave__height", 2.)
cem.set_value("sea_surface_water_wave__period", 7.)
for time in range(3000):
waves.update()
angle = waves.get_value('sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity')
cem.set_value('sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity', angle)
cem.set_value('land_surface_water_sediment~bedload__mass_flow_rate', qs)
cem.update()
cem.get_value('sea_water__depth', out=z)
plot_coast(spacing, z)
qs[0, 150] = 500
for time in range(3750):
waves.update()
angle = waves.get_value('sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity')
cem.set_value('sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity', angle)
cem.set_value('land_surface_water_sediment~bedload__mass_flow_rate', qs)
cem.update()
cem.get_value('sea_water__depth', out=z)
plot_coast(spacing, z)
qs.fill(0.)
for time in range(4000):
waves.update()
angle = waves.get_value('sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity')
cem.set_value('sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity', angle)
cem.set_value('land_surface_water_sediment~bedload__mass_flow_rate', qs)
cem.update()
cem.get_value('sea_water__depth', out=z)
plot_coast(spacing, z)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import the Cem class, and instantiate it. In Python, a model with a BMI will have no arguments for its constructor. Note that although the class has been instantiated, it's not yet ready to be run. We'll get to that later!
Step2: Even though we can't run our waves model yet, we can still get some information about it. Just don't try to run it. Some things we can do with our model include getting the names of the input variables.
Step3: We can also get information about specific variables. Here we'll look at some info about wave direction. This is the main input of the Cem model. Notice that BMI components always use CSDMS standard names. The CSDMS Standard Name for wave angle is sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity.
Step4: Here I define a convenience function for plotting the water depth and making it look pretty. You don't need to worry too much about it's internals for this tutorial. It just saves us some typing later on.
Step5: It generates plots that look like this. We begin with a flat delta (green) and a linear coastline (y = 3 km). The bathymetry drops off linearly to the top of the domain.
Step6: Allocate memory for the sediment discharge array and set the discharge at the coastal cell to some value.
Step7: The CSDMS Standard Name for this variable is land_surface_water_sediment~bedload__mass_flow_rate.
Step8: Set the bedload flux and run the model.
Step9: Let's add another sediment source with a different flux and update the model.
Step10: Here we shut off the sediment supply completely.
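The BMI introspection calls mentioned in Steps 2-3 work for any of the variable names used above; a small sketch (the exact unit strings returned depend on the model metadata):
angle_name = 'sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity'
print(cem.get_var_units(angle_name))
print(cem.get_grid_shape(cem.get_var_grid('sea_water__depth')))  # (100, 200) for the setup above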
|
6,671
|
<ASSISTANT_TASK:>
Python Code:
from Frame2D import Frame2D
from Tables import Table, DataSource
import numpy as np
import pandas as pd
## NOTE: all units are kN and m
FD = {'storey_heights': [6.5] + [5.5]*20 + [7.0], # m
'bay_widths': [10.5,10,10,10,10,10.5], # m
'frame_spacing':8, # m, used only for load calculation
'specified_loads':{'live':2.4,'dead':4.0,'snow':2.5,'wind':2.5}, # kPa
'load_combinations':{'Case-2a':{'dead':1.25,'live':1.5,'snow':0.5},
'Case-2b':{'dead':1.25,'live':1.5,'wind':0.4},
'Case-3a':{'dead':1.25,'snow':1.5,'live':0.5},
'Case-3b':{'dead':1.25,'snow':1.5,'wind':0.4},
'Case-4a':{'dead':1.25,'wind':1.4,'live':0.5},
'Case-4b':{'dead':1.25,'wind':1.4,'snow':0.5},
},
'load_combo':'Case-2b',
'braced_bays':[0,2,3,5],
'support_fixity': ['fx,fy']*7,
'beam_size': 'W1000x222',
'column_size': 'W360x216',
}
SaveData = True
ShowResults = True
FD
def genframe(fd):
h = fd['storey_heights']
w = fd['bay_widths']
s = fd['frame_spacing']
nnodes = (len(h)+1)*(len(w)+1)
# names of column stacks and floor levels
bayline = [chr(ord('A')+i) for i in range(len(w)+1)]
floorlev = [str(i) for i in range(len(h)+1)]
# generate the nodes
nodelist = []
nidgrid = np.ndarray((len(h)+1,len(w)+1),dtype=np.object)
for i in range(len(h)+1):
y = sum(h[:i])*1000.
for j in range(len(w)+1):
x = sum(w[:j])*1000.
nid = bayline[j]+floorlev[i]
nodelist.append((nid,x,y))
nidgrid[i,j] = nid
nodes = pd.DataFrame(nodelist,columns=['NODEID','X','Y'])
# generate the supports
assert len(fd['support_fixity'])==nidgrid.shape[1]
supplist = []
for j,s in enumerate(fd['support_fixity']):
nid = nidgrid[0,j]
fix = s.strip().upper().split(',')
if len(fix) < 3:
fix += [np.nan] * (3-len(fix))
supplist.append([nid,]+fix)
supports = pd.DataFrame(supplist,columns=['NODEID','C0','C1','C2'])
# generate columns
columns = []
for i in range(nidgrid.shape[0]-1):
for j in range(nidgrid.shape[1]):
nidj = nidgrid[i,j]
nidk = nidgrid[i+1,j]
mid = 'C' + nidj + nidk
columns.append((mid,nidj,nidk))
# generate beams
beams = []
roofbeams = []
pinnedbeams = []
for i in range(1,nidgrid.shape[0]):
beamlist = beams if i < nidgrid.shape[0]-1 else roofbeams
for j in range(nidgrid.shape[1]-1):
nidj = nidgrid[i,j]
nidk = nidgrid[i,j+1]
mid = 'B' + nidj + nidk
beamlist.append((mid,nidj,nidk))
if j not in fd['braced_bays']:
pinnedbeams.append(mid)
members = pd.DataFrame(columns+beams+roofbeams,columns=['MEMBERID','NODEJ','NODEK'])
# generate releases
rellist = []
for mid in pinnedbeams:
rellist.append((mid,'MZJ'))
rellist.append((mid,'MZK'))
releases = pd.DataFrame(rellist,columns=['MEMBERID','RELEASE'])
# generate properties
proplist = []
size = fd['column_size']
for mid,j,k in columns:
proplist.append((mid,size,np.nan,np.nan))
size = np.nan
size = fd['beam_size']
for mid,j,k in beams+roofbeams:
proplist.append((mid,size,np.nan,np.nan))
size = np.nan
properties = pd.DataFrame(proplist,columns=['MEMBERID','SIZE','IX','A'])
# generate node loads (wind from left)
nloadlist = []
L = fd['specified_loads'] # area loads
for i in range(1,nidgrid.shape[0]+1):
H = (sum(h[:i+1])-sum(h[:i-1]))/2.
FL = H*fd['frame_spacing']*L['wind']
if FL != 0.:
nloadlist.append(('wind',nidgrid[i,0],'FX',FL*1000.))
node_loads = pd.DataFrame(nloadlist,columns=['LOAD','NODEID','DIRN','F'])
# generate member loads
mloadlist = []
UDL = -L['dead']*fd['frame_spacing']
mloadlist += [('dead',mid,'UDL',UDL,np.nan,np.nan,np.nan,np.nan) for mid,nj,nk in beams]
mloadlist += [('dead',mid,'UDL',UDL,np.nan,np.nan,np.nan,np.nan) for mid,nj,nk in roofbeams]
UDL = -L['live']*fd['frame_spacing']
mloadlist += [('live',mid,'UDL',UDL,np.nan,np.nan,np.nan,np.nan) for mid,nj,nk in beams]
UDL = -L['snow']*fd['frame_spacing']
mloadlist += [('snow',mid,'UDL',UDL,np.nan,np.nan,np.nan,np.nan) for mid,nj,nk in roofbeams]
member_loads = pd.DataFrame(mloadlist,columns='LOAD,MEMBERID,TYPE,W1,W2,A,B,C'.split(','))
# generate load combinations
lclist = []
for case,loads in fd['load_combinations'].items():
for load,factor in loads.items():
lclist.append((case,load,factor))
load_combinations = pd.DataFrame(lclist,columns=['CASE','LOAD','FACTOR'])
ds = DataSource
ds.set_source(None)
ds.set_table('nodes',nodes)
ds.set_table('supports',supports)
ds.set_table('members',members)
ds.set_table('releases',releases)
ds.set_table('properties',properties)
ds.set_table('node_loads',node_loads)
ds.set_table('member_loads',member_loads)
ds.set_table('load_combinations',load_combinations)
frame = Frame2D()
frame.input_all()
return frame
%time f = genframe(FD)
if SaveData:
NS = len(FD['storey_heights'])
NB = len(FD['bay_widths'])
name = 'l{}x{}'.format(NS,NB)
f.write_all(name,makedir=True)
%time rs = f.solve(FD['load_combo'])
if ShowResults:
f.print_input()
if ShowResults:
f.print_results(rs)
if SaveData:
f.write_results(name,rs)
%matplotlib inline
f.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: On Flex 5
Step2: On Flex 5
|
6,672
|
<ASSISTANT_TASK:>
Python Code:
# exercises for section 8.1
# exercises for section 8.2
plt.figure(figsize=(5,8))
plt.imshow(plt.imread('./res/fig8_1.png'))
plt.figure(figsize=(8,8))
plt.imshow(plt.imread('./res/fig8_2.png'))
bipartite_graph = [('1', 'a'), ('1', 'c'), ('2', 'b'), ('3', 'b'), ('3', 'd'), ('4', 'a')]
bipartite_graph
logger.setLevel('WARN')
def greedy_maximal_matching(connections):
maximal_matches = np.array([connections[0]])
logger.debug('maximal_matches: \n{}'.format(maximal_matches))
for c in connections[1:]:
logger.debug('c: {}'.format(c))
if (c[0] not in maximal_matches[:,0]) and (c[1] not in maximal_matches[:,1]):
maximal_matches = np.append(maximal_matches, [c], axis=0)
logger.debug('maximal_matches: \n{}'.format(maximal_matches))
return maximal_matches
from random import sample
connections = sample(bipartite_graph, len(bipartite_graph))
print('connections: \n{}'.format(connections))
greedy_maximal_matching(bipartite_graph)
#(2)
from itertools import permutations
stat = []
for connections in permutations(bipartite_graph, len(bipartite_graph)):
stat.append(greedy_maximal_matching(connections).shape[0])
pd.Series(stat).value_counts()
plt.imshow(plt.imread('./res/fig8_3.png'))
plt.imshow(plt.imread('./res/fig8_4.png'))
class advertiser:
def __init__(self, name, bids):
self.name = name
self.bids = bids
def get_info(self):
return self.name, self.bids
advertisers = [
advertiser('David', ['Google', 'email', 'product']),
advertiser('Jim', ['SNS', 'Facebook', 'product']),
advertiser('Sun', ['product', 'Google', 'email']),
]
bids_hash_table = dict()
for ad in advertisers:
v, k = ad.get_info()
k = [x.lower() for x in k]
k = ' '.join(sorted(k))
if k not in bids_hash_table:
bids_hash_table[k] = [v]
else:
bids_hash_table[k].append(v)
bids_hash_table
queries = [
('EMAIL', 'google', 'Product'),
('google', 'facebook', 'Product')
]
def handle_query(query):
q = [x.lower() for x in query]
q = ' '.join(sorted(q))
print(q)
try:
print('Found: {}'.format(bids_hash_table[q]))
except KeyError:
print('No bids')
for query in queries:
handle_query(query)
print()
n_common_words = {'the': 0.9, 'and': 0.8, 'twas': 0.3}
def construct_document(doc):
doc = doc.replace(',','').lower().split(' ')
com = set(doc).intersection(set(n_common_words.keys()))
diff = set(doc).difference(set(n_common_words.keys()))
freq = [n_common_words[x] for x in com]
freq_sec = [x for (y,x) in sorted(zip(freq, com))]
rare_sec = sorted(diff)
sec = ' '.join(rare_sec + freq_sec)
print(sec)
doc = 'Twas brilling, and the slithy toves'
construct_document(doc)
plt.imshow(plt.imread('./res/fig8_5.png'))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 8.2 On-Line Algorithms
Step2: 8.3 The Matching Problem
Step3: 8.3.1 Matches and Perfect Matches
Step4: 8.3.2 The Greedy Algorithm for Maximal Matching
Step5: 8.3.3 Competitive Ratio for Greedy Matching
Step6: 8.4 The Adwords Problem
Step7: In fig 8.3, observe that Balance must exhaust the budget of at least one of the advertisers, say $A_2$.
Step8: Lower-numbered advertisers cannot bid at first, and the budgets of the higher-numbered advertisers will be exhausted eventually. All advertisers will end at round $j$, where
Step9: 8.5.2 More Complex Matching Problems
Step10: The bids are stored in a hash-table, whose hash key is the first word of the bid, in the order explained above.
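To make the Balance algorithm discussed in Steps 7-8 concrete, here is a minimal sketch for the simplest AdWords setting (every advertiser bids 1 on every query and all budgets are equal); this helper is illustrative and not part of the notebook code above:
def balance(n_advertisers, budget, n_queries):
    remaining = [budget] * n_advertisers
    assignment = []
    for _ in range(n_queries):
        i = max(range(n_advertisers), key=lambda a: remaining[a])  # most budget left
        if remaining[i] == 0:
            assignment.append(None)  # all budgets exhausted; query goes unassigned
        else:
            remaining[i] -= 1
            assignment.append(i)
    return assignment, remaining

print(balance(n_advertisers=2, budget=4, n_queries=8))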
|
6,673
|
<ASSISTANT_TASK:>
Python Code:
# As usual, a bit of setup
import time
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.fc_net import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from cs231n.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
    """ returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in data.iteritems():
print '%s: ' % k, v.shape
# Check the training-time forward pass by checking means and variances
# of features both before and after batch normalization
# Simulate the forward pass for a two-layer network
N, D1, D2, D3 = 200, 50, 60, 3
X = np.random.randn(N, D1)
W1 = np.random.randn(D1, D2)
W2 = np.random.randn(D2, D3)
a = np.maximum(0, X.dot(W1)).dot(W2)
print 'Before batch normalization:'
print ' means: ', a.mean(axis=0)
print ' stds: ', a.std(axis=0)
# Means should be close to zero and stds close to one
print 'After batch normalization (gamma=1, beta=0)'
a_norm, _ = batchnorm_forward(a, np.ones(D3), np.zeros(D3), {'mode': 'train'})
print ' mean: ', a_norm.mean(axis=0)
print ' std: ', a_norm.std(axis=0)
# Now means should be close to beta and stds close to gamma
gamma = np.asarray([1.0, 2.0, 3.0])
beta = np.asarray([11.0, 12.0, 13.0])
a_norm, _ = batchnorm_forward(a, gamma, beta, {'mode': 'train'})
print 'After batch normalization (nontrivial gamma, beta)'
print ' means: ', a_norm.mean(axis=0)
print ' stds: ', a_norm.std(axis=0)
# Check the test-time forward pass by running the training-time
# forward pass many times to warm up the running averages, and then
# checking the means and variances of activations after a test-time
# forward pass.
N, D1, D2, D3 = 200, 50, 60, 3
W1 = np.random.randn(D1, D2)
W2 = np.random.randn(D2, D3)
bn_param = {'mode': 'train'}
gamma = np.ones(D3)
beta = np.zeros(D3)
for t in xrange(50):
X = np.random.randn(N, D1)
a = np.maximum(0, X.dot(W1)).dot(W2)
batchnorm_forward(a, gamma, beta, bn_param)
bn_param['mode'] = 'test'
X = np.random.randn(N, D1)
a = np.maximum(0, X.dot(W1)).dot(W2)
a_norm, _ = batchnorm_forward(a, gamma, beta, bn_param)
# Means should be close to zero and stds close to one, but will be
# noisier than training-time forward passes.
print 'After batch normalization (test-time):'
print ' means: ', a_norm.mean(axis=0)
print ' stds: ', a_norm.std(axis=0)
from cs231n.layers import mean_forward, mean_backward
N, D = 4, 5
x = 5 * np.random.randn(N, D) + 12
dout = np.random.randn(N, D) + 3
out, _ = mean_forward(x)
print 'Mean: ', out.mean(axis=0)
fx = lambda x: mean_forward(x)[0]
dx_num = eval_numerical_gradient_array(fx, x, dout)
dx = mean_backward(dout, None)
print 'dx error: ', rel_error(dx_num, dx)
from cs231n.layers import var_forward, var_backward
N, D = 4, 5
x = 5 * np.random.randn(N, D)
x -= x.mean(axis=0)
dout = np.random.randn(N, D)
out, cache = var_forward(x, 1e-5)
print 'Var: ', out.var(axis=0)
fx = lambda x: var_forward(x, 1e-5)[0]
dx_num = eval_numerical_gradient_array(fx, x, dout)
dx = var_backward(dout, cache)
print 'dx error: ', rel_error(dx_num, dx)
# Gradient check batchnorm backward pass
N, D = 4, 5
x = 5 * np.random.randn(N, D) + 12
gamma = np.random.randn(D)
beta = np.random.randn(D)
dout = np.random.randn(N, D)
bn_param = {'mode': 'train'}
fx = lambda x: batchnorm_forward(x, gamma, beta, bn_param)[0]
fg = lambda a: batchnorm_forward(x, gamma, beta, bn_param)[0]
fb = lambda b: batchnorm_forward(x, gamma, beta, bn_param)[0]
dx_num = eval_numerical_gradient_array(fx, x, dout)
da_num = eval_numerical_gradient_array(fg, gamma, dout)
db_num = eval_numerical_gradient_array(fb, beta, dout)
_, cache = batchnorm_forward(x, gamma, beta, bn_param)
dx, dgamma, dbeta = batchnorm_backward(dout, cache)
print 'dx error: ', rel_error(dx_num, dx)
print 'dgamma error: ', rel_error(da_num, dgamma)
print 'dbeta error: ', rel_error(db_num, dbeta)
N, D = 100, 500
x = 5 * np.random.randn(N, D) + 12
gamma = np.random.randn(D)
beta = np.random.randn(D)
dout = np.random.randn(N, D)
bn_param = {'mode': 'train'}
out, cache = batchnorm_forward(x, gamma, beta, bn_param)
t1 = time.time()
dx1, dgamma1, dbeta1 = batchnorm_backward(dout, cache)
t2 = time.time()
dx2, dgamma2, dbeta2 = batchnorm_backward_alt(dout, cache)
t3 = time.time()
print 'dx difference: ', rel_error(dx1, dx2)
print 'dgamma difference: ', rel_error(dgamma1, dgamma2)
print 'dbeta difference: ', rel_error(dbeta1, dbeta2)
print 'speedup: %.2fx' % ((t2 - t1) / (t3 - t2))
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))
for reg in [0, 3.14]:
print 'Running check with reg = ', reg
model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
reg=reg, weight_scale=5e-2, dtype=np.float64,
use_batchnorm=True)
loss, grads = model.loss(X, y)
print 'Initial loss: ', loss
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)
print '%s relative error: %.2e' % (name, rel_error(grad_num, grads[name]))
if reg == 0: print
# Try training a very deep net with batchnorm
hidden_dims = [100, 100, 100, 100, 100]
num_train = 1000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
weight_scale = 2e-2
bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True)
model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False)
bn_solver = Solver(bn_model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=200)
bn_solver.train()
solver = Solver(model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=200)
solver.train()
plt.subplot(3, 1, 1)
plt.title('Training loss')
plt.xlabel('Iteration')
plt.subplot(3, 1, 2)
plt.title('Training accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 3)
plt.title('Validation accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 1)
plt.plot(solver.loss_history, 'o', label='baseline')
plt.plot(bn_solver.loss_history, 'o', label='batchnorm')
plt.subplot(3, 1, 2)
plt.plot(solver.train_acc_history, '-o', label='baseline')
plt.plot(bn_solver.train_acc_history, '-o', label='batchnorm')
plt.subplot(3, 1, 3)
plt.plot(solver.val_acc_history, '-o', label='baseline')
plt.plot(bn_solver.val_acc_history, '-o', label='batchnorm')
for i in [1, 2, 3]:
plt.subplot(3, 1, i)
plt.legend(loc='upper center', ncol=4)
plt.gcf().set_size_inches(15, 15)
plt.show()
# Try training a very deep net with batchnorm
hidden_dims = [50, 50, 50, 50, 50, 50, 50]
num_train = 1000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
bn_solvers = {}
solvers = {}
weight_scales = np.logspace(-4, 0, num=20)
for i, weight_scale in enumerate(weight_scales):
print 'Running weight scale %d / %d' % (i + 1, len(weight_scales))
bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True)
model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False)
bn_solver = Solver(bn_model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=False, print_every=200)
bn_solver.train()
bn_solvers[weight_scale] = bn_solver
solver = Solver(model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=False, print_every=200)
solver.train()
solvers[weight_scale] = solver
# Plot results of weight scale experiment
best_train_accs, bn_best_train_accs = [], []
best_val_accs, bn_best_val_accs = [], []
final_train_loss, bn_final_train_loss = [], []
for ws in weight_scales:
best_train_accs.append(max(solvers[ws].train_acc_history))
bn_best_train_accs.append(max(bn_solvers[ws].train_acc_history))
best_val_accs.append(max(solvers[ws].val_acc_history))
bn_best_val_accs.append(max(bn_solvers[ws].val_acc_history))
final_train_loss.append(np.mean(solvers[ws].loss_history[-100:]))
bn_final_train_loss.append(np.mean(bn_solvers[ws].loss_history[-100:]))
plt.subplot(3, 1, 1)
plt.title('Best val accuracy vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Best val accuracy')
plt.semilogx(weight_scales, best_val_accs, '-o', label='baseline')
plt.semilogx(weight_scales, bn_best_val_accs, '-o', label='batchnorm')
plt.legend(ncol=2, loc='lower right')
plt.subplot(3, 1, 2)
plt.title('Best train accuracy vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Best training accuracy')
plt.semilogx(weight_scales, best_train_accs, '-o', label='baseline')
plt.semilogx(weight_scales, bn_best_train_accs, '-o', label='batchnorm')
plt.legend()
plt.subplot(3, 1, 3)
plt.title('Final training loss vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Final training loss')
plt.semilogx(weight_scales, final_train_loss, '-o', label='baseline')
plt.semilogx(weight_scales, bn_final_train_loss, '-o', label='batchnorm')
plt.legend()
plt.gcf().set_size_inches(10, 15)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Batch Normalization
Step2: Batch normalization
Step3: Batch Normalization
Step4: Batch Normalization
Step5: Fully Connected Nets with Batch Normalization
Step6: Batchnorm for deep networks
Step7: Run the following to visualize the results from two networks trained above. You should find that using batch normalization helps the network to converge much faster.
Step8: Batch normalization and initialization
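For reference, the core train-time computation being checked above fits in a few lines of NumPy; this is only a sketch and omits the running mean/variance bookkeeping of the full cs231n implementation:
def batchnorm_forward_sketch(x, gamma, beta, eps=1e-5):
    mu = x.mean(axis=0)                    # per-feature mean over the batch
    var = x.var(axis=0)                    # per-feature variance over the batch
    x_hat = (x - mu) / np.sqrt(var + eps)  # normalize to zero mean, unit variance
    return gamma * x_hat + beta            # learned scale and shift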
|
6,674
|
<ASSISTANT_TASK:>
Python Code:
import os, sys
import shutil, time, warnings
from contextlib import redirect_stdout
import numpy as np
import numpy.ma as ma
import matplotlib.pyplot as plt
import astropy.units as u
from astropy.coordinates import SkyCoord
from astropy.table import Table, Column, vstack
from astropy.io import ascii
from PIL import Image, ImageDraw, ImageFont
#from astrometry.util.starutil_numpy import hmsstring2ra
from astrometry.util.util import Tan
from astrometry.util.fits import merge_tables
from legacypipe.survey import LegacySurveyData
from legacypipe.runbrick import run_brick
import multiprocessing
nproc = multiprocessing.cpu_count() // 2
%matplotlib inline
dr = 'dr6'
PIXSCALE = 0.262
gallerydir = os.path.join( os.getenv('SCRATCH'), dr, 'gallery' )
galleryfile = os.path.join(gallerydir, 'gallery-{}.fits'.format(dr))
jpgdir = os.path.join(gallerydir, 'jpg')
if not os.path.isdir(jpgdir):
os.mkdir(jpgdir)
pngdir = os.path.join(gallerydir, 'png')
if not os.path.isdir(pngdir):
os.mkdir(pngdir)
names = ('name', 'type', 'ra_hms', 'dec_dms', 'const', 'majax', 'minax',
'pa', 'bmag', 'vmag', 'jmag', 'hmag', 'kmag', 'sbrightn', 'hubble',
'cstarumag', 'cstarbmag', 'cstarvmag', 'messier', 'ngc', 'ic',
'cstarnames', 'identifiers', 'commonnames', 'nednotes', 'ongcnotes')
NGC = ascii.read(os.path.join(gallerydir, 'NGC.csv'), delimiter=';', names=names)
NGC.write(os.path.join(gallerydir, 'NGC.fits'), overwrite=True)
NGC
majax = ma.getdata(NGC['majax']) # arcmin
objtype = np.char.strip(ma.getdata(NGC['type']))
keeptype = ('G', 'PN', 'OCl', 'GCl', 'Cl+N') # Cl gives us GCl, OCl, and Cl+N
#keeptype = ('G', 'GPair', 'GTrpl', 'GGroup', 'PN', 'Cl') # Cl gives us GCl, OCl, and Cl+N
keep = np.zeros(len(NGC), dtype=bool)
for otype in keeptype:
print('Working on {}'.format(otype))
ww = [otype == tt for tt in objtype]
keep = np.logical_or(keep, ww)
galtoss = (objtype == 'G') * (majax < 3)
keep = np.logical_and(keep, (majax > 0.3) * (majax < 20))
keep = np.logical_and(keep, ~galtoss)
nobj = np.count_nonzero(keep)
print('Keeping {} / {} objects'.format(nobj, len(NGC)))
cat = NGC[keep]
cat
print(np.unique(ma.getdata(cat['type'])))
print(np.unique(ma.getdata(cat['hubble'])))
ww = (cat['type'] == 'G')
_ = plt.hist(cat['majax'][ww], bins=100)
coord = SkyCoord(ra=cat['ra_hms'], dec=cat['dec_dms'], unit=(u.hourangle, u.deg))
cat.add_column(Column(name='ra', unit='deg', length=nobj))
cat.add_column(Column(name='dec', unit='deg', length=nobj))
cat['ra'] = coord.ra.value
cat['dec'] = coord.dec.value
survey = LegacySurveyData()
survey.output_dir = gallerydir
def get_name(cat, nice=False):
name = np.atleast_1d(ma.getdata(cat['name']))
mess = np.atleast_1d(ma.getdata(cat['messier']))
comm = np.atleast_1d(ma.getdata(cat['commonnames']))
outname = []
if nice:
hubble_type = np.empty_like(name)
for nn, mm, cc in zip(name, mess, comm):
oo = nn.strip().replace('NED01', '').upper()
if mm != 0:
oo = '{} = M{}'.format(oo, mm)
if cc != 0:
oo = '{} = {}'.format(oo, str(cc).replace(',', ' = '))
outname.append(oo)
else:
for nn in name:
outname.append(nn.strip().replace(' ', '_').lower())
if len(outname) == 1:
outname = outname[0]
return outname
def simple_wcs(obj, diam):
    """Build a simple WCS object for a single object."""
size = np.rint(diam / PIXSCALE).astype('int') # [pixels]
wcs = Tan(obj['ra'], obj['dec'], size/2+0.5, size/2+0.5,
-PIXSCALE/3600.0, 0.0, 0.0, PIXSCALE/3600.0,
float(size), float(size))
return wcs
def _build_sample_one(args):
    """Wrapper function for the multiprocessing."""
return build_sample_one(*args)
def build_sample_one(obj, factor=0.5, verbose=False):
    """Wrapper function to find overlapping grz CCDs for a given object."""
name = get_name(obj)
print('Working on {}...'.format(name))
diam = factor * ma.getdata(obj['majax']) * 60.0 # diameter [arcsec]
wcs = simple_wcs(obj, diam)
try:
ccds = survey.ccds_touching_wcs(wcs) # , ccdrad=2*diam/3600)
except:
return None
if ccds:
# Is there 3-band coverage?
if 'g' in ccds.filter and 'r' in ccds.filter and 'z' in ccds.filter:
if verbose:
print('For {} (type={}) found {} CCDs, RA = {:.5f}, Dec = {:.5f}, Diameter={:.4f} arcmin'.format(
obj['name'], obj['type'], len(ccds), obj['ra'], obj['dec'], obj['majax']))
return obj
return None
def build_sample(cat, factor=1.0):
    """Build the full sample with grz coverage in DR6."""
sampleargs = list()
for cc in cat:
sampleargs.append( (cc, factor, True) ) # the False refers to verbose=False
if nproc > 1:
p = multiprocessing.Pool(nproc)
result = p.map(_build_sample_one, sampleargs)
p.close()
else:
result = list()
for args in sampleargs:
result.append(_build_sample_one(args))
# Remove non-matching objects and write out the sample
outcat = vstack(list(filter(None, result)))
print('Found {}/{} objects in the DR6 footprint.'.format(len(outcat), len(cat)))
return outcat
samplelogfile = os.path.join(gallerydir, 'build-sample.log')
print('Building the sample.')
print('Logging to {}'.format(samplelogfile))
t0 = time.time()
with open(samplelogfile, 'w') as log:
with redirect_stdout(log):
sample = build_sample(cat)
print('Found {}/{} objects in the DR6 footprint.'.format(len(sample), len(cat)))
print('Total time = {:.3f} seconds.'.format(time.time() - t0))
print('Writing {}'.format(galleryfile))
sample.write(galleryfile, overwrite=True)
sample
print(np.unique(sample['type']).data)
def qa_sample():
fig, ax = plt.subplots()
ax.scatter(cat['ra'], cat['dec'], alpha=0.5, s=10, label='Trimmed NGC Catalog')
ax.scatter(sample['ra'], sample['dec'], s=20, label='Objects in DR6 Footprint')
ax.set_xlabel('RA')
ax.set_ylabel('Dec')
ax.legend(loc='lower right')
qa_sample()
def custom_brickname(obj, prefix='custom-'):
brickname = 'custom-{:06d}{}{:05d}'.format(
int(1000*obj['ra']), 'm' if obj['dec'] < 0 else 'p',
int(1000*np.abs(obj['dec'])))
return brickname
def get_factor(objtype):
    """Scale factors for the mosaics."""
ref = dict(
G = 2,
GCl = 2,
OCl = 2,
PN = 4,
)
return ref[objtype]
def make_coadds_one(obj, scale=PIXSCALE, clobber=False):
name = get_name(obj)
jpgfile = os.path.join(jpgdir, '{}.jpg'.format(name))
if os.path.isfile(jpgfile) and not clobber:
print('File {} exists...skipping.'.format(jpgfile))
else:
factor = get_factor(obj['type'])
diam = factor * ma.getdata(obj['majax']) * 60.0 # diameter [arcsec]
size = np.rint(diam / scale).astype('int') # [pixels]
print('Generating mosaic for {} (type={}) with width={} pixels.'.format(name, obj['type'], size))
with warnings.catch_warnings():
warnings.simplefilter("ignore")
run_brick(None, survey, radec=(obj['ra'], obj['dec']), pixscale=scale,
width=size, height=size, stages=['image_coadds'], splinesky=True,
early_coadds=True, pixPsf=True, hybridPsf=True, normalizePsf=True,
write_pickles=False, depth_cut=False, apodize=True, threads=nproc,
do_calibs=False, ceres=False)
sys.stdout.flush()
brickname = custom_brickname(obj, prefix='custom-')
_jpgfile = os.path.join(survey.output_dir, 'coadd', 'cus', brickname,
'legacysurvey-{}-image.jpg'.format(brickname))
shutil.copy(_jpgfile, jpgfile)
shutil.rmtree(os.path.join(survey.output_dir, 'coadd'))
def make_coadds(sample, clobber=False):
for obj in sample:
make_coadds_one(obj, clobber=clobber)
#make_coadds_one(sample[111], clobber=True)
coaddslogfile = os.path.join(gallerydir, 'make-coadds.log')
print('Generating the coadds.')
print('Logging to {}'.format(coaddslogfile))
t0 = time.time()
with open(coaddslogfile, 'w') as log:
with redirect_stdout(log):
make_coadds(sample, clobber=False)
print('Total time = {:.3f} minutes.'.format((time.time() - t0) / 60))
barlen = np.round(60.0 / PIXSCALE).astype('int')
fonttype = os.path.join(gallerydir, 'Georgia.ttf')
def _add_labels_one(args):
    """Wrapper function for the multiprocessing."""
return add_labels_one(*args)
def add_labels_one(obj, verbose=False):
name = get_name(obj)
nicename = get_name(obj, nice=True)
jpgfile = os.path.join(jpgdir, '{}.jpg'.format(name))
pngfile = os.path.join(pngdir, '{}.png'.format(name))
thumbfile = os.path.join(pngdir, 'thumb-{}.png'.format(name))
im = Image.open(jpgfile)
sz = im.size
fntsize = np.round(sz[0]/28).astype('int')
width = np.round(sz[0]/175).astype('int')
font = ImageFont.truetype(fonttype, size=fntsize)
draw = ImageDraw.Draw(im)
# Label the object name--
draw.text((0+fntsize*2, 0+fntsize*2), nicename, font=font)
# Add a scale bar--
x0, x1, yy = sz[1]-fntsize*2-barlen, sz[1]-fntsize*2, sz[0]-fntsize*2
draw.line((x0, yy, x1, yy), fill='white', width=width)
im.save(pngfile)
# Generate a thumbnail
cmd = '/usr/bin/convert -thumbnail 300x300 {} {}'.format(pngfile, thumbfile)
os.system(cmd)
def add_labels(sample):
labelargs = list()
for obj in sample:
labelargs.append((obj, False))
if nproc > 1:
p = multiprocessing.Pool(nproc)
res = p.map(_add_labels_one, labelargs)
p.close()
else:
for args in labelargs:
res = _add_labels_one(args)
%time add_labels(sample)
def get_type(hubble):
    """Convert Hubble type to numerical type, for sorting purposes."""
numtype = {
'E': 0,
'E-S0': 1,
'S0': 2,
'S0-a': 3,
'Sa': 4,
'SBa': 4,
'SABa': 4,
'Sab': 5,
'SBab': 5,
'Sb': 6,
'SABb': 6,
'SBb': 6,
'Sbc': 7,
'Sc': 8,
'SABc': 8,
'SBc': 8,
'Scd': 9,
'SBcd': 9,
'Sd': 10,
'Sm': 11,
'SBm': 11,
'I': 12,
'IAB': 12,
'IB': 12,
'0': -1
}
return np.array([numtype[hh] for hh in hubble])
reject = ['ngc3587', 'ngc6832', 'ngc5982', 'ngc2832', 'ngc2340', 'ngc5195',
'ngc5308', 'ngc4346', 'ngc4036', 'ngc2681', 'ngc3718', 'ngc5377',
'ngc2146', 'ngc3126', 'ngc2841', 'ngc2683', 'ngc4217', 'ngc4357',
'ngc5055', 'ngc4100', 'ngc5879', 'ngc5297', 'ngc4605', 'ngc6015',
'ngc4144', 'ngc3733', 'ngc3079', 'ngc3198', 'ngc3430', 'ngc3877',
'ngc4062', 'ngc4631', 'ngc4656_ned01', 'ngc4395']
toss = np.zeros(len(sample), dtype=bool)
name = get_name(sample)
for ii, nn in enumerate(name):
for rej in np.atleast_1d(reject):
toss[ii] = rej in nn.lower()
if toss[ii]:
break
print('Rejecting {} objects.'.format(np.sum(toss)))
pngkeep = sample[~toss]
if np.sum(toss) > 0:
pngrej = sample[toss]
else:
pngrej = []
htmlfile = os.path.join(gallerydir, 'index.html')
htmlfile_reject = os.path.join(gallerydir, 'index-reject.html')
baseurl = 'http://legacysurvey.org/viewer-dev'
def html_rows(pngkeep, nperrow=4):
nrow = np.ceil(len(pngkeep) / nperrow).astype('int')
pngsplit = list()
for ii in range(nrow):
i1 = nperrow*ii
i2 = nperrow*(ii+1)
if i2 > len(pngkeep):
i2 = len(pngkeep)
pngsplit.append(pngkeep[i1:i2])
#pngsplit = np.array_split(pngkeep, nrow)
print('Splitting the sample into {} rows with {} mosaics per row.'.format(nrow, nperrow))
html.write('<table class="ls-gallery">\n')
html.write('<tbody>\n')
for pngrow in pngsplit:
html.write('<tr>\n')
for obj in pngrow:
name = get_name(obj)
nicename = get_name(obj, nice=True)
pngfile = os.path.join('png', '{}.png'.format(name))
thumbfile = os.path.join('png', 'thumb-{}.png'.format(name))
img = 'src="{}" alt="{}"'.format(thumbfile, nicename)
#img = 'class="ls-gallery" src="{}" alt="{}"'.format(thumbfile, nicename)
html.write('<td><a href="{}"><img {}></a></td>\n'.format(pngfile, img))
html.write('</tr>\n')
html.write('<tr>\n')
for obj in pngrow:
nicename = get_name(obj, nice=True)
href = '{}/?layer=decals-{}&ra={:.8f}&dec={:.8f}&zoom=12'.format(baseurl, dr, obj['ra'], obj['dec'])
html.write('<td><a href="{}" target="_blank">{}</a></td>\n'.format(href, nicename))
html.write('</tr>\n')
html.write('</tbody>\n')
html.write('</table>\n')
objtype = ma.getdata(pngkeep['type'])
hubbletype = get_type(ma.getdata(pngkeep['hubble']))
with open(htmlfile, 'w') as html:
html.write('<html><head>\n')
html.write('<style type="text/css">\n')
html.write('table.ls-gallery {width: 90%;}\n')
#html.write('img.ls-gallery {display: block;}\n')
#html.write('td.ls-gallery {width: 100%; height: auto}\n')
#html.write('td.ls-gallery {width: 100%; word-wrap: break-word;}\n')
html.write('p.ls-gallery {width: 80%;}\n')
html.write('</style>\n')
html.write('</head><body>\n')
html.write('<h1>DR6 Image Gallery</h1>\n')
    html.write("""<p class="ls-gallery">This gallery highlights the exquisite image quality and diversity
    of objects observed by the Legacy Survey, including planetary nebulae, globular clusters, and
    large, nearby galaxies. Each thumbnail links to a larger image while the object name below each
    thumbnail links to the
    <a href="http://legacysurvey.org/viewer">Sky Viewer</a>. For reference, the horizontal white bar in
    the lower-right corner of each image represents one arcminute.</p>\n""")
    html.write("""<p>We gratefully acknowledge the <a href="https://github.com/mattiaverga/OpenNGC" target="_blank">
    OpenNGC</a> catalog created by Mattia Verga, which was used to generate this sample.</p>\n""")
    html.write("""<p>For more eye candy, please visit the gallery of galaxy groups highlighted in the
    <a href="http://portal.nersc.gov/project/cosmo/data/legacysurvey/dr5/gallery/">DR5 Gallery.</a></p>\n""")
# Split by object type
html.write('<h2>Planetary Nebulae, Open Clusters, and Globular Clusters</h2>\n')
these = np.logical_or( np.logical_or(objtype == 'PN', objtype == 'OCl'), objtype == 'GCl' )
srt = np.argsort(objtype[these])[::-1]
html_rows(pngkeep[these][srt])
html.write('<br />\n')
html.write('<h2>Spheroidal & Elliptical Galaxies</h2>\n')
these = (objtype == 'G') * (hubbletype <= 2)
srt = np.argsort(hubbletype[these])
html_rows(pngkeep[these][srt])
html.write('<h2>Early-Type Disk Galaxies</h2>\n')
these = (objtype == 'G') * (hubbletype >= 3) * (hubbletype <= 6)
srt = np.argsort(hubbletype[these])
html_rows(pngkeep[these][srt])
html.write('<h2>Late-Type Disk Galaxies</h2>\n')
these = (objtype == 'G') * (hubbletype >= 7) * (hubbletype <= 10)
srt = np.argsort(hubbletype[these])
html_rows(pngkeep[these][srt])
html.write('<h2>Irregular Galaxies</h2>\n')
these = (objtype == 'G') * (hubbletype >= 11)
srt = np.argsort(hubbletype[these])
html_rows(pngkeep[these][srt])
html.write('</body></html>\n')
if len(pngrej) > 0:
with open(htmlfile_reject, 'w') as html:
html.write('<html><head>\n')
html.write('<style type="text/css">\n')
html.write('img.ls-gallery {display: block;}\n')
html.write('td.ls-gallery {width: 20%; word-wrap: break-word;}\n')
html.write('</style>\n')
html.write('</head><body>\n')
html.write('<h1>DR6 Image Gallery - Rejected</h1>\n')
html_rows(pngrej)
html.write('</body></html>\n')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Preliminaries
Step2: Read the Open NGC catalog created by Mattia Verga
Step3: Select the desired object types.
Step4: Require "big" objects, particularly the galaxies (to cut down the sample size).
Step5: Convert coordinates in decimal degrees.
Step10: Generate (find) the sample of objects in the DR6 footprint.
Step12: Generate the color mosaics for each object.
Step14: Add labels and a scale bar.
Step19: Finally, assemble the webpage of good and rejected gallery images.
|
6,675
|
<ASSISTANT_TASK:>
Python Code:
import graphlab as gl
sales = gl.SFrame('data/kc_house_data.gl/')
def calcRSS(model, features, output):
predict = model.predict(features)
error = output - predict
rss = np.sum(np.square(error))
return rss
import numpy as np # note this allows us to refer to numpy as np instead
def get_numpy_data(data_sframe, features, output):
data_sframe['constant'] = 1 # add a constant column to an SFrame
# prepend variable 'constant' to the features list
features = ['constant'] + features
# select the columns of data_SFrame given by the ‘features’ list into the SFrame ‘features_sframe’
features_sframe = data_sframe[features]
# this will convert the features_sframe into a numpy matrix with GraphLab Create >= 1.7!!
features_matrix = features_sframe.to_numpy()
# assign the column of data_sframe associated with the target to the variable ‘output_sarray’
output_sarray = data_sframe[output]
# this will convert the SArray into a numpy array:
output_array = output_sarray.to_numpy() # GraphLab Create>= 1.7!!
return(features_matrix, output_array)
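# The cells below call predict_output(), which is not defined in this excerpt (the write-up
# asks to copy it from the previous notebook). A minimal sketch of that helper:
def predict_output(feature_matrix, weights):
    # predictions are simply the dot product of the feature matrix and the weight vector
    predictions = np.dot(feature_matrix, weights)
    return predictions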
def feature_derivative_ridge(errors, feature, weight, l2_penalty, feature_is_constant):
    # If feature_is_constant is True, derivative is twice the dot product of errors and feature
    # Otherwise, derivative is twice the dot product plus 2*l2_penalty*weight
    derivative = 2 * np.dot(errors, feature)
    if not feature_is_constant:
        derivative += 2 * l2_penalty * weight
    return derivative
(example_features, example_output) = get_numpy_data(sales, ['sqft_living'], 'price')
my_weights = np.array([1., 10.])
test_predictions = predict_output(example_features, my_weights)
errors = test_predictions - example_output # prediction errors
# next two lines should print the same values
print feature_derivative_ridge(errors, example_features[:,1], my_weights[1], 1, False)
print np.sum(errors*example_features[:,1])*2+20.
print ''
# next two lines should print the same values
print feature_derivative_ridge(errors, example_features[:,0], my_weights[0], 1, True)
print np.sum(errors)*2.
def ridge_regression_gradient_descent(feature_matrix, output, initial_weights, step_size, l2_penalty, max_iterations=100):
print 'Starting gradient descent with l2_penalty = ' + str(l2_penalty)
weights = np.array(initial_weights) # make sure it's a numpy array
iteration = 0 # iteration counter
print_frequency = 1 # for adjusting frequency of debugging output
#while not reached maximum number of iterations:
iteration += 1 # increment iteration counter
### === code section for adjusting frequency of debugging output. ===
if iteration == 10:
print_frequency = 10
if iteration == 100:
print_frequency = 100
if iteration%print_frequency==0:
print('Iteration = ' + str(iteration))
### === end code section ===
# compute the predictions based on feature_matrix and weights using your predict_output() function
# compute the errors as predictions - output
# from time to time, print the value of the cost function
if iteration%print_frequency==0:
print 'Cost function = ', str(np.dot(errors,errors) + l2_penalty*(np.dot(weights,weights) - weights[0]**2))
for i in xrange(len(weights)): # loop over each weight
# Recall that feature_matrix[:,i] is the feature column associated with weights[i]
# compute the derivative for weight[i].
#(Remember: when i=0, you are computing the derivative of the constant!)
# subtract the step size times the derivative from the current weight
print 'Done with gradient descent at iteration ', iteration
print 'Learned weights = ', str(weights)
return weights
simple_features = ['sqft_living']
my_output = 'price'
train_data,test_data = sales.random_split(.8,seed=0)
(simple_feature_matrix, output) = get_numpy_data(train_data, simple_features, my_output)
(simple_test_feature_matrix, test_output) = get_numpy_data(test_data, simple_features, my_output)
initial_weights = np.array([0., 0.])
step_size = 1e-12
max_iterations=1000
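# The plotting cell below uses simple_weights_0_penalty and simple_weights_high_penalty,
# which are not computed in this excerpt. A sketch of how they would presumably be obtained
# (the high-penalty value of 1e11 is illustrative):
simple_weights_0_penalty = ridge_regression_gradient_descent(simple_feature_matrix, output,
    initial_weights, step_size, 0.0, max_iterations)
simple_weights_high_penalty = ridge_regression_gradient_descent(simple_feature_matrix, output,
    initial_weights, step_size, 1e11, max_iterations)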
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(simple_feature_matrix,output,'k.',
simple_feature_matrix,predict_output(simple_feature_matrix, simple_weights_0_penalty),'b-',
simple_feature_matrix,predict_output(simple_feature_matrix, simple_weights_high_penalty),'r-')
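# Sketch of the RSS comparison on the TEST data described in the write-up (three sets of
# weights: the all-zero initial weights, no regularization, and high regularization):
# for w in [initial_weights, simple_weights_0_penalty, simple_weights_high_penalty]:
#     err = predict_output(simple_test_feature_matrix, w) - test_output
#     print(np.sum(err ** 2))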
model_features = ['sqft_living', 'sqft_living15'] # sqft_living15 is the average squarefeet for the nearest 15 neighbors.
my_output = 'price'
(feature_matrix, output) = get_numpy_data(train_data, model_features, my_output)
(test_feature_matrix, test_output) = get_numpy_data(test_data, model_features, my_output)
initial_weights = np.array([0.0,0.0,0.0])
step_size = 1e-12
max_iterations = 1000
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load in house sales data
Step2: If we want to do any "feature engineering" like creating new features or adjusting existing ones we should do this directly using the SFrames as seen in the first notebook of Week 2. For this notebook, however, we will work with the existing features.
Step3: Import useful functions from previous notebook
Step4: Also, copy and paste the predict_output() function to compute the predictions for an entire matrix of features given the matrix and the weights
Step5: To test your feature derivartive run the following
Step6: Gradient Descent
Step7: Visualizing effect of L2 penalty
Step8: Let us split the dataset into training set and test set. Make sure to use seed=0
Step9: In this part, we will only use 'sqft_living' to predict 'price'. Use the get_numpy_data function to get a Numpy versions of your data with only this feature, for both the train_data and the test_data.
Step10: Let's set the parameters for our optimization
Step11: First, let's consider no regularization. Set the l2_penalty to 0.0 and run your ridge regression algorithm to learn the weights of your model. Call your weights
Step12: Compute the RSS on the TEST data for the following three sets of weights
Step13: We need to re-initialize the weights, since we have one extra parameter. Let us also set the step size and maximum number of iterations.
|
6,676
|
<ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
view_sentence_range = (150, 170)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
# TODO: Implement Function
vocab = set(text)
vocab_to_int = {w:i for i, w in enumerate(vocab)}
int_to_vocab = {i:w for i, w in enumerate(vocab)}
return vocab_to_int, int_to_vocab
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
# TODO: Implement Function
return {
'.' : '||period||',
',' : '||comma||',
'\"' : '||quotation_mark||',
';' : '||semicolon||',
'!' : '||exclamation_mark||',
'?' : '||question_mark||',
'(' : '||left_parentheses||',
')' : '||right_parentheses||',
'--': '||dash||',
'\n': '||return||'
}
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
def get_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
# TODO: Implement Function
inputs = tf.placeholder(tf.int32, [None, None], 'input')
targets = tf.placeholder(tf.int32, [None, None], 'targets')
learning_rate = tf.placeholder(tf.float32, None, 'learning_rate')
return inputs, targets, learning_rate
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
def get_init_cell(batch_size, rnn_size):
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
# TODO: Implement Function
cell = tf.contrib.rnn.BasicLSTMCell(rnn_size)
cell = tf.contrib.rnn.MultiRNNCell([cell])
initial_state = tf.identity(cell.zero_state(batch_size, tf.float32), 'initial_state')
return cell, initial_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
def get_embed(input_data, vocab_size, embed_dim):
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
# TODO: Implement Function
embeddings = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
embed = tf.nn.embedding_lookup(embeddings, input_data)
return embed
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
def build_rnn(cell, inputs):
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
# TODO: Implement Function
outputs, state = tf.nn.dynamic_rnn(cell, inputs, dtype = tf.float32)
final_state = tf.identity(state, 'final_state')
return outputs, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
# TODO: Implement Function
embed = get_embed(input_data, vocab_size, embed_dim)
outputs, final_state = build_rnn(cell, embed)
logits = tf.contrib.layers.fully_connected(outputs, vocab_size, activation_fn = None)
return logits, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
def get_batches(int_text, batch_size, seq_length):
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
# TODO: Implement Function
batch_length = batch_size * seq_length
total_batches = len(int_text) // batch_length
batches = np.zeros([total_batches, 2, batch_size, seq_length], dtype = np.int32)
int_text_array = np.array(int_text[:total_batches * batch_length])
int_text_matrix = int_text_array.reshape((batch_size, -1))
batch = 0
for n in range(0, int_text_matrix.shape[1], seq_length):
x = int_text_matrix[:, n:n + seq_length]
y = np.zeros(x.shape)
if batch != total_batches - 1:
y[:,:] = int_text_matrix[:, n + 1 : n + seq_length + 1]
else:
for i in range(batch_size):
index = (n * (i + 1)) + ((i * seq_length) + 1)
y[i,:] = int_text_array.take(range(index, index + seq_length), mode = "wrap")
batches[batch, 0] = x
batches[batch, 1] = y
batch += 1
return batches
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
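# Illustrative sanity check (not part of the project tests): with 13 word ids, batch_size=2
# and seq_length=3 there are two batches, and each target sequence is the input sequence
# shifted by one word (the last target wraps around to the first word):
# print(get_batches(list(range(13)), 2, 3))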
# Number of Epochs
num_epochs = 50
# Batch Size
batch_size = 120
# RNN Size
rnn_size = 256
# Embedding Dimension Size
embed_dim = 256
# Sequence Length
seq_length = 25
# Learning Rate
learning_rate = .01
# Show stats for every n number of batches
show_every_n_batches = 20
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
with tf.device("/gpu:0"):
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
def get_tensors(loaded_graph):
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
# TODO: Implement Function
inputs = loaded_graph.get_tensor_by_name('input:0')
init_state = loaded_graph.get_tensor_by_name('initial_state:0')
final_state = loaded_graph.get_tensor_by_name('final_state:0')
probs = loaded_graph.get_tensor_by_name('probs:0')
return inputs, init_state, final_state, probs
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
def pick_word(probabilities, int_to_vocab):
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
# TODO: Implement Function
return int_to_vocab[np.argmax(probabilities)]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
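# Note: pick_word() above always takes the argmax. A common alternative (not required by the
# project) is to sample the next word from the predicted distribution for more varied scripts:
# int_to_vocab[np.random.choice(len(probabilities), p=probabilities)]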
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'barney_gumble'
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TV Script Generation
Step3: Explore the Data
Step6: Implement Preprocessing Functions
Step9: Tokenize Punctuation
Step11: Preprocess all the data and save it
Step13: Check Point
Step15: Build the Neural Network
Step18: Input
Step21: Build RNN Cell and Initialize
Step24: Word Embedding
Step27: Build RNN
Step30: Build the Neural Network
Step33: Batches
Step35: Neural Network Training
Step37: Build the Graph
Step39: Train
Step41: Save Parameters
Step43: Checkpoint
Step46: Implement Generate Functions
Step49: Choose Word
Step51: Generate TV Script
|
6,677
|
<ASSISTANT_TASK:>
Python Code:
import numpy, scipy
import scipy.linalg
import scipy.sparse
import scipy.sparse.linalg
%matplotlib inline
import matplotlib.pyplot
from sklearn import datasets
iris = datasets.load_iris()
print('Target names:', iris.target_names)
print('Features:', iris.feature_names)
print(iris.data)
first = iris.data[iris.target == 0]
second = iris.data[iris.target == 1]
third = iris.data[iris.target == 2]
print(len(first), len(second), len(third))
print("first average:", first.mean(axis=0))
print("second average:", second.mean(axis=0))
print("third average:", third.mean(axis=0))
print("sepal width and length: ", scipy.stats.pearsonr(iris.data[:, 0], iris.data[:, 1])[0])
print("petal width and length: ", scipy.stats.pearsonr(iris.data[:, 2], iris.data[:, 3])[0])
print("")
print("sepal width and length for first class: ", scipy.stats.pearsonr(first[:, 0], first[:, 1])[0])
print("sepal width and length for second class: ", scipy.stats.pearsonr(second[:, 0], second[:, 1])[0])
print("sepal width and length for third class: ", scipy.stats.pearsonr(third[:, 0], third[:, 1])[0])
As = scipy.sparse.diags([-0.5*numpy.ones(7), 0.5*numpy.ones(7)], [-1,1])
bs = numpy.ones(8)
print('[As|bs]:\n{}'.format(numpy.concatenate((As.toarray(), bs.reshape(-1,1)), axis=1)))
xs = scipy.sparse.linalg.spsolve(As.tocsr(), bs)
print(xs)
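# Optional cross-check (not in the original notebook): building the same matrix densely,
# as described in the accompanying text, should give the same solution as the sparse solve.
A_dense = 0.5*(numpy.diag(numpy.ones(7), k=1) - numpy.diag(numpy.ones(7), k=-1))
print(scipy.linalg.solve(A_dense, bs))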
movie_descriptions = {}
vocab = {}
with open("movies.txt", "rb") as f:
for i, line in enumerate(f):
title, description = line.strip().split(b'\t')
movie_descriptions[title] = description.split(b' ')
for word in set(movie_descriptions[title]):
if word not in vocab:
new_id = len(vocab)
vocab[word] = new_id
print(len(vocab))
print(b" ".join(movie_descriptions[b"The Matrix"]))
movie_to_id = {k: i for i, k in enumerate(movie_descriptions.keys())}
id_to_movie = {i: k for k, i in movie_to_id.items()}
id_to_word = {i: w for w, i in vocab.items()}
print("The Matrix:", movie_to_id[b"The Matrix"])
print("0th movie:", id_to_movie[0])
print(len(movie_to_id)-1, "th movie:", id_to_movie[len(movie_to_id)-1])
print("word id of dog:", vocab[b"dog"])
print("0th word:", id_to_word[0])
from collections import Counter
i = []
j = []
k = []
for title, description in movie_descriptions.items():
words = Counter(description)
for w, c in words.items():
i.append(movie_to_id[title])
j.append(vocab[w])
k.append(c)
Matrix = scipy.sparse.csc_matrix((k, (i, j)), dtype="float32")
print(Matrix.shape)
U, d, Vh = scipy.sparse.linalg.svds(Matrix, k=40, )
U /= numpy.sqrt((U**2).sum(1))[:, None]
Vh /= numpy.sqrt((Vh**2).sum(0))[None, :]
print(U.shape)
print(Vh.shape)
def closests(v, k=1):
return numpy.argpartition(((U - v[None, :])**2).sum(1), k, axis=0)[:k]
closests(numpy.ones(len(Vh)), 3)
print([id_to_movie[i] for i in closests(U[movie_to_id[b"Monsters, Inc."]], 5)])
print([id_to_movie[i] for i in closests(U[movie_to_id[b"Popeye"]], 5)])
[id_to_movie[i] for i in closests(U[movie_to_id[b"Popeye"]] + U[movie_to_id[b"Monsters, Inc."]], 10)]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Iris
Step2: A = 0.5*(numpy.diag(numpy.ones(7), k=1) - numpy.diag(numpy.ones(7), k=-1))
Step3: Document-term matrix decomposition
Step4: Task 2.
Step5: Task 3.
Step6: Task 4.
Step7: Now you can search similar movies!
Step8: Or even mixture of movies by adding "movie vectors"!
|
6,678
|
<ASSISTANT_TASK:>
Python Code:
%run db2.ipynb
%sql -sampledata
%%sql
SELECT DEPTNAME, LASTNAME FROM
DEPARTMENT D LEFT OUTER JOIN EMPLOYEE E
ON D.DEPTNO = E.WORKDEPT
%%sql
SELECT DEPTNAME, LASTNAME FROM
DEPARTMENT D, EMPLOYEE E
WHERE D.DEPTNO = E.WORKDEPT (+)
%%sql -q
DROP TABLE LONGER_CHAR;
CREATE TABLE LONGER_CHAR
(
NAME CHAR(255)
);
%%sql -q
DROP TABLE HEXEY;
CREATE TABLE HEXEY
(
AUDIO_SHORT BINARY(255),
AUDIO_LONG VARBINARY(1024),
AUDIO_CHAR VARCHAR(255) FOR BIT DATA
);
%%sql
INSERT INTO HEXEY VALUES
(BINARY('Hello there'),
BX'2433A5D5C1',
VARCHAR_BIT_FORMAT(HEX('Hello there')));
SELECT * FROM HEXEY;
%%sql
UPDATE HEXEY
SET AUDIO_CHAR = AUDIO_SHORT
%%sql
SELECT COUNT(*) FROM HEXEY WHERE
AUDIO_SHORT = AUDIO_CHAR
%%sql -q
DROP TABLE TRUEFALSE;
CREATE TABLE TRUEFALSE (
EXAMPLE INT,
STATE BOOLEAN
);
%%sql
INSERT INTO TRUEFALSE VALUES
(1, TRUE),
(2, FALSE),
(3, 0),
(4, 't'),
(5, 'no')
%sql SELECT * FROM TRUEFALSE
%%sql
SELECT * FROM TRUEFALSE
WHERE STATE = TRUE OR STATE = 1 OR STATE = 'on' OR STATE IS TRUE
%%sql -q
DROP TABLE SYNONYM_EMPLOYEE;
CREATE TABLE SYNONYM_EMPLOYEE
(
NAME VARCHAR(20),
SALARY INT4,
BONUS INT2,
COMMISSION INT8,
COMMISSION_RATE FLOAT4,
BONUS_RATE FLOAT8
);
%%sql
SELECT DISTINCT(NAME), COLTYPE, LENGTH FROM SYSIBM.SYSCOLUMNS
WHERE TBNAME='SYNONYM_EMPLOYEE' AND TBCREATOR=CURRENT USER
%%sql -q
DROP TABLE XYCOORDS;
CREATE TABLE XYCOORDS
(
X INT,
Y INT
);
INSERT INTO XYCOORDS
WITH TEMP1(X) AS
(
VALUES (0)
UNION ALL
SELECT X+1 FROM TEMP1 WHERE X < 10
)
SELECT X, 2*X + 5
FROM TEMP1;
%%sql
SELECT 'COVAR_POP', COVAR_POP(X,Y) FROM XYCOORDS
UNION ALL
SELECT 'COVARIANCE', COVARIANCE(X,Y) FROM XYCOORDS
%%sql
SELECT 'STDDEV_POP', STDDEV_POP(X) FROM XYCOORDS
UNION ALL
SELECT 'STDDEV', STDDEV(X) FROM XYCOORDS
%%sql
SELECT 'VAR_SAMP', VAR_SAMP(X) FROM XYCOORDS
UNION ALL
SELECT 'VARIANCE_SAMP', VARIANCE_SAMP(X) FROM XYCOORDS
%%sql
WITH EMP(LASTNAME, WORKDEPT) AS
(
VALUES ('George','A01'),
('Fred',NULL),
('Katrina','B01'),
('Bob',NULL)
)
SELECT * FROM EMP WHERE
WORKDEPT ISNULL
%%sql
VALUES ('LOG',LOG(10))
UNION ALL
VALUES ('LN', LN(10))
%%sql
VALUES ('RANDOM', RANDOM())
UNION ALL
VALUES ('RAND', RAND())
%%sql
VALUES ('POSSTR',POSSTR('Hello There','There'))
UNION ALL
VALUES ('STRPOS',STRPOS('Hello There','There'))
%%sql
VALUES ('LEFT',LEFT('Hello There',5))
UNION ALL
VALUES ('STRLEFT',STRLEFT('Hello There',5))
%%sql
VALUES ('RIGHT',RIGHT('Hello There',5))
UNION ALL
VALUES ('STRRIGHT',STRRIGHT('Hello There',5))
%%sql
WITH SPECIAL(OP, DESCRIPTION, EXAMPLE, RESULT) AS
(
VALUES
(' | ','OR ', '2 | 3 ', 2 | 3),
(' & ','AND ', '2 & 3 ', 2 & 3),
(' ^ ','XOR ', '2 ^ 3 ', 2 ^ 3),
(' ~ ','COMPLEMENT', '~2 ', ~2),
(' # ','NONE ', ' ',0)
)
SELECT * FROM SPECIAL
%%sql
SET SQL_COMPAT = 'NPS';
WITH SPECIAL(OP, DESCRIPTION, EXAMPLE, RESULT) AS
(
VALUES
(' | ','OR ', '2 | 3 ', 2 | 3),
(' & ','AND ', '2 & 3 ', 2 & 3),
(' ^ ','POWER ', '2 ^ 3 ', 2 ^ 3),
(' ~ ','COMPLEMENT', '~2 ', ~2),
(' # ','XOR ', '2 # 3 ', 2 # 3)
)
SELECT * FROM SPECIAL;
%%sql
SET SQL_COMPAT='DB2';
SELECT WORKDEPT,INT(AVG(SALARY))
FROM EMPLOYEE
GROUP BY WORKDEPT;
%%sql
SELECT WORKDEPT, INT(AVG(SALARY))
FROM EMPLOYEE
GROUP BY 1;
%%sql
SET SQL_COMPAT='NPS';
SELECT WORKDEPT, INT(AVG(SALARY))
FROM EMPLOYEE
GROUP BY 1;
%%sql
SET SQL_COMPAT = 'NPS';
VALUES TRANSLATE('Hello');
%sql VALUES TRANSLATE('Hello','o','1')
%sql VALUES TRANSLATE('Hello','oe','12')
%sql VALUES TRANSLATE('Hello','oel','12')
%sql SET SQL_COMPAT='DB2'
%%sql
SELECT LASTNAME FROM EMPLOYEE
FETCH FIRST 5 ROWS ONLY
%%sql
SELECT LASTNAME FROM EMPLOYEE
ORDER BY LASTNAME
FETCH FIRST 5 ROWS ONLY
%%sql
SELECT WORKDEPT, COUNT(*) FROM EMPLOYEE
GROUP BY WORKDEPT
ORDER BY WORKDEPT
%%sql
SELECT WORKDEPT, COUNT(*) FROM EMPLOYEE
GROUP BY WORKDEPT
ORDER BY WORKDEPT
FETCH FIRST 5 ROWS ONLY
%%sql
SELECT LASTNAME FROM EMPLOYEE
FETCH FIRST 10 ROWS ONLY
%%sql
SELECT LASTNAME FROM EMPLOYEE
OFFSET 0 ROWS
FETCH FIRST 10 ROWS ONLY
%%sql
SELECT LASTNAME FROM EMPLOYEE
OFFSET 5 ROWS
FETCH FIRST 5 ROWS ONLY
%%sql
SELECT WORKDEPT, AVG(SALARY) FROM EMPLOYEE
GROUP BY WORKDEPT
ORDER BY AVG(SALARY) DESC;
%%sql
SELECT WORKDEPT, AVG(SALARY) FROM EMPLOYEE
GROUP BY WORKDEPT
ORDER BY AVG(SALARY) DESC
OFFSET 2 ROWS FETCH FIRST 1 ROWS ONLY
%%sql
SELECT LASTNAME, SALARY FROM EMPLOYEE
WHERE
SALARY > (
SELECT AVG(SALARY) FROM EMPLOYEE
GROUP BY WORKDEPT
ORDER BY AVG(SALARY) DESC
OFFSET 2 ROWS FETCH FIRST 1 ROW ONLY
)
ORDER BY SALARY
%%sql
SELECT WORKDEPT, AVG(SALARY) FROM EMPLOYEE
GROUP BY WORKDEPT
ORDER BY AVG(SALARY) DESC
LIMIT 1 OFFSET 2
%%sql
SELECT LASTNAME, SALARY FROM EMPLOYEE
WHERE
SALARY > (
SELECT AVG(SALARY) FROM EMPLOYEE
GROUP BY WORKDEPT
ORDER BY AVG(SALARY) DESC
LIMIT 2,1
)
ORDER BY SALARY
%%sql -q
DROP VARIABLE XINT2;
DROP VARIABLE YINT2;
DROP VARIABLE XINT4;
DROP VARIABLE YINT4;
DROP VARIABLE XINT8;
DROP VARIABLE YINT8;
CREATE VARIABLE XINT2 INT2 DEFAULT(1);
CREATE VARIABLE YINT2 INT2 DEFAULT(3);
CREATE VARIABLE XINT4 INT4 DEFAULT(1);
CREATE VARIABLE YINT4 INT4 DEFAULT(3);
CREATE VARIABLE XINT8 INT8 DEFAULT(1);
CREATE VARIABLE YINT8 INT8 DEFAULT(3);
%%sql
WITH LOGIC(EXAMPLE, X, Y, RESULT) AS
(
VALUES
('INT2AND(X,Y)',XINT2,YINT2,INT2AND(XINT2,YINT2)),
('INT2OR(X,Y) ',XINT2,YINT2,INT2OR(XINT2,YINT2)),
('INT2XOR(X,Y)',XINT2,YINT2,INT2XOR(XINT2,YINT2)),
('INT2NOT(X) ',XINT2,YINT2,INT2NOT(XINT2))
)
SELECT * FROM LOGIC
%%sql
WITH LOGIC(EXAMPLE, X, Y, RESULT) AS
(
VALUES
('INT4AND(X,Y)',XINT4,YINT4,INT4AND(XINT4,YINT4)),
('INT4OR(X,Y) ',XINT4,YINT4,INT4OR(XINT4,YINT4)),
('INT4XOR(X,Y)',XINT4,YINT4,INT4XOR(XINT4,YINT4)),
('INT4NOT(X) ',XINT4,YINT4,INT4NOT(XINT4))
)
SELECT * FROM LOGIC
%%sql
WITH LOGIC(EXAMPLE, X, Y, RESULT) AS
(
VALUES
('INT8AND(X,Y)',XINT8,YINT8,INT8AND(XINT8,YINT8)),
('INT8OR(X,Y) ',XINT8,YINT8,INT8OR(XINT8,YINT8)),
('INT8XOR(X,Y)',XINT8,YINT8,INT8XOR(XINT8,YINT8)),
('INT8NOT(X) ',XINT8,YINT8,INT8NOT(XINT8))
)
SELECT * FROM LOGIC
%sql VALUES TO_HEX(255)
%sql VALUES RAWTOHEX('Hello')
%sql VALUES TO_HEX(12336)
%sql VALUES RAWTOHEX('00');
%sql -q DROP TABLE AS_EMP
%sql CREATE TABLE AS_EMP AS (SELECT EMPNO, SALARY+BONUS FROM EMPLOYEE) DEFINITION ONLY;
%sql -q DROP TABLE AS_EMP
%sql CREATE TABLE AS_EMP AS (SELECT EMPNO, SALARY+BONUS AS PAY FROM EMPLOYEE) DEFINITION ONLY;
%%sql
SELECT DISTINCT(NAME), COLTYPE, LENGTH FROM SYSIBM.SYSCOLUMNS
WHERE TBNAME='AS_EMP' AND TBCREATOR=CURRENT USER
%sql -q DROP TABLE AS_EMP
%sql CREATE TABLE AS_EMP AS (SELECT EMPNO, SALARY+BONUS AS PAY FROM EMPLOYEE) WITH DATA;
%%sql -q
DROP TABLE AS_EMP;
CREATE TABLE AS_EMP(LAST,PAY) AS
(
SELECT LASTNAME, SALARY FROM EMPLOYEE
WHERE WORKDEPT='D11'
FETCH FIRST 3 ROWS ONLY
) WITH DATA;
%%sql -q
DROP TABLE AS_EMP;
CREATE TABLE AS_EMP(DEPARTMENT, LASTNAME) AS
(SELECT WORKDEPT, LASTNAME FROM EMPLOYEE
OFFSET 5 ROWS
FETCH FIRST 10 ROWS ONLY
) WITH DATA;
SELECT * FROM AS_EMP;
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We populate the database with the EMPLOYEE and DEPARTMENT tables so that we can run the various examples.
Step2: Table of Contents
Step3: This example works in the same manner as the last one, but uses
Step4: Back to Top
Step5: Back to Top
Step6: Inserting data into a binary column can be done through the use of BINARY functions, or the use of X'xxxx' modifiers when using the VALUE clause. For fixed strings you use the X'00' format to specify a binary value and BX'00' for variable length binary strings. For instance, the following SQL will insert data into the previous table that was created.
Step7: Handling binary data with a FOR BIT DATA column was sometimes tedious, so the BINARY columns will make coding a little simpler. You can compare and assign values between any of these types of columns. The next SQL statement will update the AUDIO_CHAR column with the contents of the AUDIO_SHORT column. Then the SQL will test to make sure they are the same value.
Step8: We should have one record that is equal.
Step9: Back to Top
Step10: The keywords for a true value are TRUE, 'true', 't', 'yes', 'y', 'on', and '1'. For false the values are
Step11: Now we can check to see what has been inserted into the table.
Step12: Retrieving the data in a SELECT statement will return an integer value for display purposes.
Step13: Back to Top
Step14: When you create a table with these other data types, Db2 does not use these "types" in the catalog. What Db2 will do is use the Db2 type instead of these synonym types. What this means is that if you describe the contents of a table,
Step15: Back to Top
Step16: COVAR_POP is an alias for COVARIANCE
Step17: VAR_POP is an alias for VARIANCE
Step18: VAR_SAMP is an alias for VARIANCE_SAMP
Step19: ISNULL, NOTNULL is an alias for IS NULL, IS NOT NULL
Step20: LOG is an alias for LN
Step21: RANDOM is an alias for RAND
Step22: STRPOS is an alias for POSSTR
Step23: STRLEFT is an alias for LEFT
Step24: STRRIGHT is an alias for RIGHT
Step25: Additional Synonyms
Step26: If we turn on NPS compatibility, you see a couple of special characters change behavior. Specifically the
Step27: GROUP BY Ordinal Location
Step28: If you try using the ordinal location (similar to an ORDER BY clause), you will
Step29: If NPS compatibility is turned on then then you use the GROUP BY clause with an ordinal location.
Step30: TRANSLATE Function
Step31: In this example, the letter 'o' will be replaced with an '1'.
Step32: Note that you could replace more than one character by expanding both the "to" and "from" strings. This
Step33: Translate will also remove a character if it is not in the "to" list.
Step34: Reset the behavior back to Db2 mode.
Step35: Back to Top
Step36: You can also add ORDER BY and GROUP BY clauses in the SELECT statement. Note that
Step37: Here is an example with the GROUP BY statement. This first SQL statement gives us the total
Step38: Adding the FETCH FIRST clause only reduces the rows returned, not the rows that
Step39: OFFSET Extension
Step40: You can specify a zero offset to begin from the beginning.
Step41: Now we can move the answer set ahead by 5 rows and get the remaining
Step42: FETCH FIRST and OFFSET in SUBSELECTs
Step43: We only want one record from this list (the third one), so we can use the FETCH FIRST clause with
Step44: And here is the list of employees that make more than the average salary of the 3rd highest department in the
Step45: Alternate Syntax for FETCH FIRST
Step46: Here is the list of employees that make more than the average salary of the 3rd highest department in the
Step47: Back to Top
Step48: This example will show the four functions used against SMALLINT (INT2) data types.
Step49: This example will use the 4 byte (INT4) data type.
Step50: Finally, the INT8 data type is used in the SQL. Note that you can mix and match the INT2, INT4, and INT8 values
Step51: TO_HEX Function
Step52: RAWTOHEX Function
Step53: The string "00" converts to a hex representation of x'3030' which is 12336 in Decimal.
Step54: The string that is returned by the RAWTOHEX function should be the same.
Step55: Back to Top
Step56: You can name a column in the SELECT list or place it in the table definition.
Step57: You can check the SYSTEM catalog to see the table definition.
Step58: The DEFINITION ONLY clause will create the table but not load any data into it. Adding the WITH DATA
Step59: The SELECT statement can be very sophisticated. It can do any type of calculation or limit the
Step60: You can also use the OFFSET clause as part of the FETCH FIRST ONLY to get chunks of data from the
|
6,679
|
<ASSISTANT_TASK:>
Python Code:
import graphlab
import matplotlib.pyplot as plt
import numpy as np
import sys
import os
import time
from scipy.sparse import csr_matrix
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances
%matplotlib inline
'''Check GraphLab Create version'''
from distutils.version import StrictVersion
assert (StrictVersion(graphlab.version) >= StrictVersion('1.8.5')), 'GraphLab Create must be version 1.8.5 or later.'
wiki = graphlab.SFrame('people_wiki.gl/')
wiki['tf_idf'] = graphlab.text_analytics.tf_idf(wiki['text'])
from em_utilities import sframe_to_scipy # converter
# This will take about a minute or two.
tf_idf, map_index_to_word = sframe_to_scipy(wiki, 'tf_idf')
from sklearn.preprocessing import normalize
tf_idf = normalize(tf_idf)
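# Why normalize: for unit-norm rows, squared Euclidean distance and cosine distance are
# equivalent up to a monotone transform (||a - b||^2 = 2*(1 - cos(a, b))), so k-means with
# Euclidean distance on the normalized TF-IDF matrix effectively clusters by cosine similarity.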
def bipartition(cluster, maxiter=400, num_runs=4, seed=None):
'''cluster: should be a dictionary containing the following keys
* dataframe: original dataframe
* matrix: same data, in matrix format
* centroid: centroid for this particular cluster'''
data_matrix = cluster['matrix']
dataframe = cluster['dataframe']
# Run k-means on the data matrix with k=2. We use scikit-learn here to simplify workflow.
kmeans_model = KMeans(n_clusters=2, max_iter=maxiter, n_init=num_runs, random_state=seed, n_jobs=1)
kmeans_model.fit(data_matrix)
centroids, cluster_assignment = kmeans_model.cluster_centers_, kmeans_model.labels_
# Divide the data matrix into two parts using the cluster assignments.
data_matrix_left_child, data_matrix_right_child = data_matrix[cluster_assignment==0], \
data_matrix[cluster_assignment==1]
# Divide the dataframe into two parts, again using the cluster assignments.
cluster_assignment_sa = graphlab.SArray(cluster_assignment) # minor format conversion
dataframe_left_child, dataframe_right_child = dataframe[cluster_assignment_sa==0], \
dataframe[cluster_assignment_sa==1]
# Package relevant variables for the child clusters
cluster_left_child = {'matrix': data_matrix_left_child,
'dataframe': dataframe_left_child,
'centroid': centroids[0]}
cluster_right_child = {'matrix': data_matrix_right_child,
'dataframe': dataframe_right_child,
'centroid': centroids[1]}
return (cluster_left_child, cluster_right_child)
wiki_data = {'matrix': tf_idf, 'dataframe': wiki} # no 'centroid' for the root cluster
left_child, right_child = bipartition(wiki_data, maxiter=100, num_runs=6, seed=1)
left_child
right_child
def display_single_tf_idf_cluster(cluster, map_index_to_word):
'''map_index_to_word: SFrame specifying the mapping between words and column indices'''
wiki_subset = cluster['dataframe']
tf_idf_subset = cluster['matrix']
centroid = cluster['centroid']
# Print top 5 words with largest TF-IDF weights in the cluster
idx = centroid.argsort()[::-1]
for i in xrange(5):
print('{0:s}:{1:.3f}'.format(map_index_to_word['category'][idx[i]], centroid[idx[i]])),
print('')
# Compute distances from the centroid to all data points in the cluster.
distances = pairwise_distances(tf_idf_subset, [centroid], metric='euclidean').flatten()
# compute nearest neighbors of the centroid within the cluster.
nearest_neighbors = distances.argsort()
# For 8 nearest neighbors, print the title as well as first 180 characters of text.
# Wrap the text at 80-character mark.
for i in xrange(8):
text = ' '.join(wiki_subset[nearest_neighbors[i]]['text'].split(None, 25)[0:25])
print('* {0:50s} {1:.5f}\n {2:s}\n {3:s}'.format(wiki_subset[nearest_neighbors[i]]['name'],
distances[nearest_neighbors[i]], text[:90], text[90:180] if len(text) > 90 else ''))
print('')
display_single_tf_idf_cluster(left_child, map_index_to_word)
display_single_tf_idf_cluster(right_child, map_index_to_word)
athletes = left_child
non_athletes = right_child
# Bipartition the cluster of athletes
left_child_athletes, right_child_athletes = bipartition(athletes, maxiter=100, num_runs=6, seed=1)
display_single_tf_idf_cluster(left_child_athletes, map_index_to_word)
display_single_tf_idf_cluster(right_child_athletes, map_index_to_word)
baseball = left_child_athletes
ice_hockey_football = right_child_athletes
left_child_ihs, right_child_ihs = bipartition(ice_hockey_football, maxiter=100, num_runs=6, seed=1)
display_single_tf_idf_cluster(left_child_ihs, map_index_to_word)
display_single_tf_idf_cluster(right_child_ihs, map_index_to_word)
# Bipartition the cluster of non-athletes
left_child_non_athletes, right_child_non_athletes = bipartition(non_athletes, maxiter=100, num_runs=6, seed=1)
display_single_tf_idf_cluster(left_child_non_athletes, map_index_to_word)
display_single_tf_idf_cluster(right_child_non_athletes, map_index_to_word)
male_non_athletes = left_child_non_athletes
female_non_athletes = right_child_non_athletes
# Bipartition the cluster of males
left_child_males, right_child_males = bipartition(male_non_athletes, maxiter=100, num_runs=6, seed=1)
display_single_tf_idf_cluster(left_child_males, map_index_to_word)
display_single_tf_idf_cluster(right_child_males, map_index_to_word)
# Bipartition the cluster of females
left_child_female, right_child_female = bipartition(female_non_athletes, maxiter=100, num_runs=6, seed=1)
display_single_tf_idf_cluster(left_child_female, map_index_to_word)
display_single_tf_idf_cluster(right_child_female, map_index_to_word)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the Wikipedia dataset
Step2: As we did in previous assignments, let's extract the TF-IDF features
Step3: To run k-means on this dataset, we should convert the data matrix into a sparse matrix.
Step4: To be consistent with the k-means assignment, let's normalize all vectors to have unit norm.
Step5: Bipartition the Wikipedia dataset using k-means
Step6: The following cell performs bipartitioning of the Wikipedia dataset. Allow 20-60 seconds to finish.
Step7: Let's examine the contents of one of the two clusters, which we call the left_child, referring to the tree visualization above.
Step8: And here is the content of the other cluster we named right_child.
Step9: Visualize the bipartition
Step10: Let's visualize the two child clusters
Step11: The left cluster consists of athletes, whereas the right cluster consists of non-athletes. So far, we have a single-level hierarchy consisting of two clusters, as follows
Step12: Using the bipartition function, we produce two child clusters of the athlete cluster
Step13: The left child cluster mainly consists of baseball players
Step14: On the other hand, the right child cluster is a mix of players in association football, Australian rules football and ice hockey
Step15: Our hierarchy of clusters now looks like this
Step16: Cluster of ice hockey players and football players
Step17: Quiz Question. Which diagram best describes the hierarchy right after splitting the ice_hockey_football cluster? Refer to the quiz form for the diagrams.
Step18: Neither of the clusters show clear topics, apart from the genders. Let us divide them further.
Step19: Quiz Question. Let us bipartition the clusters male_non_athletes and female_non_athletes. Which diagram best describes the resulting hierarchy of clusters for the non-athletes? Refer to the quiz for the diagrams.
|
6,680
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
dfA = pd.DataFrame({'A':[1,1,1,1,1,1,1,1],
'B':[2,2,2,2,2,2,2,2]})
dfA
dfB = pd.DataFrame({'C':[3,3,3,3,3,3,3,3],
'D':[4,4,4,4,4,4,4,4]})
pd.merge(dfA, dfB, left_index=True, right_index=True)
import string
dfA.index = list(string.ascii_lowercase[:8]) # new index, of letters instead
dfA
dfB.index = list(string.ascii_lowercase[5:8+5]) # overlapping letters
dfB
pd.merge(dfA, dfB, left_index=True, right_index=True) # intersection, not the union
pd.merge(dfA, dfB, left_index=True, right_index=True, how="left") # left side governs
pd.merge(dfA, dfB, left_index=True, right_index=True, how="right") # right side governs
pd.merge(dfA, dfB, left_index=True, right_index=True, how="outer") # the full union
pd.merge(dfA, dfB, left_index=True, right_index=True, how="inner") # same as intersection
dfA = pd.DataFrame({'A':[1,2,3,4,5,6,7,8],
'B':[2,2,2,2,2,2,2,2],
'key':['dog', 'pig', 'rooster', 'monkey',
'hen', 'cat', 'slug', 'human']})
from numpy.random import shuffle
keys = dfA.key.values.copy() # copy or dfA key will also reorder
shuffle(keys) # in place
dfB = pd.DataFrame({'C':[1,2,3,4,5,6,7,8],
'D':[4,4,4,4,4,4,4,4],
'key': keys})
keys
dfA
dfB
pd.merge(dfA, dfB, on='key') # like "zipping together" on a common column
dfB.rename({"C":"A", "D":"B"}, axis=1, inplace = True)
dfB
dfA
pd.merge(dfA, dfB, on='key', sort=False) # sort on key if sort is True
pd.merge(dfA, dfB, on='key', sort=True) # sort on key if sort is True
pd.merge(dfA, dfB, on='A')
pd.merge(dfA, dfB, left_index=True, right_on="A")
merged = pd.merge(dfA, dfB, left_index=True, right_on="A")
merged.to_json("merged.json") # save for later
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: You get to choose which columns, left and right, serve as "gear teeth" for synchronizing rows (sewing them together). Or choose the index, not a column.
|
6,681
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
rcParams['figure.figsize'] = (10,7)
!sequana_coverage --download-reference FN433596
import subprocess
for DP in [200, 100, 80, 60, 40, 20, 10]:
print(DP)
# Creating the simulated data with expected depth of coverage
cmd = "art_illumina -sam -i FN433596.fa -p -l 100 -ss HS20 -f 20 -m 500 -s 40 -o paired_dat -f {}"
cmd = cmd.format(DP)
subprocess.call(cmd.split())
# Creating the BAM files (deletes previous ones)
# This command uses bwa and samtools behind the scene.
cmd = "sequana_mapping --reference FN433596.fa --file1 paired_dat1.fq --file2 paired_dat2.fq"
subprocess.call(cmd.split())
# creating the BED file once for all
# Here, we use bioconvert (http://bioconvert.readthedocs.io) that uses bedtools behind the scene.
cmd = "bioconvert FN433596.fa.sorted.bam simulated_{}X.bed -f".format(DP)
subprocess.call(cmd.split())
from sequana import *
b = GenomeCov("simulated_100X.bed")
c = b.chr_list[0]
c.run(20001, circular=True)
data20000 = c.df['cov'] / c.df['rm']
c.run(10001, circular=True)
data10000 = c.df['cov'] / c.df['rm']
c.run(40001, circular=True)
data40000 = c.df['cov'] / c.df['rm']
c.run(2001, circular=True)
data2000 = c.df['cov'] / c.df['rm']
c.run(80001, circular=True)
data80000 = c.df['cov'] / c.df['rm']
#_ = hist(data20000, bins=50, alpha=0.5)
_ = hist(data40000, bins=50, alpha=0.5)
_ = hist(data80000, bins=50, alpha=0.5)
_ = hist(data2000, bins=50, alpha=0.5)
xlabel("normalised coverage")
#_ = hist(data20000, bins=50, alpha=0.5)
_ = hist(data40000, bins=50, alpha=0.5)
_ = hist(data80000, bins=50, alpha=0.5)
_ = hist(data2000, bins=50, alpha=0.5)
xlabel("normalised coverage")
semilogy()
_ = hist(data40000, bins=50, alpha=0.5, normed=True, label="based on simulated data (100X)")
xlabel("normalised coverage")
semilogy()
datanorm = [normal()/10+1 for x in range(1000000)]
_ = hist(datanorm, bins=50, alpha=0.5, normed=True, label="theoretical gaussian distribution")
legend()
from sequana import *
b = GenomeCov("simulated_10X.bed")
c = b.chr_list[0]
c.run(20001, circular=True)
data = c.df["cov"]/c.df['rm']
_ = hist(data, bins=30, alpha=0.5, normed=True, label="based on simulated data (10X)")
xlabel("normalised coverage")
semilogy()
datanorm = [normal()/sqrt(10)+1 for x in range(1000000)]
_ = hist(datanorm, bins=50, alpha=0.5, normed=True, label="theoretical gaussian distribution")
legend()
ylim([ylim()[0], 10])
DOC = [4, 6, 8, 10, 20, 40, 60, 80, 100, 200,]
STDs = [2, 2.44, 2.82, 3.17, 4.46, 6.31, 7.76, 8.95, 10.08, 14.27]
CVs = [0.5, 0.41, 0.35, 0.32, 0.22, 0.16, 0.13, 0.11, 0.10, 0.07]
stds = [0.51, 0.41, 0.35, 0.32, 0.225, 0.158, 0.129, 0.111, 0.10, 0.07]
def get_metrics(DOC):
b = GenomeCov("simulated_{}X.bed".format(DOC))
c = b.chr_list[0]
c.run(20001, circular=True)
normed = c.df['cov']/c.df['rm']
DOC = c.df['cov'].mean()
STD = c.df['cov'].std()
return DOC, STD, STD/DOC , std(normed)
get_metrics(20)
plot(DOC, CVs, "o-")
plot(DOC, 1/np.array(DOC)**0.5, "x--")
xlim([0,250])
axvline(10, color="r", ls="--")
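# The dashed curve is the Poisson expectation: for a Poisson-distributed coverage with mean
# DOC, the standard deviation is sqrt(DOC), so the coefficient of variation (and hence the
# standard deviation of the normalised coverage) scales as 1/sqrt(DOC).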
def get_rm_metrics(DOC, W):
b = GenomeCov("simulated_{}X.bed".format(DOC))
c = b.chr_list[0]
c.run(W, circular=True)
return c.df.copy()
df100 = get_rm_metrics(100, 100)
df1000 = get_rm_metrics(100, 1000)
df10000 = get_rm_metrics(100, 10000)
df100000 = get_rm_metrics(100, 100000)
_ = hist(df100['rm'], normed=True, bins=range(150), alpha=0.5)
_ = hist(df1000['rm'], normed=True, bins=range(150), alpha=0.5)
_ = hist(df10000['rm'], normed=True, bins=range(150), alpha=0.5)
#_ = hist(df100000['rm'], normed=True, bins=range(150), alpha=0.5)
legend(["W=100", "W=1000", "W=10000", "W=100,000"])
xlim([60,140])
_ = hist(df100000['rm'], bins=range(150))
xlim([80,140])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Get the reference
Step2: Simulated FastQ data
Step3: Impact of the window parameter on the normalised coverage distribution (100X case)
Step4: Let us run the running median / normalisation / zscore computation using several window parameter (e.g 20001, 80001...)
Step5: Window parameter does not seem to have any impact on the distribution of the normalised coverage, which is centered around 1 and same standard deviation.
Step6: Note that if we look at the distribution on a log scale (on Y axis), the distributions are not Gaussian. This is because the mapped data exhibits a mix of distributions. However, the central distribution looks gaussian.
Step7: For lower DOC, the Gaussian distribution assumption is not true anymore. You have a skewed distribution. Events below the mean DOC may be missed. Events above the mean DOC may be over-detected. This means that the thresholds should be adjusted. For instance, instead of the default pair (-4,4), one could use (-4, 6).
Step8: Impact of DOC on the normalised distribution standard deviation
Step9: To obtain the number above, you can use the following function. Note that DOC is depth of coverage, STDs is the standard deviation of the genome coverage. CVs is the coefficient of variation and stds is the standard deviation of the normalized genome coverage.
Step10: We can see that the standard deviation of the normalised coverage is equal to the coefficient of variation (CV) of the original coverage
Step11: Distribution of the running median
Step12: For very large W, the distribution standard deviation tends to be small and, more importantly, discrete.
|
6,682
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
import calin.math.hex_array
import calin.provenance.system_info
import calin.simulation.vs_optics
import calin.simulation.geant4_shower_generator
import calin.simulation.ray_processor
import calin.simulation.tracker
import calin.simulation.detector_efficiency
import calin.simulation.atmosphere
import calin.simulation.world_magnetic_model
import calin.simulation.pmt
from IPython.display import clear_output
from ipywidgets.widgets import *
# Column 1 : Telescope position N [cm]
# Column 2 : Telescope position W [cm]
# Column 3 : Telescope position UP offset from 2147m [cm]
scope_pos = array([[-21065., 5051., 6130., 700.],
[-17906., 22302., 4210., 700.],
[ 2796., 24356., 2320., 700.],
[ 12421., -13456., 3960., 700.],
[ 17627., 12790., 1930., 700.],
[ -7473., -14416., 5320., 700.],
[-21479., -12252., 7410., 700.],
[ -9811., 37645., 2070., 700.],
[ 2108., -30761., 7330., 700.],
[-19640., -29024., 8810., 700.],
[-30285., 38460., 3330., 700.],
[-34500., 17500., 6900., 700.],
[-37499., -3339., 9260., 700.],
[ 272., 4342., 4000., 700.],
[ 27765., -3780., 1800., 700.]])
scope_pos_x = -scope_pos[:,1]
scope_pos_y = scope_pos[:,0]
scope_pos_z = scope_pos[:,2]
scope_pos_x -= 0.5*(max(scope_pos_x) + min(scope_pos_x))
scope_pos_y -= 0.5*(max(scope_pos_y) + min(scope_pos_y))
#polar(arctan2(scope_pos_y,scope_pos_x), sqrt(scope_pos_x**2+scope_pos_y**2)/100,'o')
plot(scope_pos_x/100, scope_pos_y/100, 'o')
xlabel('X coordinate [m]')
ylabel('Y coordinate [m]')
axis('square')
xmax = 400
axis([-xmax,xmax,-xmax,xmax])
xticks(frange(-xmax,xmax,100))
yticks(frange(-xmax,xmax,100))
grid()
def dms(d,m,s):
# Note this function fails for "negative" d=0 (e.g. -00:30:00)
sign = 1
if(d<0):
sign = -1
d = abs(d)
return sign * (d + m/60.0 + s/3600.0)
mst = calin.ix.simulation.vs_optics.IsotropicDCArrayParameters()
mst.mutable_array_origin().set_latitude(dms(28, 45, 47.36))
mst.mutable_array_origin().set_longitude(dms(-17, 53, 23.93))
mst.mutable_array_origin().set_elevation(2147 * 100.0)
for i in range(len(scope_pos_x)):
scope = mst.mutable_prescribed_array_layout().add_scope_positions();
scope.set_x(scope_pos_x[i])
scope.set_y(scope_pos_y[i])
scope.set_z(scope_pos_z[i] + mst.array_origin().elevation())
mst.mutable_reflector_frame().set_optic_axis_rotation(-90);
dc = mst.mutable_reflector()
dc.set_curvature_radius(1920)
dc.set_aperture(1230)
dc.set_facet_num_hex_rings(5)
dc.mutable_psf_align().set_object_plane(inf) # 10 * 1e5);
dc.set_alignment_image_plane(1600)
dc.set_facet_spacing(122)
dc.set_facet_size(120)
dc.set_facet_focal_length(1607)
dc.set_facet_focal_length_dispersion(1)
dc.set_facet_spot_size_probability(0.8)
dc.set_facet_spot_size(0.5 * 2.8) # Spot size of 28mm at 2F
dc.set_facet_spot_size_dispersion(0.5 * 0.02)
dc.set_facet_labeling_parity(True)
dc.set_weathering_factor(1.0)
for id in [1,62,67,72,77,82,87]: dc.add_facet_missing_list(id-1)
mst.mutable_focal_plane().set_camera_diameter(235)
mst.mutable_focal_plane().mutable_translation().set_y(1/(1.0/dc.alignment_image_plane()-1/(10 * 1e5)))
mst.mutable_pixel().set_spacing(5)
mst.mutable_pixel().set_cone_inner_diameter(5)
mst.mutable_pixel().set_cone_survival_prob(1)
mst.mutable_pixel().set_hex_module_size(1)
mst.mutable_pixel().set_module_num_hex_rings(9)
u1,v1 = calin.math.hex_array.cluster_hexid_to_center_uv(1,1)
x1,y1 = calin.math.hex_array.uv_to_xy(u1,v1)
rot = arctan2(-y1,x1)/pi*180 - 30
mst.mutable_pixel().set_grid_rotation(rot)
obs_camera_box = mst.add_obscurations()
obs_camera_box.aligned_box().max_corner().set_x(150)
obs_camera_box.aligned_box().max_corner().set_y(mst.focal_plane().translation().y()+150)
obs_camera_box.aligned_box().max_corner().set_z(150)
obs_camera_box.aligned_box().min_corner().set_x(-150)
obs_camera_box.aligned_box().min_corner().set_y(mst.focal_plane().translation().y())
obs_camera_box.aligned_box().min_corner().set_z(-150)
obs_camera_box.aligned_box().set_incoming_only(True)
rng = calin.math.rng.RNG()
cta = calin.simulation.vs_optics.VSOArray()
cta.generateFromArrayParameters(mst, rng)
cta.pointTelescopesAzEl(0,90.0/180.0*pi);
data_dir = calin.provenance.system_info.build_info().data_install_dir() + "/simulation/"
det_eff = calin.simulation.detector_efficiency.DetectionEfficiency()
det_eff.scaleEffFromFile(data_dir + 'qe_R12992-100-05.dat')
det_eff.scaleEffFromFile(data_dir + 'ref_AlSiO2HfO2.dat')
det_eff.scaleEffFromFile(data_dir + 'Aclylite8_tra_v2013ref.dat')
det_eff.scaleEffByConst(0.9)
cone_eff = calin.simulation.detector_efficiency.AngularEfficiency(data_dir + 'CTA-LST_lightguide_eff.dat')
atm = calin.simulation.atmosphere.LayeredAtmosphere(data_dir + 'atmprof36.dat')
atm_abs = calin.simulation.detector_efficiency.AtmosphericAbsorption(data_dir + 'atm_trans_2147_1_10_0_0_2147.dat')
wmm = calin.simulation.world_magnetic_model.WMM()
bfield = wmm.field_vs_elevation(mst.array_origin().latitude(), mst.array_origin().longitude())
#bfield = None
pe_imager = calin.simulation.ray_processor.SimpleImagePEProcessor(cta.numTelescopes(),cta.telescope(0).numPixels())
dx = 10.0
qcfg = calin.ix.simulation.tracker.QuadratureIACTArrayIntegrationConfig();
qcfg.set_ray_spacing_linear(dx)
qcfg.set_ray_spacing_angular(2)
quad = calin.simulation.tracker.VSO_QuadratureIACTArrayIntegration(qcfg, cta, pe_imager)
#quad.add_trace_visitor(diag)
quad.set_detection_efficiencies(det_eff, atm_abs, cta.telescope(0).opticalAxis()[2], cone_eff)
iact = calin.simulation.tracker.IACTDetectorSphereCherenkovConeIntersectionFinder(quad)
act = calin.simulation.tracker.AirCherenkovParameterCalculatorTrackVisitor(iact, atm)
limiter_lo = calin.simulation.tracker.LengthLimitingTrackVisitor(act, dx,
mst.array_origin().elevation() + 40*100)
limiter_hi = calin.simulation.tracker.LengthLimitingTrackVisitor(limiter_lo, 50.0*dx)
generator = calin.simulation.geant4_shower_generator.\
Geant4ShowerGenerator(limiter_hi, atm, 1000, mst.array_origin().elevation(), atm.top_of_atmosphere(), bfield,
calin.simulation.geant4_shower_generator.VerbosityLevel_SUPRESSED_STDOUT);
generator.set_minimum_energy_cut(20);
ballistic_generator = calin.simulation.tracker.\
StraightTrackGenerator(limiter_hi, mst.array_origin().elevation())
pmt_cfg = calin.simulation.pmt.PMTSimPolya.cta_model_3()
pmt_cfg.set_signal_in_pe(True)
pmt = calin.simulation.pmt.PMTSimPolya(pmt_cfg,rng)
pmt_gain = mean(pmt.rvs(1000000))
noise_cfg = calin.ix.simulation.pmt.PoissonSignalSimConfig();
noise_gen = calin.simulation.pmt.PoissonSignalSim(pmt, noise_cfg, rng)
def gen_image(theta=0, phi=0, bx=0, by=0, e=1000, pt='proton', threshold=20, nsb=4, noise=True):
theta *= pi/180
phi *= pi/180
bx *= 100
by *= 100
e *= 1000
u = asarray([sin(theta)*cos(phi), sin(theta)*sin(phi), -cos(theta)])
x0 = asarray([bx,by,mst.array_origin().elevation()])+u/u[2]*100*1e5
if(pt=='proton'):
pt = calin.simulation.tracker.ParticleType_PROTON
generator.generate_showers(1, pt, e, x0, u)
elif(pt=='gamma'):
pt = calin.simulation.tracker.ParticleType_GAMMA
generator.generate_showers(1, pt, e, x0, u)
elif(pt=='electron'):
pt = calin.simulation.tracker.ParticleType_ELECTRON
generator.generate_showers(1, pt, e, x0, u)
elif(pt=='muon'):
pt = calin.simulation.tracker.ParticleType_MUON
generator.generate_showers(1, pt, e, x0, u)
elif(pt=='muon (ballistic)'):
pt = calin.simulation.tracker.ParticleType_MUON
ballistic_generator.generate_showers(1, pt, e, x0, u)
elif(pt=='nsb'):
return asarray(pe_imager.scope_image(0))*0
pix_data = asarray(pe_imager.scope_image(0))
for i in range(1,cta.numTelescopes()):
pix_data = maximum(pix_data, asarray(pe_imager.scope_image(i)))
return pix_data
def gentle_clip(x, C=4000, D=1200):
# Hyperbola with offset and non-linearity cancelled
C = sqrt(C**2 - D**2) # C is interpreted as max value
return (x-sqrt((x-C)**2 + D**2)+sqrt(C**2 + D**2)) / (1+C/sqrt(C**2 + D**2))
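# --- Added illustration (not part of the original notebook) ---
# gentle_clip leaves small signals essentially untouched and saturates large
# ones near the nominal maximum C (4000 by default), e.g.
# gentle_clip(10.0) ~ 10 while gentle_clip(1e6) ~ 4000:
print(gentle_clip(10.0), gentle_clip(1e6))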
def plot_image(pix_data):
s = cta.telescope(0)
# figure(figsize=(10,8))
figure(figsize=(7,6))
pix = []
idx = []
for pix_id in range(len(pix_data)):
# if(pix_data[pix_id] == 0.0):
# continue
pix_hexid = s.pixel(pix_id).hexID()
vx,vy = calin.math.hex_array.hexid_to_vertexes_xy_trans(pix_hexid,
s.cosPixelRotation(), s.sinPixelRotation(), s.pixelSpacing()/s.focalPlanePosition()[1]/pi*180.0)
vv = zeros((len(vx),2))
vv[:,0] = vx
vv[:,1] = vy
pix.append(Polygon(vv,closed=True))
idx.append(pix_id)
pc = matplotlib.collections.PatchCollection(pix, cmap=matplotlib.cm.jet)
pc.set_array(asarray(pix_data)[idx])
pc.set_linewidths(0)
clo = 0
if(min(pix_data)<0):
clo = -5
pc.set_clim(clo,max(80,ceil(max(pix_data)/10)*10))
gca().add_collection(pc)
axis('square')
axis(4.5*asarray([-1,1,-1,1]))
xlabel('X coordinate [deg]')
ylabel('Y coordinate [deg]')
colorbar(pc)
grid(color='w')
def run_sim(theta=0, phi=0, bx=0, by=0, loge=3, pt='proton', threshold=20, nsb=4.0, noise=True, clip=True):
e = 10**loge
sim_image = gen_image(theta=theta,phi=phi,bx=bx,by=by,e=e,pt=pt,threshold=threshold,nsb=nsb,noise=noise)
im = sim_image
if(noise):
im += nsb
im = noise_gen.rvs(im)
im -= noise_gen.pedestal_mean()
im /= pmt_gain
im -= nsb
if(clip):
im = gentle_clip(im)
plot_image(im)
text(0.025,0.975,'Energy: %s$\\,$GeV\nType: %s'%("{:,.1f}".format(e),pt),
transform=gca().transAxes,va='top',ha='left')
text(0.025,0.025,'$\\hat{u}$ : %g$^\\circ$, %g$^\\circ$\n$\\vec{b}$ : %g$\\,$m, %g$\\,$m'%(theta,phi,bx,by),
transform=gca().transAxes,va='bottom',ha='left')
text(0.975,0.975,'Size: %s$\\,$PE\nN(>%g$\\,$PE): %g'%("{:,.1f}".format(sum(im)), threshold, sum(im>threshold)),
transform=gca().transAxes,va='top',ha='right')
try:
gcf().savefig('/CTA/event.pdf')
except:
pass
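# --- Added note (not part of the original notebook) ---
# run_sim can also be called directly, bypassing the widgets defined below, e.g.:
# run_sim(theta=1.0, phi=45.0, bx=50, by=0, loge=3.5, pt='gamma', nsb=4.0)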
wbx = FloatSlider(min=-500.0, max=500.0, step=1, value=0, description="X impact point [m]")
wby = FloatSlider(min=-500.0, max=500.0, step=1, value=0, description="Y impact point [m]")
wenergy = FloatSlider(min=1.0, max=5.0, step=0.0625, value=3, description="Log10 E/GeV", readout_format='.3f',)
wtheta = FloatSlider(min=0.0,max=8.0,step=0.1,value=0.0, description="Theta [deg]")
wphi = FloatSlider(min=0.0, max=360.0, step=5.0, value=0.0, description="Phi [deg]")
wtype = Dropdown(options=['gamma', 'proton', 'electron', 'muon', 'muon (ballistic)', 'nsb'],
value='gamma', description='Paricle type')
wnsb = FloatSlider(min=0.0, max=20.0, step=0.2, value=4.0, description="Mean NSB [PE]")
wnoise = Checkbox(value=True, description='Ph & PMT noise')
wclip = Checkbox(value=True, description='Clipping')
button = Button(description="Run simulation")
wbox = VBox([HBox([wbx,wby,wenergy]),HBox([wtheta,wphi,wtype]),
HBox([wnsb,wnoise,wclip,button])])
display(wbox)
def on_button_clicked(b):
clear_output()
run_sim(theta=wtheta.value, phi=wphi.value, bx=wbx.value, by=wby.value, loge=wenergy.value, pt=wtype.value,
threshold=30, nsb=wnsb.value, noise=wnoise.value, clip=wclip.value)
button.on_click(on_button_clicked)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Define telescope properties for ray tracer and construct array
Step2: 3. Construct detection efficiency curve, cone efficiency and atmosphere
Step3: 4. Use world magnetic model to calculate field vs height at IAC
Step4: 5. Construct the hierarchy of actions to take for each track
Step5: 6. Track generators
Step6: 7. PMT and Poisson noise generators
Step7: 8. Function to run simulation and return image
Step8: 9. Clipping function based on hyperbola
Step9: 10. Plot image using matplotlib (using a PatchCollection)
Step10: 11. Put it all togerther
Step11: 12. Set up widgets and connect them run simulation when button clicked
|
6,683
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
df = pd.DataFrame({'A': [1, 0, 0, 0, 1, 0],
'B': [0, 1, 0, 0, 0, 1],
'C': [0, 0, 1, 0, 0, 0],
'D': [0, 0, 0, 1, 0, 0]})
df["category"] = df.idxmax(axis=1)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
6,684
|
<ASSISTANT_TASK:>
Python Code:
import os
import glob
import re
import nestly
%load_ext rpy2.ipython
%load_ext pushnote
%%R
library(ggplot2)
library(dplyr)
library(tidyr)
library(gridExtra)
library(phyloseq)
## BD for G+C of 0 or 100
BD.GCp0 = 0 * 0.098 + 1.66
BD.GCp100 = 1 * 0.098 + 1.66
workDir = '/home/nick/notebook/SIPSim/dev/fullCyc/n1147_frag_norm_9_2.5_n5/'
buildDir = os.path.join(workDir, 'Day1_rep10_justOverlap')
R_dir = '/home/nick/notebook/SIPSim/lib/R/'
fragFile= '/home/nick/notebook/SIPSim/dev/bac_genome1147/validation/ampFrags.pkl'
targetFile = '/home/nick/notebook/SIPSim/dev/fullCyc/CD-HIT/target_taxa.txt'
physeqDir = '/var/seq_data/fullCyc/MiSeq_16SrRNA/515f-806r/lib1-7/phyloseq/'
physeq_bulkCore = 'bulk-core'
physeq_SIP_core = 'SIP-core_unk'
nreps = 10
prefrac_comm_abundance = '1e9'
seq_per_fraction = ['lognormal', 9.432, 0.5, 10000, 30000] # dist, mean, scale, min, max
bulk_days = [1]
nprocs = 14
# building tree structure
nest = nestly.Nest()
## varying params
nest.add('rep', [x + 1 for x in xrange(nreps)])
## set params
nest.add('bulk_day', bulk_days, create_dir=False)
nest.add('abs', [prefrac_comm_abundance], create_dir=False)
nest.add('percIncorp', [0], create_dir=False)
nest.add('percTaxa', [0], create_dir=False)
nest.add('np', [nprocs], create_dir=False)
nest.add('subsample_dist', [seq_per_fraction[0]], create_dir=False)
nest.add('subsample_mean', [seq_per_fraction[1]], create_dir=False)
nest.add('subsample_scale', [seq_per_fraction[2]], create_dir=False)
nest.add('subsample_min', [seq_per_fraction[3]], create_dir=False)
nest.add('subsample_max', [seq_per_fraction[4]], create_dir=False)
nest.add('bandwidth', [0.6], create_dir=False)
nest.add('cmd', ['"print unless /NA$/"'], create_dir=False)
### input/output files
nest.add('buildDir', [buildDir], create_dir=False)
nest.add('R_dir', [R_dir], create_dir=False)
nest.add('fragFile', [fragFile], create_dir=False)
nest.add('targetFile', [targetFile], create_dir=False)
nest.add('physeqDir', [physeqDir], create_dir=False)
nest.add('physeq_bulkCore', [physeq_bulkCore], create_dir=False)
# building directory tree
nest.build(buildDir)
# bash file to run
bashFile = os.path.join(buildDir, 'SIPSimRun.sh')
%%writefile $bashFile
#!/bin/bash
export PATH={R_dir}:$PATH
#-- making DNA pool similar to gradient of interest
echo '# Creating comm file from phyloseq'
phyloseq2comm.r {physeqDir}{physeq_bulkCore} -s 12C-Con -d {bulk_day} > {physeq_bulkCore}_comm.txt
printf 'Number of lines: '; wc -l {physeq_bulkCore}_comm.txt
echo '## Adding target taxa to comm file'
comm_add_target.r {physeq_bulkCore}_comm.txt {targetFile} > {physeq_bulkCore}_comm_target.txt
printf 'Number of lines: '; wc -l {physeq_bulkCore}_comm_target.txt
echo '## Selecting just target taxa'
perl -ne {cmd} {physeq_bulkCore}_comm_target.txt | comm_set_abund.r - > tmp.txt
rm -f {physeq_bulkCore}_comm_target.txt
mv tmp.txt {physeq_bulkCore}_comm_target.txt
echo '## parsing out genome fragments to make simulated DNA pool resembling the gradient of interest'
## all OTUs without an associated reference genome will be assigned a random reference (of the reference genome pool)
### this is done through --NA-random
SIPSim fragment_KDE_parse {fragFile} {physeq_bulkCore}_comm_target.txt \
--rename taxon_name --NA-random > fragsParsed.pkl
echo '#-- SIPSim pipeline --#'
echo '# converting fragments to KDE'
SIPSim fragment_KDE \
fragsParsed.pkl \
> fragsParsed_KDE.pkl
echo '# adding diffusion'
SIPSim diffusion \
fragsParsed_KDE.pkl \
--bw {bandwidth} \
--np {np} \
> fragsParsed_KDE_dif.pkl
echo '# adding DBL contamination'
SIPSim DBL \
fragsParsed_KDE_dif.pkl \
--bw {bandwidth} \
--np {np} \
> fragsParsed_KDE_dif_DBL.pkl
echo '# making incorp file'
SIPSim incorpConfigExample \
--percTaxa {percTaxa} \
--percIncorpUnif {percIncorp} \
> {percTaxa}_{percIncorp}.config
echo '# adding isotope incorporation to BD distribution'
SIPSim isotope_incorp \
fragsParsed_KDE_dif_DBL.pkl \
{percTaxa}_{percIncorp}.config \
--comm {physeq_bulkCore}_comm_target.txt \
--bw {bandwidth} \
--np {np} \
> fragsParsed_KDE_dif_DBL_inc.pkl
echo '# simulating gradient fractions'
SIPSim gradient_fractions \
{physeq_bulkCore}_comm_target.txt \
> fracs.txt
echo '# simulating an OTU table'
SIPSim OTU_table \
fragsParsed_KDE_dif_DBL_inc.pkl \
{physeq_bulkCore}_comm_target.txt \
fracs.txt \
--abs {abs} \
--np {np} \
> OTU_abs{abs}.txt
#echo '# simulating PCR'
#SIPSim OTU_PCR \
# OTU_abs{abs}.txt \
# > OTU_abs{abs}_PCR.txt
echo '# subsampling from the OTU table (simulating sequencing of the DNA pool)'
SIPSim OTU_subsample \
--dist {subsample_dist} \
--dist_params mean:{subsample_mean},sigma:{subsample_scale} \
--min_size {subsample_min} \
--max_size {subsample_max} \
OTU_abs{abs}.txt \
> OTU_abs{abs}_sub.txt
echo '# making a wide-formatted table'
SIPSim OTU_wideLong -w \
OTU_abs{abs}_sub.txt \
> OTU_abs{abs}_sub_w.txt
echo '# making metadata (phyloseq: sample_data)'
SIPSim OTU_sampleData \
OTU_abs{abs}_sub.txt \
> OTU_abs{abs}_sub_meta.txt
!chmod 777 $bashFile
!cd $workDir; \
nestrun --template-file $bashFile -d Day1_rep10_justOverlap --log-file log.txt -j 2
%pushnote Day1_rep10_justOverlap complete
%%R
## min G+C cutoff
min_GC = 13.5
## max G+C cutoff
max_GC = 80
## max G+C shift
max_13C_shift_in_BD = 0.036
min_BD = min_GC/100.0 * 0.098 + 1.66
max_BD = max_GC/100.0 * 0.098 + 1.66
max_BD = max_BD + max_13C_shift_in_BD
cat('Min BD:', min_BD, '\n')
cat('Max BD:', max_BD, '\n')
%%R
# simulated OTU table file
OTU.table.dir = '/home/nick/notebook/SIPSim/dev/fullCyc/frag_norm_9_2.5_n5/Day1_default_run/1e9/'
OTU.table.file = 'OTU_abs1e9_PCR_sub.txt'
#OTU.table.file = 'OTU_abs1e9_sub.txt'
#OTU.table.file = 'OTU_abs1e9.txt'
%%R -i physeqDir -i physeq_SIP_core -i bulk_days
# bulk core samples
F = file.path(physeqDir, physeq_SIP_core)
physeq.SIP.core = readRDS(F)
physeq.SIP.core.m = physeq.SIP.core %>% sample_data
physeq.SIP.core = prune_samples(physeq.SIP.core.m$Substrate == '12C-Con' &
physeq.SIP.core.m$Day %in% bulk_days,
physeq.SIP.core) %>%
filter_taxa(function(x) sum(x) > 0, TRUE)
physeq.SIP.core.m = physeq.SIP.core %>% sample_data
physeq.SIP.core
%%R
## dataframe
df.EMP = physeq.SIP.core %>% otu_table %>%
as.matrix %>% as.data.frame
df.EMP$OTU = rownames(df.EMP)
df.EMP = df.EMP %>%
gather(sample, abundance, 1:(ncol(df.EMP)-1))
df.EMP = inner_join(df.EMP, physeq.SIP.core.m, c('sample' = 'X.Sample'))
df.EMP.nt = df.EMP %>%
group_by(sample) %>%
mutate(n_taxa = sum(abundance > 0)) %>%
ungroup() %>%
distinct(sample) %>%
filter(Buoyant_density >= min_BD,
Buoyant_density <= max_BD)
df.EMP.nt %>% head(n=3)
%%R
physeq.dir = '/var/seq_data/fullCyc/MiSeq_16SrRNA/515f-806r/lib1-7/phyloseq/'
physeq.bulk = 'bulk-core'
physeq.file = file.path(physeq.dir, physeq.bulk)
physeq.bulk = readRDS(physeq.file)
physeq.bulk.m = physeq.bulk %>% sample_data
physeq.bulk = prune_samples(physeq.bulk.m$Exp_type == 'microcosm_bulk' &
physeq.bulk.m$Day %in% bulk_days, physeq.bulk)
physeq.bulk.m = physeq.bulk %>% sample_data
physeq.bulk
%%R
physeq.bulk.n = transform_sample_counts(physeq.bulk, function(x) x/sum(x))
physeq.bulk.n
%%R
# making long format of each bulk table
bulk.otu = physeq.bulk.n %>% otu_table %>% as.data.frame
ncol = ncol(bulk.otu)
bulk.otu$OTU = rownames(bulk.otu)
bulk.otu = bulk.otu %>%
gather(sample, abundance, 1:ncol)
bulk.otu = inner_join(physeq.bulk.m, bulk.otu, c('X.Sample' = 'sample')) %>%
dplyr::select(OTU, abundance) %>%
rename('bulk_abund' = abundance)
bulk.otu %>% head(n=3)
%%R
# joining tables
df.EMP.j = inner_join(df.EMP, bulk.otu, c('OTU' = 'OTU')) %>%
filter(Buoyant_density >= min_BD,
Buoyant_density <= max_BD)
df.EMP.j %>% head(n=3)
OTU_files = !find $buildDir -name "OTU_abs1e9_sub.txt"
OTU_files
%%R -i OTU_files
# loading files
df.SIM = list()
for (x in OTU_files){
SIM_rep = gsub('/home/nick/notebook/SIPSim/dev/fullCyc/n1147_frag_norm_9_2.5_n5/Day1_rep10_justOverlap/', '', x)
SIM_rep = gsub('/OTU_abs1e9_sub.txt', '', SIM_rep)
df.SIM[[SIM_rep]] = read.delim(x, sep='\t')
}
df.SIM = do.call('rbind', df.SIM)
df.SIM$SIM_rep = gsub('\\.[0-9]+$', '', rownames(df.SIM))
rownames(df.SIM) = 1:nrow(df.SIM)
df.SIM %>% head
%%R
## edit table
df.SIM.nt = df.SIM %>%
filter(count > 0) %>%
group_by(SIM_rep, library, BD_mid) %>%
summarize(n_taxa = n()) %>%
filter(BD_mid >= min_BD,
BD_mid <= max_BD)
df.SIM.nt %>% head
# loading comm files
comm_files = !find $buildDir -name "bulk-core_comm_target.txt"
comm_files
%%R -i comm_files
df.comm = list()
for (f in comm_files){
rep = gsub('.+/Day1_rep10_justOverlap/([0-9]+)/.+', '\\1', f)
df.comm[[rep]] = read.delim(f, sep='\t') %>%
dplyr::select(library, taxon_name, rel_abund_perc) %>%
rename('bulk_abund' = rel_abund_perc) %>%
mutate(bulk_abund = bulk_abund / 100)
}
df.comm = do.call('rbind', df.comm)
df.comm$SIM_rep = gsub('\\.[0-9]+$', '', rownames(df.comm))
rownames(df.comm) = 1:nrow(df.comm)
df.comm %>% head(n=3)
%%R
## joining tables
df.SIM.j = inner_join(df.SIM, df.comm, c('SIM_rep' = 'SIM_rep',
'library' = 'library',
'taxon' = 'taxon_name')) %>%
filter(BD_mid >= min_BD,
BD_mid <= max_BD)
df.SIM.j %>% head(n=3)
%%R
# filtering & combining emperical w/ simulated data
## emperical
max_BD_range = max(df.EMP.j$Buoyant_density) - min(df.EMP.j$Buoyant_density)
df.EMP.j.f = df.EMP.j %>%
filter(abundance > 0) %>%
group_by(OTU) %>%
summarize(mean_rel_abund = mean(bulk_abund),
min_BD = min(Buoyant_density),
max_BD = max(Buoyant_density),
BD_range = max_BD - min_BD,
BD_range_perc = BD_range / max_BD_range * 100) %>%
ungroup() %>%
mutate(dataset = 'emperical',
SIM_rep = NA)
## simulated
max_BD_range = max(df.SIM.j$BD_mid) - min(df.SIM.j$BD_mid)
df.SIM.j.f = df.SIM.j %>%
filter(count > 0) %>%
group_by(SIM_rep, taxon) %>%
summarize(mean_rel_abund = mean(bulk_abund),
min_BD = min(BD_mid),
max_BD = max(BD_mid),
BD_range = max_BD - min_BD,
BD_range_perc = BD_range / max_BD_range * 100) %>%
ungroup() %>%
rename('OTU' = taxon) %>%
mutate(dataset = 'simulated')
## join
df.j = rbind(df.EMP.j.f, df.SIM.j.f) %>%
filter(BD_range_perc > 0,
mean_rel_abund > 0)
df.j$SIM_rep = reorder(df.j$SIM_rep, df.j$SIM_rep %>% as.numeric)
df.j %>% head(n=3)
%%R -h 400
## plotting
ggplot(df.j, aes(mean_rel_abund, BD_range_perc, color=SIM_rep)) +
geom_point(alpha=0.3) +
scale_x_log10() +
scale_y_continuous() +
labs(x='Pre-fractionation abundance', y='% of total BD range') +
facet_grid(dataset ~ .) +
theme_bw() +
theme(
text = element_text(size=16),
panel.grid = element_blank()#,
#legend.position = 'none'
)
%%R -i targetFile
df.target = read.delim(targetFile, sep='\t')
df.target %>% nrow %>% print
df.target %>% head(n=3)
%%R
# filtering to just target taxa
df.j.t = df.j %>%
filter(OTU %in% df.target$OTU)
df.j %>% nrow %>% print
df.j.t %>% nrow %>% print
## plotting
ggplot(df.j.t, aes(mean_rel_abund, BD_range_perc, color=SIM_rep)) +
geom_point(alpha=0.5, shape='O') +
scale_x_log10() +
scale_y_continuous() +
#scale_color_manual(values=c('blue', 'red')) +
labs(x='Pre-fractionation abundance', y='% of total BD range') +
facet_grid(dataset ~ .) +
theme_bw() +
theme(
text = element_text(size=16),
panel.grid = element_blank()#,
#legend.position = 'none'
)
%%R -w 600 -h 500
# formatting data
df.1 = df.j.t %>%
filter(dataset == 'simulated') %>%
select(SIM_rep, OTU, mean_rel_abund, BD_range, BD_range_perc)
df.2 = df.j.t %>%
filter(dataset == 'emperical') %>%
select(SIM_rep, OTU, mean_rel_abund, BD_range, BD_range_perc)
df.12 = inner_join(df.1, df.2, c('OTU' = 'OTU')) %>%
mutate(BD_diff_perc = BD_range_perc.y - BD_range_perc.x)
df.12$SIM_rep.x = reorder(df.12$SIM_rep.x, df.12$SIM_rep.x %>% as.numeric)
## plotting
p1 = ggplot(df.12, aes(mean_rel_abund.x, mean_rel_abund.y)) +
geom_point(alpha=0.5) +
scale_x_log10() +
scale_y_log10() +
labs(x='Relative abundance (simulated)', y='Relative abundance (emperical)') +
    facet_wrap(~ SIM_rep.x) +
theme_bw() +
theme(
text = element_text(size=16),
panel.grid = element_blank(),
legend.position = 'none'
)
p1
%%R -w 800 -h 500
ggplot(df.12, aes(mean_rel_abund.x, BD_diff_perc)) +
geom_point(alpha=0.5) +
scale_x_log10() +
labs(x='Pre-fractionation relative abundance',
y='Difference in % of gradient spanned\n(emperical - simulated)',
title='Overlapping taxa') +
facet_wrap(~ SIM_rep.x) +
theme_bw() +
theme(
text = element_text(size=16),
panel.grid = element_blank(),
legend.position = 'none'
)
%%R
join_abund_dists = function(df.EMP.j, df.SIM.j, df.target){
## emperical
df.EMP.j.f = df.EMP.j %>%
filter(abundance > 0) %>%
dplyr::select(OTU, sample, abundance, Buoyant_density, bulk_abund) %>%
mutate(dataset = 'emperical', SIM_rep = NA) %>%
filter(OTU %in% df.target$OTU)
## simulated
df.SIM.j.f = df.SIM.j %>%
filter(count > 0) %>%
dplyr::select(taxon, fraction, count, BD_mid, bulk_abund, SIM_rep) %>%
rename('OTU' = taxon,
'sample' = fraction,
'Buoyant_density' = BD_mid,
'abundance' = count) %>%
mutate(dataset = 'simulated') %>%
filter(OTU %in% df.target$OTU)
## getting just intersecting OTUs
OTUs.int = intersect(df.EMP.j.f$OTU, df.SIM.j.f$OTU)
df.j = rbind(df.EMP.j.f, df.SIM.j.f) %>%
filter(OTU %in% OTUs.int) %>%
group_by(sample) %>%
mutate(rel_abund = abundance / sum(abundance))
cat('Number of overlapping OTUs between emperical & simulated:',
df.j$OTU %>% unique %>% length, '\n\n')
return(df.j)
}
df.j = join_abund_dists(df.EMP.j, df.SIM.j, df.target)
df.j %>% head(n=3) %>% as.data.frame
%%R
# closure operation
df.j = df.j %>%
ungroup() %>%
mutate(SIM_rep = SIM_rep %>% as.numeric) %>%
group_by(dataset, SIM_rep, sample) %>%
mutate(rel_abund_c = rel_abund / sum(rel_abund)) %>%
ungroup()
df.j %>% head(n=3) %>% as.data.frame
%%R -h 1500 -w 800
# plotting
plot_abunds = function(df){
p = ggplot(df, aes(Buoyant_density, rel_abund_c, fill=OTU)) +
geom_area(stat='identity', position='dodge', alpha=0.5) +
labs(x='Buoyant density',
y='Subsampled community\n(relative abundance for subset taxa)') +
theme_bw() +
theme(
text = element_text(size=16),
legend.position = 'none',
axis.title.y = element_text(vjust=1),
axis.title.x = element_blank(),
plot.margin=unit(c(0.1,1,0.1,1), "cm")
)
return(p)
}
# simulations
df.j.f = df.j %>%
filter(dataset == 'simulated')
p.SIM = plot_abunds(df.j.f)
p.SIM = p.SIM + facet_grid(SIM_rep ~ .)
# emperical
df.j.f = df.j %>%
filter(dataset == 'emperical')
p.EMP = plot_abunds(df.j.f)
# make figure
grid.arrange(p.EMP, p.SIM, ncol=1, heights=c(1,5))
OTU_files = !find $buildDir -name "OTU_abs1e9.txt"
OTU_files
%%R -i OTU_files
# loading files
df.SIM.abs = list()
for (x in OTU_files){
    SIM_rep = gsub('/home/nick/notebook/SIPSim/dev/fullCyc/n1147_frag_norm_9_2.5_n5/Day1_rep10_justOverlap/', '', x)
SIM_rep = gsub('/OTU_abs1e9.txt', '', SIM_rep)
df.SIM.abs[[SIM_rep]] = read.delim(x, sep='\t')
}
df.SIM.abs = do.call('rbind', df.SIM.abs)
df.SIM.abs$SIM_rep = gsub('\\.[0-9]+$', '', rownames(df.SIM.abs))
rownames(df.SIM.abs) = 1:nrow(df.SIM.abs)
df.SIM.abs %>% head
%%R
# subset just overlapping taxa
# & closure operation
df.SIM.abs.t = df.SIM.abs %>%
filter(taxon %in% df.target$OTU) %>%
group_by(SIM_rep, fraction) %>%
mutate(rel_abund_c = count / sum(count)) %>%
rename('Buoyant_density' = BD_mid,
'OTU' = taxon)
df.SIM.abs.t %>% head(n=3) %>% as.data.frame
%%R -w 800 -h 1200
# plotting
p.abs = plot_abunds(df.SIM.abs.t)
p.abs + facet_grid(SIM_rep ~ .)
%%R
center_mass = function(df){
df = df %>%
group_by(dataset, SIM_rep, OTU) %>%
summarize(center_mass = weighted.mean(Buoyant_density, rel_abund_c, na.rm=T)) %>%
ungroup()
return(df)
}
df.j.cm = center_mass(df.j)
%%R
# getting mean cm for all SIM_reps
df.j.cm.s = df.j.cm %>%
group_by(dataset, OTU) %>%
summarize(mean_cm = mean(center_mass, na.rm=T),
stdev_cm = sd(center_mass)) %>%
ungroup() %>%
spread(dataset, mean_cm) %>%
group_by(OTU) %>%
summarize(stdev_cm = mean(stdev_cm, na.rm=T),
emperical = mean(emperical, na.rm=T),
simulated = mean(simulated, na.rm=T)) %>%
ungroup()
# check
cat('Number of OTUs:', df.j.cm.s$OTU %>% unique %>% length, '\n')
# plotting
ggplot(df.j.cm.s, aes(emperical, simulated,
ymin = simulated - stdev_cm,
ymax = simulated + stdev_cm)) +
geom_pointrange() +
stat_function(fun = function(x) x, linetype='dashed', alpha=0.5, color='red') +
scale_x_continuous(limits=c(1.69, 1.74)) +
scale_y_continuous(limits=c(1.7, 1.75)) +
labs(title='Center of mass') +
theme_bw() +
theme(
text = element_text(size=16)
)
%%R
BD_MIN = df.j$Buoyant_density %>% min
BD_MAX = df.j$Buoyant_density %>% max
BD_AVE = mean(c(BD_MIN, BD_MAX))
print(c(BD_MIN, BD_AVE, BD_MAX))
%%R
# formatting table
df.j.cm.j = inner_join(df.j.cm %>%
filter(dataset == 'simulated') %>%
rename('cm_SIM' = center_mass),
df.j.cm %>%
filter(dataset == 'emperical') %>%
rename('cm_EMP' = center_mass),
c('OTU' = 'OTU')) %>%
select(-starts_with('dataset'))
df.j.cm.j %>% head
%%R -w 300 -h 400
# lm()
df.j.cm.j.lm = df.j.cm.j %>%
group_by(SIM_rep.x) %>%
do(fit = lm(cm_EMP ~ cm_SIM, data = .)) %>%
mutate(R2 = summary(fit)$coeff[2],
data = 'simulated')
#df.j.cm.j.lm %>% head
# plotting
ggplot(df.j.cm.j.lm, aes(data, R2)) +
geom_boxplot() +
geom_jitter(height=0, width=0.2, color='red') +
labs(y='R^2', title='simulated ~ emperical') +
theme_bw() +
theme(
text = element_text(size=16)
)
%%R -h 1100 -w 800
# cutoff on which OTU are major outliers (varying between simulated and emperical)
BD.diff.cut = 0.02
# which OTU to plot?
df.j.cm.s.f = df.j.cm.s %>%
mutate(cm_diff = abs(emperical - simulated)) %>%
filter(cm_diff > BD.diff.cut, ! is.na(simulated))
print('OTUs:')
print(df.j.cm.s.f$OTU)
# filtering to just target taxon
## Simulated
df.j.f = df.j %>%
filter(dataset == 'simulated',
OTU %in% df.j.cm.s.f$OTU)
p.SIM = plot_abunds(df.j.f)
p.SIM = p.SIM + facet_grid(SIM_rep ~ .)
## Emperical
df.j.f = df.j %>%
filter(dataset == 'emperical',
OTU %in% df.j.cm.s.f$OTU)
p.EMP = plot_abunds(df.j.f)
# make figure
grid.arrange(p.EMP, p.SIM, ncol=1, heights=c(1,5))
%%R
# subset outliers
df.SIM.abs.t = df.SIM.abs %>%
filter(taxon %in% df.target$OTU) %>%
group_by(SIM_rep, fraction) %>%
mutate(rel_abund_c = count / sum(count)) %>%
rename('Buoyant_density' = BD_mid,
'OTU' = taxon) %>%
filter(OTU %in% df.j.cm.s.f$OTU)
df.SIM.abs.t %>% head(n=3) %>% as.data.frame
%%R -w 800 -h 1200
# plotting
p.abs = plot_abunds(df.SIM.abs.t)
p.abs + facet_grid(SIM_rep ~ .)
%%R -w 800
genomes = df.target %>%
filter(OTU %in% df.SIM.abs.t$OTU)
df.genInfo = read.delim('/var/seq_data/ncbi_db/genome/Jan2016/bac_complete_spec-rep1_rn/genome_info.txt')
df.genInfo.f = df.genInfo %>%
filter(seq_name %in% genomes$genome_seqID) %>%
mutate(genome_ID = gsub('\\.fna', '', file_name))
df.genInfo.f$genome_ID = reorder(df.genInfo.f$genome_ID, df.genInfo.f$total_GC)
# plotting
ggplot(df.genInfo.f, aes(genome_ID, total_GC)) +
geom_point() +
theme_bw() +
theme(
text = element_text(size=16),
axis.text.x = element_text(angle=50, hjust=1)
)
%%R -w 800
df.genInfo = read.delim('/var/seq_data/ncbi_db/genome/Jan2016/bac_complete_spec-rep1_rn/genome_info.txt')
df.genInfo.f = df.genInfo %>%
filter(seq_name %in% df.target$genome_seqID) %>%
mutate(genome_ID = gsub('\\.fna', '', file_name))
df.genInfo.f$genome_ID = reorder(df.genInfo.f$genome_ID, df.genInfo.f$total_GC)
# plotting
ggplot(df.genInfo.f, aes(genome_ID, total_GC)) +
geom_point() +
theme_bw() +
theme(
text = element_text(size=16),
axis.text.x = element_blank()
)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Nestly
Step2: BD min/max
Step3: Loading data
Step4: bulk soil samples
Step5: Simulated
Step6: 'bulk soil' community files
Step7: BD span of just overlapping taxa (redundant; but consistent with other notebooks)
Step8: Check
Step9: Correlation between relative abundance and BD_range diff
Step10: Notes
Step11: Check
Step12: Notes
Step13: Notes
Step14: R^2 for each SIM_rep
Step15: Notes
Step16: Notes
Step17: Notes
Step18: Notes
|
6,685
|
<ASSISTANT_TASK:>
Python Code:
def Solar_Power_Calculator(Day_Of_Year,Lattitude,Hour_of_Day):
    '''Return the solar power per square meter (W/m^2) at a given latitude for a given day of the year and hour offset from solar noon.'''
# Calculating Theta D
ThetaD = (2*np.pi*Day_Of_Year)/365
# Calculating distance
# Constants for calculating distance
Dis_n = [0,1,2]
Dis_an = [1.00011,0.034221,0.000719]
Dis_bn = [0,0.00128,0.000077]
Dis1 = Dis_an[0]*np.cos(Dis_n[0]*ThetaD)+Dis_bn[0]*np.sin(Dis_n[0]*ThetaD)
Dis2 = Dis_an[1]*np.cos(Dis_n[1]*ThetaD)+Dis_bn[1]*np.sin(Dis_n[1]*ThetaD)
Dis3 = Dis_an[2]*np.cos(Dis_n[2]*ThetaD)+Dis_bn[2]*np.sin(Dis_n[2]*ThetaD)
# Calculate Distance
Distance = Dis1+Dis2+Dis3
# Constants for calculating declination
Dec_n = [0,1,2,3]
Dec_an = [0.006918,-0.399912,-0.006758,-0.002697]
Dec_bn = [0,0.070257,0.000907,0.00148]
Dec1 = Dec_an[0]*np.cos(Dec_n[0]*ThetaD)+Dec_bn[0]*np.sin(Dec_n[0]*ThetaD)
Dec2 = Dec_an[1]*np.cos(Dec_n[1]*ThetaD)+Dec_bn[1]*np.sin(Dec_n[1]*ThetaD)
Dec3 = Dec_an[2]*np.cos(Dec_n[2]*ThetaD)+Dec_bn[2]*np.sin(Dec_n[2]*ThetaD)
Dec4 = Dec_an[3]*np.cos(Dec_n[3]*ThetaD)+Dec_bn[3]*np.sin(Dec_n[3]*ThetaD)
# Calculate Dec_radians
Dec_radians = Dec1+Dec2+Dec3+Dec4
Dec_degrees = np.degrees(Dec_radians)
# For Hour Angle
Hour_angle = np.radians(Hour_of_Day*15)
# For Radians and Cos Solar Zenith Angle
radians = np.pi/180*Lattitude
CSZA = np.sin(radians)*np.sin(Dec_radians)+np.cos(radians)*np.cos(Dec_radians)*np.cos(Hour_angle)# Cos Solar Zenith Angle
# Calculate Energy/Area (W/m^2)
Watts_Per_SqMeter = S0*Distance*CSZA*Atm
return(Watts_Per_SqMeter)
Solar_Power_Calculator(17,49.7,0)
# Making a list called total of Theta D for every day of the year
year = list(range(1,366))
ThetaD_list = []
for i in year:
ThetaD_list.append((2*np.pi*i)/365)
len(ThetaD_list)
def Solar_Energy_Calculator(lattitude, panel_efficiency, area):
'''This function calculates the energy that can be generated in any given place in the
    world over one year sans clouds.
    Inputs: lattitude, panel_efficiency (a number between 0 and 1), and area (of solar panels
    in square meters).
    Output: a list with the power generated (W, equivalently Wh per hour) for every hour of the year.'''
# Making Distance and Dec_radians lists for each day of the year
radians = np.pi/180*lattitude
Hours = [12,11,10,9,8,7,6,5,4,3,2,1,0,1,2,3,4,5,6,7,8,9,10,11] # A list of all the hours of the day
Solar_Flux = 0 # Energy generated from given area of solar panels in one hour
Watts_Every_Hour = [] # A list that will become the Wh/m^2 every hour for a year
kWh = 0 # A number that will become the total kWh in one place in one year.
for i in ThetaD_list:
# Calculate the Distance
Dis1 = Dis_an[0]*np.cos(Dis_n[0]*i)+Dis_bn[0]*np.sin(Dis_n[0]*i)
Dis2 = Dis_an[1]*np.cos(Dis_n[1]*i)+Dis_bn[1]*np.sin(Dis_n[1]*i)
Dis3 = Dis_an[2]*np.cos(Dis_n[2]*i)+Dis_bn[2]*np.sin(Dis_n[2]*i)
Distance = Dis1+Dis2+Dis3
# Calculate the Declination
Dec1 = Dec_an[0]*np.cos(Dec_n[0]*i)+Dec_bn[0]*np.sin(Dec_n[0]*i)
Dec2 = Dec_an[1]*np.cos(Dec_n[1]*i)+Dec_bn[1]*np.sin(Dec_n[1]*i)
Dec3 = Dec_an[2]*np.cos(Dec_n[2]*i)+Dec_bn[2]*np.sin(Dec_n[2]*i)
Dec4 = Dec_an[3]*np.cos(Dec_n[3]*i)+Dec_bn[3]*np.sin(Dec_n[3]*i)
Dec_radians = Dec1+Dec2+Dec3+Dec4
Dec_degrees = (np.degrees(Dec_radians))
for i in Hours:
Hour_angle = np.radians(i*15)
CSZA = (np.sin(radians)*np.sin(Dec_radians)) + (np.cos(radians)*np.cos(Dec_radians)*np.cos(Hour_angle))
if CSZA < 0:
CSZA = 0
Solar_Flux = (S0)*Distance*CSZA*Atm*panel_efficiency*area
Watts_Every_Hour.append(Solar_Flux)
kWh = sum(Watts_Every_Hour)/1000
return(Watts_Every_Hour)
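# --- Added illustration (not part of the original notebook) ---
# The function returns the hourly flux list; the cloud-free yearly total in kWh
# can be recovered from it, e.g. for 1 m^2 of 16%-efficient panel at latitude 49.7
# (hypothetical values):
# hourly = Solar_Energy_Calculator(49.7, 0.16, 1.0)
# print(sum(hourly) / 1000, 'kWh per year, ignoring clouds')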
# First, I'm loading the raw cloud cover data.
cloud_dat = pd.read_csv('../data/weather.txt',sep='\s+')
cloud_dat
# Right now the data is in 1 row and 10368 columns, so it requires some
# cleaning up
cloud_dat.shape
# After transposing, the data is in 1 column and 10368 rows
cloud_dat = cloud_dat.transpose()
cloud_dat.shape
# Now I will change the name of the column of data and reset the index
cloud_dat = cloud_dat.reset_index()
cloud_dat.columns=['cloud_ratio']
# Here is a glimpse of what the data looks like now
cloud_dat
# Next, I load a dataframe that I created in excel with three columns
# (month, lattitude, and longitude) that have been filled in to line up
# with the 'data' object.
clouds = pd.read_excel('../../data/blank_weather.xlsx')
clouds
# Now, we will add a fourth column to 'clouds' that is our data
clouds['cloud_ratio'] = cloud_dat['cloud_ratio']
clouds
Watts = Solar_Energy_Calculator(49.7,.16,1.68)
def find_sun(lat,long):
'''This function finds the ratio of clouds for any lattitude and longitude and converts
it into the ratio of radiation that reaches the earth.
inputs: lattitude, longitude
output: radiation ratio'''
x = clouds.loc[(clouds['lattitude'] <= lat) & (clouds['lattitude'] > (lat-2.5)) & (clouds['longitude'] <= long) & (clouds['longitude'] > (long-2.5))]
radiation_ratio = 1-((float(x.iloc[0,2])*0.6)/100)
return(radiation_ratio)
radiation = find_sun(49,-123)
radiation
def apply_clouds(watts,radiation):
'''This function takes a list of watts without clouds and radiation ratio due to clouds
and gives you a list of the real solar generation prediction.'''
energy = []
for i in watts:
energy.append(i*radiation)
return(energy)
final = apply_clouds(Watts,radiation)
sum(final)/1000
final = pd.DataFrame(final)
final = final.reset_index()
final.columns=['Day','Power']
final['Day'] = final['Day']/24
final
jan = sum(final.Power[0:1000])
plt.plot(final)
# change figure size
plt.figure(figsize=(12,9))
# add data to plot (x-axis, y-axis, )
plt.plot(final['Day'],final['Power'],color='b',linestyle='-')
# add title
plt.title('Power Output',fontsize=24)
# modify axis limits
plt.xlim(0,365)
# add axis labels
plt.ylabel('Average Power Generation (Watts)',fontsize=16)
plt.xlabel('Day of Year',fontsize=16)
# save figure to file
plt.savefig('TEST.pdf')
# show plot
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now I'm going to take the above function and do the same thing except make it print the number of Wh in one square meter for a year.
Step2: Loading Cloud Data
Step3: Now, 'clouds' is a tidy dataframe with the latitude, longitude, and average cloud cover ratio for every month across the entire world
Step4: Making a Plot of the Data
|
6,686
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from IPython.display import display
from IPython.display import (
display_pretty, display_html, display_jpeg,
display_png, display_json, display_latex, display_svg
)
from IPython.display import Image
assert True # leave this to grade the import statements
Image(url='http://i.imgur.com/h3YTC.jpg',embed=True,width=600,height=600)
assert True # leave this to grade the image display
%%html
<table>
<tr>
<th>Name</th>
<th>Symbol</th>
<th>Antiparticle</th>
<th>Charge(e)</th>
<th>Mass (MeV/c^2)</th>
<tr>
<td>up</td>
<td>u</td>
<td>$\bar{u}$</td>
<td>+2/3</td>
<td>1.5-3.3</td>
<tr>
<td>down</td>
<td>d</td>
<td>$\bar{d}$</td>
<td>-1/3</td>
<td>3.5-6.0</td>
<tr>
<td>charm</td>
<td>c</td>
<td>$\bar{c}$</td>
<td>+2/3</td>
<td>1,160-1,340</td>
<tr>
<td>strange</td>
<td>s</td>
<td>$\bar{s}$</td>
<td>-1/3</td>
<td>70-130</td>
<tr>
<td>top</td>
<td>t</td>
<td>$\bar{t}$</td>
<td>+2/3</td>
<td>169,100-173,300</td>
<tr>
<td>bottom</td>
<td>b</td>
<td>$\bar{b}$</td>
<td>-1/3</td>
<td>4,130-4,370</td>
</table>
assert True # leave this here to grade the quark table
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Basic rich display
Step2: Use the HTML object to display HTML in the notebook that reproduces the table of Quarks on this page. This will require you to learn about how to create HTML tables and then pass that to the HTML object for display. Don't worry about styling and formatting the table, but you should use LaTeX where appropriate.
|
6,687
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'bnu', 'sandbox-2', 'toplevel')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 2. Key Properties --> Flux Correction
Step7: 3. Key Properties --> Genealogy
Step8: 3.2. CMIP3 Parent
Step9: 3.3. CMIP5 Parent
Step10: 3.4. Previous Name
Step11: 4. Key Properties --> Software Properties
Step12: 4.2. Code Version
Step13: 4.3. Code Languages
Step14: 4.4. Components Structure
Step15: 4.5. Coupler
Step16: 5. Key Properties --> Coupling
Step17: 5.2. Atmosphere Double Flux
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Step19: 5.4. Atmosphere Relative Winds
Step20: 6. Key Properties --> Tuning Applied
Step21: 6.2. Global Mean Metrics Used
Step22: 6.3. Regional Metrics Used
Step23: 6.4. Trend Metrics Used
Step24: 6.5. Energy Balance
Step25: 6.6. Fresh Water Balance
Step26: 7. Key Properties --> Conservation --> Heat
Step27: 7.2. Atmos Ocean Interface
Step28: 7.3. Atmos Land Interface
Step29: 7.4. Atmos Sea-ice Interface
Step30: 7.5. Ocean Seaice Interface
Step31: 7.6. Land Ocean Interface
Step32: 8. Key Properties --> Conservation --> Fresh Water
Step33: 8.2. Atmos Ocean Interface
Step34: 8.3. Atmos Land Interface
Step35: 8.4. Atmos Sea-ice Interface
Step36: 8.5. Ocean Seaice Interface
Step37: 8.6. Runoff
Step38: 8.7. Iceberg Calving
Step39: 8.8. Endoreic Basins
Step40: 8.9. Snow Accumulation
Step41: 9. Key Properties --> Conservation --> Salt
Step42: 10. Key Properties --> Conservation --> Momentum
Step43: 11. Radiative Forcings
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Step45: 12.2. Additional Information
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Step47: 13.2. Additional Information
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Step49: 14.2. Additional Information
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Step51: 15.2. Additional Information
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Step53: 16.2. Additional Information
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Step55: 17.2. Equivalence Concentration
Step56: 17.3. Additional Information
Step57: 18. Radiative Forcings --> Aerosols --> SO4
Step58: 18.2. Additional Information
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Step60: 19.2. Additional Information
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Step62: 20.2. Additional Information
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Step64: 21.2. Additional Information
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Step66: 22.2. Aerosol Effect On Ice Clouds
Step67: 22.3. Additional Information
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Step69: 23.2. Aerosol Effect On Ice Clouds
Step70: 23.3. RFaci From Sulfate Only
Step71: 23.4. Additional Information
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Step73: 24.2. Additional Information
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Step77: 25.4. Additional Information
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Step81: 26.4. Additional Information
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Step83: 27.2. Additional Information
Step84: 28. Radiative Forcings --> Other --> Land Use
Step85: 28.2. Crop Change Only
Step86: 28.3. Additional Information
Step87: 29. Radiative Forcings --> Other --> Solar
Step88: 29.2. Additional Information
|
6,688
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
%matplotlib inline
known_behaviors = pd.read_csv("https://raw.githubusercontent.com/vberaudi/utwt/master/known_behaviors2.csv")
known_behaviors.head()
a = known_behaviors[known_behaviors.Mortgage == 1]
b = known_behaviors[known_behaviors.Pension == 1]
c = known_behaviors[known_behaviors.Savings == 1]
print("Number of clients: %d" %len(known_behaviors))
print("Number of clients predicted to buy mortgage accounts: %d" %len(a))
print("Number of clients predicted to buy pension accounts: %d" %len(b))
print("Number of clients predicted to buy savings accounts: %d" %len(c))
known_behaviors["nb_products"] = known_behaviors.Mortgage + known_behaviors.Pension + known_behaviors.Savings
abc = known_behaviors[known_behaviors.nb_products > 1]
print("We have %d clients who bought several products" %len(abc))
abc = known_behaviors[known_behaviors.nb_products == 3]
print("We have %d clients who bought all the products" %len(abc))
products = ["Savings", "Mortgage", "Pension"]
def plot_cloud_points(df):
figure = plt.figure(figsize=(20, 5))
my_cm = ListedColormap(['#bb0000', '#00FF00'])
    axes = {p : ('age', 'income') if p != "Mortgage" else ('members_in_household', 'loan_accounts') for p in products}
for product in products:
ax = plt.subplot(1, len(products), products.index(product)+1)
ax.set_title(product)
axe = axes[product]
plt.xlabel(axe[0])
plt.ylabel(axe[1])
ax.scatter(df[axe[0]], df[axe[1]], c=df[product], cmap=my_cm, alpha=0.5)
plot_cloud_points(known_behaviors)
known_behaviors.columns
cols = ['age', 'income', 'members_in_household', 'loan_accounts']
X = known_behaviors[cols]
ys = [known_behaviors[p] for p in products]
X.head()
from sklearn import svm
from sklearn import ensemble
classifiers = []
for i,p in enumerate(products):
clf = ensemble.GradientBoostingClassifier()
clf.fit(X, ys[i])
classifiers.append(clf)
unknown_behaviors = pd.read_csv("https://raw.githubusercontent.com/vberaudi/utwt/master/unknown_behaviors.csv")
for c in unknown_behaviors.columns:
assert c in known_behaviors.columns
to_predict = unknown_behaviors[cols]
print("Number of new customers: %d" %len(unknown_behaviors))
import warnings
warnings.filterwarnings('ignore')
predicted = [classifiers[i].predict(to_predict) for i in range(len(products))]
for i,p in enumerate(products):
to_predict[p] = predicted[i]
to_predict["id"] = unknown_behaviors["customer_id"]
offers = to_predict
offers.head()
plot_cloud_points(offers)
a = offers[offers.Mortgage == 1]
b = offers[offers.Pension == 1]
c = offers[offers.Savings == 1]
print("Number of new customers: %d" %len(offers))
print("Number of customers predicted to buy mortgages: %d" %len(a))
print("Number of customers predicted to buy pensions: %d" %len(b))
print("Number of customers predicted to buy savings: %d" %len(c))
to_predict["nb_products"] = to_predict.Mortgage + to_predict.Pension + to_predict.Savings
abc = to_predict[to_predict.nb_products > 1]
print("We predicted that %d clients would buy more than one product" %len(abc))
abc = to_predict[to_predict.nb_products == 3]
print("We predicted that %d clients would buy all three products" %len(abc))
offers.reset_index(inplace=True)
# How much revenue is earned when selling each product
productValue = [200, 300, 400]
value_per_product = {products[i] : productValue[i] for i in range(len(products))}
# Total available budget
availableBudget = 25000
# For each channel, cost of making a marketing action and success factor
channels = pd.DataFrame(data=[("gift", 20.0, 0.20),
("newsletter", 15.0, 0.05),
("seminar", 23.0, 0.30)], columns=["name", "cost", "factor"])
offersR = range(0, len(offers))
productsR = range(0, len(products))
channelsR = range(0, len(channels))
gsol = pd.DataFrame()
gsol['id'] = offers['id']
budget = 0
revenue = 0
for product in products:
gsol[product] = 0
noffers = len(offers)
# ensure the 10% per channel by choosing the most promising per channel
for c in channelsR: #, channel in channels.iterrows():
i = 0;
while (i< ( noffers // 10 ) ):
# find a possible offer in this channel for a customer not yet done
added = False
for o in offersR:
already = False
for product in products:
if gsol.get_value(index=o, col=product) == 1:
already = True
break
if already:
continue
possible = False
possibleProduct = None
for product in products:
if offers.get_value(index=o, col=product) == 1:
possible = True
possibleProduct = product
break
if not possible:
continue
#print "Assigning customer ", offers.get_value(index=o, col="id"), " with product ", product, " and channel ", channel['name']
gsol.set_value(index=o, col=possibleProduct, value=1)
i = i+1
added = True
budget = budget + channels.get_value(index=c, col="cost")
revenue = revenue + channels.get_value(index=c, col="factor")*value_per_product[product]
break
if not added:
print("NOT FEASIBLE")
break
# add more to complete budget
while (True):
added = False
for c, channel in channels.iterrows():
if (budget + channel.cost > availableBudget):
continue
# find a possible offer in this channel for a customer not yet done
for o in offersR:
already = False
for product in products:
if gsol.get_value(index=o, col=product) == 1:
already = True
break
if already:
continue
possible = False
possibleProduct = None
for product in products:
if offers.get_value(index=o, col=product) == 1:
possible = True
possibleProduct = product
break
if not possible:
continue
#print "Assigning customer ", offers.get_value(index=o, col="id"), " with product ", product, " and channel ", channel['name']
gsol.set_value(index=o, col=possibleProduct, value=1)
i = i+1
added = True
budget = budget + channel.cost
revenue = revenue + channel.factor*value_per_product[product]
break
if not added:
print("FINISH BUDGET")
break
print(gsol.head())
a = gsol[gsol.Mortgage == 1]
b = gsol[gsol.Pension == 1]
c = gsol[gsol.Savings == 1]
abc = gsol[(gsol.Mortgage == 1) | (gsol.Pension == 1) | (gsol.Savings == 1)]
print("Number of clients: %d" %len(abc))
print("Numbers of Mortgage offers: %d" %len(a))
print("Numbers of Pension offers: %d" %len(b))
print("Numbers of Savings offers: %d" %len(c))
print("Total Budget Spent: %d" %budget)
print("Total revenue: %d" %revenue)
comp1_df = pd.DataFrame(data=[["Greedy", revenue, len(abc), len(a), len(b), len(c), budget]], columns=["Algorithm","Revenue","Number of clients","Mortgage offers","Pension offers","Savings offers","Budget Spent"])
import sys
import docplex.mp
url = None
key = None
from docplex.mp.context import Context
context = Context.make_default_context()
context.solver.docloud.url = url
context.solver.docloud.key = key
context.solver.agent = 'docloud'
from docplex.mp.model import Model
mdl = Model(name="marketing_campaign", checker='on', context=context)
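# channelVars[o, p, c] == 1 means offer/customer o is assigned product p through channel c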
channelVars = mdl.binary_var_cube(offersR, productsR, channelsR)
# At most 1 product is offered to each customer
mdl.add_constraints( mdl.sum(channelVars[o,p,c] for p in productsR for c in channelsR) <=1
for o in offersR)
# Do not exceed the budget
mdl.add_constraint( mdl.sum(channelVars[o,p,c]*channels.get_value(index=c, col="cost")
for o in offersR
for p in productsR
for c in channelsR) <= availableBudget, "budget")
# At least 10% offers per channel
for c in channelsR:
mdl.add_constraint(mdl.sum(channelVars[o,p,c] for p in productsR for o in offersR) >= len(offers) // 10)
mdl.print_information()
obj = 0
for c in channelsR:
for p in productsR:
product=products[p]
coef = channels.get_value(index=c, col="factor") * value_per_product[product]
obj += mdl.sum(channelVars[o,p,c] * coef* offers.get_value(index=o, col=product) for o in offersR)
mdl.maximize(obj)
mdl.parameters.timelimit = 30
s = mdl.solve()
assert s, "No Solution !!!"
print(mdl.get_solve_status())
print(mdl.get_solve_details())
totaloffers = mdl.sum(channelVars[o,p,c]
for o in offersR
for p in productsR
for c in channelsR)
mdl.add_kpi(totaloffers, "nb_offers")
budgetSpent = mdl.sum(channelVars[o,p,c]*channels.get_value(index=c, col="cost")
for o in offersR
for p in productsR
for c in channelsR)
mdl.add_kpi(budgetSpent, "budgetSpent")
for c in channelsR:
channel = channels.get_value(index=c, col="name")
kpi = mdl.sum(channelVars[o,p,c] for p in productsR for o in offersR)
mdl.add_kpi(kpi, channel)
for p in productsR:
product = products[p]
kpi = mdl.sum(channelVars[o,p,c] for c in channelsR for o in offersR)
mdl.add_kpi(kpi, product)
mdl.report()
comp2_df = pd.DataFrame(data=[["CPLEX", mdl.objective_value, mdl.kpi_value_by_name('nb_offers'), mdl.kpi_value_by_name('Mortgage'), mdl.kpi_value_by_name('Pension'), mdl.kpi_value_by_name('Savings'), mdl.kpi_value_by_name('budgetSpent')]], columns=["Algorithm","Revenue","Number of clients","Mortgage offers","Pension offers","Savings offers","Budget Spent"])
comp_df = comp1_df.append(comp2_df, ignore_index=True)
comp_df
comp_df.set_index("Algorithm", inplace=True)
my_plot = comp_df['Revenue'].plot(kind='bar')
#get the hand on the budget constraint
ct = mdl.get_constraint_by_name("budget")
res = []
for i in range(20):
ct.rhs = availableBudget+1000*i
s = mdl.solve()
assert s, "No Solution !!!"
res.append((availableBudget+1000*i, mdl.objective_value, mdl.kpi_value_by_name("nb_offers"), mdl.kpi_value_by_name("budgetSpent")))
mdl.report()
pd.DataFrame(res, columns=["budget", "revenue", "nb_offers", "budgetSpent"])
ct.rhs = 0
s = mdl.solve()
if not s:
#rename the constraint with a "low" prefix to automatically put a low priority on it.
ct.name = "low_budget"
#setting all bool vars to 0 is an easy relaxation, so let's refuse it and force to offer something to 1/3 of the clients
mdl.add_constraint(totaloffers >= len(offers)//20, ctname="high")
# solve has failed, we try relaxation, based on constraint names
# constraints are prioritized according to their names
# if a name contains "low", it has priority LOW
# if a ct name contains "medium" it has priority MEDIUM
# same for HIGH
# if a constraint has no name or does not match any, it is not relaxable.
from docplex.mp.relaxer import Relaxer
relaxer = Relaxer(prioritizer='match', verbose=True)
relaxed_sol = relaxer.relax(mdl)
relaxed_ok = relaxed_sol is not None
assert relaxed_ok, "relaxation failed"
relaxer.print_information()
mdl.report()
print(mdl.get_solve_status())
print(mdl.get_solve_details())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Understand the historical data
Step2: Check the 2016 customers
Step3: Do some visual analysis of the historical data
Step4: In the following visualization, you can see the behavior of the 2016 customers for the three products.
Step5: Understanding the 2016 customers
Step6: Let's use the following columns as machine-learning features
Step7: We use a standard, basic gradient boosting algorithm to predict whether a customer might buy product A, B, or C.
Step8: New customer data and predictions
Step9: Predict behaviors of the new customers
Step10: Package new data with predictions for optimization
Step11: Do some visual analysis of the predicted data
Step12: The predicted data has the same semantics as the base data, with even clearer frontiers
Step13: Remarks on the prediction
Step14: Get business decisions on the 2017 data
Step15: Using a greedy algorithm
Step16: The greedy algorithm only gives a revenue of \$50.8K.
Step17: Set up the prescriptive model
Step18: docplex solve methods take various arguments (a short illustrative sketch appears at the end of this step list)
Step19: Define the decision variables
Step20: Set up the constraints
Step21: Express the objective
Step22: Solve with the Decision Optimization solve service
Step23: Analyze the solution
Step24: With the mathematical optimization, we made a better selection of customers.
Step25: The following cell takes a relatively long time to run because the jobs are run sequentially. The standard subscriptions to DOcplexcloud solve service only allow one job at a time, but you can buy special subscriptions with parallel solves. If you have such a subscription, modify the following cell to benefit from it.
Step26: Due to the business constraints, we can address a maximum of 1680 customers with a \$35615 budget.
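Relating to Step18 above, a minimal hedged docplex sketch, separate from the notebook's campaign model; the toy variable and bound are made up, while Model, continuous_var, parameters.timelimit, solve(log_output=...) and objective_value are standard docplex.mp API:
from docplex.mp.model import Model
toy = Model(name="toy")                        # throwaway model, not the campaign model
x = toy.continuous_var(name="x", ub=10)
toy.maximize(2 * x)
toy.parameters.timelimit = 5                   # engine time limit in seconds
s = toy.solve(log_output=False)                # log_output=True prints the CPLEX engine log
if s:                                          # solve() returns None if no solution was found
    print(s.objective_value)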
|
6,689
|
<ASSISTANT_TASK:>
Python Code:
# The original left these assignments blank; the completion below is a hedged placeholder
# (the edge list and the mcode helper are assumptions, not part of the source notebook).
import igraph
edge_list = [("A", "B"), ("B", "C"), ("A", "C")]                  # placeholder edges
krogan_graph = igraph.Graph.TupleList(edge_list, directed=False)
krogan_graph.summary()
res = mcode(krogan_graph, vwp=0.1)  # 'mcode' assumed to come from accompanying course code
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Make an igraph graph and print its summary
Step2: Run mcode on your graph with vwp=0.1
|
6,690
|
<ASSISTANT_TASK:>
Python Code:
import graphlab as gl
import numpy as np
def polynomial_sframe(feature, degree):
# assume that degree >= 1
# initialize the SFrame:
poly_sframe = gl.SFrame()
# and set poly_sframe['power_1'] equal to the passed feature
poly_sframe['power_1'] = feature
# first check if degree > 1
if degree > 1:
# then loop over the remaining degrees:
for power in range(2, degree+1):
# first we'll give the column a name:
name = 'power_' + str(power)
# assign poly_sframe[name] to be feature^power
#poly_sframe[name]= feature.apply(lambda x: x**power)
poly_sframe[name]= feature**power # can use this as well
return poly_sframe
import matplotlib.pyplot as plt
%matplotlib inline
sales = gl.SFrame('data/kc_house_data.gl/')
sales = sales.sort(['sqft_living','price'])
l2_small_penalty = 1e-5
poly15_data = polynomial_sframe(sales['sqft_living'], 15)
poly15_columns = poly15_data.column_names()
poly15_data['price'] = sales['price']
model = gl.linear_regression.create(poly15_data, target='price', features=poly15_columns,
l2_penalty=1e-5,
validation_set=None, verbose=False)
model.coefficients
(semi_split1, semi_split2) = sales.random_split(.5,seed=0)
(set_1, set_2) = semi_split1.random_split(0.5, seed=0)
(set_3, set_4) = semi_split2.random_split(0.5, seed=0)
poly15_set_1 = polynomial_sframe(set_1['sqft_living'], 15)
poly15_set_1_names = poly15_set_1.column_names()
poly15_set_1['price'] = set_1['price']
#print(poly15_set_1.head(2))
model15_set_1 = gl.linear_regression.create(poly15_set_1, target = 'price', l2_penalty=l2_small_penalty ,
features = poly15_set_1_names, validation_set = None,
verbose=False)
plt.plot(poly15_set_1['power_15'], poly15_set_1['price'], '.',
poly15_set_1['power_15'], model15_set_1.predict(poly15_set_1), '-', linewidth=2)
plt.grid(True)
coeff=model15_set_1.get('coefficients')
print(coeff[coeff['name']=='power_1'])
poly15_set_2 = polynomial_sframe(set_2['sqft_living'], 15)
poly15_set_2_names = poly15_set_2.column_names()
poly15_set_2['price'] = set_2['price']
model15_set_2 = gl.linear_regression.create(poly15_set_2, target = 'price', l2_penalty=l2_small_penalty,
features=poly15_set_2_names,
validation_set = None, verbose=False)
plt.plot(poly15_set_2['power_15'], poly15_set_2['price'], '.',
poly15_set_2['power_15'], model15_set_2.predict(poly15_set_2), '-', linewidth=2)
plt.grid(True)
coeff=model15_set_2.get('coefficients')
print(coeff[coeff['name']=='power_1'])
poly15_set_3 = polynomial_sframe(set_3['sqft_living'], 15)
poly15_set_3_names = poly15_set_3.column_names()
poly15_set_3['price'] = set_3['price']
model15_set_3 = gl.linear_regression.create(poly15_set_3, target = 'price', l2_penalty=l2_small_penalty,
features = poly15_set_3_names,
validation_set = None, verbose=False)
plt.plot(poly15_set_3['power_15'], poly15_set_3['price'], '.',
poly15_set_3['power_15'], model15_set_3.predict(poly15_set_3), '-', linewidth=2)
plt.grid(True)
coeff=model15_set_3.get('coefficients')
print(coeff[coeff['name']=='power_1'])
poly15_set_4 = polynomial_sframe(set_4['sqft_living'], 15)
poly15_set_4_names = poly15_set_4.column_names()
poly15_set_4['price'] = set_4['price']
model15_set_4 = gl.linear_regression.create(poly15_set_4, target = 'price', l2_penalty=l2_small_penalty,
features = poly15_set_4_names,
validation_set = None, verbose=False)
plt.plot(poly15_set_4['power_15'], poly15_set_4['price'], '.',
poly15_set_4['power_15'], model15_set_4.predict(poly15_set_4), '-', linewidth=2)
plt.grid(True)
coeff=model15_set_4.get('coefficients')
print(coeff[coeff['name']=='power_1'])
l2_penalty=1e5
poly15_set_1 = polynomial_sframe(set_1['sqft_living'], 15)
poly15_set_1_names = poly15_set_1.column_names()
poly15_set_1['price'] = set_1['price']
#print(poly15_set_1.head(2))
model15_set_1 = gl.linear_regression.create(poly15_set_1, target = 'price', l2_penalty=l2_penalty ,
features = poly15_set_1_names, validation_set = None,
verbose=False)
plt.plot(poly15_set_1['power_15'], poly15_set_1['price'], '.',
poly15_set_1['power_15'], model15_set_1.predict(poly15_set_1), '-', linewidth=2)
plt.grid(True)
coeff=model15_set_1.get('coefficients')
print(coeff[coeff['name']=='power_1'])
poly15_set_2 = polynomial_sframe(set_2['sqft_living'], 15)
poly15_set_2_names = poly15_set_2.column_names()
poly15_set_2['price'] = set_2['price']
model15_set_2 = gl.linear_regression.create(poly15_set_2, target = 'price', l2_penalty=l2_penalty,
features=poly15_set_2_names,
validation_set = None, verbose=False)
plt.plot(poly15_set_2['power_15'], poly15_set_2['price'], '.',
poly15_set_2['power_15'], model15_set_2.predict(poly15_set_2), '-', linewidth=2)
plt.grid(True)
coeff=model15_set_2.get('coefficients')
print(coeff[coeff['name']=='power_1'])
poly15_set_3 = polynomial_sframe(set_3['sqft_living'], 15)
poly15_set_3_names = poly15_set_3.column_names()
poly15_set_3['price'] = set_3['price']
model15_set_3 = gl.linear_regression.create(poly15_set_3, target = 'price', l2_penalty=l2_penalty,
features = poly15_set_3_names,
validation_set = None, verbose=False)
plt.plot(poly15_set_3['power_15'], poly15_set_3['price'], '.',
poly15_set_3['power_15'], model15_set_3.predict(poly15_set_3), '-', linewidth=2)
plt.grid(True)
coeff=model15_set_3.get('coefficients')
print(coeff[coeff['name']=='power_1'])
poly15_set_4 = polynomial_sframe(set_4['sqft_living'], 15)
poly15_set_4_names = poly15_set_4.column_names()
poly15_set_4['price'] = set_4['price']
model15_set_4 = gl.linear_regression.create(poly15_set_4, target = 'price', l2_penalty=l2_penalty,
features = poly15_set_4_names,
validation_set = None, verbose=False)
plt.plot(poly15_set_4['power_15'], poly15_set_4['price'], '.',
poly15_set_4['power_15'], model15_set_4.predict(poly15_set_4), '-', linewidth=2)
plt.grid(True)
coeff=model15_set_4.get('coefficients')
print(coeff[coeff['name']=='power_1'])
(train_valid, test) = sales.random_split(.9, seed=1)
train_valid_shuffled = gl.toolkits.cross_validation.shuffle(train_valid, random_seed=1)
n = len(train_valid_shuffled)
k = 10 # 10-fold cross-validation
for i in xrange(k):
start = (n*i)/k
end = (n*(i+1))/k-1
print i, (start, end)
train_valid_shuffled[0:10] # rows 0 to 9
validation4=train_valid_shuffled[5818:7758]
print int(round(validation4['price'].mean(), 0))
n = len(train_valid_shuffled)
first_two = train_valid_shuffled[0:2]
last_two = train_valid_shuffled[n-2:n]
print first_two.append(last_two)
# train_valid_shuffled[0:start].append(train_valid_shuffled[end+1:n])
# segment3 indices - (5818, 7757)
n = len(train_valid_shuffled)
train4 = train_valid_shuffled[0:5818].append(train_valid_shuffled[7758:n])
#train4 = part1.copy()
print int(round(train4['price'].mean(), 0))
def k_fold_cross_validation(k, l2_penalty, data, output_name, features_list):
validation_error=[]
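    # Fold i uses rows [n*i//k, n*(i+1)//k) as the validation set; all other rows form the training set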
for i in range(k):
n = len(data)
start = (n*i)/k
end = (n*(i+1))/k-1
validation_set = data[start:end+1]
training_set = data[0:start].append(data[end+1:n])
model = gl.linear_regression.create(training_set, target=output_name, features=features_list,
l2_penalty=l2_penalty,
validation_set=None,
verbose=False)
err = np.sum(np.square(validation_set[output_name] - model.predict(validation_set)))
validation_error.append(err)
if i==k-1:
rss = np.mean(validation_error)
return rss
k = 10
data = polynomial_sframe(train_valid_shuffled['sqft_living'],15)
features_name = data.column_names()
data['price'] = train_valid_shuffled['price']
min_err=None
best_l2_penalty=None
l2_penalty_list=np.logspace(1, 7, num=13)
l2_error=[]
for l2_penalty in l2_penalty_list:
error = k_fold_cross_validation(k, l2_penalty, data, 'price', features_list=features_name)
l2_error.append(error)
if min_err is None or min_err > error:
min_err = error
best_l2_penalty = l2_penalty
print min_err, best_l2_penalty
# Plot the l2_penalty values in the x axis and the cross-validation error in the y axis.
# Using plt.xscale('log') will make your plot more intuitive.
plt.plot(l2_penalty_list, l2_error, 'r-')
plt.xscale('log')
#plt.yscale('log')
plt.xlabel('l2 penalty')
plt.ylabel('RSS')
plt.grid(True)
best_l2_penalty = 1000
data = polynomial_sframe(train_valid_shuffled['sqft_living'],15)
features_name = data.column_names()
data['price'] = train_valid_shuffled['price']
model = gl.linear_regression.create(data, target='price', l2_penalty=best_l2_penalty,
validation_set=None, features=features_name,
verbose=None)
test_data = polynomial_sframe(test['sqft_living'],15)
test_data['price'] = test['price']
RSS = np.sum(np.square(test['price'] - model.predict(test_data)))
print RSS
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Polynomial regression, revisited
Step2: Let's use matplotlib to visualize what a polynomial regression looks like on the house data.
Step3: As in Week 3, we will use the sqft_living variable. For plotting purposes (connecting the dots), you'll need to sort by the values of sqft_living. For houses with identical square footage, we break the tie by their prices.
Step4: Let us revisit the 15th-order polynomial model using the 'sqft_living' input. Generate polynomial features up to degree 15 using polynomial_sframe() and fit a model with these features. When fitting the model, use an L2 penalty of 1e-5
Step5: Note
Step6: QUIZ QUESTION
Step7: Next, fit a 15th degree polynomial on set_1, set_2, set_3, and set_4, using 'sqft_living' to predict prices. Print the weights and make a plot of the resulting model.
Step8: The four curves should differ from one another a lot, as should the coefficients you learned.
Step9: These curves should vary a lot less, now that you applied a high degree of regularization.
Step10: Once the data is shuffled, we divide it into equal segments. Each segment should receive n/k elements, where n is the number of observations in the training set and k is the number of segments. Since the segment 0 starts at index 0 and contains n/k elements, it ends at index (n/k)-1. The segment 1 starts where the segment 0 left off, at index (n/k). With n/k elements, the segment 1 ends at index (n*2/k)-1. Continuing in this fashion, we deduce that the segment i starts at index (n*i/k) and ends at (n*(i+1)/k)-1. (A quick numeric check of this arithmetic appears after this step list.)
Step11: Let us familiarize ourselves with array slicing with SFrame. To extract a continuous slice from an SFrame, use colon in square brackets. For instance, the following cell extracts rows 0 to 9 of train_valid_shuffled. Notice that the first index (0) is included in the slice but the last index (10) is omitted.
Step12: Now let us extract individual segments with array slicing. Consider the scenario where we group the houses in the train_valid_shuffled dataframe into k=10 segments of roughly equal size, with starting and ending indices computed as above.
Step13: To verify that we have the right elements extracted, run the following cell, which computes the average price of the fourth segment. When rounded to nearest whole number, the average should be $536,234.
Step14: After designating one of the k segments as the validation set, we train a model using the rest of the data. To choose the remainder, we slice (0
Step15: Extract the remainder of the data after excluding fourth segment (segment 3) and assign the subset to train4.
Step16: To verify that we have the right elements extracted, run the following cell, which computes the average price of the data with fourth segment excluded. When rounded to nearest whole number, the average should be $539,450.
Step17: Now we are ready to implement k-fold cross-validation. Write a function that computes k validation errors by designating each of the k segments as the validation set. It accepts as parameters (i) k, (ii) l2_penalty, (iii) dataframe, (iv) name of output column (e.g. price) and (v) list of feature names. The function returns the average validation error using k segments as validation sets.
Step18: Once we have a function to compute the average validation error for a model, we can write a loop to find the model that minimizes the average validation error. Write a loop that does the following
Step19: QUIZ QUESTIONS
Step20: Once you found the best value for the L2 penalty using cross-validation, it is important to retrain a final model on all of the training data using this value of l2_penalty. This way, your final model will be trained on the entire dataset.
Step21: QUIZ QUESTION
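A quick standalone check of the segment-boundary arithmetic from Step10 (hedged sketch; n and k are small made-up values, Python 3 print syntax):
n, k = 10, 3
for i in range(k):
    print(i, (n * i // k, n * (i + 1) // k - 1))   # -> 0 (0, 2), 1 (3, 5), 2 (6, 9)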
|
6,691
|
<ASSISTANT_TASK:>
Python Code:
def task1():
cursor = db.cursor()
cursor.execute('''
select distinct ar.Name
from tracks t
inner join albums al
on t.albumid = al.albumid
inner join artists ar
on al.artistid = ar.artistid
inner join genres g
on t.genreid = g.genreid
where g.name = 'Rock'
''')
ar = cursor.fetchall()
return [x[0] for x in ar]
task1()
def task2():
cursor=db.cursor()
cursor.execute('''
DROP TABLE IF EXISTS students''')
cursor.execute('''
CREATE TABLE Students(id INTEGER PRIMARY KEY, name TEXT, gpa NUMBER(10,2))''')
db.commit()
task2()
%%sql
select *
from students
%%sql
select coalesce(max(id)+1, 1) as new_id from students
def task3(l_students):
cursor = db.cursor()
cursor.execute( '''
SELECT COALESCE(MAX(ID)+1, 1) AS new_id FROM students''')
new_id = cursor.fetchone()[0]
for i, student in enumerate(l_students):
cursor.execute('''
INSERT INTO Students(id, name, gpa) VALUES(?,?,?)''', (new_id + i, student[0], student[1]))
db.commit()
task3([['Ivanov', 3.2], ['Petrov', 4.2]])
%%sql
SELECT *
FROM Students
def task4():
cursor = db.cursor()
cursor.execute('''DROP TABLE IF EXISTS faculties''')
cursor.execute('''CREATE TABLE faculties(fac_id INTEGER PRIMARY KEY, name TEXT)''')
cursor.execute('''ALTER TABLE students ADD fac_id INTEGER REFERENCES faculties(fac_id)''')
db.commit()
task4()
%%sql
select *
from faculties
%%sql
select *
from Students
%%sql
INSERT INTO faculties(fac_id, name)
VALUES (1, 'IT'), (2, 'KIB'), (3, 'Math')
%%sql
select *
from faculties
a = input('1 {}', '2')
def task5():
cursor = db.cursor()
cursor.execute('Select id, name, gpa from Students')
a = cursor.fetchall()
for x in a:
print("Введите факультет для студента {} с id = {} и gpa = {}".format(x[1], x[0], x[2]))
fac_name = input()
cursor.execute("SELECT fac_id from faculties where name = ?", (fac_name, ))
        # Check whether such a record exists
try:
fac_id = cursor.fetchone()[0]
except TypeError:
continue
cursor.execute("Update students set fac_id = ? where id = ?", (fac_id, x[0],))
db.commit()
task5()
%%sql
SELECT *
FROM students
task5()
%%sql
SELECT *
FROM Students
def task6(fac_name, l_id):
cursor = db.cursor()
cursor.execute( '''
SELECT COALESCE(MAX(fac_id)+1, 1) AS new_fac_id FROM faculties''')
new_id = cursor.fetchone()[0]
cursor.execute('''
INSERT INTO faculties(fac_id, name) VALUES(?,?)''', (new_id, fac_name,))
for x in l_id:
cursor.execute('''
Update students set fac_id = ? where id = ?''', (new_id, x, ))
db.commit()
task6('Hist', [1])
%%sql
select *
from students
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Task 2
Step2: Let's check that the table was created
Step3: Task 3
Step4: Task 4
Step5: Task 5
Step6: Task 6
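A hedged aside on Task 3: sqlite3 can also insert several rows at once with executemany; this assumes the same db connection and Students(id, name, gpa) table used above, and the sample rows are made up:
rows = [(10, 'Sidorov', 3.9), (11, 'Smirnov', 4.5)]
cursor = db.cursor()
cursor.executemany("INSERT INTO Students(id, name, gpa) VALUES (?, ?, ?)", rows)
db.commit()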
|
6,692
|
<ASSISTANT_TASK:>
Python Code:
# import the maps
from marvin.tools.maps import Maps
# Load a MPL-5 map
mapfile = '/Users/Brian/Work/Manga/analysis/v2_0_1/2.0.2/SPX-GAU-MILESHC/8485/1901/manga-8485-1901-MAPS-SPX-GAU-MILESHC.fits.gz'
# Let's get a default map of
maps = Maps(filename=mapfile)
print(maps)
# Let's grab the H-alpha flux emission line map
haflux = maps.getMap('emline_gflux', channel='ha_6564')
print(haflux)
# turn on interactive plotting
%matplotlib notebook
# let's plot it
haflux.plot()
haflux.value, haflux.mask
import numpy as np
# select the locations where the mask is non-zero
badvals = np.where(haflux.mask > 0)
# set those values to a numpy nan.
haflux.value[badvals] = np.nan
# check the min and max
print('min', np.nanmin(haflux.value), 'max', np.nanmax(haflux.value))
haflux.plot()
# Let's look at the NII-to-Halpha emission-line ratio map
niiha = maps.getMapRatio('emline_gflux', 'nii_6585', 'ha_6564')
print(niiha)
niiha.plot()
# import the Cube tool
from marvin.tools.cube import Cube
# point to your file
filename ='/Users/Brian/Work/Manga/redux/v2_0_1/8485/stack/manga-8485-1901-LOGCUBE.fits.gz'
# get a cube
cube = Cube(filename=filename)
print(cube)
maps = cube.getMaps()
print(maps)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Once you have a maps object, you can access the raw maps file and header and extensions via maps.header and maps.data. Alternatively, you can access individual maps using the getMap method. getMap works by specifying a parameter and a channel. The parameter and channels names are equivalent to those found in the MAPS FITS extensions and headers, albeit lowercased.
Step2: We can easily plot the map using the internal plot function. Currently maps are plotted using some default Matplotlib color schemes and scaling.
Step3: Try Yourself Now try grabbing and plotting the map for stellar velocity in the cell below.
Step4: Let's replot the Halpha flux map but exclude all regions that have a non-zero mask. We need the numpy Python package for this.
Step5: From the maps object, we can also easily plot the ratio between two maps, e.g. emission-line ratios, using the getMapRatio method. Map ratios are Map objects the same as any other, so you can access their array values or plot them
Step6: Try Yourself Modify the above to display the map for the emission-line ratio OIII/Hbeta
Step7: Once we have a cube, we can get its maps using the getMaps method. getMaps is just a wrapper to the Marvin Maps Tool. Once we have the maps, we can do all the same things as before.
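For the two 'Try Yourself' prompts above, a hedged sketch of one possible answer; the property and channel names ('stellar_vel', 'oiii_5008', 'hb_4862') are assumptions and should be checked against the MaNGA DAP datamodel for this MPL:
stvel = maps.getMap('stellar_vel')                                # stellar velocity map (assumed name)
stvel.plot()
o3hb = maps.getMapRatio('emline_gflux', 'oiii_5008', 'hb_4862')   # OIII/Hbeta ratio (assumed channel names)
o3hb.plot()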
|
6,693
|
<ASSISTANT_TASK:>
Python Code:
!wget -q https://raw.githubusercontent.com/sayantanauddy/vae_lightning/main/data.py
!wget -q https://raw.githubusercontent.com/probml/pyprobml/master/scripts/mfa_celeba_helpers.py
!pip install kaggle
from google.colab import files
uploaded = files.upload()
!mkdir /root/.kaggle
!cp kaggle.json /root/.kaggle/kaggle.json
!chmod 600 /root/.kaggle/kaggle.json
!pip install torchvision
!pip install pytorch-lightning
import sys, os
import torch
from torchvision.datasets import CelebA, MNIST
import torchvision.transforms as transforms
from pytorch_lightning import LightningDataModule, LightningModule, Trainer
from torch.utils.data import DataLoader, random_split
import numpy as np
from matplotlib import pyplot as plt
from imageio import imwrite
from packaging import version
from mfa_celeba_helpers import *
from data import CelebADataset, CelebADataModule
MFA model training (data fitting) example.
Note that the actual EM (and SGD) training code is part of the MFA class itself.
def main(argv):
assert version.parse(torch.__version__) >= version.parse("1.2.0")
dataset = argv[1] if len(argv) == 2 else "celeba"
print("Preparing dataset and parameters for", dataset, "...")
if dataset == "celeba":
image_shape = [64, 64, 3] # The input image shape
n_components = 300 # Number of components in the mixture model
n_factors = 10 # Number of factors - the latent dimension (same for all components)
batch_size = 1000 # The EM batch size
num_iterations = 30 # Number of EM iterations (=epochs)
feature_sampling = 0.2 # For faster responsibilities calculation, randomly sample the coordinates (or False)
mfa_sgd_epochs = 0 # Perform additional training with diagonal (per-pixel) covariance, using SGD
init_method = "rnd_samples" # Initialize each component from few random samples using PPCA
trans = transforms.Compose(
[
CropTransform((25, 50, 25 + 128, 50 + 128)),
transforms.Resize(image_shape[0]),
transforms.ToTensor(),
ReshapeTransform([-1]),
]
)
train_set = CelebADataset(root="./data", split="train", transform=trans, download=True)
test_set = CelebADataset(root="./data", split="test", transform=trans, download=True)
elif dataset == "mnist":
image_shape = [28, 28] # The input image shape
n_components = 50 # Number of components in the mixture model
n_factors = 6 # Number of factors - the latent dimension (same for all components)
batch_size = 1000 # The EM batch size
num_iterations = 30 # Number of EM iterations (=epochs)
feature_sampling = False # For faster responsibilities calculation, randomly sample the coordinates (or False)
mfa_sgd_epochs = 0 # Perform additional training with diagonal (per-pixel) covariance, using SGD
init_method = "kmeans" # Initialize by using k-means clustering
trans = transforms.Compose([transforms.ToTensor(), ReshapeTransform([-1])])
train_set = MNIST(root="./data", train=True, transform=trans, download=True)
test_set = MNIST(root="./data", train=False, transform=trans, download=True)
else:
assert False, "Unknown dataset: " + dataset
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
model_dir = "./models/" + dataset
os.makedirs(model_dir, exist_ok=True)
figures_dir = "./figures/" + dataset
os.makedirs(figures_dir, exist_ok=True)
model_name = "c_{}_l_{}_init_{}".format(n_components, n_factors, init_method)
print("Defining the MFA model...")
model = MFA(
n_components=n_components, n_features=np.prod(image_shape), n_factors=n_factors, init_method=init_method
).to(device=device)
print("EM fitting: {} components / {} factors / batch size {} ...".format(n_components, n_factors, batch_size))
ll_log = model.batch_fit(
train_set, test_set, batch_size=batch_size, max_iterations=num_iterations, feature_sampling=feature_sampling
)
if mfa_sgd_epochs > 0:
print("Continuing training using SGD with diagonal (instead of isotropic) noise covariance...")
model.isotropic_noise = False
ll_log_sgd = model.sgd_mfa_train(
train_set, test_size=256, max_epochs=mfa_sgd_epochs, feature_sampling=feature_sampling
)
ll_log += ll_log_sgd
print("Saving the model...")
torch.save(model.state_dict(), os.path.join(model_dir, "model_" + model_name + ".pth"))
print("Visualizing the trained model...")
model_image = visualize_model(model, image_shape=image_shape, end_component=10)
imwrite(os.path.join(figures_dir, "model_" + model_name + ".jpg"), model_image)
print("Generating random samples...")
rnd_samples, _ = model.sample(100, with_noise=False)
mosaic = samples_to_mosaic(rnd_samples, image_shape=image_shape)
imwrite(os.path.join(figures_dir, "samples_" + model_name + ".jpg"), mosaic)
print("Plotting test log-likelihood graph...")
plt.plot(ll_log, label="c{}_l{}_b{}".format(n_components, n_factors, batch_size))
plt.grid(True)
plt.savefig(os.path.join(figures_dir, "training_graph_" + model_name + ".jpg"))
print("Done")
if __name__ == "__main__":
main(sys.argv)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Getting helper functions
Step2: Get the Kaggle api token and upload it to colab. Follow the instructions here.
Step4: Train and saving the checkpoint
|
6,694
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf
from tensorflow import keras
# Define the Keras model to add callbacks to
def get_model():
model = keras.Sequential()
model.add(keras.layers.Dense(1, input_dim=784))
model.compile(
optimizer=keras.optimizers.RMSprop(learning_rate=0.1),
loss="mean_squared_error",
metrics=["mean_absolute_error"],
)
return model
# Load example MNIST data and pre-process it
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0
# Limit the data to 1000 samples
x_train = x_train[:1000]
y_train = y_train[:1000]
x_test = x_test[:1000]
y_test = y_test[:1000]
class CustomCallback(keras.callbacks.Callback):
def on_train_begin(self, logs=None):
keys = list(logs.keys())
print("Starting training; got log keys: {}".format(keys))
def on_train_end(self, logs=None):
keys = list(logs.keys())
print("Stop training; got log keys: {}".format(keys))
def on_epoch_begin(self, epoch, logs=None):
keys = list(logs.keys())
print("Start epoch {} of training; got log keys: {}".format(epoch, keys))
def on_epoch_end(self, epoch, logs=None):
keys = list(logs.keys())
print("End epoch {} of training; got log keys: {}".format(epoch, keys))
def on_test_begin(self, logs=None):
keys = list(logs.keys())
print("Start testing; got log keys: {}".format(keys))
def on_test_end(self, logs=None):
keys = list(logs.keys())
print("Stop testing; got log keys: {}".format(keys))
def on_predict_begin(self, logs=None):
keys = list(logs.keys())
print("Start predicting; got log keys: {}".format(keys))
def on_predict_end(self, logs=None):
keys = list(logs.keys())
print("Stop predicting; got log keys: {}".format(keys))
def on_train_batch_begin(self, batch, logs=None):
keys = list(logs.keys())
print("...Training: start of batch {}; got log keys: {}".format(batch, keys))
def on_train_batch_end(self, batch, logs=None):
keys = list(logs.keys())
print("...Training: end of batch {}; got log keys: {}".format(batch, keys))
def on_test_batch_begin(self, batch, logs=None):
keys = list(logs.keys())
print("...Evaluating: start of batch {}; got log keys: {}".format(batch, keys))
def on_test_batch_end(self, batch, logs=None):
keys = list(logs.keys())
print("...Evaluating: end of batch {}; got log keys: {}".format(batch, keys))
def on_predict_batch_begin(self, batch, logs=None):
keys = list(logs.keys())
print("...Predicting: start of batch {}; got log keys: {}".format(batch, keys))
def on_predict_batch_end(self, batch, logs=None):
keys = list(logs.keys())
print("...Predicting: end of batch {}; got log keys: {}".format(batch, keys))
model = get_model()
model.fit(
x_train,
y_train,
batch_size=128,
epochs=1,
verbose=0,
validation_split=0.5,
callbacks=[CustomCallback()],
)
res = model.evaluate(
x_test, y_test, batch_size=128, verbose=0, callbacks=[CustomCallback()]
)
res = model.predict(x_test, batch_size=128, callbacks=[CustomCallback()])
class LossAndErrorPrintingCallback(keras.callbacks.Callback):
def on_train_batch_end(self, batch, logs=None):
print(
"Up to batch {}, the average loss is {:7.2f}.".format(batch, logs["loss"])
)
def on_test_batch_end(self, batch, logs=None):
print(
"Up to batch {}, the average loss is {:7.2f}.".format(batch, logs["loss"])
)
def on_epoch_end(self, epoch, logs=None):
print(
"The average loss for epoch {} is {:7.2f} "
"and mean absolute error is {:7.2f}.".format(
epoch, logs["loss"], logs["mean_absolute_error"]
)
)
model = get_model()
model.fit(
x_train,
y_train,
batch_size=128,
epochs=2,
verbose=0,
callbacks=[LossAndErrorPrintingCallback()],
)
res = model.evaluate(
x_test,
y_test,
batch_size=128,
verbose=0,
callbacks=[LossAndErrorPrintingCallback()],
)
import numpy as np
class EarlyStoppingAtMinLoss(keras.callbacks.Callback):
Stop training when the loss is at its min, i.e. the loss stops decreasing.
Arguments:
patience: Number of epochs to wait after min has been hit. After this
number of no improvement, training stops.
def __init__(self, patience=0):
super(EarlyStoppingAtMinLoss, self).__init__()
self.patience = patience
# best_weights to store the weights at which the minimum loss occurs.
self.best_weights = None
def on_train_begin(self, logs=None):
# The number of epoch it has waited when loss is no longer minimum.
self.wait = 0
# The epoch the training stops at.
self.stopped_epoch = 0
# Initialize the best as infinity.
self.best = np.Inf
def on_epoch_end(self, epoch, logs=None):
current = logs.get("loss")
if np.less(current, self.best):
self.best = current
self.wait = 0
# Record the best weights if current results is better (less).
self.best_weights = self.model.get_weights()
else:
self.wait += 1
if self.wait >= self.patience:
self.stopped_epoch = epoch
self.model.stop_training = True
print("Restoring model weights from the end of the best epoch.")
self.model.set_weights(self.best_weights)
def on_train_end(self, logs=None):
if self.stopped_epoch > 0:
print("Epoch %05d: early stopping" % (self.stopped_epoch + 1))
model = get_model()
model.fit(
x_train,
y_train,
batch_size=64,
steps_per_epoch=5,
epochs=30,
verbose=0,
callbacks=[LossAndErrorPrintingCallback(), EarlyStoppingAtMinLoss()],
)
class CustomLearningRateScheduler(keras.callbacks.Callback):
Learning rate scheduler which sets the learning rate according to schedule.
Arguments:
schedule: a function that takes an epoch index
(integer, indexed from 0) and current learning rate
as inputs and returns a new learning rate as output (float).
def __init__(self, schedule):
super(CustomLearningRateScheduler, self).__init__()
self.schedule = schedule
def on_epoch_begin(self, epoch, logs=None):
if not hasattr(self.model.optimizer, "lr"):
raise ValueError('Optimizer must have a "lr" attribute.')
# Get the current learning rate from model's optimizer.
lr = float(tf.keras.backend.get_value(self.model.optimizer.learning_rate))
# Call schedule function to get the scheduled learning rate.
scheduled_lr = self.schedule(epoch, lr)
# Set the value back to the optimizer before this epoch starts
tf.keras.backend.set_value(self.model.optimizer.lr, scheduled_lr)
print("\nEpoch %05d: Learning rate is %6.4f." % (epoch, scheduled_lr))
LR_SCHEDULE = [
# (epoch to start, learning rate) tuples
(3, 0.05),
(6, 0.01),
(9, 0.005),
(12, 0.001),
]
def lr_schedule(epoch, lr):
Helper function to retrieve the scheduled learning rate based on epoch.
if epoch < LR_SCHEDULE[0][0] or epoch > LR_SCHEDULE[-1][0]:
return lr
for i in range(len(LR_SCHEDULE)):
if epoch == LR_SCHEDULE[i][0]:
return LR_SCHEDULE[i][1]
return lr
model = get_model()
model.fit(
x_train,
y_train,
batch_size=64,
steps_per_epoch=5,
epochs=15,
verbose=0,
callbacks=[
LossAndErrorPrintingCallback(),
CustomLearningRateScheduler(lr_schedule),
],
)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Writing your own callbacks
Step2: Overview of Keras callbacks
Step3: Then, load the MNIST data from the Keras datasets API for training and testing:
Step4: Next, define a simple custom callback to log the following:
Step5: Let's try it out:
Step6: Usage of the logs dict
Step8: Usage of the self.model attribute
Step11: Learning rate scheduling
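A hedged aside to the learning-rate scheduling step: Keras also ships a built-in LearningRateScheduler callback that accepts the same kind of schedule function used above (this assumes TensorFlow imported as tf, as in the cells above; the halving schedule is only an illustration):
def halve_after_ten(epoch, lr):
    return lr * 0.5 if epoch >= 10 else lr
lr_cb = tf.keras.callbacks.LearningRateScheduler(halve_after_ten, verbose=1)
# model.fit(..., callbacks=[lr_cb])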
|
6,695
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
def sobel(f):
from pconv import pconv
Sx = np.array([[1.,2.,1.],
[0.,0.,0.],
[-1.,-2.,-1.]])
Sy = np.array([[1.,0.,-1.],
[2.,0.,-2.],
[1.,0.,-1.]])
fx = pconv(f, Sx)
fy = pconv(f, Sy)
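    # Gradient magnitude via the complex modulus: |fx + i*fy| = sqrt(fx**2 + fy**2)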
mag = np.abs(fx + fy*1j)
theta = np.arctan2(fy,fx)
return mag,theta
testing = (__name__ == "__main__")
if testing:
! jupyter nbconvert --to python sobel.ipynb
import numpy as np
import sys,os
ia898path = os.path.abspath('../../')
if ia898path not in sys.path:
sys.path.append(ia898path)
import ia898.src as ia
import matplotlib.image as mpimg
if testing:
f = np.array([[0,1,0,0],
[0,0,0,0],
[0,0,0,0]],dtype='uint8')
m,t = ia.sobel(f)
print('m:\n',m)
print('t:\n',t)
if testing:
f = mpimg.imread('../data/cameraman.tif')
(g,a) = ia.sobel(f)
nb = ia.nbshow(2)
nb.nbshow(ia.normalize(g),title='Sobel')
nb.nbshow(ia.normalize(np.log(g+1)),title='Log of sobel')
nb.nbshow()
if testing:
f = ia.circle([200,300], 90, [100,150])
m,t = ia.sobel(f)
dt = np.select([m > 2], [t])
nb = ia.nbshow(3)
nb.nbshow(f,title='Image f')
nb.nbshow(ia.normalize(m), title='Magnitude of Sobel filtering')
nb.nbshow(ia.normalize(dt), title='Angle of edges with magnitude above 2')
nb.nbshow()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Description
Step2: Numerical Example
Step3: Image examples
Step4: Example 2.
|
6,696
|
<ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
# TODO: Implement Function
words = list(set(text))
vocab_to_int = {word: i for i, word in enumerate(words)}
int_to_vocab = {i: word for i, word in enumerate(words)}
return vocab_to_int, int_to_vocab
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
# TODO: Implement Function
return {
".": "||period||",
",": "||comma||",
"\"": "||quotation_mark||",
";": "||semi_colon||",
"!": "||exclamation_mark||",
"?": "||question_mark||",
"(": "||left_parentheses||",
")": "||right_parentheses||",
"--": "||dash||",
"\n": "||return||",
}
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
def get_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
# TODO: Implement Function
input = tf.placeholder(dtype=tf.int32, shape=(None, None), name='input')
targets = tf.placeholder(dtype=tf.int32, shape=(None, None))
learning = tf.placeholder(dtype=tf.float32)
return input, targets, learning
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
def get_init_cell(batch_size, rnn_size):
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
# TODO: Implement Function
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
cell = tf.contrib.rnn.MultiRNNCell([lstm] * 2)
initial_state = cell.zero_state(batch_size, tf.float32)
initial_state = tf.identity(initial_state, name="initial_state")
return cell, initial_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
def get_embed(input_data, vocab_size, embed_dim):
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
# TODO: Implement Function
embeddings = tf.Variable(tf.truncated_normal([vocab_size, embed_dim], stddev=.1))
return tf.nn.embedding_lookup(embeddings, input_data)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
def build_rnn(cell, inputs):
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
outputs, state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
return outputs, tf.identity(state, name="final_state")
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
def build_nn(cell, rnn_size, input_data, vocab_size):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:return: Tuple (Logits, FinalState)
# TODO: Implement Function
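    # Embed the word ids, run them through the stacked LSTM, then project to vocab-sized logits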
    embed_dim = 200
    inputs = get_embed(input_data, vocab_size, embed_dim)
outputs, state = build_rnn(cell, inputs)
logits = tf.contrib.layers.fully_connected(
outputs,
vocab_size,
activation_fn=None,
biases_initializer=tf.contrib.layers.xavier_initializer()
)
return logits, state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
def get_batches(int_text, batch_size, seq_length):
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
# TODO: Implement Function
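    # Output shape: (number_of_batches, 2, batch_size, seq_length); axis 1 holds (inputs, targets)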
num_blocs = len(int_text) // batch_size // seq_length
total_len = batch_size * seq_length * num_blocs
inputs = np.array(int_text[:total_len])
targets = np.array(int_text[1:total_len+1])
arr = np.stack((inputs, targets)) # (2 x total_len)
batches = np.array(np.split(arr, batch_size, axis=1)) # (batch_size, 2, num_blocs)
ret_arr = np.array([batches[:,:,i*seq_length:(i+1)*seq_length] for i in range(num_blocs)])
return np.swapaxes(ret_arr, 1, 2)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
# Number of Epochs
num_epochs = 90
# Batch Size
batch_size = 64
# RNN Size
rnn_size = 256
# Sequence Length
seq_length = 12
# Learning Rate
learning_rate = 0.01
# Show stats for every n number of batches
show_every_n_batches = 89
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
def get_tensors(loaded_graph):
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
# TODO: Implement Function
names = ["input:0", "initial_state:0", "final_state:0", "probs:0"]
return (loaded_graph.get_tensor_by_name(name) for name in names)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
def pick_word(probabilities, int_to_vocab):
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
# TODO: Implement Function
# Making sure that we are fetching the words in the order of the keys
words = [int_to_vocab[i] for i in range(len(int_to_vocab))]
return np.random.choice(words, p=probabilities)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TV Script Generation
Step3: Explore the Data
Step6: Implement Preprocessing Functions
Step9: Tokenize Punctuation
Step11: Preprocess all the data and save it
Step13: Check Point
Step15: Build the Neural Network
Step18: Input
Step21: Build RNN Cell and Initialize
Step24: Word Embedding
Step27: Build RNN
Step30: Build the Neural Network
Step33: Batches
Step35: Neural Network Training
Step37: Build the Graph
Step39: Train
Step41: Save Parameters
Step43: Checkpoint
Step46: Implement Generate Functions
Step49: Choose Word
Step51: Generate TV Script
|
6,697
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import YouTubeVideo
YouTubeVideo("ud_frfkt1t0")
# Import libraries
from __future__ import division
from scipy.stats import binom
import numpy as np
import matplotlib.pyplot as plt
%pylab inline
# Initialize random seed
np.random.seed(1)
def genABK(nTrials,int_min,int_max):
'''
Generate the random numbers for the simulation with a uniform distribution
between 2 integers
Parameters
----------
nTrials : int
Number of trials in the game
int_min : int
Minimum integer to be generated
int_max : int
Maximum integer to be generated
Returns
-------
A, B, K : array-like (nTrials x 1)
Randomly generated integers for each of the three numbers in the game
'''
A = np.random.randint(int_min,int_max+1,nTrials)
B = np.random.randint(int_min,int_max+1,nTrials)
K = np.random.randint(int_min,int_max+1,nTrials)
# Generate a new random number for B and K if it is the same as A
for t in range(nTrials):
while B[t] == A[t]:
B[t] = np.random.randint(int_min,int_max+1)
while K[t] == A[t]:
K[t] = np.random.randint(int_min,int_max+1)
return A, B, K
# Set simulation parameters
nTrials = 100
int_min = 10
int_max = 50
int_range = int_max - int_min + 1
# Perform guessing game
trialsWon = 0
A, B, K = genABK(nTrials, int_min, int_max)
for t in range(nTrials):
res = np.logical_xor(A[t]>K[t],A[t]<B[t])
trialsWon += res
fracWon = trialsWon / nTrials
# Statistics: binomial distribution
cum_p = binom.cdf(trialsWon,nTrials,0.5)
print 'p =' , 1 - cum_p
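# Equivalently, the conventional one-sided p-value includes the observed count itself,
# P(X >= trialsWon); the survival function gives that tail directly:
print 'p (P[X >= trialsWon]) =', binom.sf(trialsWon - 1, nTrials, 0.5)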
# Visualize performance relative to the binomial distribution
binom_pmf = binom.pmf(range(nTrials+1),nTrials,0.5)
plt.figure()
plt.plot(range(nTrials+1),binom_pmf,'k-',label='binomial distribution')
plt.plot([trialsWon,trialsWon],[0,np.max(binom_pmf)],'r--',label='guesser performance')
plt.xlabel('Number of trials won')
plt.ylabel('Probability')
plt.legend(loc='best')
nTrials = 10000
A, B, K = genABK(nTrials, int_min, int_max)
Ks = range(int_min,int_max+1)
trialsWon_k = np.zeros((int_range,1))
trialsWon_rand = 0
for t in range(nTrials):
res = np.logical_xor(A[t]>K[t],A[t]<B[t])
trialsWon_rand += res
for k in range(int_range):
# Note: If flipped over card equals that number, then count it as being lower
res = np.logical_xor(A[t]>Ks[k],A[t]<B[t])
trialsWon_k[k] += res
fracWon_k = trialsWon_k / nTrials
fracWon_rand = trialsWon_rand / nTrials
# Visualize results
plt.figure()
plt.plot(Ks, fracWon_k, 'k.',label='constant K')
plt.plot([int_min,int_max],[fracWon_rand,fracWon_rand],'r--', label='randomly generated K')
plt.xlabel('Choice of K')
plt.ylabel('Fraction of winning trials')
plt.legend(loc='best')
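# A small follow-up sketch: report which fixed K did best empirically. With A and B drawn
# uniformly from [int_min, int_max], a K near the middle of the range is expected to win most often.
best_idx = np.argmax(fracWon_k)
print 'Best fixed K:', Ks[best_idx], 'with win fraction', round(float(fracWon_k[best_idx]), 3)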
# Self-generated 100 random numbers
randHuman100 = [7, 66, 5.5, 444, 10, -1000, 0, 0.1, 0.2, 33, -33, -5, -12, 6,
1.2, 333, 42, -44, 5.213, 44.44, 30.3, 829.3, 1, 1.44, 99, -9,
0.001, 3.4, -2.5, 9.3, -33, -6, -0.9, 111, -473, 2, 93, 85,
67.32, 7, -5, -1.8, 9.343, 15.2, 5.4, -3.777, 99.2, 100, 0.39,
65, 22, -49, 38, 1.33,4.01,17,55,0.3,-283,-893,-777,910,762,482,
109,192,75,988,762,983,492,-291,-432,-753,77,-37,8.3,0.36,-94,
6,28,-46,-389,-0.3,48,222,8.38,-95,-63,-154,83,94.6,193.5,882,
-3,-82,9.4,33,555,82]
# Expand self-generated 100 random numbers to 25600 (for >10,000 trials)
randHuman = np.hstack((randHuman100,[x * 1.5 for x in randHuman100]))
randHuman = np.hstack((randHuman,[x / 2 for x in randHuman]))
randHuman = np.hstack((randHuman,[x * 2.5 for x in randHuman]))
randHuman = np.hstack((randHuman,[x / 3 for x in randHuman]))
randHuman = np.hstack((randHuman,[x * 3.5 for x in randHuman]))
randHuman = np.hstack((randHuman,[x / 4 for x in randHuman]))
randHuman = np.hstack((randHuman,[x * 4.5 for x in randHuman]))
randHuman = np.hstack((randHuman,[x / 5 for x in randHuman]))
nHuman = len(randHuman)
# Generate A and B
nTrials = np.int(np.floor(nHuman / 2))
A = np.random.permutation(randHuman)
B = A[nTrials:]
A = A[:nTrials]
# Generate Ks for both strategies
K_dist_idx = np.random.randint(0,nHuman,nTrials)
K_dist = np.zeros((nTrials,1))
K_median = np.zeros((nTrials,1))
for t in range(nTrials):
if t == 0:
K_median[t] = 0
else:
cum_numbers = np.hstack((A[:t].tolist(),B[:t].tolist()))
K_median[t] = np.median(cum_numbers)
K_dist[t] = randHuman[K_dist_idx[t]]
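# K_median tracks the running median of every A and B revealed so far, while K_dist simply
# draws K at random from the same human-generated pool of numbers.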
# Simulation
trialRes_dist = np.zeros((nTrials,1))
trialRes_median = np.zeros((nTrials,1))
for t in range(nTrials):
trialRes_dist[t] = np.logical_xor(A[t]>K_dist[t],A[t]<B[t])[0]
trialRes_median[t] = np.logical_xor(A[t]>K_median[t],A[t]<B[t])[0]
# Simulation results
trialsWon_dist = np.sum(trialRes_dist)
trialsWon_median = np.sum(trialRes_median)
fracWon_dist = trialsWon_dist / nTrials
fracWon_median = trialsWon_median / nTrials
print 'Fraction of trials won:'
print 'K = random sample from distribution:', round(fracWon_dist,3)
print 'K = cumulative median:', round(fracWon_median,3)
# Cumulative accuracy over trials
cum_accuracy_dist = np.cumsum(trialRes_dist)
cum_accuracy_dist = cum_accuracy_dist / np.arange(1,nTrials+1)
cum_accuracy_median = np.cumsum(trialRes_median)
cum_accuracy_median = cum_accuracy_median / np.arange(1,nTrials+1)
plt.figure()
plt.plot(np.arange(1,nTrials+1),cum_accuracy_median,'k-',label='median')
plt.plot(np.arange(1,nTrials+1),cum_accuracy_dist,'r-',label='random')
plt.xlabel('Trial #')
plt.ylabel('Cumulative accuracy')
plt.ylim([0.5,1.01])
plt.xlim([0,200])
plt.legend(loc='best',title='K')
# Visualize my random number distribution
plt.figure(figsize=(8,1))
plt.plot(randHuman100,np.zeros((100,1)),'.',ms=4)
plt.xlim([-1100,1100])
plt.yticks([])
plt.title('Human-generated random numbers')
plt.figure(figsize=(8,1))
plt.plot(randHuman,np.zeros((nHuman,1)),'.',ms=1)
plt.xlim([-60000,60000])
plt.yticks([])
plt.title('Expanded human-generated random numbers')
# Generate new samples for B
AminusB = np.random.randint(0,2,nTrials)
AminusB[AminusB==0] = -1
B = A - AminusB
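# With B forced to be exactly A +/- 1, a threshold K (running median or random draw) will almost
# never fall strictly between A and B, so both strategies are expected to drop back to roughly
# chance performance in this adversarial setting.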
# Simulation
trialRes_dist = np.zeros((nTrials,1))
trialRes_median = np.zeros((nTrials,1))
for t in range(nTrials):
trialRes_dist[t] = np.logical_xor(A[t]>K_dist[t],A[t]<B[t])[0]
trialRes_median[t] = np.logical_xor(A[t]>K_median[t],A[t]<B[t])[0]
# Simulation results
trialsWon_dist = np.sum(trialRes_dist)
trialsWon_median = np.sum(trialRes_median)
fracWon_dist = trialsWon_dist / nTrials
fracWon_median = trialsWon_median / nTrials
print 'Fraction of trials won:'
print 'K = random sample from distribution:', round(fracWon_dist,3), 'p =', 1 - binom.cdf(trialsWon_dist,nTrials,0.5)
print 'K = cumulative median:', round(fracWon_median,3), 'p =', 1 - binom.cdf(trialsWon_median,nTrials,0.5)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Simulation 1
Step2: Simulation 2
Step3: Simulation 3
Step4: Simulation 4
|
6,698
|
<ASSISTANT_TASK:>
Python Code:
import os, glob
import pandas as pd
import numpy as np
from support import data_dir
fileimport = glob.glob(os.path.join(data_dir, 'BN','*.txt'))
data = {}
names = [(f.rpartition('/')[2]).partition('.')[0] for f in fileimport]
for n, f in zip(names, fileimport):
print(n)
data[n] = pd.read_table(f, index_col=0)
tbl1 = data['AMPS2013bErvenElectrificationOffset']
tbl1['ElectrifiedDwellings'] = tbl1['EstErven']*tbl1['ElectrificationOffset']
tbl1['P_LSM|electrified'] = tbl1['ElectrifiedDwellings']/sum(tbl1['ElectrifiedDwellings'])
tbl1
tbl2 = data['LSMmakeupAssumptions']
t2 = tbl2.iloc[:, 0:10]
t2.columns = range(1,11)
tbl2['P_class|electrified'] = t2.dot(tbl1['P_LSM|electrified']).values
tbl2
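# Quick sanity check: if the LSM makeup columns are themselves proper distributions over the
# customer classes, the derived column should sum to roughly 1.
tbl2['P_class|electrified'].sum()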
data['HHtoIncomeByLSM'].head()
tbl3 = data['HHtoIncomeByLSM'].iloc[:,1:9]
count = [100/(tbl3.index[i+1]-tbl3.index[i]) for i in range(0, len(tbl3)-1)]+[100/(240700 - 50000)]
t3 = tbl3.multiply(count, axis = 0)
ix = np.arange(0, 240800, 100)
bins = [0, 1800, 3200, 7800, 11600, 19116, 24500, 65500, 240700]
tbl3x = t3.reindex(ix, method = 'ffill')
tbl3_binned = tbl3x.groupby(pd.cut(tbl3x.index,bins)).sum()
tbl3_binned
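# In effect, the reindex/groupby pair above spreads each original interval's household counts
# uniformly over R100 steps (via the per-100 weights in `count`) and then re-aggregates those
# steps into the DLR-compatible income bins defined in `bins`.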
tbl3_totals = tbl3_binned.sum(axis=0)
Pincome_lsm = tbl3_binned/tbl3_totals
Pincome_lsm
tbl4 = data['LSMmakeupHighDetail']
Plsm_class = tbl4.divide(tbl4.sum(axis=1), axis=0)
Plsm_class
Pincome_class = Plsm_class.dot(Pincome_lsm.T)
Pincome_class
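# The dot product above is the law of total probability in matrix form: assuming income depends
# on customer class only through LSM, P(income | class) = sum over LSM of
# P(LSM | class) * P(income | LSM). If the inputs are proper conditional distributions,
# each row of Pincome_class should sum to roughly 1:
Pincome_class.sum(axis=1)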
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Customer class marginal probability distribution
Step2: P ( class | electrified household )
Step3: Derivation of monthly income by customer class conditional probability distribution
Step4: Number of households per DLR compatible income bin per LSM
Step5: P ( LSM | class )
Step6: P ( income | class )
|
6,699
|
<ASSISTANT_TASK:>
Python Code:
# YOUR CODE HERE
from IPython.display import display
from IPython.display import Image
assert True # leave this to grade the import statements
# YOUR CODE HERE
Image(url="http://hyperphysics.phy-astr.gsu.edu/hbase/quantum/imgqua/h21.gif", width=600, height=600)
assert True # leave this to grade the image display
%%html
<table>
<tr>
<th>Name</th>
<th>Symbol</th>
<th>Antiparticle</th>
<th>Charge (e)</th>
<th>Mass (MeV/$c^2$)</th>
<tr>
<td>up</td>
<td>u</td>
<td>$\bar{u}$</td>
<td>+$\frac{2}{3}$</td>
<td>1.5 - 3.3</td>
<tr>
<td>down</td>
<td>d</td>
<td>$\bar{d}$</td>
<td>-$\frac{1}{3}$</td>
<td>3.5 - 6.0</td>
<tr>
<td>charm</td>
<td>c</td>
<td>$\bar{c}$</td>
<td>+$\frac{2}{3}$</td>
<td>1,160 - 1,340</td>
<tr>
<td>strange</td>
<td>s</td>
<td>$\bar{s}$</td>
<td>-$\frac{1}{3}$</td>
<td>70 - 130</td>
<tr>
<td>top</td>
<td>t</td>
<td>$\bar{t}$</td>
<td>+$\frac{2}{3}$</td>
<td>169,000 - 173,000</td>
<tr>
<td>bottom</td>
<td>b</td>
<td>$\bar{b}$</td>
<td>-$\frac{1}{3}$</td>
<td>4,130 - 4,370</td>
</table>
assert True # leave this here to grade the quark table
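# An alternative sketch (not required by the assignment): build the same quark table
# programmatically with pandas and render it through IPython's HTML object. The values below
# simply mirror the hand-written table; escape=False keeps the LaTeX and HTML markup intact.
import pandas as pd
from IPython.display import HTML
quarks = pd.DataFrame(
    [['u', r'$\bar{u}$', r'+$\frac{2}{3}$', '1.5 - 3.3'],
     ['d', r'$\bar{d}$', r'-$\frac{1}{3}$', '3.5 - 6.0'],
     ['c', r'$\bar{c}$', r'+$\frac{2}{3}$', '1,160 - 1,340'],
     ['s', r'$\bar{s}$', r'-$\frac{1}{3}$', '70 - 130'],
     ['t', r'$\bar{t}$', r'+$\frac{2}{3}$', '169,000 - 173,000'],
     ['b', r'$\bar{b}$', r'-$\frac{1}{3}$', '4,130 - 4,370']],
    index=['up', 'down', 'charm', 'strange', 'top', 'bottom'],
    columns=['Symbol', 'Antiparticle', 'Charge (e)', r'Mass (MeV/$c^2$)'])
HTML(quarks.to_html(escape=False))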
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Basic rich display
Step2: Use the HTML object to display HTML in the notebook that reproduces the table of Quarks on this page. This will require you to learn about how to create HTML tables and then pass that to the HTML object for display. Don't worry about styling and formatting the table, but you should use LaTeX where appropriate.
|