| Unnamed: 0 (int64, 0 to 15.9k) | cleaned_code (string, 67 to 124k chars, some null) | cleaned_prompt (string, 168 to 30.3k chars, some null) |
|---|---|---|
5,600
|
<ASSISTANT_TASK:>
Python Code:
import warnings
import numpy as np
import pandas as pd
from scipy.sparse import hstack
from sklearn.cross_validation import cross_val_predict
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from utils.categorize_demographics import recategorize
from utils.clean_up import clean_up, col_to_data_matrix
from utils.distinctive_tokens import log_odds_ratio
from utils.happyfuntokenizing import Tokenizer
from utils.nonnegative_matrix_factorization import nmf_labels
warnings.filterwarnings('ignore')
essay_dict = {'essay0' : 'My self summary',
'essay1' : 'What I\'m doing with my life',
'essay2' : 'I\'m really good at',
'essay3' : 'The first thing people notice about me',
'essay4' : 'Favorite books, movies, tv, food',
'essay5' : 'The six things I could never do without',
'essay6' : 'I spend a lot of time thinking about',
'essay7' : 'On a typical Friday night I am',
'essay8' : 'The most private thing I am willing to admit',
'essay9' : 'You should message me if'}
df = pd.read_csv('data/profiles.20120630.csv')
essay_list = ['essay4']
df_4 = clean_up(df, essay_list)
df_4 = recategorize(df_4)
df_4_y = df_4[df_4.drugs == 'yes'] #take only users with yes/no drug status
df_4_n = df_4[df_4.drugs == 'no']
df_4_y = df_4_y.sample(6500, random_state=42) #subsample data for both y and no
df_4_n = df_4_n.sample(6500, random_state=42)
drugs = df_4_y.append(df_4_n) #combine dfs
drugs['y'] = drugs['drugs'].apply(lambda x: 1 if x == 'yes' else 0) #add column for 1/0 if drug use
K = 25
count_matrix, tfidf_matrix, vocab = col_to_data_matrix(drugs, 'essay4', min_df=0.001)
drugs['group'] = nmf_labels(tfidf_matrix, K) #group assignment per user (group with maximum weight)
y = drugs.y.values #1/0 vector
X = tfidf_matrix.copy()
count_0 = count_matrix[np.array(drugs.drugs=='yes'), :].sum(axis=0)
count_1 = count_matrix[np.array(drugs.drugs=='no'), :].sum(axis=0)
counts = np.array(np.vstack((count_0, count_1)))
log_odds = log_odds_ratio(counts, vocab, use_variance=True)
n = 2000
top = log_odds.sort('log_odds_ratio', ascending=False)['features'].tolist()[:n]
bottom = log_odds.sort('log_odds_ratio', ascending=False)['features'].tolist()[-n:]
log_odds_features = top + bottom
log_odds_mask = np.array([t in log_odds_features for t in vocab])
X = X[:,log_odds_mask]
# nmf = pd.get_dummies(drugs.group, prefix='nmf').values
# X = hstack([X, nmf], format='csr')
clf0 = LogisticRegression()
clf1 = MultinomialNB()
clf2 = LinearSVC()
clf3 = RandomForestClassifier()
for clf, name in zip([clf0, clf1, clf2, clf3],
['Logistic Regression', 'naive Bayes', 'SVM', 'Random Forest']):
yhat = cross_val_predict(clf, X, y, cv=10)
print("Accuracy: %0.4f [%s]" % (accuracy_score(y, yhat), name))
# Output of the loop above, without feature selection:
# Accuracy: 0.6715 [Logistic Regression]
# Accuracy: 0.6738 [naive Bayes]
# Accuracy: 0.6387 [SVM]
# Accuracy: 0.6305 [Random Forest]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data Cleaning
Step2: Subsample
Step3: Clustering
Step4: Models
Step5: Log Odds Ratio features (an illustrative sketch of one common formulation appears after this step list)
Step6: NMF features
Step8: Cross-Validated Estimates
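The code above calls log_odds_ratio from utils.distinctive_tokens, which is not shown in this snippet. As an illustration only, one common formulation it may resemble is the log-odds ratio with an informative Dirichlet prior (Monroe et al., "Fightin' Words"); the helper's actual behavior is an assumption here, not confirmed by the source:
import numpy as np
import pandas as pd
def log_odds_ratio_sketch(counts, vocab, alpha=0.01):
    # counts: 2 x V array of per-class token counts (row 0 = class "yes", row 1 = class "no")
    y0, y1 = counts[0].astype(float), counts[1].astype(float)
    n0, n1 = y0.sum(), y1.sum()
    a0 = alpha * len(vocab)  # total pseudo-count contributed by the prior
    delta = (np.log((y0 + alpha) / (n0 + a0 - y0 - alpha))
             - np.log((y1 + alpha) / (n1 + a0 - y1 - alpha)))
    variance = 1.0 / (y0 + alpha) + 1.0 / (y1 + alpha)
    z = delta / np.sqrt(variance)  # dividing by the variance term mirrors a use_variance=True option in spirit
    return pd.DataFrame({'features': list(vocab), 'log_odds_ratio': z})
# Toy usage: a token heavily used by class 0 gets a large positive score.
print(log_odds_ratio_sketch(np.array([[30, 2], [3, 25]]), ['tokenA', 'tokenB']))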
|
5,601
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# install dependencies
!pip install gensim --upgrade
!pip install git+https://github.com/tensorflow/privacy
from IPython.display import clear_output
clear_output()
# imports
import smart_open
import random
import gensim.utils
import os
import bz2
import multiprocessing
import logging
import tqdm
import xml
import numpy as np
from gensim.models import Word2Vec
from six import raise_from
from gensim.corpora.wikicorpus import WikiCorpus, init_to_ignore_interrupt, \
ARTICLE_MIN_WORDS, _process_article, IGNORED_NAMESPACES, get_namespace
from pickle import PicklingError
from xml.etree.cElementTree import iterparse, ParseError
from tensorflow_privacy.privacy.privacy_tests.membership_inference_attack import membership_inference_attack as mia
from tensorflow_privacy.privacy.privacy_tests.membership_inference_attack import data_structures as mia_data_structures
from tensorflow_privacy.privacy.privacy_tests.membership_inference_attack import plotting as mia_plotting
from tensorflow_privacy.privacy.privacy_tests.secret_sharer.exposures import compute_exposure_interpolation, compute_exposure_extrapolation
# all the functions we need to get data and canary it
# we will use google drive to store data models to be able to reuse them
# you can change this to local directories by changing DATA_DIR and MODEL_DIR
# make sure to copy the data locally, otherwise training will be very slow
# code in this cell originates from https://github.com/google/embedding-tests
# some edits were made to allow saving to google drive, and to add canaries
from google.colab import drive
drive.mount('/content/drive/')
LOCAL_DATA_DIR = 'data_dir'
LOCAL_MODEL_DIR = 'model_dir'
DATA_DIR = '/content/drive/MyDrive/w2v/data_dir/'
MODEL_DIR = '/content/drive/MyDrive/w2v/model_dir/'
# made up words will be used for canaries
MADE_UP_WORDS = []
for i in range(20):
MADE_UP_WORDS.append("o"*i + "oongaboonga")
# deterministic dataset partitioning
def gen_seed(idx, n=10000):
random.seed(12345)
seeds = []
for i in range(n):
s = random.random()
seeds.append(s)
return seeds[idx]
def make_wiki9_dirs(data_dir):
# makes all the directories we'll need to store data
wiki9_path = os.path.join(data_dir, 'wiki9', 'enwik9.bz2')
wiki9_dir = os.path.join(data_dir, 'wiki9', 'articles')
wiki9_split_dir = os.path.join(data_dir, 'wiki9', 'split')
for d in [wiki9_dir, wiki9_split_dir]:
if not os.path.exists(d):
os.makedirs(d)
return wiki9_path, wiki9_dir, wiki9_split_dir
def extract_pages(f, filter_namespaces=False, filter_articles=None):
try:
elems = (elem for _, elem in iterparse(f, events=("end",)))
except ParseError:
yield None, "", None
elem = next(elems)
namespace = get_namespace(elem.tag)
ns_mapping = {"ns": namespace}
page_tag = "{%(ns)s}page" % ns_mapping
text_path = "./{%(ns)s}revision/{%(ns)s}text" % ns_mapping
title_path = "./{%(ns)s}title" % ns_mapping
ns_path = "./{%(ns)s}ns" % ns_mapping
pageid_path = "./{%(ns)s}id" % ns_mapping
try:
for elem in elems:
if elem.tag == page_tag:
title = elem.find(title_path).text
text = elem.find(text_path).text
if filter_namespaces:
ns = elem.find(ns_path).text
if ns not in filter_namespaces:
text = None
if filter_articles is not None:
if not filter_articles(
elem, namespace=namespace, title=title,
text=text, page_tag=page_tag,
text_path=text_path, title_path=title_path,
ns_path=ns_path, pageid_path=pageid_path):
text = None
pageid = elem.find(pageid_path).text
yield title, text or "", pageid # empty page will yield None
elem.clear()
except ParseError:
yield None, "", None
return
class MyWikiCorpus(WikiCorpus):
def get_texts(self):
logger = logging.getLogger(__name__)
articles, articles_all = 0, 0
positions, positions_all = 0, 0
tokenization_params = (
self.tokenizer_func, self.token_min_len, self.token_max_len, self.lower)
texts = ((text, title, pageid, tokenization_params)
for title, text, pageid in extract_pages(bz2.BZ2File(self.fname),
self.filter_namespaces,
self.filter_articles))
print("got texts")
pool = multiprocessing.Pool(self.processes, init_to_ignore_interrupt)
try:
# process the corpus in smaller chunks of docs,
# because multiprocessing.Pool
# is dumb and would load the entire input into RAM at once...
for group in gensim.utils.chunkize(texts, chunksize=10 * self.processes,
maxsize=1):
for tokens, title, pageid in pool.imap(_process_article, group):
articles_all += 1
positions_all += len(tokens)
# article redirects and short stubs are pruned here
if len(tokens) < self.article_min_tokens or \
any(title.startswith(ignore + ':') for ignore in
IGNORED_NAMESPACES):
continue
articles += 1
positions += len(tokens)
yield (tokens, (pageid, title))
except KeyboardInterrupt:
logger.warn(
"user terminated iteration over Wikipedia corpus after %i"
" documents with %i positions "
"(total %i articles, %i positions before pruning articles"
" shorter than %i words)",
articles, positions, articles_all, positions_all, ARTICLE_MIN_WORDS
)
except PicklingError as exc:
raise_from(
PicklingError('Can not send filtering function {} to multiprocessing, '
'make sure the function can be pickled.'.format(
self.filter_articles)), exc)
else:
logger.info(
"finished iterating over Wikipedia corpus of %i "
"documents with %i positions "
"(total %i articles, %i positions before pruning articles"
" shorter than %i words)",
articles, positions, articles_all, positions_all, ARTICLE_MIN_WORDS
)
self.length = articles # cache corpus length
finally:
pool.terminate()
def write_wiki9_articles(data_dir):
wiki9_path, wiki9_dir, wiki9_split_dir = make_wiki9_dirs(data_dir)
wiki = MyWikiCorpus(wiki9_path, dictionary={},
filter_namespaces=False)
i = 0
for text, (p_id, title) in tqdm.tqdm(wiki.get_texts()):
i += 1
if title is None:
continue
article_path = os.path.join(wiki9_dir, p_id)
if os.path.exists(article_path):
continue
with open(article_path, 'wb') as f:
f.write(' '.join(text).encode("utf-8"))
print("done", i)
def split_wiki9_articles(data_dir, exp_id=0):
wiki9_path, wiki9_dir, wiki9_split_dir = make_wiki9_dirs(data_dir)
all_docs = list(os.listdir(wiki9_dir))
print("wiki9 len", len(all_docs))
print(wiki9_dir)
s = gen_seed(exp_id)
random.seed(s)
random.shuffle(all_docs)
random.seed()
n = len(all_docs) // 2
return all_docs[:n], all_docs[n:]
def read_wiki9_train_split(data_dir, exp_id=0):
wiki9_path, wiki9_dir, wiki9_split_dir = make_wiki9_dirs(data_dir)
split_path = os.path.join(wiki9_split_dir, 'split{}.train'.format(exp_id))
if not os.path.exists(split_path):
train_docs, _ = split_wiki9_articles(exp_id=exp_id)
with open(split_path, 'w') as f:
for doc in tqdm.tqdm(train_docs):
with open(os.path.join(wiki9_dir, doc), 'r') as fd:
f.write(fd.read())
f.write(' ')
return split_path
def build_vocab(word2vec_model):
vocab = word2vec_model.wv.index_to_key
counts = [word2vec_model.wv.get_vecattr(word, "count") for word in vocab]
sorted_inds = np.argsort(counts)
sorted_vocab = [vocab[ind] for ind in sorted_inds]
return sorted_vocab
def sample_words(vocab, count, rng):
inds = rng.choice(len(vocab), count, replace=False)
return [vocab[ind] for ind in inds], rng
def gen_canaries(num_canaries, canary_repeat, vocab_model_path, seed=0):
# create canaries, injecting made up words into the corpus
existing_w2v = Word2Vec.load(vocab_model_path)
existing_vocab = build_vocab(existing_w2v)
rng = np.random.RandomState(seed)
all_canaries = []
for i in range(num_canaries):
new_word = MADE_UP_WORDS[i%len(MADE_UP_WORDS)]
assert new_word not in existing_vocab
canary_words, rng = sample_words(existing_vocab, 4, rng)
canary = canary_words[:2] + [new_word] + canary_words[2:]
all_canaries.append(canary)
all_canaries = all_canaries * canary_repeat
return all_canaries
# iterator for training documents, with an option to canary
class WIKI9Articles:
def __init__(self, docs, data_dir, verbose=0, ssharer=False, num_canaries=0,
canary_repeat=0, canary_seed=0, vocab_model_path=None):
self.docs = [(0, doc) for doc in docs]
if ssharer:
all_canaries = gen_canaries(
num_canaries, canary_repeat, vocab_model_path, canary_seed)
self.docs.extend([(1, canary) for canary in all_canaries])
np.random.RandomState(0).shuffle(self.docs)
wiki9_path, wiki9_dir, wiki9_split_dir = make_wiki9_dirs(data_dir)
self.dirname = wiki9_dir
self.verbose = verbose
def __iter__(self):
for is_canary, fname in tqdm.tqdm(self.docs) if self.verbose else self.docs:
if not is_canary:
for line in smart_open.open(os.path.join(self.dirname, fname),
'r', encoding='utf-8'):
yield line.split()
else:
yield fname
def train_word_embedding(data_dir, model_dir, exp_id=0, use_secret_sharer=False,
num_canaries=0, canary_repeat=1, canary_seed=0,
vocab_model_path=None):
# this function trains the word2vec model, after setting up the training set
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s',
level=logging.INFO)
params = {
'sg': 1,
'negative': 25,
'alpha': 0.05,
'sample': 1e-4,
'workers': 48,
'epochs': 5,
'window': 5,
}
train_docs, test_docs = split_wiki9_articles(data_dir, exp_id)
print(len(train_docs), len(test_docs))
wiki9_articles = WIKI9Articles(
train_docs, data_dir, ssharer=use_secret_sharer, num_canaries=num_canaries,
canary_repeat=canary_repeat, canary_seed=canary_seed, vocab_model_path=vocab_model_path)
if not os.path.exists(model_dir):
os.makedirs(model_dir)
model = Word2Vec(wiki9_articles, **params)
if not use_secret_sharer:
model_path = os.path.join(model_dir, 'wiki9_w2v_{}.model'.format(exp_id))
else:
model_path = os.path.join(model_dir, 'wiki9_w2v_{}_{}_{}_{}.model'.format(
exp_id, num_canaries, canary_repeat, canary_seed
))
model.save(model_path)
return model_path, train_docs, test_docs
# setup directories
wiki9_path, wiki9_dir, wiki9_split_dir = make_wiki9_dirs(DATA_DIR)
local_wiki9_path, local_wiki9_dir, local_wiki9_splitdir = make_wiki9_dirs(LOCAL_DATA_DIR)
# download and format documents
!wget http://mattmahoney.net/dc/enwik9.zip
!unzip enwik9.zip
!bzip2 enwik9
!cp enwik9.bz2 $wiki9_path
!cp $wiki9_path $local_wiki9_path
write_wiki9_articles(LOCAL_DATA_DIR) # need local data for fast training
for i in range(10):
if os.path.exists(os.path.join(MODEL_DIR, f"wiki9_w2v_{i}.model")):
print("done", i)
continue
model_path, train_docs, test_docs = train_word_embedding(LOCAL_DATA_DIR, MODEL_DIR, exp_id=i)
print(model_path)
from re import split
def loss(model, window):
# compute loss for a single window of 5 tokens
try:
sum_embedding = np.array([model.wv[word] for word in window]).sum(axis=0)
except:
return np.nan
middle_embedding = model.wv[window[2]]
context_embedding = 0.25*(sum_embedding - middle_embedding)
return np.linalg.norm(middle_embedding - context_embedding)
def loss_per_article(model, article):
# compute loss for a full document
losses = []
article = article.split(' ')
embs = [model.wv[word] if word in model.wv else np.nan for word in article]
for i in range(len(article) - 4):
middle_embedding = embs[i+2]
context_embedding = 0.25*(np.mean(embs[i:i+2] + embs[i+3:i+5]))
losses.append(np.linalg.norm(middle_embedding - context_embedding))
return np.nanmean(losses)
all_models = []
for i in range(1000, 1020):
model_path = os.path.join(MODEL_DIR, f"wiki9_w2v_{i}.model")
if not os.path.exists(model_path):
continue
all_models.append(Word2Vec.load(model_path))
train_docs, test_docs = split_wiki9_articles(LOCAL_DATA_DIR, 0)
all_docs = sorted(train_docs + test_docs)
all_losses = np.zeros((len(all_docs), len(all_models)))
for i, doc in tqdm.tqdm(enumerate(all_docs)):
if i > 1000:
continue
with open(os.path.join(local_wiki9_dir, doc), 'r') as fd:
doc_text = fd.read()
for j, model in enumerate(all_models):
all_losses[i,j] = loss_per_article(model, doc_text)
all_losses = all_losses[:500, :]
doc_lookup = {doc: i for i, doc in enumerate(all_docs)}
def compute_scores_in_out(losses, seeds):
in_scores = [[] for _ in range(losses.shape[0])]
out_scores = [[] for _ in range(losses.shape[0])]
for seed in seeds:
train_docs, test_docs = split_wiki9_articles(LOCAL_DATA_DIR, seed)
for train_doc in train_docs:
ind = doc_lookup[train_doc]
if ind >= all_losses.shape[0]:
continue
in_scores[ind].append([all_losses[ind, seed-1000]])
for test_doc in test_docs:
ind = doc_lookup[test_doc]
if ind >= all_losses.shape[0]:
continue
out_scores[ind].append([all_losses[ind, seed-1000]])
in_scores = [np.array(s) for s in in_scores]
out_scores = [np.array(s) for s in out_scores]
print(in_scores[0].shape)
return in_scores, out_scores
# we will do MI on model 0
in_scores, out_scores = compute_scores_in_out(all_losses, list(range(1001, 1020)))
# global threshold MIA attack
train_docs, test_docs = split_wiki9_articles(LOCAL_DATA_DIR, 1000)
train_losses, test_losses = [], []
for train_doc in train_docs:
ind = doc_lookup[train_doc]
if ind >= all_losses.shape[0]:
continue
train_losses.append(all_losses[ind, 0])
for test_doc in test_docs:
ind = doc_lookup[test_doc]
if ind >= all_losses.shape[0]:
continue
test_losses.append(all_losses[ind, 0])
attacks_result_baseline = mia.run_attacks(
mia_data_structures.AttackInputData(
loss_train = -np.nan_to_num(train_losses),
loss_test = -np.nan_to_num(test_losses))).single_attack_results[0]
print('Global Threshold MIA attack:',
f'auc = {attacks_result_baseline.get_auc():.4f}',
f'adv = {attacks_result_baseline.get_attacker_advantage():.4f}')
# run LiRA
from tensorflow_privacy.privacy.privacy_tests.membership_inference_attack import advanced_mia as amia
good_inds = []
for i, (in_s, out_s) in enumerate(zip(in_scores, out_scores)):
if len(in_s) > 0 and len(out_s) > 0:
good_inds.append(i)
for i in good_inds:
assert len(in_scores[i]) > 0
assert len(in_scores[i]) > 0
scores = amia.compute_score_lira(all_losses[good_inds, 0],
[in_scores[i] for i in good_inds],
[out_scores[i] for i in good_inds],
fix_variance=True)
train_docs, test_docs = split_wiki9_articles(LOCAL_DATA_DIR, 1000)
in_mask = np.zeros(len(good_inds), dtype=bool)
for doc in train_docs:
ind = doc_lookup[doc]
if ind >= all_losses.shape[0]:
continue
if ind in good_inds:
in_mask[good_inds.index(ind)] = True
attacks_result_baseline = mia.run_attacks(
mia_data_structures.AttackInputData(
loss_train = scores[in_mask],
loss_test = scores[~in_mask])).single_attack_results[0]
print('Advanced MIA attack with Gaussian:',
f'auc = {attacks_result_baseline.get_auc():.4f}',
f'adv = {attacks_result_baseline.get_attacker_advantage():.4f}')
vocab_model_path = os.path.join(MODEL_DIR, 'wiki9_w2v_1.model')
interp_exposures = {}
extrap_exposures = {}
all_canaries = gen_canaries(10000, 1, vocab_model_path, 0)
for repeat_count in [5, 10, 20]:
model_path = os.path.join(MODEL_DIR, 'wiki9_w2v_0_20_{}_0.model'.format(repeat_count))
print(os.path.exists(model_path))
model_path, _, _ = train_word_embedding(
LOCAL_DATA_DIR, MODEL_DIR, exp_id=0, use_secret_sharer=True, num_canaries=20,
canary_repeat=repeat_count, canary_seed=0, vocab_model_path=vocab_model_path)
canaried_model = Word2Vec.load(model_path)
canary_losses = [loss(canaried_model, canary) for canary in all_canaries]
loss_secrets = np.array(canary_losses[:20])
loss_ref = np.array(canary_losses[20:])
loss_secrets = {1: loss_secrets[~np.isnan(loss_secrets)]}
loss_ref = loss_ref[~np.isnan(loss_ref)]
exposure_interpolation = compute_exposure_interpolation(loss_secrets, loss_ref)
exposure_extrapolation = compute_exposure_extrapolation(loss_secrets, loss_ref)
interp_exposures[repeat_count] = exposure_interpolation[1]
extrap_exposures[repeat_count] = exposure_extrapolation[1]
for key in interp_exposures:
print(f"Repeats: {key}, Interpolation Exposure: {np.median(interp_exposures[key])}, Extrapolation Exposure: {np.median(extrap_exposures[key])}")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Use Membership Inference and Secret Sharer to Test Word Embedding Models
Step2: Membership Inference Attacks
Step3: We now define our loss function. We follow https
Step4: Let's now get the losses of all models on all documents. This also takes a while, so we'll only get a subset.
Step5: We're going to be running the LiRA attack, so, for each document, we get the document's losses when it is in the model, and the losses when it is not in the model.
Step6: Now let's run the global threshold membership inference attack. It gets an advantage of around 0.07.
Step7: And now we run LiRA. First we need to compute LiRA scores (a toy sketch of the per-example score appears after this step list).
Step8: And now we threshold on LiRA scores, as before. Advantage goes from .07 to .13; it almost doubled!
Step9: Secret Sharer
Step10: And now let's run secret sharer! Exposure is quite high!
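For intuition behind the two attacks referenced above: a global threshold attack compares every document's loss against one shared cutoff, while LiRA scores each document by how its loss compares against that document's own "in" and "out" loss distributions. The notebook relies on tensorflow_privacy's run_attacks and compute_score_lira; the snippet below is only a toy sketch of the idea, not that implementation:
import numpy as np
from scipy.stats import norm
def lira_score_sketch(target_loss, in_losses, out_losses):
    # Fit a Gaussian to the losses observed when the document was in training
    # and another to the losses when it was held out, then return the
    # log-likelihood ratio of the target model's loss under the two fits.
    mu_in, sd_in = np.mean(in_losses), np.std(in_losses) + 1e-8
    mu_out, sd_out = np.mean(out_losses), np.std(out_losses) + 1e-8
    return norm.logpdf(target_loss, mu_in, sd_in) - norm.logpdf(target_loss, mu_out, sd_out)
# Toy usage: training members tend to have lower loss, so a low observed loss scores high.
print(lira_score_sketch(0.9, in_losses=[1.0, 1.1, 0.9], out_losses=[2.0, 2.1, 1.9]))
Per-document calibration like this is what lets LiRA roughly double the attacker advantage relative to the single global threshold in the steps above.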
|
5,602
|
<ASSISTANT_TASK:>
Python Code:
sample_points_filepath = ""
DEM_filepath = ""
elevation_filepath = ""
import rasterio
import fiona
import pandas
import numpy
from pyproj import Proj, transform
from fiona.crs import from_epsg
with fiona.open(sample_points_filepath, 'r') as source_points:
points = [f['geometry']['coordinates'] for f in source_points]
original = Proj(source_points.crs)
destination = Proj(from_epsg(4326))
#destination = Proj(' +proj=latlong +ellps=bessel')
with rasterio.drivers():
with rasterio.open(DEM_filepath) as source_dem:
s = source_dem.sample(points)
elevs = numpy.array([n[0] for n in s])
source_dem.close
source_points.close
points_projected = []
for p in points:
x, y = p
lat, long = transform(original, destination, x, y)
points_projected.append((long,lat))
points_projected_pd = pandas.DataFrame(points_projected, columns=["lat", "long"])
with fiona.open(sample_points_filepath, 'r') as source_points:
names = numpy.array([p['properties']['NAME'] for p in source_points])
IDs = numpy.array([p['properties']['ID'] for p in source_points])
source_points.close
elevs_names = [{"ID":IDs[i],"elevation":elevs[i], "name":names[i], "latitude":points_projected[i][0], "longitude":points_projected[i][1]} for i in range(len(elevs))]
elevs_pd = pandas.DataFrame(elevs_names)
elevs_pd
elevs_pd.to_excel(elevation_filepath)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import statements
Step2: Transform points
|
5,603
|
<ASSISTANT_TASK:>
Python Code:
def timestamps(packets):
epoch = np.datetime64('2000-01-01T12:00:00')
t = np.array([struct.unpack('>I', p[ccsds.SpacePacketPrimaryHeader.sizeof():][:4])[0]
for p in packets], 'uint32')
return epoch + t * np.timedelta64(1, 's')
def load_frames(path):
frame_size = 223 * 5 - 2
frames = np.fromfile(path, dtype = 'uint8')
frames = frames[:frames.size//frame_size*frame_size].reshape((-1, frame_size))
return frames
frames = np.concatenate((
load_frames('lucy_frames_eb3frn_20211019_233036.u8'),
load_frames('lucy_frames_eb3frn_20211019_235245.u8')))
frames.shape[0]
aos = [AOSFrame.parse(f) for f in frames]
collections.Counter([a.primary_header.transfer_frame_version_number for a in aos])
collections.Counter([a.primary_header.spacecraft_id for a in aos])
collections.Counter([a.primary_header.virtual_channel_id for a in aos])
vc63 = [a for a in aos if a.primary_header.virtual_channel_id == 63]
[a.primary_header for a in vc63[:10]]
vc63_frames = np.array([f for f, a in zip(frames, aos) if a.primary_header.virtual_channel_id == 63])
np.unique(vc63_frames[:, 6:8], axis = 0)
bytes(vc63_frames[0, 6:8]).hex()
np.unique(vc63_frames[:, 8:])
hex(170)
fc = np.array([a.primary_header.virtual_channel_frame_count for a in vc63])
plt.figure(figsize = (10, 5), facecolor = 'w')
plt.plot(fc[1:], np.diff(fc)-1, '.')
plt.title("Lucy virtual channel 63 (OID) frame loss")
plt.xlabel('Virtual channel frame counter')
plt.ylabel('Lost frames');
first_part = fc < 219000
fc[first_part].size/(fc[first_part][-1]-fc[0]+1)
vc0 = [a for a in aos if a.primary_header.virtual_channel_id == 0]
[a.primary_header for a in vc0[:10]]
fc = np.array([a.primary_header.virtual_channel_frame_count for a in vc0])
plt.figure(figsize = (10, 5), facecolor = 'w')
plt.plot(fc[1:], np.diff(fc)-1, '.')
plt.title("Lucy virtual channel 0 (telemetry) frame loss")
plt.xlabel('Virtual channel frame counter')
plt.ylabel('Lost frames');
first_part = fc < 995800
fc[first_part].size/(fc[first_part][-1]-fc[0]+1)
vc0_packets = list(ccsds.extract_space_packets(vc0, 49, 0))
vc0_t = timestamps(vc0_packets)
vc0_sp_headers = [ccsds.SpacePacketPrimaryHeader.parse(p) for p in vc0_packets]
vc0_apids = collections.Counter([p.APID for p in vc0_sp_headers])
vc0_apids
apid_axis = {a : k for k, a in enumerate(sorted(vc0_apids))}
plt.figure(figsize = (10, 5), facecolor = 'w')
plt.plot(vc0_t, [apid_axis[p.APID] for p in vc0_sp_headers], '.')
plt.yticks(ticks=range(len(apid_axis)), labels=apid_axis)
plt.xlabel('Space Packet timestamp')
plt.ylabel('APID')
plt.title('Lucy Virtual Channel 0 APID distribution');
vc0_by_apid = {apid : [p for h,p in zip(vc0_sp_headers, vc0_packets)
if h.APID == apid] for apid in vc0_apids}
plot_apids(vc0_by_apid)
tags = {2: Int16ub, 3: Int16ub, 15: Int32ub, 31: Int16ub, 32: Int16ub, 1202: Float64b,
1203: Float64b, 1204: Float64b, 1205: Float64b, 1206: Float64b, 1208: Float32b,
1209: Float32b, 1210: Float32b, 1601: Float32b, 1602: Float32b, 1603: Float32b,
1630: Float32b, 1631: Float32b, 1632: Float32b, 17539: Float32b, 17547: Float32b,
17548: Float32b, 21314: Int32sb, 21315: Int32sb, 21316: Int32sb, 21317: Int32sb,
46555: Int32sb, 46980: Int16ub, 46981: Int16ub, 46982: Int16ub, 47090: Int16ub,
47091: Int16ub, 47092: Int16ub,
}
values = list()
for packet in vc0_by_apid[5]:
t = timestamps([packet])[0]
packet = packet[6+5:] # skip primary and secondary headers
while True:
tag = Int16ub.parse(packet)
packet = packet[2:]
value = tags[tag].parse(packet)
packet = packet[tags[tag].sizeof():]
values.append((tag, value, t))
if len(packet) == 0:
break
values_keys = {v[0] for v in values}
values = {k: [(v[2], v[1]) for v in values if v[0] == k] for k in values_keys}
for k in sorted(values_keys):
vals = values[k]
plt.figure()
plt.title(f'Key {k}')
plt.plot([v[0] for v in vals], [v[1] for v in vals], '.')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: AOS frames
Step2: Virtual Channel 63 (Only Idle Data)
Step3: Virtual channel 0
Step4: APID 5
|
5,604
|
<ASSISTANT_TASK:>
Python Code:
primes = []
i = 2
while len(primes) < 25:
for p in primes:
if i % p == 0:
break
else:
primes.append(i)
i += 1
print(primes)
def square(val):
print(val)
return val ** 2
squared_numbers = [square(i) for i in range(5)]
print('Squared from list:')
print(squared_numbers)
squared_numbers = (square(i) for i in range(5))
print('Squared from iterable:')
print(squared_numbers)
def squared_numbers(num):
for i in range(num):
yield i ** 2
print('This is only printed after all the numbers output have been consumed')
print(squared_numbers(5))
for i in squared_numbers(5):
print(i)
import functools
def plus(val, n):
return val + n
f = functools.partial(plus, 5)
f(5)
def decorator(inner):
def inner_decorator():
print('before')
inner()
print('after')
return inner_decorator
def decorated():
print('decorated')
f = decorator(decorated)
f()
@decorator
def decorated():
print('decorated')
decorated()
import time
@functools.lru_cache()
def slow_compute(n):
time.sleep(1)
print(n)
start = time.time()
slow_compute(1)
print(time.time() - start)
start = time.time()
slow_compute(1)
print(time.time() - start)
start = time.time()
slow_compute(2)
print(time.time() - start)
class Person(object):
    """A class definition for a person. The following attributes are supported:
    Attributes:
        name: A string representing the person's name.
        age: An integer representing the person's age.
    """
    mammal = True
    def __init__(self, name, age):
        """Return a Person object with name and age set to the values supplied"""
self.name = name
self.age = age
person1 = Person('Alice', 25)
person2 = Person('Bob', 30)
print(person1, person2)
class Person(object):
    """A class definition for a person. The following attributes are supported:
    Attributes:
        name: A string representing the person's name.
        age: An integer representing the person's age.
    """
    mammal = True
    def __init__(self, name, age):
        """Return a Person object with name and age set to the values supplied"""
self.name = name
self.age = age
def __str__(self):
return '{0} who is {1} years old.'.format(self.name, self.age)
person1 = Person('Alice', 25)
person2 = Person('Bob', 30)
print(person1, person2)
class Person(object):
    """A class definition for a person. The following attributes are supported:
    Attributes:
        name: A string representing the person's name.
        age: An integer representing the person's age.
    """
    friends = []
    def __init__(self, name, age):
        """Return a Person object with name and age set to the values supplied"""
self.name = name
self.age = age
def __str__(self):
return '{0} who is {1} years old'.format(self.name, self.age)
person1 = Person('Alice', 25)
person2 = Person('Bob', 30)
person1.friends.append('Charlie')
person2.friends.append('Danielle')
print(person1.friends, person2.friends)
class Person(object):
    """A class definition for a person. The following attributes are supported:
    Attributes:
        name: A string representing the person's name.
        age: An integer representing the person's age.
    """
    def __init__(self, name, age):
        """Return a Person object with name and age set to the values supplied"""
self.name = name
self.age = age
self.friends = []
def __str__(self):
return '{0} who is {1} years old'.format(self.name, self.age)
person1 = Person('Alice', 25)
person2 = Person('Bob', 30)
person1.friends.append('Charlie')
person2.friends.append('Danielle')
print(person1.friends, person2.friends)
print('This works:', person1.friends)
print('This does not work:', friends)
class Person(object):
    """A class definition for a person. The following attributes are supported:
    Attributes:
        name: A string representing the person's name.
        age: An integer representing the person's age.
    """
    def __init__(self, name, age):
        """Return a Person object with name and age set to the values supplied"""
        self.name = name
        self.age = age
        self.friends = []
    def __str__(self):
        """Return a string representation of the object"""
        return '{0} who is {1} years old'.format(self.name, self.age)
    def add_friend(self, friend):
        """Add a friend"""
self.friends.append(friend)
person1 = Person('Alice', 25)
person2 = Person('Bob', 30)
person1.add_friend('Charlie')
person2.add_friend('Danielle')
print(person1.friends, person2.friends)
# cookbook classes
class recipe(object):
    """A class definition of recipe. The following attributes are supported:
    Attributes:
        name: A string of the recipe name, e.g. key lime pie
        kind: A string of the type of food, e.g. dessert
        ingredients: A list of the ingredient objects required, e.g. egg, milk
        instructions: A string of the amount and steps to prepare and cook the
            ingredients, e.g. mix and bake
        equipment: A list of the equipment needed, e.g. spoon, bowl
        serving_size: An integer for the number of recommended people it can
            serve, e.g. 8
        nutrition: A dictionary of the nutritional facts, e.g. Total calories,
            fat calories
        time: A datetime for cooking time, e.g. 30 minutes
    """
    def __init__(self, name, kind, ingredients, instructions, equipment,
                 serving_size, nutrition, time):
        """Returns a recipe object with name, kind, ingredients, instructions,
        equipment, serving_size, nutrition, and time
        """
self.name = name
self.kind = kind
self.ingredients = ingredients
self.instructions = instructions
self.equipment = equipment
self.serving_size = serving_size
self.nutrition = nutrition
self.time = time
def __str__(self):
        """Returns a basic string representation of the recipe object"""
return "A {0} called {1}, which serves {2}.".format(
self.kind, self.name, self.serving_size
)
def getNutrition(self):
        """Returns all the nutrition facts from the ingredient objects"""
pass
def scaleServings(self, n):
        """Returns scaled serving size based on n"""
pass
pie = recipe("key lime pie", "dessert", ["pie crust", "limes"],
"mix and bake", "bowl and knife", 4,
{"fat": "10g", "carbs": "15g"}, 30.4)
print(pie)
print("How much fat?", pie.nutrition['fat'])
class ingredient(object):
    """A class definition of an ingredient. The following attributes are supported:
    Attributes:
        name: A string representing the ingredient name, e.g. chicken wings
        nutrition: A dictionary representing grams in each nutrient category,
            e.g. fats, carbs, proteins, vitamins
        amount: A float in grams of the amount of ingredient proportional to
            nutritional facts, e.g. 200.0
    """
    def __init__(self, name, nutrition, amount):
        """Returns an ingredient object with name, nutrition, amount"""
self.name = name
self.nutrition = nutrition
self.amount = amount
def __str__(self):
        """Returns a basic string representation of the ingredient"""
return "{0}, has {1} grams of carbs, and is {2} grams total.".format(
self.name, self.nutrition["carbs"], self.amount
)
def scaleAmount(self, n):
        """Returns scaled amount and nutritional facts of ingredient by n"""
pass
egg = ingredient("egg", {"fats": 5, "carbs": 7, "proteins": 14}, 40.0)
print(egg)
!flake8 cookbook.py
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Functional
Step4: Object oriented
Step7: There is a lot happening above.
Step11: There are many more special methods.
Step14: Both of our objects point to the same instance of the list type, so adding a new friend to either object shows up in both (see the short sketch after this step list).
Step15: Objects have their own namespace: although we have created variables called name, age, and friends, they can only be accessed in the context of the object.
Step20: We are not limited to special methods when creating classes. Standard functions, or in this context methods, are an integral part of object oriented programming. Their definition is identical to special methods and functions outside of classes.
Step30: Private vs Public
Step31: Add documentation to the classes and their methods
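A small, hypothetical sketch (the Point class below is illustrative and not part of the notebook) of two more special methods and of why a mutable class attribute is shared across every instance:
class Point(object):
    shared = []  # class attribute: a single list shared by all instances

    def __init__(self, x, y):
        self.x, self.y = x, y  # instance attributes live in each object's own namespace

    def __repr__(self):  # another special method: unambiguous text representation
        return 'Point({0}, {1})'.format(self.x, self.y)

    def __eq__(self, other):  # another special method: value-based equality
        return (self.x, self.y) == (other.x, other.y)

a, b = Point(1, 2), Point(1, 2)
print(a, a == b)        # Point(1, 2) True
a.shared.append('via a')
print(b.shared)         # ['via a'] -- the append made through a is visible from b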
|
5,605
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from selenium import webdriver
import time,re,json,numpy as np
import pandas as pd
from collections import defaultdict,Counter
import matplotlib.pyplot as plt
url = "http://www.imdb.com/list/ls061683439/"
with open('./img/filmfare.json',encoding="utf-8") as f:
datatbl = json.load(f)
driver = webdriver.Chrome(datatbl['data']['chromedriver'])
driver.get(url)
from bs4 import BeautifulSoup
soup = BeautifulSoup(driver.page_source, 'lxml')
lstelem = soup.findAll("div", attrs={"class" : 'info'})
Movielist = []
for lst in lstelem:
Movielist.append((lst.find('b').contents[0]).text)
print("First 10 Movies in the list")
Movielist[0:10]
def ExtractText(Xpath):
textlist=[]
if(Xpath=='Movies_Director_Xpath'):
for item in range(1,123,2):
textlist.append(driver.find_element_by_xpath(datatbl['data'][Xpath]+'[%d]'%item).text)
else:
[textlist.append(item.text) for item in driver.find_elements_by_xpath(datatbl['data'][Xpath])]
return textlist
#Extracting Data from Web
Movies_Votes,Movies_Name,Movies_Ratings,Movies_RunTime=[[] for i in range(4)]
datarepo = [[]]*5
Xpath_list = ['Movies_Name_Xpath','Movies_Rate_Xpath','Movies_Runtime_Xpath','Movies_Votes_Xpath',
'Movies_Director_Xpath']
for i in range(5):
if(i==3):
driver.find_element_by_xpath(datatbl['data']['listview']).click()
if(i==4):
driver.find_element_by_xpath(datatbl['data']['detailview']).click()
datarepo[i] = ExtractText(Xpath_list[i])
driver.quit()
# Movie Name List & Ratings
print(datarepo[0][:5])
print(datarepo[1][:5])
print(datarepo[3][:5])
print(datarepo[4][:5])
print("")
print(datarepo[3][:5])
# Result in a Python Dictionary
Years=range(2015,1954,-1)
result = defaultdict(dict)
for i in range(0,len(datarepo[0])):
result[i]['Movie Name']= datarepo[0][i]
result[i]['Year']= Years[i]
result[i]['Rating']= datarepo[1][i]
result[i]['Votes']= datarepo[3][i]
result[i]['RunTime']= datarepo[2][i]
result[i]['Genre']= datatbl['data']['Genre'][i]
result[i]['Director']= datarepo[4][i]
print(json.dumps((result[0]),indent=4))
print(json.dumps((result),indent=4))
for key,values in result.items():
values['Votes'] = int(values['Votes'].replace(",",""))
values['Rating']= float(values['Rating'])
values['Director']= values['Director'].replace('Director: ','')
try:
values['RunTime'] = int(re.findall(r'\d+',values['RunTime'])[-1])
except TypeError:
values['RunTime'] = np.NaN
except IndexError:
values['RunTime'] = np.NaN
print(json.dumps((result[0]),indent=4))
print(json.dumps((result),indent=4))
# create dataframe
df = pd.DataFrame.from_dict(result,orient='index')
df = df[['Year', 'Movie Name', 'Rating', 'Votes','Genre','RunTime','Director']]
df.index = np.arange(1, 62)
df.head(10)
df
df[df['RunTime'].isnull()==True]
nans = df.shape[0] - df.dropna().shape[0]
print('%d rows have missing values' % nans)
df=df.fillna(int(df['RunTime'].mean()))
df.ix[[41, 49,59]]
df.info()
#Highest Rating Movies
df1=df.sort_values('Rating',ascending=[False]).head(5)
df1.index = np.arange(1, 6)
df1
df.plot(x=df.Year,y=['Rating']);
df1=df.sort_values('Rating',ascending=[True]).head(5)
df1.index = np.arange(1, 6)
df1
#Movies with maximum Run Time
df1=df.sort_values(['RunTime'],ascending=[False]).head(10)
df1.index = np.arange(1, 11)
df1
df.plot(x=df.Year,y=['RunTime']);
df['RunTime'].mean()
df[(df['Rating']>=7)]['Rating'].count()
import seaborn as sns
sns.set_style("white")
Rating_Histdic = defaultdict(dict)
Rating_Histdic['Btwn 6&7'] = df[(df['Rating']>=6)&(df['Rating']<7)]['Rating'].count()
Rating_Histdic['GTEQ 8'] = df[(df['Rating']>=8)]['Rating'].count()
Rating_Histdic['Btwn 7 & 8'] = df[(df['Rating']>=7)&(df['Rating']<8)]['Rating'].count()
plt.bar(range(len(Rating_Histdic)), Rating_Histdic.values(), align='center',color='b',width=0.4)
plt.xticks(range(len(Rating_Histdic)), Rating_Histdic.keys(), rotation=25);
sns.set_style("white")
sns.set_context("notebook")
Rating_Hist = []
import numpy as np
Rating_Hist.append(Rating_Histdic['Btwn 6&7'])
Rating_Hist.append(Rating_Histdic['GTEQ 8'])
Rating_Hist.append(Rating_Histdic['Btwn 7 & 8'])
labels = ['Btwn 6&7', 'GTEQ 8', 'Btwn 7 & 8']
colors = ['red', 'c', 'lightgreen']
plt.pie(Rating_Hist,labels=labels, colors=colors,autopct='%1.1f%%', shadow=True, startangle=90);
plt.figure(figsize=(8, 6));
Category=Counter(datatbl['data']['Genre'])
df1 = pd.DataFrame.from_dict(Category,orient='index')
df1 = df1.sort_values([0],ascending=[False]).head(5)
df1.plot(kind='barh',color=['g','c','m']);
df['freq']= df.groupby('Director')['Director'].transform('count')
df2=df[df['freq']>1]
del df2['freq']
df2.groupby(['Director','Year', 'Movie Name',
'Rating', 'Genre','Votes','RunTime']).count()[0:100]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Initial Setup and Launch the browser to open the URL
Step2: Beautiful Soup
Step3: Getting Data
Step4: Let's extract all the required data, like Ratings, Votes, Genre, and Year of Release, for the Best Movies
Step5: Data in List
Step6: Store Data in a Python Dictionary
Step7: Let's see how the data in the dictionary looks
Step8: Let's clean the data
Step9: Now let's look at the data and see how it looks
Step10: Movie details in dictionary
Step11: Data in Pandas Dataframe
Step12: Let's use some of the Pandas functions now and start the Analysis
Step13: Replace NaN with Mean
Step14: Movies with Highest Ratings
Step15: Rating Trend for Best Movies from last 65 years
Step16: Movies with Lowest Ratings
Step17: Movies with Maximum Run time
Step18: Best Movie Run time
Step19: Mean of the Movie Run Time
Step20: Perform some analysis on the ratings of all the Best won movies
Step21: Movie Ratings Visualization using Bar Graph
Step22: Percentage distribution of the Ratings in a Pie-Chart
Step23: Best Picture by Genre
Step24: Directors whose movie won more than once for Best Film
|
5,606
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
import salib as sl
sl.import_notebooks()
from Tables import Table
from Nodes import Node
from Members import Member
from LoadSets import LoadSet, LoadCombination
from NodeLoads import makeNodeLoad
from MemberLoads import makeMemberLoad
from collections import OrderedDict, defaultdict
import numpy as np
class Object(object):
pass
class Frame2D(object):
def __init__(self,dsname=None):
self.dsname = dsname
self.rawdata = Object()
self.nodes = OrderedDict()
self.members = OrderedDict()
self.nodeloads = LoadSet()
self.memberloads = LoadSet()
self.loadcombinations = LoadCombination()
#self.dofdesc = []
#self.nodeloads = defaultdict(list)
#self.membloads = defaultdict(list)
self.ndof = 0
self.nfree = 0
self.ncons = 0
self.R = None
self.D = None
self.PDF = None # P-Delta forces
COLUMNS_xxx = [] # list of column names for table 'xxx'
def get_table(self,tablename,extrasok=False,optional=False):
columns = getattr(self,'COLUMNS_'+tablename)
t = Table(tablename,columns=columns,optional=optional)
t.read(optional=optional)
reqdl= columns
reqd = set(reqdl)
prov = set(t.columns)
if reqd-prov:
raise Exception('Columns missing {} for table "{}". Required columns are: {}'\
.format(list(reqd-prov),tablename,reqdl))
if not extrasok:
if prov-reqd:
raise Exception('Extra columns {} for table "{}". Required columns are: {}'\
.format(list(prov-reqd),tablename,reqdl))
return t
%%Table nodes
NODEID,X,Y,Z
A,0.,0.,5000.
B,0,4000,5000
C,8000,4000,5000
D,8000,0,5000
@sl.extend(Frame2D)
class Frame2D:
COLUMNS_nodes = ('NODEID','X','Y')
def install_nodes(self):
node_table = self.get_table('nodes')
for ix,r in node_table.data.iterrows():
if r.NODEID in self.nodes:
raise Exception('Multiply defined node: {}'.format(r.NODEID))
n = Node(r.NODEID,r.X,r.Y)
self.nodes[n.id] = n
self.rawdata.nodes = node_table
def get_node(self,id):
try:
return self.nodes[id]
except KeyError:
raise Exception('Node not defined: {}'.format(id))
##test:
f = Frame2D()
##test:
f.install_nodes()
##test:
f.nodes
##test:
f.get_node('C')
%%Table supports
NODEID,C0,C1,C2
A,FX,FY,MZ
D,FX,FY
def isnan(x):
if x is None:
return True
try:
return np.isnan(x)
except TypeError:
return False
@sl.extend(Frame2D)
class Frame2D:
COLUMNS_supports = ('NODEID','C0','C1','C2')
def install_supports(self):
table = self.get_table('supports')
for ix,row in table.data.iterrows():
node = self.get_node(row.NODEID)
for c in [row.C0,row.C1,row.C2]:
if not isnan(c):
node.add_constraint(c)
self.rawdata.supports = table
##test:
f.install_supports()
vars(f.get_node('D'))
%%Table members
MEMBERID,NODEJ,NODEK
AB,A,B
BC,B,C
DC,D,C
@sl.extend(Frame2D)
class Frame2D:
COLUMNS_members = ('MEMBERID','NODEJ','NODEK')
def install_members(self):
table = self.get_table('members')
for ix,m in table.data.iterrows():
if m.MEMBERID in self.members:
raise Exception('Multiply defined member: {}'.format(m.MEMBERID))
memb = Member(m.MEMBERID,self.get_node(m.NODEJ),self.get_node(m.NODEK))
self.members[memb.id] = memb
self.rawdata.members = table
def get_member(self,id):
try:
return self.members[id]
except KeyError:
raise Exception('Member not defined: {}'.format(id))
##test:
f.install_members()
f.members
##test:
m = f.get_member('BC')
m.id, m.L, m.dcx, m.dcy
%%Table releases
MEMBERID,RELEASE
AB,MZK
@sl.extend(Frame2D)
class Frame2D:
COLUMNS_releases = ('MEMBERID','RELEASE')
def install_releases(self):
table = self.get_table('releases',optional=True)
for ix,r in table.data.iterrows():
memb = self.get_member(r.MEMBERID)
memb.add_release(r.RELEASE)
self.rawdata.releases = table
##test:
f.install_releases()
##test:
vars(f.get_member('AB'))
try:
from sst import SST
__SST = SST()
get_section = __SST.section
except ImportError:
def get_section(dsg,fields):
raise ValueError('Cannot lookup property SIZE because SST is not available. SIZE = {}'.format(dsg))
##return [1.] * len(fields.split(',')) # in case you want to do it that way
%%Table properties
MEMBERID,SIZE,IX,A
BC,W460x106,,
AB,W310x97,,
DC,,
@sl.extend(Frame2D)
class Frame2D:
COLUMNS_properties = ('MEMBERID','SIZE','IX','A')
def install_properties(self):
table = self.get_table('properties')
table = self.fill_properties(table)
for ix,row in table.data.iterrows():
memb = self.get_member(row.MEMBERID)
memb.size = row.SIZE
memb.Ix = row.IX
memb.A = row.A
self.rawdata.properties = table
def fill_properties(self,table):
data = table.data
prev = None
for ix,row in data.iterrows():
nf = 0
if type(row.SIZE) in [type(''),type(u'')]:
if isnan(row.IX) or isnan(row.A):
Ix,A = get_section(row.SIZE,'Ix,A')
if isnan(row.IX):
nf += 1
data.loc[ix,'IX'] = Ix
if isnan(row.A):
nf += 1
data.loc[ix,'A'] = A
elif isnan(row.SIZE):
data.loc[ix,'SIZE'] = '' if nf == 0 else prev
prev = data.loc[ix,'SIZE']
table.data = data.fillna(method='ffill')
return table
##test:
f.install_properties()
##test:
vars(f.get_member('DC'))
%%Table node_loads
LOAD,NODEID,DIRN,F
Wind,B,FX,-200000.
@sl.extend(Frame2D)
class Frame2D:
COLUMNS_node_loads = ('LOAD','NODEID','DIRN','F')
def install_node_loads(self):
table = self.get_table('node_loads')
dirns = ['FX','FY','FZ']
for ix,row in table.data.iterrows():
n = self.get_node(row.NODEID)
if row.DIRN not in dirns:
raise ValueError("Invalid node load direction: {} for load {}, node {}; must be one of '{}'"
.format(row.DIRN, row.LOAD, row.NODEID, ', '.join(dirns)))
l = makeNodeLoad({row.DIRN:row.F})
self.nodeloads.append(row.LOAD,n,l)
self.rawdata.node_loads = table
##test:
f.install_node_loads()
##test:
for o,l,fact in f.nodeloads.iterloads('Wind'):
print(o,l,fact,l*fact)
%%Table member_loads
LOAD,MEMBERID,TYPE,W1,W2,A,B,C
Live,BC,UDL,-50,,,,
Live,BC,PL,-200000,,5000
@sl.extend(Frame2D)
class Frame2D:
COLUMNS_member_loads = ('LOAD','MEMBERID','TYPE','W1','W2','A','B','C')
def install_member_loads(self):
table = self.get_table('member_loads')
for ix,row in table.data.iterrows():
m = self.get_member(row.MEMBERID)
l = makeMemberLoad(m.L,row)
self.memberloads.append(row.LOAD,m,l)
self.rawdata.member_loads = table
##test:
f.install_member_loads()
##test:
for o,l,fact in f.memberloads.iterloads('Live'):
print(o.id,l,fact,l.fefs()*fact)
%%Table load_combinations
COMBO,LOAD,FACTOR
One,Live,1.5
One,Wind,1.75
@sl.extend(Frame2D)
class Frame2D:
COLUMNS_load_combinations = ('COMBO','LOAD','FACTOR')
def install_load_combinations(self):
table = self.get_table('load_combinations')
for ix,row in table.data.iterrows():
self.loadcombinations.append(row.COMBO,row.LOAD,row.FACTOR)
self.rawdata.load_combinations = table
##test:
f.install_load_combinations()
##test:
for o,l,fact in f.loadcombinations.iterloads('One',f.nodeloads):
print(o.id,l,fact)
for o,l,fact in f.loadcombinations.iterloads('One',f.memberloads):
print(o.id,l,fact,l.fefs()*fact)
@sl.extend(Frame2D)
class Frame2D:
def iter_nodeloads(self,comboname):
for o,l,f in self.loadcombinations.iterloads(comboname,self.nodeloads):
yield o,l,f
def iter_memberloads(self,comboname):
for o,l,f in self.loadcombinations.iterloads(comboname,self.memberloads):
yield o,l,f
##test:
for o,l,fact in f.iter_nodeloads('One'):
print(o.id,l,fact)
for o,l,fact in f.iter_memberloads('One'):
print(o.id,l,fact)
##test:
Table.CELLDATA
@sl.extend(Frame2D)
class Frame2D:
def install_all(self):
self.install_nodes()
self.install_supports()
self.install_members()
self.install_releases()
self.install_properties()
self.install_node_loads()
self.install_member_loads()
self.install_load_combinations()
f = Frame2D(dsname='frame-6b')
f.install_all()
@sl.extend(Frame2D)
class Frame2D:
def number_dofs(self):
self.ndof = (3*len(self.nodes))
self.ncons = sum([len(node.constraints) for node in self.nodes.values()])
self.nfree = self.ndof - self.ncons
ifree = 0
icons = self.nfree
self.dofdesc = [None] * self.ndof
for node in self.nodes.values():
for dirn,ix in node.DIRECTIONS.items():
if dirn in node.constraints:
n = icons
icons += 1
else:
n = ifree
ifree += 1
node.dofnums[ix] = n
self.dofdesc[n] = (node,dirn)
##test:
f.number_dofs()
##test:
f.ndof, f.ncons, f.nfree
def prhead(txt,ul='='):
    """Print a heading and underline it."""
print()
print(txt)
if ul:
print(ul*(len(txt)//len(ul)))
print()
@sl.extend(Frame2D)
class Frame2D:
def print_nodes(self,precision=0,printdof=False):
prhead('Nodes:')
print('Node X Y Constraints DOF #s')
print('---- ----- ----- ----------- ------')
for nid,node in self.nodes.items():
ct = ','.join(sorted(node.constraints,key=lambda t: Node.DIRECTIONS[t]))
dt = ','.join([str(x) for x in node.dofnums])
print('{:<5s}{:>10.{precision}f}{:>10.{precision}f} {:<11s} {}'\
.format(nid,node.x,node.y,ct,dt,precision=precision))
if not printdof:
return
print()
print('DOF# Node Dirn')
print('---- ---- ----')
for i in range(len(self.dofdesc)):
node,dirn = self.dofdesc[i]
print('{:>4d} {:<4s} {}'.format(i,node.id,dirn))
##test:
f.print_nodes(printdof=True)
@sl.extend(Frame2D)
class Frame2D:
def print_members(self,precision=1):
prhead('Members:')
print('Member Node-J Node-K Length dcx dcy Size Ix A Releases')
print('------ ------ ------ ------ ------- ------- -------- -------- ----- --------')
for mid,memb in self.members.items():
nj = memb.nodej
nk = memb.nodek
rt = ','.join(sorted(memb.releases,key=lambda t: Member.RELEASES[t]))
print('{:<7s} {:<6s} {:<6s} {:>8.{precision}f} {:>8.5f} {:>8.5f} {:<10s} {:>10g} {:>10g} {}'\
.format(memb.id,nj.id,nk.id,memb.L,memb.dcx,memb.dcy,str(memb.size),memb.Ix,memb.A,rt,precision=precision))
##test:
f.print_members()
@sl.extend(Frame2D)
class Frame2D:
def print_loads(self,precision=0):
prhead('Node Loads:')
if self.nodeloads:
print('Type Node FX FY MZ')
print('---- ---- ---------- ---------- ----------')
for lname,node,load in self.nodeloads:
print('{:<4s} {:<4s} {:>10.{precision}f} {:>10.{precision}f} {:>10.{precision}f}'
.format(lname,node.id,load.fx,load.fy,load.mz,precision=precision))
else:
print(" - - - none - - -")
prhead('Member Loads:')
if self.memberloads:
print('Type Member Load')
print('---- ------ ----------------')
for lname,memb,load in self.memberloads:
print("{:<4s} {:<6s} {}".format(lname,memb.id,load))
else:
print(" - - - none - - -")
prhead("Load Combinations:")
if self.loadcombinations:
print('Combo Type Factor')
print('----- ---- ------')
prev = None
for cname,lname,f in self.loadcombinations:
cn = ' '*(len(prev)//2)+'"' if cname == prev else cname
print("{:<5s} {:<4s} {:>6.2f}".format(cn,lname,f))
prev = cname
else:
print(" - - - none - - -")
##test:
f.print_loads()
@sl.extend(Frame2D)
class Frame2D:
def print_input(self):
prhead('Frame '+str(self.dsname)+':')
print()
print(' # of nodal degrees of freedom:',self.ndof)
print(' # of constrained nodal degrees of freedom:',self.ncons)
print('# of unconstrained nodal degrees of freedom:',self.nfree,' (= degree of kinematic indeterminacy)')
m = len(self.members)
r = self.ncons
j = len(self.nodes)
c = len(self.rawdata.releases)
print()
print(' # of members:',m)
print(' # of reactions:',r)
print(' # of nodes:',j)
print(' # of conditions:',c)
print(' degree of static indeterminacy:',(3*m+r)-(3*j+c))
print('\n')
self.print_nodes()
print('\n')
self.print_members()
print('\n')
self.print_loads()
##test:
f.print_input()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Test Frame
Step2: Supports
Step3: Members
Step4: Releases
Step5: Properties
Step6: Node Loads
Step7: Member Loads
Step8: Load Combinations
Step9: Load Iterators
Step10: Accumulated Cell Data
Step11: Input Everything
Step12: Number the DOFs
Step14: Display Nodes
Step15: Display Members
Step16: Display loads
|
5,607
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
!pip install -q tflite-model-maker
import os
import numpy as np
import tensorflow as tf
assert tf.__version__.startswith('2')
from tflite_model_maker import model_spec
from tflite_model_maker import image_classifier
from tflite_model_maker.config import ExportFormat
from tflite_model_maker.config import QuantizationConfig
from tflite_model_maker.image_classifier import DataLoader
import matplotlib.pyplot as plt
image_path = tf.keras.utils.get_file(
'flower_photos.tgz',
'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
extract=True)
image_path = os.path.join(os.path.dirname(image_path), 'flower_photos')
data = DataLoader.from_folder(image_path)
train_data, test_data = data.split(0.9)
model = image_classifier.create(train_data)
loss, accuracy = model.evaluate(test_data)
model.export(export_dir='.')
image_path = tf.keras.utils.get_file(
'flower_photos.tgz',
'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
extract=True)
image_path = os.path.join(os.path.dirname(image_path), 'flower_photos')
data = DataLoader.from_folder(image_path)
train_data, rest_data = data.split(0.8)
validation_data, test_data = rest_data.split(0.5)
plt.figure(figsize=(10,10))
for i, (image, label) in enumerate(data.gen_dataset().unbatch().take(25)):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(image.numpy(), cmap=plt.cm.gray)
plt.xlabel(data.index_to_label[label.numpy()])
plt.show()
model = image_classifier.create(train_data, validation_data=validation_data)
model.summary()
loss, accuracy = model.evaluate(test_data)
# A helper function that returns 'red'/'black' depending on if its two input
# parameter matches or not.
def get_label_color(val1, val2):
if val1 == val2:
return 'black'
else:
return 'red'
# Then plot 100 test images and their predicted labels.
# If a prediction result is different from the label provided label in "test"
# dataset, we will highlight it in red color.
plt.figure(figsize=(20, 20))
predicts = model.predict_top_k(test_data)
for i, (image, label) in enumerate(test_data.gen_dataset().unbatch().take(100)):
ax = plt.subplot(10, 10, i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(image.numpy(), cmap=plt.cm.gray)
predict_label = predicts[i][0][0]
color = get_label_color(predict_label,
test_data.index_to_label[label.numpy()])
ax.xaxis.label.set_color(color)
plt.xlabel('Predicted: %s' % predict_label)
plt.show()
model.export(export_dir='.')
model.export(export_dir='.', export_format=ExportFormat.LABEL)
model.evaluate_tflite('model.tflite', test_data)
config = QuantizationConfig.for_float16()
model.export(export_dir='.', tflite_filename='model_fp16.tflite', quantization_config=config)
model = image_classifier.create(train_data, model_spec=model_spec.get('mobilenet_v2'), validation_data=validation_data)
loss, accuracy = model.evaluate(test_data)
inception_v3_spec = image_classifier.ModelSpec(
uri='https://tfhub.dev/google/imagenet/inception_v3/feature_vector/1')
inception_v3_spec.input_image_shape = [299, 299]
model = image_classifier.create(train_data, validation_data=validation_data, epochs=10)
loss, accuracy = model.evaluate(test_data)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Image classification with TensorFlow Lite Model Maker
Step2: Import the required packages.
Step3: Simple End-to-End Example
Step4: You could replace image_path with your own image folders. To upload data to Colab, find the upload button in the left sidebar (shown in the image below with the red rectangle). Just try uploading a zip file and unzipping it. The root file path is the current path.
Step5: Step 2. Customize the TensorFlow model.
Step6: Step 3. Evaluate the model.
Step7: Step 4. Export to TensorFlow Lite model.
Step8: After these simple 4 steps, we can use the TensorFlow Lite model file in on-device applications such as the image classification reference app.
Step 1
Step9: Use DataLoader class to load data.
Step10: Split it into training data (80%), validation data (10%, optional) and testing data (10%).
Step11: Show 25 image examples with labels.
Step12: Step 2
Step13: Have a look at the detailed model structure.
Step14: Step 3
Step15: We could plot the predicted results in 100 test images. Predicted labels with red color are the wrong predicted results while others are correct.
Step16: If the accuracy doesn't meet the app requirement, one could refer to Advanced Usage to explore alternatives such as changing to a larger model, adjusting re-training parameters etc.
Step 4
Step17: See example applications and guides of image classification for more details about how to integrate the TensorFlow Lite model into mobile apps.
Step18: You can also evaluate the tflite model with the evaluate_tflite method.
Step19: Advanced Usage
Step20: Then we export the TensorFlow Lite model with this configuration.
Step21: In Colab, you can download the model named model_fp16.tflite from the left sidebar, same as the uploading part mentioned above.
Step22: Evaluate the newly retrained MobileNetV2 model to see the accuracy and loss in testing data.
Step23: Change to the model in TensorFlow Hub
Step24: Then, by setting the parameter model_spec to inception_v3_spec in the create method, we can retrain the Inception V3 model.
Step25: Evaluate the newly retrained model with 10 training epochs.
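As referenced at Step 4, here is a minimal sketch of loading and splitting your own image folder with the DataLoader class (Steps 9-10). The zip/folder name 'flower_photos' is an illustrative assumption, and in older tflite-model-maker releases the loader is exposed as ImageClassifierDataLoader instead of DataLoader.
import os, zipfile
from tflite_model_maker.image_classifier import DataLoader

# Unzip an uploaded archive (hypothetical file name) and point image_path at it.
with zipfile.ZipFile('flower_photos.zip') as z:
    z.extractall('.')
image_path = os.path.join(os.getcwd(), 'flower_photos')

# Load from the folder, then split 80% / 10% / 10% as in Steps 9-10.
data = DataLoader.from_folder(image_path)
train_data, rest_data = data.split(0.8)
validation_data, test_data = rest_data.split(0.5)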
|
5,608
|
<ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
for bid in range(1,5):
for sid in range(9):
batch_id = bid
sample_id = sid
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
def normalize(x):
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
return x/255.0
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_normalize(normalize)
def one_hot_encode(x):
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
# y = np.zeros((len(x), 10))
# for i in range(len(x)):
# y[i,x[i]] = 1
return np.eye(10)[x]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_one_hot_encode(one_hot_encode)
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
DON'T MODIFY ANYTHING IN THIS CELL
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
import tensorflow as tf
def neural_net_image_input(image_shape):
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
return tf.placeholder(tf.float32, shape=[None] + list(image_shape), name="x")
def neural_net_label_input(n_classes):
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
return tf.placeholder(tf.float32, [None, n_classes], name="y")
def neural_net_keep_prob_input():
Return a Tensor for keep probability
: return: Tensor for keep probability.
return tf.placeholder(tf.float32, name="keep_prob") #dropout (keep probability)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernal size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernal size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
# Filter (weights and bias)
# The shape of the filter weight is (height, width, input_depth, output_depth)
# The shape of the filter bias is (output_depth,)
# TODO: Define the filter weights `F_W` and filter bias `F_b`.
# NOTE: Remember to wrap them in `tf.Variable`, they are trainable parameters after all.
input_shape = x_tensor.shape # (batch_size, height, width, depth)
input_height = int(input_shape[1])
input_width = int(input_shape[2])
input_depth = int(input_shape[3])
filter_height = conv_ksize[0] # since it's a 2-D Tuple for the convolutional layer for spatial dimension
filter_width = conv_ksize[1] # since it's a 2-D Tuple for the convolutional layer for spatial dimension
filter_depth = conv_num_outputs # since we want to increase depth to 'conv_num_outputs' as output_shape is (1,2,2,conv_num_outputs)
# TODO: Set the stride for each dimension (batch_size, height, width, depth)
strides = [1] + list(conv_strides) + [1]
output_height = np.ceil(float(input_height) / float(strides[1])) # for SAME padding
output_width = np.ceil(float(input_width) / float(strides[2])) # for SAME padding
output_depth = conv_num_outputs
# shape of filter weight is (height, width, input_depth, output_depth)
F_shape = [int(filter_height), int(filter_width), int(input_depth), int(output_depth)]
initializer = tf.contrib.layers.xavier_initializer()
F_W = tf.Variable(initializer(F_shape))
# shape of the filter bias is (output_depth,)
# F_b = tf.Variable(tf.zeros(conv_num_outputs))
F_b = tf.Variable(tf.constant(0.1, shape=[conv_num_outputs,]))
# TODO: set the padding, either 'VALID' or 'SAME'.
padding = 'SAME'
# https://www.tensorflow.org/versions/r0.11/api_docs/python/nn.html#conv2d
# `tf.nn.conv2d` does not include the bias computation so we have to add it ourselves after.
conv_layer = tf.nn.conv2d(x_tensor, F_W, strides, padding)
conv_layer = tf.nn.bias_add(conv_layer, F_b)
conv_layer = tf.nn.elu(conv_layer)
# TODO: Set the ksize (filter size) for each dimension (batch_size, height, width, depth)
ksize = [1] + list(pool_ksize) + [1]
# TODO: Set the stride for each dimension (batch_size, height, width, depth)
strides = [1] + list(pool_strides) + [1]
# TODO: set the padding, either 'VALID' or 'SAME'.
padding = 'SAME'
# TODO: Implement Function
conv_layer = tf.nn.max_pool(conv_layer, ksize, strides, padding)
return conv_layer
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_con_pool(conv2d_maxpool)
def flatten(x_tensor):
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
dims = x_tensor.shape.as_list() # e.g. (1, 32, 32, 5) => dims[1:] = (32, 32, 5)
return tf.reshape(x_tensor,[-1, np.prod(dims[1:])]) # e.g. for a greyscale image of 32x32x1 size prod = 1024
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_flatten(flatten)
def fully_conn(x_tensor, num_outputs):
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
dims = x_tensor.get_shape().as_list()
shape = list( (dims[-1],) + (num_outputs,)) #list((dims[-1], num_outputs))
weight = tf.Variable(tf.truncated_normal(shape, 0, 0.1))
bias = tf.Variable(tf.zeros(num_outputs))
# initializer = tf.contrib.layers.xavier_initializer()
# weight = tf.Variable(initializer(shape))
# bias = tf.Variable(tf.constant(0.1, shape=[num_outputs,]))
fc1 = tf.matmul(x_tensor, weight)
fc1 = tf.add(fc1, bias)
fc1 = tf.nn.elu(fc1)
return fc1
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_fully_conn(fully_conn)
def output(x_tensor, num_outputs):
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
# TODO: Implement Function
dims = x_tensor.get_shape().as_list()
shape = list( (dims[-1],) + (num_outputs,)) #list((dims[-1], num_outputs))
weight = tf.Variable(tf.truncated_normal(shape, 0, 0.1))
bias = tf.Variable(tf.zeros(num_outputs))
return tf.add(tf.matmul(x_tensor, weight), bias)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_output(output)
def conv_net(x, keep_prob):
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
conv = conv2d_maxpool(x, conv_num_outputs=20, conv_ksize=(2,2), conv_strides=(1,1), pool_ksize=(4,4), pool_strides=(1,1))
# Here's more info on the architecture of conv nets.
# Usually we don't apply dropout to convolutional layers because they already have a lot of regularization built-in.
# conv = tf.nn.dropout(conv, keep_prob)
# TODO: Apply a Flatten Layer
# Function Definition from Above:
conv = flatten(conv)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
conv = fully_conn(conv, 384)
conv = tf.nn.dropout(conv, keep_prob)
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
conv = output(conv, 10)
# TODO: return output
return conv
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
# Launch the graph
session.run(optimizer, feed_dict={x:feature_batch, y:label_batch, keep_prob:keep_probability})
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_train_nn(train_neural_network)
def print_stats(session, feature_batch, label_batch, cost, accuracy):
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
# TODO: Implement Function
loss = session.run(cost, feed_dict={x:feature_batch, y:label_batch, keep_prob:1.0})
valid_accuracy = session.run(accuracy, feed_dict={x: valid_features,
y: valid_labels,
keep_prob: 1.0})
print("Loss = " + "{:.6f}".format(loss) + ", Validation accuracy= " + "{:.5f}".format(valid_accuracy))
# TODO: Tune Parameters
epochs = 100
batch_size = 256
keep_probability = 0.75
DON'T MODIFY ANYTHING IN THIS CELL
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
DON'T MODIFY ANYTHING IN THIS CELL
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
DON'T MODIFY ANYTHING IN THIS CELL
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
Test the saved model against the test dataset
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Image Classification
Step2: Explore the Data
Step5: Implement Preprocess Functions
Step8: One-hot encode
Step10: Randomize Data
Step12: Check Point
Step17: Build the network
Step20: Convolution and Max Pooling Layer
Step23: Flatten Layer
Step26: Fully-Connected Layer
Step29: Output Layer
Step32: Create Convolutional Model
Step35: Train the Neural Network
Step37: Show Stats
Step38: Hyperparameters
Step40: Train on a Single CIFAR-10 Batch
Step42: Fully Train the Model
Step45: Checkpoint
|
5,609
|
<ASSISTANT_TASK:>
Python Code:
from nltk.featstruct import FeatStruct
f1 = FeatStruct(
'[Vorname=Max, Nachname=Mustermann,' +
'Privat=[Strasse=Hauptstrasse, Ort=[Muenchen]]]'
)
f2 = FeatStruct(
'[Arbeit=[Strasse="Oettingenstrasse", Ort=(1)["Muenchen"]],' +
'Privat=[Ort->(1)]]')
f3 = FeatStruct(
'[Strasse="Hauptstrasse"]'
)
f4 = FeatStruct(
'[Privat=[Strasse="Hauptstrasse", Ort=["Passau"]]]'
)
print(f1.unify(f2).__repr__())
print(f2.unify(f4).__repr__())
grammar =
S -> NP[*CASE*=nom] VP
NP[*CASE*=?x] -> DET[*CASE*=?x,GEN=?y] NOM[*CASE*=?x,GEN=?y]
NOM[*CASE*=?x,GEN=?y] -> N[*CASE*=?x,GEN=?y] NP[*CASE*=gen]
NOM[*CASE*=?x,GEN=?y] -> N[*CASE*=?x,GEN=?y]
VP -> V
V -> "schläft"
DET[*CASE*=nomakk,GEN=fem] -> "die"
DET[*CASE*=nomakk,GEN=neut] -> "das"
DET[*CASE*=gen,GEN=mask] -> "des"
DET[*CASE*=gen,GEN=neut] -> "des"
DET[*CASE*=nom,GEN=mask] -> "der"
DET[*CASE*=gen,GEN=fem] -> "der"
N[*CASE*=nongen,GEN=mask] -> "Mann"
N[*CASE*=nongen,GEN=fem] -> "Frau"
N[*CASE*=nongen,GEN=neut] -> "Kind"
N[*CASE*=gen,GEN=fem] -> "Frau"
N[*CASE*=gen,GEN=mask] -> "Mannes"
N[*CASE*=gen,GEN=neut] -> "Kindes"
from IPython.display import display
import nltk
from typed_features import HierarchicalFeature, TYPE
type_hierarchy = {
"gen": [],
"nongen": ["nomakk", "dat"],
"nomakk": ["nom", "akk"],
"nom": [],
"dat": [],
"akk": []
}
CASE = HierarchicalFeature("CASE", type_hierarchy)
compiled_grammar = nltk.grammar.FeatureGrammar.fromstring(
grammar, features=(CASE, TYPE)
)
parser = nltk.FeatureEarleyChartParser(compiled_grammar)
for t in parser.parse("das Kind der Frau schläft".split()):
display(t)
list(parser.parse("des Mannes schläft".split()))
for t in parser.parse("der Mann der Frau schläft".split()):
display(t)
print(f1.unify(f4).__repr__())
print(f2.unify(f3).__repr__())
redundant_grammar =
S -> NP[KAS=nom] VP
NP[KAS=?y] -> DET[GEN=?x,KAS=?y] NOM[GEN=?x,KAS=?y]
NOM[GEN=?x,KAS=?y] -> N[GEN=?x,KAS=?y] NP[KAS=gen]
NOM[GEN=?x,KAS=?y] -> N[GEN=?x,KAS=?y]
DET[GEN=mask,KAS=nom] -> "der"
DET[GEN=mask,KAS=gen] -> "des"
DET[GEN=mask,KAS=dat] -> "dem"
DET[GEN=mask,KAS=akk] -> "den"
DET[GEN=fem,KAS=nom] -> "die"
DET[GEN=fem,KAS=gen] -> "der"
DET[GEN=fem,KAS=dat] -> "der"
DET[GEN=fem,KAS=akk] -> "die"
DET[GEN=neut,KAS=nom] -> "das"
DET[GEN=neut,KAS=gen] -> "des"
DET[GEN=neut,KAS=dat] -> "dem"
DET[GEN=neut,KAS=akk] -> "das"
N[GEN=mask,KAS=nom] -> "Mann"
N[GEN=mask,KAS=gen] -> "Mannes"
N[GEN=mask,KAS=dat] -> "Mann"
N[GEN=mask,KAS=akk] -> "Mann"
N[GEN=fem,KAS=nom] -> "Frau"
N[GEN=fem,KAS=gen] -> "Frau"
N[GEN=fem,KAS=dat] -> "Frau"
N[GEN=fem,KAS=akk] -> "Frau"
N[GEN=neut,KAS=nom] -> "Buch"
N[GEN=neut,KAS=gen] -> "Buches"
N[GEN=neut,KAS=dat] -> "Buch"
N[GEN=neut,KAS=akk] -> "Buch"
VP -> V NP[KAS=dat] NP[KAS=akk]
V -> "gibt" | "schenkt"
pos_sentences = [
"der Mann gibt der Frau das Buch",
"die Frau des Mannes gibt dem Mann der Frau das Buch des Buches"
]
neg_sentences = [
]
import sys
def test_grammar(grammar, sentences):
cfg = nltk.grammar.FeatureGrammar.fromstring(grammar)
parser = nltk.parse.FeatureEarleyChartParser(cfg)
for i, sent in enumerate(sentences, 1):
print("Satz {}: {}".format(i, sent))
results = parser.parse(sent.split())
analyzed = False
for tree in results:
print(tree) # or display(tree)
analyzed = True
if not analyzed:
print("Keine Analyse möglich", file=sys.stderr)
test_grammar(redundant_grammar, pos_sentences)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Given the following feature structures
Step2: Unify
Step3: f2 with f4
Step5: Exercise 2: Type hierarchy in NLTK
Step6: Here the type hierarchy has to be defined as a dictionary
Step7: The following should work
Step8: The following should be empty
Step9: The following should work again. Look closely at the features in the syntax tree.
Step10: Homework
Step11: f2 with f3
Step13: Exercise 4: Less redundancy thanks to special features
Step14: Test with your own negative examples! (A sketch with sample negative sentences follows.)
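A minimal sketch of possible negative examples for Step 14 — sentences the toy grammar above should reject because of case or gender clashes. They are only an illustration, not part of the original exercise solution.
neg_sentences = [
    "das Mann gibt der Frau das Buch",   # gender clash: neuter determiner with a masculine noun
    "der Mann gibt die Frau das Buch",   # case clash: the indirect object must be dative
]
test_grammar(redundant_grammar, neg_sentences)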
|
5,610
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import tensorflow as tf
from tensorflow.contrib import rnn
class SeriesPredictor:
def __init__(self, input_dim, seq_size, hidden_dim=10):
# Hyperparameters
self.input_dim = input_dim
self.seq_size = seq_size
self.hidden_dim = hidden_dim
# Weight variables and input placeholders
self.W_out = tf.Variable(tf.random_normal([hidden_dim, 1]), name='W_out')
self.b_out = tf.Variable(tf.random_normal([1]), name='b_out')
self.x = tf.placeholder(tf.float32, [None, seq_size, input_dim])
self.y = tf.placeholder(tf.float32, [None, seq_size])
# Cost optimizer
self.cost = tf.reduce_mean(tf.square(self.model() - self.y))
self.train_op = tf.train.AdamOptimizer().minimize(self.cost)
# Auxiliary ops
self.saver = tf.train.Saver()
def model(self):
:param x: inputs of size [batch_size, seq_size, input_dim]
:param W: matrix of fully-connected output layer weights
:param b: vector of fully-connected output layer biases
cell = rnn.BasicLSTMCell(self.hidden_dim, reuse=tf.get_variable_scope().reuse)
outputs, states = tf.nn.dynamic_rnn(cell, self.x, dtype=tf.float32)
num_examples = tf.shape(self.x)[0]
W_repeated = tf.tile(tf.expand_dims(self.W_out, 0), [num_examples, 1, 1])
out = tf.matmul(outputs, W_repeated) + self.b_out
out = tf.squeeze(out)
return out
def train(self, train_x, train_y):
with tf.Session() as sess:
tf.get_variable_scope().reuse_variables()
sess.run(tf.global_variables_initializer())
for i in range(1000):
_, mse = sess.run([self.train_op, self.cost], feed_dict={self.x: train_x, self.y: train_y})
if i % 100 == 0:
print(i, mse)
save_path = self.saver.save(sess, 'model.ckpt')
print('Model saved to {}'.format(save_path))
def test(self, test_x):
with tf.Session() as sess:
tf.get_variable_scope().reuse_variables()
self.saver.restore(sess, './model.ckpt')
output = sess.run(self.model(), feed_dict={self.x: test_x})
return output
if __name__ == '__main__':
predictor = SeriesPredictor(input_dim=1, seq_size=4, hidden_dim=10)
train_x = [[[1], [2], [5], [6]],
[[5], [7], [7], [8]],
[[3], [4], [5], [7]]]
train_y = [[1, 3, 7, 11],
[5, 12, 14, 15],
[3, 7, 9, 12]]
predictor.train(train_x, train_y)
test_x = [[[1], [2], [3], [4]], # 1, 3, 5, 7
[[4], [5], [6], [7]]] # 4, 9, 11, 13
actual_y = [[[1], [3], [5], [7]],
[[4], [9], [11], [13]]]
pred_y = predictor.test(test_x)
print("\nLets run some tests!\n")
for i, x in enumerate(test_x):
print("When the input is {}".format(x))
print("The ground truth output should be {}".format(actual_y[i]))
print("And the model thinks it is {}\n".format(pred_y[i]))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Define the RNN model
Step3: Now, we'll train a series predictor. Let's say we have a sequence of numbers [a, b, c, d] that we want to transform into [a, a+b, b+c, c+d]. We'll give the RNN a couple of examples in the training data. Let's see how well it learns this intended transformation. (A quick NumPy sanity check of this mapping follows.)
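A quick, pure-NumPy sanity check of the target transformation from Step 3 (the helper name is ours, not part of the original code):
import numpy as np

def shifted_sum(seq):
    # [a, b, c, d] -> [a, a+b, b+c, c+d]
    seq = np.asarray(seq)
    return np.concatenate(([seq[0]], seq[:-1] + seq[1:]))

print(shifted_sum([1, 2, 3, 4]))  # -> [1 3 5 7], matching the ground truth used in the test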
|
5,611
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
from qutip import *
from qutip import rcsolve
Del = 1.0 # Coupling (tunneling term) between the two levels.
wq = 0.5 # Energy of the 2-level system.
Hsys = 0.5 * wq * sigmaz() + 0.5 * Del * sigmax()
Q = sigmaz()
wc = 0.05 # Cutoff frequency.
alpha = 2.5/pi # Coupling strength.
N = 10 # Number of cavity fock states.
Temperature = 1/0.95 # Temperature.
tlist = np.linspace(0, 40, 600)
initial_state = basis(2,1) * basis(2,1).dag() # Initial state of the system.
return_vals = [Q] # List for which to calculate expectation value
options = Options(nsteps=15000, store_states=True) # Options for the solver.
output = rcsolve(Hsys, initial_state, tlist, return_vals, Q, wc, alpha, N,
Temperature, options=options)
fig, axes = subplots(1, 1, sharex=True, figsize=(8,4))
axes.plot(tlist, real(output.expect[0]), 'b', linewidth=2, label="P12")
axes.legend(loc=0)
output.states[0]
t_idx_vec = range(0,len(tlist),200)
fig, axes = subplots(len(t_idx_vec), 1, sharey=True, figsize=(8,2*len(t_idx_vec)))
for idx, t_idx in enumerate(t_idx_vec):
psi_a = ptrace(output.states[t_idx], 0)
cont1 = axes[idx].bar(range(0, N), real(psi_a.diag()))
fig.tight_layout()
xvec = linspace(-5,5,200)
t_idx_vec = range(0,len(tlist),200)
fig, axes = subplots(len(t_idx_vec), 1, sharex=True, sharey=True, figsize=(8,4*len(t_idx_vec)))
for idx, t_idx in enumerate(t_idx_vec):
psi_a = ptrace(output.states[t_idx], 0)
W_a = wigner(psi_a, xvec, xvec)
cont1 = axes[idx].contourf(xvec, xvec, W_a, 100)
from qutip.ipynbtools import version_table
version_table()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Defining the 2-level system
Step2: Defining the coupling Q such that
Step3: Plotting the TLS state occupation
Step4: Plotting the photon distributions at arbitrary times
Step5: Plotting the wigner function at arbitrary times
Step6: Software versions
|
5,612
|
<ASSISTANT_TASK:>
Python Code:
# import data from url
from py2cytoscape.data.cyrest_client import CyRestClient
# Create REST client for Cytoscape
cy = CyRestClient()
# Reset current session for fresh start
cy.session.delete()
# Load a sample network
network = cy.network.create_from('http://chianti.ucsd.edu/~kono/data/galFiltered.sif')
# Apply layout to the cytoscape network object
cy.layout.apply(network = network)
# png
from IPython.display import Image
network_png = network.get_png()
Image(network_png)
# svg
from IPython.display import SVG
network_svg = network.get_svg()
SVG(network_svg)
# pdf
network_pdf = network.get_pdf()
# save the file
f = open('resultImage/scale_free_500.pdf', 'wb')
f.write(network_pdf)
f.close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Save image as png
Step2: Save image as svg
Step3: Save image as pdf
|
5,613
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
from pandas import Series,DataFrame
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('whitegrid')
%matplotlib inline
from sklearn.datasets import load_boston
# Load the housing dataset
boston = load_boston()
print(boston.DESCR)
# Histogram of prices (this is the target of our dataset)
plt.hist(boston.target,bins=50)
#label
plt.xlabel('Price in $1000s')
plt.ylabel('Number of houses')
# Plot the column at the 5 index (Labeled RM)
plt.scatter(boston.data[:,5],boston.target)
#label
plt.ylabel('Price in $1000s')
plt.xlabel('Number of rooms')
# reset data as pandas DataFrame
boston_df = DataFrame(boston.data)
# label columns
boston_df.columns = boston.feature_names
#show
boston_df.head()
# Set price column for target
boston_df['Price'] = boston.target
# Show result
boston_df.head()
# Using seabron to create a linear fit
sns.lmplot('RM','Price',data = boston_df)
# Quick display of image form wikipedia
from IPython.display import Image
url = 'http://upload.wikimedia.org/wikipedia/commons/thumb/b/b0/Linear_least_squares_example2.svg/220px-Linear_least_squares_example2.svg.png'
Image(url)
# Set up X as median room values
X = boston_df.RM
# Use vstack to make X two-dimensional
X = np.vstack(boston_df.RM)
# Set up Y as the target price of the houses.
Y = boston_df.Price
# Create the X array in the form [X 1]
X = np.array( [ [value,1] for value in X ] )
# Now get out m and b values for our best fit line
m, b = np.linalg.lstsq(X, Y)[0]
# First the original points, Price vs Avg Number of Rooms
plt.plot(boston_df.RM,boston_df.Price,'o')
# Next the best fit line
x= boston_df.RM
plt.plot(x, m*x + b,'r',label='Best Fit Line')
# Get the resulting array
result = np.linalg.lstsq(X,Y)
# Get the total error
error_total = result[1]
# Get the root mean square error
rmse = np.sqrt(error_total/len(X) )
# Print
print("The root mean squared error was %.2f " %rmse)
# Import for Linear Regression
import sklearn
import sklearn.cross_validation  # needed so sklearn.cross_validation.train_test_split below resolves (sklearn.model_selection in modern versions)
from sklearn.linear_model import LinearRegression
# Create a LinearRegression Object
lreg = LinearRegression()
# Data Columns
X_multi = boston_df.drop('Price',1)
# Targets
Y_target = boston_df.Price
# Implement Linear Regression
lreg.fit(X_multi,Y_target)
print(' The estimated intercept coefficient is %.2f ' %lreg.intercept_)
print(' The number of coefficients used was %d ' % len(lreg.coef_))
# Set a DataFrame from the Features
coeff_df = DataFrame(boston_df.columns)
coeff_df.columns = ['Features']
# Set a new column lining up the coefficients from the linear regression
coeff_df["Coefficient Estimate"] = pd.Series(lreg.coef_)
# Show
coeff_df
# Grab the output and set as X and Y test and train data sets!
X_train, X_test, Y_train, Y_test = sklearn.cross_validation.train_test_split(X,boston_df.Price)
# Residual plot of all the dataset using seaborn
sns.residplot('RM', 'Price', data = boston_df)
# Print shapes of the training and testing data sets
print(X_train.shape, X_test.shape, Y_train.shape, Y_test.shape)
# Create our regression object
lreg = LinearRegression()
# Once again do a linear regression, except only on the training sets this time
lreg.fit(X_train,Y_train)
# Predictions on training and testing sets
pred_train = lreg.predict(X_train)
pred_test = lreg.predict(X_test)
print("Fit a model X_train, and calculate MSE with Y_train: %.2f" % np.mean((Y_train - pred_train) ** 2))
print("Fit a model X_train, and calculate MSE with X_test and Y_test: %.2f" %np.mean((Y_test - pred_test) ** 2))
# Scatter plot the training data
train = plt.scatter(pred_train,(Y_train-pred_train),c='b',alpha=0.5)
# Scatter plot the testing data
test = plt.scatter(pred_test,(Y_test-pred_test),c='r',alpha=0.5)
# Plot a horizontal axis line at 0
plt.hlines(y=0,xmin=-10,xmax=50)
#Labels
plt.legend((train,test),('Training','Test'),loc='lower left')
plt.title('Residual Plots')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Imports for plotting
Step2: Now import dataset from scikit learn as well as the linear_model module. Note
Step3: Next we'll download the data set
Step4: Let's see what the data set contains
Step5: Step 2
Step6: Interesting, now let's see a scatter plot of one feature, versus the target. In this case we'll use the housing price versus the number of rooms in the dwelling.
Step7: Great! Now we can make out a slight trend that price increases along with the number of rooms in that house, which intuitively makes sense! Now let's use scikit learn to see if we can fit the data linearly.
Step8: Now let's add the target of the boston data set, the price. We'll create a new column in our DataFrame.
Step9: Now let's see the resulting DataFrame!
Step10: Now, you might be reminded of the seaborn lmplot function we used during the visualization lectures. You could use it here to do a linear fit automatically!
Step11: However, we won't be able to do this when we move to more complicated regression models, so we'll stay focused on using the scikit learn library!
Step 3
Step12: Now as before, we're labeling each green line as having a distance D, and each red point as having a coordinate of (X,Y). Then we can define our best fit line as the line having the property where the total squared distance $D_1^2+D_2^2+\dots+D_N^2$ is as small as possible. (A quick numerical cross-check of the resulting fit appears after this step list.)
Step13: Now that we have our X and Y, let's go ahead and use numpy to create the single variable linear regression.
Step14: Great! Now we can get the best fit values!
Step15: Finally let's plot it all together! Note that we use the original format of the boston information. We only did our matrix transformations to utilize the numpy least square method.
Step16: Step 5
Step17: Since the root mean square error (RMSE) corresponds approximately to the standard deviation we can now say that the price of a house won't vary more than 2 times the RMSE 95% of the time. Note
Step18: Next, we create a LinearRegression object; afterwards, type lreg. and press tab to see the list of methods available on this object.
Step19: The functions we will be using are
Step20: Finally, we're ready to pass the X and Y using the linear regression object.
Step21: Let's go ahead and check the intercept and number of coefficients.
Step22: Great! So we have basically made an equation for a line, but instead of just one coefficient m and an intercept b, we now have 13 coefficients. To get an idea of what this looks like, check out the documentation for this equation
Step23: Just like we initially plotted out, it seems the highest correlation between a feature and a house price was the number of rooms.
Step 7
Step24: Let's go ahead and see what the output of the train_test_split was
Step25: Great! Now that we have our training and testing sets, we can continue on to predicting prices based on the multiple variables.
Step 8
Step26: Now run a prediction on both the X training set and the testing set.
Step27: Now we will get the mean square error
Step28: It looks like our mean square error between our training and testing was pretty close. But how do we actually visualize this?
Step 9
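As referenced at Step 12, a quick cross-check of the single-variable fit from Steps 13-14: np.polyfit performs the same least-squares fit and should return (essentially) the same slope and intercept as np.linalg.lstsq. Purely illustrative.
# Cross-check the slope m and intercept b obtained above
m_check, b_check = np.polyfit(boston_df.RM, boston_df.Price, deg=1)
print(m_check, b_check)  # should agree with m, b from np.linalg.lstsq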
|
5,614
|
<ASSISTANT_TASK:>
Python Code:
#here we define sympy symbols to be used in the analytic calculations
g,mu,b,D,k=sympy.symbols('gamma mu B Delta k',real=True)
# onsite and hopping terms
U=sympy.Matrix([[-mu+b,g,0,D],
[g,-mu-b,-D,0],
[0,-D,mu-b,-g],
[D,0,-g,mu+b]])
T=sympy.Matrix([[0,0,0,0],
[g,0,0,0],
[0,0,0,0],
[0,0,-g,0]])
Hk=sympy.exp(sympy.I*k)*T+sympy.exp(-sympy.I*k)*T.transpose()+U
Hk
# this is where we will keep the eigenvalues of the BdG matrix
bdgspectr=list(Hk.eigenvals())
# in this list we keep the eigenvalues of the particle block
wspect=list(
((sympy.exp(sympy.I*k).rewrite(sin)*T+
sympy.exp(-sympy.I*k).rewrite(sin)*T.transpose()+
U)[:2,:2]).eigenvals()
)
wspect
#Pauli matrices to be used in symbolic calculations
S1=sympy.physics.matrices.msigma(1)
S2=sympy.physics.matrices.msigma(2)
S3=sympy.physics.matrices.msigma(3)
S0=S1*S1
Kron(S1,S0)
P,D=Kron(S1,S0).diagonalize()
P*Hk*P.inv()
detblock=sympy.simplify((
((( P*( Hk )*P.inv() )[:2,2:]).det())
).rewrite(sin))
sympy.re(detblock)
sympy.im(detblock)
figsize(12,6)
@interact(mu=(-3,3,0.1),B=(0,2,0.1),gamma=fixed(1),Delta=(0,2,0.1))
def spectr_wire(mu=0,B=0,gamma=1,Delta=0):
# this part produces the spectrum
subplot(121)
k=linspace(0,2*pi,100)
I=1j
# evaluating the BdG spectra
plot(k,real(eval(str(bdgspectr[0]))),
'o',lw=3,label='BdG',mec='green',mfc='green',alpha=0.5,mew=0)
for i in [1,2,3]:
plot(k,real(eval(str(bdgspectr[i]))),
'o',lw=3,mec='green',mfc='green',alpha=0.5,mew=0)
# evaluating the particle and the hole spectra without superconductivity
plot(k,eval(str(wspect[0])),'r-',lw=3,label='particle')
plot(k,eval(str(wspect[1])),'r-',lw=3)
plot(k,-eval(str(wspect[0])),'b--',lw=3,label='hole')
plot(k,-eval(str(wspect[1])),'b--',lw=3)
plot(k,0*k,'k-',lw=4)
grid()
xlim(0,2*pi)
ylim(-3,3)
xlabel(r'$k$',fontsize=20)
ylabel(r'$E$',fontsize=20)
legend(fontsize=20)
# this part produces the winding plot
subplot(122)
plot(0,0,'ko',ms=8)
plot(eval(str(sympy.re(detblock))),
eval(str(sympy.im(detblock))),lw=3)
xlabel(r'$\mathrm{Re}(\mathrm{det}(h))$',fontsize=20)
ylabel(r'$\mathrm{Im}(\mathrm{det}(h))$',fontsize=20)
grid()
xlim(-5,5)
ylim(-5,5)
# this builds a BdG matrix of a finite system
def HgTe_wire_BDG_Ham(N=10,mu=0,B=0,gamma=1,Delta=0):
idL=eye(N); # identity matrix of dimension L
odL=diag(ones(N-1),1);# upper off diagonal matrix with ones of size L
U=matrix([[-mu+B,gamma,0,Delta],
[gamma,-mu-B,-Delta,0],
[0,-Delta,mu-B,-gamma],
[Delta,0,-gamma,mu+B]])
T=matrix([[0,0,0,0],
[gamma,0,0,0],
[0,0,0,0],
[0,0,-gamma,0]])
return kron(idL,U)+kron(odL,T)+kron(odL,T).H
# calculate the spectrum as the function of chemical potential for different values of N,B and Delta
figsize(12,6)
uran=linspace(-3,3,50)
@interact(N=(10,20,1),B=(0,3,.1),Delta=(0,1,0.1))
def playBdG(N=10,B=0,Delta=0.2,):
dat=[]
for mu in uran:
dat.append(eigvalsh(HgTe_wire_BDG_Ham(N,mu,B,1,Delta)))
plot(uran,dat,'r',lw=3);
xlabel(r'$\mu$',fontsize=20)
ylabel(r'$E^{BdG}_n$',fontsize=20)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let us define a simple real, and hence time reversal invariant lattice model that can serve as a good description of a 1D chiral edge channel. We start from the SSH model and relabel the sublattice degrees of freedom as spins, and we introduce an extra onsite magnetic field $B$ in the z direction.
Step2: The $k$-dependent Bogoliubov–de Gennes matrix is defined below
Step3: We can diagonalize this, yielding the eigenvalues of the system
Step4: Since we have an explicitly real Bogoliubov-de Gennes matrix we can define a chiral symmetry as $\tau_1\sigma_0$. With the help of this symmetry we can transform the Bogoliubov-de Gennes matrix to a block off-diagonal form.
Step5: This is the Chiral symmetry operator
Step6: The eigenvectors of this matrix give the unitary transformation necessary to block off-diagonalize the $\mathcal{H}$ matrix
Step7: The Winding number of the determinant of the nonzero subblock will serve as a topological invariant for this model
Step8: Looking at the imaginary and real part of this quantity we recognize that it describes an ellipse
Step9: From the above expressions we can infer the topological phase diagram of the system. Keeping a finite value of $\Delta$ and tuning $\mu^2+\Delta^2-B^2$ into the interval $[0,4\gamma^2]$, the system has a winding number of one and hence is topological; otherwise it is trivial. (A reference expression for the winding number is given after this step list.)
Step10: Finally let us calculate the spectrum of a wire of finite length and let us look for zero energy excitations!
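Supplementing Steps 7-9: for a chiral-symmetric BdG Hamiltonian brought to block off-diagonal form with off-diagonal block $h(k)$, the winding number can be written (a standard reference expression, not taken verbatim from this notebook) as
$$ W = \frac{1}{2\pi i}\int_{0}^{2\pi} dk\, \partial_k \ln \det h(k), $$
which counts how many times the curve $(\mathrm{Re}\,\det h(k), \mathrm{Im}\,\det h(k))$ plotted in the right panel winds around the origin.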
|
5,615
|
<ASSISTANT_TASK:>
Python Code:
from datasets import *
from qiskit_aqua.utils import split_dataset_to_data_and_labels
from qiskit_aqua.input import get_input_instance
from qiskit_aqua import run_algorithm
import numpy as np
n = 2 # dimension of each data point
sample_Total, training_input, test_input, class_labels = Wine(training_size=40,
test_size=10, n=n, PLOT_DATA=True)
temp = [test_input[k] for k in test_input]
total_array = np.concatenate(temp)
aqua_dict = {
'problem': {'name': 'svm_classification', 'random_seed': 10598},
'algorithm': {
'name': 'QSVM.Kernel'
},
'feature_map': {'name': 'SecondOrderExpansion', 'depth': 2, 'entangler_map': {0: [1]}},
'multiclass_extension': {'name': 'AllPairs'},
'backend': {'name': 'qasm_simulator', 'shots': 1024}
}
algo_input = get_input_instance('SVMInput')
algo_input.training_dataset = training_input
algo_input.test_dataset = test_input
algo_input.datapoints = total_array
result = run_algorithm(aqua_dict, algo_input)
for k,v in result.items():
print("'{}' : {}".format(k, v))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Here we choose the Wine dataset which has 3 classes.
Step2: Now we set up an Aqua configuration dictionary to use the quantum QSVM.Kernel algorithm and add a multiclass extension to classify the Wine data set, since it has 3 classes. (A sketch of swapping in a different multiclass extension follows.)
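A hedged sketch for Step 2: the legacy qiskit-aqua package also shipped other multiclass extensions, and only the name in the dictionary needs to change. The extension names and parameters below are recalled from the aqua documentation of that era and should be treated as assumptions.
# Swap the multiclass extension before calling run_algorithm (illustrative):
aqua_dict['multiclass_extension'] = {'name': 'OneAgainstRest'}
# or an error-correcting output code (the 'code_size' parameter is an assumption):
# aqua_dict['multiclass_extension'] = {'name': 'ErrorCorrectingCode', 'code_size': 4}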
|
5,616
|
<ASSISTANT_TASK:>
Python Code:
# Author: Denis A. Engemann <denis.engemann@gmail.com>
#
# License: BSD (3-clause)
import os
import os.path as op
import numpy as np
from scipy.misc import imread
import matplotlib.pyplot as plt
import mne
from mne import io
from mne.datasets import spm_face
from mne.minimum_norm import apply_inverse, make_inverse_operator
from mne.cov import compute_covariance
print(__doc__)
data_path = spm_face.data_path()
subjects_dir = data_path + '/subjects'
raw_fname = data_path + '/MEG/spm/SPM_CTF_MEG_example_faces%d_3D.ds'
raw = io.read_raw_ctf(raw_fname % 1) # Take first run
# To save time and memory for this demo, we'll just use the first
# 2.5 minutes (all we need to get 30 total events) and heavily
# resample 480->60 Hz (usually you wouldn't do either of these!)
raw = raw.crop(0, 150.).load_data().resample(60, npad='auto')
picks = mne.pick_types(raw.info, meg=True, exclude='bads')
raw.filter(1, None, n_jobs=1, fir_design='firwin')
events = mne.find_events(raw, stim_channel='UPPT001')
event_ids = {"faces": 1, "scrambled": 2}
tmin, tmax = -0.2, 0.5
baseline = None # no baseline as high-pass is applied
reject = dict(mag=3e-12)
# Make source space
trans = data_path + '/MEG/spm/SPM_CTF_MEG_example_faces1_3D_raw-trans.fif'
src = mne.setup_source_space('spm', spacing='oct6', subjects_dir=subjects_dir,
add_dist=False)
bem = data_path + '/subjects/spm/bem/spm-5120-5120-5120-bem-sol.fif'
forward = mne.make_forward_solution(raw.info, trans, src, bem)
del src
# inverse parameters
conditions = 'faces', 'scrambled'
snr = 3.0
lambda2 = 1.0 / snr ** 2
method = 'dSPM'
clim = dict(kind='value', lims=[0, 2.5, 5])
samples_epochs = 5, 15,
method = 'empirical', 'shrunk'
colors = 'steelblue', 'red'
evokeds = list()
stcs = list()
methods_ordered = list()
for n_train in samples_epochs:
# estimate covs based on a subset of samples
# make sure we have the same number of conditions.
events_ = np.concatenate([events[events[:, 2] == id_][:n_train]
for id_ in [event_ids[k] for k in conditions]])
epochs_train = mne.Epochs(raw, events_, event_ids, tmin, tmax, picks=picks,
baseline=baseline, preload=True, reject=reject)
epochs_train.equalize_event_counts(event_ids)
assert len(epochs_train) == 2 * n_train
noise_covs = compute_covariance(
epochs_train, method=method, tmin=None, tmax=0, # baseline only
return_estimators=True) # returns list
# prepare contrast
evokeds = [epochs_train[k].average() for k in conditions]
del epochs_train, events_
# do contrast
# We skip empirical rank estimation that we introduced in response to
# the findings in reference [1] to use the naive code path that
# triggered the behavior described in [1]. The expected true rank is
# 274 for this dataset. Please do not do this with your data but
# rely on the default rank estimator that helps regularizing the
# covariance.
stcs.append(list())
methods_ordered.append(list())
for cov in noise_covs:
inverse_operator = make_inverse_operator(evokeds[0].info, forward,
cov, loose=0.2, depth=0.8,
rank=274)
stc_a, stc_b = (apply_inverse(e, inverse_operator, lambda2, "dSPM",
pick_ori=None) for e in evokeds)
stc = stc_a - stc_b
methods_ordered[-1].append(cov['method'])
stcs[-1].append(stc)
del inverse_operator, evokeds, cov, noise_covs, stc, stc_a, stc_b
del raw, forward # save some memory
fig, (axes1, axes2) = plt.subplots(2, 3, figsize=(9.5, 6))
def brain_to_mpl(brain):
convert image to be usable with matplotlib
tmp_path = op.abspath(op.join(op.curdir, 'my_tmp'))
brain.save_imageset(tmp_path, views=['ven'])
im = imread(tmp_path + '_ven.png')
os.remove(tmp_path + '_ven.png')
return im
for ni, (n_train, axes) in enumerate(zip(samples_epochs, (axes1, axes2))):
# compute stc based on worst and best
ax_dynamics = axes[1]
for stc, ax, method, kind, color in zip(stcs[ni],
axes[::2],
methods_ordered[ni],
['best', 'worst'],
colors):
brain = stc.plot(subjects_dir=subjects_dir, hemi='both', clim=clim,
initial_time=0.175)
im = brain_to_mpl(brain)
brain.close()
del brain
ax.axis('off')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
ax.imshow(im)
ax.set_title('{0} ({1} epochs)'.format(kind, n_train * 2))
# plot spatial mean
stc_mean = stc.data.mean(0)
ax_dynamics.plot(stc.times * 1e3, stc_mean,
label='{0} ({1})'.format(method, kind),
color=color)
# plot spatial std
stc_var = stc.data.std(0)
ax_dynamics.fill_between(stc.times * 1e3, stc_mean - stc_var,
stc_mean + stc_var, alpha=0.2, color=color)
# signal dynamics worst and best
ax_dynamics.set_title('{0} epochs'.format(n_train * 2))
ax_dynamics.set_xlabel('Time (ms)')
ax_dynamics.set_ylabel('Source Activation (dSPM)')
ax_dynamics.set_xlim(tmin * 1e3, tmax * 1e3)
ax_dynamics.set_ylim(-3, 3)
ax_dynamics.legend(loc='upper left', fontsize=10)
fig.subplots_adjust(hspace=0.4, left=0.03, right=0.98, wspace=0.07)
fig.canvas.draw()
fig.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Get data
Step2: Estimate covariances
Step4: Show the resulting source estimates
|
5,617
|
<ASSISTANT_TASK:>
Python Code:
import pyautogui
# let us first change directory to the `files` subdirectory, to store these values
import os
os.chdir('files')
os.getcwd()
pyautogui.screenshot()
pyautogui.screenshot('screenshot_example.png')
pyautogui.locateOnScreen('calc7key.png')
pyautogui.locateCenterOnScreen('calc7key.png')
pyautogui.moveTo((1309, 595), duration=1)
pyautogui.click((1309, 595), clicks = 7)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We can use the screenshot() function to take a screenshot of the current screen.
Step2: This takes a screenshot immediately and stores it as an image object in memory. To save it to a file, we can pass a file path to it.
Step3: Now the module can 'see' the screen, but to do proper image recognition, we must use the locateOnScreen() function. We will use the default OSX calculator for this example.
Step4: It has returned a tuple of 4 integers: the left and top coordinates of the match, followed by its width and height
Step5: We can now move to and click the calculator. (A combined locate-and-click sketch follows.)
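A minimal sketch tying Steps 4 and 5 together without hard-coding screen coordinates (results depend on your screen resolution and scaling, so treat it as illustrative):
location = pyautogui.locateOnScreen('calc7key.png')   # (left, top, width, height) or None
if location is not None:
    center = pyautogui.center(location)               # Point(x, y) at the middle of the match
    pyautogui.click(center, clicks=7, interval=0.25)  # click the 7 key seven times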
|
5,618
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
from scipy import optimize
sns.set()
from collections import Counter
from ICGC_data_parser import SSM_Reader
mutations_per_gene = Counter()
mutations = SSM_Reader(filename='/home/ad115/Downloads/simple_somatic_mutation.aggregated.vcf.gz')
# Fix weird bug due to malformed description headers
mutations.infos['studies'] = mutations.infos['studies']._replace(type='String')
consequences = mutations.subfield_parser('CONSEQUENCE')
for i, record in enumerate(mutations):
if i % 100000 == 0:
print(i)
affected_genes = [c.gene_symbol for c in consequences(record) if c.gene_affected]
mutations_per_gene.update(affected_genes)
mutations_per_gene.most_common(5)
len(mutations_per_gene)
distribution = Counter(mutations_per_gene.values())
distribution.most_common(10)
x = sorted(distribution.keys())
y = [distribution[i] for i in x]
plt.figure(figsize=(10, 7))
plt.plot(x, y)
plt.yscale('log')
plt.xscale('log')
plt.title('Mutation distribution by gene')
plt.xlabel('$n$')
plt.ylabel('genes with $n$ mutations')
plt.show()
# In order to find out the length of the
# genes, we will use the Ensembl REST API.
import ensembl_rest
from itertools import islice
def chunks_of(iterable, size=10):
A generator that yields chunks of fixed size from the iterable.
iterator = iter(iterable)
while True:
next_ = list(islice(iterator, size))
if next_:
yield next_
else:
break
# ---
# Instantiate a client for communication with
# the Ensembl REST API.
client = ensembl_rest.EnsemblClient()
normalized_counts = dict()
lengths_distribution = Counter()
for i, gene_batch in enumerate(chunks_of(mutations_per_gene, size=1000)):
# Get information of the genes
gene_data = client.symbol_post('human',
params={'symbols': gene_batch})
gene_lengths = {gene: data['end'] - data['start'] + 1
for gene, data in gene_data.items()}
lengths_distribution.update(gene_lengths.values())
# Get the normalization
normalized_counts.update({
gene: mutations_per_gene[gene] / gene_lengths[gene]
for gene in gene_data
})
print((i+1)*1000)
c = Counter()
c.update(normalized_counts)
c.most_common(10)
normalized_distribution = Counter(normalized_counts.values())
normalized_distribution.most_common(10)
x = sorted(normalized_distribution.keys())
y = [normalized_distribution[i] for i in x]
plt.figure(figsize=(10, 7))
plt.plot(x, y)
plt.xscale('log')
plt.title('Mutations per base distribution by gene (normalized)')
plt.xlabel('$x$')
plt.ylabel('genes with $x$ mutations per base pair')
plt.show()
max(lengths_distribution)
min(lengths_distribution)
lengths_distribution.most_common(5)
x = sorted(lengths_distribution.keys())
y = [lengths_distribution[i] for i in x]
plt.figure(figsize=(10, 7))
plt.plot(x, y)
plt.xscale('log')
plt.yscale('log')
plt.title('Gene lengths distribution')
plt.xlabel('$L$')
plt.ylabel('genes with length $L$')
plt.savefig('gene-lengths.png')
plt.show()
lengths_distribution
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We first map genes to the number of mutations they harbor (read from a random sample of 100,000 mutations)
Step2: Now we want to group by number of mutations
Step3: Now we plot the data...
Step5: We can see that the data resembles a power law but does not quite fit; it looks like it has a bump in the middle, which may be because the genes have wildly varying lengths. To correct this we have to normalize the mutations per gene by the length of the gene (a simple power-law fitting sketch follows this step list). This is done as follows
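A hedged sketch of how one might quantify the power-law impression from Step 5, reusing the scipy.optimize import that already appears in the code above. It is a simple straight-line fit in log-log space; a rigorous analysis would use maximum-likelihood estimators instead.
import numpy as np
from scipy import optimize

ks = sorted(distribution.keys())
xs = np.log(np.array(ks, dtype=float))
ys = np.log(np.array([distribution[k] for k in ks], dtype=float))

# Fit log(count) = slope * log(n) + intercept; -slope approximates the exponent
(slope, intercept), _ = optimize.curve_fit(lambda x, a, b: a * x + b, xs, ys)
print('estimated log-log slope:', slope)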
|
5,619
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import h5py
import matplotlib.pyplot as plt
from testCases_v2 import *
from dnn_utils_v2 import sigmoid, sigmoid_backward, relu, relu_backward
%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
%load_ext autoreload
%autoreload 2
np.random.seed(1)
# GRADED FUNCTION: initialize_parameters
def initialize_parameters(n_x, n_h, n_y):
Argument:
n_x -- size of the input layer
n_h -- size of the hidden layer
n_y -- size of the output layer
Returns:
parameters -- python dictionary containing your parameters:
W1 -- weight matrix of shape (n_h, n_x)
b1 -- bias vector of shape (n_h, 1)
W2 -- weight matrix of shape (n_y, n_h)
b2 -- bias vector of shape (n_y, 1)
np.random.seed(1)
### START CODE HERE ### (≈ 4 lines of code)
W1 = np.random.randn(n_h, n_x) * 0.01
b1 = np.zeros((n_h, 1))
W2 = np.random.randn(n_y, n_h) * 0.01
b2 = np.zeros((n_y, 1))
### END CODE HERE ###
assert(W1.shape == (n_h, n_x))
assert(b1.shape == (n_h, 1))
assert(W2.shape == (n_y, n_h))
assert(b2.shape == (n_y, 1))
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2}
return parameters
parameters = initialize_parameters(2,2,1)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
# GRADED FUNCTION: initialize_parameters_deep
def initialize_parameters_deep(layer_dims):
Arguments:
layer_dims -- python array (list) containing the dimensions of each layer in our network
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
Wl -- weight matrix of shape (layer_dims[l], layer_dims[l-1])
bl -- bias vector of shape (layer_dims[l], 1)
np.random.seed(3)
parameters = {}
L = len(layer_dims) # number of layers in the network
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.random.randn(layer_dims[l], layer_dims[l-1]) * 0.01
parameters['b' + str(l)] = np.zeros((layer_dims[l], 1))
### END CODE HERE ###
assert(parameters['W' + str(l)].shape == (layer_dims[l], layer_dims[l-1]))
assert(parameters['b' + str(l)].shape == (layer_dims[l], 1))
return parameters
parameters = initialize_parameters_deep([5,4,3])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
# GRADED FUNCTION: linear_forward
def linear_forward(A, W, b):
Implement the linear part of a layer's forward propagation.
Arguments:
A -- activations from previous layer (or input data): (size of previous layer, number of examples)
W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
b -- bias vector, numpy array of shape (size of the current layer, 1)
Returns:
Z -- the input of the activation function, also called pre-activation parameter
cache -- a python dictionary containing "A", "W" and "b" ; stored for computing the backward pass efficiently
### START CODE HERE ### (≈ 1 line of code)
Z = np.dot(W, A) + b
### END CODE HERE ###
assert(Z.shape == (W.shape[0], A.shape[1]))
cache = (A, W, b)
return Z, cache
A, W, b = linear_forward_test_case()
Z, linear_cache = linear_forward(A, W, b)
print("Z = " + str(Z))
# GRADED FUNCTION: linear_activation_forward
def linear_activation_forward(A_prev, W, b, activation):
Implement the forward propagation for the LINEAR->ACTIVATION layer
Arguments:
A_prev -- activations from previous layer (or input data): (size of previous layer, number of examples)
W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
b -- bias vector, numpy array of shape (size of the current layer, 1)
activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"
Returns:
A -- the output of the activation function, also called the post-activation value
cache -- a python dictionary containing "linear_cache" and "activation_cache";
stored for computing the backward pass efficiently
if activation == "sigmoid":
# Inputs: "A_prev, W, b". Outputs: "A, activation_cache".
### START CODE HERE ### (≈ 2 lines of code)
Z, linear_cache = linear_forward(A_prev, W, b)
A, activation_cache = sigmoid(Z)
### END CODE HERE ###
elif activation == "relu":
# Inputs: "A_prev, W, b". Outputs: "A, activation_cache".
### START CODE HERE ### (≈ 2 lines of code)
Z, linear_cache = linear_forward(A_prev, W, b)
A, activation_cache = relu(Z)
### END CODE HERE ###
assert (A.shape == (W.shape[0], A_prev.shape[1]))
cache = (linear_cache, activation_cache)
return A, cache
A_prev, W, b = linear_activation_forward_test_case()
A, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = "sigmoid")
print("With sigmoid: A = " + str(A))
A, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = "relu")
print("With ReLU: A = " + str(A))
# GRADED FUNCTION: L_model_forward
def L_model_forward(X, parameters):
Implement forward propagation for the [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID computation
Arguments:
X -- data, numpy array of shape (input size, number of examples)
parameters -- output of initialize_parameters_deep()
Returns:
AL -- last post-activation value
caches -- list of caches containing:
every cache of linear_relu_forward() (there are L-1 of them, indexed from 0 to L-2)
the cache of linear_sigmoid_forward() (there is one, indexed L-1)
caches = []
A = X
L = len(parameters) // 2 # number of layers in the neural network
# Implement [LINEAR -> RELU]*(L-1). Add "cache" to the "caches" list.
for l in range(1, L):
A_prev = A
### START CODE HERE ### (≈ 2 lines of code)
A, cache = linear_activation_forward(A_prev, parameters['W' + str(l)], parameters['b' + str(l)], "relu")
caches.append(cache)
### END CODE HERE ###
# Implement LINEAR -> SIGMOID. Add "cache" to the "caches" list.
### START CODE HERE ### (≈ 2 lines of code)
AL, cache = linear_activation_forward(A, parameters['W' + str(L)], parameters['b' + str(L)], "sigmoid")
caches.append(cache)
### END CODE HERE ###
assert(AL.shape == (1,X.shape[1]))
return AL, caches
X, parameters = L_model_forward_test_case()
AL, caches = L_model_forward(X, parameters)
print("AL = " + str(AL))
print("Length of caches list = " + str(len(caches)))
# GRADED FUNCTION: compute_cost
def compute_cost(AL, Y):
Implement the cost function defined by equation (7).
Arguments:
AL -- probability vector corresponding to your label predictions, shape (1, number of examples)
Y -- true "label" vector (for example: containing 0 if non-cat, 1 if cat), shape (1, number of examples)
Returns:
cost -- cross-entropy cost
m = Y.shape[1]
# Compute loss from aL and y.
### START CODE HERE ### (≈ 1 lines of code)
cost = -1 / m * np.sum(np.multiply(Y, np.log(AL)) + np.multiply((1 - Y), np.log(1 - AL)))
### END CODE HERE ###
cost = np.squeeze(cost) # To make sure your cost's shape is what we expect (e.g. this turns [[17]] into 17).
assert(cost.shape == ())
return cost
Y, AL = compute_cost_test_case()
print("cost = " + str(compute_cost(AL, Y)))
# GRADED FUNCTION: linear_backward
def linear_backward(dZ, cache):
"""
Implement the linear portion of backward propagation for a single layer (layer l)
Arguments:
dZ -- Gradient of the cost with respect to the linear output (of current layer l)
cache -- tuple of values (A_prev, W, b) coming from the forward propagation in the current layer
Returns:
dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
dW -- Gradient of the cost with respect to W (current layer l), same shape as W
db -- Gradient of the cost with respect to b (current layer l), same shape as b
"""
A_prev, W, b = cache
m = A_prev.shape[1]
### START CODE HERE ### (≈ 3 lines of code)
dW = 1 / m * np.dot(dZ, A_prev.T)
db = 1 / m * np.sum(dZ, axis=1, keepdims=True)
dA_prev = np.dot(W.T, dZ)
### END CODE HERE ###
assert (dA_prev.shape == A_prev.shape)
assert (dW.shape == W.shape)
assert (db.shape == b.shape)
return dA_prev, dW, db
# Set up some test inputs
dZ, linear_cache = linear_backward_test_case()
dA_prev, dW, db = linear_backward(dZ, linear_cache)
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db))
# GRADED FUNCTION: linear_activation_backward
def linear_activation_backward(dA, cache, activation):
"""
Implement the backward propagation for the LINEAR->ACTIVATION layer.
Arguments:
dA -- post-activation gradient for current layer l
cache -- tuple of values (linear_cache, activation_cache) we store for computing backward propagation efficiently
activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"
Returns:
dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
dW -- Gradient of the cost with respect to W (current layer l), same shape as W
db -- Gradient of the cost with respect to b (current layer l), same shape as b
"""
linear_cache, activation_cache = cache
if activation == "relu":
### START CODE HERE ### (≈ 2 lines of code)
dZ = relu_backward(dA, activation_cache)
dA_prev, dW, db = linear_backward(dZ, linear_cache)
### END CODE HERE ###
elif activation == "sigmoid":
### START CODE HERE ### (≈ 2 lines of code)
dZ = sigmoid_backward(dA, activation_cache)
dA_prev, dW, db = linear_backward(dZ, linear_cache)
### END CODE HERE ###
return dA_prev, dW, db
AL, linear_activation_cache = linear_activation_backward_test_case()
dA_prev, dW, db = linear_activation_backward(AL, linear_activation_cache, activation = "sigmoid")
print ("sigmoid:")
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db) + "\n")
dA_prev, dW, db = linear_activation_backward(AL, linear_activation_cache, activation = "relu")
print ("relu:")
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db))
# GRADED FUNCTION: L_model_backward
def L_model_backward(AL, Y, caches):
"""
Implement the backward propagation for the [LINEAR->RELU] * (L-1) -> LINEAR -> SIGMOID group
Arguments:
AL -- probability vector, output of the forward propagation (L_model_forward())
Y -- true "label" vector (containing 0 if non-cat, 1 if cat)
caches -- list of caches containing:
every cache of linear_activation_forward() with "relu" (it's caches[l], for l in range(L-1) i.e l = 0...L-2)
the cache of linear_activation_forward() with "sigmoid" (it's caches[L-1])
Returns:
grads -- A dictionary with the gradients
grads["dA" + str(l)] = ...
grads["dW" + str(l)] = ...
grads["db" + str(l)] = ...
"""
grads = {}
L = len(caches) # the number of layers
m = AL.shape[1]
Y = Y.reshape(AL.shape) # after this line, Y is the same shape as AL
# Initializing the backpropagation
### START CODE HERE ### (1 line of code)
dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL))
### END CODE HERE ###
# Lth layer (SIGMOID -> LINEAR) gradients. Inputs: "AL, Y, caches". Outputs: "grads["dAL"], grads["dWL"], grads["dbL"]
### START CODE HERE ### (approx. 2 lines)
current_cache = caches[L - 1]
grads["dA" + str(L)], grads["dW" + str(L)], grads["db" + str(L)] = linear_activation_backward(dAL, current_cache, activation = "sigmoid")
### END CODE HERE ###
for l in reversed(range(L-1)):
# lth layer: (RELU -> LINEAR) gradients.
# Inputs: "grads["dA" + str(l + 2)], caches". Outputs: "grads["dA" + str(l + 1)] , grads["dW" + str(l + 1)] , grads["db" + str(l + 1)]
### START CODE HERE ### (approx. 5 lines)
current_cache = caches[l]
dA_prev_temp, dW_temp, db_temp = linear_activation_backward(grads["dA" + str(l + 2)], current_cache, activation = "relu")
grads["dA" + str(l + 1)] = dA_prev_temp
grads["dW" + str(l + 1)] = dW_temp
grads["db" + str(l + 1)] = db_temp
### END CODE HERE ###
return grads
AL, Y_assess, caches = L_model_backward_test_case()
grads = L_model_backward(AL, Y_assess, caches)
print ("dW1 = "+ str(grads["dW1"]))
print ("db1 = "+ str(grads["db1"]))
print ("dA1 = "+ str(grads["dA1"]))
# GRADED FUNCTION: update_parameters
def update_parameters(parameters, grads, learning_rate):
"""
Update parameters using gradient descent
Arguments:
parameters -- python dictionary containing your parameters
grads -- python dictionary containing your gradients, output of L_model_backward
Returns:
parameters -- python dictionary containing your updated parameters
parameters["W" + str(l)] = ...
parameters["b" + str(l)] = ...
"""
L = len(parameters) // 2 # number of layers in the neural network
# Update rule for each parameter. Use a for loop.
### START CODE HERE ### (≈ 3 lines of code)
for l in range(L):
parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - learning_rate * grads["dW" + str(l+1)]
parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - learning_rate * grads["db" + str(l+1)]
### END CODE HERE ###
return parameters
parameters, grads = update_parameters_test_case()
parameters = update_parameters(parameters, grads, 0.1)
print ("W1 = "+ str(parameters["W1"]))
print ("b1 = "+ str(parameters["b1"]))
print ("W2 = "+ str(parameters["W2"]))
print ("b2 = "+ str(parameters["b2"]))
#print ("W3 = "+ str(parameters["W3"]))
#print ("b3 = "+ str(parameters["b3"]))
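# Added example (not part of the original graded assignment): one plausible way to
# compose the helpers above into a full training loop. It assumes the
# initialize_parameters_deep(layers_dims) helper referenced in the L_model_forward
# docstring is available; X has shape (input size, m) and Y has shape (1, m).
def L_layer_model_sketch(X, Y, layers_dims, learning_rate=0.0075, num_iterations=2500, print_cost=False):
    parameters = initialize_parameters_deep(layers_dims)  # assumed helper
    for i in range(num_iterations):
        AL, caches = L_model_forward(X, parameters)       # forward pass
        cost = compute_cost(AL, Y)                        # cross-entropy cost
        grads = L_model_backward(AL, Y, caches)           # backward pass
        parameters = update_parameters(parameters, grads, learning_rate)
        if print_cost and i % 100 == 0:
            print("Cost after iteration %i: %f" % (i, cost))
    return parameters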
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: 2 - Outline of the Assignment
Step4: Expected output
Step6: Expected output
Step8: Expected output
Step10: Expected output
Step12: Expected Output
Step14: Expected Output
Step16: Expected Output
Step18: Expected output with sigmoid
Step20: Expected Output
|
5,620
|
<ASSISTANT_TASK:>
Python Code:
import cantera
print cantera.__version__
from rmgpy.chemkin import *
from rmgpy.tools.canteraModel import *
from rmgpy.tools.plot import parseCSVData
from rmgpy.species import Species
from IPython.display import display, Image
speciesList, reactionList = loadChemkinFile('data/minimal_model/chem_annotated.inp',
'data/minimal_model/species_dictionary.txt',
'data/minimal_model/tran.dat')
# Find the species: ethane and methane
user_ethane = Species().fromSMILES('CC')
user_methane = Species().fromSMILES('C')
speciesDict = getRMGSpeciesFromUserSpecies([user_ethane, user_methane], speciesList)
ethane = speciesDict[user_ethane]
methane = speciesDict[user_methane]
sensitiveSpecies = [ethane, methane]
#reactorTypeList = ['IdealGasReactor']
reactorTypeList = ['IdealGasConstPressureTemperatureReactor']
molFracList=[{ethane: 1}]
Tlist = ([1300],'K')#,1500,2000],'K')
Plist = ([1],'atm')
reactionTimeList = ([0.5], 'ms')
# Create cantera object, loading in the species and reactions
job = Cantera(speciesList=speciesList, reactionList=reactionList, outputDirectory='temp', sensitiveSpecies=sensitiveSpecies)
# The cantera file must be created from an associated chemkin file
# We can either load the Model from the initialized set of rmg species and reactions
job.loadModel()
# Or load it from a chemkin file by uncommenting the following line:
#job.loadChemkinModel('data/minimal_model/chem_annotated.inp',transportFile='data/minimal_model/tran.dat')
# Generate the conditions based on the settings we declared earlier
job.generateConditions(reactorTypeList, reactionTimeList, molFracList, Tlist, Plist)
# Simulate and plot
alldata = job.simulate()
job.plot(alldata)
# Show the plots in the ipython notebook
for i, condition in enumerate(job.conditions):
print 'Cantera Simulation: Condition {0} Mole Fractions'.format(i+1)
display(Image(filename="temp/{0}_mole_fractions.png".format(i+1)))
print 'Cantera Simulation: Condition {0} Ethane Reaction Sensitivity'.format(i+1)
display(Image(filename="temp/{0}_ethane(1)_sensitivity.png".format(i+1)))
# Let's compare against the same simulation in RMG
# Create an input file
input = '''
database(
thermoLibraries = ['primaryThermoLibrary'],
reactionLibraries = [],
seedMechanisms = [],
kineticsDepositories = 'default',
kineticsFamilies = 'default',
kineticsEstimator = 'rate rules',
)
species(
label = "ethane",
structure = SMILES("CC"))
species(
label = "methane",
structure = SMILES("C"))
simpleReactor(
temperature = (1300,"K"),
pressure = (1,"atm"),
initialMoleFractions={
"ethane": 1
},
terminationTime = (0.5,"ms"),
sensitivity=['ethane','methane']
)
model(
toleranceMoveToCore = 0.04,
)
options(
saveSimulationProfiles = True,
)
'''
f = open('temp/temp_input.py', 'wb')
f.write(input)
f.close()
from rmgpy.tools.sensitivity import runSensitivity
runSensitivity('temp/temp_input.py', 'data/minimal_model/chem_annotated.inp', 'data/minimal_model/species_dictionary.txt')
print 'RMG Native Simulation: Species Mole Fractions'
display(Image(filename="temp/solver/simulation_1_19.png"))
print 'RMG Native Simulation: Ethane Reaction Sensitivity'
display(Image(filename="temp/solver/sensitivity_1_SPC_1_reactions.png"))
# Let's also compare against the same simulation and sensitivity analysis that was conducted in CHEMKIN
# and saved as a .csv file
time, dataList = parseCSVData('data/minimal_model/chemkin_mole_fractions.csv')
SimulationPlot(xVar=time,yVar=dataList).plot('temp/chemkin_mole_fractions.png')
print 'CHEMKIN Simulation: Mole Fractions'
display(Image(filename="temp/chemkin_mole_fractions.png"))
time, dataList = parseCSVData('data/minimal_model/chemkin_sensitivity_ethane.csv')
ReactionSensitivityPlot(xVar=time,yVar=dataList).barplot('temp/chemkin_ethane_sensitivity.png')
print 'CHEMKIN Simulation: Ethane Reaction Sensitivity'
display(Image(filename="temp/chemkin_ethane_sensitivity.png"))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the species and reaction from the RMG-generated chemkin file chem_annotated.inp and species_dictionary.txt file found in your chemkin folder after running a job.
Step2: Set the reaction conditions
|
5,621
|
<ASSISTANT_TASK:>
Python Code:
from fretbursts import *
sns = init_notebook()
url = 'http://files.figshare.com/2182602/dsdna_d7_d17_50_50_1.hdf5'
download_file(url, save_dir='./data')
filename = './data/dsdna_d7_d17_50_50_1.hdf5'
filename
# filename = OpenFileDialog()
# filename
import os
if os.path.isfile(filename):
print("Perfect, I found the file!")
else:
print("Sorry, I can't find the file:\n%s" % filename)
d = loader.photon_hdf5(filename)
#d = loader.nsalex(fname)
d.time_max
d.det_t
print("Detector Counts")
print("-------- --------")
for det, count in zip(*np.unique(d.det_t, return_counts=True)):
print("%8d %8d" % (det, count))
#d.add(A_ON=(200, 1500), D_ON=(1750, 3200), det_donor_accept=(4, 6))
d.nanotimes_t
bpl.plot_alternation_hist(d)
loader.alex_apply_period(d)
d.calc_bg(fun=bg.exp_fit, time_s=30, tail_min_us='auto', F_bg=1.7)
dplot(d, timetrace_bg)
dplot(d, timetrace)
xlim(1, 2)
ylim(-50, 50)
d.burst_search()
ds = d.select_bursts(select_bursts.size, th1=30)
alex_jointplot(ds)
ds.leakage = 0.05
alex_jointplot(ds)
dplot(ds, hist_fret, show_kde=True)
d.nanotimes
nanotimes = d.nanotimes[0]
nanotimes_d = nanotimes[d.get_D_em()]
nanotimes_a = nanotimes[d.get_A_em()]
hist_params = dict(bins=range(4096), histtype='step', alpha=0.6, lw=1.5)
hist(nanotimes, color='k', label='Total ph.', **hist_params)
hist(nanotimes_d, color='g', label='D. em. ph.', **hist_params)
hist(nanotimes_a, color='r', label='A. em. ph.', **hist_params)
plt.legend()
plt.yscale('log')
ph_in_bursts_mask = d.ph_in_bursts_mask_ich()
bursts_nanotimes_t = nanotimes[ph_in_bursts_mask]
bursts_nanotimes_d = nanotimes[ph_in_bursts_mask * d.get_D_em()]
bursts_nanotimes_a = nanotimes[ph_in_bursts_mask * d.get_A_em()]
hist_params = dict(bins=range(4096), histtype='step', alpha=0.6, lw=1.5)
hist(bursts_nanotimes_t, color='k', label='Total ph.', **hist_params)
hist(bursts_nanotimes_d, color='g', label='D. em. ph.', **hist_params)
hist(bursts_nanotimes_a, color='r', label='A. em. ph.', **hist_params)
plt.legend()
plt.yscale('log')
nanotimes.tofile('nanotimes_t.csv', sep=',\n') # save in CSV txt format
from scipy.io import savemat
savemat('bursts_nanotimes.mat',
dict(bn_t=bursts_nanotimes_t,
bn_d=bursts_nanotimes_d,
bn_a=bursts_nanotimes_a,))
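# Added example (not in the original notebook): quick summary statistics of the
# in-burst nanotimes using only ndarray methods; all variable names come from the
# cells above.
for label, nt in [('total', bursts_nanotimes_t),
                  ('donor', bursts_nanotimes_d),
                  ('acceptor', bursts_nanotimes_a)]:
    print("%s photons: N=%d, mean=%.1f, min=%d, max=%d" % (label, nt.size, nt.mean(), nt.min(), nt.max()))
# The per-stream arrays can be exported the same way as the total nanotimes:
bursts_nanotimes_d.tofile('bursts_nanotimes_d.csv', sep=',\n')
bursts_nanotimes_a.tofile('bursts_nanotimes_a.csv', sep=',\n')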
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Downloading the sample data file
Step2: Selecting a data file
Step3: Load the selected file
Step4: Execute the previous 2 cells until you get a satisfying
Step5: Burst search and selection
Step6: Nanotimes
Step7: We can plot the histogram for this 3 nanotimes
Step8: We can also select only nanotimes of photons inside bursts.
Step9: Then we apply this selection to the nanotimes array.
Step10: And, as before, we can histogram the nanotimes
Step11: Saving to a file
Step12: Save to legacy MATLAB format
|
5,622
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from matplotlib import pyplot as plt, cm
import numpy as np  # np.argmax is used below but numpy was never imported
from skimage import io
image = io.imread('../images/zebrafish-spinal-cord.png')
from scipy import ndimage as nd
top, bottom = image[[0, -1], :]
fig, (ax0, ax1) = plt.subplots(nrows=1, ncols=2, figsize=(8, 3))
top_smooth = nd.gaussian_filter1d(top, sigma=20)
ax0.plot(top, color='blue', lw=2)
ax0.plot(top_smooth, color='orange', lw=2)
ax0.set_title('top')
bottom_smooth = nd.gaussian_filter1d(bottom, sigma=20)
ax1.plot(bottom, color='blue', lw=2)
ax1.plot(bottom_smooth, color='orange', lw=2)
ax1.set_title('bottom')
top_mode = np.argmax(top_smooth)
top_max = top_smooth[top_mode]
top_width = (top_smooth > float(top_max) / 2).sum()
bottom_mode = np.argmax(bottom_smooth)
bottom_max = bottom_smooth[bottom_mode]
bottom_width = (bottom_smooth > float(bottom_max) / 2).sum()
width = max(bottom_width, top_width)
print(top_mode, top_width, bottom_mode, bottom_width)
from skimage import measure
trace = measure.profile_line(None) # Replace `None` with correct args
plt.plot(trace, color='black', lw=2)
plt.xlabel('position along embryo')
plt.ylabel('mean fluorescence intensity')
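# Added example (not in the original notebook): one plausible way to fill in the
# measure.profile_line(None) placeholder above, using the mode/width estimates
# computed earlier. profile_line takes (row, col) endpoints; the exact endpoints
# chosen here are an assumption, not the official solution.
src = (0, top_mode)                      # start on the top row, at the top-edge mode
dst = (image.shape[0] - 1, bottom_mode)  # end on the bottom row, at the bottom-edge mode
trace = measure.profile_line(image, src, dst, linewidth=width)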
%reload_ext load_style
%load_style ../themes/tutorial.css
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: SciPy to estimate coordinates
Step2: With smooth curves, we can get the mode (the position of the center) and width of the signal.
Step3: scikit-image to trace the profile
Step4: Finally, plot the trace.
Step5: From this trace, we can compute various summary statistics (e.g. min/max, gap width, slope, etc), and plot these over time as the wound recovers.
|
5,623
|
<ASSISTANT_TASK:>
Python Code:
# pip install watermark
%reload_ext watermark
%watermark -v -m -p gensim,numpy,scipy,psutil,matplotlib
import os.path
if not os.path.isfile('text8'):
!wget -c http://mattmahoney.net/dc/text8.zip
!unzip text8.zip
LOGS = False
if LOGS:
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
from gensim.models import Word2Vec, KeyedVectors
from gensim.models.word2vec import Text8Corpus
# Using params from Word2Vec_FastText_Comparison
params = {
'alpha': 0.05,
'size': 100,
'window': 5,
'iter': 5,
'min_count': 5,
'sample': 1e-4,
'sg': 1,
'hs': 0,
'negative': 5
}
model = Word2Vec(Text8Corpus('text8'), **params)
print(model)
# Set up the model and vector that we are using in the comparison
from gensim.similarities.index import AnnoyIndexer
model.init_sims()
annoy_index = AnnoyIndexer(model, 100)
# Dry run to make sure both indices are fully in RAM
vector = model.wv.syn0norm[0]
model.most_similar([vector], topn=5, indexer=annoy_index)
model.most_similar([vector], topn=5)
import time
import numpy as np
def avg_query_time(annoy_index=None, queries=1000):
"""
Average query time of a most_similar method over 1000 random queries,
uses annoy if given an indexer
"""
total_time = 0
for _ in range(queries):
rand_vec = model.wv.syn0norm[np.random.randint(0, len(model.wv.vocab))]
start_time = time.clock()
model.most_similar([rand_vec], topn=5, indexer=annoy_index)
total_time += time.clock() - start_time
return total_time / queries
queries = 10000
gensim_time = avg_query_time(queries=queries)
annoy_time = avg_query_time(annoy_index, queries=queries)
print("Gensim (s/query):\t{0:.5f}".format(gensim_time))
print("Annoy (s/query):\t{0:.5f}".format(annoy_time))
speed_improvement = gensim_time / annoy_time
print ("\nAnnoy is {0:.2f} times faster on average on this particular run".format(speed_improvement))
# 100 trees are being used in this example
annoy_index = AnnoyIndexer(model, 100)
# Derive the vector for the word "science" in our model
vector = model["science"]
# The instance of AnnoyIndexer we just created is passed
approximate_neighbors = model.most_similar([vector], topn=11, indexer=annoy_index)
# Neatly print the approximate_neighbors and their corresponding cosine similarity values
print("Approximate Neighbors")
for neighbor in approximate_neighbors:
print(neighbor)
normal_neighbors = model.most_similar([vector], topn=11)
print("\nNormal (not Annoy-indexed) Neighbors")
for neighbor in normal_neighbors:
print(neighbor)
fname = '/tmp/mymodel.index'
# Persist index to disk
annoy_index.save(fname)
# Load index back
if os.path.exists(fname):
annoy_index2 = AnnoyIndexer()
annoy_index2.load(fname)
annoy_index2.model = model
# Results should be identical to above
vector = model["science"]
approximate_neighbors2 = model.most_similar([vector], topn=11, indexer=annoy_index2)
for neighbor in approximate_neighbors2:
print(neighbor)
assert approximate_neighbors == approximate_neighbors2
# Remove verbosity from code below (if logging active)
if LOGS:
logging.disable(logging.CRITICAL)
from multiprocessing import Process
import os
import psutil
%%time
model.save('/tmp/mymodel.pkl')
def f(process_id):
print('Process Id: {}'.format(os.getpid()))
process = psutil.Process(os.getpid())
new_model = Word2Vec.load('/tmp/mymodel.pkl')
vector = new_model["science"]
annoy_index = AnnoyIndexer(new_model,100)
approximate_neighbors = new_model.most_similar([vector], topn=5, indexer=annoy_index)
print('\nMemory used by process {}: {}\n---'.format(os.getpid(), process.memory_info()))
# Creating and running two parallel process to share the same index file.
p1 = Process(target=f, args=('1',))
p1.start()
p1.join()
p2 = Process(target=f, args=('2',))
p2.start()
p2.join()
%%time
model.save('/tmp/mymodel.pkl')
def f(process_id):
print('Process Id: {}'.format(os.getpid()))
process = psutil.Process(os.getpid())
new_model = Word2Vec.load('/tmp/mymodel.pkl')
vector = new_model["science"]
annoy_index = AnnoyIndexer()
annoy_index.load('/tmp/mymodel.index')
annoy_index.model = new_model
approximate_neighbors = new_model.most_similar([vector], topn=5, indexer=annoy_index)
print('\nMemory used by process {}: {}\n---'.format(os.getpid(), process.memory_info()))
# Creating and running two parallel process to share the same index file.
p1 = Process(target=f, args=('1',))
p1.start()
p1.join()
p2 = Process(target=f, args=('2',))
p2.start()
p2.join()
import matplotlib.pyplot as plt
%matplotlib inline
exact_results = [element[0] for element in model.most_similar([model.wv.syn0norm[0]], topn=100)]
x_values = []
y_values_init = []
y_values_accuracy = []
for x in range(1, 300, 10):
x_values.append(x)
start_time = time.time()
annoy_index = AnnoyIndexer(model, x)
y_values_init.append(time.time() - start_time)
approximate_results = model.most_similar([model.wv.syn0norm[0]], topn=100, indexer=annoy_index)
top_words = [result[0] for result in approximate_results]
y_values_accuracy.append(len(set(top_words).intersection(exact_results)))
plt.figure(1, figsize=(12, 6))
plt.subplot(121)
plt.plot(x_values, y_values_init)
plt.title("num_trees vs initalization time")
plt.ylabel("Initialization time (s)")
plt.xlabel("num_trees")
plt.subplot(122)
plt.plot(x_values, y_values_accuracy)
plt.title("num_trees vs accuracy")
plt.ylabel("% accuracy")
plt.xlabel("num_trees")
plt.tight_layout()
plt.show()
# To export our model as text
model.wv.save_word2vec_format('/tmp/vectors.txt', binary=False)
from smart_open import smart_open
# View the first 3 lines of the exported file
# The first line has the total number of entries and the vector dimension count.
# The next lines have a key (a string) followed by its vector.
with smart_open('/tmp/vectors.txt') as myfile:
for i in range(3):
print(myfile.readline().strip())
# To import a word2vec text model
wv = KeyedVectors.load_word2vec_format('/tmp/vectors.txt', binary=False)
# To export our model as binary
model.wv.save_word2vec_format('/tmp/vectors.bin', binary=True)
# To import a word2vec binary model
wv = KeyedVectors.load_word2vec_format('/tmp/vectors.bin', binary=True)
# To create and save Annoy Index from a loaded `KeyedVectors` object (with 100 trees)
annoy_index = AnnoyIndexer(wv, 100)
annoy_index.save('/tmp/mymodel.index')
# Load and test the saved word vectors and saved annoy index
wv = KeyedVectors.load_word2vec_format('/tmp/vectors.bin', binary=True)
annoy_index = AnnoyIndexer()
annoy_index.load('/tmp/mymodel.index')
annoy_index.model = wv
vector = wv["cat"]
approximate_neighbors = wv.most_similar([vector], topn=11, indexer=annoy_index)
# Neatly print the approximate_neighbors and their corresponding cosine similarity values
print("Approximate Neighbors")
for neighbor in approximate_neighbors:
print(neighbor)
normal_neighbors = wv.most_similar([vector], topn=11)
print("\nNormal (not Annoy-indexed) Neighbors")
for neighbor in normal_neighbors:
print(neighbor)
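# Added example (not in the original notebook): overlap between the approximate
# (Annoy) and exact neighbour lists computed above for the "cat" query.
annoy_set = set(word for word, sim in approximate_neighbors)
exact_set = set(word for word, sim in normal_neighbors)
print("Overlap: %d of %d neighbours agree" % (len(annoy_set & exact_set), len(exact_set)))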
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Download Text8 Corpus
Step2: Import & Set up Logging
Step3: 2. Build Word2Vec Model
Step5: See the Word2Vec tutorial for how to initialize and save this model.
Step6: This speedup factor is by no means constant and will vary greatly from run to run and is particular to this data set, BLAS setup, Annoy parameters(as tree size increases speedup factor decreases), machine specifications, among other factors.
Step7: Analyzing the results
Step8: Be sure to use the same model at load that was used originally, otherwise you will get unexpected behaviors.
Step9: Bad Example
Step10: Good example. Two processes load both the Word2vec model and index from disk and memory-map the index
Step11: 5. Evaluate relationship of num_trees to initialization time and accuracy
Step12: Build dataset of Initialization times and accuracy measures
Step13: Plot results
Step14: Initialization
|
5,624
|
<ASSISTANT_TASK:>
Python Code:
import graphlab
sales = graphlab.SFrame('kc_house_data_small.gl/kc_house_data_small.gl')
import numpy as np # note this allows us to refer to numpy as np instead
def get_numpy_data(data_sframe, features, output):
data_sframe['constant'] = 1 # this is how you add a constant column to an SFrame
# add the column 'constant' to the front of the features list so that we can extract it along with the others:
features = ['constant'] + features # this is how you combine two lists
# select the columns of data_SFrame given by the features list into the SFrame features_sframe (now including constant):
features_sframe = data_sframe[features]
# the following line will convert the features_SFrame into a numpy matrix:
feature_matrix = features_sframe.to_numpy()
# assign the column of data_sframe associated with the output to the SArray output_sarray
output_sarray = data_sframe['price']
# the following will convert the SArray into a numpy array by first converting it to a list
output_array = output_sarray.to_numpy()
return(feature_matrix, output_array)
def normalize_features(feature_matrix):
norms = np.linalg.norm(feature_matrix, axis=0)
features = feature_matrix / norms
return features, norms
(train_and_validation, test) = sales.random_split(.8, seed=1) # initial train/test split
(train, validation) = train_and_validation.random_split(.8, seed=1) # split training set into training and validation sets
feature_list = ['bedrooms',
'bathrooms',
'sqft_living',
'sqft_lot',
'floors',
'waterfront',
'view',
'condition',
'grade',
'sqft_above',
'sqft_basement',
'yr_built',
'yr_renovated',
'lat',
'long',
'sqft_living15',
'sqft_lot15']
features_train, output_train = get_numpy_data(train, feature_list, 'price')
features_test, output_test = get_numpy_data(test, feature_list, 'price')
features_valid, output_valid = get_numpy_data(validation, feature_list, 'price')
features_train, norms = normalize_features(features_train) # normalize training set features (columns)
features_test = features_test / norms # normalize test set by training set norms
features_valid = features_valid / norms # normalize validation set by training set norms
print features_test[0]
print features_train[9]
print np.sqrt(np.sum((features_train[9]-features_test[0])**2))
for i in range(0,10):
print str(i) + " : " + str(np.sqrt(np.sum((features_train[i]-features_test[0])**2)))
for i in range(0,10):
print str(i) + " : " + str(np.sqrt(np.sum((features_train[i]-features_test[2])**2)))
for i in xrange(3):
print features_train[i]-features_test[0]
# should print 3 vectors of length 18
print features_train[0:3] - features_test[0]
# verify that vectorization works
results = features_train[0:3] - features_test[0]
print results[0] - (features_train[0]-features_test[0])
# should print all 0's if results[0] == (features_train[0]-features_test[0])
print results[1] - (features_train[1]-features_test[0])
# should print all 0's if results[1] == (features_train[1]-features_test[0])
print results[2] - (features_train[2]-features_test[0])
# should print all 0's if results[2] == (features_train[2]-features_test[0])
diff = features_train[0:len(features_train)] - features_test[0]
print diff[-1].sum() # sum of the feature differences between the query and last training house
# should print -0.0934339605842
print np.sum(diff**2, axis=1)[15] # take sum of squares across each row, and print the 16th sum
print np.sum(diff[15]**2) # print the sum of squares for the 16th row -- should be same as above
distances = np.sqrt(np.sum(diff**2, axis=1))
print distances[100] # Euclidean distance between the query house and the 101th training house
# should print 0.0237082324496
def compute_distances(features_instances, features_query):
diff = features_instances[0:len(features_instances)] - features_query
distances = np.sqrt(np.sum(diff**2, axis=1))
return distances
distances = compute_distances(features_train, features_test[2])
min = distances[0]
index = 0
for i in xrange(len(distances)):
if(distances[i] < min):
min = distances[i]
index = i
print min
print index
print output_train[382]
def k_nearest_neighbors(k, feature_train, features_query):
distances = compute_distances(features_train, features_query)
neighbors = np.argsort(distances)[0:k]
return neighbors
print k_nearest_neighbors(4, features_train, features_test[2])
def predict_output_of_query(k, features_train, output_train, features_query):
neighbors = k_nearest_neighbors(k, features_train, features_query)
prices = output_train[neighbors]
prediction = np.sum(prices)/k
return prediction
print predict_output_of_query(4, features_train, output_train, features_test[2])
def predict_output(k, features_train, output_train, features_query):
predictions = []
for i in xrange(len(features_query)):
prediction = predict_output_of_query(k, features_train, output_train, features_query[i])
predictions.append(prediction)
return predictions
print predict_output(10, features_train, output_train,features_test[0:10])
import matplotlib.pyplot as plt
%matplotlib inline
kvals = range(1, 16)
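# Added step (the cell below references `rss_all` without defining it): compute the
# residual sum of squares on the validation set for each k, using the helpers above.
rss_all = []
for k in kvals:
    predictions = predict_output(k, features_train, output_train, features_valid)
    residuals = output_valid - predictions
    rss_all.append((residuals ** 2).sum())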
plt.plot(kvals, rss_all,'bo-')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load in house sales data
Step2: Import useful functions from previous notebooks
Step3: We will also need the normalize_features() function from Week 5 that normalizes all feature columns to unit norm. Paste this function below.
Step4: Split data into training, test, and validation sets
Step5: Extract features and normalize
Step6: In computing distances, it is crucial to normalize features. Otherwise, for example, the sqft_living feature (typically on the order of thousands) would exert a much larger influence on distance than the bedrooms feature (typically on the order of ones). We divide each column of the training feature matrix by its 2-norm, so that the transformed column has unit norm.
Step7: Compute a single distance
Step8: Now print the 10th row (index 9) of the training feature matrix. Again, you get an 18-dimensional vector with components between 0 and 1.
Step9: QUIZ QUESTION
Step10: Compute multiple distances
Step11: QUIZ QUESTION
Step12: It is computationally inefficient to loop over computing distances to all houses in our training dataset. Fortunately, many of the Numpy functions can be vectorized, applying the same operation over multiple values or vectors. We now walk through this process.
Step13: The subtraction operator (-) in Numpy is vectorized as follows
Step14: Note that the output of this vectorized operation is identical to that of the loop above, which can be verified below
Step15: Aside
Step16: To test the code above, run the following cell, which should output a value -0.0934339605842
Step17: The next step in computing the Euclidean distances is to take these feature-by-feature differences in diff, square each, and take the sum over feature indices. That is, compute the sum of square feature differences for each training house (row in diff).
Step18: With this result in mind, write a single-line expression to compute the Euclidean distances between the query house and all houses in the training set. Assign the result to a variable distances.
Step19: To test the code above, run the following cell, which should output a value 0.0237082324496
Step20: Now you are ready to write a function that computes the distances from a query house to all training houses. The function should take two parameters
Step21: QUIZ QUESTIONS
Step22: Perform k-nearest neighbor regression
Step23: QUIZ QUESTION
Step24: Make a single prediction by averaging k nearest neighbor outputs
Step25: QUIZ QUESTION
Step26: Compare this predicted value using 4-nearest neighbors to the predicted value using 1-nearest neighbor computed earlier.
Step27: QUIZ QUESTION
Step28: Choosing the best value of k using a validation set
|
5,625
|
<ASSISTANT_TASK:>
Python Code:
#Add all dependencies to PYTHON_PATH
import sys
sys.path.append("/usr/lib/spark/python")
sys.path.append("/usr/lib/spark/python/lib/py4j-0.10.4-src.zip")
sys.path.append("/usr/lib/python3/dist-packages")
#Define environment variables
import os
os.environ["HADOOP_CONF_DIR"] = "/etc/hadoop/conf"
os.environ["PYSPARK_PYTHON"] = "python3"
os.environ["PYSPARK_DRIVER_PYTHON"] = "ipython"
#Load PySpark to connect to a Spark cluster
from pyspark import SparkConf, SparkContext
#from osgeo import gdal
#To read GeoTiffs as a ByteArray
from io import BytesIO
from rasterio.io import MemoryFile
appName = "plot_GeoTiff_bands_python"
masterURL="spark://pheno0.phenovari-utwente.surf-hosted.nl:7077"
#A context needs to be created if it does not already exist
try:
sc.stop()
except NameError:
print("A new Spark Context will be created.")
sc = SparkContext(conf = SparkConf().setAppName(appName).setMaster(masterURL))
file_path = "hdfs:///user/hadoop/spring-index/BloomFinal/1980.tif"
data = sc.binaryFiles(file_path).take(1)
dataByteArray = bytearray(data[0][1])
#If it is needed to convert to a numpy array
#import numpy as np
#file_bytes = np.asarray(dataByteArray, dtype=np.uint8)
#Lets check if the files was read correctly by printing its metadata
with MemoryFile(dataByteArray) as memfile:
with memfile.open() as dataset:
print(dataset.profile)
import matplotlib.pyplot as plt
import rasterio
from rasterio import plot
with MemoryFile(dataByteArray) as memfile:
with memfile.open() as dataset:
plot.show((dataset,1))
#Define variables so they can be re-used.
memfile = MemoryFile(dataByteArray)
dataset = memfile.open()
%matplotlib notebook
fig, (axr, axg, axb) = plt.subplots(1,3, figsize=(12, 4), sharex=True, sharey=True)
plot.show((dataset, 1), title='band 1', ax=axr)
plot.show((dataset, 2), title='band 2', ax=axg)
plot.show((dataset, 3), title='band 3', ax=axb)
%matplotlib inline
plot.show_hist(dataset)
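# Added example (not in the original notebook): pull one band into a numpy array for
# further processing; dataset.read(band) is the standard rasterio call.
import numpy as np
band1 = dataset.read(1)
print(band1.shape)
print(np.nanmin(band1), np.nanmax(band1), np.nanmean(band1))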
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Connect to Spark
Step2: Read a GeoTiff file
Step3: Visualization
Step4: Interactive visualization
Step5: Histogram
|
5,626
|
<ASSISTANT_TASK:>
Python Code:
import pymongo
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import json
import re
from pymongo import MongoClient
%matplotlib inline
client = MongoClient('mongodb')
db = client.dp
collection = db.divorce
data = db.divorce.find()[0]['data']
for entry in data:
entry['DIVORCES'] = entry['values'][0]['NUMBER']
s = entry['DURATION']
tmp = re.findall(r'\d+', s)
if (len(tmp) == 1):
tmp[0] = 0
del entry['values']
del entry['NUTS1']
del entry['NUTS2']
entry['DURATION'] = tmp[0]
data_json = json.dumps(data)
df = pd.read_json(data_json)
filtered = df[df.DURATION == 10].filter(items=['DIVORCES','REF_YEAR'])
filtered
filtered.plot.bar(x='REF_YEAR',y='DIVORCES')
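# Added example (not in the original notebook): reshape the same data so that each
# marriage duration becomes its own line over the reference years.
pivoted = df.pivot_table(index='REF_YEAR', columns='DURATION', values='DIVORCES', aggfunc='sum')
pivoted.plot(figsize=(10, 5))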
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Connect to mongoDB
Step2: 2. Fetch & Transform
Step3: Transform to JSON for pandas import
Step4: Plot
|
5,627
|
<ASSISTANT_TASK:>
Python Code:
from matplotlib.colors import ListedColormap
from sklearn import cross_validation, datasets, linear_model, metrics
import numpy as np
%pylab inline
blobs = datasets.make_blobs(centers = 2, cluster_std = 5.5, random_state=1)
colors = ListedColormap(['red', 'blue'])
pylab.figure(figsize(8, 8))
pylab.scatter([x[0] for x in blobs[0]], [x[1] for x in blobs[0]], c=blobs[1], cmap=colors)
train_data, test_data, train_labels, test_labels = cross_validation.train_test_split(blobs[0], blobs[1],
test_size = 0.3,
random_state = 1)
# create the classifier object
ridge_classifier = linear_model.RidgeClassifier(random_state = 1)
# train the classifier
ridge_classifier.fit(train_data, train_labels)
# apply the trained classifier
ridge_predictions = ridge_classifier.predict(test_data)
print test_labels
print ridge_predictions
# evaluate the classification quality
metrics.accuracy_score(test_labels, ridge_predictions)
ridge_classifier.coef_
ridge_classifier.intercept_
log_regressor = linear_model.LogisticRegression(random_state = 1)
log_regressor.fit(train_data, train_labels)
lr_predictions = log_regressor.predict(test_data)
lr_proba_predictions = log_regressor.predict_proba(test_data)
print test_labels
print lr_predictions
print lr_proba_predictions
print metrics.accuracy_score(test_labels, lr_predictions)
print metrics.accuracy_score(test_labels, ridge_predictions)
ridge_scoring = cross_validation.cross_val_score(ridge_classifier, blobs[0], blobs[1], scoring = 'accuracy', cv = 10)
lr_scoring = cross_validation.cross_val_score(log_regressor, blobs[0], blobs[1], scoring = 'accuracy', cv = 10)
lr_scoring
print 'Ridge mean:{}, max:{}, min:{}, std:{}'.format(ridge_scoring.mean(), ridge_scoring.max(),
ridge_scoring.min(), ridge_scoring.std())
print 'Log mean:{}, max:{}, min:{}, std:{}'.format(lr_scoring.mean(), lr_scoring.max(),
lr_scoring.min(), lr_scoring.std())
scorer = metrics.make_scorer(metrics.accuracy_score)
cv_strategy = cross_validation.StratifiedShuffleSplit(blobs[1], n_iter = 20 , test_size = 0.3, random_state = 2)
ridge_scoring = cross_validation.cross_val_score(ridge_classifier, blobs[0], blobs[1], scoring = scorer, cv = cv_strategy)
lr_scoring = cross_validation.cross_val_score(log_regressor, blobs[0], blobs[1], scoring = scorer, cv = cv_strategy)
print 'Ridge mean:{}, max:{}, min:{}, std:{}'.format(ridge_scoring.mean(), ridge_scoring.max(),
ridge_scoring.min(), ridge_scoring.std())
print 'Log mean:{}, max:{}, min:{}, std:{}'.format(lr_scoring.mean(), lr_scoring.max(),
lr_scoring.min(), lr_scoring.std())
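# Added example (not in the original notebook): draw the two separating lines; for a
# 2D linear classifier w0 + w1*x + w2*y = 0 the boundary is y = -(w0 + w1*x) / w2.
plot_x = np.linspace(blobs[0][:, 0].min(), blobs[0][:, 0].max(), 100)
pylab.figure(figsize(8, 8))
pylab.scatter([x[0] for x in blobs[0]], [x[1] for x in blobs[0]], c=blobs[1], cmap=colors)
for clf, style in [(ridge_classifier, 'g-'), (log_regressor, 'k--')]:
    w1, w2 = clf.coef_[0]
    w0 = clf.intercept_[0]
    pylab.plot(plot_x, -(w0 + w1 * plot_x) / w2, style)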
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data generation
Step2: Linear classification
Step3: LogisticRegression
Step4: Quality assessment with cross-validation
Step5: cross_val_score with a user-specified scorer and cv_strategy
|
5,628
|
<ASSISTANT_TASK:>
Python Code:
!pip install google-cloud-bigquery
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
PROJECT_ID = "your_project_id"
REGION = 'US'
from google.cloud import bigquery
import time
import pandas as pd
pd.set_option('display.float_format', lambda x: '%.3f' % x)
!bq mk --location=$REGION --dataset $PROJECT_ID:bqml
%%bigquery --project $PROJECT_ID
## follows the Google Analytics schema:
#https://support.google.com/analytics/answer/3437719?hl=en
SELECT
CONCAT(fullVisitorID,'-',CAST(visitNumber AS STRING)) AS visitorId,
hitNumber,
time,
page.pageTitle,
type,
productSKU,
v2ProductName,
v2ProductCategory,
productPrice/1000000 as productPrice_USD
FROM
`bigquery-public-data.google_analytics_sample.ga_sessions_20160801`,
UNNEST(hits) AS hits,
UNNEST(hits.product) AS hits_product
LIMIT 5
%%bigquery --project $PROJECT_ID
## follows schema from https://support.google.com/analytics/answer/3437719?hl=en&ref_topic=3416089
CREATE OR REPLACE TABLE bqml.aggregate_web_stats AS (
WITH
durations AS (
--calculate pageview durations
SELECT
CONCAT(fullVisitorID,'-',
CAST(visitNumber AS STRING),'-',
CAST(hitNumber AS STRING) ) AS visitorId_session_hit,
LEAD(time, 1) OVER (
PARTITION BY CONCAT(fullVisitorID,'-',CAST(visitNumber AS STRING))
ORDER BY
time ASC ) - time AS pageview_duration
FROM
`bigquery-public-data.google_analytics_sample.ga_sessions_2016*`,
UNNEST(hits) AS hit
),
prodview_durations AS (
--filter for product detail pages only
SELECT
CONCAT(fullVisitorID,'-',CAST(visitNumber AS STRING)) AS visitorId,
productSKU AS itemId,
IFNULL(dur.pageview_duration,
1) AS pageview_duration,
FROM
`bigquery-public-data.google_analytics_sample.ga_sessions_2016*` t,
UNNEST(hits) AS hits,
UNNEST(hits.product) AS hits_product
JOIN
durations dur
ON
CONCAT(fullVisitorID,'-',
CAST(visitNumber AS STRING),'-',
CAST(hitNumber AS STRING)) = dur.visitorId_session_hit
WHERE
#action_type: Product detail views = 2
eCommerceAction.action_type = "2"
),
aggregate_web_stats AS(
--sum pageview durations by visitorId, itemId
SELECT
visitorId,
itemId,
SUM(pageview_duration) AS session_duration
FROM
prodview_durations
GROUP BY
visitorId,
itemId )
SELECT
*
FROM
aggregate_web_stats
);
-- Show table
SELECT
*
FROM
bqml.aggregate_web_stats
LIMIT
10
%%bigquery --project $PROJECT_ID
SELECT
*
FROM
bqml.aggregate_web_stats
LIMIT
10
%%bigquery --project $PROJECT_ID
CREATE OR REPLACE MODEL bqml.retail_recommender
OPTIONS(model_type='matrix_factorization',
user_col='visitorId',
item_col='itemId',
rating_col='session_duration',
feedback_type='implicit'
)
AS
SELECT * FROM bqml.aggregate_web_stats
%%bigquery --project $PROJECT_ID
SELECT
*
FROM
ML.EVALUATE(MODEL bqml.retail_recommender)
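%%bigquery --project $PROJECT_ID
-- Added example (not in the original notebook): a sketch of how the matrix
-- factorization hyperparameters could be varied. NUM_FACTORS and L2_REG are the
-- usual BigQuery ML options to tune here; the model name and the values below are
-- placeholders, not recommendations.
CREATE OR REPLACE MODEL bqml.retail_recommender_tuned
OPTIONS(model_type='matrix_factorization',
        user_col='visitorId',
        item_col='itemId',
        rating_col='session_duration',
        feedback_type='implicit',
        num_factors=16, -- latent dimension (placeholder)
        l2_reg=30 -- regularization strength (placeholder)
        )
AS
SELECT * FROM bqml.aggregate_web_stats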
%%bigquery --project $PROJECT_ID
#check for a single visitor
DECLARE MY_VISITORID STRING DEFAULT "0824461277962362623-1";
SELECT
*
FROM
ML.RECOMMEND(MODEL `bqml.retail_recommender`,
(SELECT MY_VISITORID as visitorID)
)
ORDER BY predicted_session_duration_confidence DESC
LIMIT 5
%%bigquery --project $PROJECT_ID
DECLARE MY_VISITORID STRING DEFAULT "6499749315992064304-2";
WITH product_details AS(
SELECT
productSKU,
v2ProductName,
FROM
`bigquery-public-data.google_analytics_sample.ga_sessions_2016*`,
UNNEST(hits) AS hits,
UNNEST(hits.product) AS hits_product
GROUP BY 2,1
)
SELECT
r.*,
d.v2ProductName
FROM
ML.RECOMMEND(MODEL `bqml.retail_recommender`,
(
SELECT
MY_VISITORID as visitorId)) r
JOIN
product_details d
ON
r.itemId = d.productSKU
ORDER BY predicted_session_duration_confidence DESC
LIMIT 5
%%bigquery --project $PROJECT_ID
-- Create output table
CREATE OR REPLACE TABLE bqml.prod_recommendations AS (
WITH predictions AS (
SELECT
visitorId,
ARRAY_AGG(STRUCT(itemId,
predicted_session_duration_confidence)
ORDER BY
predicted_session_duration_confidence DESC
LIMIT 5) as recommended
FROM ML.RECOMMEND(MODEL bqml.retail_recommender)
GROUP BY visitorId
)
SELECT
visitorId,
itemId,
predicted_session_duration_confidence
FROM
predictions p,
UNNEST(recommended)
);
-- Show table
SELECT
*
FROM
bqml.prod_recommendations
ORDER BY
visitorId
LIMIT
20
%%bigquery --project $PROJECT_ID
WITH predictions AS (
SELECT
visitorId,
ARRAY_AGG(STRUCT(itemId,
predicted_session_duration_confidence)
ORDER BY
predicted_session_duration_confidence) as recommended
FROM ML.RECOMMEND(MODEL bqml.retail_recommender)
WHERE itemId = "GGOEYOLR018699"
GROUP BY visitorId
)
SELECT
visitorId,
ML.MIN_MAX_SCALER(
predicted_session_duration_confidence
) OVER() as GGOEYOLR018699
FROM
predictions p,
UNNEST(recommended)
ORDER BY GGOEYOLR018699 DESC
%%bigquery df --project $PROJECT_ID
SELECT
*
FROM
bqml.prod_recommendations
LIMIT 100
df.head()
%%bigquery --project $PROJECT_ID
EXPORT DATA OPTIONS (
uri="gs://mybucket/myfile/recommendations_*.csv",
format=CSV
) AS
SELECT
*
FROM
bqml.prod_recommendations
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set up your GCP project
Step2: Import libraries and define constants
Step3: Creating a BigQuery dataset
Step4: Raw data
Step5: Pre-process the data
Step6: The training data
Step7: Train the matrix factorization model
Step8: Model Evaluation
Step9: Hyperparameter Tuning
Step10: What are the names of the recommended products? Discover the product names by joining the resulting productSKU recommendations back with the product names
Step11: Batch predictions for all users
Step12: Using the predicted recommendations in production
Step13: To create a column per product, you can use the pivot() function as described in this blogpost.
Step14: 2-2. Export predictions table to Google Cloud Storage
|
5,629
|
<ASSISTANT_TASK:>
Python Code:
!ls
!ps x|grep python
%load_ext fortranmagic
%%fortran
subroutine f1(x, y, z)
real, intent(in) :: x,y
real, intent(out) :: z
z = sin(x+y)
end subroutine f1
f1(3,4)
?f1
%load_ext oct2py.ipython
%%octave
A=rand(10)
A
x = %octave [1 2; 3 4];
x
%%octave -f svg
p = [12 -2.5 -8 -0.1 8];
x = 0:0.01:1;
polyout(p, 'x')
plot(x, polyval(p, x));
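# Added example (not in the original notebook): oct2py's magics can also move data
# between Python and Octave via the -i (push) and -o (pull) flags (flag names taken
# from the oct2py documentation; treat this as a sketch).
data = [1, 2, 3, 4]
%%octave -i data -o result
result = cumsum(data);
result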
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Fortran
Step2: Octave
|
5,630
|
<ASSISTANT_TASK:>
Python Code:
import os, numpy as np
import histogram.hdf as hh, histogram as H
from matplotlib import pyplot as plt
%matplotlib notebook
# %matplotlib inline
import mantid
from multiphonon.sqe import plot as plot_sqe
from multiphonon.ui.getdos import Context, NxsWizardStart, QEGridWizardStart, GetDOSWizStart
context=Context()
workdir = os.path.abspath('./V_Ei120meV')
!mkdir -p {workdir}
%cd {workdir}
# context.from_yaml('getdos2-V_Ei120meV-context.yaml')
dest = 'ARCS_V_annulus.nxs'
url = "https://mcvine.ornl.gov/multiphonon/ARCS_V_annulus.nxs"
cmd = 'wget %r -O %r' % (url, dest)
print cmd
%%time
!{cmd} >log.download 2>err.download
ls
context.sample_nxs = './ARCS_V_annulus.nxs'
NxsWizardStart(context).show()
context.to_yaml('./getdos2-V_Ei120meV-context.yaml')
QEGridWizardStart(context).show()
%%script bash
cat work/raw2iqe-sample.params
iqe = hh.load('work/iqe.h5')
plt.figure(figsize=(6,4))
plot_sqe(iqe)
# plt.xlim(0, 11)
plt.clim(0, 3e-3)
iqe2 = iqe.copy()
I = iqe2.I; I[I!=I] = 0 # remove NaNs
IE = iqe2.sum('Q') # sum over Q
plt.figure(figsize=(6,4))
plt.plot(IE.energy, IE.I)
context.to_yaml('./getdos2-V_Ei120meV-context.yaml')
context.initdos = ''
GetDOSWizStart(context).show()
context.to_yaml('./getdos2-V_Ei120meV-context.yaml')
# print context
ls work/
dos = hh.load('work/final-dos.h5')
plt.figure(figsize=(5,3))
plt.plot(dos.E, dos.I)
plt.xlabel('Energy (meV)')
# plt.xlim(0, 30)
plt.tight_layout()
from multiphonon.backward import plotutils as pu
plt.figure(figsize=(5,3))
pu.plot_dos_iteration('work/')
plt.figure(figsize=(6,4))
pu.plot_residual('work/')
plt.figure(figsize=(8, 4))
pu.plot_intermediate_result_se('work/round-4')
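# Added example (not in the original notebook): a quick numpy-only sanity check --
# integrate the DOS over energy, e.g. to renormalize it to unity.
dos_integral = np.trapz(dos.I, dos.E)
print dos_integral
dos_I_normalized = dos.I / dos_integral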
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create a context for getdos. It stores the processing parameters.
Step2: Create a new working directory and change into it. All intermediate results and final outputs will be in this new directory.
Step3: If you want to reuse a previously-saved context, please uncomment the following cell and execute
Step4: Get experimental data
Step5: Download
Step6: The following command should show the downloaded file "ARCS_V_annulus.nxs"
Step7: Set sample_nxs
Step8: Experimental data and condition
Step9: Save configuration so you can reuse it
Step10: Obtain S(Q,E)
Step11: Parameters are saved in the work dir. Uncomment the script below to see.
Step12: Plot sample IQE
Step13: This is a plot of vanadium S(Q, E) histogram.
Step14: At the center of this plot there is an enormous peak that is due to elastic scattering, which should be excluded from the phonon DOS calculation
Step15: Run GetDOS
Step16: Save context
Step17: Print context
Step18: Check output
Step19: Plot the final result for DOS
Step20: More plotting utils are available
|
5,631
|
<ASSISTANT_TASK:>
Python Code:
from collections import Counter
import pandas as pd
%matplotlib inline
from pylab import rcParams
from bs4 import BeautifulSoup
import textacy
rcParams['figure.figsize'] = 10, 4
import matplotlib.pyplot as plt
plt.style.use('ggplot')
import spacy
nlp = spacy.load('en')
with open('pride.txt') as f:
pride = f.read()
doc = nlp(pride)
def proportionWithTag(doc, tag):
"""
Returns the proportion of words in the document that have a certain POS tag.
If given a list instead of a tag, returns the proportions of words in the document
that have those tags.
"""
totalWords = len(doc)
if type(tag) == list:
wordsWithTag = [word for word in doc if word.tag_ in tag]
else:
wordsWithTag = [word for word in doc if word.tag_ == tag]
return len(wordsWithTag)/totalWords
def proportionWithLemma(doc, lemma):
totalWords = len(doc)
wordsWithLemma = [word for word in doc if word.lemma_ == lemma]
return len(wordsWithLemma)/totalWords
def beProportion(doc):
totalWords = len(doc)
bes = [word for word in doc if word.lemma_ == 'be' and word.tag_ in verbtags] # 488 is "be"
return len(bes)/totalWords
presentVerbTags = ['VB', 'VBG', 'VBP', 'VBZ']
verbtags = ['VB', 'VBD', 'VBG', 'VBN', 'VBP', 'VBZ']
quoted_passages = ["""It is a truth universally acknowledged,
that a single man in possession of a good fortune, must be in want of a wife.""",
"""In vain I have struggled. It will not do. My feelings will not be repressed.
You must allow me to tell you how ardently I admire and love you.""",
"""Happiness in marriage is entirely a matter of chance. If the dispositions of the parties are ever
so well known to each other or ever so similar beforehand,
it does not advance their felicity in the least.
They always continue to grow sufficiently unlike afterwards to have their share of vexation;
and it is better to know as little as possible of the defects of the person with whom you are to pass your life.""",
"""There are few people whom I really love, and still fewer of whom I think well.
The more I see of the world, the more am I dissatisfied with it;
and every day confirms my belief of the inconsistency of all human characters,
and of the little dependence that can be placed on the appearance of merit or sense.""",
'''"Pride," observed Mary, who piqued herself upon the solidity of her reflections, "is a very common failing,''',
"""Vanity and pride are different things, though the words are often used synonymously.
A person may be proud without being vain. Pride relates more to our opinion of ourselves,
vanity to what we would have others think of us."""
]
joined = ' '.join(quoted_passages)
quoted = nlp(joined)
tagDict = {"CC": "Coordinating conjunction",
"DT": "Determiner",
"EX": "Existential there",
"IN": "Preposition or subordinating conjunction",
"JJ": "Adjective",
"JJR": "Adjective, comparative",
"JJS": "Adjective, superlative",
"MD": "Modal",
"NN": "Noun, singular or mass",
"NNS": "Noun, plural",
"NNP": "Proper noun, singular",
"NNPS": "Proper noun, plural",
"PDT": "Predeterminer",
"POS": "Possessive ending",
"PRP": "Personal pronoun",
"PRP$": "Possessive pronoun",
"RB": "Adverb",
"RBR": "Adverb, comparative",
"RBS": "Adverb, superlative",
"RP": "Particle",
"TO": "to",
"UH": "Interjection",
"VB": "Verb, base form",
"VBD": "Verb, past tense",
"VBG": "Verb, gerund or present participle",
"VBN": "Verb, past participle",
"VBP": "Verb, non-3rd person singular present",
"VBZ": "Verb, 3rd person singular present",
"WDT": "Wh-determiner",
"WP": "Wh-pronoun",
"WP$": "Possessive wh-pronoun",
"WRB": "Wh-adverb"}
tagset = list(tagDict.keys())
def compareTags(a, b, tagset):
proportionsDict = {}
for tag in tagset:
proportionsDict[tag] = [proportionWithTag(x, tag) for x in [a, b]]
df = pd.DataFrame(proportionsDict).T
df['factor'] = (df[1]/df[0])-1
return df['factor']
compareTags(doc, quoted, tagset)
def compareLemmas(a, b, lemmas):
proportionsDict = {}
for lemma in lemmas:
proportionsDict[lemma] = [proportionWithLemma(x, lemma) for x in [a, b]]
df = pd.DataFrame(proportionsDict).T
df['factor'] = df[1]/df[0]
df['factor'].plot(kind="bar")
compareLemmas(doc, quoted, ['be', 'have', 'do'])
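# Added example (not in the original notebook): the same comparison restricted to a
# few tags of interest, drawn as a bar plot (compareTags returns a pandas Series).
selected_tags = ['MD', 'PRP', 'JJ', 'RB', 'VB', 'VBD']
compareTags(doc, quoted, selected_tags).plot(kind='bar', title='Quoted speech vs. whole novel')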
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exploratory analysis of quoted speech
Step2: From the Penn Treebank table
Step9: Pride and Prejudice Highlights
|
5,632
|
<ASSISTANT_TASK:>
Python Code:
a=[12586269025, 20365011074, 32951280099, 53316291173, 86267571272, 139583862445, 225851433717,365435296162, 591286729879,
956722026041, 1548008755920, 2504730781961, 4052739537881, 6557470319842, 10610209857723, 17167680177565, 27777890035288,
44945570212853, 72723460248141, 117669030460994]
b=[832040, 1346269, 2175309, 3524578, 5702887, 9227465, 14930352, 24157817, 39088169, 63245986]
c=[267914296, 433494437, 701408733, 1134903170, 1836311903, 2971215073, 4807526976,7778742049,
12586269025, 20365011074, 32951280099, 53316291173, 86267571272]
def kiholmit(nap,ora,fiulany):
"..." # the docstring goes here
#
# the magic goes here...
#
return # the return value goes here
def mertani(x0,q,N):
"..." # the docstring goes here
#
# the magic goes here...
#
adatok={'Alonzo Hinton': '(855) 278-2590',
'Cleo Hennings': '(844) 832-0585',
'Daine Ventura': '(833) 832-5081',
'Esther Leeson': '(855) 485-0624',
'Gene Connell': '(811) 973-2926',
'Lashaun Bottorff': '(822) 687-1735',
'Marx Hermann': '(844) 164-8116',
'Nicky Duprey': '(811) 032-6328',
'Piper Subia': '(844) 373-4228',
'Zackary Palomares': '(822) 647-3686'}
def telefon_kozpont(korzet,adatok):
"Given an area code (korzet), I print who lives there."
#
# the magic goes here...
#
return # the return value goes here
def poly(x,*a):
"Polynomial function f(x)=\sum_i a_i x^i" # This is just the docstring
#
# the magic goes here...
#
return # the return value goes here
def fuggveny(x,*args,**kwargs):
"Unless kwargs says otherwise, I evaluate a polynomial"
#
# the magic goes here
#
if kwargs['fajta']=='inverz':
#
#
else:
#
#
#
return # the return value goes here...
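# Added example (not part of the original exercise sheet): one possible way the
# mertani and poly skeletons above could be filled in; the *_pelda names are new.
def mertani_pelda(x0, q, N):
    "First N terms of the geometric sequence starting at x0 with common ratio q."
    return [x0 * q**i for i in range(N)]
def poly_pelda(x, *a):
    "Polynomial f(x) = sum_i a_i * x**i evaluated at x."
    return sum(coef * x**i for i, coef in enumerate(a))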
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 02-if
Step2: 03-Geometric sequence (mértani sorozat)
Step3: 04-Telephone exchange (telefon központ)
Step4: 05-Variable number of arguments-I
Step5: 07-Keyword function with a variable number of arguments ☠
|
5,633
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
%matplotlib inline
from sklearn.datasets import make_regression
bias = 100
X0, y, coef = make_regression(n_samples=100, n_features=1, bias=bias, noise=10, coef=True, random_state=1)
X = np.hstack([np.ones_like(X0), X0])
np.ones_like(X0)[:5] # np.ones_like(X0): an array with the same shape as X0, filled with ones
X[:5]
y = y.reshape(len(y), 1)
w = np.dot(np.dot(np.linalg.inv(np.dot(X.T, X)), X.T), y)
print("bias:", bias)
print("coef:", coef)
print("w:\n", w)
w = np.linalg.lstsq(X, y)[0]
w
xx = np.linspace(np.min(X0) - 1, np.max(X0) + 1, 1000)
XX = np.vstack([np.ones(xx.shape[0]), xx.T]).T
yy = np.dot(XX, w)
plt.scatter(X0, y)
plt.plot(xx, yy, 'r-')
plt.show()
from sklearn.datasets import load_diabetes
diabetes = load_diabetes()
dfX_diabetes = pd.DataFrame(diabetes.data, columns=["X%d" % (i+1) for i in range(np.shape(diabetes.data)[1])])
dfy_diabetes = pd.DataFrame(diabetes.target, columns=["target"])
df_diabetes0 = pd.concat([dfX_diabetes, dfy_diabetes], axis=1)
df_diabetes0.tail()
from sklearn.linear_model import LinearRegression
model_diabetes = LinearRegression().fit(diabetes.data, diabetes.target)
print(model_diabetes.coef_)
print(model_diabetes.intercept_)
predictions = model_diabetes.predict(diabetes.data)
plt.scatter(diabetes.target, predictions)
plt.xlabel("target")
plt.ylabel("prediction")
plt.show()
mean_abs_error = (np.abs(((diabetes.target - predictions)/diabetes.target)*100)).mean()
print("MAE: %.2f%%" % (mean_abs_error))
import sklearn as sk
import sklearn.metrics  # make sk.metrics available; `import sklearn` alone does not import submodules
sk.metrics.median_absolute_error(diabetes.target, predictions)
sk.metrics.mean_squared_error(diabetes.target, predictions)
from sklearn.datasets import load_boston
boston = load_boston()
dfX_boston = pd.DataFrame(boston.data, columns=boston.feature_names)
dfy_boston = pd.DataFrame(boston.target, columns=["MEDV"])
df_boston0 = pd.concat([dfX_boston, dfy_boston], axis=1)
df_boston0.tail()
model_boston = LinearRegression().fit(boston.data, boston.target)
print(model_boston.coef_)
print(model_boston.intercept_)
predictions = model_boston.predict(boston.data)
plt.scatter(boston.target, predictions)
plt.xlabel("target")
plt.ylabel("prediction")
plt.show()
mean_abs_error = (np.abs(((boston.target - predictions)/boston.target)*100)).mean()
print("MAE: %.2f%%" % (mean_abs_error))
sk.metrics.median_absolute_error(boston.target, predictions)
sk.metrics.mean_squared_error(boston.target, predictions)
import statsmodels.api as sm  # statsmodels was never imported above; needed for add_constant and OLS
df_diabetes = sm.add_constant(df_diabetes0)
df_diabetes.tail()
model_diabetes2 = sm.OLS(df_diabetes.ix[:, -1], df_diabetes.ix[:, :-1])
result_diabetes2 = model_diabetes2.fit()
result_diabetes2
print(result_diabetes2.summary())
df_boston = sm.add_constant(df_boston0)
model_boston2 = sm.OLS(df_boston.ix[:, -1], df_boston.ix[:, :-1])
result_boston2 = model_boston2.fit()
print(result_boston2.summary())
dir(result_boston2)
sm.graphics.plot_fit(result_boston2, "CRIM")
plt.show()
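# Added example (not in the original notebook): residuals vs. fitted values from the
# results object (RegressionResults exposes .fittedvalues and .resid).
plt.scatter(result_boston2.fittedvalues, result_boston2.resid)
plt.axhline(0, color='r')
plt.xlabel("fitted values")
plt.ylabel("residuals")
plt.show()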
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: OLS (Ordinary Least Squares)
Step2: Linear regression analysis with the scikit-learn package
Step3: Boston Housing Price
Step4: Linear regression analysis with statsmodels
Step5: The RegressionResults class stores the analysis results in various attributes, so the user can later pick and use whichever they need.
Step6: statsmodels also provides various plots of the regression results.
|
5,634
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
raw_inputs = [
[711, 632, 71],
[73, 8, 3215, 55, 927],
[83, 91, 1, 645, 1253, 927],
]
# By default, this will pad using 0s; it is configurable via the
# "value" parameter.
# Note that you could use "pre" padding (at the beginning) or
# "post" padding (at the end).
# We recommend using "post" padding when working with RNN layers
# (in order to be able to use the
# CuDNN implementation of the layers).
padded_inputs = tf.keras.preprocessing.sequence.pad_sequences(
raw_inputs, padding="post"
)
print(padded_inputs)
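# For comparison (illustrative only): "pre" padding puts the zeros at the beginning instead.
print(tf.keras.preprocessing.sequence.pad_sequences(raw_inputs, padding="pre"))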
embedding = layers.Embedding(input_dim=5000, output_dim=16, mask_zero=True)
masked_output = embedding(padded_inputs)
print(masked_output._keras_mask)
masking_layer = layers.Masking()
# Simulate the embedding lookup by expanding the 2D input to 3D,
# with embedding dimension of 10.
unmasked_embedding = tf.cast(
tf.tile(tf.expand_dims(padded_inputs, axis=-1), [1, 1, 10]), tf.float32
)
masked_embedding = masking_layer(unmasked_embedding)
print(masked_embedding._keras_mask)
model = keras.Sequential(
[layers.Embedding(input_dim=5000, output_dim=16, mask_zero=True), layers.LSTM(32),]
)
inputs = keras.Input(shape=(None,), dtype="int32")
x = layers.Embedding(input_dim=5000, output_dim=16, mask_zero=True)(inputs)
outputs = layers.LSTM(32)(x)
model = keras.Model(inputs, outputs)
class MyLayer(layers.Layer):
def __init__(self, **kwargs):
super(MyLayer, self).__init__(**kwargs)
self.embedding = layers.Embedding(input_dim=5000, output_dim=16, mask_zero=True)
self.lstm = layers.LSTM(32)
def call(self, inputs):
x = self.embedding(inputs)
# Note that you could also prepare a `mask` tensor manually.
# It only needs to be a boolean tensor
# with the right shape, i.e. (batch_size, timesteps).
mask = self.embedding.compute_mask(inputs)
output = self.lstm(x, mask=mask) # The layer will ignore the masked values
return output
layer = MyLayer()
x = np.random.random((32, 10)) * 100
x = x.astype("int32")
layer(x)
class TemporalSplit(keras.layers.Layer):
"""Split the input tensor into 2 tensors along the time dimension."""
def call(self, inputs):
# Expect the input to be 3D and mask to be 2D, split the input tensor into 2
# subtensors along the time axis (axis 1).
return tf.split(inputs, 2, axis=1)
def compute_mask(self, inputs, mask=None):
# Also split the mask into 2 if it presents.
if mask is None:
return None
return tf.split(mask, 2, axis=1)
first_half, second_half = TemporalSplit()(masked_embedding)
print(first_half._keras_mask)
print(second_half._keras_mask)
class CustomEmbedding(keras.layers.Layer):
def __init__(self, input_dim, output_dim, mask_zero=False, **kwargs):
super(CustomEmbedding, self).__init__(**kwargs)
self.input_dim = input_dim
self.output_dim = output_dim
self.mask_zero = mask_zero
def build(self, input_shape):
self.embeddings = self.add_weight(
shape=(self.input_dim, self.output_dim),
initializer="random_normal",
dtype="float32",
)
def call(self, inputs):
return tf.nn.embedding_lookup(self.embeddings, inputs)
def compute_mask(self, inputs, mask=None):
if not self.mask_zero:
return None
return tf.not_equal(inputs, 0)
layer = CustomEmbedding(10, 32, mask_zero=True)
x = np.random.random((3, 10)) * 9
x = x.astype("int32")
y = layer(x)
mask = layer.compute_mask(x)
print(mask)
class MyActivation(keras.layers.Layer):
def __init__(self, **kwargs):
super(MyActivation, self).__init__(**kwargs)
# Signal that the layer is safe for mask propagation
self.supports_masking = True
def call(self, inputs):
return tf.nn.relu(inputs)
inputs = keras.Input(shape=(None,), dtype="int32")
x = layers.Embedding(input_dim=5000, output_dim=16, mask_zero=True)(inputs)
x = MyActivation()(x) # Will pass the mask along
print("Mask found:", x._keras_mask)
outputs = layers.LSTM(32)(x) # Will receive the mask
model = keras.Model(inputs, outputs)
class TemporalSoftmax(keras.layers.Layer):
def call(self, inputs, mask=None):
broadcast_float_mask = tf.expand_dims(tf.cast(mask, "float32"), -1)
inputs_exp = tf.exp(inputs) * broadcast_float_mask
inputs_sum = tf.reduce_sum(
inputs_exp * broadcast_float_mask, axis=-1, keepdims=True
)
return inputs_exp / inputs_sum
inputs = keras.Input(shape=(None,), dtype="int32")
x = layers.Embedding(input_dim=10, output_dim=32, mask_zero=True)(inputs)
x = layers.Dense(1)(x)
outputs = TemporalSoftmax()(x)
model = keras.Model(inputs, outputs)
y = model(np.random.randint(0, 10, size=(32, 100)), np.random.random((32, 100, 1)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Masking and padding in Keras
Step2: Introduction
Step3: Masking
Step4: As you can see in the output, the mask is a 2D boolean tensor with shape (batch_size, sequence_length), where each False entry indicates that the corresponding timestep should be ignored during processing.
Step5: The same is true for the following Functional API case:
Step6: Passing mask tensors directly to layers
Step8: Supporting masking in your custom layers
Step9: Here is another example of a CustomEmbedding layer that can generate a mask from its input values:
Step10: Opting in to mask propagation on compatible layers
Step11: You can now use this custom layer between a mask-generating layer (like Embedding) and a mask-consuming layer (like LSTM), and it will pass the mask along so that it reaches the mask-consuming layer.
Step12: Writing layers that need mask information
|
5,635
|
<ASSISTANT_TASK:>
Python Code:
import graphlab
products = graphlab.SFrame('amazon_baby_subset.gl/')
products['sentiment']
products.head(10)['name']
print '# of positive reviews =', len(products[products['sentiment']==1])
print '# of negative reviews =', len(products[products['sentiment']==-1])
import json
with open('important_words.json', 'r') as f: # Reads the list of most frequent words
important_words = json.load(f)
important_words = [str(s) for s in important_words]
print important_words
def remove_punctuation(text):
import string
return text.translate(None, string.punctuation)
products['review_clean'] = products['review'].apply(remove_punctuation)
for word in important_words:
products[word] = products['review_clean'].apply(lambda s : s.split().count(word))
products['perfect']
products['contains_perfect'] = products['perfect'] >= 1
products['contains_perfect'].sum()
import numpy as np
def get_numpy_data(data_sframe, features, label):
data_sframe['intercept'] = 1
features = ['intercept'] + features
features_sframe = data_sframe[features]
feature_matrix = features_sframe.to_numpy()
label_sarray = data_sframe[label]
label_array = label_sarray.to_numpy()
return(feature_matrix, label_array)
# Warning: This may take a few minutes...
feature_matrix, sentiment = get_numpy_data(products, important_words, 'sentiment')
feature_matrix.shape
sentiment
'''
produces probabilistic estimate for P(y_i = +1 | x_i, w).
estimate ranges between 0 and 1.
'''
def predict_probability(feature_matrix, coefficients):
# Take dot product of feature_matrix and coefficients
# YOUR CODE HERE
scores = np.dot(feature_matrix, coefficients)
# Compute P(y_i = +1 | x_i, w) using the link function
# YOUR CODE HERE
predictions = 1. / (1. + np.exp(-scores))
# return predictions
return predictions
dummy_feature_matrix = np.array([[1.,2.,3.], [1.,-1.,-1]])
dummy_coefficients = np.array([1., 3., -1.])
correct_scores = np.array( [ 1.*1. + 2.*3. + 3.*(-1.), 1.*1. + (-1.)*3. + (-1.)*(-1.) ] )
correct_predictions = np.array( [ 1./(1+np.exp(-correct_scores[0])), 1./(1+np.exp(-correct_scores[1])) ] )
print 'The following outputs must match '
print '------------------------------------------------'
print 'correct_predictions =', correct_predictions
print 'output of predict_probability =', predict_probability(dummy_feature_matrix, dummy_coefficients)
def feature_derivative(errors, feature):
# Compute the dot product of errors and feature
derivative = np.dot(errors, feature)
# Return the derivative
return derivative
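# Quick sanity check with made-up numbers (illustrative, not part of the assignment):
# errors = [0.5, -0.5] and feature = [1., 2.] should give 0.5*1 + (-0.5)*2 = -0.5
print feature_derivative(np.array([0.5, -0.5]), np.array([1., 2.]))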
def compute_log_likelihood(feature_matrix, sentiment, coefficients):
indicator = (sentiment==+1)
scores = np.dot(feature_matrix, coefficients)
logexp = np.log(1. + np.exp(-scores))
# Simple check to prevent overflow
mask = np.isinf(logexp)
logexp[mask] = -scores[mask]
lp = np.sum((indicator-1)*scores - logexp)
return lp
dummy_feature_matrix = np.array([[1.,2.,3.], [1.,-1.,-1]])
dummy_coefficients = np.array([1., 3., -1.])
dummy_sentiment = np.array([-1, 1])
correct_indicators = np.array( [ -1==+1, 1==+1 ] )
correct_scores = np.array( [ 1.*1. + 2.*3. + 3.*(-1.), 1.*1. + (-1.)*3. + (-1.)*(-1.) ] )
correct_first_term = np.array( [ (correct_indicators[0]-1)*correct_scores[0], (correct_indicators[1]-1)*correct_scores[1] ] )
correct_second_term = np.array( [ np.log(1. + np.exp(-correct_scores[0])), np.log(1. + np.exp(-correct_scores[1])) ] )
correct_ll = sum( [ correct_first_term[0]-correct_second_term[0], correct_first_term[1]-correct_second_term[1] ] )
print 'The following outputs must match '
print '------------------------------------------------'
print 'correct_log_likelihood =', correct_ll
print 'output of compute_log_likelihood =', compute_log_likelihood(dummy_feature_matrix, dummy_sentiment, dummy_coefficients)
from math import sqrt
def logistic_regression(feature_matrix, sentiment, initial_coefficients, step_size, max_iter):
coefficients = np.array(initial_coefficients) # make sure it's a numpy array
for itr in xrange(max_iter):
# Predict P(y_i = +1|x_i,w) using your predict_probability() function
# YOUR CODE HERE
predictions = predict_probability(feature_matrix, coefficients)
# Compute indicator value for (y_i = +1)
indicator = (sentiment==+1)
# Compute the errors as indicator - predictions
errors = indicator - predictions
for j in xrange(len(coefficients)): # loop over each coefficient
# Recall that feature_matrix[:,j] is the feature column associated with coefficients[j].
# Compute the derivative for coefficients[j]. Save it in a variable called derivative
# YOUR CODE HERE
derivative = feature_derivative(errors, feature_matrix[:,j])
# add the step size times the derivative to the current coefficient
## YOUR CODE HERE
coefficients[j] += step_size * derivative
# Checking whether log likelihood is increasing
if itr <= 15 or (itr <= 100 and itr % 10 == 0) or (itr <= 1000 and itr % 100 == 0) \
or (itr <= 10000 and itr % 1000 == 0) or itr % 10000 == 0:
lp = compute_log_likelihood(feature_matrix, sentiment, coefficients)
print 'iteration %*d: log likelihood of observed labels = %.8f' % \
(int(np.ceil(np.log10(max_iter))), itr, lp)
return coefficients
coefficients = logistic_regression(feature_matrix, sentiment, initial_coefficients=np.zeros(194),
step_size=1e-7, max_iter=301)
# Compute the scores as a dot product between feature_matrix and coefficients.
scores = np.dot(feature_matrix, coefficients)
y_hat = np.where(scores > 0, +1, -1)
(y_hat == +1).sum()
num_mistakes = (y_hat != sentiment).sum() # YOUR CODE HERE
accuracy = 1. - float(num_mistakes) / len(products) # YOUR CODE HERE
print "-----------------------------------------------------"
print '# Reviews correctly classified =', len(products) - num_mistakes
print '# Reviews incorrectly classified =', num_mistakes
print '# Reviews total =', len(products)
print "-----------------------------------------------------"
print 'Accuracy = %.3f' % accuracy
coefficients = list(coefficients[1:]) # exclude intercept
word_coefficient_tuples = [(word, coefficient) for word, coefficient in zip(important_words, coefficients)]
word_coefficient_tuples = sorted(word_coefficient_tuples, key=lambda x:x[1], reverse=True)
sorted(word_coefficient_tuples, key=lambda x: x[1], reverse=True)[:10]
sorted(word_coefficient_tuples, key=lambda x: x[1], reverse=False)[:10]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load review dataset
Step2: One column of this dataset is 'sentiment', corresponding to the class label with +1 indicating a review with positive sentiment and -1 indicating one with negative sentiment.
Step3: Let us quickly explore more of this dataset. The 'name' column indicates the name of the product. Here we list the first 10 products in the dataset. We then count the number of positive and negative reviews.
Step4: Note
Step5: Now, we will perform 2 simple data transformations
Step6: Now we proceed with Step 2. For each word in important_words, we compute a count for the number of times the word occurs in the review. We will store this count in a separate column (one for each word). The result of this feature processing is a single column for each word in important_words which keeps a count of the number of times the respective word occurs in the review text.
Step7: The SFrame products now contains one column for each of the 193 important_words. As an example, the column perfect contains a count of the number of times the word perfect occurs in each of the reviews.
Step8: Now, write some code to compute the number of product reviews that contain the word perfect.
Step9: Quiz Question. How many reviews contain the word perfect?
Step10: We now provide you with a function that extracts columns from an SFrame and converts them into a NumPy array. Two arrays are returned: a 2D array of features and a 1D array of class labels.
Step11: Let us convert the data into NumPy arrays.
Step12: Quiz Question
Step13: Estimating conditional probability with link function
Step14: Aside. How the link function works with matrix algebra
Step15: Compute derivative of log likelihood with respect to a single coefficient
Step16: In the main lecture, our focus was on the likelihood. In the advanced optional video, however, we introduced a transformation of this likelihood---called the log likelihood---that simplifies the derivation of the gradient and is more numerically stable. Due to its numerical stability, we will use the log likelihood instead of the likelihood to assess the algorithm.
Step17: Checkpoint
Step18: Taking gradient steps
Step19: Now, let us run the logistic regression solver.
Step20: Quiz question
Step21: Now, complete the following code block for Step 2 to compute the class predictions using the scores obtained above
Step22: Quiz question
Step23: Measuring accuracy
Step24: Quiz question
Step25: Now, word_coefficient_tuples contains a sorted list of (word, coefficient_value) tuples. The first 10 elements in this list correspond to the words that are most positive.
Step26: Quiz question
|
5,636
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
# from scipy.stats import norm
import scipy.stats as ss
import elfi
import logging
import matplotlib
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde
%matplotlib inline
# Set an arbitrary global seed to keep the randomly generated quantities the same
seed = 10
np.random.seed(seed)
def gaussian_mixture(theta, batch_size=1, random_state=None):
sigma1 = 1
sigma2 = np.sqrt(0.01)
sigmas = np.array((sigma1, sigma2))
mixture_prob = 0.5
random_state = random_state or np.random
scale_array = random_state.choice(sigmas,
size=batch_size,
replace=True,
p=np.array((mixture_prob, 1-mixture_prob)))
observation = ss.norm.rvs(loc=theta,
scale=scale_array,
size=batch_size,
random_state=random_state)
return observation
yobs = 0
model = elfi.ElfiModel()
elfi.Prior('uniform', -10, 20, name='theta', model=model)
elfi.Simulator(gaussian_mixture, model['theta'], observed=yobs, name='GM')
elfi.Distance('euclidean', model['GM'], name='d');
smc = elfi.SMC(model['d'], batch_size=500, seed=1)
thresholds = [1., 0.5013, 0.2519, 0.1272, 0.0648, 0.0337, 0.0181, 0.0102, 0.0064, 0.0025]
smc_samples = smc.sample(1000, thresholds=thresholds)
adaptive_smc = elfi.AdaptiveThresholdSMC(model['d'], batch_size=500, seed=2, q_threshold=0.995)
adaptive_smc_samples = adaptive_smc.sample(1000, max_iter=10)
def gaussian_mixture_density(theta, sigma_1=1, sigma_2=0.1):
y = 0.5 * ss.norm.pdf(theta, loc=0, scale=sigma_1) + 0.5 * ss.norm.pdf(theta, loc=0, scale=sigma_2)
return y
print(smc_samples)
print(adaptive_smc_samples)
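# Simple numeric summaries of the posterior draws (plain numpy; samples_array is the same
# attribute used for the kernel density estimates below).
print("SMC posterior mean/std: %.3f / %.3f"
      % (smc_samples.samples_array[:, 0].mean(), smc_samples.samples_array[:, 0].std()))
print("Adaptive SMC posterior mean/std: %.3f / %.3f"
      % (adaptive_smc_samples.samples_array[:, 0].mean(), adaptive_smc_samples.samples_array[:, 0].std()))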
smc_posteriorpdf = gaussian_kde(smc_samples.samples_array[:,0])
adaptive_smc_posteriorpdf = gaussian_kde(adaptive_smc_samples.samples_array[:,0])
reference_posteriorpdf = gaussian_mixture_density
xs = np.linspace(-3,3,200)
smc_posteriorpdf.covariance_factor = lambda : .25
smc_posteriorpdf._compute_covariance()
adaptive_smc_posteriorpdf.covariance_factor = lambda : .25
adaptive_smc_posteriorpdf._compute_covariance()
plt.figure(figsize=(16,10))
plt.plot(xs,smc_posteriorpdf(xs))
plt.plot(xs,adaptive_smc_posteriorpdf(xs))
plt.plot(xs,reference_posteriorpdf(xs))
plt.legend(('abc-smc', 'adaptive abc-smc', 'reference'));
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We reproduce Example 1 from [1] as a test case for AdaptiveThresholdSMC.
Step2: Adaptive threshold selection ABC (elfi.AdaptiveThresholdSMC) can be used in a similar fashion to elfi.SMC. One does not need to provide a list of thresholds; instead, the user can set a density-ratio-based termination condition (q_threshold) and a limit on the number of iterations (max_iter).
Step3: We visually compare the approximate posterior with the true posterior, which is available in this case.
Step4: We compute kernel density estimates of the posteriors based on the approximate posterior samples and visualise them in a density plot.
|
5,637
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
from tensorprob import Model, Parameter, Normal, Exponential, Mix2, ScipyLBFGSBOptimizer
# We use the matplotlib_hep library to easily create high energy physics plots
from matplotlib_hep import histpoints
plt.rcParams['figure.figsize'] = (10.0, 6.0)
with Model() as model:
mu = Parameter()
sigma = Parameter(lower=0)
lamb = Parameter(lower=0)
f = Parameter(lower=0.0, upper=1)
X = Mix2(f,
Normal(mu, sigma, lower=0, upper=50),
Exponential(lamb, lower=0, upper=50),
lower=0,
upper=50,
)
model.observed(X)
model.initialize({
mu: 25,
sigma: 2,
lamb: 0.03,
f: 0.2
})
np.random.seed(0)
exp_data = np.random.exponential(40, 10000)
exp_data = exp_data[(0 < exp_data) & (exp_data < 50)]
norm_data = np.random.normal(20, 2, 500)
data = np.concatenate([exp_data, norm_data])
result = model.fit(data)
print(result)
xs = np.linspace(0, 50, 200)
x, N, w = histpoints(data, bins=60, color='k', ms=3, capsize=0)
plt.plot(xs, w * model.pdf(xs), 'b-', lw=2)
plt.xlabel('mass')
plt.ylabel('candidates')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We model our distribution as a mixture of a normal distribution (parameters mu and sigma and mixture weight f) and an exponential distribution (parameter lamb and mixture weight 1 -f).
Step2: We declare X as an observed variable and set suitable initial parameter values
Step3: The dataset is generated with numpy
Step4: Now we perform a fit of the model using the default optimizer
Step5: The fit converged successfully and we can visualize the distribution
|
5,638
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import Image
from IPython.display import display
assert True # leave this to grade the import statements
u = 'http://static1.squarespace.com/static/5201ab12e4b0ce82ad427ee2/53022aebe4b0649958de7de1/53022aeee4b0649958de7dec/1392650992037/tumblr_mz73s53TM11qf71bqo3_1280.jpg?format=1000w'
i = Image(url=u, width=600, height=600)
i
assert True # leave this to grade the image display
from IPython.display import HTML
%%html
<table>
<caption>Quarks</caption>
<tr>
<th>Name</th>
<th>Symbol</th>
<th>Antiparticle</th>
<th>Charge (e)</th>
<th>Mass $$({MeV}/{c^2})$$</th>
</tr>
<tr>
<td>up</td>
<td>u</td>
<td>$$\bar{u}$$</td>
<td>$$+2/3$$</td>
<td>$$1.5-3.33$$</td>
</tr>
<tr>
<td>down</td>
<td>d</td>
<td>$$\bar{d}$$</td>
<td>$$-1/3$$</td>
<td>$$3.5-6.0$$</td>
</tr>
<tr>
<td>charm</td>
<td>c</td>
<td>$$\bar{c}$$</td>
<td>$$+2/3$$</td>
<td>$$1,160-1,340$$</td>
</tr>
<tr>
<td>strange</td>
<td>s</td>
<td>$$\bar{s}$$</td>
<td>$$-1/3$$</td>
<td>$$70-130$$</td>
</tr>
<tr>
<td>top</td>
<td>t</td>
<td>$$\bar{t}$$</td>
<td>$$+2/3$$</td>
<td>$$169,100-173,300$$</td>
</tr>
<tr>
<td>bottom</td>
<td>b</td>
<td>$$\bar{b}$$</td>
<td>$$-1/3$$</td>
<td>$$4,130-4,370$$</td>
</tr>
</table>
assert True # leave this here to grade the quark table
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Basic rich display
Step2: Use the HTML object to display HTML in the notebook that reproduces the table of Quarks on this page. This will require you to learn about how to create HTML tables and then pass that to the HTML object for display. Don't worry about styling and formatting the table, but you should use LaTeX where appropriate.
|
5,639
|
<ASSISTANT_TASK:>
Python Code:
#!pip install pint
#!pip install git+https://github.com/hgrecco/pint-pandas#egg=Pint-Pandas-0.1.dev0
import pint
units = pint.UnitRegistry()
pint.__version__
thickness = 68 * units.m
thickness
thickness.magnitude, thickness.units, thickness.dimensionality
f'{thickness**2}'
print(f'{thickness**2:P}')
print(f'{thickness**2:~P}')
print(f'{thickness**2:~L}')
print(f'{thickness**2:~H}')
thickness * 2
thickness + 10
# This is meant to produce an error...
area = 60 * units.km**2
n2g = 0.5 * units.dimensionless # Optional dimensionless 'units'...
phi = 0.2 # ... but you can just do this.
sat = 0.7
volume = area * thickness * n2g * phi * sat
volume
volume.to_compact()
volume.to('m**3') # Or use m^3
volume.to_compact('L')
volume.to_compact('oil_barrel')
f"The volume is {volume.to_compact('oil_barrel'):~0.2fL}"
units.define('barrel_of_oil_equivalent = 6000 ft**3 = boe')
volume.to('boe')
volume.to_compact('boe')
units('2.34 km')
units('2.34*10^3 km')
units('-12,000.ft')
units('3.2 m')
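# The registry's Quantity constructor builds the same object (equivalent to units('3.2 m') above).
units.Quantity(3.2, 'm')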
from uncertainties import ufloat
area = ufloat(64, 5) * units.km**2 # 64 +/- 5 km**2
(thickness * area).to('Goil_bbl')
import numpy as np
vp = np.array([2300, 2400, 2550, 3200]) * units.m/units.s
rho = np.array([2400, 2550, 2500, 2650]) * units.kg/units.m**3
z = vp * rho
z
print(z)
z.m
pint._HAS_PINTPANDAS
import pandas as pd
df = pd.DataFrame({
"Vp": pd.Series(vp.m, dtype="pint[m/s]"),
"Vs": pd.Series([1200, 1200, 1250, 1300], dtype="pint[m/s]"),
"rho": pd.Series(rho.m, dtype="pint[kg/m**3]"),
})
df
import bruges as bg
df['E'] = bg.rockphysics.moduli.youngs(df.Vp, df.Vs, df.rho)
df.E
df.loc[0, 'E'].to('GPa')
df.E.apply(lambda x: x.to('GPa'))
class UnitDataFrame(pd.DataFrame):
def _repr_html_(self):
"""New repr for Jupyter Notebook."""
html = super()._repr_html_() # Get the old repr string.
units = [''] + [f"{dtype.units:~H}" for dtype in self.dtypes]
style = "text-align: right; color: gray;"
new = f'<tr style="{style}"><th>' + "</th><th>".join(units) + "</th></tr></thead>"
return html.replace('</thead>', new)
df = UnitDataFrame({
"Vp": pd.Series(vp.m, dtype="pint[m/s]"),
"Vs": pd.Series([1200, 1200, 1250, 1300], dtype="pint[m/s]"),
"rho": pd.Series(rho.m, dtype="pint[kg/m**3]"),
})
df
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: To use it in its typical mode, we import the library then instantiate a UnitRegistry object. The registry contains lots of physical units.
Step2: Attaching and printing units
Step3: In a Jupyter Notebook you see a 'pretty' version of the quantity. In the interpreter, you'll see something slightly different (the so-called repr of the class)
Step4: You can also use the following abbreviations for magnitude and units
Step5: But pint extends the string formatting options to include special options for Quantity objects. The most useful option is P for 'pretty', but there's also L for $\LaTeX$ and H for HTML. Adding a ~ (tilde) before the option tells pint to use unit abbreviations instead of the full names
Step6: Doing maths
Step7: Note that you must use units when you need them
Step8: Let's try defining an area of $60\ \mathrm{km}^2$, then multiplying it by our thickness. To make it more like a hydrocarbon volume, I'll also multiply by net:gross, porosity, and saturation.
Step9: We can convert to something more compact
Step10: Or be completely explicit about the units and multipliers we want
Step11: The to_compact() method can also take units, if you want to be more explicit; it applies multipliers automatically
Step12: Oil barrels are already defined (careful, they are abbreviated as oil_bbl not bbl — that's a 31.5 gallon barrel, about the same as a beer barrel).
Step13: If we use string formatting (see above), we can get pretty specific
Step14: Defining new units
Step15: Let's suspend reality for a moment and imagine we now want to compute our gross rock volume in BOEs...
Step16: Getting units from strings
Step17: This looks useful! Let's try something less nicely formatted.
Step18: You can also use the Quantity constructor, like this
Step19: pint with numpy
Step20: For some reason, this sometimes doesn't render properly. But we can always do this
Step21: As expected, the magnitude of this quantity is just a NumPy array
Step22: pint with pandas
Step23: To use this integration, we pass special pint data types to the pd.Series() object
Step24: We can't convert the units of a whole Series but we can do one
Step25: So to convert a whole series, we can use Series.apply()
Step27: Bonus
|
5,640
|
<ASSISTANT_TASK:>
Python Code:
from awips.dataaccess import DataAccessLayer
import matplotlib.tri as mtri
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1.inset_locator import inset_axes
from math import exp, log
import numpy as np
from metpy.calc import get_wind_components, lcl, dry_lapse, parcel_profile, dewpoint
from metpy.calc import wind_speed, wind_direction, thermo, vapor_pressure
from metpy.plots import SkewT, Hodograph
from metpy.units import units, concatenate
DataAccessLayer.changeEDEXHost("edex-cloud.unidata.ucar.edu")
request = DataAccessLayer.newDataRequest()
request.setDatatype("modelsounding")
forecastModel = "GFS"
request.addIdentifier("reportType", forecastModel)
request.setParameters("pressure","temperature","specHum","uComp","vComp","omega","cldCvr")
locations = DataAccessLayer.getAvailableLocationNames(request)
locations.sort()
list(locations)
request.setLocationNames("KFRM")
cycles = DataAccessLayer.getAvailableTimes(request, True)
times = DataAccessLayer.getAvailableTimes(request)
try:
fcstRun = DataAccessLayer.getForecastRun(cycles[-1], times)
list(fcstRun)
response = DataAccessLayer.getGeometryData(request,[fcstRun[0]])
except:
print('No times available')
exit
tmp,prs,sh = np.array([]),np.array([]),np.array([])
uc,vc,om,cld = np.array([]),np.array([]),np.array([]),np.array([])
for ob in response:
tmp = np.append(tmp,ob.getNumber("temperature"))
prs = np.append(prs,ob.getNumber("pressure"))
sh = np.append(sh,ob.getNumber("specHum"))
uc = np.append(uc,ob.getNumber("uComp"))
vc = np.append(vc,ob.getNumber("vComp"))
om = np.append(om,ob.getNumber("omega"))
cld = np.append(cld,ob.getNumber("cldCvr"))
print("parms = " + str(ob.getParameters()))
print("site = " + str(ob.getLocationName()))
print("geom = " + str(ob.getGeometry()))
print("datetime = " + str(ob.getDataTime()))
print("reftime = " + str(ob.getDataTime().getRefTime()))
print("fcstHour = " + str(ob.getDataTime().getFcstTime()))
print("period = " + str(ob.getDataTime().getValidPeriod()))
t = (tmp-273.15) * units.degC
p = prs/100 * units.mbar
u,v = uc*1.94384,vc*1.94384 # m/s to knots
spd = wind_speed(u, v) * units.knots
dir = wind_direction(u, v) * units.deg
rmix = (sh/(1-sh)) *1000 * units('g/kg')
e = vapor_pressure(p, rmix)
td = dewpoint(e)
td2 = dewpoint(vapor_pressure(p, sh))
# new arrays
ntmp = tmp
# where p=pressure(pa), T=temp(C), T0=reference temp(273.16)
rh = 0.263*prs*sh / (np.exp(17.67*ntmp/(ntmp+273.15-29.65)))
vaps = 6.112 * np.exp((17.67 * ntmp) / (ntmp + 243.5))
vapr = rh * vaps / 100
dwpc = np.array(243.5 * (np.log(6.112) - np.log(vapr)) / (np.log(vapr) - np.log(6.112) - 17.67)) * units.degC
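# Quick side-by-side look at the three dewpoint estimates for the first few levels
# (illustrative only; all three are also plotted on the skew-T below).
print(td[:3])
print(td2[:3])
print(dwpc[:3])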
%matplotlib inline
plt.rcParams['figure.figsize'] = (12, 14)
# Create a skewT plot
skew = SkewT()
# Plot the data
skew.plot(p, t, 'r', linewidth=2)
skew.plot(p, td, 'b', linewidth=2)
skew.plot(p, td2, 'y')
skew.plot(p, dwpc, 'g', linewidth=2)
skew.plot_barbs(p, u, v)
skew.ax.set_ylim(1000, 100)
skew.ax.set_xlim(-40, 60)
plt.title( forecastModel + " " \
+ ob.getLocationName().decode('UTF-8') \
+ "("+ str(ob.getGeometry()) + ")" \
+ ", " + str(ob.getDataTime())
)
# An example of a slanted line at constant T -- in this case the 0 isotherm
l = skew.ax.axvline(0, color='c', linestyle='--', linewidth=2)
# Draw hodograph
ax_hod = inset_axes(skew.ax, '40%', '40%', loc=2)
h = Hodograph(ax_hod, component_range=wind_speed(u, v).max())
h.add_grid(increment=20)
h.plot_colormapped(u, v, spd)
# Show the plot
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Available Locations
Step2: Model Sounding Parameters
Step3: Calculating Dewpoint from Specific Humidity
Step4: 2) metpy calculated assuming spec. humidity = mixing ratio
Step5: 3) NCEP AWIPS soundingrequest plugin
Step6: MetPy SkewT and Hodograph
|
5,641
|
<ASSISTANT_TASK:>
Python Code:
# Package imports
import matplotlib.pyplot as plt
import numpy as np
import sklearn
import sklearn.datasets
import sklearn.linear_model
import matplotlib
# Display plots inline and change default figure size
%matplotlib inline
matplotlib.rcParams['figure.figsize'] = (10.0, 8.0)
# Generate a dataset and plot it
np.random.seed(0)
X, y = sklearn.datasets.make_moons(200, noise=0.20)
plt.scatter(X[:,0], X[:,1], s=40, c=y, cmap=plt.cm.Spectral)
# Train the logistic regression classifier
clf = sklearn.linear_model.LogisticRegressionCV()
clf.fit(X, y)
# Helper function to plot a decision boundary.
# If you don't fully understand this function don't worry, it just generates the contour plot below.
def plot_decision_boundary(pred_func):
# Set min and max values and give it some padding
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
h = 0.01
# Generate a grid of points with distance h between them
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
# Predict the function value for the whole grid
Z = pred_func(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
# Plot the contour and training examples
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral)
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Spectral)
# Plot the decision boundary
plot_decision_boundary(lambda x: clf.predict(x))
plt.title("Logistic Regression")
num_examples = len(X) # training set size
nn_input_dim = 2 # input layer dimensionality
nn_output_dim = 2 # output layer dimensionality
# Gradient descent parameters (I picked these by hand)
epsilon = 0.01 # learning rate for gradient descent
reg_lambda = 0.01 # regularization strength
# Helper function to evaluate the total loss on the dataset
def calculate_loss(model):
W1, b1, W2, b2 = model['W1'], model['b1'], model['W2'], model['b2']
# Forward propagation to calculate our predictions
z1 = X.dot(W1) + b1
a1 = np.tanh(z1)
z2 = a1.dot(W2) + b2
exp_scores = np.exp(z2)
probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True)
# Calculating the loss
correct_logprobs = -np.log(probs[range(num_examples), y])
data_loss = np.sum(correct_logprobs)
# Add regulatization term to loss (optional)
data_loss += reg_lambda/2 * (np.sum(np.square(W1)) + np.sum(np.square(W2)))
return 1./num_examples * data_loss
# Helper function to predict an output (0 or 1)
def predict(model, x):
W1, b1, W2, b2 = model['W1'], model['b1'], model['W2'], model['b2']
# Forward propagation
z1 = x.dot(W1) + b1
a1 = np.tanh(z1)
z2 = a1.dot(W2) + b2
exp_scores = np.exp(z2)
probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True)
return np.argmax(probs, axis=1)
# This function learns parameters for the neural network and returns the model.
# - nn_hdim: Number of nodes in the hidden layer
# - num_passes: Number of passes through the training data for gradient descent
# - print_loss: If True, print the loss every 1000 iterations
def build_model(nn_hdim, num_passes=20000, print_loss=False):
# Initialize the parameters to random values. We need to learn these.
np.random.seed(0)
W1 = np.random.randn(nn_input_dim, nn_hdim) / np.sqrt(nn_input_dim)
b1 = np.zeros((1, nn_hdim))
W2 = np.random.randn(nn_hdim, nn_output_dim) / np.sqrt(nn_hdim)
b2 = np.zeros((1, nn_output_dim))
# This is what we return at the end
model = {}
# Gradient descent. For each batch...
for i in xrange(0, num_passes):
# Forward propagation
z1 = X.dot(W1) + b1
a1 = np.tanh(z1)
z2 = a1.dot(W2) + b2
exp_scores = np.exp(z2)
probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True)
# Backpropagation
delta3 = probs
delta3[range(num_examples), y] -= 1
dW2 = (a1.T).dot(delta3)
db2 = np.sum(delta3, axis=0, keepdims=True)
delta2 = delta3.dot(W2.T) * (1 - np.power(a1, 2))
dW1 = np.dot(X.T, delta2)
db1 = np.sum(delta2, axis=0)
# Add regularization terms (b1 and b2 don't have regularization terms)
dW2 += reg_lambda * W2
dW1 += reg_lambda * W1
# Gradient descent parameter update
W1 += -epsilon * dW1
b1 += -epsilon * db1
W2 += -epsilon * dW2
b2 += -epsilon * db2
# Assign new parameters to the model
model = { 'W1': W1, 'b1': b1, 'W2': W2, 'b2': b2}
# Optionally print the loss.
# This is expensive because it uses the whole dataset, so we don't want to do it too often.
if print_loss and i % 1000 == 0:
print "Loss after iteration %i: %f" %(i, calculate_loss(model))
return model
# Build a model with a 3-dimensional hidden layer
model = build_model(3, print_loss=True)
# Plot the decision boundary
plot_decision_boundary(lambda x: predict(model, x))
plt.title("Decision Boundary for hidden layer size 3")
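# Illustrative extra check (not in the original tutorial): training-set accuracy of the size-3 model.
print 'Training accuracy: %.3f' % np.mean(predict(model, X) == y)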
plt.figure(figsize=(16, 32))
hidden_layer_dimensions = [1, 2, 3, 4, 5, 20, 50]
for i, nn_hdim in enumerate(hidden_layer_dimensions):
plt.subplot(5, 2, i+1)
plt.title('Hidden Layer size %d' % nn_hdim)
model = build_model(nn_hdim)
plot_decision_boundary(lambda x: predict(model, x))
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Generating a dataset
Step2: The dataset we generated has two classes, plotted as red and blue points. You can think of the blue dots as male patients and the red dots as female patients, with the x- and y- axis being medical measurements.
Step3: The graph shows the decision boundary learned by our Logistic Regression classifier. It separates the data as well as it can using a straight line, but it's unable to capture the "moon shape" of our data.
Step4: First let's implement the loss function we defined above. We use this to evaluate how well our model is doing
Step5: We also implement a helper function to calculate the output of the network. It does forward propagation as defined above and returns the class with the highest probability.
Step6: Finally, here comes the function to train our Neural Network. It implements batch gradient descent using the backpropagation derivates we found above.
Step7: A network with a hidden layer of size 3
Step8: Yay! This looks pretty good. Our neural network was able to find a decision boundary that successfully separates the classes.
|
5,642
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
logging.getLogger("tensorflow").setLevel(logging.DEBUG)
import tensorflow as tf
from tensorflow import keras
import numpy as np
import pathlib
# Load MNIST dataset
mnist = keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# Normalize the input image so that each pixel value is between 0 to 1.
train_images = train_images / 255.0
test_images = test_images / 255.0
# Define the model architecture
model = keras.Sequential([
keras.layers.InputLayer(input_shape=(28, 28)),
keras.layers.Reshape(target_shape=(28, 28, 1)),
keras.layers.Conv2D(filters=12, kernel_size=(3, 3), activation=tf.nn.relu),
keras.layers.MaxPooling2D(pool_size=(2, 2)),
keras.layers.Flatten(),
keras.layers.Dense(10)
])
# Train the digit classification model
model.compile(optimizer='adam',
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
model.fit(
train_images,
train_labels,
epochs=1,
validation_data=(test_images, test_labels)
)
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
tflite_models_dir = pathlib.Path("/tmp/mnist_tflite_models/")
tflite_models_dir.mkdir(exist_ok=True, parents=True)
tflite_model_file = tflite_models_dir/"mnist_model.tflite"
tflite_model_file.write_bytes(tflite_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
tflite_fp16_model = converter.convert()
tflite_model_fp16_file = tflite_models_dir/"mnist_model_quant_f16.tflite"
tflite_model_fp16_file.write_bytes(tflite_fp16_model)
!ls -lh {tflite_models_dir}
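# The same size comparison, done programmatically with pathlib (no TFLite API involved).
print("float32 model size: %d bytes" % tflite_model_file.stat().st_size)
print("float16 model size: %d bytes" % tflite_model_fp16_file.stat().st_size)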
interpreter = tf.lite.Interpreter(model_path=str(tflite_model_file))
interpreter.allocate_tensors()
interpreter_fp16 = tf.lite.Interpreter(model_path=str(tflite_model_fp16_file))
interpreter_fp16.allocate_tensors()
test_image = np.expand_dims(test_images[0], axis=0).astype(np.float32)
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
interpreter.set_tensor(input_index, test_image)
interpreter.invoke()
predictions = interpreter.get_tensor(output_index)
import matplotlib.pylab as plt
plt.imshow(test_images[0])
template = "True:{true}, predicted:{predict}"
_ = plt.title(template.format(true= str(test_labels[0]),
predict=str(np.argmax(predictions[0]))))
plt.grid(False)
test_image = np.expand_dims(test_images[0], axis=0).astype(np.float32)
input_index = interpreter_fp16.get_input_details()[0]["index"]
output_index = interpreter_fp16.get_output_details()[0]["index"]
interpreter_fp16.set_tensor(input_index, test_image)
interpreter_fp16.invoke()
predictions = interpreter_fp16.get_tensor(output_index)
plt.imshow(test_images[0])
template = "True:{true}, predicted:{predict}"
_ = plt.title(template.format(true= str(test_labels[0]),
predict=str(np.argmax(predictions[0]))))
plt.grid(False)
# A helper function to evaluate the TF Lite model using "test" dataset.
def evaluate_model(interpreter):
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
# Run predictions on every image in the "test" dataset.
prediction_digits = []
for test_image in test_images:
# Pre-processing: add batch dimension and convert to float32 to match with
# the model's input data format.
test_image = np.expand_dims(test_image, axis=0).astype(np.float32)
interpreter.set_tensor(input_index, test_image)
# Run inference.
interpreter.invoke()
# Post-processing: remove batch dimension and find the digit with highest
# probability.
output = interpreter.tensor(output_index)
digit = np.argmax(output()[0])
prediction_digits.append(digit)
# Compare prediction results with ground truth labels to calculate accuracy.
accurate_count = 0
for index in range(len(prediction_digits)):
if prediction_digits[index] == test_labels[index]:
accurate_count += 1
accuracy = accurate_count * 1.0 / len(prediction_digits)
return accuracy
print(evaluate_model(interpreter))
# NOTE: Colab runs on server CPUs. At the time of writing this, TensorFlow Lite
# doesn't have super optimized server CPU kernels. For this reason this may be
# slower than the above float interpreter. But for mobile CPUs, considerable
# speedup can be observed.
print(evaluate_model(interpreter_fp16))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Post-training float16 quantization
Step2: Train and export the model
Step3: For the example, you trained the model for just a single epoch, so it only trains to ~96% accuracy.
Step4: Write it out to a .tflite file
Step5: To instead quantize the model to float16 on export, first set the optimizations flag to use default optimizations. Then specify that float16 is the supported type on the target platform
Step6: Finally, convert the model like usual. Note, by default the converted model will still use float input and outputs for invocation convenience.
Step7: Note how the resulting file is approximately 1/2 the size.
Step8: Run the TensorFlow Lite models
Step9: Test the models on one image
Step10: Evaluate the models
Step11: Repeat the evaluation on the float16 quantized model to obtain
|
5,643
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import sqlite3
import matplotlib
%matplotlib inline
matplotlib.style.use('ggplot')
cnx = sqlite3.connect('database.sqlite')
rank_vs_ncomps = pd.read_sql_query("""
SELECT U.Ranking AS rank, COUNT(M.TeamId) as num_comps
FROM Users AS U
JOIN TeamMemberships AS M ON U.Id = M.UserId
GROUP BY U.Id, U.Ranking
""", cnx)
ax = rank_vs_ncomps.plot(x='rank', y='num_comps',
logx=True, logy= True,
kind='scatter')
ax.set_xlabel('log10(User Rank)', fontsize=14)
ax.set_ylabel('log10(# competitions entered)', fontsize = 14)
team_top3 = pd.read_sql_query("""
SELECT team_size, rank, COUNT(*) AS num_teams
FROM
(
SELECT T.Id AS team_id, T.Ranking AS rank, COUNT(M.UserId) AS team_size
FROM Teams AS T
JOIN TeamMemberships AS M ON T.Id = M.TeamId
WHERE T.Ranking IN (1, 2, 3)
GROUP BY T.Id, T.Ranking
)
GROUP BY team_size, rank
""", cnx)
team_top3['team_size'] = team_top3['team_size'].map(lambda x : x if (x<5) else 5)
team_top3 = team_top3.groupby(['team_size', 'rank'], as_index=False).sum()
ax = team_top3[team_top3['rank'] == 1].plot(kind='bar',
x='team_size',
y='num_teams',
title='Size of winning (1st place) teams')
ax.set_xticklabels(['1','2','3','4','>=5'], rotation='horizontal')
ax.set_xlabel('Team Size', fontsize=14)
ax.set_ylabel('Number of Teams', fontsize = 14)
users_sqlite = pd.read_sql_query("""
SELECT *
FROM Users
""", cnx)
users_sqlite.info()
users_csv = pd.read_csv('Users.csv', header=0)
users_csv.info()
import re # a regexp module
users_csv['MonYear'] = users_csv['RegisterDate'].map(lambda x : re.sub(r'([0-9]{4})-([0-9]{2})-([0-9]{2}).*',
r'\1\2', x))
# Get the count of registered users month-by-month
reg_by_monyear = users_csv.groupby('MonYear', as_index=False)['MonYear'].agg({'num' : 'count'})
# From MonYear, construct a column of datetime object called 'Time' - Set the day in the date with 01
reg_by_monyear['Time'] = reg_by_monyear['MonYear'].map(lambda x:
pd.to_datetime(x + '01', format='%Y%m%d'))
ax = reg_by_monyear.plot(x='Time', y='num',
title = 'Number of new registrations by time',
figsize=(8,4))
ax.set_xlabel('Month/Year of Registration', fontsize=14)
ax.set_ylabel('Number of registrants', fontsize = 14)
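# Housekeeping (optional extra, not in the original notebook): close the SQLite connection
# once the queries are done.
cnx.close()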
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Is there any relationship between user ranking and the number of competitions entered?
Step4: We have used the log-log scale for both the x- and y-axes so that we could see more clearly the standing of top-ranking individuals. From this plot, it looked like practice contributes to user standing.
Step6: So, most winning teams consisted of only one individual
Step7: Compare the previous dataframe with that constructed from the CSV file
Step8: Note that while users_csv indicated columns with missing values, users_sqlite did not. It looked like pd.read_sql_query fills missing values with an empty string (i.e., ""). The RegisterDate column, however, seems to be complete. In the meantime, let us proceed with our exploration.
Step9: Plot the number of new users vs. month/year of registration
|
5,644
|
<ASSISTANT_TASK:>
Python Code:
def firstn(n):
num = 0
while num < n:
yield num
num += 1
print(sum(firstn(1000000)))
y = (x*2 for x in [1,2,3,4,5])
print y.next()
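# Illustrative follow-up: the generator expression keeps its position, so the remaining
# doubled values can still be consumed after the first next() call.
print list(y) # -> [4, 6, 8, 10]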
import random
class randomwalker_iter:
def __init__(self):
self.last = 1
self.rand = random.random()
def __iter__(self):
return self
def next(self):
if self.rand < 0.1:
raise StopIteration
else:
while abs(self.rand - self.last) < 0.4:
self.rand = random.random()
self.last = self.rand
return self.rand
rw = randomwalker_iter()
for rw_instance in rw:
print rw_instance
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: use built-in generator functions, such as xrange()
Step2: To implement an iterator, we have to define a class with two specific functions
|
5,645
|
<ASSISTANT_TASK:>
Python Code:
# Execute this cell to load the notebook's style sheet, then ignore it
from IPython.core.display import HTML
css_file = '../style/custom.css'
HTML(open(css_file, "r").read())
# Import Libraries
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
# Define parameters
vp0 = 1. # velocity m/s
r = 2. # distance from source
tmax = 5. # length of seismogram (s)
nt = 3000 # number of time samples
dt = tmax/nt # time increment
ts = 0 # source time
# Acquisition geometry
xs=0 # coordinates of source
ys=0
zs=0
xr=r # coordinates of receiver
yr=0
zr=0
# Define time vector
time = np.arange(0,tmax,dt)
# Calculating Green's function in 1D
G1=np.zeros(nt) # initialization G with zeros
for i in range (nt):
if (((time[i]-ts)-abs(xr-xs)/vp0)>=0):
G1[i]=1./(2*vp0)
else:
G1[i]=0
# Plotting Green's function in 1D
plt.plot(time, G1)
plt.title("Green's function for hom. 1D acoustic medium" )
plt.xlabel("Time, s")
plt.ylabel("Amplitude")
plt.grid()
plt.show()
# Calculation of 2D Green's function
G2=np.zeros(nt) # initialization G with zeros
r=np.sqrt((xs-xr)**2+(ys-yr)**2)
for i in range (nt):
if (((time[i]-ts)-abs(r)/vp0)>0):
G2[i]=(1./(2*np.pi*vp0**2))*(1./np.sqrt((time[i]-ts)**2-(r**2/vp0**2)))
else:
G2[i]=0
# Plotting Green's function in 2D
plt.plot(time, G2)
plt.title("Green's function for hom. 2D acoustic medium" )
plt.xlabel("Time, s")
plt.ylabel("Amplitude")
plt.xlim((0, tmax))
plt.grid()
plt.show()
# Calculation of 3D Green's function
G3=np.zeros(nt) # initialization G with zeros
r=np.sqrt((xs-xr)**2+(ys-yr)**2+(zs-zr)**2) # defining offset
amp=1./(4*np.pi*(vp0**2)*r) # defining amplitudes
t_arr=ts+(r/vp0) # time arrival
i_arr=t_arr/dt
b=int(i_arr)
G3[b]= amp/dt
# Plotting Green's function in 3D
plt.plot(time, G3)
plt.title("Green's function for hom. 3D acoustic medium" )
plt.xlabel("Time, s")
plt.ylabel("Amplitude")
plt.xlim((0, tmax))
plt.grid()
plt.show()
# Defining source time function
f0 = 1 # Frequency (Hz)
p=1./f0 # period
t0 = p/dt # defining t0
sigma=4./p
# Initialization of source-time function
src=np.zeros(nt)
source=np.zeros(nt)
# Initialization of first derivative of gaussian
for it in range(nt):
t=(it-t0)*dt
src[it]=-2*sigma*t*np.exp(-(sigma*t)**2)
source[0:nt]=src
# Plotting of source time function
plt.plot(time, src)
plt.title('Source time function')
plt.xlabel('Time, s')
plt.ylabel('Amplitude')
plt.grid()
plt.show()
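# Reminder of how numpy's discrete convolution behaves, using made-up arrays (illustrative only);
# the seismogram exercises below apply the same idea, truncated to nt samples and scaled by dt.
demo = np.convolve(np.array([0., 1., 0.]), np.array([1., 2., 3.]))
print(demo) # -> [0. 1. 2. 3. 0.]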
# Computation of 1D seismogram
# Convolution of Green's function with the 1st derivative of a Gaussian
# COMPUTE YOUR SEISMOGRAM HERE!
#G1_seis=
# Plotting Green's function in 1D
plt.plot(time, G1)
plt.title("Green's function for hom. 1D acoustic medium" )
plt.xlabel("Time, s")
plt.ylabel("Amplitude")
plt.grid()
plt.show()
# Plotting convolved Green's function in 1D
# PLOT YOUR SEISMOGRAM HERE!
# plt.plot()
plt.title('After convolution')
plt.xlabel('Time, s')
plt.ylabel('Amplitude')
plt.xlim (0, tmax)
plt.grid()
plt.show()
# Convolution of Green's function with the 1st derivative of a Gaussian
# COMPUTE YOUR SEISMOGRAM HERE!
#G2_seis=
# Plotting Green's function in 2D
plt.plot(time, G2)
plt.title("Green's function in 2D" )
plt.xlabel("Time, s")
plt.ylabel("Amplitude")
plt.xlim((0, tmax))
plt.grid()
plt.show()
# Plotting convolved Green's function in 1D
# PLOT YOUR SEISMOGRAM HERE!
# plt.plot()
plt.title('After convolution')
plt.xlabel('Time, s')
plt.ylabel('Amplitude')
plt.xlim((0, tmax))
plt.grid()
# Convolution of Green's function with the 1st derivative of a Gaussian
# COMPUTE YOUR SEISMOGRAM HERE!
#G3_seis =
# Plotting Green's function in 3D
plt.plot(time, G3)
plt.title("Green's function in 3D" )
plt.xlabel("Time, s")
plt.ylabel("Amplitude")
plt.xlim((0, tmax))
plt.grid()
plt.show()
# Plotting convolved Green's function in 1D
# PLOT YOUR SEISMOGRAM HERE!
# plt.plot()
plt.title('After convolution')
plt.xlabel('Time, s')
plt.ylabel('Amplitude')
plt.xlim (0, tmax)
plt.grid()
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Computation of Green's functions and seismograms for the acoustic wave equation
Step2: 2D Green's function
Step3: 3D Green's function
Step4: Exercise
Step5: Exercise
|
5,646
|
<ASSISTANT_TASK:>
Python Code:
# notebook to simulate a jupyter working environment
# the python module gets imported here...
from ipysig.core import IPySig
import networkx as nx
ctrl = IPySig('./app') # note this should point to the root folder of the express app
import pickle
import os
pkl_g = os.path.abspath(os.path.join('.','IPySig_demo_data', 'stars_test_graph.pkl'))
stars_graph = pickle.load(open(pkl_g, "rb" ))
stars_graph.nodes(data=True)[:5]
stars_graph.edges(data=True)[:10]
ctrl.connect(stars_graph,'stars')
G = nx.random_graphs.newman_watts_strogatz_graph(80,15,0.2)
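# Quick look at the generated graph before connecting it (plain networkx, no IPySig involved).
G.number_of_nodes(), G.number_of_edges()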
ctrl.connect(G, 'newman_watts')
ctrl.kill_express_process()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: To start, we only need to import the IPySig object from the ipysig.core package. This is a singleton controller class that manages our sigma.js visualization connections. The root directory of the node.js app is passed in on instantiation.
Step2: Let's make a network graph from the stars dataset. This was a custom search from the NASA ADS api and contains relationships between authors and journals for the keyword search "stars".
Step3: The stars_graph was saved earlier and loaded via pickle for convenience, but nx graphs can obviously be created on the fly, loaded from a database or csv or come into the notebook from any other data source.
Step4: Connecting the graph
Step5: Quite a bit of functionality for our IPySig controller and node application is still under development.
|
5,647
|
<ASSISTANT_TASK:>
Python Code:
import graphlab
sales = graphlab.SFrame('kc_house_data.gl/')
from math import log, sqrt
sales['sqft_living_sqrt'] = sales['sqft_living'].apply(sqrt)
sales['sqft_lot_sqrt'] = sales['sqft_lot'].apply(sqrt)
sales['bedrooms_square'] = sales['bedrooms']*sales['bedrooms']
# In the dataset, 'floors' was defined with type string,
# so we'll convert them to int, before creating a new feature.
sales['floors'] = sales['floors'].astype(int)
sales['floors_square'] = sales['floors']*sales['floors']
all_features = ['bedrooms', 'bedrooms_square',
'bathrooms',
'sqft_living', 'sqft_living_sqrt',
'sqft_lot', 'sqft_lot_sqrt',
'floors', 'floors_square',
'waterfront', 'view', 'condition', 'grade',
'sqft_above',
'sqft_basement',
'yr_built', 'yr_renovated']
model_all = graphlab.linear_regression.create(sales, target='price', features=all_features,
validation_set=None,
l2_penalty=0., l1_penalty=1e10)
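# One possible way to inspect which features received non-zero weights (assumes the standard
# GraphLab Create 'coefficients' SFrame exposed by linear_regression models).
coefficients_all = model_all['coefficients']
coefficients_all[coefficients_all['value'] != 0.0].print_rows(num_rows=20)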
(training_and_validation, testing) = sales.random_split(.9,seed=1) # initial train/test split
(training, validation) = training_and_validation.random_split(0.5, seed=1) # split training into train and validate
max_nonzeros = 7
l1_penalty_values = np.logspace(8, 10, num=20)
l1_penalty_min =
l1_penalty_max =
l1_penalty_values = np.linspace(l1_penalty_min,l1_penalty_max,20)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load in house sales data
Step2: Create new features
Step3: Squaring bedrooms will increase the separation between not many bedrooms (e.g. 1) and lots of bedrooms (e.g. 4) since 1^2 = 1 but 4^2 = 16. Consequently this variable will mostly affect houses with many bedrooms.
Step4: Applying L1 penalty requires adding an extra parameter (l1_penalty) to the linear regression call in GraphLab Create. (Other tools may have separate implementations of LASSO.) Note that it's important to set l2_penalty=0 to ensure we don't introduce an additional L2 penalty.
Step5: Find what features had non-zero weight.
Step6: Next, we write a loop that does the following
Step7: Exploring the larger range of values to find a narrow range with the desired sparsity
Step8: Now, implement a loop that searches through this space of possible l1_penalty values
Step9: QUIZ QUESTIONS
|
5,648
|
<ASSISTANT_TASK:>
Python Code:
type(True)
type(False)
True == True
True == False
False == False
5 == 5
5 == 7
5 != 7
5 > 3
-1 < 0
1 > 0
3.14 > 2.72
3.14 <= 2.72
'Hello!' != 'Hello!'
'Hi!' != 'Hello!'
[1, 2, 3] == [1, 2, 3]
[1, 2, 3] == [4, 5]
x = 5 == 7
y = 3 != 0
print(x, y)
z = x or y # Either x or y is True? Yes!
print(z)
z = x and y # Both x and y are True? No!
print(z)
myvar = 5 # Points myvar to object 5
myvar == 3 # Check if myvar is equal to 3
x = True # Here x points to a bool
y = "True" # Here y points to a str
# z = true # This is an error!!! Here z is undefined.
bool(True) # Duh!
bool(False) # Duh!
bool(0)
bool(5)
bool(-15)
bool(0.0)
bool(3.14)
bool('') # Is there anything in this string? No!
bool('Hello') # Is there anything in this string? Yes!
bool([]) # Is there anything in this list? No!
bool([1, 2, 3]) # Is there anything in this list? Yes!
bool(print) # Even a function can be converted to a boolean.
# But why would you do that?
# Well, Python gives you the power!
# You just have to learn how to use it! ;-)
# help(bool)
type(42)
b = 42
print(b)
print(dir(int))
print('+\t', 29 + 3)
print('-\t', 29 - 3)
print('*\t', 29 * 3)
print('/\t', 29 / 3)
print('%\t', 29 % 3)
print('//\t', 29 // 3)
print('**\t', 29**3)
print('abs()\t', abs(-1))
print('==\t', 1 == 2)
print('!=\t', 1 != 2)
print('<\t', 1 < 2)
print('>\t', 1 > 2)
print('<=\t', 1 <= 2)
print('>=\t', 1 >= 2)
print(int(42)) # __int__ int to int (Duh!)
print(bool(42)) # __bool__ int to bool
print(int(True)) # __int__ bool to int
print(int(3.14)) # __int__ float to int
print(float(42)) # __float__ int to float
print(str(42)) # __str__ int to str
# print(int("Hi!")) # ERROR!!! str to int is not supported!
print(42) # __str__ is implicitly called by print()
i = 42
print(i.real, i.imag)
print(i.numerator, i.denominator)
i = 2**63 - 1 # On 64-bit systems, this is the maximum integer value allowed in most programming languages.
print(i) # This limitation is imposed by the hardware.
i.bit_length() # Number of bits necessary to represent it in binary.
# The extra bit is for the sign, so 1+63 == 64-bits.
i = 111**222 # In Python integer numbers are implemented in software.
print(i) # So, they have virtually infinite precision.
i.bit_length() # That's why it's possible to have very very very large ints. The only
# limitation is given by how much RAM memory you have installed on your machine.
# help(int)
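# A couple of extra int tricks (illustrative):
print(int('101', 2), int('ff', 16)) # parse strings in other bases -> 5 255
print(divmod(29, 3)) # // and % in one call -> (9, 2)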
type(3.14)
b = 3.14
print(b)
print(dir(float))
print('+\t', 10.71 + 3)
print('-\t', 10.71 - 3)
print('*\t', 10.71 * 3)
print('/\t', 10.71 / 3)
print('%\t', 10.71 % 3)
print('//\t', 10.71 // 3)
print('**\t', 10.71 ** 3)
print('abs()\t', abs(-1.0))
print('==\t', 3.14 == 5)
print('!=\t', 3.14 != 5)
print('<\t', 3.14 < 5)
print('>\t', 3.14 > 5)
print('<=\t', 3.14 <= 5)
print('>=\t', 3.14 >= 5)
print(float(3.14)) # __float__ float to float (Duh!)
print(bool(3.14)) # __bool__ float to bool
print(float(True)) # __float__ bool to float
print(int(3.14)) # __int__ float to int
print(float(42)) # __float__ int to float
print(str(3.14)) # __str__ float to str
# print(float("Hi!")) # ERROR!!! str to float is not supported!
print(3.14) # __str__ is implicitly called by print()
x, y = 3.14, 5.0
print(x.real, x.imag)
print(x.is_integer()) # Is 3.14 == int(3.14)? No!
print(y.is_integer()) # Is 5.0 == int(5.0)? Yes!
print(x.as_integer_ratio())
print(y.as_integer_ratio())
# help(float)
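# The prompt also mentions the complex type (Step20); a minimal sketch of its basic attributes:
c = 3 + 4j
print(type(c))
print(c.real, c.imag) # real and imaginary parts
print(abs(c)) # magnitude: 5.0
print(c.conjugate()) # (3-4j)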
type('Hello')
s = 'Hello'
print(s)
print(dir(str))
print('==\t', 'Hi' == 'Hello')
print('!=\t', 'Hi' != 'Hello')
print('<\t', 'Hi' < 'Hello') # Humm... what's happening here?
print('>\t', 'Hi' > 'Hello') # Humm... what's happening here?
print('<=\t', 'Hi' <= 'Hello') # Humm... what's happening here?
print('>=\t', 'Hi' >= 'Hello') # Humm... what's happening here?
# String comparison uses lexicographical ordering: first the first two items
# are compared, and if they differ this determines the outcome of the comparison.
# If they are equal, the next two items are compared, and so on, until either
# sequence is exhausted.
print(ord('H'), ord('i'))
print(ord('H'), ord('e'), ord('l'), ord('l'), ord('o'))
help(ord)
ord('H') < ord('H') or ord('i') < ord('e') # A-Ha!!! Now we know what happened there!
ord('H') > ord('H') or ord('i') > ord('e')
ord('H') <= ord('H') and ord('i') <= ord('e')
ord('H') >= ord('H') and ord('i') >= ord('e')
print(len('Hi'), len('Hello'), len('abracadabra'))
print('==\t', len('Hi') == len('Hello'))
print('!=\t', len('Hi') != len('Hello'))
print('<\t', len('Hi') < len('Hello'))
print('>\t', len('Hi') > len('Hello'))
print('<=\t', len('Hi') <= len('Hello'))
print('>=\t', len('Hi') >= len('Hello'))
help(len)
x = 'Py'
y = 'thon'
z = 'is'
w = 'ok!'
print(x + y + z + w)
print(x + y + ' ' + z + ' ' + w)
x = 'Py'
y = 'thon ' # Note the white space
z = 'is ' # Note the white space
w = 'nice!'
print(x + y + z + w)
x = 'Py'
y = 'thon'
z = 'is'
w = 'awesome!'
print(' '.join([x + y, z, w]))
print(' -*- '.join([x + y, z, w]))
s = 'Python -*- is -*- awesome!'
'-*-' in s
'Python' in s
'Hello' in s
s in s # Duh!
'Ha ' * 5
'Ha ' * 5 + 'Ha!!!'
s = 'aWeSoMe!'
print(s.lower(), s.upper(), s.capitalize())
s = 'Python -*- is -*- awesome -*- and -*- I -*- love -*- it!'
print(s)
print(s.split(' '))
print(s.split('-*-'))
s.split('-*-', 1) # split by at most 1 time.
s.split('-*-', 3) # split by at most 3 times.
help(str.split)
print(s)
s.startswith('Py')
s.startswith('Python')
s.startswith('awesome')
s.endswith('!')
s.endswith('it!')
s.endswith('and')
S = 'Pithon'
# -^----
# 012345
# S[1] = 'y' # This is an ERROR! You can not change the content of a string!
# It will return a copy of S with all occurrences
# of substring old replaced by new.
ss = S.replace('i', 'y')
print(S)
print(ss)
S = 'Hi! How are you?'
ss = S.replace('Hi!', 'Hello!')
print(S)
print(ss)
S = 'tattarrattat' + ' # ' + 'abracadabra'
ss = S.replace('a', '-')
print(S)
print(ss)
S = 'tattarrattat' + ' # ' + 'abracadabra'
ss = S.replace('a', '-', 4)
print(S)
print(ss)
S = 'tattarrattat' + ' # ' + 'abracadabra'
ss = S
ss = ''.join(reversed(ss))
ss = ss.replace('a', '-', 5)
ss = ''.join(reversed(ss))
print(S)
print(ss)
# help(str)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Comparing things returns a bool
Step2: The result of a comparison can be stored in a variable and we can use it to do other things
Step3: Caution! Please!
Step4: Anything can be converted to a boolean
Step5: Please uncomment the line below to see the full documentation for the bool type.
Step6: The int type
Step7: What operations are supported by an int?
Step8: Those '__xxx__' methods are called Python's magic methods.
Step9: Comparison operators
Step10: Conversion to/from an int
Step11: Calling some attributes and methods
Step12: Please uncomment the line below to see the full documentation for the int type.
Step13: The float type
Step14: What operations are supported by a float?
Step15: Arithmetic operators
Step16: Comparison operators
Step17: Conversion to/from a float
Step18: Calling some attributes and methods
Step19: Please uncomment the line below to see the full documentation for the float type.
Step20: The complex type
Step22: What operations are supported by a str?
Step23: Comparison operators
Step24: Remember! String comparison uses lexicographical ordering. Lexicographical ordering for strings uses the Unicode code point number to order individual characters. However...
Step25: Concatenating strings
Step26: Check if a string contains a sub-string
Step27: Build a string of N identical sub-strings
Step28: Methods lower, upper and capitalize
Step29: Split a string into sub-strings
Step30: Check if a string starts or ends with a sub-string
Step31: Strings are immutable
Step32: Using the replace method
Step33: replace all occurrences
Step34: replace the first 4 occurrences
Step35: replace the last 5 occurrences
Step36: Please uncomment the line below to see the full documentation for the str type.
|
5,649
|
<ASSISTANT_TASK:>
Python Code:
!pip install --pre deepchem
import deepchem
deepchem.__version__
!pip install 'gym[atari]'
import deepchem as dc
import numpy as np
class PongEnv(dc.rl.GymEnvironment):
def __init__(self):
super(PongEnv, self).__init__('Pong-v0')
self._state_shape = (80, 80)
@property
def state(self):
# Crop everything outside the play area, reduce the image size,
# and convert it to black and white.
cropped = np.array(self._state)[34:194, :, :]
reduced = cropped[0:-1:2, 0:-1:2]
grayscale = np.sum(reduced, axis=2)
bw = np.zeros(grayscale.shape)
bw[grayscale != 233] = 1
return bw
def __deepcopy__(self, memo):
return PongEnv()
env = PongEnv()
import tensorflow as tf
from tensorflow.keras.layers import Input, Concatenate, Conv2D, Dense, Flatten, GRU, Reshape
class PongPolicy(dc.rl.Policy):
def __init__(self):
super(PongPolicy, self).__init__(['action_prob', 'value', 'rnn_state'], [np.zeros(16)])
def create_model(self, **kwargs):
state = Input(shape=(80, 80))
rnn_state = Input(shape=(16,))
conv1 = Conv2D(16, kernel_size=8, strides=4, activation=tf.nn.relu)(Reshape((80, 80, 1))(state))
conv2 = Conv2D(32, kernel_size=4, strides=2, activation=tf.nn.relu)(conv1)
dense = Dense(256, activation=tf.nn.relu)(Flatten()(conv2))
gru, rnn_final_state = GRU(16, return_state=True, return_sequences=True, time_major=True)(
Reshape((-1, 256))(dense), initial_state=rnn_state)
concat = Concatenate()([dense, Reshape((16,))(gru)])
action_prob = Dense(env.n_actions, activation=tf.nn.softmax)(concat)
value = Dense(1)(concat)
return tf.keras.Model(inputs=[state, rnn_state], outputs=[action_prob, value, rnn_final_state])
policy = PongPolicy()
from deepchem.models.optimizers import Adam
a2c = dc.rl.A2C(env, policy, model_dir='model', optimizer=Adam(learning_rate=0.0002))
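# A2C (advantage actor-critic) trains the policy and value heads jointly; only the
# learning rate is customized here, the other hyperparameters keep their defaults.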
# Change this to train as many steps as you have patience for.
a2c.fit(1000)
# This code doesn't work well on Colab
env.reset()
while not env.terminated:
env.env.render()
env.step(a2c.select_action(env.state))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Reinforcement Learning
Step2: Next we create a model to implement our policy. This model receives the current state of the environment (the pixels being displayed on the screen at this moment) as its input. Given that input, it decides what action to perform. In Pong there are three possible actions at any moment
Step3: We will optimize the policy using the Advantage Actor Critic (A2C) algorithm. There are lots of hyperparameters we could specify at this point, but the default values for most of them work well on this problem. The only one we need to customize is the learning rate.
Step4: Optimize for as long as you have patience to. By 1 million steps you should see clear signs of learning. Around 3 million steps it should start to occasionally beat the game's built in AI. By 7 million steps it should be winning almost every time. Running on my laptop, training takes about 20 minutes for every million steps.
Step5: Let's watch it play and see how it does!
|
5,650
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import tensorflow as tf
from tensorflow.python.framework import ops
from tf_func import *
from mnist import read_data_sets
mnist = read_data_sets('MNIST_data')
# Build Computational Graph
sess = tf.InteractiveSession()
# Initialize placeholders for data & labels
x = tf.placeholder(tf.float32, shape=[None, 256])
y_ = tf.placeholder(tf.float32, shape=[None, 10])
keep_prob = tf.placeholder(tf.float32)
# reshape to make image volumes
x_image = tf.reshape(x, [-1,1,1,256])
x_image_drop = tf.nn.dropout(x_image, keep_prob)
W_fc = weight_variable([1, 1, 256, 10])
BW_fc = binarize_weights(W_fc)
y_conv = tf.reshape(conv2d(x_image, BW_fc), [-1, 10])
# create train ops
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y_conv))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y_conv,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# initialize all variables
sess.run(tf.global_variables_initializer())
# train loop
for i in range(10000):
batch = mnist.train.next_batch(50)
if i % 1000 == 0:
print("test accuracy %g"%accuracy.eval(feed_dict={
x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))
if i % 100 == 0:
train_accuracy = accuracy.eval(feed_dict={
x:batch[0], y_: batch[1], keep_prob: 1.0})
print("step %d,r training accuracy %g"%(i, train_accuracy))
train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})
import pickle
# trained binary weights
res = BW_fc.eval()
alpha = np.abs(res).sum(0).sum(0).sum(0) / res[:,:,:,0].size
BW = np.sign(res)
BW = np.squeeze(BW, axis=(0, 1))
BW = BW.T
BW[BW==-1] = 0
# mnist samples ranging from label 0 to 9
imgs = [mnist.test.images[3], mnist.test.images[2], mnist.test.images[208], mnist.test.images[811], mnist.test.images[1140],
mnist.test.images[102], mnist.test.images[814], mnist.test.images[223],mnist.test.images[128], mnist.test.images[214]]
imgs = np.vstack(imgs)
imgs[imgs==-1]=0
weights_int16 = np.zeros((10, 16), dtype=np.uint16)
for index in range(10):
for i in range(16):
for j in range(15):
weights_int16[index, i] += BW[index, 16 * i + j]
weights_int16[index, i] = np.left_shift(weights_int16[index, i], 1)
weights_int16[index, i] += BW[index, 16 * i + 15]
imgs_int16 = np.zeros((10, 16), dtype=np.uint16)
for index in range(10):
for i in range(16):
for j in range(15):
imgs_int16[index, 15-i] += imgs[index, 16 * (15 - j) + i]
imgs_int16[index, 15-i] = np.left_shift(imgs_int16[index, 15-i], 1)
imgs_int16[index, 15-i] += imgs[index, 16 * 0 + i]
pickle.dump({'imgs':imgs, 'weights': BW, 'alpha':alpha,
'imgs_int16':imgs_int16, 'weights_int16':weights_int16}, open( "BNN.pkl", "wb" ))
import matplotlib.pyplot as plt
%matplotlib inline
def dis_img(imgs, index):
img = imgs[index, :]
img = np.reshape(img, [16, 16])
plt.imshow(img, cmap='gray')
plt.show()
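# Simulate binarized inference on the CPU: each binary dot product is an XNOR between
# image bits and weight bits followed by a popcount; the predicted class is the one
# with the highest count.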
for img_index in range(10):
res = []
for i in range(10):
kk = np.logical_not(np.logical_xor(imgs[img_index, :], BW[i, :].T))
pop_count = np.sum(kk)
res.append(pop_count)
plt.subplot(2, 5, img_index + 1)
img = np.reshape(imgs[img_index, :], [16, 16])
plt.imshow(img, cmap='gray')
plt.axis('off')
plt.title("Pred: " + str(np.argmax(res, axis=0)))
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Training
Step2: Save Weights
Step3: Simulate on CPU
|
5,651
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
df = pd.DataFrame({'A': [1, 2, 's', 3, 'b'],
'B': ['green', 'red', 'blue', 'yellow', 'black']})
def g(df):
return df[pd.to_numeric(df.A, errors='coerce').notnull()]
result = g(df.copy())
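# result keeps only the rows whose column A can be parsed as a number
# (here the rows with A equal to 1, 2 and 3); the string rows 's' and 'b' are dropped.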
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
5,652
|
<ASSISTANT_TASK:>
Python Code:
from pygeocoder import Geocoder
apik='' # paste in the API key from your file
results = Geocoder(apik).geocode("FSEGA Cluj")
print(results[0].coordinates)
results[0].country
results[0].city
results[0].county
results[0].postal_code
results[0].formatted_address
results = Geocoder(apik).reverse_geocode(46.544151, 24.560025)
print(results[0])
results[0].city
df=pd.read_excel('df5.xlsx')
results = Geocoder(apik).geocode('Hungary')
print(results[0].coordinates)
koordinatak=[]
for orszag in list(df['Destinatie_tara2']):
try:
koord=Geocoder(apik).geocode(orszag)[0].coordinates
koordinatak.append(koord)
print(orszag,'geocoded')
except:
print(orszag,'error')
if orszag=='Cseh':
koord=Geocoder(apik).geocode('Czech Republic')[0].coordinates
koordinatak.append(koord)
print(orszag,'geocoded on second attempt')
df['koord']=koordinatak
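# Split the "(lat, lon)" pair into two numeric columns: strip the parentheses
# around each part and convert the result back to float.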
df['lat']=df['koord'].astype(str).str.split(',').str[0].str.replace('(','').astype(float)
df['lon']=df['koord'].astype(str).str.split(',').str[1].str.replace(')','').astype(float)
df.to_excel('koordinatak.xlsx')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The response contains all of the Google Maps address attributes.
Step2: Reverse geocoding
Step3: Application
Step4: Geocoding and error handling
Step5: We create two new columns from the coordinates, because longitude and latitude need to be separated. Here we apply three string functions with the str prefix. We first convert the DataFrame column we need to the str type, and at the end we convert the latitudes and longitudes back to numbers, i.e. to the float type.
Step6: We export the visualization to Flourish
|
5,653
|
<ASSISTANT_TASK:>
Python Code:
! pip install -q -U xarray matplotlib
! rm -rf data-driven-discretization-1d
! git clone https://github.com/google/data-driven-discretization-1d.git
! pip install -q -e data-driven-discretization-1d
# install the seaborn bug-fix from https://github.com/mwaskom/seaborn/pull/1602
! pip install -U -q git+git://github.com/stfnrpplngr/seaborn.git@309a9de383fac4db1c66dbf87815c4ba0c439c59
# Ensure we're using Tensorflow 1.x in Colab. If not using Colab, remove this magic.
%tensorflow_version 1.x
import tensorflow as tf
assert tf.__version__[:2] == '1.'
import seaborn
assert seaborn.__version__ == '0.9.0', 'restart kernel after running previous cell'
from matplotlib.colors import LogNorm
import enum
import numpy as np
import matplotlib
from matplotlib import pyplot as plt
import sys, time, os, h5py
import os
import json
import numpy as np
import seaborn
import pandas as pd
import xarray
import matplotlib.pyplot as plt
import pde_superresolution.utils
import pde_superresolution as pde
def _stack_all_rolls(inputs: tf.Tensor, max_offset: int) -> tf.Tensor:
"""Stack together all rolls of inputs, from 0 to max_offset."""
rolled = [tf.concat([inputs[i:, ...], inputs[:i, ...]], axis=0)
for i in range(max_offset)]
return tf.stack(rolled, axis=0)
@enum.unique
class Dataset(enum.Enum):
TRAINING = 0
VALIDATION = 1
def _model_inputs(fine_inputs, resample_factor):
inputs = fine_inputs[:, resample_factor-1::resample_factor]
labels = tf.stack([fine_inputs[:, offset-1::resample_factor]
for offset in range(1, resample_factor)], axis=-1)
base_grid = pde.polynomials.regular_grid(
pde.polynomials.GridOffset.STAGGERED, derivative_order=0,
accuracy_order=hparams.coefficient_grid_min_size, dx=1)
baselines = []
for offset in range(1, hparams.resample_factor):
current_grid = base_grid + 0.5 - offset / hparams.resample_factor
method = pde.polynomials.Method.FINITE_DIFFERENCES
reconstruction = pde.polynomials.reconstruct(
inputs, current_grid, method, derivative_order=0)
baselines.append(reconstruction)
baseline = tf.stack(baselines, axis=-1)
results = {'inputs': inputs, 'labels': labels, 'baseline': baseline}
for accuracy_order in [1, 3, 5]:
base_grid = pde.polynomials.regular_grid(
pde.polynomials.GridOffset.STAGGERED, derivative_order=0,
accuracy_order=accuracy_order, dx=1)
baselines = []
for offset in range(1, hparams.resample_factor):
current_grid = base_grid + 0.5 - offset / hparams.resample_factor
method = pde.polynomials.Method.FINITE_DIFFERENCES
reconstruction = pde.polynomials.reconstruct(
inputs, current_grid, method, derivative_order=0)
baselines.append(reconstruction)
results[f'baseline_{accuracy_order}'] = tf.stack(baselines, axis=-1)
return results
def make_dataset(snapshots,
hparams,
dataset_type: Dataset = Dataset.TRAINING,
repeat: bool = True,
evaluation: bool = False) -> tf.data.Dataset:
snapshots = np.asarray(snapshots, dtype=np.float32)
num_training = int(round(snapshots.shape[0] * hparams.frac_training))
if dataset_type is Dataset.TRAINING:
indexer = slice(None, num_training)
else:
assert dataset_type is Dataset.VALIDATION
indexer = slice(num_training, None)
dataset = tf.data.Dataset.from_tensor_slices(snapshots[indexer])
# no need to do dataset augmentation with rolling for eval
rolls_stop = 1 if evaluation else hparams.resample_factor
dataset = dataset.map(lambda x: _stack_all_rolls(x, rolls_stop))
dataset = dataset.map(lambda x: _model_inputs(x, hparams.resample_factor))
dataset = dataset.apply(tf.data.experimental.unbatch())
dataset = dataset.cache()
if repeat:
dataset = dataset.apply(
tf.data.experimental.shuffle_and_repeat(buffer_size=10000))
batch_size = hparams.base_batch_size * hparams.resample_factor
dataset = dataset.batch(batch_size)
dataset = dataset.prefetch(buffer_size=1)
return dataset
def stack_reconstruction(inputs, predictions):
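# Interleave the coarse inputs with the predicted fine-grid points so the result
# is a full-resolution reconstruction along the last axis.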
if isinstance(inputs, tf.Tensor):
stacked = tf.concat([predictions, inputs[..., tf.newaxis], ], axis=-1)
return tf.layers.flatten(stacked)
else:
stacked = np.concatenate([predictions, inputs[..., np.newaxis]], axis=-1)
new_shape = stacked.shape[:-2] + (np.prod(stacked.shape[-2:]),)
return stacked.reshape(new_shape)
def predict_coefficients(inputs: tf.Tensor,
hparams: tf.contrib.training.HParams,
reuse: object = tf.AUTO_REUSE) -> tf.Tensor:
_, equation = pde.equations.from_hparams(hparams)
pde.model.assert_consistent_solution(equation, inputs)
with tf.variable_scope('predict_coefficients', reuse=reuse):
num_derivatives = len(equation.DERIVATIVE_ORDERS)
base_grid = pde.polynomials.regular_grid(
pde.polynomials.GridOffset.STAGGERED, derivative_order=0,
accuracy_order=hparams.coefficient_grid_min_size,
dx=1.0)
net = inputs[:, :, tf.newaxis]
net /= equation.standard_deviation
activation = pde.model._NONLINEARITIES[hparams.nonlinearity]
for _ in range(hparams.num_layers - 1):
net = pde.layers.conv1d_periodic_layer(net, filters=hparams.filter_size,
kernel_size=hparams.kernel_size,
activation=activation, center=True)
poly_accuracy_layers = []
for offset in range(1, hparams.resample_factor):
current_grid = base_grid + 0.5 - offset / hparams.resample_factor
method = pde.polynomials.Method.FINITE_DIFFERENCES
poly_accuracy_layers.append(
pde.polynomials.PolynomialAccuracyLayer(
grid=current_grid,
method=method,
derivative_order=0,
accuracy_order=hparams.polynomial_accuracy_order,
out_scale=hparams.polynomial_accuracy_scale)
)
input_sizes = [layer.input_size for layer in poly_accuracy_layers]
if hparams.num_layers > 0:
net = pde.layers.conv1d_periodic_layer(net, filters=sum(input_sizes),
kernel_size=hparams.kernel_size,
activation=None, center=True)
else:
initializer = tf.initializers.zeros()
coefficients = tf.get_variable(
'coefficients', (sum(input_sizes),),
initializer=initializer)
net = tf.tile(coefficients[tf.newaxis, tf.newaxis, :],
[tf.shape(inputs)[0], inputs.shape[1].value, 1])
cum_sizes = np.cumsum(input_sizes)
starts = [0] + cum_sizes[:-1].tolist()
stops = cum_sizes.tolist()
zipped = zip(starts, stops, poly_accuracy_layers)
outputs = tf.stack([layer.apply(net[..., start:stop])
for start, stop, layer in zipped], axis=-2)
assert outputs.shape.as_list()[-1] == base_grid.size
return outputs
def predict(inputs, hparams):
coefficients = predict_coefficients(inputs, hparams)
return pde.model.apply_coefficients(coefficients, inputs)
def setup_training(dataset, hparams, scale=1.0):
tensors = dataset.make_one_shot_iterator().get_next()
predictions = predict(tensors['inputs'], hparams)
loss = tf.reduce_mean((tensors['labels'] - predictions) ** 2) / scale
train_step = pde.training.create_training_step(loss, hparams)
return loss, train_step
def baseline_loss(snapshots, hparams):
dataset = make_dataset(snapshots, hparams, repeat=False, evaluation=True)
tensors = dataset.make_one_shot_iterator().get_next()
loss = tf.reduce_mean((tensors['labels'] - tensors['baseline_1']) ** 2)
sess = tf.Session(config=pde.training._session_config())
losses = []
while True:
try:
losses.append(sess.run(loss))
except tf.errors.OutOfRangeError:
break
return np.mean(losses)
! gsutil cp gs://data-driven-discretization-public/training-data/burgers.h5 .
with h5py.File('burgers.h5') as f:
snapshots = f['v'][...]
hparams = pde.training.create_hparams(
equation='burgers',
conservative=False,
coefficient_grid_min_size=6,
resample_factor=16,
equation_kwargs=json.dumps(dict(num_points=512)),
base_batch_size=32,
)
demo_dataset = make_dataset(snapshots, hparams, repeat=False, evaluation=True)
sess = tf.Session(config=pde.training._session_config())
tf_example = demo_dataset.make_one_shot_iterator().get_next()
example = sess.run(tf_example)
plt.figure(figsize=(16, 4))
example_id = 2
plt.scatter(np.arange(0, 512, hparams.resample_factor),
np.roll(example['inputs'][example_id], 1, axis=-1), marker='s')
plt.plot(stack_reconstruction(example['inputs'], example['baseline'])[example_id], label='baseline')
plt.plot(stack_reconstruction(example['inputs'], example['labels'])[example_id], label='exact')
plt.legend()
demo_dataset = make_dataset(snapshots, hparams, Dataset.VALIDATION, repeat=False, evaluation=True)
tensors = demo_dataset.make_one_shot_iterator().get_next()
tensors['predictions'] = predict(tensors['inputs'], hparams)
sess.run(tf.global_variables_initializer())
example = sess.run(tensors)
for example_id in [0, 10, 20, 30, 40]:
plt.figure(figsize=(16, 4))
plt.scatter(np.arange(0, 512, hparams.resample_factor),
np.roll(example['inputs'][example_id], 1, axis=-1), marker='s')
plt.plot(stack_reconstruction(example['inputs'], example['baseline'])[example_id], label='baseline')
plt.plot(stack_reconstruction(example['inputs'], example['labels'])[example_id], label='exact')
plt.plot(stack_reconstruction(example['inputs'], example['predictions'])[example_id], label='predictions')
plt.legend()
hparams = pde.training.create_hparams(
equation='burgers',
conservative=False,
coefficient_grid_min_size=6,
resample_factor=8,
equation_kwargs=json.dumps(dict(num_points=512)),
eval_interval=500,
learning_stops=[20000, 40000],
learning_rates=[3e-3, 3e-4],
)
loss_scale = baseline_loss(snapshots, hparams)
%%time
tf.reset_default_graph()
dataset = make_dataset(snapshots, hparams)
loss, train_step = setup_training(dataset, hparams, scale=loss_scale)
sess = tf.Session(config=pde.training._session_config())
sess.run(tf.global_variables_initializer())
%%time
for step in range(hparams.learning_stops[-1]):
sess.run(train_step)
if (step + 1) % hparams.eval_interval == 0:
print(step, sess.run(loss))
demo_dataset = make_dataset(snapshots, hparams, Dataset.VALIDATION, repeat=False, evaluation=True)
tensors = demo_dataset.make_one_shot_iterator().get_next()
tensors['predictions'] = predict(tensors['inputs'], hparams)
array_list = []
while True:
try:
array_list.append(sess.run(tensors))
except tf.errors.OutOfRangeError:
break
arrays = {k: np.concatenate([d[k] for d in array_list])
for k in array_list[0]}
ds = xarray.Dataset({
'inputs': (('sample', 'x'), arrays['inputs']),
'labels': (('sample', 'x', 'offset'), arrays['labels']),
'nn_predictions': (('sample', 'x', 'offset'), arrays['predictions']),
'poly_predictions': (('sample', 'x', 'accuracy_order', 'offset'),
np.stack([arrays['baseline_1'],arrays['baseline_3'], arrays['baseline_5']], axis=-2)),
}, coords={'accuracy_order': [1, 3, 5]})
ds
!gsutil cp gs://data-driven-discretization-public/reconstruction/burgers_results_8x.nc .
hparams = pde.training.create_hparams(
equation='burgers',
conservative=False,
coefficient_grid_min_size=6,
resample_factor=8,
equation_kwargs=json.dumps(dict(num_points=512)),
eval_interval=500,
learning_stops=[20000, 40000],
learning_rates=[3e-3, 3e-4],
)
ds = xarray.open_dataset('burgers_results_8x.nc').load()
ds
plt.hist(abs(ds.labels - ds.poly_predictions.sel(accuracy_order=3)).data.ravel(),
bins=np.geomspace(1e-6, 2, num=51), alpha=0.5, label='3rd order');
plt.hist(abs(ds.labels - ds.nn_predictions).data.ravel(),
bins=np.geomspace(1e-6, 2, num=51), alpha=0.5, label='neural net');
plt.xscale('log')
plt.legend()
example_id = 0
fig, axes = plt.subplots(3, 1, figsize=(10, 15))
x = np.arange(512) * 2 * np.pi / 512
colors = seaborn.color_palette(n_colors=3)
for ax, example_id in zip(axes.ravel(), [0, 10, 20]):
ax.scatter(x[hparams.resample_factor-1::hparams.resample_factor],
ds.inputs.data[example_id],
marker='s', color=colors[0])
ax.plot(x, stack_reconstruction(ds.inputs.data, ds.labels.data)[example_id],
label='exact', color=colors[0])
ax.plot(x, stack_reconstruction(ds.inputs.data, ds.poly_predictions.sel(accuracy_order=3).data)[example_id],
label='baseline', color=colors[1])
ax.plot(x, stack_reconstruction(ds.inputs.data, ds.nn_predictions.data)[example_id],
label='predictions', linestyle='--', color=colors[2])
disc = xarray.Dataset()
disc['nn_error'] = abs(ds.nn_predictions - ds.labels).mean('offset')
disc['poly_error'] = abs(ds.poly_predictions.sel(accuracy_order=3) - ds.labels).mean('offset')
# https://en.wikipedia.org/wiki/Curvature#Curvature_of_the_graph_of_a_function
use_slope = 0
dx = 2*np.pi/512
y_xx = ds.labels.diff('offset', 2) / dx ** 2
y_x = 0.5 * (ds.labels.diff('offset').sel(offset=slice(None, -1)) + ds.labels.diff('offset').sel(offset=slice(1, None))) / dx
disc['bin_curvature'] = (abs(y_xx) / (1 + use_slope * y_x ** 2) ** (3/2)).max('offset')
y = stack_reconstruction(ds.inputs.data, ds.labels.data).astype(np.float64)
y_xx = (np.roll(y, -1, axis=-1) - 2 * y + np.roll(y, 1, axis=-1)) / dx ** 2
y_x = (0.5 * np.roll(y, -1, axis=-1) - 0.5 * np.roll(y, 1, axis=-1)) / dx
curvature = abs(y_xx) / (1 + use_slope * y_x ** 2) ** (3/2)
resample_factor = 8
curvature = np.stack([curvature[:, offset-1::resample_factor]
for offset in range(1, resample_factor)], axis=-1)
disc['curvature'] = ds.labels.copy(data=curvature)
disc['nearest_curvature'] = 10 ** np.log10(disc.bin_curvature).round(1)
df = disc.to_dataframe().reset_index()
curvature_count = (df.groupby('nearest_curvature').count())['sample']
import pandas as pd
# TODO(shoyer): upstream this into Seaborn
class CustomLinePlotter(seaborn.relational._LinePlotter):
def aggregate(self, vals, grouper, units=None):
Compute an estimate and confidence interval using grouper.
func = self.estimator
ci = self.ci
n_boot = self.n_boot
# Define a "null" CI for when we only have one value
null_ci = pd.Series(index=["low", "high"], dtype=np.float)
# Group and get the aggregation estimate
grouped = vals.groupby(grouper, sort=self.sort)
est = grouped.agg(func)
lower = grouped.quantile(1 - ci)
upper = grouped.quantile(ci)
cis = pd.DataFrame(np.c_[lower, upper],
index=est.index,
columns=["low", "high"]).stack()
# Unpack the CIs into "wide" format for plotting
if cis.notnull().any():
cis = cis.unstack().reindex(est.index)
else:
cis = None
return est.index, est, cis
def custom_lineplot(x=None, y=None, hue=None, size=None, style=None, data=None,
palette=None, hue_order=None, hue_norm=None,
sizes=None, size_order=None, size_norm=None,
dashes=True, markers=None, style_order=None,
units=None, estimator="mean", ci=95, n_boot=1000,
sort=True, err_style="band", err_kws=None,
legend="brief", ax=None, **kwargs):
p = CustomLinePlotter(
x=x, y=y, hue=hue, size=size, style=style, data=data,
palette=palette, hue_order=hue_order, hue_norm=hue_norm,
sizes=sizes, size_order=size_order, size_norm=size_norm,
dashes=dashes, markers=markers, style_order=style_order,
units=units, estimator=estimator, ci=ci, n_boot=n_boot,
sort=sort, err_style=err_style, err_kws=err_kws, legend=legend,
)
if ax is None:
ax = plt.gca()
p.plot(ax, kwargs)
return ax
seaborn.set_context("notebook", font_scale=12/11)
fig = plt.figure(figsize=(2*3.42, 2*2))
# LEFT
example_id = 0
ax = fig.subplots(1, 1, gridspec_kw=dict(bottom=0.11, top=0.985, left=0.095, right=0.58))
x = np.arange(512) * 2 * np.pi / 512
colors = ['C0', 'C1', 'C2']
ax.scatter(x[hparams.resample_factor-1::hparams.resample_factor],
ds.inputs.data[example_id],
marker='D', label='Known points', color=colors[0])
ax.plot(x, stack_reconstruction(ds.inputs.data, ds.labels.data)[example_id],
label='Exact solution',
color=colors[0],
linewidth=3,
)
ax.plot(x, stack_reconstruction(ds.inputs.data, ds.poly_predictions.sel(accuracy_order=3).data)[example_id],
label='Polynomial interp.', color=colors[1])
ax.plot(x, stack_reconstruction(ds.inputs.data, ds.nn_predictions.data)[example_id],
label='Neural net interp.', linestyle='--', color=colors[-1])
ax.legend(frameon=False, loc='lower left')
ax.set_xlim(1.13, 1.88)
ax.set_ylim(-1.2, 1.2)
ax.set_xlabel('$x$', labelpad=1)
ax.set_ylabel('$v$', labelpad=-5)
seaborn.despine()
# TOP RIGHT
axes = fig.subplots(1, 2, sharex=False, sharey=True,
gridspec_kw=dict(bottom=0.65, top=0.945, left=0.58, right=0.92, wspace=0))
axes[0].set_aspect('equal')
axes[1].set_aspect('equal')
bins = np.linspace(-2, 2, num=51)
im = axes[0].hist2d(
ds.labels.data.ravel(),
ds.poly_predictions.sel(accuracy_order=3).data.ravel(),
bins=2*[bins], cmin=1, norm=LogNorm(vmin=1, vmax=1e4))
im[-1].set_edgecolor('none')
im[-1].set_rasterized(True)
im[-1].set_zorder(-1)
im = axes[1].hist2d(
ds.labels.data.ravel(),
ds.nn_predictions.data.ravel(),
bins=2*[bins], cmin=1, norm=LogNorm(vmin=1, vmax=1e4))
im[-1].set_edgecolor('none')
im[-1].set_rasterized(True)
im[-1].set_zorder(-1)
cbaxes = fig.add_axes([0.93, 0.65, 0.01, 0.295])
cb = plt.colorbar(im[3], cax=cbaxes, extendfrac=0.05, extend='max')
axes[0].set_title('Polynomial')
axes[1].set_title('Neural net')
axes[0].set_xticks([-2, 0, 2])
axes[0].set_xticklabels(['-2', '0', '2 '])
axes[0].get_xaxis().majorTicks[2].label1.set_horizontalalignment('right')
axes[1].set_xticks([-2, 0, 2])
axes[1].get_xaxis().majorTicks[0].label1.set_horizontalalignment('left')
axes[0].set_yticks([-2, 0, 2])
axes[0].set_xlabel(r'$v_\mathrm{exact}$', labelpad=1)
axes[1].set_xlabel(r'$v_\mathrm{exact}$', labelpad=1)
axes[0].set_ylabel(r'$v_\mathrm{predicted}$', labelpad=-3)
# BOTTOM RIGHT
xmin = 1e-1
xmax = 1e4
ax = fig.subplots(
1, 1, gridspec_kw=dict(bottom=0.115, top=0.46, left=0.73, right=1))
custom_lineplot(x='nearest_curvature', y='poly_error', data=df, ax=ax, color=colors[1], estimator=np.median, ci=0.95)
custom_lineplot(x='nearest_curvature', y='nn_error', data=df, ax=ax, color=colors[-1], estimator=np.median, ci=0.95)
plt.setp(ax.get_lines()[1], linestyle='--')
ax.text(4e2, 2.5e0, 'Polynomial', va='center', ha='center')
ax.text(1e3, 2e-4, 'Neural\nnet', va='center', ha='center')
ax.set_xscale('log')
ax.set_xlim(xmin, xmax)
ax.set_yscale('log')
ax.set_yticks([1e-4, 1e-2, 1])
ax.set_xlabel(r'Curvature', labelpad=1)
ax.set_ylabel('Abs. error', labelpad=1)
seaborn.despine(ax=ax)
plt.figtext(0, 1, '(a)', ha='left', va='top')
plt.figtext(.5, 1, '(b)', ha='left', va='top')
plt.figtext(.62, 0.48, '(c)', ha='left', va='top')
fig.dpi = 90
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Library code
Step3: model
Step4: one time step
Step5: visualize an example
Step6: Baseline performance
Step7: Untrained model
Step8: train a model (optional)
Step9: Examine results from the trained model
Step10: Figures
Step11: Some full examples -- we are much better than third-order polynomial interpolation!
Step13: Create Figure 1 from the paper
|
5,654
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
learning_rate = 0.001
# Input and target placeholders
inputs_ = tf.placeholder(tf.float32, (None, 28,28,1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28,28,1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, filters=16, kernel_size=(3,3), padding='same', activation=tf.nn.relu, strides=1)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1, strides=(2,2), pool_size=(2,2), padding='same')
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1, filters=8, kernel_size=(3,3), padding='same', strides=1, activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, strides=(2,2), pool_size=(2,2), padding='same')
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, filters=8, kernel_size=(3,3), padding='same', strides=1, activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3, strides=(2,2), pool_size=(2,2), padding='same')
# Now 4x4x8
### Decoder
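# Upsampling here uses nearest-neighbor resizing followed by a regular convolution,
# which tends to avoid the checkerboard artifacts of transposed convolutions.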
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, filters=8, kernel_size=(3,3), padding='same', strides=1, activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, filters=8, kernel_size=(3,3), padding='same', strides=1, activation=tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, filters=16, kernel_size=(3,3), padding='same', strides=1, activation=tf.nn.relu)
# Now 28x28x16
logits = tf.layers.conv2d(conv6, filters=1, kernel_size=(3,3), padding='same', strides=1, activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits, name = 'decoded')
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
# Filter counts follow the shape comments; kernel size, strides and padding mirror the first autoencoder above.
conv1 = tf.layers.conv2d(inputs_, filters=32, kernel_size=(3,3), padding='same', strides=1, activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, strides=(2,2), pool_size=(2,2), padding='same')
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, filters=32, kernel_size=(3,3), padding='same', strides=1, activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, strides=(2,2), pool_size=(2,2), padding='same')
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, filters=16, kernel_size=(3,3), padding='same', strides=1, activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, strides=(2,2), pool_size=(2,2), padding='same')
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, filters=16, kernel_size=(3,3), padding='same', strides=1, activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, filters=32, kernel_size=(3,3), padding='same', strides=1, activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, filters=32, kernel_size=(3,3), padding='same', strides=1, activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(conv6, filters=1, kernel_size=(3,3), padding='same', strides=1, activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits, name='decoded')
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 100
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Network Architecture
Step2: Training
Step3: Denoising
Step4: Checking out the performance
|
5,655
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import tensorflow as tf
import tflearn
from tflearn.data_utils import to_categorical
reviews = pd.read_csv('reviews.txt', header=None)
labels = pd.read_csv('labels.txt', header=None)
from collections import Counter
total_counts = Counter()
for idx, review in reviews.to_records():
for word in review.split(' '):
total_counts[word] += 1
print("Total words in data set: ", len(total_counts))
vocab = total_counts.most_common()[:10000]
print(vocab[:60])
print(vocab[-1])
word2idx = {word: i for i, (word, freq) in enumerate(vocab)}
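# Map each of the 10000 most frequent words to a column index in the bag-of-words vector.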
def text_to_vector(text):
"""Converts text to feature vector based on the vocabulary."""
ret = np.zeros(len(vocab), dtype=int)
for word in text.split(' '):
if word in word2idx:
ret[word2idx[word]] += 1
return ret
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
word_vectors = np.zeros((len(reviews), len(vocab)), dtype=np.int_)
for ii, (_, text) in enumerate(reviews.to_records()):
word_vectors[ii] = text_to_vector(text)
# Printing out the first 5 word vectors
word_vectors[:5, :23]
Y = (labels=='positive').astype(np.int_)
records = len(labels)
shuffle = np.arange(records)
np.random.shuffle(shuffle)
test_fraction = 0.9
train_split, test_split = shuffle[:int(records*test_fraction)], shuffle[int(records*test_fraction):]
trainX, trainY = word_vectors[train_split,:], to_categorical(Y.values.T[0][train_split], 2)
testX, testY = word_vectors[test_split,:], to_categorical(Y.values.T[0][test_split], 2)
trainX
trainY
# Network building
def build_model(input_size, hidden_units=[100, 30], lr=0.1):
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
# Input -- [batch_size, input_vector_dimension]
print('Input features size: %s' % input_size)
net = tflearn.input_data([None, input_size])
# Hidden --
for n in hidden_units:
net = tflearn.fully_connected(net, n, activation='ReLU')
# Output --
net = tflearn.fully_connected(net, 2, activation='softmax')
# sgd: stochastic gradient descent
net = tflearn.regression(net, optimizer='sgd',
learning_rate=lr,
loss='categorical_crossentropy')
model = tflearn.DNN(net)
return model
model = build_model(trainX.shape[1], [5000, 200, 25], 0.08)
# Training
model = build_model(trainX.shape[1], [2000, 200, 40], 0.05)
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=128, n_epoch=300)
predictions = (np.array(model.predict(testX))[:,0] >= 0.5).astype(np.int_)
test_accuracy = np.mean(predictions == testY[:,0], axis=0)
print("Test accuracy: ", test_accuracy)
# Helper function that uses your model to predict sentiment
def test_sentence(sentence):
positive_prob = model.predict([text_to_vector(sentence.lower())])[0][1]
print('Sentence: {}'.format(sentence))
print('P(positive) = {:.3f} :'.format(positive_prob),
'Positive' if positive_prob > 0.5 else 'Negative')
sentence = "Moonlight is by far the best movie of 2016."
test_sentence(sentence)
sentence = "It's amazing anyone could be talented enough to make something this spectacularly awful"
test_sentence(sentence)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Preparing the data
Step2: Counting word frequency
Step3: Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.
Step4: What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.
Step5: The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews. We are probably fine with this number of words.
Step7: Text to vector function
Step8: If you do this right, the following code should return
Step9: Now, run through our entire review data set and convert each review to a word vector.
Step10: Train, Validation, Test sets
Step11: Building the network
Step12: Initializing the model
Step13: Training the network
Step14: Testing
Step15: Try out your own text!
|
5,656
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import datetime
import open_cp
import open_cp.sources.sepp as source_sepp
rates = np.random.random(size=(10,10))
simulation = source_sepp.GridHawkesProcess(rates, 0.5, 10)
points = simulation.sample_to_randomised_grid(0, 365, grid_size=50)
time_unit = source_sepp.make_time_unit(datetime.timedelta(days=1))
timed_points = source_sepp.scale_to_real_time(points, datetime.datetime(2017,1,1),
time_unit)
fig, ax = plt.subplots(ncols=2, figsize=(16,7))
ax[0].scatter(timed_points.xcoords, timed_points.ycoords, alpha=0.1)
ax[0].set_title("Space location of simulated data")
ax[0].set(xlim=[0,500], ylim=[0,500])
times = timed_points.times_datetime()
ax[1].scatter(times, timed_points.xcoords, alpha=0.1)
ax[1].set_xlim([datetime.datetime(2017,1,1), datetime.datetime(2018,1,1)])
ax[1].set_title("Date against x location")
fig.autofmt_xdate()
None
import open_cp.seppexp as sepp
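# Fit a grid-based self-exciting point process (SEPP) model to the simulated data.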
region = open_cp.RectangularRegion(xmin=0, xmax=500, ymin=0, ymax=500)
trainer = sepp.SEPPTrainer(region, grid_size=50)
trainer.data = timed_points
predictor = trainer.train(iterations=50)
print("Predicted omega={}, theta={}".format(predictor.omega, predictor.theta))
fig, ax = plt.subplots(ncols=1, figsize=(8,6))
ax.plot([0,1], [0,1], linewidth=1, color="r")
ax.scatter(rates.ravel(), predictor.mu.ravel()*1440)
ax.set(xlim=[0,1], ylim=[0,np.max(predictor.mu)*1440*1.1])
ax.set(xlabel="Actual background rate", ylabel="Predicted background rate")
ax.set_title("Predicted background rates")
None
predictor.data = timed_points
dates = [datetime.datetime(2018,1,1), datetime.datetime(2018,1,2)]
predictions = [predictor.predict(date) for date in dates]
background_prediction = predictor.background_prediction()
fig, ax = plt.subplots(ncols=3, figsize=(16,6))
for a in ax:
a.set(xlim=[region.xmin, region.xmax], ylim=[region.ymin, region.ymax])
for pred, date, a in zip(predictions, dates, ax):
m = a.pcolormesh(*pred.mesh_data(), pred.intensity_matrix, cmap="Blues")
a.set_title("Prediction for {}".format(date))
fig.colorbar(m, ax=a)
m2 = ax[2].pcolormesh(*background_prediction.mesh_data(), background_prediction.intensity_matrix, cmap="Blues")
ax[2].set_title("Background rate")
fig.colorbar(m2, ax=ax[2])
None
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Simulate some test data
Step2: Train the model
Step3: Make predictions
|
5,657
|
<ASSISTANT_TASK:>
Python Code:
# import the dataset
from quantopian.interactive.data.quandl import fred_gnp
# Since this data is public domain and provided by Quandl for free, there is no _free version of this
# data set, as found in the premium sets. This import gets you the entirety of this data set.
# import data operations
from odo import odo
# import other libraries we will use
import pandas as pd
import matplotlib.pyplot as plt
fred_gnp.sort('asof_date')
fred_gnp.count()
gnp_df = odo(fred_gnp, pd.DataFrame)
gnp_df.plot(x='asof_date', y='value')
plt.xlabel("As Of Date (asof_date)")
plt.ylabel("GNP")
plt.title("US Gross National Product")
plt.legend().set_visible(False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The data goes all the way back to 1947 and is updated quarterly.
Step2: Let's go plot it for fun. This data set is definitely small enough to just put right into a Pandas DataFrame
|
5,658
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import (absolute_import, division, print_function)
from functools import reduce, partial
from operator import mul
import sympy as sp
import numpy as np
import matplotlib.pyplot as plt
from pyneqsys.symbolic import SymbolicSys, TransformedSys, linear_exprs
sp.init_printing()
texnames = 'H^+ OH^- NH_4^+ NH_3 H_2O'.split()
n = len(texnames)
NH3_idx = texnames.index('NH_3')
NH3_varied = np.logspace(-7, 0)
c0 = 1e-7, 1e-7, 1e-7, 1, 55
K = Kw, Ka = 10**-14/55, 10**-9.24
stoichs = [[1, 1, 0, 0, -1], [1, 0, -1, 1, 0]] # our 2 equilibria
H = [1, 1, 4, 3, 2]
N = [0, 0, 1, 1, 0]
O = [0, 1, 0, 0, 1]
q = [1, -1, 1, 0, 0] # charge
preserv = [H, N, O, q]
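# Conservation laws: total hydrogen, nitrogen, oxygen and charge are unchanged by both equilibria.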
prod = lambda x: reduce(mul, x)
def get_f(x, params, backend, lnK):
init_concs = params[:n]
eq_constants = params[n:]
le = linear_exprs(preserv, x, linear_exprs(preserv, init_concs), rref=True)
if lnK:
return le + [
sum(backend.log(xi)*p for xi, p in zip(x, coeffs)) - backend.log(K)
for coeffs, K in zip(stoichs, eq_constants)
]
else:
return le + [
prod(xi**p for xi, p in zip(x, coeffs)) - K for coeffs, K in zip(stoichs, eq_constants)
]
neqsys = SymbolicSys.from_callback(
partial(get_f, lnK=False), n, n+len(K),
latex_names=[r'\mathrm{[%s]}' % nam for nam in texnames],
latex_param_names=[r'\mathrm{[%s]_0}' % nam for nam in texnames] + [r'K_{\rm w}', r'K_{\rm a}(\mathrm{NH_4^+})']
)
neqsys
neqsys.get_jac()
%matplotlib inline
def solve_and_plot(nsys):
fig = plt.figure(figsize=(12, 4))
ax_out = plt.subplot(1, 2, 1, xscale='log', yscale='log')
ax_err = plt.subplot(1, 2, 2, xscale='log')
ax_err.set_yscale('symlog', linthreshy=1e-14)
xres, extra = nsys.solve_and_plot_series(
c0, c0+K, NH3_varied, NH3_idx, 'scipy',
plot_kwargs=dict(ax=ax_out), plot_residuals_kwargs=dict(ax=ax_err))
for ax in (ax_out, ax_err):
ax.set_xlabel('[NH3]0 / M')
ax_out.set_ylabel('Concentration / M')
ax_out.legend(loc='best')
ax_err.set_ylabel('Residuals')
avg_nfev = np.average([nfo['nfev'] for nfo in extra['info']])
avg_njev = np.average([nfo['njev'] for nfo in extra['info']])
success = np.average([int(nfo['success']) for nfo in extra['info']])
return {'avg_nfev': avg_nfev, 'avg_njev': avg_njev, 'success': success}
solve_and_plot(neqsys)
tneqsys = TransformedSys.from_callback(
partial(get_f, lnK=True), (sp.exp, sp.log), 5, 7,
latex_names=neqsys.latex_names, latex_param_names=neqsys.latex_param_names)
tneqsys
c_res, info = tneqsys.solve([1]*5, np.array(c0+K))
c0, c_res, info['success']
solve_and_plot(tneqsys)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's consider
Step2: Let's define the stoichiometry and composition
Step3: and now a function for the system of equations
Step4: note how we passed rref=True to linear_exprs; this puts the linear system of equations in reduced row echelon form. The four preservation equations (one for charge and three for atom types) contain one linearly dependent equation, which is dropped by pyneqsys.symbolic.linear_exprs, and after adding our two equations from the equilibria we are left with 5 equations (same number as unknowns).
Step5: Now let's see how pyneqsys can transform our system
Step6: Note how the conservation laws became non-linear while the expressions corresponding to the equilibria became linear.
|
5,659
|
<ASSISTANT_TASK:>
Python Code:
import torch
import pyro
import pyro.distributions as dist
import pyro.poutine as poutine
from pyro.contrib.examples.bart import load_bart_od
from pyro.contrib.forecast import ForecastingModel, Forecaster, backtest, eval_crps
from pyro.infer.reparam import LocScaleReparam, StableReparam
from pyro.ops.tensor_utils import periodic_cumsum, periodic_repeat, periodic_features
from pyro.ops.stats import quantile
import matplotlib.pyplot as plt
%matplotlib inline
assert pyro.__version__.startswith('1.7.0')
pyro.set_rng_seed(20200221)
dataset = load_bart_od()
print(dataset.keys())
print(dataset["counts"].shape)
print(" ".join(dataset["stations"]))
T, O, D = dataset["counts"].shape
data = dataset["counts"][:T // (24 * 7) * 24 * 7].reshape(T // (24 * 7), -1).sum(-1).log()
data = data.unsqueeze(-1)
plt.figure(figsize=(9, 3))
plt.plot(data)
plt.title("Total weekly ridership")
plt.ylabel("log(# rides)")
plt.xlabel("Week after 2011-01-01")
plt.xlim(0, len(data));
# First we need some boilerplate to create a class and define a .model() method.
class Model1(ForecastingModel):
# We then implement the .model() method. Since this is a generative model, it shouldn't
# look at data; however it is convenient to see the shape of data we're supposed to
# generate, so this inputs a zeros_like(data) tensor instead of the actual data.
def model(self, zero_data, covariates):
data_dim = zero_data.size(-1) # Should be 1 in this univariate tutorial.
feature_dim = covariates.size(-1)
# The first part of the model is a probabilistic program to create a prediction.
# We use the zero_data as a template for the shape of the prediction.
bias = pyro.sample("bias", dist.Normal(0, 10).expand([data_dim]).to_event(1))
weight = pyro.sample("weight", dist.Normal(0, 0.1).expand([feature_dim]).to_event(1))
prediction = bias + (weight * covariates).sum(-1, keepdim=True)
# The prediction should have the same shape as zero_data (duration, obs_dim),
# but may have additional sample dimensions on the left.
assert prediction.shape[-2:] == zero_data.shape
# The next part of the model creates a likelihood or noise distribution.
# Again we'll be Bayesian and write this as a probabilistic program with
# priors over parameters.
noise_scale = pyro.sample("noise_scale", dist.LogNormal(-5, 5).expand([1]).to_event(1))
noise_dist = dist.Normal(0, noise_scale)
# The final step is to call the .predict() method.
self.predict(noise_dist, prediction)
T0 = 0 # begining
T2 = data.size(-2) # end
T1 = T2 - 52 # train/test split
%%time
pyro.set_rng_seed(1)
pyro.clear_param_store()
time = torch.arange(float(T2)) / 365
covariates = torch.stack([time], dim=-1)
forecaster = Forecaster(Model1(), data[:T1], covariates[:T1], learning_rate=0.1)
samples = forecaster(data[:T1], covariates, num_samples=1000)
p10, p50, p90 = quantile(samples, (0.1, 0.5, 0.9)).squeeze(-1)
crps = eval_crps(samples, data[T1:])
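# CRPS (continuous ranked probability score) summarizes probabilistic forecast accuracy; lower is better.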
print(samples.shape, p10.shape)
plt.figure(figsize=(9, 3))
plt.fill_between(torch.arange(T1, T2), p10, p90, color="red", alpha=0.3)
plt.plot(torch.arange(T1, T2), p50, 'r-', label='forecast')
plt.plot(data, 'k-', label='truth')
plt.title("Total weekly ridership (CRPS = {:0.3g})".format(crps))
plt.ylabel("log(# rides)")
plt.xlabel("Week after 2011-01-01")
plt.xlim(0, None)
plt.legend(loc="best");
plt.figure(figsize=(9, 3))
plt.fill_between(torch.arange(T1, T2), p10, p90, color="red", alpha=0.3)
plt.plot(torch.arange(T1, T2), p50, 'r-', label='forecast')
plt.plot(torch.arange(T1, T2), data[T1:], 'k-', label='truth')
plt.title("Total weekly ridership (CRPS = {:0.3g})".format(crps))
plt.ylabel("log(# rides)")
plt.xlabel("Week after 2011-01-01")
plt.xlim(T1, None)
plt.legend(loc="best");
%%time
pyro.set_rng_seed(1)
pyro.clear_param_store()
time = torch.arange(float(T2)) / 365
covariates = torch.cat([time.unsqueeze(-1),
periodic_features(T2, 365.25 / 7)], dim=-1)
forecaster = Forecaster(Model1(), data[:T1], covariates[:T1], learning_rate=0.1)
samples = forecaster(data[:T1], covariates, num_samples=1000)
p10, p50, p90 = quantile(samples, (0.1, 0.5, 0.9)).squeeze(-1)
crps = eval_crps(samples, data[T1:])
plt.figure(figsize=(9, 3))
plt.fill_between(torch.arange(T1, T2), p10, p90, color="red", alpha=0.3)
plt.plot(torch.arange(T1, T2), p50, 'r-', label='forecast')
plt.plot(data, 'k-', label='truth')
plt.title("Total weekly ridership (CRPS = {:0.3g})".format(crps))
plt.ylabel("log(# rides)")
plt.xlabel("Week after 2011-01-01")
plt.xlim(0, None)
plt.legend(loc="best");
plt.figure(figsize=(9, 3))
plt.fill_between(torch.arange(T1, T2), p10, p90, color="red", alpha=0.3)
plt.plot(torch.arange(T1, T2), p50, 'r-', label='forecast')
plt.plot(torch.arange(T1, T2), data[T1:], 'k-', label='truth')
plt.title("Total weekly ridership (CRPS = {:0.3g})".format(crps))
plt.ylabel("log(# rides)")
plt.xlabel("Week after 2011-01-01")
plt.xlim(T1, None)
plt.legend(loc="best");
class Model2(ForecastingModel):
def model(self, zero_data, covariates):
data_dim = zero_data.size(-1)
feature_dim = covariates.size(-1)
bias = pyro.sample("bias", dist.Normal(0, 10).expand([data_dim]).to_event(1))
weight = pyro.sample("weight", dist.Normal(0, 0.1).expand([feature_dim]).to_event(1))
# We'll sample a time-global scale parameter outside the time plate,
# then time-local iid noise inside the time plate.
drift_scale = pyro.sample("drift_scale",
dist.LogNormal(-20, 5).expand([1]).to_event(1))
with self.time_plate:
# We'll use a reparameterizer to improve variational fit. The model would still be
# correct if you removed this context manager, but the fit appears to be worse.
with poutine.reparam(config={"drift": LocScaleReparam()}):
drift = pyro.sample("drift", dist.Normal(zero_data, drift_scale).to_event(1))
# After we sample the iid "drift" noise we can combine it in any time-dependent way.
# It is important to keep everything inside the plate independent and apply dependent
# transforms outside the plate.
motion = drift.cumsum(-2) # A Brownian motion.
# The prediction now includes three terms.
prediction = motion + bias + (weight * covariates).sum(-1, keepdim=True)
assert prediction.shape[-2:] == zero_data.shape
# Construct the noise distribution and predict.
noise_scale = pyro.sample("noise_scale", dist.LogNormal(-5, 5).expand([1]).to_event(1))
noise_dist = dist.Normal(0, noise_scale)
self.predict(noise_dist, prediction)
%%time
pyro.set_rng_seed(1)
pyro.clear_param_store()
time = torch.arange(float(T2)) / 365
covariates = periodic_features(T2, 365.25 / 7)
forecaster = Forecaster(Model2(), data[:T1], covariates[:T1], learning_rate=0.1,
time_reparam="dct",
)
samples = forecaster(data[:T1], covariates, num_samples=1000)
p10, p50, p90 = quantile(samples, (0.1, 0.5, 0.9)).squeeze(-1)
crps = eval_crps(samples, data[T1:])
plt.figure(figsize=(9, 3))
plt.fill_between(torch.arange(T1, T2), p10, p90, color="red", alpha=0.3)
plt.plot(torch.arange(T1, T2), p50, 'r-', label='forecast')
plt.plot(data, 'k-', label='truth')
plt.title("Total weekly ridership (CRPS = {:0.3g})".format(crps))
plt.ylabel("log(# rides)")
plt.xlabel("Week after 2011-01-01")
plt.xlim(0, None)
plt.legend(loc="best");
plt.figure(figsize=(9, 3))
plt.fill_between(torch.arange(T1, T2), p10, p90, color="red", alpha=0.3)
plt.plot(torch.arange(T1, T2), p50, 'r-', label='forecast')
plt.plot(torch.arange(T1, T2), data[T1:], 'k-', label='truth')
plt.title("Total weekly ridership (CRPS = {:0.3g})".format(crps))
plt.ylabel("log(# rides)")
plt.xlabel("Week after 2011-01-01")
plt.xlim(T1, None)
plt.legend(loc="best");
class Model3(ForecastingModel):
def model(self, zero_data, covariates):
data_dim = zero_data.size(-1)
feature_dim = covariates.size(-1)
bias = pyro.sample("bias", dist.Normal(0, 10).expand([data_dim]).to_event(1))
weight = pyro.sample("weight", dist.Normal(0, 0.1).expand([feature_dim]).to_event(1))
drift_scale = pyro.sample("drift_scale", dist.LogNormal(-20, 5).expand([1]).to_event(1))
with self.time_plate:
with poutine.reparam(config={"drift": LocScaleReparam()}):
drift = pyro.sample("drift", dist.Normal(zero_data, drift_scale).to_event(1))
motion = drift.cumsum(-2) # A Brownian motion.
prediction = motion + bias + (weight * covariates).sum(-1, keepdim=True)
assert prediction.shape[-2:] == zero_data.shape
# The next part of the model creates a likelihood or noise distribution.
# Again we'll be Bayesian and write this as a probabilistic program with
# priors over parameters.
stability = pyro.sample("noise_stability", dist.Uniform(1, 2).expand([1]).to_event(1))
skew = pyro.sample("noise_skew", dist.Uniform(-1, 1).expand([1]).to_event(1))
scale = pyro.sample("noise_scale", dist.LogNormal(-5, 5).expand([1]).to_event(1))
noise_dist = dist.Stable(stability, skew, scale)
# We need to use a reparameterizer to handle the Stable distribution.
# Note "residual" is the name of Pyro's internal sample site in self.predict().
with poutine.reparam(config={"residual": StableReparam()}):
self.predict(noise_dist, prediction)
%%time
pyro.set_rng_seed(2)
pyro.clear_param_store()
time = torch.arange(float(T2)) / 365
covariates = periodic_features(T2, 365.25 / 7)
forecaster = Forecaster(Model3(), data[:T1], covariates[:T1], learning_rate=0.1,
time_reparam="dct")
for name, value in forecaster.guide.median().items():
if value.numel() == 1:
print("{} = {:0.4g}".format(name, value.item()))
samples = forecaster(data[:T1], covariates, num_samples=1000)
p10, p50, p90 = quantile(samples, (0.1, 0.5, 0.9)).squeeze(-1)
crps = eval_crps(samples, data[T1:])
plt.figure(figsize=(9, 3))
plt.fill_between(torch.arange(T1, T2), p10, p90, color="red", alpha=0.3)
plt.plot(torch.arange(T1, T2), p50, 'r-', label='forecast')
plt.plot(data, 'k-', label='truth')
plt.title("Total weekly ridership (CRPS = {:0.3g})".format(crps))
plt.ylabel("log(# rides)")
plt.xlabel("Week after 2011-01-01")
plt.xlim(0, None)
plt.legend(loc="best");
plt.figure(figsize=(9, 3))
plt.fill_between(torch.arange(T1, T2), p10, p90, color="red", alpha=0.3)
plt.plot(torch.arange(T1, T2), p50, 'r-', label='forecast')
plt.plot(torch.arange(T1, T2), data[T1:], 'k-', label='truth')
plt.title("Total weekly ridership (CRPS = {:0.3g})".format(crps))
plt.ylabel("log(# rides)")
plt.xlabel("Week after 2011-01-01")
plt.xlim(T1, None)
plt.legend(loc="best");
%%time
pyro.set_rng_seed(1)
pyro.clear_param_store()
windows2 = backtest(data, covariates, Model2,
min_train_window=104, test_window=52, stride=26,
forecaster_options={"learning_rate": 0.1, "time_reparam": "dct",
"log_every": 1000, "warm_start": True})
%%time
pyro.set_rng_seed(1)
pyro.clear_param_store()
windows3 = backtest(data, covariates, Model3,
min_train_window=104, test_window=52, stride=26,
forecaster_options={"learning_rate": 0.1, "time_reparam": "dct",
"log_every": 1000, "warm_start": True})
fig, axes = plt.subplots(3, figsize=(8, 6), sharex=True)
axes[0].set_title("Gaussian versus Stable accuracy over {} windows".format(len(windows2)))
axes[0].plot([w["crps"] for w in windows2], "b<", label="Gaussian")
axes[0].plot([w["crps"] for w in windows3], "r>", label="Stable")
axes[0].set_ylabel("CRPS")
axes[1].plot([w["mae"] for w in windows2], "b<", label="Gaussian")
axes[1].plot([w["mae"] for w in windows3], "r>", label="Stable")
axes[1].set_ylabel("MAE")
axes[2].plot([w["rmse"] for w in windows2], "b<", label="Gaussian")
axes[2].plot([w["rmse"] for w in windows3], "r>", label="Stable")
axes[2].set_ylabel("RMSE")
axes[0].legend(loc="best")
plt.tight_layout()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Intro to Pyro's forecasting framework
Step2: Let's start with a simple log-linear regression model, with no trend or seasonality. Note that while this example is univariate, Pyro's forecasting framework is multivariate, so we'll often need to reshape using .unsqueeze(-1), .expand([1]), and .to_event(1).
Step3: We can now train this model by creating a Forecaster object. We'll split the data into [T0,T1) for training and [T1,T2) for testing.
Step4: Next we can evaluate by drawing posterior samples from the forecaster, passing in full covariates but only partial data. We'll use Pyro's quantile() function to plot the median and an 80% confidence interval. To evaluate fit we'll use eval_crps() to compute the Continuous Ranked Probability Score; this is a good metric to assess the distributional fit of a heavy-tailed distribution.
Step5: Zooming in to just the forecasted region, we see this model ignores seasonal behavior.
Step6: We could add a yearly seasonal component simply by adding new covariates (note we've already taken care in the model to handle feature_dim > 1).
Step7: Time-local random variables
Step8: Heavy-tailed noise
Step9: Backtesting
|
5,660
|
<ASSISTANT_TASK:>
Python Code:
%load_ext autoreload
%autoreload 2
%matplotlib inline
import logging
from conf import LisaLogging
LisaLogging.setup()#level=logging.WARNING)
import pandas as pd
from perf_analysis import PerfAnalysis
import trappy
from trappy import ILinePlot
from trappy.stats.grammar import Parser
from tests.eas.generic import TwoBigTasks, TwoBigThreeSmall, RampUp, RampDown, EnergyModelWakeMigration, OneSmallTask
t = EnergyModelWakeMigration(methodName="test_task_placement")
print t.__doc__
t.setUpClass()
experiment = t.executor.experiments[0]
# print t.get_power_df.__doc__
estimated_power = t.get_power_df(experiment)
# print t.get_expected_power_df.__doc__
expected_power = t.get_expected_power_df(experiment)
trace = t.get_trace(experiment)
trappy.plotter.plot_trace(trace.ftrace)
df = pd.concat([
expected_power.sum(axis=1), estimated_power.sum(axis=1)],
axis=1, keys=['ideal_power', 'observed_power']).fillna(method='ffill')
ILinePlot(df, column=df.columns.tolist(), drawstyle='steps-post').view()
trace.analysis.frequency.plotClusterFrequencies()
try:
t.test_slack()
except AssertionError as e:
print "test_slack failed:"
print e
else:
print "test_slack passed"
try:
t.test_task_placement()
except AssertionError as e:
print "test_task_placement failed:"
print e
else:
print "test_task_placement passed"
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Run test workload
Step2: By default we'll run the EnergyModelWakeMigration test, which runs a workload alternating between high and low-intensity. All the other test classes shown above have the same interface, but run different workloads. To run the tests on different workloads, change this line below
Step3: Examine trace
Step4: Plot Schedule
Step5: Plot estimated ideal and observed power usage
Step6: Plot CPU frequency
Step7: Assertions
Step8: test_task_placement checks that the task placement was energy efficient, taking advantage of lower-power CPUs whenever possible.
|
5,661
|
<ASSISTANT_TASK:>
Python Code:
# import numpy for SVD function
import numpy
# import matplotlib.pyplot for visualising arrays
import matplotlib.pyplot as plt
# create a really simple matrix
A = numpy.array([[-1,1], [1,1]])
# and show it
print("A = \n", A)
# plot the array
p = plt.subplot(111)
p.axis('scaled'); p.axis([-2, 2, -2, 2]); p.axhline(y=0, color='lightgrey'); p.axvline(x=0, color='lightgrey')
p.set_yticklabels([]); p.set_xticklabels([])
p.set_title("A")
p.plot(A[0,],A[1,],'ro')
plt.show()
# break it down into an SVD
U, s, VT = numpy.linalg.svd(A, full_matrices=False)
S = numpy.diag(s)
# what are U, S and V
print("U =\n", U, "\n")
print("S =\n", S, "\n")
print("V^T =\n", VT, "\n")
for px in [(131,U, "U"), (132,S, "S"), (133,VT, "VT")]:
subplot = px[0]
matrix = px[1]
matrix_name = px[2]
p = plt.subplot(subplot)
p.axis('scaled'); p.axis([-2, 2, -2, 2]); p.axhline(y=0, color='lightgrey'); p.axvline(x=0, color='lightgrey')
p.set_yticklabels([]); p.set_xticklabels([])
p.set_title(matrix_name)
p.plot(matrix[0,],matrix[1,],'ro')
pass
plt.show()
# rebuild A2 from U.S.V
A2 = numpy.dot(U,numpy.dot(S,VT))
print("A2 = \n", A2)
# plot the reconstructed A2
p = plt.subplot(111)
p.axis('scaled'); p.axis([-2, 2, -2, 2]); p.axhline(y=0, color='lightgrey'); p.axvline(x=0, color='lightgrey')
p.set_yticklabels([]); p.set_xticklabels([])
p.set_title("A2")
p.plot(A2[0,],A2[1,],'ro')
plt.show()
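# a quick numerical check (in addition to the visual one above) that the
# reconstruction matches the original matrix to floating-point precision
print("A2 equals A:", numpy.allclose(A, A2))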
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Start With A Simple Matrix
Step2: Now Take the SVD
Step3: Check U, S and V^T Do Actually Reconstruct A
|
5,662
|
<ASSISTANT_TASK:>
Python Code:
# <!-- collapse=True -->
a = 1
b = 1.0
print(a, type(a), b, type(b))
# <!-- collapse=False -->
a = 'Mon texte'
b = "Mon deuxième texte"
print(a, type(a), b, type(b))
a = b'Mon texte'
print(a, type(a))
b = b'Mon deuxième texte'
b = b'Mon deuxi\xc3\xa8me texte'
print(b)
print(b.decode('utf-8'))
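# the reverse operation also exists: encoding a str gives back bytes
print("Mon deuxième texte".encode('utf-8'))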
1 < 2, 1 > 2, 2 == 10, 2 >= 1.9, 2 == 2, 2 != 2
bool(a)
1 < 2 or 2 < 1
1 < 2 and 2 < 1
not(1 < 2 and 2 < 1)
a = [ 1, 'deux' , 3]
type(a)
len(a)
a[0], a[2]
a.index(1), a.index('deux'), a.index(3)
a = { 'nom': 'Doe' , 'prenom': 'John', 'age': 77 }
type(a)
len(a)
a['nom'], a['age']
# Valeurs initiales
liste = [ 1, 'deux' , 3]
dict = { 'nom': 'Doe' , 'prenom': 'John', 'age': 77 }
print("Valeurs initiales:\n", liste, dict)
# Modification des valeurs par assignation
liste[1] = 2
dict['age'] = 17
print("Modification des valeurs par assignation:\n", liste, dict)
# Ajout d'éléments
liste.append(4)
dict['nationalité'] = 'française'
print("Ajout d'éléments:\n", liste, dict)
# Suppression d'éléments
liste.pop(0)
dict.pop('age')
print("Suppression d'éléments:\n", liste, dict)
# Valeurs initiales
L = [ 1, 'deux' , 3]
print("Valeurs initiales:\n", L)
# Création d'une référence à la liste par simple assignation
L_ref = L
# Création d'une copie de la liste
L_copie = list(L)
# Modification de la liste initiale
L[1] = 2
print("Modification de la liste L:")
print("La liste L a bien, été modifiée:", L)
print("La liste L_ref a aussi été modifiée car il s'agit juste d'une référence vers la liste L:", L_ref)
print("La copie L_copie n'a pas été modifiée:", L_copie)
chiffre = input("Entrer la valeur du chiffre souhaité: ")
print(chiffre)
print(type(chiffre))
chiffre*10
chiffre = int(chiffre)
10*chiffre
fichier = open('data/lorem.txt')
fichier.read()
fichier = open('data/lorem.txt')
fichier.readline()
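# a more robust pattern for reading files is a context manager, which closes
# the file automatically; shown here as an extra illustration on the same file
with open('data/lorem.txt') as fichier:
    for ligne in fichier:
        print(ligne, end='')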
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: But it also implements more exotic numbers, such as
Step2: Bytes are prefixed with the letter b; this is raw text used
Step3: Only certain characters can be represented simply (the characters
Step4: An encoding other than ASCII must be used for these characters,
Step5: Boolean values
Step6: They can also be used to find out whether a variable exists.
Step7: Several tests can also be combined with the boolean operators and, or and not.
Step8: Lists and dictionaries
Step9: The length of the list can easily be obtained thanks to the function
Step10: Conversely, the index corresponding to a value can be found with the index method.
Step11: Dictionaries
Step12: To access the elements of the dictionary, you simply call the key
Step13: Modifying lists and dictionaries
Step14: If you need to modify a list or a dictionary but want to keep a record of the initial objects, you must first create a copy; it is not enough to create an additional variable, otherwise that variable would also be modified if the initial object changed
Step15: Data input
Step16: Be careful though: this value is a character string, and the operations are those of strings.
Step17: It should be converted to an integer or a float before using the value.
Step18: Reading a file
Step19: In the previous output, the \n characters represent line breaks
|
5,663
|
<ASSISTANT_TASK:>
Python Code:
import logging
import os
from gensim import corpora, utils
from gensim.models.wrappers.dtmmodel import DtmModel
import numpy as np
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
logging.debug("test")
documents = [[u'senior', u'studios', u'studios', u'studios', u'creators', u'award', u'mobile', u'currently', u'challenges', u'senior', u'summary', u'senior', u'motivated', u'creative', u'senior', u'performs', u'engineering', u'tasks', u'infrastructure', u'focusing', u'primarily', u'programming', u'interaction', u'designers', u'engineers', u'leadership', u'teams', u'teams', u'crews', u'responsibilities', u'engineering', u'quality', u'functional', u'functional', u'teams', u'organizing', u'prioritizing', u'technical', u'decisions', u'engineering', u'participates', u'participates', u'reviews', u'participates', u'hiring', u'conducting', u'interviews', u'feedback', u'departments', u'define', u'focusing', u'engineering', u'teams', u'crews', u'facilitate', u'engineering', u'departments', u'deadlines', u'milestones', u'typically', u'spends', u'designing', u'developing', u'updating', u'bugs', u'mentoring', u'engineers', u'define', u'schedules', u'milestones', u'participating', u'reviews', u'interviews', u'sized', u'teams', u'interacts', u'disciplines', u'knowledge', u'skills', u'knowledge', u'knowledge', u'xcode', u'scripting', u'debugging', u'skills', u'skills', u'knowledge', u'disciplines', u'animation', u'networking', u'expertise', u'competencies', u'oral', u'skills', u'management', u'skills', u'proven', u'effectively', u'teams', u'deadline', u'environment', u'bachelor', u'minimum', u'shipped', u'leadership', u'teams', u'location', u'resumes', u'jobs', u'candidates', u'openings', u'jobs'], [u'maryland', u'client', u'producers', u'electricity', u'operates', u'storage', u'utility', u'retail', u'customers', u'engineering', u'consultant', u'maryland', u'summary', u'technical', u'technology', u'departments', u'expertise', u'maximizing', u'output', u'reduces', u'operating', u'participates', u'areas', u'engineering', u'conducts', u'testing', u'solve', u'supports', u'environmental', u'understands', u'objectives', u'operates', u'responsibilities', u'handles', u'complex', u'engineering', u'aspects', u'monitors', u'quality', u'proficiency', u'optimization', u'recommendations', u'supports', u'personnel', u'troubleshooting', u'commissioning', u'startup', u'shutdown', u'supports', u'procedure', u'operating', u'units', u'develops', u'simulations', u'troubleshooting', u'tests', u'enhancing', u'solving', u'develops', u'estimates', u'schedules', u'scopes', u'understands', u'technical', u'management', u'utilize', u'routine', u'conducts', u'hazards', u'utilizing', u'hazard', u'operability', u'methodologies', u'participates', u'startup', u'reviews', u'pssr', u'participate', u'teams', u'participate', u'regulatory', u'audits', u'define', u'scopes', u'budgets', u'schedules', u'technical', u'management', u'environmental', u'awareness', u'interfacing', u'personnel', u'interacts', u'regulatory', u'departments', u'input', u'objectives', u'identifying', u'introducing', u'concepts', u'solutions', u'peers', u'customers', u'coworkers', u'knowledge', u'skills', u'engineering', u'quality', u'engineering', u'commissioning', u'startup', u'knowledge', u'simulators', u'technologies', u'knowledge', u'engineering', u'techniques', u'disciplines', u'leadership', u'skills', u'proven', u'engineers', u'oral', u'skills', u'technical', u'skills', u'analytically', u'solve', u'complex', u'interpret', u'proficiency', u'simulation', u'knowledge', u'applications', u'manipulate', u'applications', u'engineering', u'calculations', u'programs', u'matlab', u'excel', u'independently', u'environment', u'proven', u'skills', u'effectively', u'multiple', 
u'tasks', u'planning', u'organizational', u'management', u'skills', u'rigzone', u'jobs', u'developer', u'exceptional', u'strategies', u'junction', u'exceptional', u'strategies', u'solutions', u'solutions', u'biggest', u'insurers', u'operates', u'investment'], [u'vegas', u'tasks', u'electrical', u'contracting', u'expertise', u'virtually', u'electrical', u'developments', u'institutional', u'utilities', u'technical', u'experts', u'relationships', u'credibility', u'contractors', u'utility', u'customers', u'customer', u'relationships', u'consistently', u'innovations', u'profile', u'construct', u'envision', u'dynamic', u'complex', u'electrical', u'management', u'grad', u'internship', u'electrical', u'engineering', u'infrastructures', u'engineers', u'documented', u'management', u'engineering', u'quality', u'engineering', u'electrical', u'engineers', u'complex', u'distribution', u'grounding', u'estimation', u'testing', u'procedures', u'voltage', u'engineering', u'troubleshooting', u'installation', u'documentation', u'bsee', u'certification', u'electrical', u'voltage', u'cabling', u'electrical', u'engineering', u'candidates', u'electrical', u'internships', u'oral', u'skills', u'organizational', u'prioritization', u'skills', u'skills', u'excel', u'cadd', u'calculation', u'autocad', u'mathcad', u'skills', u'skills', u'customer', u'relationships', u'solving', u'ethic', u'motivation', u'tasks', u'budget', u'affirmative', u'diversity', u'workforce', u'gender', u'orientation', u'disability', u'disabled', u'veteran', u'vietnam', u'veteran', u'qualifying', u'veteran', u'diverse', u'candidates', u'respond', u'developing', u'workplace', u'reflects', u'diversity', u'communities', u'reviews', u'electrical', u'contracting', u'southwest', u'electrical', u'contractors'], [u'intern', u'electrical', u'engineering', u'idexx', u'laboratories', u'validating', u'idexx', u'integrated', u'hardware', u'entails', u'planning', u'debug', u'validation', u'engineers', u'validation', u'methodologies', u'healthcare', u'platforms', u'brightest', u'solve', u'challenges', u'innovation', u'technology', u'idexx', u'intern', u'idexx', u'interns', u'supplement', u'interns', u'teams', u'roles', u'competitive', u'interns', u'idexx', u'interns', u'participate', u'internships', u'mentors', u'seminars', u'topics', u'leadership', u'workshops', u'relevant', u'planning', u'topics', u'intern', u'presentations', u'mixers', u'applicants', u'ineligible', u'laboratory', u'compliant', u'idexx', u'laboratories', u'healthcare', u'innovation', u'practicing', u'veterinarians', u'diagnostic', u'technology', u'idexx', u'enhance', u'veterinarians', u'efficiency', u'economically', u'idexx', u'worldwide', u'diagnostic', u'tests', u'tests', u'quality', u'headquartered', u'idexx', u'laboratories', u'employs', u'customers', u'qualifications', u'applicants', u'idexx', u'interns', u'potential', u'demonstrated', u'portfolio', u'recommendation', u'resumes', u'marketing', u'location', u'americas', u'verification', u'validation', u'schedule', u'overtime', u'idexx', u'laboratories', u'reviews', u'idexx', u'laboratories', u'nasdaq', u'healthcare', u'innovation', u'practicing', u'veterinarians'], [u'location', u'duration', u'temp', u'verification', u'validation', u'tester', u'verification', u'validation', u'middleware', u'specifically', u'testing', u'applications', u'clinical', u'laboratory', u'regulated', u'environment', u'responsibilities', u'complex', u'hardware', u'testing', u'clinical', u'analyzers', u'laboratory', u'graphical', u'interfaces', u'complex', 
u'sample', u'sequencing', u'protocols', u'developers', u'correction', u'tracking', u'tool', u'timely', u'troubleshoot', u'testing', u'functional', u'manual', u'automated', u'participate', u'ongoing', u'testing', u'coverage', u'planning', u'documentation', u'testing', u'validation', u'corrections', u'monitor', u'implementation', u'recurrence', u'operating', u'statistical', u'quality', u'testing', u'global', u'multi', u'teams', u'travel', u'skills', u'concepts', u'waterfall', u'agile', u'methodologies', u'debugging', u'skills', u'complex', u'automated', u'instrumentation', u'environment', u'hardware', u'mechanical', u'components', u'tracking', u'lifecycle', u'management', u'quality', u'organize', u'define', u'priorities', u'organize', u'supervision', u'aggressive', u'deadlines', u'ambiguity', u'analyze', u'complex', u'situations', u'concepts', u'technologies', u'verbal', u'skills', u'effectively', u'technical', u'clinical', u'diverse', u'strategy', u'clinical', u'chemistry', u'analyzer', u'laboratory', u'middleware', u'basic', u'automated', u'testing', u'biomedical', u'engineering', u'technologists', u'laboratory', u'technology', u'availability', u'click', u'attach'], [u'scientist', u'linux', u'asrc', u'scientist', u'linux', u'asrc', u'technology', u'solutions', u'subsidiary', u'asrc', u'engineering', u'technology', u'contracts', u'multiple', u'agencies', u'scientists', u'engineers', u'management', u'personnel', u'allows', u'solutions', u'complex', u'aeronautics', u'aviation', u'management', u'aviation', u'engineering', u'hughes', u'technical', u'technical', u'aviation', u'evaluation', u'engineering', u'management', u'technical', u'terminal', u'surveillance', u'programs', u'currently', u'scientist', u'travel', u'responsibilities', u'develops', u'technology', u'modifies', u'technical', u'complex', u'reviews', u'draft', u'conformity', u'completeness', u'testing', u'interface', u'hardware', u'regression', u'impact', u'reliability', u'maintainability', u'factors', u'standardization', u'skills', u'travel', u'programming', u'linux', u'environment', u'cisco', u'knowledge', u'terminal', u'environment', u'clearance', u'clearance', u'input', u'output', u'digital', u'automatic', u'terminal', u'management', u'controller', u'termination', u'testing', u'evaluating', u'policies', u'procedure', u'interface', u'installation', u'verification', u'certification', u'core', u'avionic', u'programs', u'knowledge', u'procedural', u'testing', u'interfacing', u'hardware', u'regression', u'impact', u'reliability', u'maintainability', u'factors', u'standardization', u'missions', u'asrc', u'subsidiaries', u'affirmative', u'employers', u'applicants', u'disability', u'veteran', u'technology', u'location', u'airport', u'bachelor', u'schedule', u'travel', u'contributor', u'management', u'asrc', u'reviews'], [u'technical', u'solarcity', u'niche', u'vegas', u'overview', u'resolving', u'customer', u'clients', u'expanding', u'engineers', u'developers', u'responsibilities', u'knowledge', u'planning', u'adapt', u'dynamic', u'environment', u'inventive', u'creative', u'solarcity', u'lifecycle', u'responsibilities', u'technical', u'analyzing', u'diagnosing', u'troubleshooting', u'customers', u'ticketing', u'console', u'escalate', u'knowledge', u'engineering', u'timely', u'basic', u'phone', u'functionality', u'customer', u'tracking', u'knowledgebase', u'rotation', u'configure', u'deployment', u'sccm', u'technical', u'deployment', u'deploy', u'hardware', u'solarcity', u'bachelor', u'knowledge', u'dell', u'laptops', u'analytical', 
u'troubleshooting', u'solving', u'skills', u'knowledge', u'databases', u'preferably', u'server', u'preferably', u'monitoring', u'suites', u'documentation', u'procedures', u'knowledge', u'entries', u'verbal', u'skills', u'customer', u'skills', u'competitive', u'solar', u'package', u'insurance', u'vacation', u'savings', u'referral', u'eligibility', u'equity', u'performers', u'solarcity', u'affirmative', u'diversity', u'workplace', u'applicants', u'orientation', u'disability', u'veteran', u'careerrookie'], [u'embedded', u'exelis', u'junction', u'exelis', u'embedded', u'acquisition', u'networking', u'capabilities', u'classified', u'customer', u'motivated', u'develops', u'tests', u'innovative', u'solutions', u'minimal', u'supervision', u'paced', u'environment', u'enjoys', u'assignments', u'interact', u'multi', u'disciplined', u'challenging', u'focused', u'embedded', u'developments', u'spanning', u'engineering', u'lifecycle', u'specification', u'enhancement', u'applications', u'embedded', u'freescale', u'applications', u'android', u'platforms', u'interface', u'customers', u'developers', u'refine', u'specifications', u'architectures', u'java', u'programming', u'scripts', u'python', u'debug', u'debugging', u'emulators', u'regression', u'revisions', u'specialized', u'setups', u'capabilities', u'subversion', u'technical', u'documentation', u'multiple', u'engineering', u'techexpousa', u'reviews'], [u'modeler', u'semantic', u'modeling', u'models', u'skills', u'ontology', u'resource', u'framework', u'schema', u'technologies', u'hadoop', u'warehouse', u'oracle', u'relational', u'artifacts', u'models', u'dictionaries', u'models', u'interface', u'specifications', u'documentation', u'harmonization', u'mappings', u'aligned', u'coordinate', u'technical', u'peer', u'reviews', u'stakeholder', u'communities', u'impact', u'domains', u'relationships', u'interdependencies', u'models', u'define', u'analyze', u'legacy', u'models', u'corporate', u'databases', u'architectural', u'alignment', u'customer', u'expertise', u'harmonization', u'modeling', u'modeling', u'consulting', u'stakeholders', u'quality', u'models', u'storage', u'agile', u'specifically', u'focus', u'modeling', u'qualifications', u'bachelors', u'accredited', u'modeler', u'encompass', u'evaluation', u'skills', u'knowledge', u'modeling', u'techniques', u'resource', u'framework', u'schema', u'technologies', u'unified', u'modeling', u'technologies', u'schemas', u'ontologies', u'sybase', u'knowledge', u'skills', u'interpersonal', u'skills', u'customers', u'clearance', u'applicants', u'eligibility', u'classified', u'clearance', u'polygraph', u'techexpousa', u'solutions', u'partnership', u'solutions', u'integration'], [u'technologies', u'junction', u'develops', u'maintains', u'enhances', u'complex', u'diverse', u'intensive', u'analytics', u'algorithm', u'manipulation', u'management', u'documented', u'individually', u'reviews', u'tests', u'components', u'adherence', u'resolves', u'utilizes', u'methodologies', u'environment', u'input', u'components', u'hardware', u'offs', u'reuse', u'cots', u'gots', u'synthesis', u'components', u'tasks', u'individually', u'analyzes', u'modifies', u'debugs', u'corrects', u'integrates', u'operating', u'environments', u'develops', u'queries', u'databases', u'repositories', u'recommendations', u'improving', u'documentation', u'develops', u'implements', u'algorithms', u'functional', u'assists', u'developing', u'executing', u'procedures', u'components', u'reviews', u'documentation', u'solutions', u'analyzing', u'conferring', u'users', 
u'engineers', u'analyzing', u'investigating', u'areas', u'adapt', u'hardware', u'mathematical', u'models', u'predict', u'outcome', u'implement', u'complex', u'database', u'repository', u'interfaces', u'queries', u'bachelors', u'accredited', u'substituted', u'bachelors', u'firewalls', u'ipsec', u'vpns', u'technology', u'administering', u'servers', u'apache', u'jboss', u'tomcat', u'developing', u'interfaces', u'firefox', u'internet', u'explorer', u'operating', u'mainframe', u'linux', u'solaris', u'virtual', u'scripting', u'programming', u'oriented', u'programming', u'ajax', u'script', u'procedures', u'cobol', u'cognos', u'fusion', u'focus', u'html', u'java', u'java', u'script', u'jquery', u'perl', u'visual', u'basic', u'powershell', u'cots', u'cots', u'oracle', u'apex', u'integration', u'competitive', u'package', u'bonus', u'corporate', u'equity', u'tuition', u'reimbursement', u'referral', u'bonus', u'holidays', u'insurance', u'flexible', u'disability', u'insurance', u'technologies', u'disability', u'accommodation', u'recruiter', u'techexpousa']]
time_seq = [3, 7] # first 3 documents are from time slice one
# and the other 7 are from the second time slice.
class DTMcorpus(corpora.textcorpus.TextCorpus):
def get_texts(self):
return self.input
def __len__(self):
return len(self.input)
corpus = DTMcorpus(documents)
# path to dtm home folder
dtm_home = os.environ.get('DTM_HOME', "dtm-master")
# path to the binary. on my PC the executable file is dtm-master/bin/dtm
dtm_path = os.path.join(dtm_home, 'bin', 'dtm') if dtm_home else None
# you can also copy the path down directly. Change this variable to your DTM executable before running.
dtm_path = "/home/bhargav/dtm/main"
model = DtmModel(dtm_path, corpus, time_seq, num_topics=2,
id2word=corpus.dictionary, initialize_lda=True)
topics = model.show_topic(topicid=1, time=1, num_words=10)
topics
doc_number = 1
num_topics = 2
for i in range(0, num_topics):
print ("Distribution of Topic %d %f" % (i, model.gamma_[doc_number, i]))
model = DtmModel(dtm_path, corpus, time_seq, num_topics=2,
id2word=corpus.dictionary, initialize_lda=True, model='fixed')
document_no = 1 #document 2
topic_no = 1 #topic number 2
time_slice = 0 #time slice 1
model.influences_time[time_slice][document_no][topic_no]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First we will set up logging
Step2: Now let's load a set of documents
Step3: This corpus contains 10 documents. Now let's say we would like to model this with DTM.
Step4: A simple corpus wrapper to load a premade corpus. You can use this with your own data.
Step5: So now we have to generate the path to the DTM executable; here I have already set an ENV variable for DTM_HOME
Step6: That is basically all we need to be able to invoke the Training.
Step7: If everything worked we should be able to print out the topics
Step8: Document-Topic proportions
Step9: DIM Example
Step10: The main difference between the DTM and DIM models is the addition of Influence files for each time-slice, which is interpreted with the influences_time variable.
|
5,664
|
<ASSISTANT_TASK:>
Python Code:
import ipyvolume as ipv
import numpy as np
s = 1/2**0.5
# 4 vertices for the tetrahedron
x = np.array([1., -1, 0, 0])
y = np.array([0, 0, 1., -1])
z = np.array([-s, -s, s, s])
# and 4 surfaces (triangles), where the number refer to the vertex index
triangles = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1,3,2)]
ipv.figure()
# we draw the tetrahedron
mesh = ipv.plot_trisurf(x, y, z, triangles=triangles, color='orange')
# and also mark the vertices
ipv.scatter(x, y, z, marker='sphere', color='blue')
ipv.xyzlim(-2, 2)
ipv.show()
# f(u, v) -> (u, v, u*v**2)
a = np.arange(-5, 5)
U, V = np.meshgrid(a, a)
X = U
Y = V
Z = X*Y**2
ipv.figure()
ipv.plot_surface(X, Z, Y, color="orange")
ipv.plot_wireframe(X, Z, Y, color="red")
ipv.show()
X = np.arange(-5, 5, 0.25*1)
Y = np.arange(-5, 5, 0.25*1)
X, Y = np.meshgrid(X, Y)
R = np.sqrt(X**2 + Y**2)
Z = np.sin(R)
from matplotlib import cm
colormap = cm.coolwarm
znorm = Z - Z.min()
znorm /= znorm.ptp()
znorm.min(), znorm.max()
color = colormap(znorm)
ipv.figure()
mesh = ipv.plot_surface(X, Z, Y, color=color[...,:3])
ipv.show()
# import PIL.Image
# image = PIL.Image.open('data/jupyter.png')
# example of how to put a png as a texture
import PIL.Image
import requests
import io
url = 'https://vaex.io/img/logos/spiral-small.png'
r = requests.get(url, stream=True)
f = io.BytesIO(r.content)
image = PIL.Image.open(f)
import ipyvolume as ipv
import numpy as np
fig = ipv.figure()
ipv.style.use('dark')
# we create a sequence of 8 u v coordinates so that the texture moves across the surface.
u = np.array([X/5 +np.sin(k/8*np.pi)*4. for k in range(8)])
v = np.array([-Y/5*(1-k/7.) + Z*(k/7.) for k in range(8)])
mesh = ipv.plot_mesh(X, Z, Y, u=u, v=v, texture=image, wireframe=False)
ipv.animation_control(mesh, interval=800, sequence_length=8)
ipv.show()
# this doesn't work anymore with modern ipykernel
# frames = 30
# ipv.movie('movie.gif', frames=frames)
# so we also need to skip this
# ipv.figure()
# x = np.array([-1., 1, 1, -1])
# y = np.array([-1, -1, 1., 1])
# z = np.array([0., 0, 0., 0])
# u = x / 2 + 0.5
# v = y / 2 + 0.5
# # square
# triangles = [(0, 1, 2), (0, 2, 3)]
# m = ipv.plot_trisurf(x, y, z, triangles=triangles, u=u, v=v, texture=PIL.Image.open('movie.gif'))
# ipv.animation_control(m, sequence_length=frames)
# ipv.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Triangle meshes
Step2: Surfaces
Step3: Colors
Step4: Texture mapping
Step5: We now make a small movie / animated gif of 30 frames.
Step6: And play that movie on a square
|
5,665
|
<ASSISTANT_TASK:>
Python Code:
# <!-- collapse=True -->
# Importando las librerías que vamos a utilizar
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import LabelEncoder
# graficos incrustados
%matplotlib inline
# parametros esteticos de seaborn
sns.set_palette("deep", desat=.6)
sns.set_context(rc={"figure.figsize": (8, 4)})
# importando el dataset a un Dataframe de Pandas
ONG_data = pd.read_csv('LEARNING.csv', header=0)
# Examinando las primeras 10 filas y 10 columnas del dataset
ONG_data.ix[:10, :10]
# Controlando la cantidad de registros
ONG_data['DONOR_AMOUNT'].count()
# Controlando valores nulos
ONG_data.isnull().any().any()
# Agrupando columnas por tipo de datos
tipos = ONG_data.columns.to_series().groupby(ONG_data.dtypes).groups
# Armando lista de columnas categóricas
ctext = tipos[np.dtype('object')]
len(ctext) # cantidad de columnas con datos categóricos.
# Armando lista de columnas numéricas
columnas = ONG_data.columns # lista de todas las columnas
cnum = list(set(columnas) - set(ctext))
len(cnum)
# Completando valores faltantas datos cuantititavos
for c in cnum:
mean = ONG_data[c].mean()
ONG_data[c] = ONG_data[c].fillna(mean)
# Completando valores faltantas datos categóricos
for c in ctext:
mode = ONG_data[c].mode()[0]
ONG_data[c] = ONG_data[c].fillna(mode)
# Controlando que no hayan valores faltantes
ONG_data.isnull().any().any()
# Guardando el dataset preprocesado
# Save transform datasets
ONG_data.to_csv("LEARNING_procesado.csv", index=False)
# Calculando el porcentaje de donantes sobre toda la base de datos
porcent_donantes = (ONG_data[ONG_data.DONOR_AMOUNT
> 0]['DONOR_AMOUNT'].count() * 1.0
/ ONG_data['DONOR_AMOUNT'].count()) * 100.0
print("El procentaje de donantes de la base de datos es {0:.2f}%"
.format(porcent_donantes))
# Grafico de totas del porcentaje de donantes
# Agrupando por DONOR_FLAG
donantes = ONG_data.groupby('DONOR_FLAG').IDX.count()
# Creando las leyendas del grafico.
labels = [ 'Donante\n' + str(round(x * 1.0 / donantes.sum() *
100.0, 2)) + '%' for x in donantes ]
labels[0] = 'No ' + labels[0]
plt.pie(donantes, labels=labels)
plt.title('Porcion de donantes')
plt.show()
# Creando subset con solo los donates
ONG_donantes = ONG_data[ONG_data.DONOR_AMOUNT > 0]
# cantidad de donantes
len(ONG_donantes)
# Analizando el importe de donanciones
# Creando un segmentos de importes
imp_segm = pd.cut(ONG_donantes['DONOR_AMOUNT'],
[0, 10, 20, 30, 40, 50, 60, 100, 200])
# Creando el grafico de barras desde pandas
plot = pd.value_counts(imp_segm).plot(kind='bar',
title='Importes de donacion')
plot.set_ylabel('Cant de donantes')
plot.set_xlabel('Rango de importes')
plt.show()
# Agrupación por segmento segun importe donado.
pd.value_counts(imp_segm)
# importe de donación promedio
ONG_donantes['DONOR_AMOUNT'].mean()
# Gráfico de cajas del importe de donación
sns.boxplot(list(ONG_donantes['DONOR_AMOUNT']))
plt.title('importe de donación')
plt.show()
# Grafico del género de los donantes
ONG_donantes.groupby('GENDER').size().plot(kind='bar')
plt.title('Distribución por género')
plt.show()
# Donaciones segun el género
ONG_donantes[(ONG_donantes.DONOR_AMOUNT <= 50)
& (ONG_donantes.GENDER.isin(['F', 'M'])
)][['DONOR_AMOUNT', 'GENDER']].boxplot(by='GENDER')
plt.title('Donantes segun sexo')
plt.show()
# Media de impote donado por mujeres
ONG_donantes[ONG_donantes.GENDER == 'F'][['DONOR_AMOUNT']].mean()
# Media de impote donado por hombres
ONG_donantes[ONG_donantes.GENDER == 'M'][['DONOR_AMOUNT']].mean()
# Distribución de la edad de los donantes
ONG_donantes['AGE'].hist().set_title('Distribución de donantes segun edad')
plt.show()
# Agrupando la edad por rango de a 10
AGE2 = pd.cut(ONG_donantes['AGE'], range(0, 100, 10))
ONG_donantes['AGE2'] = AGE2
# Gráfico de barras de donaciones por edad
pd.value_counts(AGE2).plot(kind='bar', title='Donaciones por edad')
plt.show()
# Importes de donación por grango de edad
ONG_donantes[ONG_donantes.DONOR_AMOUNT <= 50][['DONOR_AMOUNT',
'AGE2']].boxplot(by='AGE2')
plt.title('Importe de donación por edad')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As we can see, using simple Python expressions we can load the NGO's database into a Pandas DataFrame, which will let us manipulate the data very easily. Let's start exploring this dataset in a bit more detail!
Step2: As we can see, the method returns the value "True", which indicates that there are null values in our dataset. These values can have a significant influence on our predictive model, so deciding how to handle them is always an important decision. The alternatives we have are
Step3: We have now managed to separate the 481 columns of our dataset: 68 columns contain categorical data and 413 contain quantitative data. Let's proceed to impute the missing values.
Step4: Perfect! We now have a dataset free of missing values. We are ready to start exploring the data; let's begin by determining the percentage of people who were ever donors to the NGO and are included in the database we are working with.
Step5: Here we can see that the percentage of people who donated in the past is really very low, only 5% of the whole database (2423 people). This is important to keep in mind, since such a large imbalance between the classes to be classified can considerably affect our learning algorithm.
Step6: This analysis shows that most donations fall in a range between 0 and 30, with the average donation being 15.60. We can also see that donations above 50 are really infrequent, so they are outliers and it would be prudent to remove these cases when training our model so that they do not distort the results.
Step7: Here we see that women tend to be more likely to donate, although they donate a lower average amount (14.61) than men (16.82). Now let's see how donations behave with respect to age.
|
5,666
|
<ASSISTANT_TASK:>
Python Code:
import os, sys
sys.path = [os.path.abspath("../../")] + sys.path
from deep_learning4e import *
from notebook4e import *
from learning4e import *
raw_net = [InputLayer(input_size), DenseLayer(input_size, output_size)]
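# For reference, a sketch of how input_size and output_size could be derived
# from a DataSet such as the iris set loaded below (attribute names assumed):
# input_size = len(dataset.inputs)
# output_size = len(dataset.values[dataset.target])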
iris = DataSet(name="iris")
classes = ["setosa", "versicolor", "virginica"]
iris.classes_to_numbers(classes)
pl = perceptron_learner(iris, epochs=500, learning_rate=0.01, verbose=50)
print(err_ratio(pl, iris))
tests = [([5.0, 3.1, 0.9, 0.1], 0),
([5.1, 3.5, 1.0, 0.0], 0),
([4.9, 3.3, 1.1, 0.1], 0),
([6.0, 3.0, 4.0, 1.1], 1),
([6.1, 2.2, 3.5, 1.0], 1),
([5.9, 2.5, 3.3, 1.1], 1),
([7.5, 4.1, 6.2, 2.3], 2),
([7.3, 4.0, 6.1, 2.4], 2),
([7.0, 3.3, 6.1, 2.5], 2)]
print(grade_learner(pl, tests))
train_img, train_lbl, test_img, test_lbl = load_MNIST(path="../../aima-data/MNIST/Digits")
import numpy as np
import matplotlib.pyplot as plt
train_examples = [np.append(train_img[i], train_lbl[i]) for i in range(len(train_img))]
test_examples = [np.append(test_img[i], test_lbl[i]) for i in range(len(test_img))]
print("length of training dataset:", len(train_examples))
print("length of test dataset:", len(test_examples))
mnist = DataSet(examples=train_examples[:1000])
pl = perceptron_learner(mnist, epochs=10, verbose=1)
print(err_ratio(pl, mnist))
test_mnist = DataSet(examples=test_examples[:100])
print(err_ratio(pl, test_mnist))
# initialize the network
raw_net = [InputLayer(input_size)]
# add hidden layers
hidden_input_size = input_size
for h_size in hidden_layer_sizes:
raw_net.append(DenseLayer(hidden_input_size, h_size))
hidden_input_size = h_size
raw_net.append(DenseLayer(hidden_input_size, output_size))
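# hidden_layer_sizes is assumed to be a user-supplied list, e.g.:
# hidden_layer_sizes = [4]  # one hidden layer with 4 neurons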
nn = neural_net_learner(iris, epochs=100, learning_rate=0.15, optimizer=gradient_descent, verbose=10)
print("error ration on training set:",err_ratio(nn, iris))
tests = [([5.0, 3.1, 0.9, 0.1], 0),
([5.1, 3.5, 1.0, 0.0], 0),
([4.9, 3.3, 1.1, 0.1], 0),
([6.0, 3.0, 4.0, 1.1], 1),
([6.1, 2.2, 3.5, 1.0], 1),
([5.9, 2.5, 3.3, 1.1], 1),
([7.5, 4.1, 6.2, 2.3], 2),
([7.3, 4.0, 6.1, 2.4], 2),
([7.0, 3.3, 6.1, 2.5], 2)]
print("accuracy on test set:",grade_learner(nn, tests))
nn = neural_net_learner(mnist, epochs=100, verbose=10)
print(err_ratio(nn, mnist))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Perceptron Learner
Step2: Where input_size and output_size are calculated from dataset examples. In the perceptron learner, the gradient descent optimizer is used to update the weights of the network. We return a function predict which we will use in the future to classify a new item. The function computes the (algebraic) dot product of the item with the calculated weights for each node in the output layer. Then it picks the greatest value and classifies the item in the corresponding class.
Step3: We can see from the printed lines that the final total loss converged to around 10.50. If we check the error ratio of the perceptron learner on the dataset after training, we will see it is much better than random guessing
Step4: If we test the trained learner with some test cases
Step5: It seems the learner is correct on all the test examples.
Step6: Now let's train the perceptron learner on the first 1000 examples of the dataset
Step7: It looks like we have a near 90% error ratio on training data after the network is trained on it. Then we can investigate the model's performance on the test dataset which it never has seen before
Step8: It seems a single layer perceptron learner cannot simulate the structure of the MNIST dataset. To improve accuracy, we may not only increase training epochs but also consider changing to a more complicated network structure.
Step9: Where hidden_layer_sizes are the sizes of each hidden layer in a list which can be specified by user. Neural network learner uses gradient descent as default optimizer but user can specify any optimizer when calling neural_net_learner. The other special attribute that can be changed in neural_net_learner is batch_size which controls the number of examples used in each round of update. neural_net_learner also returns a predict function which calculates prediction by multiplying weight to inputs and applying activation functions.
Step10: Similarly we check the model's accuracy on both training and test dataset
Step11: We can see that the error ratio on the training set is smaller than that of the perceptron learner. As the error ratio is relatively small, let's try the model on the MNIST dataset to see whether there will be a larger difference.
|
5,667
|
<ASSISTANT_TASK:>
Python Code:
import sys
niftynet_path = '/Users/bar/Documents/Niftynet/'
sys.path.insert(0, niftynet_path)
from niftynet.io.image_reader import ImageReader
from niftynet.utilities.download import download
download('anisotropic_nets_brats_challenge_model_zoo')
from niftynet.io.image_reader import ImageReader
data_param = {'MR': {'path_to_search': '~/niftynet/data/BRATS_examples/HGG'}}
reader = ImageReader().initialise(data_param)
reader.shapes, reader.tf_dtypes
# read data using the initialised reader
idx, image_data, interp_order = reader(idx=0)
image_data['MR'].shape, image_data['MR'].dtype
# randomly sample the list of images
for _ in range(3):
idx, image_data, _ = reader()
print('{} image: {}'.format(idx, image_data['MR'].shape))
from niftynet.io.image_reader import ImageReader
data_param = {'image': {'path_to_search': '~/niftynet/data/BRATS_examples/HGG',
'filename_contains': 'T2'},
'label': {'path_to_search': '~/niftynet/data/BRATS_examples/HGG',
'filename_contains': 'Label'}}
reader = ImageReader().initialise(data_param)
# image file information (without loading the volumes)
reader.get_subject(0)
idx, image_data, interp_order = reader(idx=0)
image_data['image'].shape, image_data['label'].shape
from niftynet.io.image_reader import ImageReader
data_param = {'T1': {'path_to_search': '~/niftynet/data/BRATS_examples/HGG',
'filename_contains': 'T1', 'filename_not_contains': 'T1c'},
'T1c': {'path_to_search': '~/niftynet/data/BRATS_examples/HGG',
'filename_contains': 'T1c'},
'T2': {'path_to_search': '~/niftynet/data/BRATS_examples/HGG',
'filename_contains': 'T2'},
'Flair': {'path_to_search': '~/niftynet/data/BRATS_examples/HGG',
'filename_contains': 'Flair'},
'label': {'path_to_search': '~/niftynet/data/BRATS_examples/HGG',
'filename_contains': 'Label'}}
grouping_param = {'image': ('T1', 'T1c', 'T2', 'Flair'), 'label':('label',)}
reader = ImageReader().initialise(data_param, grouping_param)
_, image_data, _ = reader(idx=0)
image_data['image'].shape, image_data['label'].shape
from niftynet.io.image_reader import ImageReader
from niftynet.layer.rand_rotation import RandomRotationLayer as Rotate
data_param = {'MR': {'path_to_search': '~/niftynet/data/BRATS_examples/HGG'}}
reader = ImageReader().initialise(data_param)
rotation_layer = Rotate()
rotation_layer.init_uniform_angle([-10.0, 10.0])
reader.add_preprocessing_layers([rotation_layer])
_, image_data, _ = reader(idx=0)
image_data['MR'].shape
# import matplotlib.pyplot as plt
# plt.imshow(image_data['MR'][:, :, 50, 0, 0])
# plt.show()
import tensorflow as tf
from niftynet.io.image_reader import ImageReader
# initialise multi-modal image and label reader
data_param = {'T1': {'path_to_search': '~/niftynet/data/BRATS_examples/HGG',
'filename_contains': 'T1', 'filename_not_contains': 'T1c'},
'T1c': {'path_to_search': '~/niftynet/data/BRATS_examples/HGG',
'filename_contains': 'T1c'},
'T2': {'path_to_search': '~/niftynet/data/BRATS_examples/HGG',
'filename_contains': 'T2'},
'Flair': {'path_to_search': '~/niftynet/data/BRATS_examples/HGG',
'filename_contains': 'Flair'},
'label': {'path_to_search': '~/niftynet/data/BRATS_examples/HGG',
'filename_contains': 'Label'}}
grouping_param = {'image': ('T1', 'T1c', 'T2', 'Flair'), 'label':('label',)}
reader = ImageReader().initialise(data_param, grouping_param)
# reader as a generator
def image_label_pair_generator():
    """A generator wrapper of an initialised reader.

    :yield: a dictionary of images (numpy arrays).
    """
while True:
_, image_data, _ = reader()
yield image_data
# tensorflow dataset
dataset = tf.data.Dataset.from_generator(
image_label_pair_generator,
output_types=reader.tf_dtypes)
#output_shapes=reader.shapes)
dataset = dataset.batch(1)
iterator = dataset.make_initializable_iterator()
# run the tensorflow graph
with tf.Session() as sess:
sess.run(iterator.initializer)
for _ in range(3):
data_dict = sess.run(iterator.get_next())
print(data_dict.keys())
print('image: {}, label: {}'.format(
data_dict['image'].shape,
data_dict['label'].shape))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: For demonstration purposes we download some demo data to ~/niftynet/data/
Step2: Use case
Step3: The images are always read into a 5D-array, representing
Step4: Use case
Step5: More properties
Step7: Using ImageReader with tf.data.Dataset
|
5,668
|
<ASSISTANT_TASK:>
Python Code:
%%latex
\begin{align}
a = \frac{1}{2}\\
\end{align}
print 'hello world'
for i in range(10):
print i
# get a list of all the available magics
% lsmagic
% env
# to list your environment variables.
%prun sum(range(100000))
%time range(10)
%timeit range(100)
! cd /Users/chengjun/github/
% matplotlib inline
# to show matplotlib plots inline the notebook.
import matplotlib.pyplot as plt
plt.plot(range(10), range(10), 'r-o')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Program code
Step2: !
|
5,669
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from scipy import integrate
def integrand(x, a):
return 1.0/(x**2 + a**2)
def integral_approx(a):
# Use the args keyword argument to feed extra arguments to your integrand
I, e = integrate.quad(integrand, 0, np.inf, args=(a,))
return I
def integral_exact(a):
return 0.5*np.pi/a
print("Numerical: ", integral_approx(1.0))
print("Exact : ", integral_exact(1.0))
assert True # leave this cell to grade the above integral
def integrand(x,a):
return np.sqrt(a**2-x**2)
def integral_approx(a):
I,e=integrate.quad(integrand, 0,a,args=(a,))
return I
def integral_exact(a):
return np.pi*a**2/4
print("Numerical: ", integral_approx(1.0))
print("Exact: ", integral_exact(1.0))
assert True # leave this cell to grade the above integral
def integrand(x,a,b):
return 1.0/(a+b*np.sin(x))
def integral_approx(a,b):
I,e=integrate.quad(integrand,0,2*np.pi,args=(a,b))
return I
def integral_exact(a,b):
return 2*np.pi/(np.sqrt(a**2-b**2))
print("Numerical: ", integral_approx(2.0,1.0))
print("Exact: ", integral_exact(2.0, 1.0))
assert True # leave this cell to grade the above integral
def integrand(x,a,b):
return np.exp(-a*x)*np.cos(b*x)
def integral_approx(a,b):
I,e = integrate.quad(integrand,0,np.inf, args=(a,b))
return I
def integral_exact(a,b):
return a/(a**2+b**2)
print("Numerical: ", integral_approx(1.0,1.0))
print("Exact: ", integral_exact(1.0,1.0))
assert True # leave this cell to grade the above integral
def integrand(x):
return np.exp(-x**2)
def integral_approx():
    I, e = integrate.quad(integrand, -np.inf, np.inf)
    return I
def integral_exact():
    return np.sqrt(np.pi)
print("Numerical: ", integral_approx())
print("Exact: ", integral_exact())
assert True # leave this cell to grade the above integral
def integrand(x,a):
return np.exp(-a*x**2)
def integral_approx(a):
I,e=integrate.quad(integrand, 0 , np.inf, args=(a,))
return I
def integral_exact(a):
return 0.5*np.sqrt(np.pi/a)
print("Numerical: ", integral_approx(2.0))
print("Exact: ", integral_exact(2.0))
assert True # leave this cell to grade the above integral
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Indefinite integrals
Step2: Integral 1
Step3: Integral 2
Step4: Integral 3
Step5: Integral 4
Step6: Integral 5
|
5,670
|
<ASSISTANT_TASK:>
Python Code:
def binarySearch(searchSpace, s, e, num):
    # Return the left-most index in searchSpace[s..e] whose value is >= num.
    ans = e
    while s <= e:
        mid = (s + e) // 2
        if searchSpace[mid] >= num:
            ans = mid
            e = mid - 1
        else:
            s = mid + 1
    return ans


def longestSubArr(arr, n):
    # Length of the longest subarray whose first element is >= its last element.
    # searchSpace keeps the strictly increasing prefix maxima of arr,
    # index keeps the positions where those maxima occur.
    searchSpace = [None] * n
    index = [None] * n
    j = 0
    ans = 0
    for i in range(n):
        if j == 0 or searchSpace[j - 1] < arr[i]:
            searchSpace[j] = arr[i]
            index[j] = i
            j += 1
        # Earliest prefix maximum that is >= arr[i] gives the best left endpoint.
        idx = binarySearch(searchSpace, 0, j - 1, arr[i])
        ans = max(ans, i - index[idx] + 1)
    return ans


if __name__ == "__main__":
    arr = [-5, -1, 7, 5, 1, -2]
    n = len(arr)
    print(longestSubArr(arr, n))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
5,671
|
<ASSISTANT_TASK:>
Python Code:
%%bash
BUCKET=<your-bucket-here> # Change to your bucket name
JOB_NAME=dqn_on_gcp_$(date -u +%y%m%d_%H%M%S)
REGION='us-central1' # Change to your bucket region
IMAGE_URI=gcr.io/qwiklabs-resources/rl-qwikstart/dqn_on_gcp@sha256:326427527d07f30a0486ee05377d120cac1b9be8850b05f138fc9b53ac1dd2dc
gcloud ai-platform jobs submit training $JOB_NAME \
--staging-bucket=gs://$BUCKET \
--region=$REGION \
--master-image-uri=$IMAGE_URI \
--scale-tier=BASIC_GPU \
--job-dir=gs://$BUCKET/$JOB_NAME \
--config=hyperparam.yaml
!python3 -m pip freeze | grep gym || python3 -m pip install --user gym==0.12.5
!python3 -m pip freeze | grep 'tensorflow==2\|tensorflow-gpu==2' || \
python3 -m pip install --user tensorflow==2
from collections import deque
import random
import gym
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models
env = gym.make('CartPole-v0')
print("The observation space is", env.observation_space)
print("The observation dimensions are", env.observation_space.shape)
print("The action space is", env.action_space)
print("The number of possible actions is", env.action_space.n)
def print_state(state, step, reward=None):
format_string = 'Step {0} - Cart X: {1:.3f}, Cart V: {2:.3f}, Pole A: {3:.3f}, Pole V:{4:.3f}, Reward:{5}'
print(format_string.format(step, *tuple(state), reward))
state = env.reset()
step = 0
print_state(state, step)
action = 0
state_prime, reward, done, info = env.step(action)
step += 1
print_state(state_prime, step, reward)
print("The game is over." if done else "The game can continue.")
print("Info:", info)
action = 1 # Change me: 0 Left, 1 Right
state_prime, reward, done, info = env.step(action)
step += 1
print_state(state_prime, step, reward)
print("The game is over." if done else "The game can continue.")
# [0, 1, 0, 1, 0, 1, ...]
actions = [x % 2 for x in range(200)]
state = env.reset()
step = 0
episode_reward = 0
done = False
while not done and step < len(actions):
action = actions[step] # In the future, our agents will define this.
state_prime, reward, done, info = env.step(action)
episode_reward += reward
step += 1
state = state_prime
print_state(state, step, reward)
end_statement = "Game over!" if done else "Ran out of actions!"
print(end_statement, "Score =", episode_reward)
def deep_q_network(
state_shape, action_size, learning_rate, hidden_neurons):
    """Creates a Deep Q Network to emulate Q-learning.

    Creates a two hidden-layer Deep Q Network. Similar to a typical neural
    network, the loss function is altered to reduce the difference between
    predicted Q-values and Target Q-values.

    Args:
        state_shape: a tuple of ints representing the observation space.
        action_size (int): the number of possible actions.
        learning_rate (float): the neural network's learning rate.
        hidden_neurons (int): the number of neurons to use per hidden
            layer.
    """
state_input = layers.Input(state_shape, name='frames')
actions_input = layers.Input((action_size,), name='mask')
hidden_1 = layers.Dense(hidden_neurons, activation='relu')(state_input)
hidden_2 = layers.Dense(hidden_neurons, activation='relu')(hidden_1)
q_values = layers.Dense(action_size)(hidden_2)
masked_q_values = layers.Multiply()([q_values, actions_input])
model = models.Model(
inputs=[state_input, actions_input], outputs=masked_q_values)
optimizer = tf.keras.optimizers.RMSprop(lr=learning_rate)
model.compile(loss='mse', optimizer=optimizer)
return model
class Memory():
    """Sets up a memory replay buffer for a Deep Q Network.

    A simple memory buffer for a DQN. This one randomly selects state
    transitions with uniform probability, but research has gone into
    other methods. For instance, a weight could be given to each memory
    depending on how big of a difference there is between predicted Q values
    and target Q values.

    Args:
        memory_size (int): How many elements to hold in the memory buffer.
        batch_size (int): The number of elements to include in a replay batch.
        gamma (float): The "discount rate" used to assess Q values.
    """
def __init__(self, memory_size, batch_size, gamma):
self.buffer = deque(maxlen=memory_size)
self.batch_size = batch_size
self.gamma = gamma
def add(self, experience):
        """Adds an experience into the memory buffer.

        Args:
            experience: a (state, action, reward, state_prime, done) tuple.
        """
self.buffer.append(experience)
def sample(self):
        """Uniformly selects from the replay memory buffer.

        Uniformly and randomly selects experiences to train the neural
        network on. Transposes the experiences to allow batch math on
        the experience components.

        Returns:
            (list): A list of lists with structure [
                [states], [actions], [rewards], [state_primes], [dones]
            ]
        """
buffer_size = len(self.buffer)
index = np.random.choice(
np.arange(buffer_size), size=self.batch_size, replace=False)
# Columns have different data types, so numpy array would be awkward.
batch = np.array([self.buffer[i] for i in index]).T.tolist()
states_mb = tf.convert_to_tensor(np.array(batch[0], dtype=np.float32))
actions_mb = np.array(batch[1], dtype=np.int8)
rewards_mb = np.array(batch[2], dtype=np.float32)
states_prime_mb = np.array(batch[3], dtype=np.float32)
dones_mb = batch[4]
return states_mb, actions_mb, rewards_mb, states_prime_mb, dones_mb
test_memory_size = 20
test_batch_size = 4
test_gamma = .9 # Unused here. For learning.
test_memory = Memory(test_memory_size, test_batch_size, test_gamma)
actions = [x % 2 for x in range(200)]
state = env.reset()
step = 0
episode_reward = 0
done = False
while not done and step < len(actions):
action = actions[step] # In the future, our agents will define this.
state_prime, reward, done, info = env.step(action)
episode_reward += reward
test_memory.add((state, action, reward, state_prime, done)) # New line here
step += 1
state = state_prime
print_state(state, step, reward)
end_statement = "Game over!" if done else "Ran out of actions!"
print(end_statement, "Score =", episode_reward)
test_memory.sample()
class Partial_Agent():
Sets up a reinforcement learning agent to play in a game environment.
def __init__(self, network, memory, epsilon_decay, action_size):
Initializes the agent with DQN and memory sub-classes.
Args:
network: A neural network created from deep_q_network().
memory: A Memory class object.
epsilon_decay (float): The rate at which to decay random actions.
action_size (int): The number of possible actions to take.
self.network = network
self.action_size = action_size
self.memory = memory
self.epsilon = 1 # The chance to take a random action.
self.epsilon_decay = epsilon_decay
def act(self, state, training=False):
Selects an action for the agent to take given a game state.
Args:
state (list of numbers): The state of the environment to act on.
training (bool): True if the agent is training.
Returns:
(int) The index of the action to take.
if training:
# Random actions until enough simulations to train the model.
if len(self.memory.buffer) >= self.memory.batch_size:
self.epsilon *= self.epsilon_decay
if self.epsilon > np.random.rand():
print("Exploration!")
return random.randint(0, self.action_size-1)
# If not acting randomly, take action with highest predicted value.
print("Exploitation!")
state_batch = np.expand_dims(state, axis=0)
predict_mask = np.ones((1, self.action_size,))
action_qs = self.network.predict([state_batch, predict_mask])
return np.argmax(action_qs[0])
state = env.reset()
# Define "brain"
space_shape = env.observation_space.shape
action_size = env.action_space.n
# Feel free to play with these
test_learning_rate = .2
test_hidden_neurons = 10
test_epsilon_decay = .95
test_network = deep_q_network(
space_shape, action_size, test_learning_rate, test_hidden_neurons)
test_agent = Partial_Agent(
test_network, test_memory, test_epsilon_decay, action_size)
action = test_agent.act(state, training=True)
print("Push Right" if action else "Push Left")
def learn(self):
Trains the Deep Q Network based on stored experiences.
batch_size = self.memory.batch_size
if len(self.memory.buffer) < batch_size:
return None
# Obtain random mini-batch from memory.
state_mb, action_mb, reward_mb, next_state_mb, done_mb = (
self.memory.sample())
# Get Q values for next_state.
predict_mask = np.ones(action_mb.shape + (self.action_size,))
next_q_mb = self.network.predict([next_state_mb, predict_mask])
next_q_mb = tf.math.reduce_max(next_q_mb, axis=1)
# Apply the Bellman Equation
target_qs = (next_q_mb * self.memory.gamma) + reward_mb
target_qs = tf.where(done_mb, reward_mb, target_qs)
# Match training batch to network output:
# target_q where action taken, 0 otherwise.
action_mb = tf.convert_to_tensor(action_mb, dtype=tf.int32)
action_hot = tf.one_hot(action_mb, self.action_size)
target_mask = tf.multiply(tf.expand_dims(target_qs, -1), action_hot)
return self.network.train_on_batch(
[state_mb, action_hot], target_mask, reset_metrics=False
)
Partial_Agent.learn = learn
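# (Intuition sketch with made-up numbers, not from the original notebook.) For a
# non-terminal step with gamma = 0.9, reward = 1 and a best predicted next-state
# Q-value of 2.0, the Bellman target computed above is reward + gamma * max_a Q(s', a):
sketch_target = 1.0 + 0.9 * 2.0 # -> 2.8; tf.where() swaps in the raw reward when done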
test_agent = Partial_Agent(
test_network, test_memory, test_epsilon_decay, action_size)
state = env.reset()
step = 0
episode_reward = 0
done = False
while not done:
action = test_agent.act(state, training=True)
state_prime, reward, done, info = env.step(action)
episode_reward += reward
test_agent.memory.add((state, action, reward, state_prime, done)) # New line here
step += 1
state = state_prime
print_state(state, step, reward)
print(test_agent.learn())
print("Game over! Score =", episode_reward)
def _parse_arguments(argv):
Parses command-line arguments.
parser = argparse.ArgumentParser()
parser.add_argument(
'--game',
help='Which open ai gym game to play',
type=str,
default='CartPole-v0')
parser.add_argument(
'--episodes',
help='The number of episodes to simulate',
type=int,
default=200)
parser.add_argument(
'--learning_rate',
help='Learning rate for the neural network',
type=float,
default=0.2)
parser.add_argument(
'--hidden_neurons',
help='The number of neurons to use per layer',
type=int,
default=30)
parser.add_argument(
'--gamma',
help='The gamma or "discount" factor to discount future states',
type=float,
default=0.5)
parser.add_argument(
'--explore_decay',
help='The rate at which to decay the probability of a random action',
type=float,
default=0.1)
parser.add_argument(
'--memory_size',
help='Size of the memory buffer',
type=int,
default=100000)
parser.add_argument(
'--memory_batch_size',
help='The amount of memories to sample from the buffer while training',
type=int,
default=8)
parser.add_argument(
'--job-dir',
help='Directory where to save the given model',
type=str,
default='models/')
parser.add_argument(
'--print_rate',
help='How often to print the score, 0 if never',
type=int,
default=0)
parser.add_argument(
'--eval_rate',
help=While training, perform an on-policy simulation and record
metrics to tensorboard every <record_rate> steps, 0 if never. Use
higher values to avoid hyperparameter tuning "too many metrics"
error,
type=int,
default=20)
return parser.parse_known_args(argv)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The above command sends a hyperparameter tuning job to the Google Cloud AI Platform. It's a service that sets up and scales distributed training so data scientists and machine learning engineers do not have to worry about technical infrastructure. Usually, it automatically selects the container environment, but we're going to take advantage of a feature to specify our own environment with Docker. Not only will this allow us to install our game environment to be deployed to the cloud, but it will also significantly speed up hyperparameter tuning time as each worker can skip the library installation steps.
Step2: Note
Step3: The reset method will restart the environment and return a starting state.
Step4: Run the cell below repeatedly until the game is over, changing the action to push the cart left (0) or right (1). The game is considered "won" when the pole can stay up for an average of 195 steps over 100 games. How far can you get? An agent acting randomly can only survive about 10 steps.
Step5: We can make our own policy and create a loop to play through an episode (one full simulation) of the game. Below, actions are generated to alternate between pushing the cart left and right. The code is very similar to how our agents will be interacting with the game environment.
Step7: It's a challenge to get to 200! We could repeatedly experiment to find the best heuristics to beat the game, or we could leave all that work to the robot. Let's create an intelligence to figure this out for us.
Step11: Notice any other atypical aspects of this network?
Step12: Let's make a fake buffer and play around with it! We'll add the memory into our game play code to start collecting experiences.
Step13: Now, let's sample the memory by running the cell below multiple times. It's different each call, and that's on purpose. Just like with other neural networks, it's important to randomly sample so that our agent can learn from many different situations.
Step17: But before the agent has any memories and has learned anything, how is it supposed to act? That comes down to Exploration vs Exploitation. The trouble is that in order to learn, risks with the unknown need to be made. There's no right answer, but there is a popular answer. We'll start by acting randomly, and over time, we will slowly decay our chance to act randomly.
Step18: Let's define the agent and get a starting state to see how it would act without any training.
Step19: Run the cell below multiple times. Since we're decaying the random action rate after every action, it's only a matter of time before the agent exploits more than it explores.
Step21: Memories, a brain, and a healthy dose of curiosity. We finally have all the ingredients for our agent to learn. After all, as the Scarecrow from the Wizard of Oz said
Step22: Nice! We finally have an intelligence that can walk and talk and... well ok, this intelligence is too simple to be able to do those things, but maybe it can learn to push a cart with a pole on it. Let's update our training loop to use our new agent.
Step25: Hypertuning
|
5,672
|
<ASSISTANT_TASK:>
Python Code:
!ogr2ogr ../scratch/deelbekkens_wgs84 -t_srs "EPSG:4326" ../data/deelbekkens/Deelbekken.shp
!ogr2ogr --help
!ogr2ogr -f 'Geojson' ../scratch/provinces.geojson WFS:"https://geoservices.informatievlaanderen.be/overdrachtdiensten/VRBG/wfs" Refprv
provinces = gpd.read_file("../scratch/provinces.geojson")
provinces.plot()
!ogr2ogr -f 'Geojson' ../scratch/antwerp_prov.geojson WFS:"https://geoservices.informatievlaanderen.be/overdrachtdiensten/VRBG/wfs" Refprv -where "NAAM = 'Antwerpen'"
antwerp = gpd.read_file("../scratch/antwerp_prov.geojson")
antwerp.plot()
!ogr2ogr -f 'Geojson' ../scratch/metingen_fytoplankton.geojson WFS:"https://geoservices.informatievlaanderen.be/overdrachtdiensten/MeetplOppervlwaterkwal/wfs" Mtploppw -where "FYTOPLANKT = '1'"
import mplleaflet
fyto = gpd.read_file("../scratch/metingen_fytoplankton.geojson")
fyto.head()
fyto.to_crs('+init=epsg:4326').plot(markersize=5)
mplleaflet.display()
fyto.head()
!ogr2ogr ../scratch/subcat.shp ../data/deelbekkens/Deelbekken.shp -where "DEELBEKKEN = '10-10'"
import zipfile
zip_ref = zipfile.ZipFile("../data/NE1_50m_SR.zip", 'r')
zip_ref.extractall("../scratch")
zip_ref.close()
!gdalwarp ../scratch/NE1_50M_SR/NE1_50M_SR.tif ../scratch/cliptest.tif -cutline "../scratch/subcat.shp" -crop_to_cutline
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
%matplotlib inline
img=mpimg.imread('../scratch/cliptest.tif')
plt.imshow(img)
import subprocess
inraster = '../scratch/NE1_50M_SR/NE1_50M_SR.tif'
outraster = inraster.replace('.tif', '{}.tif'.format("_out")) # same location, but adding _out to the output
inshape = "../scratch/subcat.shp"
subprocess.call(['gdalwarp', inraster, outraster, '-cutline', inshape,
'-crop_to_cutline', '-overwrite'])
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
%matplotlib inline
img=mpimg.imread('../scratch/NE1_50M_SR/NE1_50M_SR_out.tif')
plt.imshow(img)
def clip_raster(inraster, outraster, invector):
clip a raster image with a vector file
Parameters
----------
inraster : GDAL compatible raster format
outraster : GDAL compatible raster format
invector : GDAL compatible vector format
response = subprocess.call(['gdalwarp', inraster, outraster, '-cutline',
invector, '-crop_to_cutline', '-overwrite'])
return(response)
inraster = '../scratch/NE1_50M_SR/NE1_50M_SR.tif'
outraster = inraster.replace('.tif', '{}.tif'.format("_out")) # same location, but adding _out to the output
inshape = "../scratch/subcat.shp"
clip_raster(inraster, outraster, inshape)
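# (Added sketch.) clip_raster() returns gdalwarp's exit code, so the call above can
# also be written with an explicit success check:
status = clip_raster(inraster, outraster, inshape)
print("gdalwarp succeeded" if status == 0 else "gdalwarp failed with code %s" % status)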
provinces
inraster = '../scratch/NE1_50M_SR/NE1_50M_SR.tif'
outraster = inraster.replace('.tif', '{}.tif'.format("_OostVlaanderen"))
invector = "../scratch/provinces.geojson"
subprocess.call(['gdalwarp', inraster, outraster, '-cutline', invector,
'-cwhere', "NAAM='OOST-VLAANDEREN'",
'-crop_to_cutline',
'-overwrite'])
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
%matplotlib inline
img=mpimg.imread('../scratch/NE1_50M_SR/NE1_50M_SR_OostVlaanderen.tif')
plt.imshow(img)
import ogr
inraster = '../scratch/NE1_50M_SR/NE1_50M_SR.tif'
invector = "../scratch/provinces.geojson"
# GDAL magic...
ds = ogr.Open(invector)
lyr = ds.GetLayer(0)
lyr.ResetReading()
ft = lyr.GetNextFeature()
# clipping for each of the features (provinces in this case)
while ft:
province_name = ft.GetFieldAsString('NAAM')
print(province_name)
outraster = inraster.replace('.tif', '_%s.tif' % province_name.replace('-', '_'))
subprocess.call(['gdalwarp', inraster, outraster, '-cutline', invector,
'-crop_to_cutline', '-cwhere', "NAAM='%s'" %province_name])
ft = lyr.GetNextFeature()
ds = None
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
%matplotlib inline
img=mpimg.imread('../scratch/NE1_50M_SR/NE1_50M_SR_West_Vlaanderen.tif') # check also Antwerpen,...
plt.imshow(img)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: What is this combination of commands?
Step2: but there are great online resources with good examples you can easily copy paste for your own applications...
Step3: We can start working with this data...
Step4: Actually, GDAL can directly query the WFS data
Step5: I do know that the Meetplaatsen Oppervlaktewaterkwaliteit are also available as a WFS web service. However, I'm only interested in the locations for fytoplankton
Step6: Actually, the same type of subselection are also possible on shapefiles,...
Step7: (If you're wondering how I know how to set up these commands and arguments, check the (draft) introduction_webservices.ipynb in the scratch folder.)
Step8: The GDAL function that supports the clipping of a raster file is called gdalwarp. Again, the documentation looks rather overwhelming... Let's start with an example execution
Step9: ! this is a jupyter notebook-thing, telling it we're running something on the command line instead of in Python
Step10: This is off course a dummy example (to keep runtime low), but it illustrates the concept.
Step11: Doing the same as above, but actually using Python code to run the command with given variables as input
Step12: Remark when GDAL provides a zero as return statement, this is a GOOD sign!
Step14: Hence, the result is the same, but calling the command from Python. By writing a Python function for this routine, I do have a reusable functionality in my toolbox that I can load in any other Python script
Step15: More advanced clipping
Step16: We can actually use a selection of the provinces data set to execute the clipping
Step17: By having it as a Python call, we can do the same action for each of the individual provinces in the dataset and create for each of the provinces a clipped raster data set
|
5,673
|
<ASSISTANT_TASK:>
Python Code:
!conda install pytest pytest-cov
!mkdir #complete
!touch #complete
%%file <yourpackage>/tests/test_something.py
def test_something_func():
assert #complete
from <yourpackage>.tests import test_something
test_something.test_something_func()
!py.test
!py.test
!py.test --cov=<yourproject> tests/ #complete
#%%file <yourproject>/<filename>.py #complete, or just use your editor
# `math` here is for *scalar* math... normally you'd use numpy but this makes it a bit simpler to debug
import math
inf = float('inf') # this is a quick-and-easy way to get the "infinity" value
def function_a(angle=180):
anglerad = math.radians(angle)
return math.sin(anglerad/2)/math.sin(anglerad)
#%%file <yourproject>/<filename>.py #complete, or just use your editor
def function_b(value):
if value < 0:
return value - 1
else:
value2 = subfunction_b(value + 1)
return value + value2
def subfunction_b(inp):
vals_to_accum = []
for i in range(10):
vals_to_accum.append(inp ** (i/10))
if vals_to_accum[-1] > 2:
vals.append(100)
# really you would use numpy to do this kind of number-crunching... but we're doing this for the sake of example right now
return sum(vals_to_accum)
#%%file <yourproject>/<filename>.py #complete, or just use your editor
import math
# Note: to avoid having to worry about this, you should just use `astropy.coordinates`.
def angle_to_sexigesimal(angle_in_degrees, decimals=3):
Convert the given angle to a sexigesimal string of hours of RA.
Parameters
----------
angle_in_degrees : float
A scalar angle, expressed in degrees
Returns
-------
hms_str : str
The sexigesimal string giving the hours, minutes, and seconds of RA for the given `angle_in_degrees`
if math.floor(decimals) != decimals:
raise ValueError('decimals should be an integer!')
hours_num = angle_in_degrees*24/180
hours = math.floor(hours_num)
min_num = (hours_num - hours)*60
minutes = math.floor(min_num)
seconds = (min_num - minutes)*60
format_string = '{}:{}:{:.' + str(decimals) + 'f}'
return format_string.format(hours, minutes, seconds)
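# (Illustrative test sketch only - the exercise asks you to write your own tests.
# The expected string assumes the conversion exactly as coded above.)
def test_angle_to_sexigesimal_zero():
    assert angle_to_sexigesimal(0) == '0:0:0.000'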
#%%file <yourproject>/<filename>.py #complete, or just use your editor
import numpy as np
def function_d(array1=np.arange(10)*2, array2=np.arange(10), operation='-'):
Makes a matrix where the [i,j]th element is array1[i] <operation> array2[j]
if operation == '+':
return array1[:, np.newaxis] + array2
elif operation == '-':
return array1[:, np.newaxis] - array2
elif operation == '*':
return array1[:, np.newaxis] * array2
elif operation == '/':
return array1[:, np.newaxis] / array2
else:
raise ValueError('Unrecognized operation "{}"'.format(operation))
!py.test
%%file .travis.yml
language: python
python:
- "3.6"
# command to install dependencies
#install: "pip install numpy" #uncomment this if your code depends on numpy or similar
# command to run tests
script: pytest
!git #complete
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1b
Step2: 1d
Step3: 1e
Step4: 1f
Step5: 1g
Step6: This should yield a report, which you can use to decide if you need to add more tests to achieve complete coverage. Check out the command line arguments to see if you can get a more detailed line-by-line report.
Step7: 2b
Step9: 2c
Step11: 2d
Step12: Problem 3
Step13: 3b
Step14: Be sure to commit and push this to github before proceeding
|
5,674
|
<ASSISTANT_TASK:>
Python Code:
# the output of plotting commands is displayed inline within frontends,
# directly below the code cell that produced it
%matplotlib inline
from time import time
# this python library provides generic shallow (copy) and deep copy (deepcopy) operations
from copy import deepcopy
# import from Ocelot main modules and functions
from ocelot import *
# import from Ocelot graphical modules
from ocelot.gui.accelerator import *
# load beam distribution
# this function convert CSRtrack beam distribution to Ocelot format - ParticleArray. ParticleArray is designed for tracking.
# in order to work with converters we have to import specific module from ocelot.adaptors
from ocelot.adaptors.csrtrack2ocelot import *
# load and convert CSRtrack file to OCELOT beam distribution
# p_array_i = csrtrackBeam2particleArray("in.fmt1", orient="H")
# save ParticleArray to compresssed numpy array
# save_particle_array("test.npz", p_array_i)
p_array_i = load_particle_array("csr_beam.npz")
# show the longitudinal phase space
plt.plot(-p_array_i.tau()*1000, p_array_i.p(), "r.")
plt.xlabel("S, mm")
plt.ylabel("dE/pc")
b1 = Bend(l = 0.500094098121, angle=-0.03360102249639, e1=0.0, e2=-0.03360102249639, gap=0, tilt=0, fint=0.0, fintx=0.0, eid='BB.393.B2')
b2 = Bend(l = 0.500094098121, angle=0.03360102249639, e1=0.03360102249639, e2=0.0, gap=0, tilt=0, fint=0.0, fintx=0.0, eid='BB.402.B2')
b3 = Bend(l = 0.500094098121, angle=0.03360102249639, e1=0.0, e2=0.03360102249639, gap=0, tilt=0, fint=0.0, fintx=0.0, eid='BB.404.B2')
b4 = Bend(l = 0.500094098121, angle=-0.03360102249639, e1=-0.03360102249639, e2=0.0, gap=0, tilt=0, fint=0.0, fintx=0.0, eid='BB.413.B2')
d_slope = Drift(l=8.5/np.cos(b2.angle))
start_csr = Marker()
stop_csr = Marker()
# define cell from the bends and drifts
cell = [start_csr, Drift(l=0.1), b1 , d_slope , b2, Drift(l=1.5) , b3, d_slope, b4, Drift(l= 1.), stop_csr]
# initialization of tracking method
method = MethodTM()
# for second order tracking we have to choose SecondTM
method.global_method = SecondTM
# for first order tracking uncomment next line
# method.global_method = TransferMap
lat = MagneticLattice(cell, method=method)
csr = CSR()
csr.n_bin = 300
csr.m_bin = 5
csr.sigma_min = 0.2e-6
navi = Navigator(lat)
# track without CSR effect
p_array_no = deepcopy(p_array_i)
print("\n tracking without CSR effect .... ")
start = time()
tws_no, p_array_no = track(lat, p_array_no, navi)
print("\n time exec:", time() - start, "sec")
# again create Navigator with needed step in [m]
navi = Navigator(lat)
navi.unit_step = 0.1 # m
# add csr process to navigator with start and stop elements
navi.add_physics_proc(csr, start_csr, stop_csr)
# tracking
start = time()
p_array_csr = deepcopy(p_array_i)
print("\n tracking with CSR effect .... ")
tws_csr, p_array_csr = track(lat, p_array_csr, navi)
print("\n time exec:", time() - start, "sec")
# recalculate reference particle
from ocelot.cpbd.beam import *
recalculate_ref_particle(p_array_csr)
recalculate_ref_particle(p_array_no)
# load and convert CSRtrack file to OCELOT beam distribution
# distribution after BC2
# p_array_out = csrtrackBeam2particleArray("out.fmt1", orient="H")
# save ParticleArray to compressed numpy array
# save_particle_array("scr_track.npz", p_array_out)
p_array_out = load_particle_array("scr_track.npz")
# standard matplotlib functions
plt.figure(2, figsize=(10, 6))
plt.subplot(121)
plt.plot(p_array_no.tau()*1000, p_array_no.p(), 'g.', label="OCELOT no CSR")
plt.plot(p_array_csr.tau()*1000, p_array_csr.p(), 'r.', label="OCELOT CSR")
plt.plot(p_array_out.tau()*1000, p_array_out.p(), 'b.', label="CSRtrack")
plt.legend(loc=3)
plt.xlabel("s, mm")
plt.ylabel("dE/pc")
plt.grid(True)
plt.subplot(122)
plt.plot(p_array_no.tau()*1000, p_array_no.p(), 'g.', label="Ocelot no CSR")
plt.plot(p_array_out.tau()*1000, p_array_out.p(), 'b.', label="CSRtrack")
plt.plot(p_array_csr.tau()*1000, p_array_csr.p(), 'r.', label="OCELOT CSR")
plt.legend(loc=3)
plt.xlabel("s, mm")
plt.ylabel("dE/pc")
plt.grid(True)
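# (Optional diagnostic sketch, assuming p_array.p() holds the relative momentum
# deviation plotted above.) Compare the rms energy spread with and without CSR:
print("rms dE/pc without CSR:", np.std(p_array_no.p()))
print("rms dE/pc with CSR:", np.std(p_array_csr.p()))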
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load beam distribution from CSRtrack format
Step2: create BC2 lattice
Step3: Initialization tracking method and MagneticLattice object
Step4: Create CSR object
Step5: Track particles with and without CSR effect
|
5,675
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import sklearn
df = load_data()
from sklearn.preprocessing import MultiLabelBinarizer
mlb = MultiLabelBinarizer()
df_out = df.join(
pd.DataFrame(
mlb.fit_transform(df.pop('Col4')),
index=df.index,
columns=mlb.classes_))
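# (Toy illustration added as a sketch; load_data() above is assumed to return a
# DataFrame with a list-valued 'Col4' column shaped like this one.)
toy = pd.DataFrame({'Col1': [1, 2, 3], 'Col4': [['a', 'b'], ['b'], ['a', 'c']]})
toy_mlb = MultiLabelBinarizer()
toy_out = toy.join(
    pd.DataFrame(toy_mlb.fit_transform(toy.pop('Col4')),
                 index=toy.index, columns=toy_mlb.classes_))
print(toy_out) # one 0/1 indicator column per label: 'a', 'b', 'c'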
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
5,676
|
<ASSISTANT_TASK:>
Python Code:
import os
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# Google Cloud Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_GOOGLE_CLOUD_NOTEBOOK:
USER_FLAG = "--user"
! pip3 install {USER_FLAG} google-cloud-aiplatform
! pip3 install {USER_FLAG} google-cloud-storage
! pip3 install {USER_FLAG} numpy
! pip3 install {USER_FLAG} cloudml-hypertune
! pip3 install {USER_FLAG} --upgrade tensorflow
! pip3 install {USER_FLAG} --upgrade pillow
! pip3 install {USER_FLAG} --upgrade tf-agents
! pip3 install {USER_FLAG} --upgrade tensorboard-plugin-profile
# Automatically restart kernel after installs
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
import os
# Get your Google Cloud project ID from gcloud
if not os.getenv("IS_TESTING"):
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID: ", PROJECT_ID)
if PROJECT_ID == "" or PROJECT_ID is None:
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
import os
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# If on Google Cloud Notebooks, then don't execute this code
if not IS_GOOGLE_CLOUD_NOTEBOOK:
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"} The bucket should be in same region as uCAIP. The bucket should not be multi-regional for custom training jobs to work.
REGION = "[your-region]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
! gsutil mb -l $REGION $BUCKET_NAME
! gsutil ls -al $BUCKET_NAME
import functools
import json
import os
from collections import defaultdict
from typing import Callable, Dict, List, Optional, TypeVar
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from google.cloud import aiplatform, storage
from tf_agents.agents import TFAgent
from tf_agents.bandits.agents import lin_ucb_agent
from tf_agents.bandits.agents.examples.v2 import trainer
from tf_agents.bandits.environments import (environment_utilities,
movielens_py_environment)
from tf_agents.bandits.metrics import tf_metrics as tf_bandit_metrics
from tf_agents.drivers import dynamic_step_driver
from tf_agents.environments import TFEnvironment, tf_py_environment
from tf_agents.eval import metric_utils
from tf_agents.metrics import tf_metrics
from tf_agents.metrics.tf_metric import TFStepMetric
from tf_agents.policies import policy_saver
if tf.__version__[0] != "2":
raise Exception("The trainer only runs with TensorFlow version 2.")
T = TypeVar("T")
ROOT_DIR = f"{BUCKET_NAME}/artifacts" # @param {type:"string"} Root directory for writing logs/summaries/checkpoints.
ARTIFACTS_DIR = f"{BUCKET_NAME}/artifacts" # @param {type:"string"} Where the trained model will be saved and restored.
PROFILER_DIR = f"{BUCKET_NAME}/profiler" # @param {type:"string"} Directory for TensorBoard Profiler artifacts.
DATA_PATH = f"{BUCKET_NAME}/artifacts/u.data" # Location of the MovieLens 100K dataset's "u.data" file.
RAW_BUCKET_NAME = BUCKET_NAME[5:] # Remove the prefix `gs://`.
# Copy the sample data into your DATA_PATH
! gsutil cp "gs://cloud-samples-data/vertex-ai/community-content/tf_agents_bandits_movie_recommendation_with_kfp_and_vertex_sdk/u.data" $DATA_PATH
# Set hyperparameters.
BATCH_SIZE = 8 # @param {type:"integer"} Training and prediction batch size.
TRAINING_LOOPS = 5 # @param {type:"integer"} Number of training iterations.
STEPS_PER_LOOP = 2 # @param {type:"integer"} Number of driver steps per training iteration.
# Set MovieLens simulation environment parameters.
RANK_K = 20 # @param {type:"integer"} Rank for matrix factorization in the MovieLens environment; also the observation dimension.
NUM_ACTIONS = 20 # @param {type:"integer"} Number of actions (movie items) to choose from.
PER_ARM = False # Use the non-per-arm version of the MovieLens environment.
# Set agent parameters.
TIKHONOV_WEIGHT = 0.001 # @param {type:"number"} LinUCB Tikhonov regularization weight.
AGENT_ALPHA = 10.0 # @param {type:"number"} LinUCB exploration parameter that multiplies the confidence intervals.
# Define RL environment.
env = movielens_py_environment.MovieLensPyEnvironment(
DATA_PATH, RANK_K, BATCH_SIZE, num_movies=NUM_ACTIONS, csv_delimiter="\t")
environment = tf_py_environment.TFPyEnvironment(env)
# Define RL agent/algorithm.
agent = lin_ucb_agent.LinearUCBAgent(
time_step_spec=environment.time_step_spec(),
action_spec=environment.action_spec(),
tikhonov_weight=TIKHONOV_WEIGHT,
alpha=AGENT_ALPHA,
dtype=tf.float32,
accepts_per_arm_features=PER_ARM)
print("TimeStep Spec (for each batch):\n", agent.time_step_spec, "\n")
print("Action Spec (for each batch):\n", agent.action_spec, "\n")
print("Reward Spec (for each batch):\n", environment.reward_spec(), "\n")
# Define RL metric.
optimal_reward_fn = functools.partial(
environment_utilities.compute_optimal_reward_with_movielens_environment,
environment=environment)
regret_metric = tf_bandit_metrics.RegretMetric(optimal_reward_fn)
metrics = [regret_metric]
def train(
root_dir: str,
agent: TFAgent,
environment: TFEnvironment,
training_loops: int,
steps_per_loop: int,
additional_metrics: Optional[List[TFStepMetric]] = None,
training_data_spec_transformation_fn: Optional[Callable[[T], T]] = None,
) -> Dict[str, List[float]]:
Performs `training_loops` iterations of training on the agent's policy.
Uses the `environment` as the problem formulation and source of immediate
feedback and the agent's algorithm, to perform `training-loops` iterations
of on-policy training on the policy.
If one or more baseline_reward_fns are provided, the regret is computed
against each one of them. Here is example baseline_reward_fn:
def baseline_reward_fn(observation, per_action_reward_fns):
rewards = ... # compute reward for each arm
optimal_action_reward = ... # take the maximum reward
return optimal_action_reward
Args:
root_dir: Path to the directory where training artifacts are written.
agent: An instance of `TFAgent`.
environment: An instance of `TFEnvironment`.
training_loops: An integer indicating how many training loops should be run.
steps_per_loop: An integer indicating how many driver steps should be
executed and presented to the trainer during each training loop.
additional_metrics: Optional; list of metric objects to log, in addition to
default metrics `NumberOfEpisodes`, `AverageReturnMetric`, and
`AverageEpisodeLengthMetric`.
training_data_spec_transformation_fn: Optional; function that transforms
the data items before they get to the replay buffer.
Returns:
A dict mapping metric names (eg. "AverageReturnMetric") to a list of
intermediate metric values over `training_loops` iterations of training.
if training_data_spec_transformation_fn is None:
data_spec = agent.policy.trajectory_spec
else:
data_spec = training_data_spec_transformation_fn(
agent.policy.trajectory_spec)
replay_buffer = trainer.get_replay_buffer(data_spec, environment.batch_size,
steps_per_loop)
# `step_metric` records the number of individual rounds of bandit interaction;
# that is, (number of trajectories) * batch_size.
step_metric = tf_metrics.EnvironmentSteps()
metrics = [
tf_metrics.NumberOfEpisodes(),
tf_metrics.AverageEpisodeLengthMetric(batch_size=environment.batch_size)
]
if additional_metrics:
metrics += additional_metrics
if isinstance(environment.reward_spec(), dict):
metrics += [tf_metrics.AverageReturnMultiMetric(
reward_spec=environment.reward_spec(),
batch_size=environment.batch_size)]
else:
metrics += [
tf_metrics.AverageReturnMetric(batch_size=environment.batch_size)]
# Store intermediate metric results, indexed by metric names.
metric_results = defaultdict(list)
if training_data_spec_transformation_fn is not None:
def add_batch_fn(data): return replay_buffer.add_batch(training_data_spec_transformation_fn(data))
else:
add_batch_fn = replay_buffer.add_batch
observers = [add_batch_fn, step_metric] + metrics
driver = dynamic_step_driver.DynamicStepDriver(
env=environment,
policy=agent.collect_policy,
num_steps=steps_per_loop * environment.batch_size,
observers=observers)
training_loop = trainer.get_training_loop_fn(
driver, replay_buffer, agent, steps_per_loop)
saver = policy_saver.PolicySaver(agent.policy)
for _ in range(training_loops):
training_loop()
metric_utils.log_metrics(metrics)
for metric in metrics:
metric.tf_summaries(train_step=step_metric.result())
metric_results[type(metric).__name__].append(metric.result().numpy())
saver.save(root_dir)
return metric_results
tf.profiler.experimental.start(PROFILER_DIR)
metric_results = train(
root_dir=ROOT_DIR,
agent=agent,
environment=environment,
training_loops=TRAINING_LOOPS,
steps_per_loop=STEPS_PER_LOOP,
additional_metrics=metrics)
tf.profiler.experimental.stop()
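# (Added sketch.) After this local training run, the learned policy can be queried
# directly: environment.reset() returns a batched TimeStep, and policy.action()
# picks one of the NUM_ACTIONS movies for every observation in the batch.
sketch_time_step = environment.reset()
sketch_action_step = agent.policy.action(sketch_time_step)
print("Recommended movie indices:", sketch_action_step.action.numpy())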
def plot(metric_results, metric_name):
plt.plot(metric_results[metric_name])
plt.ylabel(metric_name)
plt.xlabel("Step")
plt.title("{} versus Step".format(metric_name))
plot(metric_results, "RegretMetric")
plot(metric_results, "AverageReturnMetric")
# If on Google Cloud Notebooks, then don't execute this code.
if not IS_GOOGLE_CLOUD_NOTEBOOK:
if "google.colab" in sys.modules:
# Load the TensorBoard notebook extension.
%load_ext tensorboard
# If on Google Cloud Notebooks, then don't execute this code.
if not IS_GOOGLE_CLOUD_NOTEBOOK:
if "google.colab" in sys.modules:
%tensorboard --logdir $PROFILER_DIR
! python3 -m unittest src/tests/test_policy_util.py
! python3 -m unittest src/tests/test_task.py
HPTUNING_TRAINING_CONTAINER = "hptuning-training-custom-container" # @param {type:"string"} Name of the container image.
cloudbuild_yaml = steps:
- name: 'gcr.io/kaniko-project/executor:latest'
args: ['--destination=gcr.io/{PROJECT_ID}/{HPTUNING_TRAINING_CONTAINER}:latest',
'--cache=true',
'--cache-ttl=99h']
options:
machineType: 'E2_HIGHCPU_8'.format(
PROJECT_ID=PROJECT_ID,
HPTUNING_TRAINING_CONTAINER=HPTUNING_TRAINING_CONTAINER,
)
with open("cloudbuild.yaml", "w") as fp:
fp.write(cloudbuild_yaml)
%%writefile Dockerfile
# Specifies base image and tag.
FROM gcr.io/google-appengine/python
WORKDIR /root
# Installs additional packages.
RUN pip3 install cloudml-hypertune==0.1.0.dev6
RUN pip3 install google-cloud-storage==1.39.0
RUN pip3 install tensorflow==2.5.0
RUN pip3 install tensorboard-plugin-profile==2.5.0
RUN pip3 install tf-agents==0.8.0
RUN pip3 install matplotlib==3.4.2
# Copies training code to the Docker image.
COPY src/training /root/src/training
# Sets up the entry point to invoke the task.
ENTRYPOINT ["python3", "-m", "src.training.task"]
! gcloud builds submit --config cloudbuild.yaml
RUN_HYPERPARAMETER_TUNING = True # Execute hyperparameter tuning instead of regular training.
TRAIN_WITH_BEST_HYPERPARAMETERS = False # Do not train.
HPTUNING_RESULT_DIR = "hptuning/" # @param {type: "string"} Directory to store the best hyperparameter(s) in `BUCKET_NAME` and locally (temporarily).
HPTUNING_RESULT_PATH = os.path.join(HPTUNING_RESULT_DIR, "result.json") # @param {type: "string"} Path to the file containing the best hyperparameter(s).
aiplatform.init(project=PROJECT_ID, location=REGION, staging_bucket=BUCKET_NAME)
def create_hyperparameter_tuning_job_sample(
project: str,
display_name: str,
image_uri: str,
args: List[str],
location: str = "us-central1",
api_endpoint: str = "us-central1-aiplatform.googleapis.com"
) -> None:
Creates a hyperparameter tuning job using a custom container.
Args:
project: GCP project ID.
display_name: GCP console display name for the hyperparameter tuning job in
Vertex AI.
image_uri: URI to the hyperparameter tuning container image in Container
Registry.
args: Arguments passed to the container.
location: Service location.
api_endpoint: API endpoint, eg. `<location>-aiplatform.googleapis.com`.
Returns:
A string of the hyperparameter tuning job ID.
# The AI Platform services require regional API endpoints.
client_options = {"api_endpoint": api_endpoint}
# Initialize client that will be used to create and send requests.
# This client only needs to be created once, and can be reused for multiple requests.
client = aiplatform.gapic.JobServiceClient(client_options=client_options)
# study_spec
# Metric based on which to evaluate which combination of hyperparameter(s) to choose
metric = {
"metric_id": "final_average_return", # Metric you report to Vertex AI.
"goal": aiplatform.gapic.StudySpec.MetricSpec.GoalType.MAXIMIZE,
}
# Hyperparameter(s) to tune
training_loops = {
"parameter_id": "training-loops",
"discrete_value_spec": {"values": [4, 16]},
"scale_type": aiplatform.gapic.StudySpec.ParameterSpec.ScaleType.UNIT_LINEAR_SCALE,
}
steps_per_loop = {
"parameter_id": "steps-per-loop",
"discrete_value_spec": {"values": [1, 2]},
"scale_type": aiplatform.gapic.StudySpec.ParameterSpec.ScaleType.UNIT_LINEAR_SCALE,
}
# trial_job_spec
machine_spec = {
"machine_type": "n1-standard-4",
"accelerator_type": aiplatform.gapic.AcceleratorType.ACCELERATOR_TYPE_UNSPECIFIED,
"accelerator_count": None,
}
worker_pool_spec = {
"machine_spec": machine_spec,
"replica_count": 1,
"container_spec": {
"image_uri": image_uri,
"args": args,
},
}
# hyperparameter_tuning_job
hyperparameter_tuning_job = {
"display_name": display_name,
"max_trial_count": 4,
"parallel_trial_count": 2,
"study_spec": {
"metrics": [metric],
"parameters": [training_loops, steps_per_loop],
"algorithm": aiplatform.gapic.StudySpec.Algorithm.RANDOM_SEARCH,
},
"trial_job_spec": {"worker_pool_specs": [worker_pool_spec]},
}
parent = f"projects/{project}/locations/{location}"
# Create job
response = client.create_hyperparameter_tuning_job(
parent=parent,
hyperparameter_tuning_job=hyperparameter_tuning_job)
job_id = response.name.split("/")[-1]
print("Job ID:", job_id)
print("Job config:", response)
return job_id
args = [
f"--data-path={DATA_PATH}",
f"--batch-size={BATCH_SIZE}",
f"--rank-k={RANK_K}",
f"--num-actions={NUM_ACTIONS}",
f"--tikhonov-weight={TIKHONOV_WEIGHT}",
f"--agent-alpha={AGENT_ALPHA}",
]
if RUN_HYPERPARAMETER_TUNING:
args.append("--run-hyperparameter-tuning")
elif TRAIN_WITH_BEST_HYPERPARAMETERS:
args.append("--train-with-best-hyperparameters")
job_id = create_hyperparameter_tuning_job_sample(
project=PROJECT_ID,
display_name="movielens-hyperparameter-tuning-job",
image_uri=f"gcr.io/{PROJECT_ID}/{HPTUNING_TRAINING_CONTAINER}:latest",
args=args,
location=REGION,
api_endpoint=f"{REGION}-aiplatform.googleapis.com")
def get_hyperparameter_tuning_job_sample(
project: str,
hyperparameter_tuning_job_id: str,
location: str = "us-central1",
api_endpoint: str = "us-central1-aiplatform.googleapis.com",
) -> aiplatform.HyperparameterTuningJob:
Gets the current status of a hyperparameter tuning job.
Args:
project: GCP project ID.
hyperparameter_tuning_job_id: Hyperparameter tuning job ID.
location: Service location.
api_endpoint: API endpoint, eg. `<location>-aiplatform.googleapis.com`.
Returns:
Details of the hyperparameter tuning job, such as its running status,
results of its trials, etc.
# The AI Platform services require regional API endpoints.
client_options = {"api_endpoint": api_endpoint}
# Initialize client that will be used to create and send requests.
# This client only needs to be created once, and can be reused for multiple requests.
client = aiplatform.gapic.JobServiceClient(client_options=client_options)
name = client.hyperparameter_tuning_job_path(
project=project,
location=location,
hyperparameter_tuning_job=hyperparameter_tuning_job_id)
response = client.get_hyperparameter_tuning_job(name=name)
return response
trials = None
while True:
response = get_hyperparameter_tuning_job_sample(
project=PROJECT_ID,
hyperparameter_tuning_job_id=job_id,
location=REGION,
api_endpoint=f"{REGION}-aiplatform.googleapis.com")
if response.state.name == 'JOB_STATE_SUCCEEDED':
print("Job succeeded.\nJob Time:", response.update_time - response.create_time)
trials = response.trials
print("Trials:", trials)
break
elif response.state.name == "JOB_STATE_FAILED":
print("Job failed.")
break
elif response.state.name == "JOB_STATE_CANCELLED":
print("Job cancelled.")
break
else:
print(f"Current job status: {response.state.name}.")
time.sleep(60)
if trials:
# Dict mapping from metric names to the best metric values seen so far
best_objective_values = dict.fromkeys(
[metric.metric_id for metric in trials[0].final_measurement.metrics],
-np.inf)
# Dict mapping from metric names to a list of the best combination(s) of
# hyperparameter(s). Each combination is a dict mapping from hyperparameter
# names to their values.
best_params = defaultdict(list)
for trial in trials:
# `final_measurement` and `parameters` are `RepeatedComposite` objects.
# Reference the structure above to extract the value of your interest.
for metric in trial.final_measurement.metrics:
params = {
param.parameter_id: param.value for param in trial.parameters}
if metric.value > best_objective_values[metric.metric_id]:
best_objective_values[metric.metric_id] = metric.value # remember the best value seen so far
best_params[metric.metric_id] = [params]
elif metric.value == best_objective_values[metric.metric_id]:
best_params[metric.metric_id].append(params) # Handle cases where multiple hyperparameter values lead to the same performance.
print("Best hyperparameter value(s):")
for metric, params in best_params.items():
print(f"Metric={metric}: {sorted(params)}")
else:
print("No hyperparameter tuning job trials found.")
! mkdir $HPTUNING_RESULT_DIR
with open(HPTUNING_RESULT_PATH, "w") as f:
json.dump(best_params["final_average_return"][0], f)
storage_client = storage.Client(project=PROJECT_ID)
bucket = storage_client.bucket(RAW_BUCKET_NAME)
blob = bucket.blob(HPTUNING_RESULT_PATH)
blob.upload_from_filename(HPTUNING_RESULT_PATH)
PREDICTION_CONTAINER = "prediction-custom-container" # @param {type:"string"} Name of the container image.
cloudbuild_yaml = steps:
- name: 'gcr.io/kaniko-project/executor:latest'
args: ['--destination=gcr.io/{PROJECT_ID}/{PREDICTION_CONTAINER}:latest',
'--cache=true',
'--cache-ttl=99h']
env: ['AIP_STORAGE_URI={ARTIFACTS_DIR}']
options:
machineType: 'E2_HIGHCPU_8'.format(
PROJECT_ID=PROJECT_ID,
PREDICTION_CONTAINER=PREDICTION_CONTAINER,
ARTIFACTS_DIR=ARTIFACTS_DIR
)
with open("cloudbuild.yaml", "w") as fp:
fp.write(cloudbuild_yaml)
%%writefile requirements.txt
numpy~=1.19.2
six~=1.15.0
typing-extensions~=3.7.4
pillow==9.0.0
tf-agents==0.8.0
tensorflow==2.5.0
%%writefile Dockerfile
FROM tiangolo/uvicorn-gunicorn-fastapi:python3.7
COPY src/prediction /app
COPY requirements.txt /app/requirements.txt
RUN pip3 install -r /app/requirements.txt
! gcloud builds submit --config cloudbuild.yaml
aiplatform.init(project=PROJECT_ID, location=REGION, staging_bucket=BUCKET_NAME)
RUN_HYPERPARAMETER_TUNING = False # Execute regular training instead of hyperparameter tuning.
TRAIN_WITH_BEST_HYPERPARAMETERS = True # @param {type:"bool"} Whether to use learned hyperparameters in training.
args = [
f"--artifacts-dir={ARTIFACTS_DIR}",
f"--profiler-dir={PROFILER_DIR}",
f"--data-path={DATA_PATH}",
f"--batch-size={BATCH_SIZE}",
f"--rank-k={RANK_K}",
f"--num-actions={NUM_ACTIONS}",
f"--tikhonov-weight={TIKHONOV_WEIGHT}",
f"--agent-alpha={AGENT_ALPHA}",
]
if RUN_HYPERPARAMETER_TUNING:
args.append("--run-hyperparameter-tuning")
elif TRAIN_WITH_BEST_HYPERPARAMETERS:
args.append("--train-with-best-hyperparameters")
args.append(f"--best-hyperparameters-bucket={RAW_BUCKET_NAME}")
args.append(f"--best-hyperparameters-path={HPTUNING_RESULT_PATH}")
job = aiplatform.CustomContainerTrainingJob(
display_name="train-movielens",
container_uri=f"gcr.io/{PROJECT_ID}/{HPTUNING_TRAINING_CONTAINER}:latest",
command=["python3", "-m", "src.training.task"] + args, # Pass in training arguments, including hyperparameters.
model_serving_container_image_uri=f"gcr.io/{PROJECT_ID}/{PREDICTION_CONTAINER}:latest",
model_serving_container_predict_route="/predict",
model_serving_container_health_route="/health")
print("Training Spec:", job._managed_model)
model = job.run(
model_display_name="movielens-model",
replica_count=1,
machine_type="n1-standard-4",
accelerator_type="ACCELERATOR_TYPE_UNSPECIFIED",
accelerator_count=0)
print("Model display name:", model.display_name)
print("Model ID:", model.name)
endpoint = model.deploy(machine_type="n1-standard-4")
print("Endpoint display name:", endpoint.display_name)
print("Endpoint ID:", endpoint.name)
endpoint.predict(
instances=[
{"observation": [list(np.ones(20)) for _ in range(8)]},
]
)
# Delete endpoint resource
! gcloud ai endpoints delete $endpoint.name --quiet --region $REGION
# Delete model resource
! gcloud ai models delete $model.name --quiet
# Delete Cloud Storage objects that were created
! gsutil -m rm -r $ARTIFACTS_DIR
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Restart the kernel
Step2: Before you begin
Step3: Otherwise, set your project ID here.
Step4: Timestamp
Step5: Authenticate your Google Cloud account
Step6: Create a Cloud Storage bucket
Step7: Only if your bucket doesn't already exist
Step8: Finally, validate access to your Cloud Storage bucket by examining its contents
Step9: Import libraries and define constants
Step10: Implement and execute locally (optional)
Step12: Train the model [locally]
Step13: Train the RL policy and gather intermediate metric results. At the same time, use TensorBoard Profiler to profile the training process and resources.
Step14: Evaluate RL metrics [locally]
Step15: Profile training [optional]
Step16: [1] For Google Cloud Notebooks, you can do the following
Step17: Create hyperparameter tuning and training custom container
Step19: Create a Cloud Build YAML file
Step20: Write a Dockerfile
Step21: Build the custom container with Cloud Build
Step23: Submit hyperparameter tuning job [optional]
Step25: It will take ~20 minutes to complete.
Step26: Find the best combination(s) hyperparameter(s) for each metric
Step27: Convert a combination of best hyperparameter(s) for a metric of interest to JSON
Step28: Upload the best hyperparameter(s) to GCS for use in training
Step29: Create custom prediction container
Step31: Create a Cloud Build YAML file
Step32: Define dependencies
Step33: Write a Dockerfile
Step34: Build the prediction container with Cloud Build
Step35: Submit custom container training job
Step36: Deploy trained model to an Endpoint
Step37: Predict on the Endpoint
Step38: Summary
|
5,677
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import cv2
from geospatial_learn import raster
from geospatial_learn.utilities import do_phasecong, houghseg
from math import ceil
import matplotlib.pyplot as plt
from skimage.color import rgb2gray, label2rgb
from skimage.feature import canny
from skimage.exposure import rescale_intensity
inRas = 'figures/weetestorig.tif'
img = raster.raster2array(inRas, bands=[1,2,3])
# for testing below
gray = rgb2gray(img)
plt.imshow(img)
plt.show()
def icanny(high_threshold, *args, **kwargs): #...do it
inIm = gray#.astype(np.float)
low_threshold = high_threshold / 2
edge = canny(inIm, low_threshold=low_threshold, high_threshold=high_threshold, *args, **kwargs)
# Comment the first 2 lines if you want more space
plt.figure(figsize=(15,15))
plt.subplot(121)
plt.imshow(img)
plt.subplot(122)
plt.imshow(edge)
plt.show()
# return edge
from ipywidgets import widgets
cTester = widgets.interact(icanny,
#k=widgets.IntSlider(min=3, max=100, step=2, continuous_update=False),
sigma=widgets.IntSlider(min=0, max=100, step=1, continuous_update=False),
#low_threshold=widgets.IntSlider(min=0, max=255, step=1, continuous_update=False),
high_threshold=widgets.FloatSlider(min=0, max=1, step=0.01, continuous_update=False))
def iphase(*args, **kwargs):
plt.figure(figsize=(15,15))
plt.subplot(121)
plt.imshow(img)
plt.subplot(122)
edge = do_phasecong(gray, *args, **kwargs)
plt.imshow(edge)
plt.show()
from ipywidgets import widgets
cTester = widgets.interact(iphase,
sigma=widgets.IntSlider(min=0, max=50, step=1, continuous_update=False),
low_t=widgets.IntSlider(min=0, max=256, step=1, continuous_update=False),
hi_t=widgets.IntSlider(min=0, max=256, step=1, continuous_update=False))
outShp = 'mytest.shp'
segments = houghseg(inRas, outShp, edge='phase', sigma=4, min_area=4)
plt.figure(figsize=(15,15))
plt.subplot(121)
plt.imshow(img)
plt.subplot(122)
plt.imshow(segments, cmap='gray')
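# (Optional check, added as a sketch; assumes houghseg() returns a labelled integer
# mask like the one displayed above, with 0 as background.)
import numpy as np
num_segments = len(np.unique(segments)) - 1 # subtract the background label
print("segments detected:", num_segments)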
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read in a test image subset. Replace it with your own if required; parameters will need to be adjusted. Needless to say, complete segmentation is not guaranteed and will depend upon your image.
Step2: The classical Canny edge detection.
Step3: Phase congruency edge detection
Step4: Segment the plots
|
5,678
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.layers import LSTM
from keras.layers import RNN
from keras.utils import np_utils
sample_poem = open('sample_sonnets.txt').read().lower()
sample_poem[77:99]
characters = sorted(list(set(sample_poem)))
n_to_char = {n:char for n, char in enumerate(characters)} # store characters and their index
char_to_n = {char:n for n, char in enumerate(characters)}
print(n_to_char[7])
print(n_to_char[9])
X = []
y = []
total_len = len(sample_poem)
seq_len = 100 # each time we choose 100 characters as a sequence and predict the next character after the sequence
for i in range(total_len - seq_len):
seq = sample_poem[i:i+seq_len]
label = sample_poem[i+seq_len]
X.append([char_to_n[char] for char in seq])
y.append(char_to_n[label])
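# (Sanity-check sketch.) Each training example is 100 character indices plus the
# index of the character that follows them:
print(len(X), "sequences of length", len(X[0]))
print(''.join(n_to_char[i] for i in X[0][:20]), '->', n_to_char[y[0]])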
# LSTM acceptable format: (number of sequences (batch size), sequence length (timesteps), number of features)
X_modified = np.reshape(X, (len(X), seq_len, 1))
X_modified = X_modified / float(len(characters)) # normalize the value
y_modified = np_utils.to_categorical(y) # convert to one-hot format, there are 36 distinct characters in total
print(X_modified.shape)
print(y_modified[4:10])
model = Sequential()
model.add(LSTM(700, input_shape=(X_modified.shape[1], X_modified.shape[2]), return_sequences=True))
model.add(Dropout(0.2)) # dropout is used for regularization
model.add(LSTM(700, return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(700))
model.add(Dropout(0.2))
model.add(Dense(y_modified.shape[1], activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam')
model.fit(X_modified, y_modified, epochs=10, batch_size=100)
model.save_weights('poem_generator_gigantic.h5') # save weights, so that later we can use without re-running the model
model.load_weights('poem_generator_gigantic.h5')
new_poem_lst = []
for j in range(77, 99): # randomly choose some records and predict the sequence (generate the poem)
string_mapped = X[j]
full_string = [n_to_char[value] for value in string_mapped]
for i in range(10): # predict the next 10 characters
x = np.reshape(string_mapped,(1,len(string_mapped), 1))
x = x / float(len(characters))
# predict the next character
pred_index = np.argmax(model.predict(x, verbose=0))
seq = [n_to_char[value] for value in string_mapped]
full_string.append(n_to_char[pred_index])
# predicted character will be added to support the next prediction
string_mapped.append(pred_index)
string_mapped = string_mapped[1:len(string_mapped)]
new_poem_lst.extend(full_string)
generated_poem = ''.join(new_poem_lst)
print(generated_poem)
words = sorted(list(set(sample_poem.split())))
n_to_word = {n:word for n, word in enumerate(words)} # store characters and their index
word_to_n = {word:n for n, word in enumerate(words)}
print(n_to_word[7])
print(n_to_word[9])
X = []
y = []
all_words = sample_poem.split()
total_len = len(all_words)
seq_len = 100 # each time we choose 100 words as a sequence and predict the next word after the sequence
for i in range(total_len - seq_len):
seq = all_words[i:i+seq_len]
label = all_words[i+seq_len]
X.append([word_to_n[word] for word in seq])
y.append(word_to_n[label])
# LSTM acceptable format: (number of sequences (batch size), sequence length (timesteps), number of features)
X_modified = np.reshape(X, (len(X), seq_len, 1))
X_modified = X_modified / float(len(words)) # normalize the value
y_modified = np_utils.to_categorical(y) # convert to one-hot format, one column per distinct word
print(X_modified.shape)
print(y_modified[4:10])
model = Sequential()
model.add(LSTM(700, input_shape=(X_modified.shape[1], X_modified.shape[2]), return_sequences=True))
model.add(Dropout(0.2)) # dropout is used for regularization
model.add(LSTM(700, return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(700))
model.add(Dropout(0.2))
model.add(Dense(y_modified.shape[1], activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam')
model.fit(X_modified, y_modified, epochs=10, batch_size=100)
model.save_weights('poem_generator_gigantic_word.h5')
model.load_weights('poem_generator_gigantic_word.h5')
new_poem_lst = []
for j in range(77, 99): # randomly choose some records and predict the sequence (generate the poem)
string_mapped = X[j]
full_string = [] # different from character based, here not recording the original sequence
for i in range(10): # predict the next 10 words
x = np.reshape(string_mapped,(1,len(string_mapped), 1))
x = x / float(len(words))
# predict the next word
pred_index = np.argmax(model.predict(x, verbose=0))
seq = [n_to_word[value] for value in string_mapped]
full_string.append(n_to_word[pred_index])
# predicted word will be added to support the next prediction
string_mapped.append(pred_index)
string_mapped = string_mapped[1:len(string_mapped)]
new_poem_lst.extend(full_string)
generated_poem = ' '.join(new_poem_lst)
print(generated_poem)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Method 1 - Character Based Poem Generation
Step2: Observation...
|
5,679
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd # data analysis
import numpy as np # scientific computing
from pandas import Series,DataFrame
data_train = pd.read_csv("./Titanic/train.csv")
data_train.tail() # pandas is a commonly used Python data-processing package; it reads the csv file into a dataframe, and in the IPython notebook data_train looks as shown below:
data_train.info()
data_train.describe()
import matplotlib.pyplot as plt
fig = plt.figure()
fig.set(alpha=0.2) # set the figure's alpha (transparency) parameter
plt.subplot2grid((2,3),(0,0)) # arrange several small plots inside one large figure
data_train.Survived.value_counts().plot(kind='bar') # bar chart
plt.title(u"Survived?") # survived?
plt.ylabel(u"num")
plt.subplot2grid((2,3),(0,1))
data_train.Pclass.value_counts().plot(kind="bar")
plt.ylabel(u"num")
plt.title(u"Pclass")# 乘客等级分布
plt.subplot2grid((2,3),(0,2))
plt.scatter(data_train.Survived, data_train.Age)
plt.ylabel(u"age") # 设定纵坐标名称
plt.title(u"Survived? by age")
plt.subplot2grid((2,3),(1,0), colspan=2)
data_train.Age[data_train.Pclass == 1].plot(kind='kde')
data_train.Age[data_train.Pclass == 2].plot(kind='kde')
data_train.Age[data_train.Pclass == 3].plot(kind='kde')
plt.xlabel(u"age")# plots an axis lable
plt.ylabel(u"rate")
plt.title(u"age by rate")
plt.legend((u'first', u'second',u'third'),loc='best') # sets our legend for our graph.
plt.subplot2grid((2,3),(1,2))
data_train.Embarked.value_counts().plot(kind='bar')
plt.title(u"Embarked")
plt.ylabel(u"num")
plt.show()
# Bingo: charts are much easier to read than raw numbers. From the plots we can see that a bit over 300 passengers survived, less than half; third-class passengers were by far the most numerous; the ages of both victims and survivors span a wide range; the age distributions of the three classes follow a similar overall trend, with passengers in their early twenties most common in 2nd/3rd class and passengers around 40 most common in 1st class (which seems to match how wealth tends to track age); and the number of passengers by port of embarkation decreases in the order S, C, Q, with S far ahead of the other two.
# look at survival outcomes for each passenger class
fig = plt.figure()
fig.set(alpha=0.2) # set the figure's alpha (transparency) parameter
Survived_0 = data_train.Pclass[data_train.Survived == 0].value_counts()
Survived_1 = data_train.Pclass[data_train.Survived == 1].value_counts()
df=pd.DataFrame({u'Survived':Survived_1, u'Not survived':Survived_0})
df.plot(kind='bar', stacked=True)
plt.title(u"Survival by passenger class")
plt.xlabel(u"Passenger class")
plt.ylabel(u"Number of passengers")
plt.show()
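# (Added sketch.) The same comparison as numbers: survival rate per passenger class.
print(data_train.groupby('Pclass')['Survived'].mean())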
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Just think of it as the columns in an Excel spreadsheet.
Step2: At this point we probably already have a few ideas:
|
5,680
|
<ASSISTANT_TASK:>
Python Code:
path = Config().data_path()/'giga-fren'
#! wget https://s3.amazonaws.com/fast-ai-nlp/giga-fren.tgz -P {path}
#! tar xf {path}/giga-fren.tgz -C {path}
# with open(path/'giga-fren.release2.fixed.fr') as f:
# fr = f.read().split('\n')
# with open(path/'giga-fren.release2.fixed.en') as f:
# en = f.read().split('\n')
# re_eq = re.compile('^(Wh[^?.!]+\?)')
# re_fq = re.compile('^([^?.!]+\?)')
# en_fname = path/'giga-fren.release2.fixed.en'
# fr_fname = path/'giga-fren.release2.fixed.fr'
# lines = ((re_eq.search(eq), re_fq.search(fq))
# for eq, fq in zip(open(en_fname, encoding='utf-8'), open(fr_fname, encoding='utf-8')))
# qs = [(e.group(), f.group()) for e,f in lines if e and f]
# qs = [(q1,q2) for q1,q2 in qs]
# df = pd.DataFrame({'fr': [q[1] for q in qs], 'en': [q[0] for q in qs]}, columns = ['en', 'fr'])
# df.to_csv(path/'questions_easy.csv', index=False)
# del en, fr, lines, qs, df # free RAM or restart the nb
### fastText pre-trained word vectors https://fasttext.cc/docs/en/crawl-vectors.html
#! wget https://dl.fbaipublicfiles.com/fasttext/vectors-crawl/cc.fr.300.bin.gz -P {path}
#! wget https://dl.fbaipublicfiles.com/fasttext/vectors-crawl/cc.en.300.bin.gz -P {path}
#! gzip -d {path}/cc.fr.300.bin.gz
#! gzip -d {path}/cc.en.300.bin.gz
path.ls()
df = pd.read_csv(path/'questions_easy.csv')
df.head()
df['en'] = df['en'].apply(lambda x:x.lower())
df['fr'] = df['fr'].apply(lambda x:x.lower())
def seq2seq_collate(samples:BatchSamples, pad_idx:int=1, pad_first:bool=True, backwards:bool=False) -> Tuple[LongTensor, LongTensor]:
"Function that collect samples and adds padding. Flips token order if needed"
samples = to_data(samples)
max_len_x,max_len_y = max([len(s[0]) for s in samples]),max([len(s[1]) for s in samples])
res_x = torch.zeros(len(samples), max_len_x).long() + pad_idx
res_y = torch.zeros(len(samples), max_len_y).long() + pad_idx
if backwards: pad_first = not pad_first
for i,s in enumerate(samples):
if pad_first:
res_x[i,-len(s[0]):],res_y[i,-len(s[1]):] = LongTensor(s[0]),LongTensor(s[1])
else:
res_x[i,:len(s[0]):],res_y[i,:len(s[1]):] = LongTensor(s[0]),LongTensor(s[1])
if backwards: res_x,res_y = res_x.flip(1),res_y.flip(1)
return res_x,res_y
class Seq2SeqDataBunch(TextDataBunch):
"Create a `TextDataBunch` suitable for training an RNN classifier."
@classmethod
def create(cls, train_ds, valid_ds, test_ds=None, path:PathOrStr='.', bs:int=32, val_bs:int=None, pad_idx=1,
pad_first=False, device:torch.device=None, no_check:bool=False, backwards:bool=False, **dl_kwargs) -> DataBunch:
"Function that transform the `datasets` in a `DataBunch` for classification. Passes `**dl_kwargs` on to `DataLoader()`"
datasets = cls._init_ds(train_ds, valid_ds, test_ds)
val_bs = ifnone(val_bs, bs)
collate_fn = partial(seq2seq_collate, pad_idx=pad_idx, pad_first=pad_first, backwards=backwards)
train_sampler = SortishSampler(datasets[0].x, key=lambda t: len(datasets[0][t][0].data), bs=bs//2)
train_dl = DataLoader(datasets[0], batch_size=bs, sampler=train_sampler, drop_last=True, **dl_kwargs)
dataloaders = [train_dl]
for ds in datasets[1:]:
lengths = [len(t) for t in ds.x.items]
sampler = SortSampler(ds.x, key=lengths.__getitem__)
dataloaders.append(DataLoader(ds, batch_size=val_bs, sampler=sampler, **dl_kwargs))
return cls(*dataloaders, path=path, device=device, collate_fn=collate_fn, no_check=no_check)
class Seq2SeqTextList(TextList):
_bunch = Seq2SeqDataBunch
_label_cls = TextList
src = Seq2SeqTextList.from_df(df, path = path, cols='fr').split_by_rand_pct().label_from_df(cols='en', label_cls=TextList)
np.percentile([len(o) for o in src.train.x.items] + [len(o) for o in src.valid.x.items], 90)
np.percentile([len(o) for o in src.train.y.items] + [len(o) for o in src.valid.y.items], 90)
src = src.filter_by_func(lambda x,y: len(x) > 30 or len(y) > 30)
len(src.train) + len(src.valid)
data = src.databunch()
data.save()
data = load_data(path)
data.show_batch()
# Installation: https://github.com/facebookresearch/fastText#building-fasttext-for-python
import fastText as ft
fr_vecs = ft.load_model(str((path/'cc.fr.300.bin')))
en_vecs = ft.load_model(str((path/'cc.en.300.bin')))
def create_emb(vecs, itos, em_sz=300, mult=1.):
emb = nn.Embedding(len(itos), em_sz, padding_idx=1)
wgts = emb.weight.data
vec_dic = {w:vecs.get_word_vector(w) for w in vecs.get_words()}
miss = []
for i,w in enumerate(itos):
try: wgts[i] = tensor(vec_dic[w])
except: miss.append(w)
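    # any word missing from the fastText vocabulary simply keeps nn.Embedding's default (random) initialization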
return emb
emb_enc = create_emb(fr_vecs, data.x.vocab.itos)
emb_dec = create_emb(en_vecs, data.y.vocab.itos)
torch.save(emb_enc, path/'models'/'fr_emb.pth')
torch.save(emb_dec, path/'models'/'en_emb.pth')
del fr_vecs
del en_vecs
from fastai.text.models.qrnn import QRNN, QRNNLayer
class Seq2SeqQRNN(nn.Module):
def __init__(self, emb_enc, emb_dec, n_hid, max_len, n_layers=2, p_inp:float=0.15, p_enc:float=0.25,
p_dec:float=0.1, p_out:float=0.35, p_hid:float=0.05, bos_idx:int=0, pad_idx:int=1):
super().__init__()
self.n_layers,self.n_hid,self.max_len,self.bos_idx,self.pad_idx = n_layers,n_hid,max_len,bos_idx,pad_idx
self.emb_enc = emb_enc
self.emb_enc_drop = nn.Dropout(p_inp)
self.encoder = QRNN(emb_enc.weight.size(1), n_hid, n_layers=n_layers, dropout=p_enc)
self.out_enc = nn.Linear(n_hid, emb_enc.weight.size(1), bias=False)
self.hid_dp = nn.Dropout(p_hid)
self.emb_dec = emb_dec
self.decoder = QRNN(emb_dec.weight.size(1), emb_dec.weight.size(1), n_layers=n_layers, dropout=p_dec)
self.out_drop = nn.Dropout(p_out)
self.out = nn.Linear(emb_dec.weight.size(1), emb_dec.weight.size(0))
self.out.weight.data = self.emb_dec.weight.data
def forward(self, inp):
bs,sl = inp.size()
self.encoder.reset()
self.decoder.reset()
hid = self.initHidden(bs)
emb = self.emb_enc_drop(self.emb_enc(inp))
enc_out, hid = self.encoder(emb, hid)
hid = self.out_enc(self.hid_dp(hid))
dec_inp = inp.new_zeros(bs).long() + self.bos_idx
outs = []
for i in range(self.max_len):
emb = self.emb_dec(dec_inp).unsqueeze(1)
out, hid = self.decoder(emb, hid)
out = self.out(self.out_drop(out[:,0]))
outs.append(out)
dec_inp = out.max(1)[1]
if (dec_inp==self.pad_idx).all(): break
return torch.stack(outs, dim=1)
def initHidden(self, bs): return one_param(self).new_zeros(self.n_layers, bs, self.n_hid)
def seq2seq_loss(out, targ, pad_idx=1):
bs,targ_len = targ.size()
_,out_len,vs = out.size()
if targ_len>out_len: out = F.pad(out, (0,0,0,targ_len-out_len,0,0), value=pad_idx)
if out_len>targ_len: targ = F.pad(targ, (0,out_len-targ_len,0,0), value=pad_idx)
return CrossEntropyFlat()(out, targ)
def seq2seq_acc(out, targ, pad_idx=1):
bs,targ_len = targ.size()
_,out_len,vs = out.size()
if targ_len>out_len: out = F.pad(out, (0,0,0,targ_len-out_len,0,0), value=pad_idx)
if out_len>targ_len: targ = F.pad(targ, (0,out_len-targ_len,0,0), value=pad_idx)
out = out.argmax(2)
return (out==targ).float().mean()
class NGram():
def __init__(self, ngram, max_n=5000): self.ngram,self.max_n = ngram,max_n
def __eq__(self, other):
if len(self.ngram) != len(other.ngram): return False
return np.all(np.array(self.ngram) == np.array(other.ngram))
def __hash__(self): return int(sum([o * self.max_n**i for i,o in enumerate(self.ngram)]))
def get_grams(x, n, max_n=5000):
return x if n==1 else [NGram(x[i:i+n], max_n=max_n) for i in range(len(x)-n+1)]
def get_correct_ngrams(pred, targ, n, max_n=5000):
pred_grams,targ_grams = get_grams(pred, n, max_n=max_n),get_grams(targ, n, max_n=max_n)
pred_cnt,targ_cnt = Counter(pred_grams),Counter(targ_grams)
return sum([min(c, targ_cnt[g]) for g,c in pred_cnt.items()]),len(pred_grams)
class CorpusBLEU(Callback):
def __init__(self, vocab_sz):
self.vocab_sz = vocab_sz
self.name = 'bleu'
def on_epoch_begin(self, **kwargs):
self.pred_len,self.targ_len,self.corrects,self.counts = 0,0,[0]*4,[0]*4
def on_batch_end(self, last_output, last_target, **kwargs):
last_output = last_output.argmax(dim=-1)
for pred,targ in zip(last_output.cpu().numpy(),last_target.cpu().numpy()):
self.pred_len += len(pred)
self.targ_len += len(targ)
for i in range(4):
c,t = get_correct_ngrams(pred, targ, i+1, max_n=self.vocab_sz)
self.corrects[i] += c
self.counts[i] += t
def on_epoch_end(self, last_metrics, **kwargs):
precs = [c/t for c,t in zip(self.corrects,self.counts)]
len_penalty = exp(1 - self.targ_len/self.pred_len) if self.pred_len < self.targ_len else 1
bleu = len_penalty * ((precs[0]*precs[1]*precs[2]*precs[3]) ** 0.25)
return add_metrics(last_metrics, bleu)
emb_enc = torch.load(path/'models'/'fr_emb.pth')
emb_dec = torch.load(path/'models'/'en_emb.pth')
model = Seq2SeqQRNN(emb_enc, emb_dec, 256, 30, n_layers=2)
learn = Learner(data, model, loss_func=seq2seq_loss, metrics=[seq2seq_acc, CorpusBLEU(len(data.y.vocab.itos))])
learn.lr_find()
learn.recorder.plot()
learn.fit_one_cycle(8, 1e-2)
def get_predictions(learn, ds_type=DatasetType.Valid):
learn.model.eval()
inputs, targets, outputs = [],[],[]
with torch.no_grad():
for xb,yb in progress_bar(learn.dl(ds_type)):
out = learn.model(xb)
for x,y,z in zip(xb,yb,out):
inputs.append(learn.data.train_ds.x.reconstruct(x))
targets.append(learn.data.train_ds.y.reconstruct(y))
outputs.append(learn.data.train_ds.y.reconstruct(z.argmax(1)))
return inputs, targets, outputs
inputs, targets, outputs = get_predictions(learn)
inputs[700], targets[700], outputs[700]
inputs[701], targets[701], outputs[701]
inputs[2513], targets[2513], outputs[2513]
inputs[4000], targets[4000], outputs[4000]
class TeacherForcing(LearnerCallback):
def __init__(self, learn, end_epoch):
super().__init__(learn)
self.end_epoch = end_epoch
def on_batch_begin(self, last_input, last_target, train, **kwargs):
if train: return {'last_input': [last_input, last_target]}
def on_epoch_begin(self, epoch, **kwargs):
self.learn.model.pr_force = 1 - 0.5 * epoch/self.end_epoch
class Seq2SeqQRNN(nn.Module):
def __init__(self, emb_enc, emb_dec, n_hid, max_len, n_layers=2, p_inp:float=0.15, p_enc:float=0.25,
p_dec:float=0.1, p_out:float=0.35, p_hid:float=0.05, bos_idx:int=0, pad_idx:int=1):
super().__init__()
self.n_layers,self.n_hid,self.max_len,self.bos_idx,self.pad_idx = n_layers,n_hid,max_len,bos_idx,pad_idx
self.emb_enc = emb_enc
self.emb_enc_drop = nn.Dropout(p_inp)
self.encoder = QRNN(emb_enc.weight.size(1), n_hid, n_layers=n_layers, dropout=p_enc)
self.out_enc = nn.Linear(n_hid, emb_enc.weight.size(1), bias=False)
self.hid_dp = nn.Dropout(p_hid)
self.emb_dec = emb_dec
self.decoder = QRNN(emb_dec.weight.size(1), emb_dec.weight.size(1), n_layers=n_layers, dropout=p_dec)
self.out_drop = nn.Dropout(p_out)
self.out = nn.Linear(emb_dec.weight.size(1), emb_dec.weight.size(0))
self.out.weight.data = self.emb_dec.weight.data
self.pr_force = 0.
def forward(self, inp, targ=None):
bs,sl = inp.size()
hid = self.initHidden(bs)
emb = self.emb_enc_drop(self.emb_enc(inp))
enc_out, hid = self.encoder(emb, hid)
hid = self.out_enc(self.hid_dp(hid))
dec_inp = inp.new_zeros(bs).long() + self.bos_idx
res = []
for i in range(self.max_len):
emb = self.emb_dec(dec_inp).unsqueeze(1)
outp, hid = self.decoder(emb, hid)
outp = self.out(self.out_drop(outp[:,0]))
res.append(outp)
dec_inp = outp.data.max(1)[1]
if (dec_inp==self.pad_idx).all(): break
if (targ is not None) and (random.random()<self.pr_force):
if i>=targ.shape[1]: break
dec_inp = targ[:,i]
return torch.stack(res, dim=1)
def initHidden(self, bs): return one_param(self).new_zeros(self.n_layers, bs, self.n_hid)
emb_enc = torch.load(path/'models'/'fr_emb.pth')
emb_dec = torch.load(path/'models'/'en_emb.pth')
model = Seq2SeqQRNN(emb_enc, emb_dec, 256, 30, n_layers=2)
learn = Learner(data, model, loss_func=seq2seq_loss, metrics=[seq2seq_acc, CorpusBLEU(len(data.y.vocab.itos))],
callback_fns=partial(TeacherForcing, end_epoch=8))
learn.fit_one_cycle(8, 1e-2)
inputs, targets, outputs = get_predictions(learn)
inputs[700],targets[700],outputs[700]
inputs[2513], targets[2513], outputs[2513]
inputs[4000], targets[4000], outputs[4000]
#get_bleu(learn)
class Seq2SeqQRNN(nn.Module):
def __init__(self, emb_enc, emb_dec, n_hid, max_len, n_layers=2, p_inp:float=0.15, p_enc:float=0.25,
p_dec:float=0.1, p_out:float=0.35, p_hid:float=0.05, bos_idx:int=0, pad_idx:int=1):
super().__init__()
self.n_layers,self.n_hid,self.max_len,self.bos_idx,self.pad_idx = n_layers,n_hid,max_len,bos_idx,pad_idx
self.emb_enc = emb_enc
self.emb_enc_drop = nn.Dropout(p_inp)
self.encoder = QRNN(emb_enc.weight.size(1), n_hid, n_layers=n_layers, dropout=p_enc, bidirectional=True)
self.out_enc = nn.Linear(2*n_hid, emb_enc.weight.size(1), bias=False)
self.hid_dp = nn.Dropout(p_hid)
self.emb_dec = emb_dec
self.decoder = QRNN(emb_dec.weight.size(1), emb_dec.weight.size(1), n_layers=n_layers, dropout=p_dec)
self.out_drop = nn.Dropout(p_out)
self.out = nn.Linear(emb_dec.weight.size(1), emb_dec.weight.size(0))
self.out.weight.data = self.emb_dec.weight.data
self.pr_force = 0.
def forward(self, inp, targ=None):
bs,sl = inp.size()
hid = self.initHidden(bs)
emb = self.emb_enc_drop(self.emb_enc(inp))
enc_out, hid = self.encoder(emb, hid)
hid = hid.view(2,self.n_layers, bs, self.n_hid).permute(1,2,0,3).contiguous()
hid = self.out_enc(self.hid_dp(hid).view(self.n_layers, bs, 2*self.n_hid))
dec_inp = inp.new_zeros(bs).long() + self.bos_idx
res = []
for i in range(self.max_len):
emb = self.emb_dec(dec_inp).unsqueeze(1)
outp, hid = self.decoder(emb, hid)
outp = self.out(self.out_drop(outp[:,0]))
res.append(outp)
dec_inp = outp.data.max(1)[1]
if (dec_inp==self.pad_idx).all(): break
if (targ is not None) and (random.random()<self.pr_force):
if i>=targ.shape[1]: break
dec_inp = targ[:,i]
return torch.stack(res, dim=1)
def initHidden(self, bs): return one_param(self).new_zeros(2*self.n_layers, bs, self.n_hid)
emb_enc = torch.load(path/'models'/'fr_emb.pth')
emb_dec = torch.load(path/'models'/'en_emb.pth')
model = Seq2SeqQRNN(emb_enc, emb_dec, 256, 30, n_layers=2)
learn = Learner(data, model, loss_func=seq2seq_loss, metrics=[seq2seq_acc, CorpusBLEU(len(data.y.vocab.itos))],
callback_fns=partial(TeacherForcing, end_epoch=8))
learn.lr_find()
learn.recorder.plot()
learn.fit_one_cycle(8, 1e-2)
inputs, targets, outputs = get_predictions(learn)
inputs[700], targets[700], outputs[700]
inputs[701], targets[701], outputs[701]
inputs[4001], targets[4001], outputs[4001]
#get_bleu(learn)
def init_param(*sz): return nn.Parameter(torch.randn(sz)/math.sqrt(sz[0]))
class Seq2SeqQRNN(nn.Module):
def __init__(self, emb_enc, emb_dec, n_hid, max_len, n_layers=2, p_inp:float=0.15, p_enc:float=0.25,
p_dec:float=0.1, p_out:float=0.35, p_hid:float=0.05, bos_idx:int=0, pad_idx:int=1):
super().__init__()
self.n_layers,self.n_hid,self.max_len,self.bos_idx,self.pad_idx = n_layers,n_hid,max_len,bos_idx,pad_idx
self.emb_enc = emb_enc
self.emb_enc_drop = nn.Dropout(p_inp)
self.encoder = QRNN(emb_enc.weight.size(1), n_hid, n_layers=n_layers, dropout=p_enc, bidirectional=True)
self.out_enc = nn.Linear(2*n_hid, emb_enc.weight.size(1), bias=False)
self.hid_dp = nn.Dropout(p_hid)
self.emb_dec = emb_dec
emb_sz = emb_dec.weight.size(1)
self.decoder = QRNN(emb_sz + 2*n_hid, emb_dec.weight.size(1), n_layers=n_layers, dropout=p_dec)
self.out_drop = nn.Dropout(p_out)
self.out = nn.Linear(emb_sz, emb_dec.weight.size(0))
self.out.weight.data = self.emb_dec.weight.data #Try tying
self.enc_att = nn.Linear(2*n_hid, emb_sz, bias=False)
self.hid_att = nn.Linear(emb_sz, emb_sz)
self.V = init_param(emb_sz)
self.pr_force = 0.
def forward(self, inp, targ=None):
bs,sl = inp.size()
hid = self.initHidden(bs)
emb = self.emb_enc_drop(self.emb_enc(inp))
enc_out, hid = self.encoder(emb, hid)
hid = hid.view(2,self.n_layers, bs, self.n_hid).permute(1,2,0,3).contiguous()
hid = self.out_enc(self.hid_dp(hid).view(self.n_layers, bs, 2*self.n_hid))
dec_inp = inp.new_zeros(bs).long() + self.bos_idx
res = []
enc_att = self.enc_att(enc_out)
for i in range(self.max_len):
hid_att = self.hid_att(hid[-1])
u = torch.tanh(enc_att + hid_att[:,None])
attn_wgts = F.softmax(u @ self.V, 1)
ctx = (attn_wgts[...,None] * enc_out).sum(1)
emb = self.emb_dec(dec_inp)
outp, hid = self.decoder(torch.cat([emb, ctx], 1)[:,None], hid)
outp = self.out(self.out_drop(outp[:,0]))
res.append(outp)
dec_inp = outp.data.max(1)[1]
if (dec_inp==self.pad_idx).all(): break
if (targ is not None) and (random.random()<self.pr_force):
if i>=targ.shape[1]: break
dec_inp = targ[:,i]
return torch.stack(res, dim=1)
def initHidden(self, bs): return one_param(self).new_zeros(2*self.n_layers, bs, self.n_hid)
emb_enc = torch.load(path/'models'/'fr_emb.pth')
emb_dec = torch.load(path/'models'/'en_emb.pth')
model = Seq2SeqQRNN(emb_enc, emb_dec, 256, 30, n_layers=2)
learn = Learner(data, model, loss_func=seq2seq_loss, metrics=[seq2seq_acc, CorpusBLEU(len(data.y.vocab.itos))],
callback_fns=partial(TeacherForcing, end_epoch=8))
learn.lr_find()
learn.recorder.plot()
learn.fit_one_cycle(8, 3e-3)
inputs, targets, outputs = get_predictions(learn)
inputs[700], targets[700], outputs[700]
inputs[701], targets[701], outputs[701]
inputs[4002], targets[4002], outputs[4002]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: You only need to execute the setup cells once, uncomment to run. The dataset can be downloaded here.
Step2: Put them in a DataBunch
Step3: To make it simple, we lowercase everything.
Step4: The first thing is that we will need to collate inputs and targets in a batch
Step5: Then we create a special DataBunch that uses this collate function.
Step6: And a subclass of TextList that will use this DataBunch class in the call .databunch and will use TextList to label (since our targets are other texts).
Step7: That's all we need to use the data block API!
Step8: We remove the items where one of the texts is more than 30 tokens long.
Step9: Model
Step10: We create an embedding module with the pretrained vectors and random data for the missing parts.
Step11: Free some RAM
Step12: QRNN seq2seq
Step13: The model itself consists of an encoder and a decoder
Step14: Loss function
Step15: Bleu metric (see dedicated notebook)
Step16: We load our pretrained embeddings to create the model.
Step17: So how good is our model? Let's see a few predictions.
Step18: It usually begins well, but falls back on easy words at the end of the question.
Step19: Bidir
Step20: Attention
|
5,681
|
<ASSISTANT_TASK:>
Python Code:
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading extenrnal modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
X_train_feats.shape,X_val_feats.shape
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [1e5, 1e6, 1e7]
results = {}
best_val = -1
best_svm = None
pass
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
for learning_rate in learning_rates:
for regularization_strength in regularization_strengths:
svm = LinearSVM()
svm.train(X_train_feats, y_train, learning_rate= learning_rate, reg=regularization_strength,
num_iters=1500)
y_train_predict = svm.predict(X_train_feats)
y_val_predict = svm.predict(X_val_feats)
accuracy_train = np.mean(y_train_predict == y_train)
accuracy_validation = np.mean(y_val_predict == y_val)
results[(learning_rate,regularization_strength)] = (accuracy_train,accuracy_validation)
if accuracy_validation > best_val:
best_val = accuracy_validation
best_svm = svm
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print 'lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy)
print 'best validation accuracy achieved during cross-validation: %f' % best_val
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print test_accuracy
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
print X_train_feats.shape
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
best_net = None # store the best model into this
#################################################################################
# TODO: Tune hyperparameters using the validation set. Store your best trained #
# model in best_net. #
# #
# To help debug your network, it may help to use visualizations similar to the #
# ones we used above; these visualizations will have significant qualitative #
# differences from the ones we saw above for the poorly tuned network. #
# #
# Tweaking hyperparameters by hand can be fun, but you might find it useful to #
# write code to sweep through possible combinations of hyperparameters #
# automatically like we did on the previous exercises. #
learning_rates = np.logspace(-8,0,5)
regularization_strengths = np.logspace(-2,5,5)
# results is dictionary mapping tuples of the form
# (learning_rate, regularization_strength) to tuples of the form
# (training_accuracy, validation_accuracy). The accuracy is simply the fraction
# of data points that are correctly classified.
results = {}
best_val = -1 # The highest validation accuracy that we have seen so far.
for learning_rate in learning_rates:
for regularization_strength in regularization_strengths:
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
net.train(X_train_feats, y_train,X_val_feats,y_val, learning_rate= learning_rate, reg=regularization_strength,
num_iters=1500)
y_train_predict = net.predict(X_train_feats)
y_val_predict = net.predict(X_val_feats)
accuracy_train = np.mean(y_train_predict == y_train)
accuracy_validation = np.mean(y_val_predict == y_val)
results[(learning_rate,regularization_strength)] = (accuracy_train,accuracy_validation)
if accuracy_validation > best_val:
best_val = accuracy_validation
best_net = net
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print 'lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy)
print 'best validation accuracy achieved during cross-validation: %f' % best_val
# Run your neural net classifier on the test set. You should be able to
# get more than 55% accuracy.
test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print test_acc
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load data
Step2: Extract Features
Step3: Train SVM on features
Step4: Inline question 1
|
5,682
|
<ASSISTANT_TASK:>
Python Code:
# import everything from skyofstars
from skyofstars.examples import *
example = create_test_catalog()
example.coordinates = example.coordinates.transform_to('icrs')
print (example.coordinates.icrs.ra.rad)
# we can also access all the functions we defined *inside* the Catalog
example.plot_celestial(s=5, alpha=0.2)
# we can list all the stuff defined inside the Catalog like this:
dir(example)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We defined a new kind of Python variable by writing a class definition for a Catalog. We can create a new one of these objects as follows (a minimal sketch of what such a class might look like is given below).
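A minimal sketch of what such a class definition might look like (purely illustrative; the real Catalog class lives in the skyofstars package, and the names, constructor arguments, and plotting details below are assumptions rather than its actual API):
import matplotlib.pyplot as plt
class Catalog(object):
    "Toy stand-in for a star catalog: bundles coordinate data with the methods that act on it."
    def __init__(self, ra, dec):
        # ra/dec are plain sequences of angles in degrees (hypothetical; the real class
        # stores an astropy SkyCoord in its `coordinates` attribute)
        self.ra, self.dec = ra, dec
    def plot_celestial(self, **kwargs):
        # scatter the stars on the sky; extra keyword arguments are passed to plt.scatter
        plt.scatter(self.ra, self.dec, **kwargs)
        plt.xlabel('RA [deg]')
        plt.ylabel('Dec [deg]')
toy = Catalog(ra=[10.7, 83.8], dec=[41.3, -5.4])
toy.plot_celestial(s=5, alpha=0.2)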
|
5,683
|
<ASSISTANT_TASK:>
Python Code:
# change directory
%cd ../../../Projects/starspot/starspot/
from color import bolcor as bc
bc.utils.log_init('table_limits.log') # initialize bolometric correction log file
FeH = 0.0 # dex; atmospheric [Fe/H]
aFe = 0.0 # dex; atmospheric [alpha/Fe]
brand = 'marcs' # use theoretical corrections from MARCS atmospheres
phot_filters = ['U', 'B', 'V', 'R', 'I', 'J', 'H', 'K'] # select a subset of filters
bc.bolcorrection.bc_init(FeH, aFe, brand, phot_filters) # initialize tables
bc.bolcorrection.bc_eval(5300.0, 4.61, -0.353, len(phot_filters)) # approx. 0.9 Msun star at 120 Myr.
bc.bolcorrection.bc_eval(3000.0, 4.94, -2.65, len(phot_filters)) # approx. 0.1 Msun star at 120 Myr.
bc.bolcorrection.bc_eval(2204.0, 4.83, -3.47, len(phot_filters)) # outside of grid -- should return garbage.
bc.bolcorrection.bc_clean()
bc.utils.log_close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Before requesting bolometric corrections, we need to first initialize the package, which loads the appropriate bolometric corrections tables into memory to permit faster computation of corrections hereafter. The procedure for initializing the tables can be found in the README file in the starspot.color repository. First we need to initialize a log file, where various procedural steps in the code are tracked.
Step2: Next, we need to load the appropriate tables.
Step3: Now that we have established which set of bolometric correction tables we wish to use, it's possible to compute bolometric corrections using either the Isochrone.colorize feature or by submitting individual requests. First, let's take a look at a couple of valid queries. Note that a query is submitted as bc_eval(Teff, logg, logL/Lsun, N_filters)
Step4: The extremely large (or small) magnitudes for the 5300 K star are very strange. These issues do not occur when the same commands are executed in a terminal shell.
Step5: Curiously, it returns values that do not appear to be out of line. It's quite possible that the code is attempting to extrapolate: we use a 4-point Lagrange interpolation, which can unknowingly extrapolate beyond the defined grid space (a small sketch of this failure mode follows). Comparing to Phoenix model atmospheres with the Caffau et al. (2011) solar abundances for the Johnson-Cousins and 2MASS systems, our optical $BV(RI)_C$ magnitudes are systematically 1.0 mag fainter than Phoenix at 120 Myr, and our $JHK$ magnitudes are fainter by approximately 0.04 mag at the same age.
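To illustrate the suspected failure mode, here is a minimal sketch (plain numpy/scipy, with made-up node values, unrelated to the actual starspot tables): a 4-point Lagrange polynomial silently returns a value outside the range of its nodes instead of raising an error.
import numpy as np
from scipy.interpolate import lagrange
teff_nodes = np.array([2800.0, 3200.0, 3600.0, 4000.0])  # four fictitious grid nodes in Teff
bc_nodes = np.array([-4.5, -3.1, -2.2, -1.6])            # fictitious BC values at those nodes
poly = lagrange(teff_nodes, bc_nodes)                     # 4-point Lagrange polynomial
print(poly(3400.0))  # inside the node range: a sensible interpolated value
print(poly(2204.0))  # outside the grid: an extrapolated value, returned with no warning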
|
5,684
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import graphviz
import lingam
from lingam.utils import print_causal_directions, print_dagc, make_dot
import warnings
warnings.filterwarnings('ignore')
print([np.__version__, pd.__version__, graphviz.__version__, lingam.__version__])
np.set_printoptions(precision=3, suppress=True)
np.random.seed(0)
get_external_effect = lambda n: np.random.normal(0.0, 0.5, n) ** 3
n_samples = 300
x5 = get_external_effect(n_samples)
x6 = get_external_effect(n_samples)
x1 = 0.6*x5 + get_external_effect(n_samples)
x3 = 0.5*x5 + get_external_effect(n_samples)
x0 = 1.0*x1 + 1.0*x3 + get_external_effect(n_samples)
x2 = 0.8*x0 - 0.6*x6 + get_external_effect(n_samples)
x4 = 1.0*x0 - 0.5*x6 + get_external_effect(n_samples)
# The latent variable x6 is not included.
X = pd.DataFrame(np.array([x0, x1, x2, x3, x4, x5]).T, columns=['x0', 'x1', 'x2', 'x3', 'x4', 'x5'])
X.head()
m = np.array([[ 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0],
[ 0.0, 0.0, 0.0, 0.0, 0.0, 0.6, 0.0],
[ 0.8, 0.0, 0.0, 0.0, 0.0, 0.0,-0.6],
[ 0.0, 0.0, 0.0, 0.0, 0.0, 0.5, 0.0],
[ 1.0, 0.0, 0.0, 0.0, 0.0, 0.0,-0.5],
[ 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[ 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]])
dot = make_dot(m, labels=['x0', 'x1', 'x2', 'x3', 'x4', 'x5', 'f1(x6)'])
# Save pdf
dot.render('dag')
# Save png
dot.format = 'png'
dot.render('dag')
dot
model = lingam.RCD()
model.fit(X)
ancestors_list = model.ancestors_list_
for i, ancestors in enumerate(ancestors_list):
print(f'M{i}={ancestors}')
model.adjacency_matrix_
make_dot(model.adjacency_matrix_)
p_values = model.get_error_independence_p_values(X)
print(p_values)
import warnings
warnings.filterwarnings('ignore', category=UserWarning)
model = lingam.RCD()
result = model.bootstrap(X, n_sampling=100)
cdc = result.get_causal_direction_counts(n_directions=8, min_causal_effect=0.01, split_by_causal_effect_sign=True)
print_causal_directions(cdc, 100)
dagc = result.get_directed_acyclic_graph_counts(n_dags=3, min_causal_effect=0.01, split_by_causal_effect_sign=True)
print_dagc(dagc, 100)
prob = result.get_probabilities(min_causal_effect=0.01)
print(prob)
causal_effects = result.get_total_causal_effects(min_causal_effect=0.01)
# Assign to pandas.DataFrame for pretty display
df = pd.DataFrame(causal_effects)
labels = [f'x{i}' for i in range(X.shape[1])]
df['from'] = df['from'].apply(lambda x : labels[x])
df['to'] = df['to'].apply(lambda x : labels[x])
df
df.sort_values('effect', ascending=False).head()
df.sort_values('probability', ascending=True).head()
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
%matplotlib inline
from_index = 5 # index of x5
to_index = 0 # index of x0
plt.hist(result.total_effects_[:, to_index, from_index])
from_index = 5 # index of x5
to_index = 4 # index of x4
pd.DataFrame(result.get_paths(from_index, to_index))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Test data
Step2: Causal Discovery
Step3: Using the ancestors_list_ property, we can see the list of ancestor sets found by the causal discovery.
Step4: Also, using the adjacency_matrix_ property, we can see the adjacency matrix resulting from the causal discovery. The coefficients between variables affected by latent confounders are np.nan.
Step5: Independence between error variables
Step6: Bootstrapping
Step7: Causal Directions
Step8: We can check the result by utility function.
Step9: Directed Acyclic Graphs
Step10: We can check the result by utility function.
Step11: Probability
Step12: Total Causal Effects
Step13: We can easily perform sorting operations with pandas.DataFrame.
Step14: Because it holds the raw data of the total causal effect (the original data for calculating the median), it is possible to draw a histogram of the values of the causal effect, as shown below.
Step15: Bootstrap Probability of Path
|
5,685
|
<ASSISTANT_TASK:>
Python Code:
from copy import deepcopy
import math
import scipy.sparse.linalg as spsl
import projectq
from projectq.backends import Simulator
from projectq.meta import Compute, Control, Dagger, Uncompute
from projectq.ops import All, H, Measure, Ph, QubitOperator, R, StatePreparation, X, Z
num_qubits = 3
hamiltonian = QubitOperator()
hamiltonian += QubitOperator("X0", -1/12.)
hamiltonian += QubitOperator("X1", -1/12.)
hamiltonian += QubitOperator("X2", -1/12.)
hamiltonian += QubitOperator("Z0 Z1", -1/12.)
hamiltonian += QubitOperator("Z0 Z2", -1/12.)
hamiltonian += QubitOperator("", 7/12.)
hamiltonian_norm = 0.
for term in hamiltonian.terms:
hamiltonian_norm += abs(hamiltonian.terms[term])
normalized_hamiltonian = deepcopy(hamiltonian)
normalized_hamiltonian /= hamiltonian_norm
def get_eigenvalue_and_eigenvector(n_sites, hamiltonian, k, which='SA'):
Returns k eigenvalues and eigenvectors of the hamiltonian.
Args:
n_sites(int): Number of qubits/sites in the hamiltonian
hamiltonian(QubitOperator): QubitOperator representating the Hamiltonian
k: num of eigenvalue and eigenvector pairs (see spsl.eigsh k)
which: see spsl.eigsh which
def mv(v):
eng = projectq.MainEngine(backend=Simulator(), engine_list=[])
qureg = eng.allocate_qureg(n_sites)
eng.flush()
eng.backend.set_wavefunction(v, qureg)
eng.backend.apply_qubit_operator(hamiltonian, qureg)
order, output = deepcopy(eng.backend.cheat())
for i in order:
assert i == order[i]
eng.backend.set_wavefunction([1]+[0]*(2**n_sites-1), qureg)
return output
A = spsl.LinearOperator((2**n_sites,2**n_sites), matvec=mv)
eigenvalues, eigenvectormatrix = spsl.eigsh(A, k=k, which=which)
eigenvectors = []
for i in range(k):
eigenvectors.append(list(eigenvectormatrix[:, i]))
return eigenvalues, eigenvectors
eigenvalues, eigenvectors = get_eigenvalue_and_eigenvector(
n_sites=num_qubits,
hamiltonian=normalized_hamiltonian,
k=4)
print(eigenvalues)
def W(eng, individual_terms, initial_wavefunction, ancilla_qubits, system_qubits):
Applies the W operator as defined in arXiv:1711.11025.
Args:
eng(MainEngine): compiler engine
individual_terms(list<QubitOperator>): list of individual unitary
QubitOperators. It applies
individual_terms[0] if ancilla
qubits are in state |0> where
ancilla_qubits[0] is the least
significant bit.
initial_wavefunction: Initial wavefunction of the ancilla qubits
ancilla_qubits(Qureg): ancilla quantum register in state |0>
system_qubits(Qureg): system quantum register
# Apply V:
for ancilla_state in range(len(individual_terms)):
with Compute(eng):
for bit_pos in range(len(ancilla_qubits)):
if not (ancilla_state >> bit_pos) & 1:
X | ancilla_qubits[bit_pos]
with Control(eng, ancilla_qubits):
individual_terms[ancilla_state] | system_qubits
Uncompute(eng)
# Apply S: 1) Apply B^dagger
with Compute(eng):
with Dagger(eng):
StatePreparation(initial_wavefunction) | ancilla_qubits
# Apply S: 2) Apply I-2|0><0|
with Compute(eng):
All(X) | ancilla_qubits
with Control(eng, ancilla_qubits[:-1]):
Z | ancilla_qubits[-1]
Uncompute(eng)
# Apply S: 3) Apply B
Uncompute(eng)
# Could also be omitted and added when calculating the eigenvalues:
Ph(math.pi) | system_qubits[0]
eng = projectq.MainEngine()
system_qubits = eng.allocate_qureg(num_qubits)
# Create a normalized equal superposition of the two eigenstates for numerical testing:
initial_state_norm =0.
initial_state = [i+j for i,j in zip(eigenvectors[0], eigenvectors[1])]
for amplitude in initial_state:
initial_state_norm += abs(amplitude)**2
normalized_initial_state = [amp / math.sqrt(initial_state_norm) for amp in initial_state]
#initialize system qubits in this state:
StatePreparation(normalized_initial_state) | system_qubits
individual_terms = []
initial_ancilla_wavefunction = []
for term in normalized_hamiltonian.terms:
coefficient = normalized_hamiltonian.terms[term]
initial_ancilla_wavefunction.append(math.sqrt(abs(coefficient)))
if coefficient < 0:
individual_terms.append(QubitOperator(term, -1))
else:
individual_terms.append(QubitOperator(term))
# Calculate the number of ancilla qubits required and pad
# the ancilla wavefunction with zeros:
num_ancilla_qubits = int(math.ceil(math.log(len(individual_terms), 2)))
required_padding = 2**num_ancilla_qubits - len(initial_ancilla_wavefunction)
initial_ancilla_wavefunction.extend([0]*required_padding)
# Initialize ancillas by applying B
ancillas = eng.allocate_qureg(num_ancilla_qubits)
StatePreparation(initial_ancilla_wavefunction) | ancillas
# Semiclassical iterative phase estimation
bits_of_precision = 8
pe_ancilla = eng.allocate_qubit()
measurements = [0] * bits_of_precision
for k in range(bits_of_precision):
H | pe_ancilla
with Control(eng, pe_ancilla):
for i in range(2**(bits_of_precision-k-1)):
W(eng=eng,
individual_terms=individual_terms,
initial_wavefunction=initial_ancilla_wavefunction,
ancilla_qubits=ancillas,
system_qubits=system_qubits)
#inverse QFT using one qubit
for i in range(k):
if measurements[i]:
R(-math.pi/(1 << (k - i))) | pe_ancilla
H | pe_ancilla
Measure | pe_ancilla
eng.flush()
measurements[k] = int(pe_ancilla)
# put the ancilla in state |0> again
if measurements[k]:
X | pe_ancilla
est_phase = sum(
[(measurements[bits_of_precision - 1 - i]*1. / (1 << (i + 1)))
for i in range(bits_of_precision)])
print("We measured {} corresponding to energy {}".format(est_phase, math.cos(2*math.pi*est_phase)))
eng.backend.get_expectation_value(normalized_hamiltonian, system_qubits)
with Dagger(eng):
StatePreparation(initial_ancilla_wavefunction) | ancillas
measure_qb = eng.allocate_qubit()
with Compute(eng):
All(X) | ancillas
with Control(eng, ancillas):
X | measure_qb
Uncompute(eng)
eng.flush()
eng.backend.get_probability('1', measure_qb)
eng.backend.collapse_wavefunction(measure_qb, [1])
eng.backend.get_expectation_value(normalized_hamiltonian, system_qubits)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's use a simple Hamiltonian acting on 3 qubits for which we want to know the eigenvalues
Step2: For this quantum algorithm, we need to normalize the hamiltonian
Step4: 1. We implement a short helper function which uses the ProjectQ simulator to numerically calculate some eigenvalues and eigenvectors of Hamiltonians stored in ProjectQ's QubitOperator in order to check our implemenation of the quantum algorithm. This function is particularly fast because it doesn't need to build the matrix of the hamiltonian but instead uses implicit matrix vector multiplication by using our simulator
Step5: We use this function to find the 4 lowest eigenstates of the normalized hamiltonian
Step7: We see that the eigenvalues are all positive as required (otherwise increase identity term in hamiltonian)
Step8: 3. For testing this algorithm, let's initialize the qubits in a superposition state of the lowest and second lowest eigenstate of the hamiltonian
Step9: 4. Split the normalized_hamiltonian into individual terms and build the wavefunction for the ancilla qubits
Step10: 5. Perform an iterative phase estimation of the unitary W to collapse to one of the eigenvalues of the normalized_hamiltonian
Step11: 6. We measured the lowest eigenstate. You can verify that this happens with 50% probability as we chose our initial state to have 50% overlap with the ground state. As the paper notes, the system_qubits are not in an eigenstate and one can easily test that using our simulator to get the energy of the current state
Step12: 7. As explained in the paper, one can change this state into an eigenstate by undoing the StatePreparation of the ancillas and then by measuring if the ancilla qubits in are state 0. The paper says that this should be the case with 50% probability. So let's check this (we require an ancilla to measure this)
Step13: As we can see, we would measure 1 (corresponding to the ancilla qubits in state 0) with probability 50% as explained in the paper. Let's assume we measure 1, then we can easily check that we are in an eigenstate of the normalized_hamiltonian by numerically calculating its energy
|
5,686
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
rides[:24*10].plot(x='dteday',y='cnt')
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
data.head()
# Save the last 21 days
test_data = data[-21*24:]
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
# Hold out the last 60 days of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
def sigmoid(x):
return 1/(1 + np.exp(-x))
class NeuralNetwork(object):
def __init__(self, input_nodes,hidden_nodes,output_nodes,learning_rate):
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
        # Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.input_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes**-0.5,
(self.output_nodes, self.hidden_nodes))
self.lr = learning_rate
self.activation_function = sigmoid
def train(self,inputs_list,targets_list):
# Convert inputs list to 2d array
inputs = np.array(inputs_list, ndmin=2).T
targets = np.array(targets_list, ndmin=2).T
#### Implement the forward pass here ####
### Forward pass ###
hidden_inputs = np.dot(self.weights_input_to_hidden,inputs)
hidden_outputs = sigmoid(hidden_inputs)
# TODO: Output layer
final_inputs =np.dot(self.weights_hidden_to_output, hidden_outputs)
output = final_inputs
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error
o_error = targets - output
# print('weights_hidden_to_output.shape is %d,%d '%self.weights_hidden_to_output.shape)
# print('output.shape is %d,%d' %output.shape)
h_error = np.dot(self.weights_hidden_to_output.T,o_error)
h_grad = hidden_outputs * (1 - hidden_outputs)
# TODO: Update the weight
# print('o_error.shape is %d,%d '%o_error.shape)
# print('h_ouputs.shape is %d,%d' %hidden_outputs.shape)
self.weights_hidden_to_output += self.lr * np.dot(o_error, hidden_outputs.T) * 1
self.weights_input_to_hidden += self.lr * np.dot(h_error, inputs.T) * h_grad
def run(self, inputs_list):
# Run a forward pass through the network
inputs = np.array(inputs_list, ndmin=2).T
#### Implement the forward pass here ####
# TODO: Hidden layer
hidden_inputs = np.dot(self.weights_input_to_hidden, inputs)# signals into hidden layer
hidden_outputs = sigmoid(hidden_inputs)# signals from hidden layer
# TODO: Output layer
final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs)# signals into final output layer
final_outputs = final_inputs# signals from final output layer
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
import sys
### Set the hyperparameters here ###
epochs = 200
learning_rate = 0.2
hidden_nodes = 10
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for e in range(epochs):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
for record, target in zip(train_features.ix[batch].values,
train_targets.ix[batch]['cnt']):
network.train(record, target)
# Printing out the training progress
train_loss = MSE(network.run(train_features), train_targets['cnt'].values)
val_loss = MSE(network.run(val_features), val_targets['cnt'].values)
sys.stdout.write("\rProgress: " + str(100 * e/float(epochs))[:4] \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
plt.ylim(ymax=0.5)
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features)*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.ix[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
import unittest
inputs = [0.5, -0.2, 0.1]
targets = [0.4]
test_w_i_h = np.array([[0.1, 0.4, -0.3],
[-0.2, 0.5, 0.2]])
test_w_h_o = np.array([[0.3, -0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328, -0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, 0.39775194, -0.29887597],
[-0.20185996, 0.50074398, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load and prepare the data
Step2: Checking out the data
Step3: Splitting the data into training, testing, and validation sets
Step4: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
Step5: Time to build the network
Step6: Training the network
|
5,687
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import Image,Latex
from numpy import array, cross,dot, sqrt
AB = array([4.0,2.,0.]) - array([1., 0.,0.])
AC = array([0.,0.,3.]) - array([1., 0.,0.])
print 'AB=', AB, ',', 'AC=', AC
Nor = cross(AB,AC)
MNor = sqrt(dot(Nor,Nor))
n = Nor / MNor
print 'Normal=', Nor, ',', 'Magnitude', MNor
Sigma = ([[80., 40., 30.],
[40.,50., 10.],
[30., 10., 60.]])
t = dot(Sigma,n)
print t.round(2)
Sigmann = dot(t,n)
print Sigmann.round(2)
tau = sqrt(dot(t,t) - Sigmann**2)
print tau.round(2)
from IPython.core.display import HTML
def css_styling():
styles = open('./custom_barba.css', 'r').read()
return HTML(styles)
css_styling()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: If the stress tensor at a point $P$, in the $X,Y,Z$ reference frame, is defined by
Step2: The unit normal vector $\hat{n}$ is then obtained as $$ \vec{n} = \frac{\vec{AB} \times \vec{AC}}{\|\vec{AB} \times \vec{AC}\|} $$
Step3: On the other hand, the tensor $[\sigma]$ is
Step4: With this, the traction vector is then $\vec{t} = [\sigma]\,\hat{n}$
Step5: 2. Compute the normal stress $\sigma_{nn} = \vec{t} \cdot \hat{n}$ acting on the face at point $P$ that is parallel to the plane containing points $A$, $B$ and $C$.
Step6: 3. Compute the magnitude of the tangential (shear) stress $\tau = \sqrt{\|\vec{t}\|^2 - \sigma_{nn}^2}$ acting on the face at point $P$ that is parallel to the plane containing points $A$, $B$ and $C$ (a worked numerical summary is given below).
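For reference, a worked numerical summary (my own hand check of the arithmetic, rounded; it should agree with the values printed by the notebook):
$$\vec{AB} = (3,\,2,\,0), \qquad \vec{AC} = (-1,\,0,\,3), \qquad \vec{AB}\times\vec{AC} = (6,\,-9,\,2), \qquad \|\vec{AB}\times\vec{AC}\| = 11$$
$$\hat{n} = \tfrac{1}{11}(6,\,-9,\,2), \qquad \vec{t} = [\sigma]\,\hat{n} = \tfrac{1}{11}(180,\,-190,\,210) \approx (16.36,\,-17.27,\,19.09)$$
$$\sigma_{nn} = \vec{t}\cdot\hat{n} = \tfrac{3210}{121} \approx 26.53, \qquad \tau = \sqrt{\|\vec{t}\|^{2} - \sigma_{nn}^{2}} \approx 15.06$$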
|
5,688
|
<ASSISTANT_TASK:>
Python Code:
import os
os.getcwd()
%matplotlib inline
%pylab inline
import pandas as pd
import numpy as np
from collections import Counter, OrderedDict
import json
import matplotlib
import matplotlib.pyplot as plt
import re
from scipy.misc import imread
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import confusion_matrix, accuracy_score, classification_report
from pandas.io.json import json_normalize
import pickle
training_json = pd.DataFrame()
with open('data/training_data.nldjson') as data_file:
for line in data_file:
training_json = training_json.append(json_normalize(json.loads(line)))
with open('data/test.nldjson') as data_file:
for line in data_file:
out_test_json = json_normalize(json.loads(line))
out_test = out_test_json
training = training_json
out_test[0:1]
training['querytime'] = pd.to_datetime(training['querytime'])
out_test['querytime'] = pd.to_datetime(out_test['querytime'])
training = training.dropna()
training['post.occupancy'] = training['post.occupancy'].apply(lambda x: x.split("http://api.irail.be/terms/",1)[1])
training['post.vehicle'] = training['post.vehicle'].apply(lambda x: x.split("http://irail.be/vehicle/",1)[1])
out_test['post.vehicle'] = out_test['post.vehicle'].apply(lambda x: x.split("http://irail.be/vehicle/",1)[1])
#create class column, eg IC058 -> IC
training['post.class'] = training['post.vehicle'].apply(lambda x: " ".join(re.findall("[a-zA-Z]+", x)))
out_test['post.class'] = out_test['post.vehicle'].apply(lambda x: " ".join(re.findall("[a-zA-Z]+", x)))
#reset the index because you have duplicate indexes now because you appended DFs in a for loop
training = training.reset_index()
stations_df = pd.read_csv('data/stations.csv')
stations_df['from'] = stations_df.index
stations_df['destination'] = stations_df['from']
stations_df[0:4]
#post.from and post.to use the same URI format as stations_df['URI']
stations_df["zoekterm"]=stations_df["name"]+" trein"
stations_df.loc[stations_df['zoekterm'].str.startswith("Zaventem"), "zoekterm"] = "Zaventem trein"
stations_df.loc[stations_df['zoekterm'].str.startswith("Charleroi"), "zoekterm"] = "Charleroi trein"
stations_df.loc[stations_df['zoekterm'].str.startswith("Brussel"), "zoekterm"] = "Brussel trein"
stations_df.loc[stations_df['zoekterm'].str.startswith("Gent"), "zoekterm"] = "Gent trein"
stations_df.loc[stations_df['zoekterm'].str.startswith("Liège"), "zoekterm"] = "Luik trein"
stations_df.loc[stations_df['zoekterm'].str.startswith("Antwerpen"), "zoekterm"] = "Antwerpen trein"
druktes_df = pd.read_csv('data/station_druktes.csv')
druktes_df[0:4]
training = pd.merge(training,stations_df[["URI","from"]], left_on = 'post.from', right_on = 'URI')
training = pd.merge(training,stations_df[["URI","destination"]], left_on = 'post.to', right_on = 'URI')
training = training.drop(['URI_y','URI_x'],1)
out_test = pd.merge(out_test,stations_df[["URI","from"]], left_on = 'post.from', right_on = 'URI')
out_test = pd.merge(out_test,stations_df[["URI","destination"]], left_on = 'post.to', right_on = 'URI')
out_test = out_test.drop(['URI_y','URI_x'],1)
fig, ax = plt.subplots(1,1, figsize=(5,5))
training['post.class'].value_counts().plot(kind='pie', ax=ax, autopct='%1.1f%%')
#we have a lot of null/undefined class values, especially in our test set, so we can't simply throw them away
training['weekday'] = training['querytime'].apply(lambda l: l.weekday())
out_test['weekday'] = out_test['querytime'].apply(lambda l: l.weekday())
print("timerange from training data:",training['querytime'].min(),training['querytime'].max())
print(training['querytime'].describe())
print(out_test['querytime'].describe())
date_training = training.set_index('querytime')
date_test = out_test.set_index('querytime')
grouper = pd.TimeGrouper("1d")
date_training = date_training.groupby(grouper).size()
date_test = date_test.groupby(grouper).size()
# plot
fig, ax = plt.subplots(1,1, figsize=(10,7))
ax.plot(date_training)
ax.plot(date_test)
fig, ax = plt.subplots(1,1, figsize=(6,6))
training['weekday'].value_counts().plot(kind='pie', ax=ax, autopct='%1.1f%%')
training['post.occupancy'].value_counts()
training[0:1]
occup = pd.crosstab(training['post.class'], training['post.occupancy'])
weekday = pd.crosstab(training['post.class'], training['weekday'])
occup = occup.drop(['BUS', 'EUR', 'EXT', 'ICE', 'ICT', 'P', 'TGV', 'THA', 'TRN', 'ic', 'null', ''])
occup = occup.apply(lambda r: r/r.sum(), axis=1)
occup[0:4]
weekday = weekday.drop(['BUS', 'EUR', 'EXT', 'ICE', 'ICT', 'P', 'TGV', 'THA', 'TRN', 'ic', 'null', ''])
weekday = weekday.apply(lambda r: r/r.sum(), axis=1)
df_occup = pd.DataFrame(occup)
df_occup.plot.bar(stacked=True);
df_weekday = pd.DataFrame(weekday)
df_weekday.plot.bar(stacked=True);
stops = stations_df[['URI','longitude','latitude']]
dest_count = training['post.to'].value_counts()
dest_count_df = pd.DataFrame({'id':dest_count.index, 'count':dest_count.values})
dest_loc = pd.merge(dest_count_df, stops, left_on = 'id', right_on = 'URI')
dest_loc = dest_loc[['id', 'count', 'latitude','longitude']]
fig, ax = plt.subplots(figsize=(12,10))
ax.scatter(dest_loc.longitude, dest_loc.latitude, s=dest_loc['count'] )
def get_seconds_since_midnight(x):
midnight = x.replace(hour=0, minute=0, second=0, microsecond=0)
return (x - midnight).seconds
def get_line_number(x):
pattern = re.compile("^[A-Z]+([0-9]+)$")
if pattern.match(x):
return int(pattern.match(x).group(1))
else:
return x
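# Hypothetical usage (this helper is not applied anywhere in the visible cells):
#   get_line_number('IC3033') -> 3033;  get_line_number('ICT') -> 'ICT' (no trailing digits, returned unchanged)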
training['seconds_since_midnight'] = training['querytime'].apply(get_seconds_since_midnight)
training['month'] = training['querytime'].apply(lambda x: x.month)
training['occupancy'] = training['post.occupancy'].map({'low': 0, 'medium': 1, 'high': 2})
out_test['seconds_since_midnight'] = out_test['querytime'].apply(get_seconds_since_midnight)
out_test['month'] = out_test['querytime'].apply(lambda x: x.month)
fig, ax = plt.subplots(figsize=(5, 5))
corr_frame = training[['seconds_since_midnight', 'month', 'occupancy']].corr()
cax = ax.matshow(abs(corr_frame))
fig.colorbar(cax)
tickpos = np.array(range(0,len(corr_frame.columns)))
plt.xticks(tickpos,corr_frame.columns, rotation='vertical')
plt.yticks(tickpos,corr_frame.columns, rotation='horizontal')
plt.grid(None)
pd.tools.plotting.scatter_matrix(training[['seconds_since_midnight', 'month', 'occupancy']],
alpha=0.2, diagonal='kde', figsize=(10,10))
plt.grid(None)
skf = StratifiedKFold(n_splits=5, random_state=1337)
X = training[['seconds_since_midnight', 'month']]
y = training['occupancy']
cms = []
accs = []
for train_index, test_index in skf.split(X, y):
X_train, X_test = X.iloc[train_index, :], X.iloc[test_index, :]
y_train, y_test = y[train_index], y[test_index]
log_reg = LogisticRegression()
log_reg.fit(X_train, y_train)
predictions = log_reg.predict(X_test)
cm = confusion_matrix(y_test, predictions)
cms.append(cm)
accs.append(accuracy_score(y_test, predictions))
print(classification_report(y_test, predictions))
#accs.append(sum([float(cm[i][i]) for i in range(len(cm))])/np.sum(cm))
print('Confusion matrix:\n', np.mean(cms, axis=0))
print('Avg accuracy', np.mean(accs), '+-', np.std(accs))
print('Predict all lows', float(len(y[y == 0]))/float(len(y)))
training_class = training_holidays[training_holidays.class_enc != 0]
training_class = training_class[training_class.class_enc != 14]
test_class = training_holidays[(training_holidays.class_enc == 0)|(training_holidays.class_enc == 14)]
training_class["class_pred"]=training_class["class_enc"]
training_holidays_enc = pd.concat([training_class,test_class])
X_train = training_class[['seconds_since_midnight','weekday', 'month','id','id_2']]
X_test = test_class[['seconds_since_midnight','weekday', 'month','id','id_2']]
y_train = training_class['class_enc']
train.occupancy.value_counts()/train.shape[0]
test.occupancy.value_counts()/test.shape[0]
out_test_holidays_druktes.occupancy.value_counts()/out_test_holidays_druktes.shape[0]
from sklearn.linear_model import SGDClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.metrics import f1_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import RandomizedSearchCV, GridSearchCV
from sklearn.cross_validation import train_test_split
train, test = train_test_split(training_holidays_druktes, test_size=0.2, random_state=42)
X_train = train[['seconds_since_midnight','drukte_from','drukte_to','school','name_enc','day__0','day__1','day__2','day__3','day__4','day__5','day__6','from_lat','from_lng','des_lat','des_lng','real_trend','class__0','class__1','class__2','class__3','class__4','class__5','class__6','class__7','class__8', 'temperature', 'humidity', 'windspeed', 'weather_type', 'visibility']]
X_test = test[['seconds_since_midnight','drukte_from','drukte_to','school','name_enc','day__0','day__1','day__2','day__3','day__4','day__5','day__6','from_lat','from_lng','des_lat','des_lng','real_trend','class__0','class__1','class__2','class__3','class__4','class__5','class__6','class__7','class__8', 'temperature', 'humidity', 'windspeed', 'weather_type', 'visibility']]
y_train = train['occupancy']
y_test = test['occupancy']
# Remove 'month' from the feature set if we want to predict unseen months.
from xgboost import XGBClassifier
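# Gradient boosting: many shallow trees (max_depth=3) with a small learning rate and row/column subsampling to limit overfitting.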
xgb = XGBClassifier(n_estimators=5000, max_depth=3, min_child_weight=6, learning_rate=0.01,
colsample_bytree=0.5, subsample=0.6, gamma=0., nthread=-1,
max_delta_step=1, objective='multi:softmax')
xgb.fit(X_train, y_train, sample_weight=[1]*len(y_train))
print(xgb.score(X_train,y_train))
print(xgb.score(X_test, y_test))
ac = AdaBoostClassifier()
ada_param_grid = {'n_estimators': [10, 30, 100, 300, 1000],
'learning_rate': [0.1, 0.3, 1.0, 3.0]}
ac_grid = GridSearchCV(ac,ada_param_grid,cv=3,
scoring='accuracy')
ac_grid.fit(X_train, y_train)
ac = ac_grid.best_estimator_
#ac.fit(X_train, y_train)
#print(ac_grid.score(X_train,y_train))
#print(ac_grid.score(X_test, y_test))
rf = RandomForestClassifier()
param_dist = {"n_estimators": [20],
"max_depth": [7, None],
"max_features": range(4, 6),
"min_samples_split": range(2, 7),
"min_samples_leaf": range(1, 7),
"bootstrap": [True, False],
"criterion": ["gini", "entropy"]}
rand = GridSearchCV(rf,param_dist,cv=3,
scoring='accuracy')
rand.fit(X_train, y_train)
rf = rand.best_estimator_
print(rand.best_estimator_)
# rf.fit(X_train, y_train)
# print(rf.score(X_train,y_train))
# print(rf.score(X_test, y_test))
from sklearn.tree import DecisionTreeClassifier
dtc = DecisionTreeClassifier(random_state=0)
dtc.fit(X_train, y_train)
print(dtc.score(X_train,y_train))
print(dtc.score(X_test, y_test))
rf2 = rand.best_estimator_
rf3 = rand.best_estimator_
rf4 = rand.best_estimator_
# voting_clf = VotingClassifier(
# estimators=[('ac', ac), ('rf', rf), ('dtc', dtc),('rf2', rf2), ('rf3', rf3), ('rf4', rf4), ('xgb', xgb)],
# voting='hard'
# )
voting_clf = VotingClassifier(
estimators=[('ac', ac), ('rf', rf), ('xgb', xgb)],
voting='hard'
)
from sklearn.metrics import accuracy_score
for clf in (ac, rf, xgb, voting_clf):
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print(clf.__class__.__name__, accuracy_score(y_test, y_pred))
ac.fit(X_train, y_train)
voting_clf.fit(X_train, y_train)
y_pred = voting_clf.predict(X_test)
print(voting_clf.__class__.__name__, accuracy_score(y_test, y_pred))
pd.DataFrame([X_train.columns, rf.feature_importances_])
y_predict_test = voting_clf.predict(out_test_holidays_druktes[['seconds_since_midnight','drukte_from','drukte_to','school','name_enc','day__0','day__1','day__2','day__3','day__4','day__5','day__6','from_lat','from_lng','des_lat','des_lng','real_trend','class__0','class__1','class__2','class__3','class__4','class__5','class__6','class__7','class__8','temperature', 'humidity', 'windspeed', 'weather_type', 'visibility']])
y_predict_test = rf.predict(out_test_holidays_druktes[['seconds_since_midnight','drukte_from','drukte_to','school','name_enc','day__0','day__1','day__2','day__3','day__4','day__5','day__6','from_lat','from_lng','des_lat','des_lng','real_trend','class__0','class__1','class__2','class__3','class__4','class__5','class__6','class__7','class__8','temperature', 'humidity', 'windspeed', 'weather_type', 'visibility']])
out_test_holidays_druktes["occupancy"] = y_predict_test
out_test_holidays_druktes.occupancy.value_counts()/out_test_holidays_druktes.shape[0]
train.occupancy.value_counts()/train.shape[0]
out_test_holidays_druktes[['seconds_since_midnight','drukte_from','drukte_to','name_enc','class_enc','day__0','day__1','day__2','day__3','day__4','day__5','day__6','trend','occupancy']][0:100]
out_test_holidays_druktes[["id","occupancy"]].to_csv('predictions.csv',index=False)
skf = StratifiedKFold(n_splits=5, random_state=1337)
X = training[['seconds_since_midnight', 'month']]
y = training['occupancy']
cms = []
accs = []
parameters = {#'penalty': ['l1', 'l2'], # No penalty tuning, cause 'l1' is only supported by liblinear
# It can be interesting to manually take a look at 'l1' with 'liblinear', since LASSO
# provides sparse solutions (boils down to the fact that LASSO does some feature selection for you)
'solver': ['newton-cg', 'lbfgs', 'liblinear', 'sag'],
'tol': [1e-4, 1e-6, 1e-8],
'C': [1e-2, 1e-1, 1.0, 1e1],
'max_iter': [1e2, 1e3]
}
for train_index, test_index in skf.split(X, y):
X_train, X_test = X.iloc[train_index, :], X.iloc[test_index, :]
y_train, y_test = y[train_index], y[test_index]
tuned_log_reg = GridSearchCV(LogisticRegression(penalty='l2'), parameters, cv=3,
scoring='accuracy')
tuned_log_reg.fit(X_train, y_train)
print(tuned_log_reg.best_params_)
predictions = tuned_log_reg.predict(X_test)
cm = confusion_matrix(y_test, predictions)
cms.append(cm)
accs.append(accuracy_score(y_test, predictions))
print(classification_report(y_test, predictions))
print('Confusion matrix:\n', np.mean(cms, axis=0))
print('Avg accuracy', np.mean(accs), '+-', np.std(accs))
print('Predict all lows', float(len(y[y == 0]))/float(len(y)))
holiday_pops = pd.read_json('data/holidays.json')
holidays = pd.read_json( (holiday_pops['holidays']).to_json(), orient='index')
holidays['date'] = pd.to_datetime(holidays['date'])
holidays.head(1)
training["date"] = training["querytime"].values.astype('datetime64[D]')
out_test["date"] = out_test["querytime"].values.astype('datetime64[D]')
training_holidays = pd.merge(training,holidays, how="left", on='date')
training_holidays.school = training_holidays.school.fillna(0)
training_holidays.name = training_holidays.name.fillna("geen")
training_holidays[0:1]
out_test_holidays = pd.merge(out_test,holidays, how="left", on='date')
out_test_holidays.school = out_test_holidays.school.fillna(0)
out_test_holidays.name = out_test_holidays.name.fillna("geen")
out_test_holidays[0:1]
from sklearn.preprocessing import LabelEncoder
encoder = LabelEncoder()
#encode the names from the holidays (Summer,Christmas...)
training_holidays["name_enc"] = encoder.fit_transform(training_holidays["name"])
out_test_holidays["name_enc"] = encoder.fit_transform(out_test_holidays["name"])
#encode the classes (IC,TGV,L...)
training_holidays["class_enc"] = encoder.fit_transform(training_holidays["post.class"])
out_test_holidays["class_enc"] = encoder.fit_transform(out_test_holidays["post.class"])
training_holidays=training_holidays.rename(columns = {'too':'destination'})
out_test_holidays=out_test_holidays.rename(columns = {'too':'destination'})
stations_df = pd.merge(stations_df,druktes_df.drop(['Unnamed: 0','station'],1), left_on = 'name', right_on = 'station_link')
def transform_druktes(row):
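# Look up coordinates and a day-dependent crowdedness estimate for the origin and destination of each query
# ('drukte' is Dutch for crowdedness; 'zaterdag', 'zondag' and 'week' are the Saturday/Sunday/weekday columns
# in the station data).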
start = row['from']
destination = row['destination']
day = row['weekday']
row['from_lat']=stations_df[stations_df["from"] == start]["latitude"].values[0]
row['from_lng']=stations_df[stations_df["from"] == start]["longitude"].values[0]
row['des_lat']=stations_df[stations_df["destination"] == destination]["latitude"].values[0]
row['des_lng']=stations_df[stations_df["destination"] == destination]["longitude"].values[0]
row['zoekterm']=stations_df[stations_df["destination"] == destination]["zoekterm"].values[0]
if day == 5:
row['drukte_from']=stations_df[stations_df["from"] == start]["zaterdag"].values[0]
row['drukte_to']=stations_df[stations_df["destination"] == destination]["zaterdag"].values[0]
elif day == 6:
row['drukte_from']=stations_df[stations_df["from"] == start]["zondag"].values[0]
row['drukte_to']=stations_df[stations_df["destination"] == destination]["zondag"].values[0]
elif day == 4:
row['drukte_from']=stations_df[stations_df["from"] == start]["zondag"].values[0]/5.0*1.11
row['drukte_to']=stations_df[stations_df["destination"] == destination]["zondag"].values[0]/5.0*1.11
elif day == 3:
row['drukte_from']=stations_df[stations_df["from"] == start]["zondag"].values[0]/5.0*1.21
row['drukte_to']=stations_df[stations_df["destination"] == destination]["zondag"].values[0]/5.0*1.21
elif day == 2:
row['drukte_from']=stations_df[stations_df["from"] == start]["zondag"].values[0]/5.0*0.736
row['drukte_to']=stations_df[stations_df["destination"] == destination]["zondag"].values[0]/5.0*0.736
elif day == 1:
row['drukte_from']=stations_df[stations_df["from"] == start]["zondag"].values[0]/5.0*0.92
row['drukte_to']=stations_df[stations_df["destination"] == destination]["zondag"].values[0]/5.*0.92
else:
row['drukte_from']=stations_df[stations_df["from"] == start]["week"].values[0]/5.0*1.016
row['drukte_to']=stations_df[stations_df["destination"] == destination]["week"].values[0]/5.0*1.016
return row
training_holidays_druktes = training_holidays_druktes.apply(transform_druktes, axis=1)
out_test_holidays_druktes = out_test_holidays_druktes.apply(transform_druktes, axis=1)
training_holidays_druktes = pd.concat([training_holidays_druktes,
pd.get_dummies(training_holidays_druktes['weekday'], prefix="day_"),
],1)
out_test_holidays_druktes = pd.concat([out_test_holidays_druktes,
pd.get_dummies(out_test_holidays_druktes['weekday'], prefix="day_"),
],1)
trends_df = pd.DataFrame()
real_trend_df = pd.DataFrame()
from pytrends.request import TrendReq
import pandas as pd
# enter your own credentials
google_username = "davidjohansmolders@gmail.com"
google_password = "*******"
#path = ""
# Login to Google. Only need to run this once, the rest of requests will use the same session.
pytrend = TrendReq(google_username, google_password, custom_useragent='My Pytrends Script')
for i in range(0,645):
if i % 4 != 0:
continue
try:
pytrend.build_payload(kw_list=[stations_df[stations_df.destination == i].zoekterm.values[0], stations_df[stations_df.destination == i+1].zoekterm.values[0], stations_df[stations_df.destination == i+2].zoekterm.values[0], stations_df[stations_df.destination == i+3].zoekterm.values[0], "Brussel trein"],geo="BE",timeframe='2016-07-27 2017-04-05')
real_trend_df = pd.concat([real_trend_df,pytrend.interest_over_time()], axis=1)
except:
continue
no_dup_trends = trends_df.T.groupby(level=0).first().T
training_holidays_druktes = pd.merge(training_holidays_druktes,stations_df[["destination","zoekterm"]], left_on = 'destination', right_on = 'destination')
out_test_holidays_druktes = pd.merge(out_test_holidays_druktes,stations_df[["destination","zoekterm"]], left_on = 'destination', right_on = 'destination')
int(real_trend_df.loc["2016-07-28"]["Brussel trein"])
training_holidays_druktes_copy = training_holidays_druktes
out_test_holidays_druktes_copy = out_test_holidays_druktes
training_holidays_druktes = training_holidays_druktes_copy
out_test_holidays_druktes = out_test_holidays_druktes_copy
def get_trends(row):
zoek = str(row.zoekterm)
datum = str(row["date"])[0:10]
try:
row["real_trend"] = int(real_trend_df.loc[datum][zoek])
except:
row["real_trend"] = 0
return row
training_holidays_druktes = training_holidays_druktes.apply(get_trends, axis=1)
out_test_holidays_druktes = out_test_holidays_druktes.apply(get_trends, axis=1)
training_holidays_druktes = training_holidays_druktes.drop(['post.date','post.from','post.vehicle','querytype','user_agent','post.to','name','post.class'],1)
out_test_holidays_druktes = out_test_holidays_druktes.drop(['post.date','post.from','post.vehicle','querytype','user_agent','post.to','name','post.class'],1)
training_holidays_druktes["hour"] = training_holidays_druktes["querytime"].values.astype('datetime64[h]').astype('str')
out_test_holidays_druktes["hour"] = out_test_holidays_druktes["querytime"].values.astype('datetime64[h]').astype('str')
training_holidays_druktes["hour_lag"] = (training_holidays_druktes["querytime"].values.astype('datetime64[h]')-2).astype('str')
out_test_holidays_druktes["hour_lag"] = (out_test_holidays_druktes["querytime"].values.astype('datetime64[h]')-2).astype('str')
training_holidays_druktes["timeframe"] = training_holidays_druktes["hour_lag"]+" "+training_holidays_druktes["hour"]
out_test_holidays_druktes["timeframe"] = out_test_holidays_druktes["hour_lag"]+" "+out_test_holidays_druktes["hour"]
# enter your own credentials
google_username = "davidjohansmolders@gmail.com"
google_password = "*******"
#path = ""
# Login to Google. Only need to run this once, the rest of requests will use the same session.
pytrend = TrendReq(google_username, google_password, custom_useragent='My Pytrends Script')
def get_hour_trends(row):
zoek = str(row.zoekterm_x)
tijd = str(row["timeframe"])
try:
pytrend.build_payload(kw_list=[zoek],timeframe=tijd)
row["hour_trend"] = int(pytrend.interest_over_time()[zoek].sum())
except:
row["hour_trend"] = 0
return row
training_holidays_druktes = training_holidays_druktes.apply(get_hour_trends, axis=1)
out_test_holidays_druktes = out_test_holidays_druktes.apply(get_hour_trends, axis=1)
training_holidays_druktes = pd.concat([training_holidays_druktes,
pd.get_dummies(training_holidays_druktes['class_enc'], prefix="class_"),
],1)
out_test_holidays_druktes = pd.concat([out_test_holidays_druktes,
pd.get_dummies(out_test_holidays_druktes['class_enc'], prefix="class_"),
],1)
# file names for all csv files containing weather information per month
weather_csv = ['weather_data_apr_1', 'weather_data_apr_2', 'weather_data_aug_1', 'weather_data_aug_2', 'weather_data_dec_1', 'weather_data_dec_2', 'weather_data_feb_1', 'weather_data_feb_2', 'weather_data_jan_1', 'weather_data_jan_2', 'weather_data_july_1', 'weather_data_july_2', 'weather_data_mar_1', 'weather_data_mar_2', 'weather_data_nov_1', 'weather_data_nov_2', 'weather_data_oct_1', 'weather_data_oct_2', 'weather_data_sep_1', 'weather_data_sep_2']
for i in range(len(weather_csv)):
weather_csv[i] = 'data/weather_data/' + weather_csv[i] + '.csv'
# create column of station index
stations_df['station_index'] = stations_df.index
# put all weather data in an array
weather_months = []
for csv in weather_csv:
weather_month = pd.read_csv(csv)
# convert date_time to a datetime object
weather_month['date_time'] = pd.to_datetime(weather_month['date_time'])
weather_month = weather_month.drop(['Unnamed: 0','lat','lng'], 1)
weather_months.append(weather_month)
# concatenate all weather data
weather = pd.concat(weather_months)
# merge weather month with station to convert station name to index (that can be found in holiday_druktes)
weather = pd.merge(weather, stations_df[["name", "station_index"]], left_on = 'station_name', right_on = 'name')
weather = weather.drop(['station_name', 'name'], 1)
# truncate querytime to the hour in new column
training_holidays_druktes['querytime_hour'] = training_holidays_druktes['querytime'].apply(lambda dt: datetime.datetime(dt.year, dt.month, dt.day, dt.hour))
# join training with weather data
training_holidays_druktes_weather = pd.merge(training_holidays_druktes, weather, how='left', left_on = ['destination', 'querytime_hour'], right_on = ['station_index', 'date_time'])
training_holidays_druktes_weather = training_holidays_druktes_weather.drop(['querytime_hour', 'date_time', 'station_index'], 1)
training_holidays_druktes_weather = training_holidays_druktes_weather.drop_duplicates()
# fill null rows of weather data with their mean
training_holidays_druktes_weather['temperature'].fillna(training_holidays_druktes_weather['temperature'].mean(), inplace=True)
training_holidays_druktes_weather['humidity'].fillna(training_holidays_druktes_weather['humidity'].mean(), inplace=True)
training_holidays_druktes_weather['windspeed'].fillna(training_holidays_druktes_weather['windspeed'].mean(), inplace=True)
training_holidays_druktes_weather['visibility'].fillna(training_holidays_druktes_weather['visibility'].mean(), inplace=True)
training_holidays_druktes_weather['weather_type'].fillna(training_holidays_druktes_weather['weather_type'].mean(), inplace=True)
#cast weather type to int
training_holidays_druktes_weather['weather_type'] = training_holidays_druktes_weather['weather_type'].astype(int)
# Add weather data to test data
# truncate querytime to the hour in new column
out_test_holidays_druktes['querytime_hour'] = out_test_holidays_druktes['querytime'].apply(lambda dt: datetime.datetime(dt.year, dt.month, dt.day, dt.hour))
# join test with weather data
out_test_holidays_druktes_weather = pd.merge(out_test_holidays_druktes, weather, how='left', left_on = ['destination', 'querytime_hour'], right_on = ['station_index', 'date_time'])
out_test_holidays_druktes_weather = out_test_holidays_druktes_weather.drop(['querytime_hour', 'date_time', 'station_index'], 1)
out_test_holidays_druktes_weather = out_test_holidays_druktes_weather.drop_duplicates()
# fill null rows of weather data with their mean
out_test_holidays_druktes_weather['temperature'].fillna(out_test_holidays_druktes_weather['temperature'].mean(), inplace=True)
out_test_holidays_druktes_weather['humidity'].fillna(out_test_holidays_druktes_weather['humidity'].mean(), inplace=True)
out_test_holidays_druktes_weather['windspeed'].fillna(out_test_holidays_druktes_weather['windspeed'].mean(), inplace=True)
out_test_holidays_druktes_weather['visibility'].fillna(out_test_holidays_druktes_weather['visibility'].mean(), inplace=True)
out_test_holidays_druktes_weather['weather_type'].fillna(out_test_holidays_druktes_weather['weather_type'].mean(), inplace=True)
#cast weather type to int
out_test_holidays_druktes_weather['weather_type'] = out_test_holidays_druktes_weather['weather_type'].astype(int)
# set out_test_holidays_druktes and training_holidays_druktes equal to weather counterpart such that we don't need to change all variable names above
out_test_holidays_druktes = out_test_holidays_druktes_weather
training_holidays_druktes = training_holidays_druktes_weather
pickle.dump(training_holidays_druktes,open("temp_data/training_holidays_druktes.pkl","wb"))
pickle.dump(out_test_holidays_druktes,open("temp_data/out_test_holidays_druktes.pkl","wb"))
training_holidays_druktes = pd.read_pickle("temp_data/training_holidays_druktes.pkl")
out_test_holidays_druktes = pd.read_pickle("temp_data/out_test_holidays_druktes.pkl")
training_holidays_druktes[0:5]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 0. Create a kaggle account! https
Step2: Processing the data
Step3: 1.2
Step4: 2.2
Step5: 2.3
Step6: 2.4
Step7: 3. Predictive modeling
Step8: We train our model on a 'training set' and evaluate it on the test set. Functionality for making this split automatically can be found <a href="http
Step9: Since we have a lot of 'Null' values (roughly one third) for our 'class' feature, and we don't want to throw that data away, we can try to predict these labels from the other features; we get over 75% accuracy, which seems sufficient. But we can't forget to do the same thing for the test set!
Step10: 4. 'Advanced' predictive modeling
Step11: 5. Data augmentation with external data sources
Step12: Transform all null classes into a single null class; maybe try to predict the class based on origin, destination and time?
Step13: 6. Generating a Kaggle submission and comparing your methodology to others
|
5,689
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import scipy as sp
import pandas as pd
import mlutils
import matplotlib.pyplot as plt
%pylab inline
from sklearn.svm import SVC
seven_X = np.array([[2,1], [2,3], [1,2], [3,2], [5,2], [5,4], [6,3]])
seven_y = np.array([1, 1, 1, 1, -1, -1, -1])
fit = SVC(gamma = 'scale', kernel = 'linear').fit(seven_X, seven_y)
mlutils.plot_2d_svc_problem(seven_X, seven_y, fit)
mlutils.plot_2d_clf_problem(seven_X, seven_y, fit.predict)
from sklearn.metrics import hinge_loss
def hinge(model, x, y):
return max(0, 1-y*model.decision_function(x))
x1b = np.array([[3,2], [3.5, 2], [4,2]])
y1b = np.array([1, 1, -1])
suma = 0
for i in range(0, len(seven_X)):
suma += hinge(fit, seven_X[i].reshape(1,-1), seven_y[i])
my_hinge_loss = suma / len(seven_X)
print('my hinge loss: ' + str(my_hinge_loss[0]))
print('inbuild hinge loss: ' + str(hinge_loss(seven_y, fit.decision_function(seven_X))))
from sklearn.metrics import accuracy_score
outlier_X = np.append(seven_X, [[12,8]], axis=0)
outlier_y = np.append(seven_y, -1)
unsep_X = np.append(seven_X, [[2,2]], axis=0)
unsep_y = np.append(seven_y, -1)
outlier = SVC(kernel = 'linear').fit(outlier_X, outlier_y)
outlier_accuracy = accuracy_score(outlier_y, outlier.predict(outlier_X))
print('outlier acc:')
print(outlier_accuracy)
unsep = SVC(kernel = 'linear').fit(unsep_X, unsep_y)
unsep_accuracy = accuracy_score(unsep_y, unsep.predict(unsep_X))
print('unsep acc:')
print(unsep_accuracy)
figure(figsize(12,4))
subplot(1,2,1)
mlutils.plot_2d_svc_problem(outlier_X, outlier_y, outlier)
mlutils.plot_2d_clf_problem(outlier_X, outlier_y, outlier.predict)
subplot(1,2,2)
mlutils.plot_2d_svc_problem(unsep_X, unsep_y, unsep)
mlutils.plot_2d_clf_problem(unsep_X, unsep_y, unsep.predict)
C = [10**(-2), 1, 10**2]
jezgra = ['linear', 'poly', 'rbf']
k = 1
figure(figsize(12, 10))
subplots_adjust(wspace=0.1, hspace = 0.2)
for i in C:
for j in jezgra:
uns = SVC(C = i, kernel = j, gamma='scale').fit(unsep_X, unsep_y)
h = uns.predict
subplot(3,3,k)
mlutils.plot_2d_svc_problem(unsep_X, unsep_y, uns)
mlutils.plot_2d_clf_problem(unsep_X, unsep_y, uns.predict)
title(str(i) + ', ' + j);
k+=1
from sklearn.metrics import accuracy_score, zero_one_loss
def grid_search(X_train, X_validate, y_train, y_validate, c_range=(0,5), g_range=(0,5), error_surface=False):
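# Exhaustive grid search over C = 2^c and gamma = 2^g; returns the pair with the lowest validation error
# (and optionally the full training/validation error surfaces).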
# Your code here
C = []
G = []
for c in range(c_range[0], c_range[1] + 1):
C.append(2**c)
for g in range(g_range[0], g_range[1] + 1):
G.append(2**g)
err_train = 0
err_validate = 0
err_minimum = inf
C_optimal = 0
G_optimal = 0
error_train_all = np.zeros((len(C), len(G)));
error_validate_all = np.zeros((len(C), len(G)));
for c in C:
for g in G:
svm = SVC(C=c, gamma=g).fit(X_train, y_train)
h_train = svm.predict(X_train)
err_train = zero_one_loss(y_train, h_train)
h_validate = svm.predict(X_validate)
err_validate = zero_one_loss(y_validate, h_validate)
error_train_all[C.index(c)][G.index(g)] = err_train
error_validate_all[C.index(c)][G.index(g)] = err_validate
if err_validate < err_minimum:
err_minimum = err_validate
C_optimal = c
G_optimal = g
if error_surface:
return C_optimal, G_optimal, error_train_all, error_validate_all
else:
return C_optimal, G_optimal
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
# Your code here
X1, y1 = make_classification(n_samples=200, n_features=2, n_informative=2, n_redundant=0, n_clusters_per_class=2)
X1_train, X1_validate, y1_train, y1_validate = train_test_split(X1, y1, test_size = 0.5)
X2, y2 = make_classification(n_samples=200, n_features=1000, n_informative=1000, n_redundant=0, n_clusters_per_class=2)
X2_train, X2_validate, y2_train, y2_validate = train_test_split(X2, y2, test_size = 0.5)
c_range=(-5, 15)
g_range=(-15, 3)
C1_opt, G1_opt, err_train1, err_validate1 = grid_search(X1_train, X1_validate, y1_train, y1_validate, c_range, g_range, True)
C2_opt, G2_opt, err_train2, err_validate2 = grid_search(X2_train, X2_validate, y2_train, y2_validate, c_range, g_range, True)
print("C1 optimal:", C1_opt, "C2 optimal:", C2_opt)
print("gamma1 optimal:", G1_opt, "G2 optimal:", G2_opt)
fig = plt.figure(figsize=(12, 8))
ax = fig.add_subplot(2, 2, 1)
ax.grid()
mlutils.plot_error_surface(err_train1, c_range, g_range)
title("TRAIN1 error")
ax = fig.add_subplot(2, 2, 2)
ax.grid()
mlutils.plot_error_surface(err_train2, c_range, g_range)
title("TRAIN2 error")
ax = fig.add_subplot(2, 2, 3)
ax.grid()
mlutils.plot_error_surface(err_validate1, c_range, g_range)
title("VALIDATION1 errors")
ax = fig.add_subplot(2, 2, 4)
ax.grid()
mlutils.plot_error_surface(err_validate2, c_range, g_range)
title("VALIDATION2 errors")
from sklearn.datasets import make_classification
X, y = make_classification(n_samples=500,n_features=2,n_classes=2,n_redundant=0,n_clusters_per_class=1, random_state=69)
X[:,1] = X[:,1]*100+1000
X[0,1] = 3000
mlutils.plot_2d_svc_problem(X, y)
figure(figsize(10, 5))
subplot(1,2,1)
hist(X[:,0], bins = 50);
subplot(1,2,2)
hist(X[:,1], bins = 50);
from sklearn.preprocessing import MinMaxScaler
x0b = MinMaxScaler().fit_transform(X[:,0].reshape(-1,1), y)
x1b = MinMaxScaler().fit_transform(X[:,1].reshape(-1,1), y)
figure(figsize(10, 5))
subplot(1,2,1)
hist(x0b, bins = 50)
subplot(1,2,2)
hist(x1b, bins = 50)
from sklearn.preprocessing import StandardScaler
x0b = StandardScaler().fit_transform(X[:,0].reshape(-1,1), y)
x1b = StandardScaler().fit_transform(X[:,1].reshape(-1,1), y)
figure(figsize(10, 5))
subplot(1,2,1)
hist(x0b, bins = 50)
subplot(1,2,2)
hist(x1b, bins = 50)
from sklearn.model_selection import train_test_split
err_unscaled = []
err_std = []
err_minmax = []
for i in range(0, 30):
X_train, X_validate, y_train, y_validate = train_test_split(X, y, test_size = 0.5)
model_unscaled = SVC(gamma = 'scale').fit(X_train, y_train)
prediction_unscaled = model_unscaled.predict(X_validate)
err_unscaled.append(accuracy_score(y_validate, prediction_unscaled))
ss = StandardScaler()
X_std_train = ss.fit_transform(X_train)
X_std_valid = ss.transform(X_validate)
model_standard = SVC(gamma = 'scale').fit(X_std_train, y_train)
prediction_standard = model_standard.predict(X_std_valid)
err_std.append(accuracy_score(y_validate, prediction_standard))
mm = MinMaxScaler()
X_minmax_train = mm.fit_transform(X_train)
X_minmax_valid = mm.transform(X_validate)
model_minmax = SVC(gamma = 'scale').fit(X_minmax_train, y_train)
prediction_minmax = model_minmax.predict(X_minmax_valid)
err_minmax.append(accuracy_score(y_validate, prediction_minmax))
print('Unscaled')
print(mean(err_unscaled))
print('Std')
print(mean(err_std))
print('Min max')
print(mean(err_minmax))
from numpy.linalg import norm
from collections import Counter
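# Brute-force k-nearest-neighbours classifier: stores the training set and predicts by majority vote
# among the k training points closest in Euclidean distance.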
class KNN(object):
def __init__(self, n_neighbors=3):
self.n_neighbors = n_neighbors
self.domain = []
def fit(self, X_train, y_train):
for x,y in zip(X_train, y_train):
self.domain.append([x, y])
return self.domain
def predict(self, X_test):
pred = []
for x in X_test:
dist = []
y = []
counter = []
for xd, yd in self.domain:
dist.append(norm(x-xd))
y.append(yd)
dat = sorted(zip(dist, y))[0: self.n_neighbors]
for i in range(0, self.n_neighbors):
counter.append(dat[i][1])
pred.append((Counter(counter).most_common()[0])[0])
return pred
from sklearn.datasets import make_classification
X_art, y_art = make_classification(n_samples=100, n_features=2, n_classes=2,
n_redundant=0, n_clusters_per_class=2,
random_state=69)
mlutils.plot_2d_clf_problem(X_art, y_art)
from sklearn.neighbors import KNeighborsClassifier
knn = KNN()
knn.fit(X_art, y_art)
predict_knn = knn.predict(X_art)
builtin = KNeighborsClassifier(algorithm = 'brute', n_neighbors = 3).fit(X_art, y_art)
predict_knc = builtin.predict(X_art)
print('diff')
print(norm(predict_knn - predict_knc))
def knn_eval(n_instances=100, n_features=2, n_classes=2, n_informative=2, test_size=0.3, k_range=(1, 20), n_trials=100):
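# Repeatedly generates random classification problems and returns the k with the lowest average test error,
# together with the average train/test errors for every k in k_range.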
best_k = 0; train_errors = []; test_errors = [];
for k in range(k_range[0],k_range[1]+1):
train_error = []
test_error = []
for j in range(0, n_trials):
X, y = make_classification(n_samples=n_instances, n_features=n_features, n_classes=n_classes, n_informative=n_informative, n_redundant=0, n_clusters_per_class=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = test_size)
mod = KNeighborsClassifier(algorithm = 'brute', n_neighbors = k).fit(X_train, y_train)
prediction_train = mod.predict(X_train)
prediction_test = mod.predict(X_test)
train_error.append(zero_one_loss(y_train, prediction_train))
test_error.append(zero_one_loss(y_test, prediction_test))
train_errors.append(mean(train_error))
test_errors.append(mean(test_error))
best_k = k_range[0] + test_errors.index(min(test_errors))
return (best_k, train_errors, test_errors)
hiperparams = range(1, 21)
N = [100, 500, 1000, 3000]
figure(figsize(11, 8))
i = 1
for n in N:
[k, err_tr, err_tst] = knn_eval(n_instances=n, k_range=(1, 20))
subplot(2,2,i)
plot(np.array(range(1,21)), err_tr)
plot(np.array(range(1,21)), err_tst)
scatter(k,err_tst[hiperparams.index(k)])
legend(['train error', 'test error', 'min test err'], loc = 'best', prop={'size':10})
title('\nN = ' + str(n) + '\nk = ' + str(k))
xticks(hiperparams)
grid()
i+=1
from sklearn.metrics.pairwise import pairwise_distances
from numpy.random import random, random_sample
dims = []
vals = []
cosine = []
for dim in range(1, 50):
arrs = random_sample((50, dim))
arrs2 = random_sample((50, dim))
# print(arrs)
dists = norm(pairwise_distances(X=arrs, Y=arrs2))
#print(dists)
dims.append(dim)
vals.append(dists)
dists = norm(pairwise_distances(X=arrs, Y=arrs2, metric='cosine'))
cosine.append(dists)
#print(dims)
#print(vals)
plot(dims, vals)
plot(dims, cosine)
grid()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Support vector machine (SVM) classifier
Step2: Q
Step3: (c)
Step4: Q
Step5: 3. SVM hyperparameter optimization
Step6: (b)
Step7: Q
Step8: (a)
Step9: (b)
Step10: Q
Step11: Q
Step12: Q
Step13: (b)
Step14: 6. Analysis of the k-nearest neighbours algorithm
Step15: Q
Step16: Q
|
5,690
|
<ASSISTANT_TASK:>
Python Code:
import sys
sys.path.append('..')
import socnet as sn
sn.node_size = 10
sn.edge_width = 1
sn.edge_color = (192, 192, 192)
sn.node_label_position = 'top center'
g1 = sn.load_graph('renaissance.gml', has_pos=True)
g2 = sn.load_graph('../encontro02/1-introducao.gml', has_pos=True)
sn.show_graph(g1, nlab=True)
sn.show_graph(g2, nlab=True)
from random import choice
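# Number of Monte Carlo repetitions for each simulated centrality.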
TIMES = 1000
def simulate_closeness_transfer_geodesic(g):
# Initialize the averages.
for n in g.nodes():
g.node[n]['simulated_closeness'] = 0
for _ in range(TIMES):
for s in g.nodes():
# Initialize the closeness of s.
g.node[s]['closeness'] = 0
for t in g.nodes():
if s != t:
# Black-box function that computes, for each node, the subset of its
# neighbors that belong to geodesics from s to t. This subset is stored
# in the shortest_neighbors attribute. You will learn how to open this
# black box in later meetings.
sn.build_shortest_paths(g, s, t)
# Depending on the process, the flow may not succeed, i.e. it may get
# stuck in one part of the graph and never reach t. When that happens,
# we simply try again.
success = False
while not success:
# We call a node an "owner" if it holds the good carried by the flow.
# At the start of the process, the only owner is s.
for n in g.nodes():
g.node[n]['owner'] = False
g.node[s]['owner'] = True
time = 1
while True:
# The set nodes_reached holds the nodes that the flow reaches
# when it "advances one more step".
nodes_reached = set()
for n in g.nodes():
if g.node[n]['owner']:
# TRANSFER: we randomly choose one of the valid neighbors.
# GEODESIC: the valid neighbors are those that belong to geodesics.
m = choice(g.node[n]['shortest_neighbors'])
nodes_reached.add(m)
# TRANSFER: the flow hands the good over to the nodes it reaches,
# so this node stops being an owner. In duplication-based processes,
# the line below must not be executed.
g.node[n]['owner'] = False
# Every node the flow reaches becomes an owner.
for n in nodes_reached:
g.node[n]['owner'] = True
# If we reached t, stop the flow and stop trying.
if t in nodes_reached:
success = True
break
# If we reached no one, stop the flow and try again.
if not nodes_reached:
break
time += 1
# Add the time from s to t to the closeness of s.
g.node[s]['closeness'] += time
# Accumulate the averages.
for n in g.nodes():
g.node[n]['simulated_closeness'] += g.node[n]['closeness']
# Finalize the averages.
for n in g.nodes():
g.node[n]['simulated_closeness'] /= TIMES
from math import inf, isinf
from queue import Queue
def build_closeness(g):
for s in g.nodes():
# start of breadth-first search
q = Queue()
for n in g.nodes():
g.node[n]['d'] = inf
g.node[s]['d'] = 0
q.put(s)
while not q.empty():
n = q.get()
for m in g.neighbors(n):
if isinf(g.node[m]['d']):
g.node[m]['d'] = g.node[n]['d'] + 1
q.put(m)
# end of breadth-first search
g.node[s]['theoretical_closeness'] = 0
for n in g.nodes():
g.node[s]['theoretical_closeness'] += g.node[n]['d']
build_closeness(g1)
simulate_closeness_transfer_geodesic(g1)
for n in g1.nodes():
print(g1.node[n]['label'], g1.node[n]['theoretical_closeness'], g1.node[n]['simulated_closeness'])
build_closeness(g2)
simulate_closeness_transfer_geodesic(g2)
for n in g2.nodes():
print(g2.node[n]['label'], g2.node[n]['theoretical_closeness'], g2.node[n]['simulated_closeness'])
def simulate_betweenness_transfer_geodesic(g):
# Initialize the averages.
for n in g.nodes():
g.node[n]['simulated_betweenness'] = 0
for _ in range(TIMES):
# Initialize all betweenness values.
for n in g.nodes():
g.node[n]['betweenness'] = 0
for s in g.nodes():
for t in g.nodes():
if s != t:
# Black-box function that computes, for each node, the subset of its
# neighbors that belong to geodesics from s to t. This subset is stored
# in the shortest_neighbors attribute. You will learn how to open this
# black box in later meetings.
sn.build_shortest_paths(g, s, t)
# Depending on the process, the flow may not succeed, i.e. it may get
# stuck in one part of the graph and never reach t. When that happens,
# we simply try again.
success = False
while not success:
# We call a node an "owner" if it holds the good carried by the flow.
# At the start of the process, the only owner is s.
for n in g.nodes():
g.node[n]['owner'] = False
g.node[s]['owner'] = True
for n in g.nodes():
if n != s and n != t:
g.node[n]['partial_betweenness'] = 0
while True:
# The set nodes_reached holds the nodes that the flow reaches
# when it "advances one more step".
nodes_reached = set()
for n in g.nodes():
if g.node[n]['owner']:
# TRANSFER: we randomly choose one of the valid neighbors.
# GEODESIC: the valid neighbors are those that belong to geodesics.
m = choice(g.node[n]['shortest_neighbors'])
nodes_reached.add(m)
# TRANSFER: the flow hands the good over to the nodes it reaches,
# so this node stops being an owner. In duplication-based processes,
# the line below must not be executed.
g.node[n]['owner'] = False
# Every node the flow reaches becomes an owner.
for n in nodes_reached:
g.node[n]['owner'] = True
# If we reached t, stop the flow and stop trying.
if t in nodes_reached:
success = True
break
# If we reached no one, stop the flow and try again.
if not nodes_reached:
break
for n in nodes_reached:
if n != s and n != t:
g.node[n]['partial_betweenness'] += 1
# Sum all partial betweenness values of the intermediaries.
for n in g.nodes():
if n != s and n != t:
g.node[n]['betweenness'] += g.node[n]['partial_betweenness']
# Accumulate the averages. The value is divided by 2 to discount
# the symmetry of an undirected graph.
for n in g.nodes():
g.node[n]['simulated_betweenness'] += g.node[n]['betweenness'] / 2
# Finalize the averages.
for n in g.nodes():
g.node[n]['simulated_betweenness'] /= TIMES
sn.build_betweenness(g1)
simulate_betweenness_transfer_geodesic(g1)
for n in g1.nodes():
print(g1.node[n]['label'], g1.node[n]['theoretical_betweenness'], g1.node[n]['simulated_betweenness'])
sn.build_betweenness(g2)
simulate_betweenness_transfer_geodesic(g2)
for n in g2.nodes():
print(g2.node[n]['label'], g2.node[n]['theoretical_betweenness'], g2.node[n]['simulated_betweenness'])
def simulate_closeness_serial_path(g):
pass
def simulate_closeness_transfer_path(g):
pass
def simulate_closeness_serial_trail(g):
pass
def simulate_closeness_transfer_trail(g):
pass
def simulate_closeness_serial_walk(g):
pass
def simulate_closeness_transfer_walk(g):
pass
def simulate_betweenness_serial_path(g):
pass
def simulate_betweenness_transfer_path(g):
pass
def simulate_betweenness_serial_trail(g):
pass
def simulate_betweenness_transfer_trail(g):
pass
def simulate_betweenness_serial_walk(g):
pass
def simulate_betweenness_transfer_walk(g):
pass
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Configuring the library
Step2: The goal of this activity is to run $24$ different centrality simulations, to evaluate how the classic measures perform with respect to different flow processes. Of these $24$ simulations, $12$ are on graph g1 and $12$ are on graph g2.
Step3: The first graph corresponds to the marriages between Florentine families during the Renaissance.
Step4: The second graph corresponds to the case study from the first meeting.
Step5: Of the $12$ simulations on one of the graphs, $6$ are for closeness and $6$ are for betweenness
Step6: In a closeness simulation, for each source s and target t, we measure the time the flow takes to get from s to t. The simulated closeness of a node s is the sum of all times measured when that node is a source. Since the flow may take random steps, the process is repeated TIMES times and the average is taken.
Step7: Let's compare the simulated closeness with the theoretical closeness.
Step8: Closeness comparison on graph g1
Step9: Closeness comparison on graph g2
Step10: In a betweenness simulation, for each source s, target t and intermediary n, we measure how many times the flow passes through n before getting from s to t. The simulated betweenness of a node n is the sum of all quantities measured when that node is an intermediary. Since the flow may take random steps, the process is repeated TIMES times and the average is taken.
Step11: Let's compare the simulated betweenness with the theoretical betweenness.
Step12: Betweenness comparison on graph g2
Step13: Deliverables
|
5,691
|
<ASSISTANT_TASK:>
Python Code:
from keras.models import Sequential
from keras.layers import Dense, Activation
model = Sequential([
Dense(32, input_shape=(784,)),
Activation('relu'),
Dense(10),
Activation('softmax'),
])
model = Sequential()
model.add(Dense(32, input_dim=784))
model.add(Activation('relu'))
model = Sequential()
model.add(Dense(32, input_shape=(784,)))
model = Sequential()
model.add(Dense(32, input_dim=784))
# For a multi-class classification problem
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
# For a binary classification problem
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
# For a mean squared error regression problem
model.compile(optimizer='rmsprop',
loss='mse')
# For custom metrics
import keras.backend as K
def mean_pred(y_true, y_pred):
return K.mean(y_pred)
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy', mean_pred])
# For a single-input model with 2 classes (binary classification):
model = Sequential()
model.add(Dense(32, activation='relu', input_dim=100))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
# Generate dummy data
import numpy as np
data = np.random.random((1000, 100))
labels = np.random.randint(2, size=(1000, 1))
# Train the model, iterating on the data in batches of 32 samples
model.fit(data, labels, epochs=10, batch_size=32)
# For a single-input model with 10 classes (categorical classification):
model = Sequential()
model.add(Dense(32, activation='relu', input_dim=100))
model.add(Dense(10, activation='softmax'))
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
# Generate dummy data
import numpy as np
data = np.random.random((1000, 100))
labels = np.random.randint(10, size=(1000, 1))
# Convert labels to categorical one-hot encoding
import keras
one_hot_labels = keras.utils.to_categorical(labels, num_classes=10)
# Train the model, iterating on the data in batches of 32 samples
model.fit(data, one_hot_labels, epochs=10, batch_size=32)
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.optimizers import SGD
# Generate dummy data
import numpy as np
x_train = np.random.random((1000, 20))
y_train = keras.utils.to_categorical(np.random.randint(10, size=(1000, 1)), num_classes=10)
x_test = np.random.random((100, 20))
y_test = keras.utils.to_categorical(np.random.randint(10, size=(100, 1)), num_classes=10)
model = Sequential()
# Dense(64) is a fully-connected layer with 64 hidden units.
# in the first layer, you must specify the expected input data shape:
# here, 20-dimensional vectors.
model.add(Dense(64, activation='relu', input_dim=20))
model.add(Dropout(0.5))
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax'))
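# Plain SGD with Nesterov momentum and a small learning-rate decay.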
sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy',
optimizer=sgd,
metrics=['accuracy'])
model.fit(x_train, y_train,
epochs=20,
batch_size=128)
score = model.evaluate(x_test, y_test, batch_size=128)
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Dropout
# Generate dummy data
x_train = np.random.random((1000, 20))
y_train = np.random.randint(2, size=(1000, 1))
x_test = np.random.random((100, 20))
y_test = np.random.randint(2, size=(100, 1))
model = Sequential()
model.add(Dense(64, input_dim=20, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy',
optimizer='rmsprop',
metrics=['accuracy'])
model.fit(x_train, y_train,
epochs=20,
batch_size=128)
score = model.evaluate(x_test, y_test, batch_size=128)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: You can also simply add layers via the .add() method
Step2: Specifying the input shape
Step3: Compilation
Step4: Training
Step5: Examples
Step6: MLP for binary classification
|
5,692
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
import random
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
pd.set_option("display.max_rows", 8)
df = pd.read_csv('stress-ng/third/torpor-results/alltests.csv')
df.head()
df['machine'].unique()
machine_is_issdm_6 = df['machine'] == 'issdm-6'
machine_is_t2_micro = df['machine'] == 't2.micro'
machine_is_kv3 = df['machine'] == 'kv3'
limits_is_with = df['limits'] == 'with'
limits_is_without = df['limits'] == 'without'
df_issdm_6_with_limit = df[machine_is_issdm_6 & limits_is_with]
df_t2_micro_with_limit = df[machine_is_t2_micro & limits_is_with]
df_kv3_without_limit = df[machine_is_kv3 & limits_is_without]
print(
len(df_issdm_6_with_limit), # machine issdm-6 with limit
len(df[machine_is_issdm_6 & limits_is_without]), # machine issdm-6 without limit
len(df_t2_micro_with_limit), # machine t2.micro with limit
len(df[machine_is_t2_micro & limits_is_without]), # machine t2.micro without limit
len(df_kv3_without_limit) # machine kv3 without limit
)
issdm_6_with_limit_merge_kv3 = pd.merge(df_issdm_6_with_limit, df_kv3_without_limit, how='inner', on='benchmark')
t2_micro_with_limit_merge_kv3 = pd.merge(df_t2_micro_with_limit, df_kv3_without_limit, how='inner', on='benchmark')
print(
# common successful tests from issdm-6 and kv3
len(issdm_6_with_limit_merge_kv3),
# common successful tests from t2.micro and kv3
len(t2_micro_with_limit_merge_kv3)
)
df_normalized = pd.read_csv('stress-ng/third/torpor-results/alltests_with_normalized_results_1.1.csv')
df_normalized.head()
df_issdm_6_with_limit[~df_issdm_6_with_limit['benchmark'].isin(issdm_6_with_limit_merge_kv3['benchmark'])]
df_t2_micro_with_limit[~df_t2_micro_with_limit['benchmark'].isin(t2_micro_with_limit_merge_kv3['benchmark'])]
normalized_limits_is_with = df_normalized['limits'] == 'with'
normalized_limits_is_without = df_normalized['limits'] == 'without'
normalized_machine_is_issdm_6 = df_normalized['machine'] == 'issdm-6'
normalized_machine_is_t2_micro = df_normalized['machine'] == 't2.micro'
normalized_is_speed_up = df_normalized['normalized'] > 0
normalized_is_slow_down = df_normalized['normalized'] < 0
print(
# issdm-6 without CPU restriction
len(df_normalized[normalized_limits_is_without & normalized_machine_is_issdm_6 & normalized_is_speed_up]), # 1. speed-up
len(df_normalized[normalized_limits_is_without & normalized_machine_is_issdm_6 & normalized_is_slow_down]), # 2. slowdown
# issdm-6 with CPU restriction
len(df_normalized[normalized_limits_is_with & normalized_machine_is_issdm_6 & normalized_is_speed_up]), # 3. speed-up
len(df_normalized[normalized_limits_is_with & normalized_machine_is_issdm_6 & normalized_is_slow_down]), # 4. slowdown
# t2.micro without CPU restriction
len(df_normalized[normalized_limits_is_without & normalized_machine_is_t2_micro & normalized_is_speed_up]), # 5. speed-up
len(df_normalized[normalized_limits_is_without & normalized_machine_is_t2_micro & normalized_is_slow_down]), # 6. slowdown
# t2.micro with CPU restriction
len(df_normalized[normalized_limits_is_with & normalized_machine_is_t2_micro & normalized_is_speed_up]), # 7. speed-up
len(df_normalized[normalized_limits_is_with & normalized_machine_is_t2_micro & normalized_is_slow_down]) # 8. slowdown
)
print(
# For issdm-6
df_normalized[normalized_machine_is_issdm_6 & normalized_limits_is_with]['normalized'].mean(),
# For t2_micro
df_normalized[normalized_machine_is_t2_micro & normalized_limits_is_with]['normalized'].mean()
)
df_normalized_issdm_6_without_limit = df_normalized[normalized_machine_is_issdm_6 & normalized_limits_is_without]
df_normalized_issdm_6_without_limit.normalized.hist(bins=150, figsize=(25,12), xlabelsize=20, ylabelsize=20)
plt.title('stress tests run on issdm-6 without CPU restriction', fontsize=30)
plt.xlabel('Normalized Value (re-execution / original)', fontsize=25)
plt.ylabel('Frequency (# of benchmarks)', fontsize=25)
df_normalized_issdm_6_without_limit_sorted = df_normalized_issdm_6_without_limit.sort_values(by='normalized', ascending=0)
df_normalized_issdm_6_without_limit_sorted_head = df_normalized_issdm_6_without_limit_sorted.head()
df_normalized_issdm_6_without_limit_sorted_tail = df_normalized_issdm_6_without_limit_sorted.tail()
df_normalized_issdm_6_without_limit_sorted_head.append(df_normalized_issdm_6_without_limit_sorted_tail)
df_normalized_issdm_6_with_limit = df_normalized[normalized_machine_is_issdm_6 & normalized_limits_is_with]
df_normalized_issdm_6_with_limit.normalized.hist(color='Orange', bins=150, figsize=(25,12), xlabelsize=20, ylabelsize=20)
plt.title('stress tests run on issdm-6 with CPU restriction', fontsize=30)
plt.xlabel('Normalized Value (re-execution / original)', fontsize=25)
plt.ylabel('Frequency (# of benchmarks)', fontsize=25)
df_normalized_issdm_6_with_limit_sorted = df_normalized_issdm_6_with_limit.sort_values(by='normalized', ascending=0)
df_normalized_issdm_6_with_limit_sorted_head = df_normalized_issdm_6_with_limit_sorted.head()
df_normalized_issdm_6_with_limit_sorted_tail = df_normalized_issdm_6_with_limit_sorted.tail()
df_normalized_issdm_6_with_limit_sorted_head.append(df_normalized_issdm_6_with_limit_sorted_tail)
df_normalized_issdm_6_no_outlier = df_normalized_issdm_6_with_limit['benchmark'] != 'stressng-cpu-jenkin'
df_normalized_issdm_6_with_limit[df_normalized_issdm_6_no_outlier].normalized.hist(color='Green', bins=150, figsize=(25,12), xlabelsize=20, ylabelsize=20)
plt.title('stress tests run on issdm-6 with CPU restriction (no outlier)', fontsize=30)
plt.xlabel('Normalized Value (re-execution / original)', fontsize=25)
plt.ylabel('Frequency (# of benchmarks)', fontsize=25)
df_normalized_t2_micro_without_limit = df_normalized[normalized_machine_is_t2_micro & normalized_limits_is_without]
df_normalized_t2_micro_without_limit.normalized.hist(bins=150,figsize=(30,12), xlabelsize=20, ylabelsize=20)
plt.title('stress tests run on t2.micro without CPU restriction', fontsize=30)
plt.xlabel('Normalized Value (re-execution / original)', fontsize=25)
plt.ylabel('Frequency (# of benchmarks)', fontsize=25)
df_normalized_t2_micro_without_limit_sorted = df_normalized_t2_micro_without_limit.sort_values(by='normalized', ascending=0)
df_normalized_t2_micro_without_limit_sorted_head = df_normalized_t2_micro_without_limit_sorted.head()
df_normalized_t2_micro_without_limit_sorted_tail = df_normalized_t2_micro_without_limit_sorted.tail()
df_normalized_t2_micro_without_limit_sorted_head.append(df_normalized_t2_micro_without_limit_sorted_tail)
df_normalized_t2_micro_with_limit = df_normalized[normalized_machine_is_t2_micro & normalized_limits_is_with]
df_normalized_t2_micro_with_limit.normalized.hist(color='Orange', bins=150, figsize=(30,12), xlabelsize=20, ylabelsize=20)
plt.title('stress tests run on t2.micro with CPU restriction', fontsize=30)
plt.xlabel('Normalized Value (re-execution / original)', fontsize=25)
plt.ylabel('Frequency (# of benchmarks)', fontsize=25)
df_normalized_t2_micro_with_limit_sorted = df_normalized_t2_micro_with_limit.sort_values(by='normalized', ascending=0)
df_normalized_t2_micro_with_limit_sorted_head = df_normalized_t2_micro_with_limit_sorted.head()
df_normalized_t2_micro_with_limit_sorted_tail = df_normalized_t2_micro_with_limit_sorted.tail()
df_normalized_t2_micro_with_limit_sorted_head.append(df_normalized_t2_micro_with_limit_sorted_tail)
df_normalized_t2_micro_no_outlier = df_normalized_t2_micro_with_limit['benchmark'] != 'stressng-memory-stack'
df_normalized_t2_micro_with_limit[df_normalized_t2_micro_no_outlier].normalized.hist(color='Green', bins=150, figsize=(30,12), xlabelsize=20, ylabelsize=20)
plt.title('stress tests run on t2.micro with CPU restriction (no outlier)', fontsize=30)
plt.xlabel('Normalized Value (re-execution / original)', fontsize=25)
plt.ylabel('Frequency (# of benchmarks)', fontsize=25)
df_verification = pd.read_csv('verification/results/2/alltests_with_normalized_results_1.1.csv')
len(df_verification) / 2
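# Rank the verification benchmarks by the absolute normalized value (largest deviation first).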
df_verification_rank = df_verification.reindex(df_verification.normalized.abs().sort_values(ascending=0).index)
df_verification_rank.head(8)
df_verification_issdm_6 = df_verification[df_verification['machine'] == 'issdm-6']
df_verification_issdm_6.normalized.hist(color='y', bins=150,figsize=(20,10), xlabelsize=20, ylabelsize=20)
plt.title('verification tests run on issdm-6', fontsize=30)
plt.xlabel('Normalized Value (re-execution / original)', fontsize=25)
plt.ylabel('Frequency (# of benchmarks)', fontsize=25)
print(
df_verification_issdm_6['normalized'].max(),
df_verification_issdm_6['normalized'].min()
)
df_verification_issdm_6['normalized'].mean()
df_verification_issdm_6_no_nbench = df_verification_issdm_6[~df_verification_issdm_6['benchmark'].str.startswith('nbench')]
df_verification_issdm_6_no_nbench.normalized.hist(color='greenyellow', bins=150,figsize=(20,10), xlabelsize=20, ylabelsize=20)
plt.title('verification tests run on issdm-6 (no nbench)', fontsize=30)
plt.xlabel('Normalized Value (re-execution / original)', fontsize=25)
plt.ylabel('Frequency (# of benchmarks)', fontsize=25)
print(
df_verification_issdm_6_no_nbench['normalized'].max(),
df_verification_issdm_6_no_nbench['normalized'].min()
)
df_verification_issdm_6_no_nbench['normalized'].mean()
df_verification_t2_micro = df_verification[df_verification['machine'] == 't2.micro']
df_verification_t2_micro.normalized.hist(color='y', bins=150,figsize=(20,10), xlabelsize=20, ylabelsize=20)
plt.title('verification tests run on t2.micro', fontsize=30)
plt.xlabel('Normalized Value (re-execution / original)', fontsize=25)
plt.ylabel('Frequency (# of benchmarks)', fontsize=25)
df_verification_t2_micro['normalized'].mean()
df_verification_top_benchmakrs = df_verification_rank[df_verification_rank['machine'] == 't2.micro'].head(4)['benchmark']
df_verification_t2_micro_no_outliers = df_verification_t2_micro[~df_verification_t2_micro['benchmark'].isin(df_verification_top_benchmakrs)]
df_verification_t2_micro_no_outliers.normalized.hist(color='greenyellow', bins=150,figsize=(20,10), xlabelsize=20, ylabelsize=20)
plt.title('verification tests on t2.micro (no outliers)', fontsize=30)
plt.xlabel('Normalized Value (re-execution / original)', fontsize=25)
plt.ylabel('Frequency (# of benchmarks)', fontsize=25)
print(
df_verification_t2_micro_no_outliers['normalized'].max(),
df_verification_t2_micro_no_outliers['normalized'].min()
)
df_verification_t2_micro_no_outliers['normalized'].mean()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First, we load all test data.
Step2: Let's have a look at the pattern of data.
Step3: Show all the test machines.
Step4: Define some predicates for machines and limits
Step5: Show the number of stress tests on different machines
Step6: Because failed benchmarks are not shown in the result report, we want to know how many stress tests completed successfully on both the target machine and kv3.
Step7: Read the normalized results.
Step8: Show some of the data lines. The normalized value is the speedup relative to kv3. It becomes negative when the benchmark runs slower on the target machine than on kv3 (slowdown).
Step9: Show the benchmarks that did not complete successfully on both issdm-6 and kv3.
Step10: Show the benchmarks that did not complete successfully on both t2.micro and kv3.
Step11: We can count the number of benchmarks that speed up and slow down, respectively.
Step12: The average normalized value for results under CPU restriction
Step13: Experiment Results from issdm-6
Step14: Here is the rank of normalized value from stress tests without CPU restriction
Step15: Now let's have a look at the histogram of frequency of normalized value based on stress tests with CPU restriction running on issdm-6.
Step16: Here is the rank of normalized value from stress tests with CPU restriction
Step17: We notice that the stressng-cpu-jenkin looks like an outlier. Let's redraw the histogram without this one.
Step18: Summary
Step19: Here is the rank of normalized value from stress tests without CPU restriction
Step20: Let's have a look at the histogram of frequency of normalized value based on stress tests with CPU restriction running on t2.micro.
Step21: Here is the rank of normalized value from stress tests with CPU restriction
Step22: We notice that the stressng-memory-stack looks like an outlier. Let's redraw the histogram without this one.
Step23: The stressng-cpu-jenkin benchmark is a collection of (non-cryptographic) hash functions for multi-byte keys. See Jenkins hash function from Wikipedia for more details.
Step24: Show number of test benchmarks.
Step25: Order the test results by the absolute value of the normalized value
Step26: Verification Tests on issdm-6
Step27: Print the max and min normalized values,
Step28: The average of the normalized values is,
Step29: If we remove all nbench tests, the frequency histogram changes to
Step30: The max the min normalized value changes to,
Step31: The average of noramlized value changes to,
Step32: Verification Tests on t2.micro
Step33: The average normalized value of the verification benchmarks is,
Step34: Let's see the frequency histogram after removing the right-most four outliers.
Step35: Print the max and min normalized values,
Step36: The average normalized value without the four outliers is,
|
5,693
|
<ASSISTANT_TASK:>
Python Code:
predictors = subset[variables]
targets = subset['High BreastCancer']
#Split into training and testing sets
training_data, test_data, training_target, test_target = train_test_split(predictors, targets, test_size=.3)
model=LassoLarsCV(cv=10, precompute=False).fit(training_data, training_target)
# Show the regression coeff of the predictors
dict(zip(predictors.columns,model.coef_))
feature_name = list(predictors.columns.values)
feature_coefficient = list(model.coef_)
features = pd.DataFrame({'Variable':feature_name, 'Regression Coefficients':feature_coefficient}).sort_values(by='Regression Coefficients', ascending=False)
print(features.head(len(feature_name)))
m_log_alphas = -np.log10(model.alphas_)
ax = plt.gca() # set the plot axis
plt.plot(m_log_alphas, model.coef_path_.T) # applying the -log10 transformation to the alpha values simply to make the values easier to read.
plt.axvline(-np.log10(model.alpha_), linestyle='--', color='k', label='alpha CV') # A black vertical line will be plotted at the negative log10 transformed alpha value for the selected model
plt.ylabel('Regression Coefficients')
plt.xlabel('-log(alpha)')
plt.title('Regression Coefficients Progression for Lasso Paths')
m_log_alphascv = -np.log10(model.cv_alphas_)
plt.figure()
plt.plot(m_log_alphascv, model.cv_mse_path_, ':')
plt.plot(m_log_alphascv, model.cv_mse_path_.mean(axis=-1), 'k', label='Average across the folds', linewidth=2)
plt.axvline(-np.log10(model.alpha_), linestyle='--', color='k', label='alpha CV')
plt.legend()
plt.xlabel('-log(alpha)')
plt.ylabel('Mean squared error')
plt.title('Mean squared error on each fold')
from sklearn.metrics import mean_squared_error
train_error = mean_squared_error(training_target, model.predict(training_data))
test_error = mean_squared_error(test_target, model.predict(test_data))
print ('training data MSE')
print(train_error)
print ('test data MSE')
print(test_error)
rsquared_train=model.score(training_data, training_target)
rsquared_test=model.score(test_data, test_target)
print ('training data R-square')
print(rsquared_train)
print ('test data R-square')
print(rsquared_test)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: (a) Split into training and testing sets
Step2: (b) Building the LASSO Regression Model
Step3: Note
Step4: (b) Plot coefficient progression
Step5: (c) Plot mean square error for each fold
Step6: MSE from training and test data
Step7: The selected model predicted better on the test data than on the training data.
|
5,694
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
# plotting options
font = {'size' : 20}
plt.rc('font', **font)
plt.rc('text', usetex=matplotlib.checkdep_usetex(True))
matplotlib.rc('figure', figsize=(18, 6) )
# capacity of the BEC (binary erasure channel)
def C_BEC(epsilon):
return 1 - epsilon
from scipy.special import comb
def get_Pe_RCU_BEC(n, r, epsilon):
return np.sum([comb(n,t,exact=True) * (epsilon**t) * ((1-epsilon)**(n-t)) * min(1, 2**(-n*(1-r)+t)) for t in range(n+1)])
def get_Pe_Singleton_BEC(n, r, epsilon):
return 1.0 - np.sum([comb(n,t,exact=True) * (epsilon**t) * ((1-epsilon)**(n-t)) for t in range(int(np.ceil(n*(1-r)))+1)])
#return np.sum([comb(n,t,exact=True) * (epsilon**t) * ((1-epsilon)**(n-t)) for t in range(int(np.ceil(n*(1-r)+1)),n+1)])
epsilon_range = np.linspace(0.2,0.6,100)
Pe_RCU_BEC_r12_n100 = [get_Pe_RCU_BEC(100, 0.5, epsilon) for epsilon in epsilon_range]
Pe_RCU_BEC_r12_n250 = [get_Pe_RCU_BEC(250, 0.5, epsilon) for epsilon in epsilon_range]
Pe_RCU_BEC_r12_n500 = [get_Pe_RCU_BEC(500, 0.5, epsilon) for epsilon in epsilon_range]
Pe_RCU_BEC_r12_n1000 = [get_Pe_RCU_BEC(1000, 0.5, epsilon) for epsilon in epsilon_range]
Pe_Singleton_BEC_r12_n100 = [get_Pe_Singleton_BEC(100, 0.5, epsilon) for epsilon in epsilon_range]
Pe_Singleton_BEC_r12_n250 = [get_Pe_Singleton_BEC(250, 0.5, epsilon) for epsilon in epsilon_range]
Pe_Singleton_BEC_r12_n500 = [get_Pe_Singleton_BEC(500, 0.5, epsilon) for epsilon in epsilon_range]
Pe_Singleton_BEC_r12_n1000 = [get_Pe_Singleton_BEC(1000, 0.5, epsilon) for epsilon in epsilon_range]
fig = plt.figure(1,figsize=(12,7))
plt.semilogy(epsilon_range, Pe_Singleton_BEC_r12_n100)
plt.semilogy(epsilon_range, Pe_Singleton_BEC_r12_n250)
plt.semilogy(epsilon_range, Pe_Singleton_BEC_r12_n500)
plt.semilogy(epsilon_range, Pe_Singleton_BEC_r12_n1000)
plt.axvline(x=0.5, color='k')
plt.gca().set_prop_cycle(None)
plt.semilogy(epsilon_range, Pe_RCU_BEC_r12_n100, '--')
plt.semilogy(epsilon_range, Pe_RCU_BEC_r12_n250, '--')
plt.semilogy(epsilon_range, Pe_RCU_BEC_r12_n500, '--')
plt.semilogy(epsilon_range, Pe_RCU_BEC_r12_n1000, '--')
plt.axvspan(0.5, 0.55, alpha=0.5, color='gray')
plt.axvline(x=0.5, color='k')
plt.ylim((1e-8,1))
plt.xlim((0.2,0.55))
plt.xlabel('BEC erasure probability $\epsilon$', fontsize=16)
plt.ylabel('$P_e$', fontsize=16)
plt.legend(['$n = 100$', '$n=250$','$n=500$', '$n=1000$', 'C'], fontsize=16)
plt.text(0.5, 1e-4, 'Capacity limit', {'color': 'k', 'fontsize': 20, 'rotation': -90})
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.grid(True)
plt.savefig('BEC_Singleton_RCU_R12.pdf',bbox_inches='tight')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Binary Erasure Channel (BEC)
Step2: Random Coding Union Bound for the BEC
Step3: Singleton Bound for the BEC
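Written out from the code above (the notation n, r, and epsilon follows the code; these are simply the formulas the two functions implement):

P_{e,\mathrm{RCU}} \le \sum_{t=0}^{n} \binom{n}{t}\, \epsilon^{t} (1-\epsilon)^{n-t} \, \min\bigl(1,\; 2^{-n(1-r)+t}\bigr)

P_{e,\mathrm{Singleton}} = 1 - \sum_{t=0}^{\lceil n(1-r) \rceil} \binom{n}{t}\, \epsilon^{t} (1-\epsilon)^{n-t}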
|
5,695
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
plt.style.use('ggplot')
plt.rcParams['figure.figsize'] = 10, 8
BASE = '/home/brandon/Documents/seq2seq_projects/data/saved_train_data/'
path = BASE + 'cornell_03_11.csv'
df = pd.read_csv(path, index_col=0)
df.head()
embed_sizes = set(df['embed_size'])
state_sizes = set(df['state_size'])
learning_rates = set(df['learning_rate'])
print(embed_sizes)
print(state_sizes)
print(learning_rates)
def get_split(df, col, vals):
return [(v, df[df[col]==v]) for v in vals]
def split_df_and_plot(df, split_col, split_vals):
    """Example usage:
    split_df_and_plot(df, 'learning_rate', learning_rates)
    """
df_split = get_split(df, split_col, split_vals)
plt.figure(figsize=(8, 6))
for val, df_sp in df_split:
ax=plt.subplot()
plt.scatter(df_sp['global_step'], df_sp['loss'], label='%.3f' % val)
plt.title(split_col + ' Comparisons', fontsize=20)
ax.set_xlabel('Global Step', fontsize=15)
ax.set_ylabel('Validation Loss', fontsize=15)
leg = ax.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.,
title=split_col, prop={'size':15})
plt.setp(leg.get_title(),fontsize=20)
plt.tight_layout()
plt.savefig(split_col+'.pdf', bbox_extra_artists=(leg,), bbox_inches='tight')
plt.show()
def inception_split(df, split_col_one, split_vals_one, split_col_two, split_vals_two):
    """ENHANCE: plot a grid of loss curves split on two hyperparameter columns."""
df_split_one = get_split(df, split_col_one, split_vals_one)
fig = plt.figure(figsize=(12, 10))
ctr = 1
for val_one, df_sp_one in df_split_one:
df_split_two = get_split(df_sp_one, split_col_two, split_vals_two)
ax=fig.add_subplot(3, 2, ctr)
for val_two, df_sp_two in df_split_two:
ax.scatter(df_sp_two['global_step'], df_sp_two['loss'], label=split_col_two + ': %.2f' % val_two)
ax.set_ylim([3., 10.])
plt.title(split_col_one + ' = %.2f' % val_one, fontsize=15)
ax.set_xlabel('Global Step', fontsize=12)
ax.set_ylabel('Validation Loss', fontsize=12)
if ctr in [2, 4, 6]:
leg = ax.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.,
title=split_col_two, prop={'size':12})
plt.setp(leg.get_title(),fontsize=15)
ctr += 1
plt.tight_layout()
plt.savefig(split_col_one + "_" + split_col_two + '.pdf', bbox_extra_artists=(leg,), bbox_inches='tight')
plt.show()
split_df_and_plot(df, 'learning_rate', learning_rates)
split_df_and_plot(df, 'embed_size', embed_sizes)
split_df_and_plot(df, 'state_size', state_sizes)
inception_split(df, "learning_rate", learning_rates, "state_size", state_sizes)
path = BASE + 'cornell.csv'
df = pd.read_csv(path, index_col=0)
df.head()
dropout_probs = set(df['dropout_prob'])
num_layers = set(df['num_layers'])
print(dropout_probs)
print(num_layers)
split_df_and_plot(df, 'dropout_prob', dropout_probs)
split_df_and_plot(df, 'num_layers', num_layers)
inception_split(df, "dropout_prob", dropout_probs, "num_layers", num_layers)
inception_split(df, "num_layers", num_layers, "dropout_prob", dropout_probs)
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
vocab_size = 40000
x_axis = np.arange(vocab_size)
zipfian = (np.log(x_axis + 2) - np.log(x_axis + 1)) / np.log(vocab_size + 1)
plt.figure(figsize=(10, 6))
plt.semilogy(x_axis, zipfian)
plt.title('Log-Uniform Zipfian Sampling Distribution.')
plt.xlabel('Word ID (in order of decreasing freq.)')
plt.ylabel('Sampling probability')
plt.show()
batch_size = 64
num_samples = 512
samples = [np.random.choice(x_axis, p=zipfian) for _ in range(num_samples)]
plt.hist(samples)
plt.title('Histogram of %d Sampled Values' % num_samples)
plt.xlabel('Sampled Word ID')
plt.ylabel('Counts')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Analysis, Goals, and Predictions
Step3: Single Plots Distinguishing One Variable
Step4: Plots with Fixed Learning Rate
Step5: Hyperparam Search on Dropout and Num Layers
Step6: Loss Functions
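As a rough illustration of how word IDs drawn from such a distribution could enter a sampled-softmax-style loss (the plain-numpy formulation, shapes, and names below are assumptions for exposition, not the project's actual training code):

import numpy as np

rng = np.random.default_rng(0)
batch_size, vocab_size, num_sampled, hidden = 4, 40000, 512, 32
hidden_states = rng.normal(size=(batch_size, hidden))
output_embeddings = rng.normal(size=(vocab_size, hidden)) * 0.01
true_ids = rng.integers(0, vocab_size, size=batch_size)
sampled_ids = rng.integers(0, vocab_size, size=num_sampled)  # stand-in for Zipfian draws

# Score only the true word and the sampled negatives instead of the full vocabulary.
true_logits = np.einsum('bh,bh->b', hidden_states, output_embeddings[true_ids])
sampled_logits = hidden_states @ output_embeddings[sampled_ids].T
logits = np.concatenate([true_logits[:, None], sampled_logits], axis=1)

# Cross-entropy with the true word always in column 0.
log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
loss = -log_probs[:, 0].mean()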
|
5,696
|
<ASSISTANT_TASK:>
Python Code:
fig, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, figsize=plt.figaspect(0.5))
ax1.plot([-10, -5, 0, 5, 10, 15], [-1.2, 2, 3.5, -0.3, -4, 1])
ax2.scatter([-10, -5, 0, 5, 10, 15], [-1.2, 2, 3.5, -0.3, -4, 1])
plt.show()
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=plt.figaspect(0.5))
ax1.plot([-10, -5, 0, 5, 10, 15], [-1.2, 2, 3.5, -0.3, -4, 1])
ax2.scatter([-10, -5, 0, 5, 10, 15], [-1.2, 2, 3.5, -0.3, -4, 1])
ax1.margins(x=0.0, y=0.1) # 10% padding in the y-direction only
ax2.margins(0.05) # 5% padding in all directions
plt.show()
fig, axes = plt.subplots(nrows=3)
for ax in axes:
ax.plot([-10, -5, 0, 5, 10, 15], [-1.2, 2, 3.5, -0.3, -4, 1])
axes[0].set_title('Normal Autoscaling', y=0.7, x=0.8)
axes[1].set_title('ax.axis("tight")', y=0.7, x=0.8)
axes[1].axis('tight')
axes[2].set_title('ax.axis("equal")', y=0.7, x=0.8)
axes[2].axis('equal')
plt.show()
# Good -- setting limits after plotting is done
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=plt.figaspect(0.5))
ax1.plot([-10, -5, 0, 5, 10, 15], [-1.2, 2, 3.5, -0.3, -4, 1])
ax2.scatter([-10, -5, 0, 5, 10, 15], [-1.2, 2, 3.5, -0.3, -4, 1])
ax1.set_ylim(bottom=-10)
ax2.set_xlim(right=25)
plt.show()
# Bad -- Setting limits before plotting is done
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=plt.figaspect(0.5))
ax1.set_ylim(bottom=-10)
ax2.set_xlim(right=25)
ax1.plot([-10, -5, 0, 5, 10, 15], [-1.2, 2, 3.5, -0.3, -4, 1])
ax2.scatter([-10, -5, 0, 5, 10, 15], [-1.2, 2, 3.5, -0.3, -4, 1])
plt.show()
fig, ax = plt.subplots()
ax.plot([1, 2, 3, 4], [10, 20, 25, 30], label='Philadelphia')
ax.plot([1, 2, 3, 4], [30, 23, 13, 4], label='Boston')
ax.set(ylabel='Temperature (deg C)', xlabel='Time', title='A tale of two cities')
ax.legend()
plt.show()
fig, ax = plt.subplots(1, 1)
ax.bar([1, 2, 3, 4], [10, 20, 25, 30], label="Foobar", align='center', color='lightblue')
ax.plot([1, 2, 3, 4], [10, 20, 25, 30], label="_nolegend_", marker='o', color='darkred')
ax.legend(loc='best')
plt.show()
%load exercises/4.1-legends_and_scaling.py
import numpy as np
import matplotlib.pyplot as plt
t = np.linspace(0, 2 * np.pi, 150)
x1, y1 = np.cos(t), np.sin(t)
x2, y2 = 2 * x1, 2 * y1
colors = ['darkred', 'darkgreen']
# Try to plot the two circles, scale the axes as shown and add a legend
# Hint: it's easiest to combine `ax.axis(...)` and `ax.margins(...)` to scale the axes
fig, ax = plt.subplots()
ax.plot([1, 2, 3, 4], [10, 20, 25, 30])
# Manually set ticks and tick labels *on the x-axis* (note ax.xaxis.set, not ax.set!)
ax.xaxis.set(ticks=range(1, 5), ticklabels=[3, 100, -12, "foo"])
# Make the y-ticks a bit longer and go both in and out...
ax.tick_params(axis='y', direction='inout', length=10)
plt.show()
data = [('apples', 2), ('oranges', 3), ('peaches', 1)]
fruit, value = zip(*data)
fig, ax = plt.subplots()
x = np.arange(len(fruit))
ax.bar(x, value, align='center', color='gray')
ax.set(xticks=x, xticklabels=fruit)
plt.show()
fig, axes = plt.subplots(2, 2, figsize=(9, 9))
fig.subplots_adjust(wspace=0.5, hspace=0.3,
left=0.125, right=0.9,
top=0.9, bottom=0.1)
plt.show()
def example_plot(ax):
ax.plot([1, 2])
ax.set_xlabel('x-label', fontsize=16)
ax.set_ylabel('y-label', fontsize=8)
ax.set_title('Title', fontsize=24)
fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(nrows=2, ncols=2)
example_plot(ax1)
example_plot(ax2)
example_plot(ax3)
example_plot(ax4)
# Try enabling fig.tight_layout to compare...
#fig.tight_layout()
plt.show()
fig, (ax1, ax2) = plt.subplots(1, 2, sharex=True, sharey=True)
ax1.plot([1, 2, 3, 4], [1, 2, 3, 4])
ax2.plot([3, 4, 5, 6], [6, 5, 4, 3])
plt.show()
fig, ax1 = plt.subplots(1, 1)
ax1.plot([1, 2, 3, 4], [1, 2, 3, 4])
ax2 = ax1.twinx()
ax2.scatter([1, 2, 3, 4], [60, 50, 40, 30])
ax1.set(xlabel='X', ylabel='First scale')
ax2.set(ylabel='Other scale')
plt.show()
fig, ax = plt.subplots()
ax.plot([-2, 2, 3, 4], [-10, 20, 25, 5])
ax.spines['top'].set_visible(False)
ax.xaxis.set_ticks_position('bottom') # no ticklines at the top
ax.spines['right'].set_visible(False)
ax.yaxis.set_ticks_position('left') # no ticklines on the right
# "outward"
# Move the two remaining spines "out" away from the plot by 10 points
ax.spines['bottom'].set_position(('outward', 10))
ax.spines['left'].set_position(('outward', 10))
# "data"
# Have the spines stay intersected at (0,0)
#ax.spines['bottom'].set_position(('data', 0))
#ax.spines['left'].set_position(('data', 0))
# "axes"
# Have the two remaining spines placed at a fraction of the axes
#ax.spines['bottom'].set_position(('axes', 0.75))
#ax.spines['left'].set_position(('axes', 0.25))
plt.show()
%load exercises/4.2-spines_ticks_and_subplot_spacing.py
import matplotlib.pyplot as plt
import numpy as np
# Try to reproduce the figure shown in images/exercise_4.2.png
# This one is a bit trickier!
# Here's the data...
data = [('dogs', 4, 4), ('frogs', -3, 1), ('cats', 1, 5), ('goldfish', -2, 2)]
animals, friendliness, popularity = zip(*data)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: ax.margins(...)
Step2: ax.axis(...)
Step3: Manually setting only one limit
Step4: Legends
Step5: Legends will go in the upper right corner by default (you can control this with the loc kwarg), but if you'd prefer matplotlib to choose a location that avoids overlapping plot elements as much as possible, you can pass in loc='best', as the second legend example above does.
Step6: Exercise 4.1
Step7: Dealing with the boundaries
Step8: A commonly-asked question is "How do I plot non-numerical categories?"
Step9: Subplot Spacing
Step10: A common "gotcha" is that the labels are not automatically adjusted to avoid overlapping those of another subplot. Matplotlib does not currently have any sort of robust layout engine, as it is a design decision to minimize the amount of "magic" that matplotlib performs. We intend to let users have complete, 100% control over their plots. LaTeX users would be quite familiar with the amount of frustration that can occur with placement of figures in their documents.
Step11: GridSpec
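Step11 refers to GridSpec, which is not shown in the code above; here is a minimal sketch of its use (layout and values chosen purely for illustration):

import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec

fig = plt.figure(figsize=(8, 6))
gs = gridspec.GridSpec(2, 2, width_ratios=[2, 1], height_ratios=[1, 2])
ax_top = fig.add_subplot(gs[0, :])    # top row spans both columns
ax_left = fig.add_subplot(gs[1, 0])
ax_right = fig.add_subplot(gs[1, 1])
ax_top.plot([1, 2, 3, 4], [10, 20, 25, 30])
ax_left.scatter([1, 2, 3, 4], [30, 23, 13, 4])
ax_right.bar([1, 2, 3], [3, 1, 2], align='center')
fig.tight_layout()
plt.show()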
Step12: "Twinning" axes
Step13: Axis Spines
Step14: Exercise 4.2
|
5,697
|
<ASSISTANT_TASK:>
Python Code:
# !pip install git+https://github.com/openai/baselines >/dev/null
# !pip install gym >/dev/null
import numpy
import gym
from gym.utils import seeding
from gym import spaces
def state_name_to_int(state):
state_name_map = {
'S': 0,
'A': 1,
'B': 2,
'C': 3,
}
return state_name_map[state]
def int_to_state_name(state_as_int):
state_map = {
0: 'S',
1: 'A',
2: 'B',
3: 'C'
}
return state_map[state_as_int]
class BeraterEnv(gym.Env):
    """The Berater Problem

    Actions:
        There are 3 discrete deterministic actions:
        - 0: First Direction
        - 1: Second Direction
        - 2: Third Direction / Go home
    """
metadata = {'render.modes': ['ansi']}
showStep = False
showDone = True
envEpisodeModulo = 100
def __init__(self):
self.map = {
'S': [('A', 100), ('B', 400), ('C', 200 )],
'A': [('B', 250), ('C', 400), ('S', 100 )],
'B': [('A', 250), ('C', 250), ('S', 400 )],
'C': [('A', 400), ('B', 250), ('S', 200 )]
}
self.action_space = spaces.Discrete(3)
self.observation_space = spaces.Box(low=numpy.array([0,-1000,-1000,-1000,-1000,-1000,-1000]),
high=numpy.array([3,1000,1000,1000,1000,1000,1000]),
dtype=numpy.float32)
self.reward_range = (-1, 1)
self.totalReward = 0
self.stepCount = 0
self.isDone = False
self.envReward = 0
self.envEpisodeCount = 0
self.envStepCount = 0
self.reset()
self.optimum = self.calculate_customers_reward()
def seed(self, seed=None):
self.np_random, seed = seeding.np_random(seed)
return [seed]
def step(self, actionArg):
paths = self.map[self.state]
action = actionArg
destination, cost = paths[action]
lastState = self.state
lastObState = state_name_to_int(lastState)
customerReward = self.customer_reward[destination]
info = {"from": self.state, "to": destination}
self.state = destination
reward = (-cost + self.customer_reward[destination]) / self.optimum
self.customer_visited(destination)
done = destination == 'S' and self.all_customers_visited()
stateAsInt = state_name_to_int(self.state)
self.totalReward += reward
self.stepCount += 1
self.envReward += reward
self.envStepCount += 1
if self.showStep:
print( "Episode: " + ("%4.0f " % self.envEpisodeCount) +
" Step: " + ("%4.0f " % self.stepCount) +
#lastState + ':' + str(lastObState) + ' --' + str(action) + '-> ' + self.state + ':' + str(stateAsInt) +
lastState + ' --' + str(action) + '-> ' + self.state +
' R=' + ("% 2.2f" % reward) + ' totalR=' + ("% 3.2f" % self.totalReward) +
' cost=' + ("%4.0f" % cost) + ' customerR=' + ("%4.0f" % customerReward) + ' optimum=' + ("%4.0f" % self.optimum)
)
if done and not self.isDone:
self.envEpisodeCount += 1
if BeraterEnv.showDone:
episodes = BeraterEnv.envEpisodeModulo
if (self.envEpisodeCount % BeraterEnv.envEpisodeModulo != 0):
episodes = self.envEpisodeCount % BeraterEnv.envEpisodeModulo
print( "Done: " +
("episodes=%6.0f " % self.envEpisodeCount) +
("avgSteps=%6.2f " % (self.envStepCount/episodes)) +
("avgTotalReward=% 3.2f" % (self.envReward/episodes) )
)
if (self.envEpisodeCount%BeraterEnv.envEpisodeModulo) == 0:
self.envReward = 0
self.envStepCount = 0
self.isDone = done
observation = self.getObservation(stateAsInt)
return observation, reward, done, info
def getObservation(self, position):
result = numpy.array([ position,
self.getEdgeObservation('S','A'),
self.getEdgeObservation('S','B'),
self.getEdgeObservation('S','C'),
self.getEdgeObservation('A','B'),
self.getEdgeObservation('A','C'),
self.getEdgeObservation('B','C'),
],
dtype=numpy.float32)
return result
def getEdgeObservation(self, source, target):
reward = self.customer_reward[target]
cost = self.getCost(source,target)
result = reward - cost
return result
def getCost(self, source, target):
paths = self.map[source]
targetIndex=state_name_to_int(target)
for destination, cost in paths:
if destination == target:
result = cost
break
return result
def customer_visited(self, customer):
self.customer_reward[customer] = 0
def all_customers_visited(self):
return self.calculate_customers_reward() == 0
def calculate_customers_reward(self):
sum = 0
for value in self.customer_reward.values():
sum += value
return sum
def reset(self):
self.totalReward = 0
self.stepCount = 0
self.isDone = False
reward_per_customer = 1000
self.customer_reward = {
'S': 0,
'A': reward_per_customer,
'B': reward_per_customer,
'C': reward_per_customer,
}
self.state = 'S'
return self.getObservation(state_name_to_int(self.state))
BeraterEnv.showStep = True
BeraterEnv.showDone = True
env = BeraterEnv()
print(env)
observation = env.reset()
print(observation)
for t in range(1000):
action = env.action_space.sample()
observation, reward, done, info = env.step(action)
if done:
print("Episode finished after {} timesteps".format(t+1))
break
env.close()
print(observation)
!rm -r logs
!mkdir logs
!mkdir logs/berater
# https://github.com/openai/baselines/blob/master/baselines/deepq/experiments/train_pong.py
# log_dir = logger.get_dir()
log_dir = '/content/logs/berater/'
import gym
from baselines import deepq
from baselines import bench
from baselines import logger
from baselines.common.vec_env.dummy_vec_env import DummyVecEnv
from baselines.common.vec_env.vec_monitor import VecMonitor
from baselines.ppo2 import ppo2
BeraterEnv.showStep = False
BeraterEnv.showDone = False
env = BeraterEnv()
wrapped_env = DummyVecEnv([lambda: BeraterEnv()])
monitored_env = VecMonitor(wrapped_env, log_dir)
model = ppo2.learn(network='mlp', env=monitored_env, total_timesteps=50000)
# monitored_env = bench.Monitor(env, log_dir)
# https://en.wikipedia.org/wiki/Q-learning#Influence_of_variables
# %time model = deepq.learn(\
# monitored_env,\
# seed=42,\
# network='mlp',\
# lr=1e-3,\
# gamma=0.99,\
# total_timesteps=30000,\
# buffer_size=50000,\
# exploration_fraction=0.5,\
# exploration_final_eps=0.02,\
# print_freq=1000)
model.save('berater-ppo-v4.pkl')
monitored_env.close()
!ls -l $log_dir
from baselines.common import plot_util as pu
results = pu.load_results(log_dir)
import matplotlib.pyplot as plt
import numpy as np
r = results[0]
# plt.ylim(-1, 1)
# plt.plot(np.cumsum(r.monitor.l), r.monitor.r)
plt.plot(np.cumsum(r.monitor.l), pu.smooth(r.monitor.r, radius=100))
import numpy as np
observation = env.reset()
state = np.zeros((1, 2*128))
dones = np.zeros((1))
BeraterEnv.showStep = True
BeraterEnv.showDone = False
for t in range(1000):
actions, _, state, _ = model.step(observation, S=state, M=dones)
observation, reward, done, info = env.step(actions[0])
if done:
print("Episode finished after {} timesteps".format(t+1))
break
env.close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <a href="https
Step2: Try out Environment
Step3: Train model
Step4: Visualizing Results
Step5: Enjoy model
|
5,698
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
from sklearn.datasets import load_boston
from sklearn.model_selection import train_test_split
from xtoy import Toy
df = pd.read_csv(
"http://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data",
header=None)
df.columns = [
"Age", "WorkClass", "fnlwgt", "Education", "EducationNum",
"MaritalStatus", "Occupation", "Relationship", "Race", "Gender",
"CapitalGain", "CapitalLoss", "HoursPerWeek", "NativeCountry", "Income"
]
# df = df.sample(frac=0.01, random_state=1)
train_cols = df.columns[0:-1]
label = df.columns[-1]
X = df[train_cols]
y = df[label].apply(lambda x: 0 if x == " <=50K" else 1)
toy = Toy()
toy.fit(X,y)
from interpret import show
from interpret.perf import ROC
blackbox_perf = ROC(toy.predict_proba).explain_perf(X,y, name='toy')
show(blackbox_perf)
from interpret.blackbox import MorrisSensitivity
trans_df = pd.DataFrame(data=toy.featurizer.transform(X).A, columns=toy.feature_names_)
sensitivity = MorrisSensitivity(predict_fn=toy.best_evo.predict_proba, data=trans_df)
sensitivity_global = sensitivity.explain_global(name="Global Sensitivity")
show(sensitivity_global)
print('Why does the person displayed below earn less than 50k dollars?')
display(X.head(1))
print('Let\'s use shap value to explain our prediction')
from interpret.blackbox import ShapKernel
import numpy as np
background_val = np.median(toy.featurizer.transform(X).A, axis=0).reshape(1, -1)
shap = ShapKernel(predict_fn=toy.best_evo.predict_proba, data=background_val, feature_names=toy.feature_names_)
from ipywidgets import IntProgress
shap_local = shap.explain_local(toy.featurizer.transform(X).A[0:1], y[0:1], name='SHAP')
show(shap_local)
df[df['CapitalGain'] <= 2200]['Income'].value_counts()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Model Performance
Step2: Which variable is most important?
Step3: The above analysis suggests that although his Education Number is 13, his Education is 'Bachelors', and his age is 39, his capital gain is just 2174
|
5,699
|
<ASSISTANT_TASK:>
Python Code:
import pandas
# Load the data set.
df = pandas.read_csv('privacy/belgium_100k.csv')
df = df.where((pandas.notnull(df)), None)
df['birthday'] = df['birthday'].astype('datetime64[ns]')
df.head()
# Define function to evaluate uniqueness of the provided dataset.
def uniqueness(dataframe, pseudo):
groups = list(dataframe.groupby(pseudo).groups.values())
return sum(1. for g in groups if len(g) == 1) / len(dataframe)
print((uniqueness(df, ['zip'])))
print((uniqueness(df, ['sex', 'birthday'])))
print((uniqueness(df, ['sex', 'birthday', 'zip'])))
# Define function to evaluate k-anonymity of the provided data set.
def k_anonymity(dataframe, pseudo):
return dataframe.groupby(pseudo).count().min()[0]
print((k_anonymity(df, ['sex', 'birthday', 'zip'])))
# Use this code cell to review the k-anonymity function with different input parameters.
# Reduce the zip code to zip district.
df['zip_district'] = [z // 1000 for z in df['zip']]
df[['zip', 'zip_district']].head(3)
# From birthday to birth year.
df['birth_year'] = df['birthday'].map(lambda d: d.year)
df[['birthday', 'birth_year']].head(3)
# From birthday to birth decade.
df['birth_decade'] = df['birth_year'] // 10 * 10
df[['birthday', 'birth_year', 'birth_decade']].head()
print((k_anonymity(df, ['sex', 'birth_year', 'zip_district'])))
grouped = df.groupby(['sex', 'birth_year', 'zip_district'])
df_filtered = grouped.filter(lambda x: len(x) > 5)
print(('Reducing size:', len(df), '> ', len(df_filtered)))
print(('K-anonymity after suppression:', k_anonymity(df_filtered, ['sex', 'birth_year', 'zip_district'])))
# Function implementing the unicity assessment algorithm.
def draw_points(user, points):
'''IN: a Series; int'''
user.dropna(inplace=True)
indices = np.random.choice(len(user), points, replace=False)
return user[indices]
def is_unique(user_name, points):
'''IN: str, int'''
drawn_p = draw_points(samples[user_name], points)
for other_user in samples.loc[drawn_p.index].drop(user_name, axis=1).as_matrix().T:
if np.equal(drawn_p.values, other_user).all():
return False
return True
def compute_unicity(samples, points):
'''IN:int, int'''
unique_count = .0
users = samples.columns
for user_name in users:
if is_unique(user_name, points):
unique_count += 1
return unique_count / len(samples.columns)
def iterate_unicity(samples, points=4, iterations=10):
'''IN:int, int, int'''
unicities = []
for _ in tqdm(list(range(iterations))):
unicities.append(compute_unicity(samples, points))
return np.mean(unicities)
# Load required libraries and methods.
import pandas as pd
import numpy as np
from scipy.stats import rv_discrete
from tqdm import tqdm
%pylab inline
# Load samples of the data set.
samples = pd.read_csv('privacy/mobility_sample_1k.csv', index_col='datetime')
samples.index = samples.index.astype('datetime64[ns]')
samples.head(3)
## Compute unicity.
iterate_unicity(samples, 1, 3)
# Your code here.
# Load antenna data.
antennas = pd.read_csv("privacy/belgium_antennas.csv")
antennas.set_index('ins', inplace=True)
cluster_10 = pd.read_csv('privacy/clusters_10.csv')
#cluster_10['ins'] = map(int, cluster_10['ins'])
cluster_10['ins'] = list(map(int, cluster_10['ins']))
mapping = dict(cluster_10[['ins', 'cluster']].values)
# Reduce the grain of the data set.
# Requires Numpy version 1.11.
samples_10 = samples.copy()
samples_10 = samples_10.applymap(lambda k: np.nan if np.isnan(k) else mapping[antennas.index[k]])
# Your code here.
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1.2 Calculate uniqueness
Step2: The results indicate that about 20% of the individuals could potentially be identified using two features ("sex" and "birthday"), and 99% of the population could potentially be reidentified using three features ("sex", "birthday", and "zip").
Step3: In this example, you will notice a value of one. This implies that the minimum number of individuals with a unique combination of the provided features is one, and that there is a significant risk of potential attackers being able to reidentify individuals in the data set.
Step4: 1.4 Coarsening of data
Step5: 1.4.2 Coarsen the data from birthday to birth year
Step6: 1.4.3 Coarsen the data from birthday to birth decade
Step7: 1.5 Suppression
Step8: 2. Privacy considerations for big data
Step9: 2.1.1 Example
Step10: Compute the unicity for a single data point (with 3 iterations)
Step11: 2.2 Unicity levels in big data data sets and their consequences
Step12:
Step13:
|