| text_prompt | code_prompt |
|---|---|
| stringlengths 168–30.3k | stringlengths 67–124k |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import all radars
Step2: Import a specific radar
Step3: Import a specific radar site
|
<ASSISTANT_TASK:>
Python Code:
# Import radar module
%pylab inline
from davitpy.pydarn.radar import *
radars = network()
print radars
# How to get the total number of radars
print len(radars)
print radars.nradar
# How to get a specific radar from the mountain of recorded radars
print radars.getRadarByCode("bks")
# is equivalent to...
#print radars.getRadarById(33)
#print radars.getRadarByName("Blackstone")
# How to get a specific radar site information at a given date
from datetime import datetime
print radars.getRadarByName('Goose Bay').getSiteByDate(datetime(2011,1,4))
# How to get only one radar without getting all the other radars
rad = radar(code='bks')
print rad
# How to get a site without going through the whole network or radar classes
print site(code='bks', dt=datetime(2010,11,17))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The module provides a class for each of the file resources, each of which defines the methods download_file and list_files
Step2: As the S3 bucket is an essential part of the Enram data infrastructure, an additional class S3EnramHandler is available, providing the functions required to handle the Enram bucket. As the class inherits from S3Connector, those functions are available in the handler as well
Step3: In order to transfer files, Transporter classes are available to define specific transfers. Currently a LocalToS3 and a BaltradToS3 are defined, managing the file transfer from a local file resource and from the Baltrad file server, respectively, to the Enram S3 bucket
Step4: From an Enram file management perspective, the S3EnramHandler together with the transporter classes are most relevant, as explained in the following sections.
Step5: Functions for S3 file checks are provided to support file checks and enlisting
Step6: As the file listing provides a generator, different options are available to have an overview list
Step7: As the datamover is just a thin layer around the boto3 package, the other boto3 S3 client options are still available to use
Step8: File transfer
Step9: The transporter class BaltradToS3 supports the file transfer
Step10: The necessity of the profile_name depends on your AWS account setup. If your default profile has the appropriate policy rights (as it does on the EC2 instance running the daily cron job), the AWS package will automatically use the default credentials and you do not need to specify the profile to use.
Step11: The results of the transfer are logged in the attributes btos.transferred and btos.stalled. A combined report can be written to a file log_file_transfer. The transfertype option lets the user set a custom message in the transfer header
Step12: The log is written to a file log_file_transfer
Step13: The transporter classes provide direct access to the individual connectors of the transfer, analogous to using the connectors directly
Step14: Local files to S3 bucket
Step15: S3 enram handler functions
Step16: The data coverage for each radar can be derived for multiple temporal intervals
Step17: Note the use of the _ to ignore the second output of the function, which is the information on the most recent available file for each radar.
Step18: The same aggregation function can be used for monthly and yearly counts
Step19: For example, derive the yearly counts for the Belgian radars
Step20: The counts can be exported to a CSV-file as well, available as general datamover utility function
Step21: The most recent files for each radar can be extracted using the same function
Step22: and saved to a file as well
Step23: Zip file support
Step24: A typical use-case is the update of those files that were transferred during a transfer operation, e.g. btos.transferred
Step25: Other options are possible, e.g. update those zip files for a specific radar
Step26: An entire update can be done, by using the available coverage on a monthly or daily level (in comments, as this is a large operation)
Step27: Further remarks
|
<ASSISTANT_TASK:>
Python Code:
import datamover as dm
print(dm.LocalConnector.list_files, dm.LocalConnector.download_file)
print(dm.S3Connector.list_files, dm.S3Connector.download_file)
print(dm.FTPConnector.list_files, dm.FTPConnector.download_file)
print(dm.GithubConnector.list_files, dm.GithubConnector.download_file)
# print available methods:
print([method for method in dir(dm.S3EnramHandler) if not method.startswith("_")])
print(dm.LocalToS3, dm.BaltradToS3)
s3 = dm.S3Connector("lw-enram") # analog for S3EnramHandler
s3.bucket_name
# check if a file(path) already exists on the S3 bucket:
s3.key_exists('cz/brd/2017/04/09/23/czbrd_vp_20170409230000.h5')
set(s3.list_files(path='cz/brd/2016/09/23/00')) # using set
for filepath in s3.list_files(path='cz/brd/2016/09/23/00'):
print(filepath)
# do something...
print([method for method in dir(s3.s3client) if not method.startswith("_")])
from creds import URL, LOGIN, PASSWORD
btos = dm.BaltradToS3(URL, LOGIN, PASSWORD, "lw-enram", profile_name="lw-enram")
# transfer files with _vp_ in the name, overwriting existing files and limiting the transferred files to 5:
btos.transfer(name_match="_vp_", overwrite=True,
limit=5, verbose=True)
btos.transferred
btos.report(reset_file=True, transfertype="Baltrad to S3")
!cat log_file_transfer
btos.s3.key_exists('de/boo/2018/01/13/00/deboo_vp_20180113T0015Z_0x5.h5') # S3 check for existing file
set(btos.s3.list_files(path='de/boo/2018/01/13/00')) # S3 file listing
set(btos.ftp.list_files(name_match="deboo_vp_20180113T0015Z_0x5.h5")) # ftp file listing
ltos = dm.LocalToS3(filepath="../example_data/", bucket_name="lw-enram",
profile_name="lw-enram")
ltos.transfer(name_match="_vp_", overwrite=False,
limit=5, verbose=True)
ltos.transferred
s3enram = dm.S3EnramHandler("lw-enram", profile_name="lw-enram") # Connect to the S3 client
# Rerun file list overview to extract the current coverage
coverage_day, _ = s3enram.count_enram_coverage(level='day')
coverage_day['frtra 2017-04-07']
coverage_year, _ = s3enram.count_enram_coverage(level='year')
{k:v for k,v in coverage_year.items() if k.startswith("be")}
with open("my_coverage_filename.csv", 'w') as outfile:
dm.coverage_to_csv(outfile, coverage_year)
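coverage_to_csv is a datamover utility; as a rough stand-in for what such an export looks like, here is a self-contained sketch that writes a coverage counter to CSV with the stdlib. The column names and row layout here are hypothetical, chosen for illustration and not necessarily what datamover produces.

```python
import csv
import io

# Hypothetical stand-in for a coverage export: {key: count} -> CSV rows.
coverage = {'bejab 2017': 1024, 'bewid 2017': 980}
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(['coverage_key', 'file_count'])  # made-up header names
for key in sorted(coverage):
    writer.writerow([key, coverage[key]])
print(buf.getvalue())
```

Writing to an open file handle, as in the cell above, works the same way: csv.writer accepts any object with a write method.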
_, most_recent_file = s3enram.count_enram_coverage(level='month')
most_recent_file
with open("radars_latest.csv", 'w') as outfile:
dm.most_recent_to_csv(outfile, most_recent_file)
!head radars_latest.csv
keyset = ['bewid_vp_20161120233000.h5',
'bewid_vp_20161120233500.h5',
'bewid_vp_20161120234000.h5',
'bewid_vp_20161120234500.h5',
'bewid_vp_20161120235000.h5',
'bewid_vp_20161120235500.h5',
'bejab_vp_20161120235000.h5']
s3enram.create_zip_version(keyset)
s3enram.create_zip_version(btos.transferred)
import os
country = "be"
radar = "wid"
keyset = []
for key in s3enram.list_files(path="/".join([country, radar])):
keyset.append(os.path.split(key)[1])
s3enram.create_zip_version(keyset)
# s3client.create_zip_version(s3client.count_enram_coverage(level="month"))
dm.parse_filename("dkrom_vp_20170114231500.h5")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Place text search
Step2: Place details - get more information once we have a place
Step3: Now let's look up a series of places! We'll store our results in places_found, for each place that we find.
Step4: Exporting our data to CSV
Step5: Now we have an open file called 'myplaces.csv', and we have written one row to it. If you were to close the filehandle now and look at the file, you would see that it looks like this
Step6: Now we can make sure the file is there and has what we expect!
|
<ASSISTANT_TASK:>
Python Code:
import requests
search_endpoint = 'https://maps.googleapis.com/maps/api/place/textsearch/json'
search_params = {
'query': 'Länggass Stübli',
'key': 'AIzaSyCNx-klDCfhopV6W_QPFZ0iwv5sp1J0XwQ',
'language': 'en'
}
r = requests.get( search_endpoint, params=search_params)
r.json() # See what we got
search_result = r.json()
details_endpoint = 'https://maps.googleapis.com/maps/api/place/details/json'
dparams = {
'key': 'AIzaSyCNx-klDCfhopV6W_QPFZ0iwv5sp1J0XwQ',
'placeid': search_result['results'][0]['place_id'],
'language': 'en'
}
r = requests.get( details_endpoint, params=dparams)
r.json() # See what we got
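Under the hood, requests encodes the params dict into the URL query string. A minimal stdlib sketch of the same encoding (for illustration only, with a placeholder API key; this is not part of the original notebook):

```python
from urllib.parse import urlencode, urlparse, parse_qs

# How a params dict becomes a query string, mirroring what requests
# does internally with the `params` argument.
endpoint = 'https://maps.googleapis.com/maps/api/place/textsearch/json'
params = {'query': 'Länggass Stübli', 'key': 'YOUR_API_KEY', 'language': 'en'}
url = endpoint + '?' + urlencode(params)
print(url)

# The encoding round-trips: parsing the query recovers the original values.
recovered = {k: v[0] for k, v in parse_qs(urlparse(url).query).items()}
print(recovered == params)  # True
```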
places_to_lookup = ['Moskva', 'Venice', 'Rosslyn Chapel', 'Cantabrigia']
places_found = {}
for p in places_to_lookup:
myparams = {
'query': p,
'key': 'AIzaSyCNx-klDCfhopV6W_QPFZ0iwv5sp1J0XwQ',
'language': 'en'
}
myr = requests.get( search_endpoint, params=myparams )
myresult = myr.json()
if 'results' in myresult and len(myresult['results']) > 0:
print("Found information for %s" % p)
places_found[p] = myresult['results'][0]
places_found
import csv
f = open('myplaces.csv', 'w', newline='', encoding='utf-8')
writer = csv.writer(f)
# First, write our column headers!
writer.writerow(['Place name', 'Address', 'ID', 'Latitude', 'Longitude'])
for p in places_found.keys():
place_info = places_found[p]
address = place_info['formatted_address']
placeid = place_info['place_id']
latitude = place_info['geometry']['location']['lat']
longitude = place_info['geometry']['location']['lng']
writer.writerow([p, address, placeid, latitude, longitude])
f.close() # Always close what you open, if you didn't use 'with'!
with open('myplaces.csv', encoding='utf-8') as f:
data = f.read()
print(data)
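An alternative to positional rows is csv.DictWriter, which maps column names to dict keys. A self-contained sketch with the same column names as the header above (the example row is made up):

```python
import csv
import io

# Same export pattern with csv.DictWriter; rows are dicts keyed by column name.
fieldnames = ['Place name', 'Address', 'ID', 'Latitude', 'Longitude']
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=fieldnames)
writer.writeheader()
writer.writerow({'Place name': 'Moskva', 'Address': 'Moscow, Russia',
                 'ID': 'abc123', 'Latitude': 55.75, 'Longitude': 37.61})

# Reading it back with DictReader gives one dict per row (values are strings).
buf.seek(0)
rows = list(csv.DictReader(buf))
print(rows[0]['Place name'])  # Moskva
```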
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First investigate sensitivity of the LogisiticModels to different seeds
Step2: Essentially no difference when setting the seed for different runs.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from vessel_scoring import data, utils
from vessel_scoring.models import train_model_on_data
from vessel_scoring.evaluate_model import evaluate_model, compare_models
from IPython.core.display import display, HTML, Markdown
import numpy as np
import sys
from sklearn import metrics
from vessel_scoring.logistic_model import LogisticModel
def make_model(seed=4321):
return LogisticModel(colspec=dict(
windows=[1800, 3600, 10800, 21600, 43200, 86400],
measures=['measure_daylight', 'measure_speed']), order=6, random_state=seed)
def load_data(seed=4321):
# Data supplied by Kristina
_, train_lline, valid_lline, test_lline = data.load_dataset_by_vessel(
'../datasets/kristina_longliner.measures.npz', seed)
_, train_trawl, valid_trawl, test_trawl = data.load_dataset_by_vessel(
'../datasets/kristina_trawl.measures.npz', seed)
_, train_pseine, valid_pseine, test_pseine = data.load_dataset_by_vessel(
'../datasets/kristina_ps.measures.npz', seed)
# Slow transits (used to train models to avoid classifying slow transits as fishing)
TRANSIT_WEIGHT = 10
x_tran, xtrain_tran, xcross_tran, xtest_tran = data.load_dataset_by_vessel(
'../datasets/slow-transits.measures.npz', even_split=False, seed=seed)
xtrain_tran = utils.clone_subset(xtrain_tran, test_lline.dtype)
xcross_tran = utils.clone_subset(xcross_tran, test_lline.dtype)
xtest_tran = utils.clone_subset(xtest_tran, test_lline.dtype)
train_tran = np.concatenate([xtrain_tran, xcross_tran] * TRANSIT_WEIGHT)
train = {'longliner': np.concatenate([train_lline, valid_lline, train_tran]),
'trawler': np.concatenate([train_trawl, valid_trawl, train_tran]),
'purse_seine': np.concatenate([train_pseine, valid_pseine, train_tran])}
test = {'longliner': test_lline,
'trawler': test_trawl,
'purse_seine': test_pseine}
return train, test
def get_seeds(count):
np.random.seed(4321)
return np.random.randint(4294967295, size=count)
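get_seeds relies on the fact that seeding NumPy's generator makes the draw deterministic: the same seed always yields the same integers, so every run of the notebook compares models over an identical set of seeds. A quick check:

```python
import numpy as np

# Two draws from identically seeded generators produce the same seed array,
# which is what makes get_seeds reproducible across runs.
np.random.seed(4321)
first = np.random.randint(4294967295, size=10)
np.random.seed(4321)
second = np.random.randint(4294967295, size=10)
print(np.array_equal(first, second))  # True
```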
N_SEEDS = 10
train_data, test_data = load_data()
for gear in ['purse_seine', 'trawler', 'longliner']:
X_test = test_data[gear]
display(HTML("<h2>{}</h2>".format(gear.replace('_', ' ').title())))
predictions = []
trained_models = []
for seed in get_seeds(N_SEEDS):
mdl = make_model(seed)
train_model_on_data(mdl, train_data[gear])
trained_models.append((seed, mdl))
predictions.append((seed, (mdl.predict_proba(X_test)[:,1] > 0.5), X_test['classification'] > 0.5))
lines = ["|Model|Recall|Precision|F1-Score|",
"|-----|------|---------|--------|"]
for name, pred, actual in predictions:
lines.append("|{}|{:.2f}|{:.2f}|{:.2f}|".format(name,
metrics.recall_score(actual, pred),
metrics.precision_score(actual, pred),
metrics.f1_score(actual, pred)))
display(Markdown('\n'.join(lines)))
compare_models(trained_models, X_test)
display(HTML("<hr/>"))
for gear in ['purse_seine', 'trawler', 'longliner']:
X_test = test_data[gear]
display(HTML("<h2>{}</h2>".format(gear.replace('_', ' ').title())))
predictions = []
trained_models = []
for seed in get_seeds(N_SEEDS):
mdl = make_model()
train_data, test_data = load_data(seed)
train_model_on_data(mdl, train_data[gear])
trained_models.append((seed, mdl))
predictions.append((seed, (mdl.predict_proba(X_test)[:,1] > 0.5), X_test['classification'] > 0.5))
lines = ["|Model|Recall|Precision|F1-Score|",
"|-----|------|---------|--------|"]
for name, pred, actual in predictions:
lines.append("|{}|{:.2f}|{:.2f}|{:.2f}|".format(name,
metrics.recall_score(actual, pred),
metrics.precision_score(actual, pred),
metrics.f1_score(actual, pred)))
display(Markdown('\n'.join(lines)))
compare_models(trained_models, X_test)
display(HTML("<hr/>"))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Then we need some imports
Step3: We define a simple function that returns our training dataset as a Ray Dataset
Step5: Now we define a simple training function. All the magic happens within the CometLoggerCallback
Step6: Let's kick off a run
|
<ASSISTANT_TASK:>
Python Code:
!pip install -qU "ray[tune]" scikit-learn xgboost_ray comet_ml
import ray
from ray.air import RunConfig
from ray.air.result import Result
from ray.train.xgboost import XGBoostTrainer
from ray.tune.integration.comet import CometLoggerCallback
from sklearn.datasets import load_breast_cancer
def get_train_dataset() -> ray.data.Dataset:
"""Return the "Breast cancer" dataset as a Ray dataset."""
data_raw = load_breast_cancer(as_frame=True)
df = data_raw["data"]
df["target"] = data_raw["target"]
return ray.data.from_pandas(df)
def train_model(train_dataset: ray.data.Dataset, comet_project: str) -> Result:
"""Train a simple XGBoost model and return the result."""
trainer = XGBoostTrainer(
scaling_config={"num_workers": 2},
params={"tree_method": "auto"},
label_column="target",
datasets={"train": train_dataset},
num_boost_round=10,
run_config=RunConfig(
callbacks=[
# This is the part needed to enable logging to Comet ML.
# It assumes Comet ML can find a valid API (e.g. by setting
# the ``COMET_API_KEY`` environment variable).
CometLoggerCallback(
project_name=comet_project,
save_checkpoints=True,
)
]
),
)
result = trainer.fit()
return result
comet_project = "ray_air_example"
train_dataset = get_train_dataset()
result = train_model(train_dataset=train_dataset, comet_project=comet_project)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's untar the file unless it has already been untarred.
Step2: The UW3-500 dataset is a collection of text line images and corresponding ground truth transcription. It's organized as a directory tree of the form book/<page_no>/<line_id>.bin.png etc.
Step3: Let's now run clstmctc training. We report every 100 training steps. Since we didn't dewarp or size-normalize the lines, we need to use a dewarp=center argument to training.
|
<ASSISTANT_TASK:>
Python Code:
!test -f uw3-500.tgz || wget -nd http://www.tmbdev.net/ocrdata/uw3-500.tgz
!test -d book || tar -zxvf uw3-500.tgz
!ls book/0005/010001.*
!dewarp=center report_every=500 save_name=test save_every=10000 ntrain=11000 ../clstmctc uw3-500.h5
!ls book/*/*.bin.png | sort -r > uw3.files
!sed 100q uw3.files > uw3-test.files
!sed 1,100d uw3.files > uw3-train.files
!wc -l uw3*.files
!params=1 save_name=uw3small save_every=1000 report_every=100 maxtrain=50000 test_every=1000 ../clstmocrtrain uw3-train.files uw3-test.files
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The following code cell visualizes the audio waveform for your chosen example, along with the corresponding transcript. You also have the option to play the audio in the notebook!
Step2: <a id='step1'></a>
Step3: Mel-Frequency Cepstral Coefficients (MFCCs)
Step4: When you construct your pipeline, you will be able to choose to use either spectrogram or MFCC features. If you would like to see different implementations that make use of MFCCs and/or spectrograms, please check out the links below
Step5: <a id='model0'></a>
Step6: As explored in the lesson, you will train the acoustic model with the CTC loss criterion. Custom loss functions take a bit of hacking in Keras, and so we have implemented the CTC loss function for you, so that you can focus on trying out as many deep learning architectures as possible
Step7: <a id='model1'></a>
Step8: Please execute the code cell below to train the neural network you specified in input_to_softmax. After the model has finished training, the model is saved in the HDF5 file model_1.h5. The loss history is saved in model_1.pickle. You are welcome to tweak any of the optional parameters while calling the train_model function, but this is not required.
Step9: <a id='model2'></a>
Step10: Please execute the code cell below to train the neural network you specified in input_to_softmax. After the model has finished training, the model is saved in the HDF5 file model_2.h5. The loss history is saved in model_2.pickle. You are welcome to tweak any of the optional parameters while calling the train_model function, but this is not required.
Step11: <a id='model3'></a>
Step12: Please execute the code cell below to train the neural network you specified in input_to_softmax. After the model has finished training, the model is saved in the HDF5 file model_3.h5. The loss history is saved in model_3.pickle. You are welcome to tweak any of the optional parameters while calling the train_model function, but this is not required.
Step13: <a id='model4'></a>
Step14: Please execute the code cell below to train the neural network you specified in input_to_softmax. After the model has finished training, the model is saved in the HDF5 file model_4.h5. The loss history is saved in model_4.pickle. You are welcome to tweak any of the optional parameters while calling the train_model function, but this is not required.
Step15: <a id='model5'></a>
Step16: <a id='compare'></a>
Step17: Question 1
Step18: Please execute the code cell below to train the neural network you specified in input_to_softmax. After the model has finished training, the model is saved in the HDF5 file model_end.h5. The loss history is saved in model_end.pickle. You are welcome to tweak any of the optional parameters while calling the train_model function, but this is not required.
Step20: Question 2
Step21: Use the code cell below to obtain the transcription predicted by your final model for the first example in the training dataset.
Step22: Use the next code cell to visualize the model's prediction for the first example in the validation dataset.
|
<ASSISTANT_TASK:>
Python Code:
from data_generator import vis_train_features
# extract label and audio features for a single training example
vis_text, vis_raw_audio, vis_mfcc_feature, vis_spectrogram_feature, vis_audio_path = vis_train_features()
from IPython.display import Markdown, display
from data_generator import vis_train_features, plot_raw_audio
from IPython.display import Audio
%matplotlib inline
# plot audio signal
plot_raw_audio(vis_raw_audio)
# print length of audio signal
display(Markdown('**Shape of Audio Signal** : ' + str(vis_raw_audio.shape)))
# print transcript corresponding to audio clip
display(Markdown('**Transcript** : ' + str(vis_text)))
# play the audio file
Audio(vis_audio_path)
from data_generator import plot_spectrogram_feature
# plot normalized spectrogram
plot_spectrogram_feature(vis_spectrogram_feature)
# print shape of spectrogram
display(Markdown('**Shape of Spectrogram** : ' + str(vis_spectrogram_feature.shape)))
from data_generator import plot_mfcc_feature
# plot normalized MFCC
plot_mfcc_feature(vis_mfcc_feature)
# print shape of MFCC
display(Markdown('**Shape of MFCC** : ' + str(vis_mfcc_feature.shape)))
#####################################################################
# RUN THIS CODE CELL IF YOU ARE RESUMING THE NOTEBOOK AFTER A BREAK #
#####################################################################
# allocate 50% of GPU memory (if you like, feel free to change this)
from keras.backend.tensorflow_backend import set_session
import tensorflow as tf
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.5
set_session(tf.Session(config=config))
# watch for any changes in the sample_models module, and reload it automatically
%load_ext autoreload
%autoreload 2
# import NN architectures for speech recognition
from sample_models import *
# import function for training acoustic model
from train_utils import train_model
model_0 = simple_rnn_model(input_dim=161) # change to 13 if you would like to use MFCC features
train_model(input_to_softmax=model_0,
pickle_path='model_0.pickle',
save_model_path='model_0.h5',
spectrogram=True) # change to False if you would like to use MFCC features
model_1 = rnn_model(input_dim=161, # change to 13 if you would like to use MFCC features
units=200,
activation='relu')
train_model(input_to_softmax=model_1,
pickle_path='model_1.pickle',
save_model_path='model_1.h5',
spectrogram=True) # change to False if you would like to use MFCC features
model_2 = cnn_rnn_model(input_dim=161, # change to 13 if you would like to use MFCC features
filters=200,
kernel_size=11,
conv_stride=2,
conv_border_mode='valid',
units=200)
train_model(input_to_softmax=model_2,
pickle_path='model_2.pickle',
save_model_path='model_2.h5',
spectrogram=True) # change to False if you would like to use MFCC features
model_3 = deep_rnn_model(input_dim=161, # change to 13 if you would like to use MFCC features
units=200,
recur_layers=2)
train_model(input_to_softmax=model_3,
pickle_path='model_3.pickle',
save_model_path='model_3.h5',
spectrogram=True) # change to False if you would like to use MFCC features
model_4 = bidirectional_rnn_model(input_dim=161, # change to 13 if you would like to use MFCC features
units=200)
train_model(input_to_softmax=model_4,
pickle_path='model_4.pickle',
save_model_path='model_4.h5',
spectrogram=True) # change to False if you would like to use MFCC features
## (Optional) TODO: Try out some more models!
### Feel free to use as many code cells as needed.
from glob import glob
import numpy as np
import _pickle as pickle
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
sns.set_style(style='white')
# obtain the paths for the saved model history
all_pickles = sorted(glob("results/*.pickle"))
# extract the name of each model
model_names = [item[8:-7] for item in all_pickles]
# extract the loss history for each model
valid_loss = [pickle.load( open( i, "rb" ) )['val_loss'] for i in all_pickles]
train_loss = [pickle.load( open( i, "rb" ) )['loss'] for i in all_pickles]
# save the number of epochs used to train each model
num_epochs = [len(valid_loss[i]) for i in range(len(valid_loss))]
fig = plt.figure(figsize=(16,5))
# plot the training loss vs. epoch for each model
ax1 = fig.add_subplot(121)
for i in range(len(all_pickles)):
ax1.plot(np.linspace(1, num_epochs[i], num_epochs[i]),
train_loss[i], label=model_names[i])
# clean up the plot
ax1.legend()
ax1.set_xlim([1, max(num_epochs)])
plt.xlabel('Epoch')
plt.ylabel('Training Loss')
# plot the validation loss vs. epoch for each model
ax2 = fig.add_subplot(122)
for i in range(len(all_pickles)):
ax2.plot(np.linspace(1, num_epochs[i], num_epochs[i]),
valid_loss[i], label=model_names[i])
# clean up the plot
ax2.legend()
ax2.set_xlim([1, max(num_epochs)])
plt.xlabel('Epoch')
plt.ylabel('Validation Loss')
plt.show()
# specify the model
model_end = final_model(input_dim=161, # change to 13 if you would like to use MFCC features
units=200,
recur_layers=2,
filters=200,
kernel_size=11,
conv_stride=2,
conv_border_mode='valid')
train_model(input_to_softmax=model_end,
pickle_path='model_end.pickle',
save_model_path='model_end.h5',
spectrogram=True) # change to False if you would like to use MFCC features
import numpy as np
from data_generator import AudioGenerator
from keras import backend as K
from utils import int_sequence_to_text
from IPython.display import Audio
def get_predictions(index, partition, input_to_softmax, model_path):
"""Print a model's decoded predictions
Params:
index (int): The example you would like to visualize
partition (str): One of 'train' or 'validation'
input_to_softmax (Model): The acoustic model
model_path (str): Path to saved acoustic model's weights
"""
# load the train and test data
data_gen = AudioGenerator()
data_gen.load_train_data()
data_gen.load_validation_data()
# obtain the true transcription and the audio features
if partition == 'validation':
transcr = data_gen.valid_texts[index]
audio_path = data_gen.valid_audio_paths[index]
data_point = data_gen.normalize(data_gen.featurize(audio_path))
elif partition == 'train':
transcr = data_gen.train_texts[index]
audio_path = data_gen.train_audio_paths[index]
data_point = data_gen.normalize(data_gen.featurize(audio_path))
else:
raise Exception('Invalid partition! Must be "train" or "validation"')
# obtain and decode the acoustic model's predictions
input_to_softmax.load_weights(model_path)
prediction = input_to_softmax.predict(np.expand_dims(data_point, axis=0))
output_length = [input_to_softmax.output_length(data_point.shape[0])]
pred_ints = (K.eval(K.ctc_decode(
prediction, output_length)[0][0])+1).flatten().tolist()
# play the audio file, and display the true and predicted transcriptions
print('-'*80)
Audio(audio_path)
print('True transcription:\n' + '\n' + transcr)
print('-'*80)
print('Predicted transcription:\n' + '\n' + ''.join(int_sequence_to_text(pred_ints)))
print('-'*80)
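At its core, greedy CTC decoding collapses runs of repeated symbols and then drops the blanks; K.ctc_decode does this (plus length handling) internally. A minimal pure-Python sketch of the collapse rule, with 0 standing in as the blank index (an assumption for illustration, not the Keras implementation):

```python
def ctc_greedy_collapse(path, blank=0):
    # Collapse runs of repeated symbols, then remove blanks.
    collapsed = []
    prev = None
    for sym in path:
        if sym != prev:
            collapsed.append(sym)
        prev = sym
    return [s for s in collapsed if s != blank]

# [1, 1, 0, 1, 2, 2] -> [1, 1, 2]: repeats merge only within a run,
# and the blank separates the two 1's so both survive.
print(ctc_greedy_collapse([1, 1, 0, 1, 2, 2]))  # [1, 1, 2]
```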
get_predictions(index=0,
partition='train',
input_to_softmax=final_model(
input_dim=161, # change to 13 if you would like to use MFCC features
units=200,
recur_layers=2,
filters=200,
kernel_size=11,
conv_stride=2,
conv_border_mode='valid'),
model_path='results/model_end.h5')
get_predictions(index=0,
partition='validation',
input_to_softmax=final_model(
input_dim=161, # change to 13 if you would like to use MFCC features
units=200,
recur_layers=2,
filters=200,
kernel_size=11,
conv_stride=2,
conv_border_mode='valid'),
model_path='results/model_end.h5')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Rectangle and Triangle Pulses Defined
Step2: Consider an interactive version of the above
Step3: More Signal Plotting
Step5: Custom Piecewise
Step7: Energy and Power Signals
Step8: For the power calculation create a time array that runs over exactly one period. Below is the case for the original problem statement.
Step9: A simple numerical approximation to the integral
Step10: Power in the Sum of Two Sinusoids
Step12: Fourier Series and Line Spectra Plotting
Step13: Example
Step15: Example
Step16: Let $\tau = 1/8$ and $t_r = 1/20$
Step17: With the edge speed slowed down it is clear that the harmonics drop off faster.
Step18: Example
Step19: Example
Step20: Notice the numerical approximation and exact spectral plots overlay one another
Step21: Example
Step22: Note
Step23: For convolutions involving semi-infinite signals, such as $u(t)$, you can tell ssd.conv_integral() about this via the optional extent argument. See the function help using
Step24: Spectrum of PN Sequence (exact)
Step25: Spectrum of PN Sequence (approx)
Step28: Lab Tip
Step29: Cross Correlation and Signal Delay
Step30: Spectral Containment Bandwidth (text problem 2.55)
Step35: Filter Analysis
Step36: Example
Step37: Example
Step38: Second-Order Butterworth Lowpass Response
Step39: Obtaining the Step Response via Simulation
|
<ASSISTANT_TASK:>
Python Code:
t = arange(-4,4,.01)
x = cos(2*pi*t)
plot(t,x)
grid()
t = arange(-5,5,.01)
x_rect = ss.rect(t-3,2)
x_tri = ss.tri(t+2,1.5)
subplot(211)
plot(t,x_rect)
grid()
ylabel(r'$\Pi((t-3)/2)$');
subplot(212)
plot(t,x_tri)
grid()
xlabel(r'Time (s)')
ylabel(r'$\Lambda((t+2)/1.5)$');
tight_layout()
# Make an interactive version of the above
from ipywidgets import interact, interactive
def pulses_plot(D1,D2,W1,W2):
t = arange(-5,5,.01)
x_rect = ss.rect(t-D1,W1)
x_tri = ss.tri(t-D2,W2)
subplot(211)
plot(t,x_rect)
grid()
ylabel(r'$\Pi((t-D_1)/W_1)$');
subplot(212)
plot(t,x_tri)
grid()
xlabel(r'Time (s)')
ylabel(r'$\Lambda((t-D_2)/W_2)$');
tight_layout()
interactive_plot = interactive(pulses_plot,D1 = (-3,3,.5), D2 = (-3,3,.5), W1 = (0.5,2,.25), W2 = (0.5,2,.25));
output = interactive_plot.children[-1]
output.layout.height = '350px'
interactive_plot
t1 = arange(0,10+.01,.01) # arange stops one step size less than the upper limit
x1 = sin(2*pi*5*t1)* ss.rect(t1-2,2)
subplot(211)
plot(t1,x1)
xlabel(r'Time (s)')
ylabel(r'$x_1(t)$')
grid()
t2 = arange(-10,10,.01)
# Tweak mod() to take on negative values
x2 = ss.rect(mod(t2+2.5,5)-2.5,1)
subplot(212)
plot(t2,x2)
xlabel(r'Time (s)')
ylabel(r'$x_2(t)$')
grid()
tight_layout()
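The mod() tweak above works because mod(t + t0, T) - t0 maps any t into a window of width T, making the rect expression periodic with period T. A quick numeric check (a sketch assuming NumPy; the threshold |t| <= tau/2 mimics a unit-amplitude rect of width tau):

```python
import numpy as np

def x2(t, T=5.0, tau=1.0, t0=2.5):
    # Same trick as above: shift, wrap with mod, shift back, then rect.
    wrapped = np.mod(t + t0, T) - t0
    return (np.abs(wrapped) <= tau / 2).astype(float)

# Periodicity: shifting the argument by one period leaves x2 unchanged.
rng = np.random.default_rng(0)
t = rng.uniform(-10, 10, 1000)
print(np.allclose(x2(t), x2(t + 5.0)))  # True
```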
def x3_func(t):
Create a piecewise function for plotting x3
x3 = zeros_like(t)
for k,tk in enumerate(t):
if tk >= 0 and tk <= 3:
x3[k] = 1 + tk**2
elif tk > 3 and tk <= 5:
x3[k] = cos(2*pi*5*tk)
return x3
t3 = arange(-2,6+.01,.01)
x3 = x3_func(t3)
plot(t3,x3)
xlabel(r'Time (s)')
ylabel(r'$x_3(t)$')
xlim([-2,6])
grid()
26/2
def periodic_tri(t,tau,T,N):
"""Approximate x2(t) by running the sum index from -N to +N.
The period is set by T and tau is the tri pulse width
parameter (base width is 2*tau).
Mark Wickert January 2015
"""
x = zeros_like(t)
for n in arange(-N,N+1):
x += ss.tri(t-T*n,tau)
return x
t = arange(-10,10,.001)
x = periodic_tri(t,2,6,10)
plot(t,x)
plot(t,abs(x)**2)
grid()
#xlim([-5,5])
xlabel(r'Time (s)')
ylabel(r'$x_2(t)$ and $x_2^2(t)$');
T0 = 6
tp = arange(-T0/2,T0/2+.001,.001)
xp = periodic_tri(tp,2,T0,5)
plot(tp,xp)
plot(tp,abs(xp)**2)
legend((r'$x(t)$', r'$|x(t)|^2$'),loc='best',shadow=True)
grid();
xlim([-T0/2,T0/2])
xlabel(r'Time (s)')
ylabel(r'$x_2(t)$ and $x_2^2(t)$');
#Power calculation
Px2 = (1/T0)*sum(xp**2)*.001 # rectangular partitions for integral
print('Power estimate via numerical integration: %2.4f W' % Px2)
t = arange(-10,10,.001)
x1 = 4*cos(2*pi*10*t)
x2 = 3*cos(2*pi*3.45*t+pi/9)
plot(t,x1)
plot(t,x2)
plot(t,x1+x2)
grid()
xlabel(r'Time (s)')
ylabel(r'Amplitude')
legend((r'$x_1(t)$', r'$x_2(t)$', r'$x_1(t)+x_2(t)$'),loc='best',shadow=True)
xlim([-.1,.1]);
print('Power calculations: %3.2f, %3.2f, %3.2f' \
% (var(x1),var(x2),var(x1+x2)))
print('Theory: %3.2f, %3.2f, %3.2f' \
% (4**2/2,3**2/2,4**2/2+3**2/2))
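The A**2/2 rule used in the theory line can be verified directly by averaging the squared sinusoid over an exact number of periods:

```python
import numpy as np

# Average power of A*cos(2*pi*f0*t) over exactly one period is A**2/2.
A, f0 = 4.0, 10.0
T0 = 1 / f0
t = np.linspace(0, T0, 1000, endpoint=False)  # exactly one period
x = A * np.cos(2 * np.pi * f0 * t)
P = np.mean(x**2)
print(P, A**2 / 2)  # both ~8.0
```

Averaging over an exact period avoids the small bias var() picks up when the record length is not an integer number of periods.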
def pulse_train(Np,fs,tau,t0):
Generate a discrete-time approximation to a continuous-time
pulse train signal. Amplitude values are [0,1]. Scale and offset
later if needed.
Inputs
------
Np = number of periods to generate
fs = samples per period
tau = duty cycle
t0 = pulse delay time relative to first rising edge at t = 0
Return
------
t = time axis array
x = waveform
Mark Wickert, January 2015
t = arange(0,Np*fs+1,1)/fs #time is normalized to make period T0 = 1.0
x = zeros_like(t)
# Using a brute force approach, just fill x with the sample values
for k,tk in enumerate(t):
if mod(tk-t0,1) <= tau and mod(tk-t0,1) >= 0:
x[k] = 1
return t,x
tau = 1/8; fs = 8*16; t0 = 0 # note t0 = tau/2
subplot(211)
t,x = pulse_train(4,fs,tau,t0)
plot(t,x) # Just a plot of xa(t)
ylim([-.1,1.1])
grid()
ylabel(r'$x_a(t)$')
title(r'Pulse Train Signal: (top) $x_a(t)$, (bot) $x_b(t) = 1-x_a(t)$');
subplot(212)
t,x = pulse_train(4,fs,tau,t0)
plot(t,1-x) # Note here y(t) = 1 - x(t), a special case of
ylim([-.1,1.1]) # y(t) = A + B*x(t) in the notes
grid()
xlabel(r'Time ($t/T_0$)')
ylabel(r'$x_b(t)$');
n = arange(0,25+1) # Get 0 through 25 harmonics
tau = 0.125; f0 = 1; A = 1;
Xn = A*tau*f0*sinc(n*f0*tau)*exp(-1j*2*pi*n*f0*tau/2)
# Xn = -Xn # Convert the coefficients from xa(t) t0 xb(t)
# Xn[0] += 1
figure(figsize=(6,2))
f = n # Assume a fundamental frequency of 1 Hz so f = n
ss.line_spectra(f,Xn,mode='mag',sides=2,fsize=(6,2))
xlim([-25,25]);
#ylim([-50,10])
figure(figsize=(6,2))
ss.line_spectra(f,Xn,mode='phase',fsize=(6,2))
xlim([-25,25]);
def trap_pulse(N,tau,tr):
"""
xp = trap_pulse(N,tau,tr)

Mark Wickert, January 2015
"""
n = arange(0,N)
t = n/N
xp = zeros(len(t))
# Assume tr and tf are equal
T1 = tau + tr
# Create one period of the trapezoidal pulse waveform
for k in n:
if t[k] <= tr:
xp[k] = t[k]/tr
elif (t[k] > tr and t[k] <= tau):
xp[k] = 1
elif (t[k] > tau and t[k] < T1):
xp[k] = -t[k]/tr + 1 + tau/tr;
else:
xp[k] = 0
return xp, t
# tau = 1/8, tr = 1/20
N = 1024
xp,t = trap_pulse(N,1/8,1/20)
Xp = fft.fft(xp)
figure(figsize=(6,2))
plot(t,xp)
grid()
title(r'Spectra of Finite Risetime Pulse Train: $\tau = 1/8$ $t_r = 1/20$')
ylabel(r'$x(t)$')
xlabel('Time (s)')
f = arange(0,N/2)
ss.line_spectra(f[0:25],Xp[0:25]/N,'magdB',floor_dB=-80,fsize=(6,2))
ylabel(r'$|X_n| = |X(f_n)|$ (dB)');
#% tau = 1/8, tr = 1/10
xp,t = trap_pulse(N,1/8,1/10)
Xp = fft.fft(xp)
figure(figsize=(6,2))
plot(t,xp)
grid()
title(r'Spectra of Finite Risetime Pulse Train: $\tau = 1/8$ $t_r = 1/10$')
ylabel(r'$x(t)$')
xlabel('Time (s)')
ss.line_spectra(f[0:25],Xp[0:25]/N,'magdB',floor_dB=-80,fsize=(6,2))
ylabel(r'$|X_n| = |X(f_n)|$ (dB)');
def FT_approx(x,t,Nfft):
'''
Approximate the Fourier transform of a finite duration
signal using scipy.signal.freqz()
Inputs
------
x = input signal array
t = time array used to create x(t)
Nfft = the number of frequency domain points used to
approximate X(f) on the interval [-fs/2,fs/2], where
fs = 1/Dt. Dt being the time spacing in array t
Return
------
f = frequency axis array in Hz
X = the Fourier transform approximation (complex)
Mark Wickert, January 2015
'''
fs = 1/(t[1] - t[0])
t0 = (t[-1]+t[0])/2 # time delay at center
N0 = len(t)/2 # FFT center in samples
f = arange(-1/2,1/2,1/Nfft)
w, X = signal.freqz(x,1,2*pi*f)
X /= fs # account for dt = 1/fs in integral
X *= exp(-1j*2*pi*f*fs*t0)# time interval correction
X *= exp(1j*2*pi*f*N0)# FFT time interval is [0,Nfft-1]
F = f*fs
return F, X
fs = 100 # sampling rate in Hz
tau = 1
t = arange(-5,5,1/fs)
x0 = ss.rect(t-.5,tau)
figure(figsize=(6,5))
subplot(311)
plot(t,x0)
grid()
ylim([-0.1,1.1])
xlim([-2,2])
title(r'Exact Waveform')
xlabel(r'Time (s)')
ylabel(r'$x_0(t)$');
# FT Exact Plot
fe = arange(-10,10,.01)
X0e = tau*sinc(fe*tau)
subplot(312)
plot(fe,abs(X0e))
#plot(f,angle(X0))
grid()
xlim([-10,10])
title(r'Exact Spectrum Magnitude')
xlabel(r'Frequency (Hz)')
ylabel(r'$|X_0e(f)|$');
# FT Approximation Plot
f,X0 = ss.ft_approx(x0,t,4096)
subplot(313)
plot(f,abs(X0))
#plot(f,angle(X0))
grid()
xlim([-10,10])
title(r'Approximation Spectrum Magnitude')
xlabel(r'Frequency (Hz)')
ylabel(r'$|X_0(f)|$');
tight_layout()
fs = 100
t = arange(-5,5,1/fs)
x1 = ss.rect(t+1/2,1)-ss.rect(t-1/2,1)
subplot(211)
plot(t,x1)
grid()
ylim([-1.1,1.1])
xlim([-2,2])
xlabel(r'Time (s)')
ylabel(r'$x_1(t)$');
fe = arange(-10,10,.01)
X1e = 2*1j*sinc(fe)*sin(pi*fe)
f,X1 = ss.ft_approx(x1,t,4096)
subplot(212)
plot(f,abs(X1))
plot(fe,abs(X1e))
#plot(f,angle(X1))
legend((r'Num Approx',r'Exact'),loc='best')
grid()
xlim([-10,10])
xlabel(r'Frequency (Hz)')
ylabel(r'$|X_1(f)|$');
tight_layout()
fs = 100 # sampling rate in Hz
tau = 1
t = arange(-5,5,1/fs)
x3 = ss.tri(t,tau)
y = x3*cos(2*pi*10*t)
subplot(211)
plot(t,x3)
plot(t,y)
grid()
ylim([-1.1,1.1])
xlim([-2,2])
legend((r'$x_3(t)$', r'$y(t)$'),loc='lower right',shadow=True)
title(r'Time Domain: $x_3(t)$ and $y(t)=x_3(t)\cos(2\pi\cdot 5\cdot t)$')
xlabel(r'Time (s)')
ylabel(r'$y(t)$');
f,Y = ss.ft_approx(y,t,4096)
subplot(212)
plot(f,abs(Y))
#plot(f,angle(X0))
grid()
title(r'Frequency Domain: $Y(f)$')
xlim([-15,15])
xlabel(r'Frequency (Hz)')
ylabel(r'$|Y(f)|$');
tight_layout()
fs = 100 # sampling rate in Hz
W = 5
t = arange(-5,5,1/fs)
x4 = 2*W*sinc(2*W*t)
figure(figsize=(6,2))
plot(t,x4)
grid()
#ylim([-1.1,1.1])
xlim([-2,2])
title(r'Time Domain: $x_4(t),\ W = 5$ Hz')
xlabel(r'Time (s)')
ylabel(r'$x_4(t)$');
f,X4 = ss.ft_approx(x4,t,4096)
figure(figsize=(6,2))
plot(f,abs(X4))
grid()
title(r'Frequency Domain: $X_4(f)$')
xlim([-10,10])
xlabel(r'Frequency (Hz)')
ylabel(r'$|X_4(f)|$');
figure(figsize=(6,2))
plot(f,20*log10(abs(X4)))
grid()
title(r'Frequency Domain: $X_4(f)$ in dB')
ylim([-50,5])
xlim([-10,10])
xlabel(r'Frequency (Hz)')
ylabel(r'$|X_4(f)|$ (dB)');
t = arange(-2,2.001,.001)
p1 = ss.rect(t,1)
p2 = ss.rect(t,3)
y,ty = ss.conv_integral(p1,t,p2,t)
plot(ty,y)
ylim([-.01,1.01])
grid()
xlabel(r'Time (s)')
ylabel(r'$y(t)$');
# Consider a pulse convolved with an exponential ('r' type extent)
tx = arange(-1,8,.01)
x = ss.rect(tx-2,4) # pulse starts at t = 0
h = 4*exp(-4*tx)*ss.step(tx)
y,ty = ss.conv_integral(x,tx,h,tx,extent=('f','r')) # note extents set
plot(ty,y) # expect a pulse charge and discharge waveform
grid()
title(r'$\Pi((t-2)/4)\ast 4 e^{-4t} u(t)$')
xlabel(r'Time (s)')
ylabel(r'$y(t)$');
n = arange(0,25+1) # Get 0 through 25 harmonics
tau = 0.125; f0 = 1; A = 1;
Xn = A*tau*f0*sinc(n*f0*tau)*exp(-1j*2*pi*n*f0*tau/2)
# Xn = -Xn # Convert the coefficients from xa(t) t0 xb(t)
# Xn[0] += 1
figure(figsize=(6,2))
f = n # Assume a fundamental frequency of 1 Hz so f = n
ss.line_spectra(f,Xn,mode='mag',sides=2,fsize=(6,2))
xlim([-25,25]);
#ylim([-50,10])
figure(figsize=(6,2))
ss.line_spectra(f,Xn,mode='phase',fsize=(6,2))
xlim([-25,25]);
x_PN4 = ss.m_seq(4)
x = signal.lfilter(ones(10),1,ss.upsample(x_PN4,10))
t = arange(0,len(x))/10
figure(figsize=(6,2))
plot(t,x);
title(r'Time Domain and PSD of $M=15$ PN Code with $T = 1$')
xlabel(r'Time (s)')
ylabel(r'x(t)')
axis([0,15,-0.1,1.1]);
grid()
# 10 samples/bit so 150 samples/period
# harmonics spaced by 1/(15*T) = 1/15
Xk,fk = ss.fs_coeff(x,45,1/15)
ss.line_spectra(fk,Xk,'magdB',lwidth=2.0,floor_dB=-50,fsize=(6,2))
xlim([-3,3])
ylabel(r'$|X_n| = |X(f_n)|$ (dB)');
# Line spacing
1/15
import sk_dsp_comm.digitalcom as dc
y_PN5_bits = ss.pn_gen(10000,5)
# Convert to waveform level shifted to +/-1 amplitude
y = 2*signal.lfilter(ones(10),1,ss.upsample(y_PN5_bits,10))-1
# Find the time averaged autocorrelation function normalized
# to have a peak amplitude of 1
Ry,tau = dc.xcorr(y,y,400)
# We know Ry is real so strip small imag parts from FFT-based calc
Ry = Ry.real
fs = 10
t = arange(len(y))/fs
plot(t[:500],y[:500])
title(r'PN Waveform for 5 Stages (Period $2^5 -1 = 31$ bits)')
ylabel(r'Amplitude')
xlabel(r'Bits (10 samples/bit)')
grid();
tau_s = tau/10
figure(figsize=(6,2))
plot(tau_s,Ry)
title(r'Autocorrelation and PSD Estimates for $M=31$ with $T = 1$')
xlabel(r'Autocorrelation Lag $\tau$ (s)')
ylabel(r'$R_y(\tau)$')
grid();
figure(figsize=(6,2))
psd(y,2**12,10)
xlabel(r'Frequency (Hz)')
ylabel(r'$S_y(f)$ (dB)')
#xlim([0,.002]);
ylim([-30,20]);
class bitwise_PN(object):
"""
Implement a PN generator using bitwise manipulation for
the shift register. The LSB holds b0 and bits are shifted left.
+----+----+----+----+----+----+----+
sr = |bM-1| .. |bM-k| .. | b2 | b1 | b0 |
+----+----+----+----+----+----+----+
| |
Feedback:(tap1-1) (tap2-1) Shift left using <<

Mark Wickert February 2017
"""
def __init__(self,tap1,tap2,Nstage,sr_initialize):
"""Initialize the PN generator object"""
self.tap1 = tap1 - 1
self.tap2 = tap2 - 1
self.mask1 = 0x1 << (tap1 - 1) # to select bit of interest
self.mask2 = 0x1 << (tap2 - 1) # to select bit of interest
self.Nstage = Nstage
self.period = 2**Nstage - 1
self.sr = sr_initialize
self.bit = 0
self.sync_bit = 0
def clock_PN(self):
'''
Method to advance m-sequence generator by one bit
XOR tap1 and tap2 SR values and feedback to input
'''
fb = ((self.sr & self.mask1)>> self.tap1) ^ \
((self.sr & self.mask2) >> self.tap2)
self.sr = (self.sr << 1) + fb
self.sr = self.sr & self.period # set MSBs > Nstage to 0
self.bit = self.sr & 0x1 # output LSB from SR
# See if all 1's condition exits in SR, if so output a synch pulse
if ((self.sr & self.period) == self.period):
self.sync_bit = 0x1
else:
self.sync_bit = 0x0
print('output = %d, sr contents = %s, sync bit = %d' \
% (self.bit, binary(self.sr, self.Nstage), self.sync_bit))
# A simple binary format display function which shows
# leading zeros to a fixed bit width
def binary(num, length=8):
return format(num, '#0{}b'.format(length + 2))
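The fixed-width behavior of the binary() helper, shown on a small value (the function is restated so the snippet stands alone):

```python
def binary(num, length=8):
    # '#' adds the '0b' prefix; width = length + 2 accounts for that prefix
    return format(num, '#0{}b'.format(length + 2))

s = binary(5, 4)  # '0b0101': value 5 rendered in a 4-bit field
```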
PN1 = bitwise_PN(10,7,10,0x1)
PN1.clock_PN()
# sr initial condition
sr = 0b1
Nout = 20
x_out = zeros(Nout)
s_out = zeros(Nout)
PN1 = bitwise_PN(3,2,3,0x1)
for k in range(Nout):
PN1.clock_PN()
x_out[k] = PN1.bit
s_out[k] = PN1.sync_bit
stem(x_out)
stem(0.2*s_out,markerfmt = 'ro')
ylim([0,1.1])
import sk_dsp_comm.digitalcom as dc
x_PN4_bits = ss.pn_gen(10000,6)
# Convert to waveform level shifted to +/-1 amplitude
x_s = 2*signal.lfilter(ones(10),1,ss.upsample(x_PN4_bits,10))-1
# Form a delayed version of x_S
T_D = 35 # 35 sample delay
y_s = signal.lfilter(concatenate((zeros(T_D),array([1]))),1,x_s)
figure(figsize=(6,2))
plot(x_s[:200])
plot(y_s[:200])
ylim([-1.1,1.1])
title(r'Delayed and Undelayed Signals for $T_D = 35$ Samples')
xlabel(r'Samples (10/PN bit)')
ylabel(r'$x_s(t)$ and $y_s(t)$')
grid();
# Find the time averaged autocorrelation function normalized
# to have a peak amplitude of 1
Ryx,tau = dc.xcorr(y_s,x_s,200) #note order change
# We know Ryx is real
Ryx = Ryx.real
tau_s = tau/10
figure(figsize=(6,2))
plot(tau_s,Ryx)
title(r'Cross Correlation for $M=4$ with $T = 1$ and Delay 35 Samples')
xlabel(r'Autocorrelation Lag $\tau$ (s)')
ylabel(r'$R_{yx}(\tau)$')
grid();
fn = arange(0,10,.001)
Gn = 4*sinc(fn)**2 * sin(pi*fn)**2
Gn_cumsum = cumsum(Gn)
Gn_tot = sum(Gn)
plot(fn,Gn_cumsum/Gn_tot)
grid()
xlabel(r'Normalized Frequency $f\tau$')
ylabel('Fractional Power Containment');
fn_idx = np.nonzero(np.ravel(abs(Gn_cumsum/Gn_tot - 0.9)< 0.0005))[0]
fn_idx
print('The normalized 90 percent containment bandwidth is %2.2f Hz-s.' \
% fn[1448])
def freqz_resp(b,a=[1],mode = 'dB',fs=1.0,Npts = 1024,fsize=(6,4)):
"""
A method for displaying digital filter frequency response magnitude,
phase, and group delay. A plot is produced using matplotlib

freqz_resp(b,a=[1],mode = 'dB',fs=1.0,Npts = 1024,fsize=(6,4))

b = ndarray of numerator coefficients
a = ndarray of denominator coefficients
mode = display mode: 'dB' magnitude, 'phase' in radians, or
'groupdelay_s' in samples and 'groupdelay_t' in sec,
all versus frequency in Hz
Npts = number of points to plot; default is 1024
fsize = figure size; default is (6,4) inches

Mark Wickert, January 2015
"""
f = np.arange(0,Npts)/(2.0*Npts)
w,H = signal.freqz(b,a,2*np.pi*f)
plt.figure(figsize=fsize)
if mode.lower() == 'db':
plt.plot(f*fs,20*np.log10(np.abs(H)))
plt.xlabel('Frequency (Hz)')
plt.ylabel('Gain (dB)')
plt.title('Frequency Response - Magnitude')
elif mode.lower() == 'phase':
plt.plot(f*fs,np.angle(H))
plt.xlabel('Frequency (Hz)')
plt.ylabel('Phase (rad)')
plt.title('Frequency Response - Phase')
elif (mode.lower() == 'groupdelay_s') or (mode.lower() == 'groupdelay_t'):
"""
Notes
-----
Since this calculation involves finding the derivative of the
phase response, care must be taken at phase wrapping points
and when the phase jumps by +/-pi, which occurs when the
amplitude response changes sign. Since the amplitude response
is zero when the sign changes, the jumps do not alter the group
delay results.
"""
theta = np.unwrap(np.angle(H))
# Since theta for an FIR filter is likely to have many pi phase
# jumps too, we unwrap a second time 2*theta and divide by 2
theta2 = np.unwrap(2*theta)/2.
theta_dif = np.diff(theta2)
f_diff = np.diff(f)
Tg = -np.diff(theta2)/np.diff(w)
max_Tg = np.max(Tg)
#print(max_Tg)
if mode.lower() == 'groupdelay_t':
max_Tg /= fs
plt.plot(f[:-1]*fs,Tg/fs)
plt.ylim([0,1.2*max_Tg])
else:
plt.plot(f[:-1]*fs,Tg)
plt.ylim([0,1.2*max_Tg])
plt.xlabel('Frequency (Hz)')
if mode.lower() == 'groupdelay_t':
plt.ylabel('Group Delay (s)')
else:
plt.ylabel('Group Delay (samples)')
plt.title('Frequency Response - Group Delay')
else:
s1 = 'Error, mode must be "dB", "phase", '
s2 = '"groupdelay_s", or "groupdelay_t"'
print(s1 + s2)
def freqs_resp(b,a=[1],Dmin=1,Dmax=5,mode = 'dB',Npts = 1024,fsize=(6,4)):
"""
A method for displaying analog filter frequency response magnitude,
phase, and group delay. A plot is produced using matplotlib

freqs_resp(b,a=[1],Dmin=1,Dmax=5,mode='dB',Npts=1024,fsize=(6,4))

b = ndarray of numerator coefficients
a = ndarray of denominator coefficients
Dmin = start frequency as 10**Dmin
Dmax = stop frequency as 10**Dmax
mode = display mode: 'dB' magnitude, 'phase' in radians, or
'groupdelay', all versus log frequency in Hz
Npts = number of points to plot; default is 1024
fsize = figure size; default is (6,4) inches

Mark Wickert, January 2015
"""
f = np.logspace(Dmin,Dmax,Npts)
w,H = signal.freqs(b,a,2*np.pi*f)
plt.figure(figsize=fsize)
if mode.lower() == 'db':
plt.semilogx(f,20*np.log10(np.abs(H)))
plt.xlabel('Frequency (Hz)')
plt.ylabel('Gain (dB)')
plt.title('Frequency Response - Magnitude')
elif mode.lower() == 'phase':
plt.semilogx(f,np.angle(H))
plt.xlabel('Frequency (Hz)')
plt.ylabel('Phase (rad)')
plt.title('Frequency Response - Phase')
elif mode.lower() == 'groupdelay':
"""
Notes
-----
See freqz_resp() for calculation details.
"""
theta = np.unwrap(np.angle(H))
# Since theta for an FIR filter is likely to have many pi phase
# jumps too, we unwrap a second time 2*theta and divide by 2
theta2 = np.unwrap(2*theta)/2.
theta_dif = np.diff(theta2)
f_diff = np.diff(f)
Tg = -np.diff(theta2)/np.diff(w)
max_Tg = np.max(Tg)
#print(max_Tg)
plt.semilogx(f[:-1],Tg)
plt.ylim([0,1.2*max_Tg])
plt.xlabel('Frequency (Hz)')
plt.ylabel('Group Delay (s)')
plt.title('Frequency Response - Group Delay')
else:
print('Error, mode must be "dB", "phase", or "groupdelay"')
import sk_dsp_comm.iir_design_helper as iird
import sk_dsp_comm.fir_design_helper as fird
b1,a1,sos1 = iird.IIR_bpf(200,250,300,350,0.1,60.0,1000,'butter')
b2,a2,sos2 = iird.IIR_bpf(200,250,300,350,0.1,60.0,1000,'cheby1')
figure()
iird.freqz_resp_cas_list([sos1,sos2],'dB',1000)
ylim([-70,0])
grid();
figure()
iird.freqz_resp_cas_list([sos1,sos2],'groupdelay_t',1000)
grid();
figure()
iird.sos_zplane(sos2)
b,a = signal.cheby1(5,.1,2*array([250,300])/1000,btype='bandpass')
freqz_resp(b,a,mode='dB',fs=1000,fsize=(6,2))
grid()
ylim([-80,5]);
xlim([100,400]);
freqz_resp(b,a,mode='groupdelay_s',fs=1000,fsize=(6,2))
grid()
xlim([100,400]);
bc,ac = signal.bessel(7,2*pi*array([10.0,50.0])*1e6,btype='bandpass',analog=True)
freqs_resp(bc,ac,6,9,mode='dB',fsize=(6,2))
grid()
ylim([-80,5]);
freqs_resp(bc,ac,6,9,mode='groupdelay',fsize=(6,2))
grid()
b3,a3 = signal.butter(3,2*pi*1,analog=True)
freqs_resp(b3,a3,-1,2,mode='dB',fsize=(6,2))
grid()
ylim([-80,5]);
freqs_resp(b3,a3,-1,2,mode='groupdelay',fsize=(6,2))
grid()
t = arange(0,2,.0001)
xs = ss.step(t)
tout,ys,x_state = signal.lsim((b3,a3),xs,t)
plot(t,ys)
title(r'Third-Order Butterworth Step Response for $f_3 = 1$ Hz')
ylabel(r'Step Response')
xlabel(r'Time (s)')
grid();
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We Can Be Proud of Ourselves
Step2: Are we done?!
Step3: When a file does not exist, or when the file is not valid JSON, Python raises FileNotFoundError or json.JSONDecodeError; we log the error and re-raise
Step4: Becomes
Step5: First since we handle both FileNotFoundException and JSONDecodeError in the same manner they can "share the except block"
Step6: This
Step7: Becomes
Step8: Reasons for errors
Step9: Becomes
Step10: Lesson 5
Step11: typeguard for runtime
Step12: Contract Testing Libraries ๐
Step13: Unstable environment ๐คช
Step14: There must be a better way
Step15: Useful use cases
Step16: Useful use cases
Step17: Replace exception type with both traces
Step18: Replace exception type with only one trace
Step19: Lesson 10
Step20: Lesson 11
|
<ASSISTANT_TASK:>
Python Code:
! pip install typeguard rollbar returns tenacity > /dev/null 2>&1
import contextlib
import json
import logging
import pathlib
import os
from typing import List, Union
import requests
from typeguard import typechecked
# Naive code snippets
def get_relevant_restaurants(user):
base_url = "https://en.wikipedia.org/wiki"
return requests.get(f"{base_url}/{user}").content
def get_user(path):
with open(path, 'r') as json_file:
return json.load(json_file)["user"]
def pick_best_restaurants(restaurants):
pass
def get_restaurant_recommendation(path):
user = get_user(path)
candidates = get_relevant_restaurants(user)
return pick_best_restaurants(candidates)
get_restaurant_recommendation("MY_AMAZING_FILE.json")
def get_restaurant_recommendation(path):
try:
user = get_user(path)
candidates = get_relevant_restaurants(user)
pick_best_restaurants(candidates)
except BaseException:
logging.error("VERY UNINFORMATIVE INFORMATION")
raise BaseException("VERY UNINFORMATIVE INFORMATION")
def get_restaurant_recommendation(path):
try:
user = get_user(path)
except FileNotFoundError:
logging.error("VERY UNINFORMATIVE INFORMATION")
raise
except json.JSONDecodeError:
logging.error("VERY UNINFORMATIVE INFORMATION")
raise
candidates = get_relevant_restaurants(user)
pick_best_restaurants(candidates)
def get_restaurant_recommendation(path):
try:
user = get_user(path)
except (FileNotFoundError, json.JSONDecodeError):
logging.error("VERY UNINFORMATIVE INFORMATION")
raise
else:
candidates = get_relevant_restaurants(user)
pick_best_restaurants(candidates)
def get_restaurant_recommendation(path):
try:
user = get_user(path)
except (FileNotFoundError, json.JSONDecodeError):
logging.error("VERY UNINFORMATIVE INFORMATION")
raise
else:
candidates = get_relevant_restaurants(user)
pick_best_restaurants(candidates)
def run_unstopable_animation():
pass
try:
os.remove('somefile.pyc')
except FileNotFoundError:
pass
try:
run_unstopable_animation()
except KeyboardInterrupt:
pass
from contextlib import suppress
with suppress(FileNotFoundError):
os.remove('somefile.pyc')
from contextlib import suppress
with suppress(KeyboardInterrupt):
run_unstopable_animation()
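contextlib.suppress is itself just a small context manager; a minimal sketch of how it works (my_suppress is a hypothetical reimplementation, for illustration only):

```python
from contextlib import contextmanager

@contextmanager
def my_suppress(*exceptions):
    # swallow only the listed exception types; everything else propagates
    try:
        yield
    except exceptions:
        pass

with my_suppress(ZeroDivisionError):
    1 / 0  # silently ignored
survived = True
```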
def get_user(path):
with open(path, 'r') as json_file:
return json.load(json_file)\
.get("user", "default_user")
def get_restaurant_recommendation(path):
try:
user = get_user(path)
except (FileNotFoundError, json.JSONDecodeError):
logging.error("VERY UNINFORMATIVE INFORMATION")
raise
else:
candidates = get_relevant_restaurants(user)
pick_best_restaurants(candidates)
def get_user(path):
with open(path, 'r') as json_file:
try:
user = json.load(json_file)\
.get("user", "default_user")
except (FileNotFoundError, json.JSONDecodeError):
logging.error("VERY UNINFORMATIVE INFORMATION")
raise
else:
return user
def get_restaurant_recommendation(path):
user = get_user(path)
candidates = get_relevant_restaurants(user)
pick_best_restaurants(candidates)
def get_user(path: Union[str, pathlib.PurePath]) -> str:
with open(path, 'r') as json_file:
try:
data = json.load(json_file)
except (FileNotFoundError, json.JSONDecodeError):
logging.error("VERY INFORMATIVE INFORMATION")
raise
else:
return data.get("user","default_user")
@typechecked
def get_user(path: Union[str, pathlib.PurePath]) -> str:
with open(path, 'r') as json_file:
try:
data = json.load(json_file)
except (FileNotFoundError, json.JSONDecodeError):
logging.error("VERY INFORMATIVE INFORMATION")
raise
else:
return data.get("user","default_user")
def get_relevant_restaurants(user):
base_url = "cool_restaurants.com"
resp = requests.get(f"{base_url}/{user}")
resp.raise_for_status()
return resp.content
def get_relevant_restaurants(user):
base_url = "cool_restaurants.com"
allowed_retries = 5
for i in range(allowed_retries):
try:
resp = requests.get(f"{base_url}/{user}")
resp.raise_for_status()
except (requests.ConnectionError):
if i == allowed_retries:
raise
else:
return resp.content
from functools import wraps
def retry(exceptions, allowed_retries=5):
def callable(func):
@wraps(func)
def wrapped(*args, **kwargs):
for i in range(allowed_retries):
try:
res = func(*args, **kwargs)
except exceptions:
if i == allowed_retries - 1:
raise
continue
else:
return res
return wrapped
return callable
@retry(exceptions=requests.ConnectionError)
def get_relevant_restaurants(user):
base_url = "cool_restaurants.com"
resp = requests.get(f"{base_url}/{user}")
resp.raise_for_status()
return resp.content
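The retry pattern can be exercised without any network access by feeding it a deliberately flaky function (all names below are illustrative; retry_call is a plain-function version of the decorator idea):

```python
calls = {"n": 0}

def flaky():
    # fails twice, then succeeds
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

def retry_call(func, exceptions, tries=5):
    # call func, retrying on the listed exceptions; re-raise on the last try
    for i in range(tries):
        try:
            return func()
        except exceptions:
            if i == tries - 1:
                raise

result = retry_call(flaky, (ConnectionError,))  # "ok" after 3 calls
```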
import tenacity
@tenacity.retry(retry=tenacity.retry_if_exception_type(ConnectionError))
def get_relevant_restaurants(user):
base_url = "cool_restaurants.com"
resp = requests.get(f"{base_url}/{user}")
resp.raise_for_status()
return resp.content
import sys
import rollbar
rollbar.init("Super Secret Token")
def rollbar_except_hook(exc_type, exc_value, traceback):
rollbar.report_exc_info((exc_type, exc_value, traceback))
sys.__excepthook__(exc_type, exc_value, traceback)
sys.excepthook = rollbar_except_hook
try:
1/0
except ZeroDivisionError:
raise
try:
1/0
except ZeroDivisionError as e:
raise Exception from e
try:
1/0
except ZeroDivisionError as e:
raise Exception from None
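What differs between the three variants above is what gets attached to the new exception; a minimal check of the explicit `from e` case, which stores the original exception on `__cause__`:

```python
try:
    try:
        1 / 0
    except ZeroDivisionError as e:
        raise ValueError("bad input") from e
except ValueError as err:
    cause = err.__cause__  # the original ZeroDivisionError
```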
def pick_best_restaurants(user: str, candidates: List[str]) -> List[str]:
validate_user(user)
best_candidates = heapq.nlargest(5, candidates)
update_df(user, best_candidates)
send_email()
def get_user(path):
with open(path, 'r') as json_file:
try:
user = json.load(json_file)\
.get("user", "default_user")
except (FileNotFoundError, json.JSONDecodeError):
logging.error("VERY UNINFORMATIVE INFORMATION")
raise
else:
try:
send_email(user)
except Exception:
logging.warning("VERY UNINFORMATIVE INFORMATION")
return user
class CommonPasswordException(Exception):
pass
def login(user, password):
raise CommonPasswordException(f"password: {password} is too common")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Network
Step2: Graphviz
Step3: ParameterGraph
Step4: Parameter
Step5: DomainGraph
Step6: MorseDecomposition
Step7: MorseGraph
|
<ASSISTANT_TASK:>
Python Code:
import DSGRN
network = DSGRN.Network("network.txt")
print(network)
print(network.graphviz())
import graphviz
graph = graphviz.Source(network.graphviz())
graph
parametergraph = DSGRN.ParameterGraph(network)
print("There are " + str(parametergraph.size()) + " nodes in the parameter graph.")
parameterindex = 34892 # An integer in [0,32592)
parameter = parametergraph.parameter(parameterindex)
parameter
print(parameter)
domaingraph = DSGRN.DomainGraph(parameter)
domaingraph
graphviz.Source(domaingraph.graphviz())
print(domaingraph.coordinates(5)) # ... I wonder what region in phase space domain 5 corresponds to.
morsedecomposition = DSGRN.MorseDecomposition(domaingraph.digraph())
graphviz.Source(morsedecomposition.graphviz())
morsegraph = DSGRN.MorseGraph()
morsegraph.assign(domaingraph, morsedecomposition)
morsegraph
print(morsegraph)
graphviz.Source(morsegraph.graphviz())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Doubly Linked List
Step2: Stack
Step3: ```
|
<ASSISTANT_TASK:>
Python Code:
# Node of a Singly Linked List
class Node:
# constructor
def __init__(self, data=None, next=None):
self.data = data
self.next = next
# method of getter, setter for data
def setData(self, data):
self.data = data
def getData(self):
return self.data
# method of getter, setter for next
def setNext(self, next):
self.next = next
def getNext(self):
return self.next
# returns true if the node points to another node
def hasNext(self):
return self.next != None
# implement Singly Linked List
class SinglyLinkedList:
def __init__(self, node=None):
self.head = node
def setHead(self, node):
self.head = node
def getHead(self):
return self.head
# returns length of list
## Time Complexity: O(n), for scanning the list of size n.
## Space COmplexity: O(1), for creating a temporary vaiable.
def listLength(self):
current = self.head
count = 0
while current != None:
count += 1
current = current.getNext()
return count
# method to insert
## - at beginning
## - at ending
## - at specific position
def insertAtBeg(self, data):
newNode = Node(data)
length = self.listLength()
if length == 0:
self.setHead(newNode)
else :
newNode.setNext(self.getHead())
self.setHead(newNode)
return self
def insertAtEnd(self, data):
newNode = Node(data)
if self.getHead() == None:
self.setHead(newNode)
return self
current = self.getHead()
while current.getNext() != None:
current = current.getNext()
current.setNext(newNode)
return self
def insertAtPos(self, data, pos):
length = self.listLength()
if pos > length or pos < 0:
return None
else:
if pos == 0:
self.insertAtBeg(data)
else:
newNode = Node(data)
count = 0
current = self.getHead()
while count < pos - 1:
current = current.getNext()
count += 1
newNode.setNext(current.getNext())
current.setNext(newNode)
return self
# method to delete
## - at beginning
## - at ending
## - at specific position
def deleteAtBeg(self):
length = self.listLength()
if length == 0:
return None
else :
self.setHead(self.getHead().getNext())
return self
def deleteAtEnd(self):
length = self.listLength()
if length == 0:
return None
else :
current = self.getHead()
while current.getNext() != None:
prev = current
current = current.getNext()
prev.setNext(None)
return self
def deleteAtPos(self, pos):
length = self.listLength()
if pos >= length or pos < 0:
return None
elif pos == 0:
return self.deleteAtBeg()
else:
count = 0
current = self.getHead()
while count < pos - 1:
count += 1
current = current.getNext()
current.setNext(current.getNext().getNext())
return self
def show(self):
if self.listLength() == 0:
return "Null"
current = self.getHead()
acc = str(current.getData())
while current.getNext() != None:
current = current.getNext()
acc = acc + "::" + str(current.getData())
return acc+"::Null"
ls = SinglyLinkedList(Node(1))
print(ls.insertAtBeg(3).show())
print(ls.insertAtBeg(5).show())
print(ls.insertAtEnd(8).show())
print(ls.insertAtPos(12, 2).show())
print(ls.deleteAtPos(3).show())
print(ls.deleteAtEnd().show())
print(ls.deleteAtBeg().show())
# Node of a Doubly Linked List
class Node:
def __init__(self, data=None, next=None, prev=None):
self.data = data
self.next = next
self.prev = prev
def setNext(self, next):
self.next = next
def getNext(self):
return self.next
def setPrev(self, prev):
self.prev = prev
def getPrev(self):
return self.prev
def setData(self, data):
self.data = data
def getData(self):
return self.data
class DSShow:
def show(self):
return self.list.show()
class Stack(DSShow):
def __init__(self):
self.list = SinglyLinkedList()
self.length = 0
def top(self):
return self.list.getHead()
def push(self, data):
self.list.insertAtBeg(data)
self.length += 1
return self
def pop(self):
if self.length > 0:
self.length -= 1
top = self.top()
self.list.deleteAtBeg()
return top
else:
return None
stk = Stack()
stk.push(3)
stk.push(4)
stk.show()
stk.pop()
stk.show()
class Queue(DSShow):
def __init__(self):
self.list = SinglyLinkedList()
self.length = 0
def enqueue(self, data):
if self.length == 0:
self.list.insertAtBeg(data)
else:
self.list.insertAtEnd(data)
self.length += 1
return self
def dequeue(self):
if self.length == 0:
return None
self.length -= 1
front = self.list.getHead()
self.list.deleteAtBeg()
return front
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Mutiple Plots on One Graph
Step2: Save it to a File
Step3: Adjust the Axes
Step4: Add a Grid
Step5: Change Line Types and Colors
Step6: Labeling Axes and Adding a Legend
Step7: XKCD Style
Step8: Pie Chart
Step9: Bar Chart
Step10: Scatter Plot
Step11: Histogram
Step12: Box & Whisker Plot
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from scipy.stats import norm
import matplotlib.pyplot as plt
import numpy as np
x = np.arange(-3, 3, 0.01)
plt.plot(x, norm.pdf(x))
plt.show()
plt.plot(x, norm.pdf(x))
plt.plot(x, norm.pdf(x, 1.0, 0.5))
plt.show()
plt.plot(x, norm.pdf(x))
plt.plot(x, norm.pdf(x, 1.0, 0.5))
plt.savefig('C:\\Users\\Frank\\MyPlot.png', format='png')
axes = plt.axes()
axes.set_xlim([-5, 5])
axes.set_ylim([0, 1.0])
axes.set_xticks([-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5])
axes.set_yticks([0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
plt.plot(x, norm.pdf(x))
plt.plot(x, norm.pdf(x, 1.0, 0.5))
plt.show()
axes = plt.axes()
axes.set_xlim([-5, 5])
axes.set_ylim([0, 1.0])
axes.set_xticks([-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5])
axes.set_yticks([0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
axes.grid()
plt.plot(x, norm.pdf(x))
plt.plot(x, norm.pdf(x, 1.0, 0.5))
plt.show()
axes = plt.axes()
axes.set_xlim([-5, 5])
axes.set_ylim([0, 1.0])
axes.set_xticks([-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5])
axes.set_yticks([0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
axes.grid()
plt.plot(x, norm.pdf(x), 'b-')
plt.plot(x, norm.pdf(x, 1.0, 0.5), 'r:')
plt.show()
axes = plt.axes()
axes.set_xlim([-5, 5])
axes.set_ylim([0, 1.0])
axes.set_xticks([-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5])
axes.set_yticks([0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
axes.grid()
plt.xlabel('Greebles')
plt.ylabel('Probability')
plt.plot(x, norm.pdf(x), 'b-')
plt.plot(x, norm.pdf(x, 1.0, 0.5), 'r:')
plt.legend(['Sneetches', 'Gacks'], loc=4)
plt.show()
plt.xkcd()
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
plt.xticks([])
plt.yticks([])
ax.set_ylim([-30, 10])
data = np.ones(100)
data[70:] -= np.arange(30)
plt.annotate(
'THE DAY I REALIZED\nI COULD COOK BACON\nWHENEVER I WANTED',
xy=(70, 1), arrowprops=dict(arrowstyle='->'), xytext=(15, -10))
plt.plot(data)
plt.xlabel('time')
plt.ylabel('my overall health')
# Remove XKCD mode:
plt.rcdefaults()
values = [12, 55, 4, 32, 14]
colors = ['r', 'g', 'b', 'c', 'm']
explode = [0, 0, 0.2, 0, 0]
labels = ['India', 'United States', 'Russia', 'China', 'Europe']
plt.pie(values, colors= colors, labels=labels, explode = explode)
plt.title('Student Locations')
plt.show()
values = [12, 55, 4, 32, 14]
colors = ['r', 'g', 'b', 'c', 'm']
plt.bar(range(0,5), values, color= colors)
plt.show()
from pylab import randn
X = randn(500)
Y = randn(500)
plt.scatter(X,Y)
plt.show()
incomes = np.random.normal(27000, 15000, 10000)
plt.hist(incomes, 50)
plt.show()
uniformSkewed = np.random.rand(100) * 100 - 40
high_outliers = np.random.rand(10) * 50 + 100
low_outliers = np.random.rand(10) * -50 - 100
data = np.concatenate((uniformSkewed, high_outliers, low_outliers))
plt.boxplot(data)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's run a test first
Step2: In the previous plot we can see not only the position in the first panel, but also the velocity and the acceleration in the second and third, giving us a better picture of the trajectory.
Step3: We will generate a trajectory that moves from $0^o$ to $90^o$ during the first two seconds, from $90^o$ to $-60^o$ during the next two seconds, and from $-60^o$ to $240^o$ during the last six seconds
Step4: If I want to concatenate all the arrays I generated, I just have to add them together
Step5: We can plot this trajectory
Step6: But more importantly, I can generate an animation, taking into account that it is a simple pendulum
|
<ASSISTANT_TASK:>
Python Code:
from generacion_trayectorias import grafica_trayectoria
%matplotlib inline
ts, qs, q̇s, q̈s = grafica_trayectoria(0, 2, 0, 1, 1000)
from numpy import pi
τ = 2*pi
ts, q1, q̇1, q̈1 = grafica_trayectoria(0, 2, 0, τ/4, 100)
ts, q2, q̇2, q̈2 = grafica_trayectoria(2, 4, τ/4, -τ/6, 100)
ts, q3, q̇3, q̈3 = grafica_trayectoria(4, 10, -τ/6, 2*τ/3, 300)
qs = q1 + q2 + q3
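# The "+" above concatenates because each segment is a plain Python list;
# a minimal standalone sketch of that behavior (toy values, not the trajectory data):

```python
segment_a = [0.0, 0.5]
segment_b = [1.0]
full = segment_a + segment_b  # list "+" concatenates, it does not add element-wise
print(full)  # [0.0, 0.5, 1.0]
```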
from matplotlib.pyplot import figure, style
from numpy import linspace
fig = figure(figsize=(17, 5))
ax = fig.gca()
ts = linspace(0, 10, 500)
ax.plot(ts, qs)
# Import the animation module to build the animation, and rc so the video can
# be displayed directly in the notebook
from matplotlib import animation, rc
rc('animation', html='html5')
# Import the functions needed to compute the forward kinematics
from numpy import sin, cos, arange
# Define a function that computes the forward kinematics of the system
def cinematica_directa_pendulo(q1):
# Define the constants used to draw the system
l1, l2 = 1, 1
xs = [0, l1*cos(q1)]
ys = [0, l1*sin(q1)]
return xs, ys
# Set the figure size
fig = figure(figsize=(8, 8))
# Create a single axes in the figure and set the limits of the x and y axes
axi = fig.add_subplot(111, autoscale_on=False, xlim=(-1.1, 1.1), ylim=(-1.1, 1.1))
# Use a line plot for the pendulum link
linea, = axi.plot([], [], "-o", lw=2, color='gray')
def inicializacion():
'''This function runs only once and initializes the system'''
# Start with an empty line so nothing is drawn on the plot at the beginning
linea.set_data([], [])
return linea
def animacion(i):
'''This function runs for every frame of the animation'''
# Get the x and y coordinates of the link
xs, ys = cinematica_directa_pendulo(qs[i])
# Update the line with the computed coordinates
linea.set_data(xs, ys)
return linea
# Build the animation, giving it the per-frame function, the number of frames,
# the interval between frames, and the initialization function
ani = animation.FuncAnimation(fig, animacion, arange(1, len(qs)), interval=20, init_func=inicializacion)
ani
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Get the MNIST data
Step2: Create the Network
Step3: Set up the Loss Function
Step4: Set up the Training Function
Step5: Set up the Initializer
Step6: Batching of Training
Step7: Test function to check final accuracy
Step8: Finally, the Training...
Step9: Visualising the Weight Matrix
Step10: Plot the weight images.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
%matplotlib inline
import gzip
import pickle
# Seed for reproducibility
np.random.seed(42)
# Download the MNIST digits dataset (only if not present locally)
import os
import urllib.request
mnist_data = './data/MNIST'
mnist_pkl_gz = mnist_data+'/mnist.pkl.gz'
if not os.path.isfile(mnist_pkl_gz):
if not os.path.exists(mnist_data):
os.makedirs(mnist_data)
print("Downloading MNIST data file")
urllib.request.urlretrieve(
'http://deeplearning.net/data/mnist/mnist.pkl.gz',
mnist_pkl_gz)
print("MNIST data file available locally")
# Load training and test splits as numpy arrays
train, val, test = pickle.load(gzip.open(mnist_pkl_gz), encoding='iso-8859-1')
X_train, y_train = train
X_val, y_val = val
X_test, y_test = test
# The original 28x28 pixel images are flattened into 784 dimensional feature vectors
X_train.shape
# Plot the first few examples
plt.figure(figsize=(12,3))
for i in range(10):
plt.subplot(1, 10, i+1)
plt.imshow(X_train[i].reshape((28, 28)), cmap='gray', interpolation='nearest')
plt.axis('off')
# Network Parameters
n_input = 784 # MNIST data input (img shape: 28*28)
n_hidden_1 = 256 # 1st layer number of features
n_hidden_2 = 256 # 2nd layer number of features
n_classes = 10 # MNIST total classes (0-9 digits)
# tf Graph input
x = tf.placeholder("float32", [None, n_input], name='x_input')
#y = tf.placeholder("int32", [None, n_classes], name='y_target') # originally, a one-hot label
y = tf.placeholder("int32", [ None, ], name='y_target') # This is the label index instead
# Create model
def multilayer_perceptron(x, layer):
# Hidden layer with RELU activation
layer_1 = tf.add(tf.matmul(x, layer['h1']['weights']), layer['h1']['bias'])
layer_1 = tf.nn.relu(layer_1)
# Hidden layer with RELU activation
layer_2 = tf.add(tf.matmul(layer_1, layer['h2']['weights']), layer['h2']['bias'])
layer_2 = tf.nn.relu(layer_2)
# Output layer with linear activation
out_layer = tf.matmul(layer_2, layer['out']['weights']) + layer['out']['bias']
return out_layer
# Store layers weight & bias
layers=dict(
h1 = {
'weights':tf.Variable(tf.random_normal([n_input, n_hidden_1])),
'bias' :tf.Variable(tf.random_normal([n_hidden_1])),
},
h2 = {
'weights':tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
'bias' :tf.Variable(tf.random_normal([n_hidden_2])),
},
out = {
'weights':tf.Variable(tf.random_normal([n_hidden_2, n_classes])),
'bias' :tf.Variable(tf.random_normal([n_classes])),
},
)
# Construct model
logits = multilayer_perceptron(x, layers)
pred = tf.argmax(logits, axis=1)  # tf.arg_max is deprecated in favor of tf.argmax
# Define optimizer for the labels (expressed as a onehot encoding)
labels = tf.one_hot(indices=y, depth=10)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels))
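# tf.one_hot maps each integer label to a length-10 indicator vector; the same
# mapping in plain NumPy (illustrative sketch only, not the graph code above):

```python
import numpy as np

labels_idx = np.array([3, 0, 9])
one_hot = np.eye(10, dtype=np.float32)[labels_idx]  # pick rows of the identity matrix
print(one_hot[0])  # 1.0 at index 3, zeros elsewhere
```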
# Parameters for the training phase
learning_rate = 0.001
TRAINING_EPOCHS = 10
BATCH_SIZE = 100
display_step = 1
# Define optimizer
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
#optimizer = tf.train.RMSPropOptimizer(learning_rate=learning_rate).minimize(cost)
# Define an 'op' that initializes the variables
init = tf.global_variables_initializer()
# We'll choose a batch size, and calculate the number of batches in an "epoch"
# (approximately one pass through the data).
N_BATCHES = len(X_train) // BATCH_SIZE
#N_VAL_BATCHES = len(X_val) // BATCH_SIZE
# For training, we want to sample examples at random in small batches
def batch_gen(X_, y_, N):
while True:
idx = np.random.choice(len(y_), N)
yield X_[idx], y_[idx]
# Minibatch generator(s) for the training and validation sets
train_batches = batch_gen(X_train, y_train, BATCH_SIZE)
#val_batches = batch_gen(X_val, y_val, BATCH_SIZE)
# Try sampling from the batch generator.
# Plot an image and corresponding label from the training batcher to verify they match.
X_batch, y_batch = next(train_batches)
plt.imshow(X_batch[0].reshape((28, 28)), cmap='gray', interpolation='nearest')
print(y_batch[0])
# Plot an image and corresponding label from the validation set to verify they match.
X_batch, y_batch = X_val, y_val
plt.imshow(X_batch[0].reshape((28, 28)), cmap='gray', interpolation='nearest')
print(y_batch[0])
X_batch.shape, y_batch.shape,
#correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1)) # with one-hots
correct_prediction = tf.equal(pred, tf.cast(y, tf.int64)) # with indices
# Calculate accuracy
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float32"))
# Launch the graph
with tf.Session() as sess:
sess.run(init) # Running this 'op' initialises the network weights
# Training cycle
for epoch in range(TRAINING_EPOCHS):
avg_cost = 0.
# Loop over all batches
for _ in range(N_BATCHES):
batch_x, batch_y = next(train_batches)
#print(batch_x.shape, batch_y.shape)
# Run optimization op (backprop) and cost op (to get loss value)
_, c = sess.run([optimizer, cost], feed_dict={x:batch_x, y:batch_y})
# Compute average loss
avg_cost += c / N_BATCHES
# Display logs per epoch step
if epoch % display_step == 0:
print("Epoch:", '%04d' % (epoch+1), "cost=","{:.2f}".format(avg_cost))
print("Optimization Finished!")
# Test model
accuracy_, y_, pred_ = sess.run([accuracy, y, pred ], feed_dict={x:X_val[0:10], y:y_val[0:10] })
print("Validation Accuracy: %.2f%% for first 10 examples" % ( 100. * accuracy_, ))
#print(y_)
#print(pred_)
print("Validation Accuracy: %.2f%%" % ( 100. * accuracy.eval({ x: X_val, y: y_val, }),))
print("DONE")
# This proves that when the sessions are done, the network is thrown away...
#with tf.Session() as sess:
# accuracy_, y_, pred_ = sess.run([accuracy, y, pred ], feed_dict={x:X_test[0:100], y:y_test[0:100] })
# print("Test Accuracy: %.2f%% for first 100 examples" % ( 100. * accuracy_, ))
#print("DONE")
#weights = l_out.W.get_value()
#print(weights.shape)
#with tf.Session() as sess:
# accuracy_, y_, pred_ = sess.run([accuracy, y, pred ], feed_dict={x:X_val[0:10], y:y_val[0:10] })
#plt.figure(figsize=(12,3))
#for i in range(10):
# plt.subplot(1, 10, i+1)
# plt.imshow(weights[:,i].reshape((28, 28)), cmap='gray', interpolation='nearest')
# plt.axis('off')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1 - Problem Statement
Step2: The characters are a-z (26 characters) plus the "\n" (or newline character), which in this assignment plays a role similar to the <EOS> (or "End of sentence") token we had discussed in lecture, only here it indicates the end of the dinosaur name rather than the end of a sentence. In the cell below, we create a python dictionary (i.e., a hash table) to map each character to an index from 0-26. We also create a second python dictionary that maps each index back to the corresponding character character. This will help you figure out what index corresponds to what character in the probability distribution output of the softmax layer. Below, char_to_ix and ix_to_char are the python dictionaries.
Step3: 1.2 - Overview of the model
Step5: Expected output
Step7: Expected output
Step12: Expected output
Step13: Run the following cell, you should observe your model outputting random-looking characters at the first iteration. After a few thousand iterations, your model should learn to generate reasonable-looking names.
Step14: Conclusion
Step15: To save you some time, we have already trained a model for ~1000 epochs on a collection of Shakespearian poems called "The Sonnets".
Step16: The RNN-Shakespeare model is very similar to the one you have built for dinosaur names. The only major differences are
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from utils import *
import random
data = open('dinos.txt', 'r').read()
data= data.lower()
chars = list(set(data))
data_size, vocab_size = len(data), len(chars)
print('There are %d total characters and %d unique characters in your data.' % (data_size, vocab_size))
char_to_ix = { ch:i for i,ch in enumerate(sorted(chars)) }
ix_to_char = { i:ch for i,ch in enumerate(sorted(chars)) }
print(ix_to_char)
#print(char_to_ix)
### GRADED FUNCTION: clip
def clip(gradients, maxValue):
'''
Clips the gradients' values between minimum and maximum.
Arguments:
gradients -- a dictionary containing the gradients "dWaa", "dWax", "dWya", "db", "dby"
maxValue -- everything above this number is set to this number, and everything less than -maxValue is set to -maxValue
Returns:
gradients -- a dictionary with the clipped gradients.
'''
dWaa, dWax, dWya, db, dby = gradients['dWaa'], gradients['dWax'], gradients['dWya'], gradients['db'], gradients['dby']
### START CODE HERE ###
# clip to mitigate exploding gradients, loop over [dWax, dWaa, dWya, db, dby]. (โ2 lines)
for gradient in [dWax, dWaa, dWya, db, dby]:
np.clip(gradient, -maxValue, maxValue, gradient)
### END CODE HERE ###
gradients = {"dWaa": dWaa, "dWax": dWax, "dWya": dWya, "db": db, "dby": dby}
return gradients
np.random.seed(3)
dWax = np.random.randn(5,3)*10
dWaa = np.random.randn(5,5)*10
dWya = np.random.randn(2,5)*10
db = np.random.randn(5,1)*10
dby = np.random.randn(2,1)*10
gradients = {"dWax": dWax, "dWaa": dWaa, "dWya": dWya, "db": db, "dby": dby}
gradients = clip(gradients, 10)
print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2])
print("gradients[\"dWax\"][3][1] =", gradients["dWax"][3][1])
print("gradients[\"dWya\"][1][2] =", gradients["dWya"][1][2])
print("gradients[\"db\"][4] =", gradients["db"][4])
print("gradients[\"dby\"][1] =", gradients["dby"][1])
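# The clipping itself is np.clip with the out argument, which modifies the
# array in place; a standalone sketch:

```python
import numpy as np

g = np.array([-12.0, 3.0, 8.0])
np.clip(g, -5, 5, out=g)  # in-place: values are clamped to [-5, 5], no copy made
print(g)  # [-5.  3.  5.]
```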
# GRADED FUNCTION: sample
def sample(parameters, char_to_ix, seed):
'''
Sample a sequence of characters according to a sequence of probability distributions output of the RNN
Arguments:
parameters -- python dictionary containing the parameters Waa, Wax, Wya, by, and b.
char_to_ix -- python dictionary mapping each character to an index.
seed -- used for grading purposes. Do not worry about it.
Returns:
indices -- a list of length n containing the indices of the sampled characters.
'''
# Retrieve parameters and relevant shapes from "parameters" dictionary
Waa, Wax, Wya, by, b = parameters['Waa'], parameters['Wax'], parameters['Wya'], parameters['by'], parameters['b']
vocab_size = by.shape[0]
n_a = Waa.shape[1]
### START CODE HERE ###
# Step 1: Create the one-hot vector x for the first character (initializing the sequence generation). (โ1 line)
x = np.zeros((vocab_size, 1))
# Step 1': Initialize a_prev as zeros (โ1 line)
a_prev = np.zeros((n_a, 1))
# Create an empty list of indices, this is the list which will contain the list of indices of the characters to generate (โ1 line)
indices = []
# Idx is a flag to detect a newline character, we initialize it to -1
idx = -1
# Loop over time-steps t. At each time-step, sample a character from a probability distribution and append
# its index to "indices". We'll stop if we reach 50 characters (which should be very unlikely with a well
# trained model), which helps debugging and prevents entering an infinite loop.
counter = 0
newline_character = char_to_ix['\n']
while (idx != newline_character and counter != 50):
# Step 2: Forward propagate x using the equations (1), (2) and (3)
a = np.tanh(np.matmul(Wax, x)+np.matmul(Waa, a_prev) + b) # b is vector of size n_a
z = np.matmul(Wya, a) + by #by is vector of size vocab_size
y = softmax(z)
#print("size of a=",a.shape, "shape of b=", b.shape, "shape of by=", by.shape)
# for grading purposes
np.random.seed(counter+seed)
# Step 3: Sample the index of a character within the vocabulary from the probability distribution y
#print("size of y=", y.shape, "shape of z=", z.shape, "shape of b=", b.shape)
idx = np.random.choice(list(range(vocab_size)), p = np.ravel(y))
# Append the index to "indices"
indices.append(idx)
# Step 4: Overwrite the input character as the one corresponding to the sampled index.
x = np.zeros((vocab_size, 1))
x[idx] = 1
# Update "a_prev" to be "a"
a_prev = a
# for grading purposes
seed += 1
counter +=1
### END CODE HERE ###
if (counter == 50):
indices.append(char_to_ix['\n'])
return indices
np.random.seed(2)
_, n_a = 20, 100
Wax, Waa, Wya = np.random.randn(n_a, vocab_size), np.random.randn(n_a, n_a), np.random.randn(vocab_size, n_a)
b, by = np.random.randn(n_a, 1), np.random.randn(vocab_size, 1)
parameters = {"Wax": Wax, "Waa": Waa, "Wya": Wya, "b": b, "by": by}
indices = sample(parameters, char_to_ix, 0)
print("Sampling:")
print("list of sampled indices:", indices)
print("list of sampled characters:", [ix_to_char[i] for i in indices])
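# The key step inside sample() is drawing an index from a probability
# distribution with np.random.choice; a self-contained sketch with a toy
# distribution (not the trained model's output):

```python
import numpy as np

np.random.seed(0)
probs = np.array([0.1, 0.2, 0.7])            # toy softmax output, sums to 1
draws = [np.random.choice(3, p=probs) for _ in range(1000)]
frac_2 = draws.count(2) / len(draws)         # empirical frequency of index 2
print(frac_2)  # close to 0.7
```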
# GRADED FUNCTION: optimize
def optimize(X, Y, a_prev, parameters, learning_rate = 0.01):
'''
Execute one step of the optimization to train the model.
Arguments:
X -- list of integers, where each integer is a number that maps to a character in the vocabulary.
Y -- list of integers, exactly the same as X but shifted one index to the left.
a_prev -- previous hidden state.
parameters -- python dictionary containing:
Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)
Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)
Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
b -- Bias, numpy array of shape (n_a, 1)
by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
learning_rate -- learning rate for the model.
Returns:
loss -- value of the loss function (cross-entropy)
gradients -- python dictionary containing:
dWax -- Gradients of input-to-hidden weights, of shape (n_a, n_x)
dWaa -- Gradients of hidden-to-hidden weights, of shape (n_a, n_a)
dWya -- Gradients of hidden-to-output weights, of shape (n_y, n_a)
db -- Gradients of bias vector, of shape (n_a, 1)
dby -- Gradients of output bias vector, of shape (n_y, 1)
a[len(X)-1] -- the last hidden state, of shape (n_a, 1)
'''
### START CODE HERE ###
# Forward propagate through time (โ1 line)
loss, cache = rnn_forward(X, Y, a_prev, parameters)
# Backpropagate through time (โ1 line)
gradients, a = rnn_backward(X, Y, parameters, cache)
# Clip your gradients between -5 (min) and 5 (max) (โ1 line)
gradients = clip(gradients, 5)
# Update parameters (โ1 line)
parameters = update_parameters(parameters, gradients, learning_rate)
### END CODE HERE ###
return loss, gradients, a[len(X)-1]
np.random.seed(1)
vocab_size, n_a = 27, 100
a_prev = np.random.randn(n_a, 1)
Wax, Waa, Wya = np.random.randn(n_a, vocab_size), np.random.randn(n_a, n_a), np.random.randn(vocab_size, n_a)
b, by = np.random.randn(n_a, 1), np.random.randn(vocab_size, 1)
parameters = {"Wax": Wax, "Waa": Waa, "Wya": Wya, "b": b, "by": by}
X = [12,3,5,11,22,3]
Y = [4,14,11,22,25, 26]
loss, gradients, a_last = optimize(X, Y, a_prev, parameters, learning_rate = 0.01)
print("Loss =", loss)
print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2])
print("np.argmax(gradients[\"dWax\"]) =", np.argmax(gradients["dWax"]))
print("gradients[\"dWya\"][1][2] =", gradients["dWya"][1][2])
print("gradients[\"db\"][4] =", gradients["db"][4])
print("gradients[\"dby\"][1] =", gradients["dby"][1])
print("a_last[4] =", a_last[4])
# GRADED FUNCTION: model
def model(data, ix_to_char, char_to_ix, num_iterations = 35000, n_a = 50, dino_names = 7, vocab_size = 27):
'''
Trains the model and generates dinosaur names.
Arguments:
data -- text corpus
ix_to_char -- dictionary that maps the index to a character
char_to_ix -- dictionary that maps a character to an index
num_iterations -- number of iterations to train the model for
n_a -- number of units of the RNN cell
dino_names -- number of dinosaur names you want to sample at each iteration.
vocab_size -- number of unique characters found in the text, size of the vocabulary
Returns:
parameters -- learned parameters
'''
# Retrieve n_x and n_y from vocab_size
n_x, n_y = vocab_size, vocab_size
# Initialize parameters
parameters = initialize_parameters(n_a, n_x, n_y)
# Initialize loss (this is required because we want to smooth our loss, don't worry about it)
loss = get_initial_loss(vocab_size, dino_names)
# Build list of all dinosaur names (training examples).
with open("dinos.txt") as f:
examples = f.readlines()
examples = [x.lower().strip() for x in examples]
# Shuffle list of all dinosaur names
np.random.seed(0)
np.random.shuffle(examples)
# Initialize the hidden state of your LSTM
a_prev = np.zeros((n_a, 1))
# Optimization loop
for j in range(num_iterations):
### START CODE HERE ###
# Use the hint above to define one training example (X,Y) (โ 2 lines)
index = j % len(examples)
X = [None] + [char_to_ix[ch] for ch in examples[index]]
Y = X[1:] + [char_to_ix["\n"]]
# Perform one optimization step: Forward-prop -> Backward-prop -> Clip -> Update parameters
# Choose a learning rate of 0.01
curr_loss, gradients, a_prev = optimize(X, Y, a_prev, parameters, learning_rate = 0.01)
### END CODE HERE ###
# Use a latency trick to keep the loss smooth. It happens here to accelerate the training.
loss = smooth(loss, curr_loss)
# Every 2000 Iteration, generate "n" characters thanks to sample() to check if the model is learning properly
if j % 2000 == 0:
print('Iteration: %d, Loss: %f' % (j, loss) + '\n')
# The number of dinosaur names to print
seed = 0
for name in range(dino_names):
# Sample indices and print them
sampled_indices = sample(parameters, char_to_ix, seed)
print_sample(sampled_indices, ix_to_char)
seed += 1 # To get the same result for grading purposed, increment the seed by one.
print('\n')
return parameters
parameters = model(data, ix_to_char, char_to_ix)
from __future__ import print_function
from keras.callbacks import LambdaCallback
from keras.models import Model, load_model, Sequential
from keras.layers import Dense, Activation, Dropout, Input, Masking
from keras.layers import LSTM
from keras.utils.data_utils import get_file
from keras.preprocessing.sequence import pad_sequences
from shakespeare_utils import *
import sys
import io
print_callback = LambdaCallback(on_epoch_end=on_epoch_end)
model.fit(x, y, batch_size=128, epochs=1, callbacks=[print_callback])
# Run this cell to try with different inputs without having to re-train the model
generate_output()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Loading an mp3 file
Step2: Trim it and make it a 2d.
Step3: Let's make it a batch of 4 items
Step4: A Keras model
Step5: The model has no trainable parameters because STFT layer uses tf.signal.stft() function which is just an implementation of FFT-based short-time Fourier transform.
|
<ASSISTANT_TASK:>
Python Code:
import librosa
import kapre
import tensorflow as tf
from tensorflow.keras.models import Sequential
import numpy as np
from datetime import datetime
now = datetime.now()
print('%s/%s/%s' % (now.year, now.month, now.day))
print('Tensorflow: {}'.format(tf.__version__))
print('Librosa: {}'.format(librosa.__version__))
print('Image data format: {}'.format(tf.keras.backend.image_data_format()))
print('Kapre: {}'.format(kapre.__version__))
src, sr = librosa.load('../srcs/bensound-cute.mp3', sr=None, mono=True)
print('Audio length: %d samples, %04.2f seconds. \n' % (len(src), len(src) / sr) +
'Audio sample rate: %d Hz' % sr)
len_second = 1.0 # Let's trim it to make it quick
src = src[:int(sr*len_second)]
src = np.expand_dims(src, axis=1)
input_shape = src.shape
print('The shape of an item', input_shape)
x = np.array([src] * 4)
print('The shape of a batch: ',x.shape)
from kapre.time_frequency import STFT, Magnitude, MagnitudeToDecibel
model = Sequential()
# A STFT layer
model.add(STFT(n_fft=2048, win_length=2018, hop_length=1024,
window_fn=None, pad_end=False,
input_data_format='channels_last', output_data_format='channels_last',
input_shape=input_shape,
name='stft-layer'))
model.summary()
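# With pad_end=False, tf.signal.stft yields 1 + (n_samples - frame_length) // hop
# frames; a quick arithmetic check of that formula (a sketch that assumes
# frame_length equals n_fft and a 44.1 kHz one-second clip like the one above):

```python
def stft_num_frames(n_samples, frame_length, hop):
    # number of complete frames when no end-padding is applied
    return 1 + (n_samples - frame_length) // hop

print(stft_num_frames(44100, 2048, 1024))  # 42 frames for a 1-second 44.1 kHz clip
```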
from tensorflow.keras.layers import Conv2D, BatchNormalization, ReLU, GlobalAveragePooling2D, Dense, Softmax
model.add(Magnitude())
model.add(MagnitudeToDecibel())
model.add(Conv2D(32, (3, 3), strides=(2, 2)))
model.add(BatchNormalization())
model.add(ReLU())
model.add(GlobalAveragePooling2D())
model.add(Dense(10))
model.add(Softmax())
# Compile the model
model.compile('adam', 'categorical_crossentropy') # if single-label classification
model.summary()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First, let's create a phase diagram to show the logistic map's fixed-point attractor at 0.655 when the growth rate parameter is set to 2.9
Step2: Next, let's create phase diagrams to show the logistic map's limit cycle attractor when the growth rate parameter is set to 3.50 - 3.57
Step3: Now, let's create a phase diagram to show the logistic map's strange attractor when the growth rate parameter is set to 3.9
Step4: Now, let's create a phase diagram to show the logistic map's strange attractors across the chaotic regime (from r=3.6 to r=4.0)
Step5: Next, let's demonstrate how to use phase diagrams to differentiate chaos from random noise in time series data
Step6: Now, let's create 3-D phase diagrams to show the same chaos vs random noise, in three dimensions
Step7: Create a 3-D phase diagram to show the logistic map's strange attractors across the chaotic regime (from r=3.6 to r=4.0), twisting and curling around their state space in three dimensions
Step8: In another notebook, I create animated gifs by panning and rotating around my 3-D phase diagrams to reveal more of their structure
|
<ASSISTANT_TASK:>
Python Code:
import IPython.display as IPdisplay
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pynamical
from pynamical import simulate, save_fig, phase_diagram, phase_diagram_3d
%matplotlib inline
title_font = pynamical.get_title_font()
label_font = pynamical.get_label_font()
# draw a phase diagram for 100 generations for the growth rate parameter 2.9
# it shows points converging on 0.655 because the logistic map has a fixed-point attractor at 0.655 when r=2.9
pops = simulate(num_gens=100, rate_min=2.9, num_rates=1, num_discard=100)
phase_diagram(pops, title='Logistic Map Attractor, r=2.9', size=20)
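# The fixed point can be verified without pynamical: iterating x -> r*x*(1 - x)
# for r = 2.9 converges to 1 - 1/r ~= 0.655 (plain-Python sketch):

```python
r, x = 2.9, 0.5
for _ in range(1000):
    x = r * x * (1 - x)  # logistic map update
print(round(x, 3))  # 0.655, the fixed point 1 - 1/r
```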
# draw a phase diagram for 100 generations for the growth rate parameter 3.5
# it shows 4 points because the logistic map has a period of 4 when r=3.5
pops = simulate(num_gens=100, rate_min=3.5, num_rates=1, num_discard=100)
phase_diagram(pops, title='Logistic Map Attractor, r=3.5', size=20)
# draw a phase diagram for 100 generations for the growth rate parameter 3.56
# it shows 8 points because the logistic map has a period of 8 when r=3.56
pops = simulate(num_gens=100, rate_min=3.56, num_rates=1, num_discard=100)
phase_diagram(pops, title='Logistic Map Attractor, r=3.56', size=20)
# draw a phase diagram for 100 generations for the growth rate parameter 3.57
# it shows n points because the logistic map has a period of n when r=3.57
pops = simulate(num_gens=100, rate_min=3.57, num_rates=1, num_discard=100)
phase_diagram(pops, title='Logistic Map Attractor, r=3.57', size=20)
# draw a phase diagram for 2,000 generations for the growth rate parameter 3.9
# the plot reveals the strange attractor - the logistic map is chaotic when r=3.9
pops = simulate(num_gens=2000, rate_min=3.9, num_rates=1)
phase_diagram(pops, xmin=0.25, xmax=0.75, ymin=0.8, ymax=1.01, size=20, title='Logistic Map Attractor, r=3.9')
# draw a phase diagram for 2,000 generations across 50 growth rate steps from 3.6 to 4.0
# each chaotic growth rate has its own parabola
pops = simulate(num_gens=2000, rate_min=3.6, rate_max=4.0, num_rates=50)
phase_diagram(pops, xmin=0.25, xmax=0.75, ymin=0.8, ymax=1.01, size=7,
title='Logistic Map Attractor, r=3.6 to r=4.0', color='viridis')
# sometimes it is hard to tell if a time series is chaotic or random
# generate two time series of 1,000 steps, one chaotic and one random
# generate 30,000 time steps for the chaotic series but only keep the final 1,000 (when system is fully evolved)
total_gens = 30000
gens = 1000
np.random.seed(1)
chaos_pops = simulate(num_gens=total_gens, rate_min=3.99, num_rates=1)
chaos_pops = chaos_pops.iloc[total_gens-gens:].reset_index().drop(labels='index', axis=1)
random_pops = pd.DataFrame(np.random.random(gens), columns=['value'])
time_series = pd.concat([chaos_pops, random_pops], axis=1)
time_series.columns = ['chaos', 'random']
time_series.head()
# plot the chaotic and random time series to show how they are sometimes tough to differentiate
ax = time_series.plot(kind='line', figsize=[10, 6], linewidth=3, alpha=0.6, style=['#003399','#cc0000'])
ax.grid(True)
ax.set_xlim(40, 90)
ax.set_ylim(0, 1)
ax.set_title('Time Series, Deterministic Chaos vs Random Data', fontproperties=title_font)
ax.set_xlabel('Generation', fontproperties=label_font)
ax.set_ylabel('Population', fontproperties=label_font)
ax.legend(loc=3)
save_fig('chaos-vs-random-line')
plt.show()
# plot same data as 2D phase diagram instead
pops = pd.concat([chaos_pops, random_pops], axis=1)
pops.columns = ['chaos', 'random']
phase_diagram(pops, size=20, color=['#003399','#cc0000'], ymax=1.005, legend=True,
filename='logistic-attractor-chaos-random')
# plot same data as 3D phase diagram instead
phase_diagram_3d(pops, color=['#003399','#cc0000'], filename='logistic-attractor-chaos-random-3d',
legend=True, legend_bbox_to_anchor=(0.94, 0.9))
# run logistic model for 4,000 generations across 50 growth rate steps from 3.6 to 4.0
pops = simulate(num_gens=4000, rate_min=3.6, rate_max=4.0, num_rates=50)
# phase diagram: each chaotic growth rate has its own strange attractor curling through state space
phase_diagram_3d(pops, title='Logistic Map Attractor, r=3.6 to r=4.0', alpha=0.1, color='viridis', color_reverse=False,
azim=230, filename='3d-logistic-map-attractor-1')
# phase diagram: each chaotic growth rate has its own strange attractor curling through state space
phase_diagram_3d(pops, title='Logistic Map Attractor, r=3.6 to r=4.0', alpha=0.1, color='viridis', color_reverse=False,
elev=7, azim=320, filename='3d-logistic-map-attractor-2')
# here's an example of the animated phase diagrams that I create in pynamical-demo-3d-animation.ipynb
IPdisplay.Image(url='images/phase-animate/05-logistic-3d-phase-diagram-chaotic-regime.gif')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We will use this helper function to write lists containing article ids, categories, and authors for each article in our database to a local file.
Step3: Pull data from BigQuery
Step5: There should be 15,634 articles in the database.
Step7: The categories are 'News', 'Stars & Kultur', and 'Lifestyle'.
Step10: There should be 385 authors in the database.
Step11: Let's have a look at the two csv files we just created containing the training and test set. We'll also do a line count of both files to confirm that we have achieved an approximate 90/10 train/test split.
|
<ASSISTANT_TASK:>
Python Code:
import os
import tensorflow as tf
import numpy as np
from google.cloud import bigquery
PROJECT = 'cloud-training-demos' # REPLACE WITH YOUR PROJECT ID
BUCKET = 'cloud-training-demos-ml' # REPLACE WITH YOUR BUCKET NAME
REGION = 'us-central1' # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# do not change these
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION'] = REGION
os.environ['TFVERSION'] = '1.8'
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
def write_list_to_disk(my_list, filename):
with open(filename, 'w') as f:
for item in my_list:
line = "%s\n" % item
f.write(line)  # file is opened in text mode, so write str rather than bytes
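# A minimal round-trip check of the write-then-read pattern, using a temp file
# and a local copy of the writer so the sketch stays self-contained:

```python
import os
import tempfile

def write_lines(my_list, filename):
    # one item per line, mirroring write_list_to_disk above
    with open(filename, 'w') as f:
        for item in my_list:
            f.write("%s\n" % item)

items = ['News', 'Stars & Kultur', 'Lifestyle']
path = os.path.join(tempfile.mkdtemp(), 'demo.txt')  # hypothetical scratch path
write_lines(items, path)
with open(path) as f:
    roundtrip = [line.strip() for line in f]
print(roundtrip)  # ['News', 'Stars & Kultur', 'Lifestyle']
```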
sql = """
#standardSQL
SELECT
(SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(hits.customDimensions)) AS content_id
FROM `cloud-training-demos.GA360_test.ga_sessions_sample`,
UNNEST(hits) AS hits
WHERE
# only include hits on pages
hits.type = "PAGE"
AND (SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(hits.customDimensions)) IS NOT NULL
GROUP BY
content_id
"""
content_ids_list = bigquery.Client().query(sql).to_dataframe()['content_id'].tolist()
write_list_to_disk(content_ids_list, "content_ids.txt")
print("Some sample content IDs {}".format(content_ids_list[:3]))
print("The total number of articles is {}".format(len(content_ids_list)))
sql = """
#standardSQL
SELECT
(SELECT MAX(IF(index=7, value, NULL)) FROM UNNEST(hits.customDimensions)) AS category
FROM `cloud-training-demos.GA360_test.ga_sessions_sample`,
UNNEST(hits) AS hits
WHERE
# only include hits on pages
hits.type = "PAGE"
AND (SELECT MAX(IF(index=7, value, NULL)) FROM UNNEST(hits.customDimensions)) IS NOT NULL
GROUP BY
category
"""
categories_list = bigquery.Client().query(sql).to_dataframe()['category'].tolist()
write_list_to_disk(categories_list, "categories.txt")
print(categories_list)
sql = """
#standardSQL
SELECT
  REGEXP_EXTRACT((SELECT MAX(IF(index=2, value, NULL)) FROM UNNEST(hits.customDimensions)), r"^[^,]+") AS first_author
FROM `cloud-training-demos.GA360_test.ga_sessions_sample`,
  UNNEST(hits) AS hits
WHERE
  # only include hits on pages
  hits.type = "PAGE"
  AND (SELECT MAX(IF(index=2, value, NULL)) FROM UNNEST(hits.customDimensions)) IS NOT NULL
GROUP BY
  first_author
"""
authors_list = bigquery.Client().query(sql).to_dataframe()['first_author'].tolist()
write_list_to_disk(authors_list, "authors.txt")
print("Some sample authors {}".format(authors_list[:10]))
print("The total number of authors is {}".format(len(authors_list)))
sql = """
WITH site_history as (
  SELECT
    fullVisitorId as visitor_id,
    (SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(hits.customDimensions)) AS content_id,
    (SELECT MAX(IF(index=7, value, NULL)) FROM UNNEST(hits.customDimensions)) AS category,
    (SELECT MAX(IF(index=6, value, NULL)) FROM UNNEST(hits.customDimensions)) AS title,
    (SELECT MAX(IF(index=2, value, NULL)) FROM UNNEST(hits.customDimensions)) AS author_list,
    SPLIT(RPAD((SELECT MAX(IF(index=4, value, NULL)) FROM UNNEST(hits.customDimensions)), 7), '.') as year_month_array,
    LEAD(hits.customDimensions, 1) OVER (PARTITION BY fullVisitorId ORDER BY hits.time ASC) as nextCustomDimensions
  FROM
    `cloud-training-demos.GA360_test.ga_sessions_sample`,
    UNNEST(hits) AS hits
  WHERE
    # only include hits on pages
    hits.type = "PAGE"
    AND fullVisitorId IS NOT NULL
    AND hits.time != 0
    AND hits.time IS NOT NULL
    AND (SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(hits.customDimensions)) IS NOT NULL
)
SELECT
  visitor_id,
  content_id,
  category,
  REGEXP_REPLACE(title, r",", "") as title,
  REGEXP_EXTRACT(author_list, r"^[^,]+") as author,
  DATE_DIFF(DATE(CAST(year_month_array[OFFSET(0)] AS INT64), CAST(year_month_array[OFFSET(1)] AS INT64), 1), DATE(1970,1,1), MONTH) as months_since_epoch,
  (SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(nextCustomDimensions)) as next_content_id
FROM
  site_history
WHERE (SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(nextCustomDimensions)) IS NOT NULL
  AND ABS(MOD(FARM_FINGERPRINT(CONCAT(visitor_id, content_id)), 10)) < 9
"""
training_set_df = bigquery.Client().query(sql).to_dataframe()
training_set_df.to_csv('training_set.csv', header=False, index=False, encoding='utf-8')
training_set_df.head()
sql = """
WITH site_history as (
  SELECT
    fullVisitorId as visitor_id,
    (SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(hits.customDimensions)) AS content_id,
    (SELECT MAX(IF(index=7, value, NULL)) FROM UNNEST(hits.customDimensions)) AS category,
    (SELECT MAX(IF(index=6, value, NULL)) FROM UNNEST(hits.customDimensions)) AS title,
    (SELECT MAX(IF(index=2, value, NULL)) FROM UNNEST(hits.customDimensions)) AS author_list,
    SPLIT(RPAD((SELECT MAX(IF(index=4, value, NULL)) FROM UNNEST(hits.customDimensions)), 7), '.') as year_month_array,
    LEAD(hits.customDimensions, 1) OVER (PARTITION BY fullVisitorId ORDER BY hits.time ASC) as nextCustomDimensions
  FROM
    `cloud-training-demos.GA360_test.ga_sessions_sample`,
    UNNEST(hits) AS hits
  WHERE
    # only include hits on pages
    hits.type = "PAGE"
    AND fullVisitorId IS NOT NULL
    AND hits.time != 0
    AND hits.time IS NOT NULL
    AND (SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(hits.customDimensions)) IS NOT NULL
)
SELECT
  visitor_id,
  content_id,
  category,
  REGEXP_REPLACE(title, r",", "") as title,
  REGEXP_EXTRACT(author_list, r"^[^,]+") as author,
  DATE_DIFF(DATE(CAST(year_month_array[OFFSET(0)] AS INT64), CAST(year_month_array[OFFSET(1)] AS INT64), 1), DATE(1970,1,1), MONTH) as months_since_epoch,
  (SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(nextCustomDimensions)) as next_content_id
FROM
  site_history
WHERE (SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(nextCustomDimensions)) IS NOT NULL
  AND ABS(MOD(FARM_FINGERPRINT(CONCAT(visitor_id, content_id)), 10)) >= 9
"""
test_set_df = bigquery.Client().query(sql).to_dataframe()
test_set_df.to_csv('test_set.csv', header=False, index=False, encoding='utf-8')
test_set_df.head()
%%bash
wc -l *_set.csv
!head *_set.csv
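The queries above assign each (visitor, article) pair to train or test with `ABS(MOD(FARM_FINGERPRINT(CONCAT(visitor_id, content_id)), 10))`, so the split is deterministic and repeatable. A minimal Python sketch of the same hash-bucketing idea (using MD5 in place of FarmHash, so the actual bucket assignments differ from BigQuery's):

```python
import hashlib

def assign_split(key, train_buckets=9, total_buckets=10):
    # Hash the key into one of total_buckets buckets; the first train_buckets
    # of them go to the training set. (MD5 stands in for FARM_FINGERPRINT
    # here, so individual assignments won't match BigQuery's.)
    h = int(hashlib.md5(key.encode("utf-8")).hexdigest(), 16)
    return "train" if h % total_buckets < train_buckets else "test"

# The split is stable: the same key always lands in the same bucket,
# and roughly 90% of keys land in train.
keys = [f"visitor{i}_article{i % 50}" for i in range(2000)]
train_frac = sum(assign_split(k) == "train" for k in keys) / len(keys)
print(round(train_frac, 2))  # close to 0.9
```

Because the bucket depends only on the key, rerunning the pipeline never shuffles examples between train and test.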
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Build a base image to work with fairing
Step2: Start an AI Platform job
|
<ASSISTANT_TASK:>
Python Code:
import os
import re
import fairing  # on newer installs this may be `from kubeflow import fairing`

BUCKET = "gs://" # your bucket here
assert re.search(r'gs://.+', BUCKET), 'A GCS bucket is required to store your results.'
!cat Dockerfile
!docker build . -t {base_image}
!docker push {base_image}
additional_files = '' # If your code requires additional files, you can specify them here (or include everything in the current folder with glob.glob('./**', recursive=True))
# If your code does not require any dependencies or config changes, you can directly start from an official Tensorflow docker image
#fairing.config.set_builder('docker', registry=DOCKER_REGISTRY, base_image='gcr.io/deeplearning-platform-release/tf-gpu.1-13')
# base image
fairing.config.set_builder('docker', registry=DOCKER_REGISTRY, base_image=base_image)
# AI Platform job hardware config
fairing.config.set_deployer('gcp', job_config={'trainingInput': {'scaleTier': 'CUSTOM', 'masterType': 'standard_p100'}})
# input and output notebooks
fairing.config.set_preprocessor('full_notebook',
notebook_file="05K_MNIST_TF20Keras_Tensorboard_playground.ipynb",
input_files=additional_files,
output_file=os.path.join(BUCKET, 'fairing-output', 'mnist-001.ipynb'))
# GPU settings for single K80, single p100 respectively
# job_config={'trainingInput': {'scaleTier': 'BASIC_GPU'}}
# job_config={'trainingInput': {'scaleTier': 'CUSTOM', 'masterType': 'standard_p100'}}
# These job_config settings for TPUv2
#job_config={'trainingInput': {'scaleTier': 'BASIC_GPU'}}
#job_config={'trainingInput': {'scaleTier': 'CUSTOM', 'masterType': 'n1-standard-8', 'workerType': 'cloud_tpu', 'workerCount': 1,
# 'workerConfig': {'accelerator_config': {'type': 'TPU_V2','count': 8}}}})
# On AI Platform, TPUv3 support is alpha and available to whitelisted customers only
fairing.config.run()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First we'll load the text file and convert it into integers for our network to use.
Step3: Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text.
Step4: I'll write another function to grab batches out of the arrays made by `split_data`. Here each batch will be a sliding window on these arrays with size batch_size X num_steps. For example, if we want our network to train on a sequence of 100 characters, num_steps = 100. For the next batch, we'll shift this window to the next sequence of num_steps characters. In this way we can feed batches to the network and the cell states will continue through on each batch.
Step5: Hyperparameters
Step6: Write out the graph for TensorBoard
Step7: Training
Step8: Sampling
|
<ASSISTANT_TASK:>
Python Code:
import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
with open('anna.txt', 'r') as f:
text=f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
chars = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
text[:100]
chars[:100]
def split_data(chars, batch_size, num_steps, split_frac=0.9):
    """Split character data into training and validation sets,
    inputs and targets for each set.

    Arguments
    ---------
    chars: character array
    batch_size: Size of examples in each of batch
    num_steps: Number of sequence steps to keep in the input and pass to the network
    split_frac: Fraction of batches to keep in the training set

    Returns train_x, train_y, val_x, val_y
    """
slice_size = batch_size * num_steps
n_batches = int(len(chars) / slice_size)
# Drop the last few characters to make only full batches
x = chars[: n_batches*slice_size]
y = chars[1: n_batches*slice_size + 1]
# Split the data into batch_size slices, then stack them into a 2D matrix
x = np.stack(np.split(x, batch_size))
y = np.stack(np.split(y, batch_size))
# Now x and y are arrays with dimensions batch_size x n_batches*num_steps
# Split into training and validation sets, keep the virst split_frac batches for training
split_idx = int(n_batches*split_frac)
train_x, train_y= x[:, :split_idx*num_steps], y[:, :split_idx*num_steps]
val_x, val_y = x[:, split_idx*num_steps:], y[:, split_idx*num_steps:]
return train_x, train_y, val_x, val_y
train_x, train_y, val_x, val_y = split_data(chars, 10, 200)
train_x.shape
train_x[:,:10]
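To sanity-check the shapes and the one-step target offset, here is the same splitting logic run on a toy integer sequence (restated standalone so it can be executed on its own):

```python
import numpy as np

def split_data(chars, batch_size, num_steps, split_frac=0.9):
    # Same logic as the function above: trim to full batches, offset targets
    # by one step, then split the step axis into train and validation parts.
    slice_size = batch_size * num_steps
    n_batches = len(chars) // slice_size
    x = chars[: n_batches * slice_size]
    y = chars[1 : n_batches * slice_size + 1]
    x = np.stack(np.split(x, batch_size))
    y = np.stack(np.split(y, batch_size))
    split_idx = int(n_batches * split_frac)
    train_x, train_y = x[:, : split_idx * num_steps], y[:, : split_idx * num_steps]
    val_x, val_y = x[:, split_idx * num_steps:], y[:, split_idx * num_steps:]
    return train_x, train_y, val_x, val_y

chars = np.arange(2001)  # toy "text": 2001 known integer tokens
train_x, train_y, val_x, val_y = split_data(chars, batch_size=10, num_steps=20)
print(train_x.shape, val_x.shape)  # (10, 180) and (10, 20)
```

With `arange` input, every target equals its input plus one, which makes the offset easy to verify by eye.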
def get_batch(arrs, num_steps):
batch_size, slice_size = arrs[0].shape
n_batches = int(slice_size/num_steps)
for b in range(n_batches):
yield [x[:, b*num_steps: (b+1)*num_steps] for x in arrs]
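The sliding-window behavior of `get_batch` is easiest to see on small arrays; a standalone demo (the function is restated here so the cell runs by itself):

```python
import numpy as np

def get_batch(arrs, num_steps):
    # Same sliding-window logic as above: yield consecutive windows of
    # num_steps columns from each array in arrs.
    batch_size, slice_size = arrs[0].shape
    n_batches = slice_size // num_steps
    for b in range(n_batches):
        yield [x[:, b * num_steps:(b + 1) * num_steps] for x in arrs]

x = np.arange(20).reshape(2, 10)   # 2 sequences, 10 steps each
y = x + 1                          # targets are inputs shifted by one
windows = list(get_batch([x, y], num_steps=5))
print(len(windows))      # 2 windows of 5 steps each
print(windows[0][0])
```

Each yielded window keeps the batch dimension intact, so the RNN's cell state can be carried across consecutive windows.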
def build_rnn(num_classes, batch_size=50, num_steps=50, lstm_size=128, num_layers=2,
learning_rate=0.001, grad_clip=5, sampling=False):
if sampling == True:
batch_size, num_steps = 1, 1
tf.reset_default_graph()
# Declare placeholders we'll feed into the graph
inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
x_one_hot = tf.one_hot(inputs, num_classes, name='x_one_hot')
targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
y_one_hot = tf.one_hot(targets, num_classes, name='y_one_hot')
y_reshaped = tf.reshape(y_one_hot, [-1, num_classes])
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
# Build the RNN layers
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)
initial_state = cell.zero_state(batch_size, tf.float32)
# Run the data through the RNN layers
outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=initial_state)
final_state = state
# Reshape output so it's a bunch of rows, one row for each cell output
seq_output = tf.concat(outputs, axis=1,name='seq_output')
output = tf.reshape(seq_output, [-1, lstm_size], name='graph_output')
# Now connect the RNN putputs to a softmax layer and calculate the cost
softmax_w = tf.Variable(tf.truncated_normal((lstm_size, num_classes), stddev=0.1),
name='softmax_w')
softmax_b = tf.Variable(tf.zeros(num_classes), name='softmax_b')
logits = tf.matmul(output, softmax_w) + softmax_b
preds = tf.nn.softmax(logits, name='predictions')
loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped, name='loss')
cost = tf.reduce_mean(loss, name='cost')
# Optimizer for training, using gradient clipping to control exploding gradients
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
# Export the nodes
export_nodes = ['inputs', 'targets', 'initial_state', 'final_state',
'keep_prob', 'cost', 'preds', 'optimizer']
Graph = namedtuple('Graph', export_nodes)
local_dict = locals()
graph = Graph(*[local_dict[each] for each in export_nodes])
return graph
batch_size = 100
num_steps = 100
lstm_size = 512
num_layers = 2
learning_rate = 0.001
model = build_rnn(len(vocab),
batch_size=batch_size,
num_steps=num_steps,
learning_rate=learning_rate,
lstm_size=lstm_size,
num_layers=num_layers)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
file_writer = tf.summary.FileWriter('./logs/1', sess.graph)
!mkdir -p checkpoints/anna
epochs = 1
save_every_n = 200
train_x, train_y, val_x, val_y = split_data(chars, batch_size, num_steps)
model = build_rnn(len(vocab),
batch_size=batch_size,
num_steps=num_steps,
learning_rate=learning_rate,
lstm_size=lstm_size,
num_layers=num_layers)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/anna20.ckpt')
n_batches = int(train_x.shape[1]/num_steps)
iterations = n_batches * epochs
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for b, (x, y) in enumerate(get_batch([train_x, train_y], num_steps), 1):
iteration = e*n_batches + b
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: 0.5,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.cost, model.final_state, model.optimizer],
feed_dict=feed)
loss += batch_loss
end = time.time()
print('Epoch {}/{} '.format(e+1, epochs),
'Iteration {}/{}'.format(iteration, iterations),
'Training loss: {:.4f}'.format(loss/b),
'{:.4f} sec/batch'.format((end-start)))
if (iteration%save_every_n == 0) or (iteration == iterations):
# Check performance, notice dropout has been set to 1
val_loss = []
new_state = sess.run(model.initial_state)
for x, y in get_batch([val_x, val_y], num_steps):
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: 1.,
model.initial_state: new_state}
batch_loss, new_state = sess.run([model.cost, model.final_state], feed_dict=feed)
val_loss.append(batch_loss)
print('Validation loss:', np.mean(val_loss),
'Saving checkpoint!')
saver.save(sess, "checkpoints/anna/i{}_l{}_{:.3f}.ckpt".format(iteration, lstm_size, np.mean(val_loss)))
tf.train.get_checkpoint_state('checkpoints/anna')
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
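A quick standalone demo of the top-n sampling. Note that `np.squeeze` returns a view, so the version above mutates `preds` in place; this sketch copies first:

```python
import numpy as np

def pick_top_n(preds, vocab_size, top_n=5):
    # Zero out all but the top_n probabilities, renormalize, then draw
    # one index from the truncated distribution.
    p = np.squeeze(preds).astype(float)  # astype copies, so preds is untouched
    p[np.argsort(p)[:-top_n]] = 0
    p = p / np.sum(p)
    return np.random.choice(vocab_size, 1, p=p)[0]

np.random.seed(0)
preds = np.array([[0.05, 0.1, 0.5, 0.05, 0.2, 0.1]])
draws = [pick_top_n(preds, vocab_size=6, top_n=2) for _ in range(200)]
print(sorted(set(draws)))  # only indices 2 and 4 survive the top-2 cut
```

Restricting sampling to the top few candidates keeps the generated text coherent while still allowing some variety.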
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
    samples = [c for c in prime]
model = build_rnn(vocab_size, lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
checkpoint = "checkpoints/anna/i3560_l512_1.122.ckpt"
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i200_l512_2.432.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i600_l512_1.750.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i1000_l512_1.484.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Univariate normal
Step3: Grid algorithm
Step4: Update
Step5: Posterior distribution of sigma
Step6: Posterior distribution of mu
Step7: Posterior joint distribution
Step8: Sampling from posterior predictive
Step9: Analytic posterior predictive
Step10: Multivariate normal
Step11: Update
Step12: Posterior distribution of covariance
Step13: Evaluate the Inverse Wishart PDF
Step14: Posterior distribution of mu
Step15: Multivariate student t
Step16: Compare to analytic univariate distributions
Step17: Sampling from posterior predictive
Step18: Comparing to the multivariate student t
Step19: Bayesian linear regression
Step22: Grid algorithm
Step23: Posterior distribution of sigma
Step24: Posterior distribution of sigma, updatable version
Step25: Sampling the posterior of the parameters
Step26: Posterior using multivariate Student t
Step27: Sampling the predictive distribution
Step28: Modeling the predictive distribution
Step29: Leftovers
|
<ASSISTANT_TASK:>
Python Code:
# If we're running on Colab, install empiricaldist
# https://pypi.org/project/empiricaldist/
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
!pip install empiricaldist
# Get utils.py and create directories
import os
if not os.path.exists('utils.py'):
!wget https://github.com/AllenDowney/ThinkBayes2/raw/master/soln/utils.py
if not os.path.exists('figs'):
!mkdir figs
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from empiricaldist import Pmf, Cdf
from utils import decorate, savefig
from scipy.stats import norm
data = norm(10, 2).rvs(20)
data
n = len(data)
xbar = np.mean(data)
s2 = np.var(data)
n, xbar, s2
mus = np.linspace(8, 12, 101)
prior_mu = Pmf(1, mus)
prior_mu.index.name = 'mu'
sigmas = np.linspace(0.01, 5, 100)
ps = sigmas**-2
prior_sigma = Pmf(ps, sigmas)
prior_sigma.index.name = 'sigma'
from utils import make_joint
prior = make_joint(prior_mu, prior_sigma)
from utils import normalize
def update_norm(prior, data):
    """Update the prior based on data.

    prior: joint distribution of mu and sigma
    data: sequence of observations
    """
X, Y, Z = np.meshgrid(prior.columns, prior.index, data)
likelihood = norm(X, Y).pdf(Z).prod(axis=2)
posterior = prior * likelihood
normalize(posterior)
return posterior
posterior = update_norm(prior, data)
from utils import marginal
posterior_mu_grid = marginal(posterior, 0)
posterior_sigma_grid = marginal(posterior, 1)
posterior_mu_grid.plot()
decorate(title='Posterior distribution of mu')
posterior_sigma_grid.plot(color='C1')
decorate(title='Posterior distribution of sigma')
m0 = 0
kappa0 = 0
alpha0 = 0
beta0 = 0
m_n = (kappa0 * m0 + n * xbar) / (kappa0 + n)
m_n
kappa_n = kappa0 + n
kappa_n
alpha_n = alpha0 + n/2
alpha_n
beta_n = beta0 + n*s2/2 + n * kappa0 * (xbar-m0)**2 / (kappa0 + n) / 2
beta_n
def update_normal(prior, summary):
m0, kappa0, alpha0, beta0 = prior
n, xbar, s2 = summary
m_n = (kappa0 * m0 + n * xbar) / (kappa0 + n)
kappa_n = kappa0 + n
alpha_n = alpha0 + n/2
beta_n = (beta0 + n*s2/2 +
n * kappa0 * (xbar-m0)**2 / (kappa0 + n) / 2)
return m_n, kappa_n, alpha_n, beta_n
prior = 0, 0, 0, 0
summary = n, xbar, s2
update_normal(prior, summary)
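As a quick sanity check of the conjugate update: with an uninformative prior the posterior should be driven entirely by the sample statistics. A standalone sketch (the function is restated so this cell runs on its own):

```python
import numpy as np

def update_normal(prior, summary):
    # Conjugate Normal-Inverse-Gamma update, same algebra as above.
    m0, kappa0, alpha0, beta0 = prior
    n, xbar, s2 = summary
    m_n = (kappa0 * m0 + n * xbar) / (kappa0 + n)
    kappa_n = kappa0 + n
    alpha_n = alpha0 + n / 2
    beta_n = beta0 + n * s2 / 2 + n * kappa0 * (xbar - m0) ** 2 / (kappa0 + n) / 2
    return m_n, kappa_n, alpha_n, beta_n

rng = np.random.default_rng(1)
data = rng.normal(10, 2, size=50)
n, xbar, s2 = len(data), data.mean(), data.var()

m_n, kappa_n, alpha_n, beta_n = update_normal((0, 0, 0, 0), (n, xbar, s2))

# With a flat prior the posterior mean of mu is the sample mean, and the
# posterior mean of sigma^2 is beta_n / (alpha_n - 1) = n * s2 / (n - 2).
print(m_n, beta_n / (alpha_n - 1))
```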
from scipy.stats import invgamma
dist_sigma2 = invgamma(alpha_n, scale=beta_n)
dist_sigma2.mean()
dist_sigma2.std()
sigma2s = np.linspace(0.01, 20, 101)
ps = dist_sigma2.pdf(sigma2s)
posterior_sigma2_invgammas = Pmf(ps, sigma2s)
posterior_sigma2_invgammas.normalize()
posterior_sigma2_invgammas.plot()
decorate(xlabel='$\sigma^2$',
ylabel='PDF',
title='Posterior distribution of variance')
sigmas = np.sqrt(sigma2s)
posterior_sigma_invgammas = Pmf(ps, sigmas)
posterior_sigma_invgammas.normalize()
posterior_sigma_grid.make_cdf().plot(color='gray', label='grid')
posterior_sigma_invgammas.make_cdf().plot(color='C1', label='invgamma')
decorate(xlabel='$\sigma$',
ylabel='PDF',
title='Posterior distribution of standard deviation')
posterior_sigma_invgammas.mean(), posterior_sigma_grid.mean()
posterior_sigma_invgammas.std(), posterior_sigma_grid.std()
2 / np.sqrt(2 * (n-1))
from scipy.stats import t as student_t
def make_student_t(df, loc, scale):
return student_t(df, loc=loc, scale=scale)
df = 2 * alpha_n
precision = alpha_n * kappa_n / beta_n
dist_mu = make_student_t(df, m_n, 1/np.sqrt(precision))
dist_mu.mean()
dist_mu.std()
np.sqrt(4/n)
mus = np.linspace(8, 12, 101)
ps = dist_mu.pdf(mus)
posterior_mu_student = Pmf(ps, mus)
posterior_mu_student.normalize()
posterior_mu_student.plot()
decorate(xlabel='$\mu$',
ylabel='PDF',
title='Posterior distribution of mu')
posterior_mu_grid.make_cdf().plot(color='gray', label='grid')
posterior_mu_student.make_cdf().plot(label='invgamma')
decorate(xlabel='$\mu$',
ylabel='CDF',
title='Posterior distribution of mu')
def make_posterior_mu(m_n, kappa_n, alpha_n, beta_n):
df = 2 * alpha_n
loc = m_n
precision = alpha_n * kappa_n / beta_n
dist_mu = make_student_t(df, loc, 1/np.sqrt(precision))
return dist_mu
mu_mesh, sigma2_mesh = np.meshgrid(mus, sigma2s)
joint = (dist_sigma2.pdf(sigma2_mesh) *
norm(m_n, sigma2_mesh/kappa_n).pdf(mu_mesh))
joint_df = pd.DataFrame(joint, columns=mus, index=sigma2s)
from utils import plot_contour
plot_contour(joint_df)
decorate(xlabel='$\mu$',
ylabel='$\sigma^2$',
title='Posterior joint distribution')
sample_sigma2 = dist_sigma2.rvs(1000)
sample_mu = norm(m_n, sample_sigma2 / kappa_n).rvs()
sample_pred = norm(sample_mu, np.sqrt(sample_sigma2)).rvs()
cdf_pred = Cdf.from_seq(sample_pred)
cdf_pred.plot()
sample_pred.mean(), sample_pred.var()
df = 2 * alpha_n
precision = alpha_n * kappa_n / beta_n / (kappa_n+1)
dist_pred = make_student_t(df, m_n, 1/np.sqrt(precision))
xs = np.linspace(2, 16, 101)
ys = dist_pred.cdf(xs)
plt.plot(xs, ys, color='gray', label='student t')
cdf_pred.plot(label='sample')
decorate(title='Predictive distribution')
def make_posterior_pred(m_n, kappa_n, alpha_n, beta_n):
df = 2 * alpha_n
loc = m_n
precision = alpha_n * kappa_n / beta_n / (kappa_n+1)
dist_pred = make_student_t(df, loc, 1/np.sqrt(precision))
return dist_pred
mean = [10, 20]
sigma_x = 2
sigma_y = 3
rho = 0.3
cov = rho * sigma_x * sigma_y
Sigma = [[sigma_x**2, cov], [cov, sigma_y**2]]
Sigma
from scipy.stats import multivariate_normal
n = 20
data = multivariate_normal(mean, Sigma).rvs(n)
data
n = len(data)
n
xbar = np.mean(data, axis=0)
xbar
S = np.cov(data.transpose())
S
np.corrcoef(data.transpose())
stds = np.sqrt(np.diag(S))
stds
corrcoef = S / np.outer(stds, stds)
corrcoef
def unpack_cov(S):
stds = np.sqrt(np.diag(S))
corrcoef = S / np.outer(stds, stds)
return stds[0], stds[1], corrcoef[0][1]
sigma_x, sigma_y, rho = unpack_cov(S)
sigma_x, sigma_y, rho
def pack_cov(sigma_x, sigma_y, rho):
cov = sigma_x * sigma_y * rho
return np.array([[sigma_x**2, cov], [cov, sigma_y**2]])
pack_cov(sigma_x, sigma_y, rho)
S
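`pack_cov` and `unpack_cov` should be exact inverses of each other; a standalone round-trip check:

```python
import numpy as np

def pack_cov(sigma_x, sigma_y, rho):
    # Build a 2x2 covariance matrix from standard deviations and correlation.
    cov = sigma_x * sigma_y * rho
    return np.array([[sigma_x**2, cov], [cov, sigma_y**2]])

def unpack_cov(S):
    # Recover the standard deviations and correlation from a covariance matrix.
    stds = np.sqrt(np.diag(S))
    corrcoef = S / np.outer(stds, stds)
    return stds[0], stds[1], corrcoef[0][1]

S = pack_cov(2.0, 3.0, 0.3)
sx, sy, r = unpack_cov(S)
print(sx, sy, r)  # recovers (2.0, 3.0, 0.3)
```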
m_0 = 0
Lambda_0 = 0
nu_0 = 0
kappa_0 = 0
m_n = (kappa_0 * m_0 + n * xbar) / (kappa_0 + n)
m_n
xbar
diff = (xbar - m_0)
D = np.outer(diff, diff)
D
Lambda_n = Lambda_0 + S + n * kappa_0 * D / (kappa_0 + n)
Lambda_n
S
nu_n = nu_0 + n
nu_n
kappa_n = kappa_0 + n
kappa_n
from scipy.stats import invwishart
def make_invwishart(nu, Lambda):
d, _ = Lambda.shape
return invwishart(nu, scale=Lambda * (nu - d - 1))
dist_cov = make_invwishart(nu_n, Lambda_n)
dist_cov.mean()
S
sample_Sigma = dist_cov.rvs(1000)
np.mean(sample_Sigma, axis=0)
res = [unpack_cov(Sigma) for Sigma in sample_Sigma]
sample_sigma_x, sample_sigma_y, sample_rho = np.transpose(res)
sample_sigma_x.mean(), sample_sigma_y.mean(), sample_rho.mean()
unpack_cov(S)
Cdf.from_seq(sample_sigma_x).plot(label=r'$\sigma_x$')
Cdf.from_seq(sample_sigma_y).plot(label=r'$\sigma_y$')
decorate(xlabel='Standard deviation',
ylabel='CDF',
title='Posterior distribution of standard deviation')
Cdf.from_seq(sample_rho).plot()
decorate(xlabel='Coefficient of correlation',
ylabel='CDF',
title='Posterior distribution of correlation')
num = 51
sigma_xs = np.linspace(0.01, 10, num)
sigma_ys = np.linspace(0.01, 10, num)
rhos = np.linspace(-0.3, 0.9, num)
index = pd.MultiIndex.from_product([sigma_xs, sigma_ys, rhos],
names=['sigma_x', 'sigma_y', 'rho'])
joint = Pmf(0, index)
joint.head()
dist_cov.pdf(S)
for sigma_x, sigma_y, rho in joint.index:
Sigma = pack_cov(sigma_x, sigma_y, rho)
joint.loc[sigma_x, sigma_y, rho] = dist_cov.pdf(Sigma)
joint.normalize()
from utils import pmf_marginal
posterior_sigma_x = pmf_marginal(joint, 0)
posterior_sigma_y = pmf_marginal(joint, 1)
marginal_rho = pmf_marginal(joint, 2)
posterior_sigma_x.mean(), posterior_sigma_y.mean(), marginal_rho.mean()
unpack_cov(S)
posterior_sigma_x.plot(label='$\sigma_x$')
posterior_sigma_y.plot(label='$\sigma_y$')
decorate(xlabel='Standard deviation',
ylabel='PDF',
title='Posterior distribution of standard deviation')
posterior_sigma_x.make_cdf().plot(color='gray')
posterior_sigma_y.make_cdf().plot(color='gray')
Cdf.from_seq(sample_sigma_x).plot(label=r'$\sigma_x$')
Cdf.from_seq(sample_sigma_y).plot(label=r'$\sigma_y$')
decorate(xlabel='Standard deviation',
ylabel='CDF',
title='Posterior distribution of standard deviation')
marginal_rho.make_cdf().plot(color='gray')
Cdf.from_seq(sample_rho).plot()
decorate(xlabel='Coefficient of correlation',
ylabel='CDF',
title='Posterior distribution of correlation')
m_n
sample_mu = [multivariate_normal(m_n, Sigma/kappa_n).rvs()
for Sigma in sample_Sigma]
sample_mu0, sample_mu1 = np.transpose(sample_mu)
sample_mu0.mean(), sample_mu1.mean()
xbar
sample_mu0.std(), sample_mu1.std()
2 / np.sqrt(n), 3 / np.sqrt(n)
Cdf.from_seq(sample_mu0).plot(label=r'$\mu_0$ sample')
Cdf.from_seq(sample_mu1).plot(label=r'$\mu_1$ sample')
decorate(xlabel=r'$\mu$',
ylabel='CDF',
title=r'Posterior distribution of $\mu$')
from scipy.special import gammaln
def multistudent_pdf(x, mean, shape, df):
return np.exp(logpdf(x, mean, shape, df))
def logpdf(x, mean, shape, df):
p = len(mean)
vals, vecs = np.linalg.eigh(shape)
logdet = np.log(vals).sum()
valsinv = np.array([1.0/v for v in vals])
U = vecs * np.sqrt(valsinv)
dev = x - mean
maha = np.square(dev @ U).sum(axis=-1)
t = 0.5 * (df + p)
A = gammaln(t)
B = gammaln(0.5 * df)
C = p/2. * np.log(df * np.pi)
D = 0.5 * logdet
E = -t * np.log(1 + (1./df) * maha)
return A - B - C - D + E
d = len(m_n)
x = m_n
mean = m_n
df = nu_n - d + 1
shape = Lambda_n / kappa_n
multistudent_pdf(x, mean, shape, df)
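One way to check `multistudent_pdf` is to compare the one-dimensional case against SciPy's univariate Student t, which should agree exactly when `scale = sqrt(shape)`. A standalone sketch (the density is restated in a slightly condensed form):

```python
import numpy as np
from scipy.special import gammaln
from scipy.stats import t as student_t

def multistudent_pdf(x, mean, shape, df):
    # Multivariate Student t density, same formulation as above.
    p = len(mean)
    vals, vecs = np.linalg.eigh(shape)
    logdet = np.log(vals).sum()
    U = vecs * np.sqrt(1.0 / vals)
    dev = x - mean
    maha = np.square(dev @ U).sum(axis=-1)   # squared Mahalanobis distance
    t = 0.5 * (df + p)
    logpdf = (gammaln(t) - gammaln(0.5 * df)
              - 0.5 * p * np.log(df * np.pi)
              - 0.5 * logdet
              - t * np.log1p(maha / df))
    return np.exp(logpdf)

# In one dimension this matches scipy's t with scale = sqrt(shape).
p_multi = multistudent_pdf(np.array([0.8]), np.array([0.0]),
                           np.array([[1.5]]), df=7)
p_uni = student_t(7, loc=0.0, scale=np.sqrt(1.5)).pdf(0.8)
print(p_multi, p_uni)
```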
mu0s = np.linspace(8, 12, 91)
mu1s = np.linspace(18, 22, 101)
mu_mesh = np.dstack(np.meshgrid(mu0s, mu1s))
mu_mesh.shape
ps = multistudent_pdf(mu_mesh, mean, shape, df)
joint = pd.DataFrame(ps, columns=mu0s, index=mu1s)
normalize(joint)
plot_contour(joint)
from utils import marginal
posterior_mu0_student = marginal(joint, 0)
posterior_mu1_student = marginal(joint, 1)
posterior_mu0_student.make_cdf().plot(color='gray', label=r'$\mu_0$ multi t')
posterior_mu1_student.make_cdf().plot(color='gray', label=r'$\mu_1$ multi t')
Cdf.from_seq(sample_mu0).plot(label=r'$\mu_0$ sample')
Cdf.from_seq(sample_mu1).plot(label=r'$\mu_1$ sample')
decorate(xlabel=r'$\mu$',
ylabel='CDF',
title=r'Posterior distribution of $\mu$')
prior = 0, 0, 0, 0
summary = n, xbar[0], S[0][0]
summary
params = update_normal(prior, summary)
params
dist_mu0 = make_posterior_mu(*params)
dist_mu0.mean(), dist_mu0.std()
mu0s = np.linspace(7, 12, 101)
ps = dist_mu0.pdf(mu0s)
posterior_mu0 = Pmf(ps, index=mu0s)
posterior_mu0.normalize()
prior = 0, 0, 0, 0
summary = n, xbar[1], S[1][1]
summary
params = update_normal(prior, summary)
params
dist_mu1 = make_posterior_mu(*params)
dist_mu1.mean(), dist_mu1.std()
mu1s = np.linspace(17, 23, 101)
ps = dist_mu1.pdf(mu1s)
posterior_mu1 = Pmf(ps, index=mu1s)
posterior_mu1.normalize()
posterior_mu0.make_cdf().plot(label=r'$\mu_0$ uni t', color='gray')
posterior_mu1.make_cdf().plot(label=r'$\mu_1$ uni t', color='gray')
Cdf.from_seq(sample_mu0).plot(label=r'$\mu_0$ sample')
Cdf.from_seq(sample_mu1).plot(label=r'$\mu_1$ sample')
decorate(xlabel=r'$\mu$',
ylabel='CDF',
title=r'Posterior distribution of $\mu$')
sample_pred = [multivariate_normal(mu, Sigma).rvs()
for mu, Sigma in zip(sample_mu, sample_Sigma)]
sample_x0, sample_x1 = np.transpose(sample_pred)
sample_x0.mean(), sample_x1.mean()
sample_x0.std(), sample_x1.std()
prior = 0, 0, 0, 0
summary = n, xbar[0], S[0][0]
params = update_normal(prior, summary)
dist_x0 = make_posterior_pred(*params)
dist_x0.mean(), dist_x0.std()
x0s = np.linspace(2, 18, 101)
ps = dist_x0.pdf(x0s)
pred_x0 = Pmf(ps, index=x0s)
pred_x0.normalize()
prior = 0, 0, 0, 0
summary = n, xbar[1], S[1][1]
params = update_normal(prior, summary)
dist_x1 = make_posterior_pred(*params)
dist_x1.mean(), dist_x1.std()
x1s = np.linspace(10, 30, 101)
ps = dist_x1.pdf(x1s)
pred_x1 = Pmf(ps, index=x1s)
pred_x1.normalize()
pred_x0.make_cdf().plot(label=r'$x_0$ student t', color='gray')
pred_x1.make_cdf().plot(label=r'$x_1$ student t', color='gray')
Cdf.from_seq(sample_x0).plot(label=r'$x_0$ sample')
Cdf.from_seq(sample_x1).plot(label=r'$x_1$ sample')
decorate(xlabel='Quantity',
ylabel='CDF',
title='Posterior predictive distributions')
d = len(m_n)
x = m_n
mean = m_n
df = nu_n - d + 1
shape = Lambda_n * (kappa_n+1) / kappa_n
multistudent_pdf(x, mean, shape, df)
x0s = np.linspace(0, 20, 91)
x1s = np.linspace(10, 30, 101)
x_mesh = np.dstack(np.meshgrid(x0s, x1s))
x_mesh.shape
ps = multistudent_pdf(x_mesh, mean, shape, df)
joint = pd.DataFrame(ps, columns=x0s, index=x1s)
normalize(joint)
plot_contour(joint)
from utils import marginal
posterior_x0_student = marginal(joint, 0)
posterior_x1_student = marginal(joint, 1)
posterior_x0_student.make_cdf().plot(color='gray', label=r'$x_0$ multi t')
posterior_x1_student.make_cdf().plot(color='gray', label=r'$x_1$ multi t')
Cdf.from_seq(sample_x0).plot(label=r'$x_0$ sample')
Cdf.from_seq(sample_x1).plot(label=r'$x_1$ sample')
decorate(xlabel='Quantity',
ylabel='CDF',
title='Posterior predictive distributions')
inter, slope = 5, 2
sigma = 3
n = 20
xs = norm(0, 3).rvs(n)
xs = np.sort(xs)
ys = inter + slope * xs + norm(0, sigma).rvs(20)
plt.plot(xs, ys, 'o');
import statsmodels.api as sm
X = sm.add_constant(xs)
X
model = sm.OLS(ys, X)
results = model.fit()
results.summary()
beta_hat = results.params
beta_hat
# k = results.df_model
k = 2
s2 = results.resid @ results.resid / (n - k)
s2
s2 = results.ssr / (n - k)
s2
np.sqrt(s2)
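The same point estimates can be computed without statsmodels; a standalone numpy sketch of the least-squares fit and the unbiased residual variance (with freshly generated data, so the numbers won't match the run above):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
xs = np.sort(rng.normal(0, 3, size=n))
ys = 5 + 2 * xs + rng.normal(0, 3, size=n)

# Design matrix with an intercept column, like sm.add_constant(xs).
X = np.column_stack([np.ones(n), xs])

# Ordinary least squares via numpy; beta_hat is [intercept, slope].
beta_hat, *_ = np.linalg.lstsq(X, ys, rcond=None)
resid = ys - X @ beta_hat
k = X.shape[1]
s2 = resid @ resid / (n - k)   # unbiased residual variance, as above
print(beta_hat, s2)
```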
beta0s = np.linspace(2, 8, 71)
prior_inter = Pmf(1, beta0s, name='inter')
prior_inter.index.name = 'Intercept'
beta1s = np.linspace(1, 3, 61)
prior_slope = Pmf(1, beta1s, name='slope')
prior_slope.index.name = 'Slope'
sigmas = np.linspace(1, 6, 51)
ps = sigmas**-2
prior_sigma = Pmf(ps, sigmas, name='sigma')
prior_sigma.index.name = 'Sigma'
prior_sigma.normalize()
prior_sigma.plot()
from utils import make_joint
def make_joint3(pmf1, pmf2, pmf3):
    """Make a joint distribution with three parameters.

    pmf1: Pmf object
    pmf2: Pmf object
    pmf3: Pmf object

    returns: Pmf representing a joint distribution
    """
joint2 = make_joint(pmf2, pmf1).stack()
joint3 = make_joint(pmf3, joint2).stack()
return Pmf(joint3)
prior3 = make_joint3(prior_slope, prior_inter, prior_sigma)
prior3.head()
from utils import normalize
def update_optimized(prior, data):
    """Posterior distribution of regression parameters
    `slope`, `inter`, and `sigma`.

    prior: Pmf representing the joint prior
    data: DataFrame with columns `x` and `y`

    returns: Pmf representing the joint posterior
    """
xs = data['x']
ys = data['y']
sigmas = prior.columns
likelihood = prior.copy()
for slope, inter in prior.index:
expected = slope * xs + inter
resid = ys - expected
resid_mesh, sigma_mesh = np.meshgrid(resid, sigmas)
densities = norm.pdf(resid_mesh, 0, sigma_mesh)
likelihood.loc[slope, inter] = densities.prod(axis=1)
posterior = prior * likelihood
normalize(posterior)
return posterior
data = pd.DataFrame(dict(x=xs, y=ys))
from utils import normalize
posterior = update_optimized(prior3.unstack(), data)
normalize(posterior)
from utils import marginal
posterior_sigma_grid = marginal(posterior, 0)
posterior_sigma_grid.plot(label='grid')
decorate(title='Posterior distribution of sigma')
joint_posterior = marginal(posterior, 1).unstack()
plot_contour(joint_posterior)
posterior_beta0_grid = marginal(joint_posterior, 0)
posterior_beta1_grid = marginal(joint_posterior, 1)
posterior_beta0_grid.make_cdf().plot(label=r'$\beta_0$')
posterior_beta1_grid.make_cdf().plot(label=r'$\beta_1$')
decorate(title='Posterior distributions of parameters')
nu = n-k
nu/2, nu*s2/2
from scipy.stats import invgamma
dist_sigma2 = invgamma(nu/2, scale=nu*s2/2)
dist_sigma2.mean()
sigma2s = np.linspace(0.01, 30, 101)
ps = dist_sigma2.pdf(sigma2s)
posterior_sigma2_invgamma = Pmf(ps, sigma2s)
posterior_sigma2_invgamma.normalize()
posterior_sigma2_invgamma.plot()
sigmas = np.sqrt(sigma2s)
posterior_sigma_invgamma = Pmf(ps, sigmas)
posterior_sigma_invgamma.normalize()
posterior_sigma_invgamma.mean(), posterior_sigma_grid.mean()
posterior_sigma_grid.make_cdf().plot(color='gray', label='grid')
posterior_sigma_invgamma.make_cdf().plot(label='invgamma')
decorate(title='Posterior distribution of sigma')
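The shape and scale passed to `invgamma` above come from the standard conjugate result for linear regression with the noninformative prior $p(\beta, \sigma^2) \propto \sigma^{-2}$: the marginal posterior of the noise variance is inverse-gamma,

```latex
\sigma^2 \mid y \;\sim\; \mathrm{Inv\text{-}Gamma}\!\left(\frac{\nu}{2},\; \frac{\nu s^2}{2}\right),
\qquad \nu = n - k,
```

with $s^2$ the unbiased residual variance — which is why `invgamma(nu/2, scale=nu*s2/2)` matches the grid posterior.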
Lambda_0 = np.zeros((k, k))
Lambda_n = Lambda_0 + X.T @ X
Lambda_n
from scipy.linalg import inv
mu_0 = np.zeros(k)
mu_n = inv(Lambda_n) @ (Lambda_0 @ mu_0 + X.T @ X @ beta_hat)
mu_n
a_0 = 0
a_n = a_0 + n / 2
a_n
b_0 = 0
b_n = b_0 + (ys.T @ ys +
mu_0.T @ Lambda_0 @ mu_0 -
mu_n.T @ Lambda_n @ mu_n) / 2
b_n
a_n, nu/2
b_n, nu * s2 / 2
sample_sigma2 = dist_sigma2.rvs(1000)
sample_sigma = np.sqrt(sample_sigma2)
from scipy.linalg import inv
V_beta = inv(X.T @ X)
V_beta
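`V_beta` is the unscaled covariance of the coefficient estimates; the conditional posterior sampled in the next cell is the standard conjugate result

```latex
\hat\beta = (X^\top X)^{-1} X^\top y,
\qquad
\beta \mid \sigma^2, y \;\sim\; \mathcal{N}\!\big(\hat\beta,\; \sigma^2 (X^\top X)^{-1}\big).
```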
sample_beta = [multivariate_normal(beta_hat, V_beta * sigma2).rvs()
for sigma2 in sample_sigma2]
np.mean(sample_beta, axis=0)
beta_hat
np.std(sample_beta, axis=0)
results.bse
sample_beta0, sample_beta1 = np.transpose(sample_beta)
Cdf.from_seq(sample_beta0).plot(label=r'$\beta_0$')
Cdf.from_seq(sample_beta1).plot(label=r'$\beta_1$')
decorate(title='Posterior distributions of the parameters')
x = beta_hat
mean = beta_hat
df = (n - k)
shape = (V_beta * s2)
multistudent_pdf(x, mean, shape, df)
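`multistudent_pdf` is defined in an earlier, omitted cell of the notebook. If you are running this excerpt standalone, a drop-in sketch using SciPy's `multivariate_t` (an assumption: requires SciPy >= 1.6) would be:

```python
from scipy.stats import multivariate_t

def multistudent_pdf(x, mean, shape, df):
    # density of the multivariate Student-t with location `mean`,
    # positive-definite shape matrix `shape`, and `df` degrees of freedom
    return multivariate_t(loc=mean, shape=shape, df=df).pdf(x)
```

For a one-dimensional problem this reduces to the usual Student-t density.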
low, high = sample_beta0.min(), sample_beta0.max()
low, high
beta0s = np.linspace(0.9*low, 1.1*high, 101)
low, high = sample_beta1.min(), sample_beta1.max()
beta1s = np.linspace(0.9*low, 1.1*high, 91)
beta0_mesh, beta1_mesh = np.meshgrid(beta0s, beta1s)
beta_mesh = np.dstack(np.meshgrid(beta0s, beta1s))
beta_mesh.shape
ps = multistudent_pdf(beta_mesh, mean, shape, df)
ps.shape
joint = pd.DataFrame(ps, columns=beta0s, index=beta1s)
from utils import normalize
normalize(joint)
from utils import plot_contour
plot_contour(joint)
decorate(xlabel=r'$\beta_0$',
ylabel=r'$\beta_1$')
marginal_beta0_student = marginal(joint, 0)
marginal_beta1_student = marginal(joint, 1)
from utils import marginal
posterior_beta0_grid.make_cdf().plot(color='gray', label=r'grid $\beta_0$')
posterior_beta1_grid.make_cdf().plot(color='gray', label=r'grid $\beta_1$')
marginal_beta0_student.make_cdf().plot(label=r'student $\beta_0$', color='gray')
marginal_beta1_student.make_cdf().plot(label=r'student $\beta_1$', color='gray')
Cdf.from_seq(sample_beta0).plot(label=r'sample $\beta_0$')
Cdf.from_seq(sample_beta1).plot(label=r'sample $\beta_1$')
decorate()
t = [X @ beta + norm(0, sigma).rvs(n)
for beta, sigma in zip(sample_beta, sample_sigma)]
predictions = np.array(t)
predictions.shape
low, median, high = np.percentile(predictions, [5, 50, 95], axis=0)
plt.plot(xs, ys, 'o')
plt.plot(xs, median)
plt.fill_between(xs, low, high, color='C1', alpha=0.3)
xnew = [1, 2, 3]
Xnew = sm.add_constant(xnew)
Xnew
t = [Xnew @ beta + norm(0, sigma).rvs(len(xnew))
for beta, sigma in zip(sample_beta, sample_sigma)]
predictions = np.array(t)
predictions.shape
x0, x1, x2 = predictions.T
Cdf.from_seq(x0).plot()
Cdf.from_seq(x1).plot()
Cdf.from_seq(x2).plot()
mu_new = Xnew @ beta_hat
mu_new
cov_new = s2 * (np.eye(len(xnew)) + Xnew @ V_beta @ Xnew.T)
cov_new
x = mu_new
mean = mu_new
df = (n - k)
shape = cov_new
multistudent_pdf(x, mean, shape, df)
y1s = np.linspace(0, 20, 51)
y0s = np.linspace(0, 20, 61)
y2s = np.linspace(0, 20, 71)
mesh = np.stack(np.meshgrid(y0s, y1s, y2s), axis=-1)
mesh.shape
ps = multistudent_pdf(mesh, mean, shape, df)
ps.shape
ps /= ps.sum()
ps.sum()
p1s = ps.sum(axis=1).sum(axis=1)
p1s.shape
p0s = ps.sum(axis=0).sum(axis=1)
p0s.shape
p2s = ps.sum(axis=0).sum(axis=0)
p2s.shape
pmf_y0 = Pmf(p0s, y0s)
pmf_y1 = Pmf(p1s, y1s)
pmf_y2 = Pmf(p2s, y2s)
pmf_y0.mean(), pmf_y1.mean(), pmf_y2.mean()
pmf_y0.make_cdf().plot(color='gray')
pmf_y1.make_cdf().plot(color='gray')
pmf_y2.make_cdf().plot(color='gray')
Cdf.from_seq(x0).plot()
Cdf.from_seq(x1).plot()
Cdf.from_seq(x2).plot()
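The chained `.sum(axis=...).sum(axis=...)` calls above are easy to misread because each sum renumbers the remaining axes; numpy's tuple-axis form performs the same marginalization in one step:

```python
import numpy as np

ps = np.random.default_rng(0).random((2, 3, 4))
ps /= ps.sum()

# summing out axes 1 and 2 leaves the marginal over axis 0
m_chained = ps.sum(axis=1).sum(axis=1)   # second sum's axis 1 is the original axis 2
m_tuple = ps.sum(axis=(1, 2))
print(np.allclose(m_chained, m_tuple))   # True
```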
stop  # deliberately undefined name: halts "Run All" before the optional material below
from scipy.stats import chi2
class NormalInverseWishartDistribution(object):
    def __init__(self, mu, lmbda, nu, psi):
        self.mu = mu
        self.lmbda = float(lmbda)
        self.nu = nu
        self.psi = psi
        self.inv_psi = np.linalg.inv(psi)

    def sample(self):
        sigma = np.linalg.inv(self.wishartrand())
        return (np.random.multivariate_normal(self.mu, sigma / self.lmbda), sigma)

    def wishartrand(self):
        dim = self.inv_psi.shape[0]
        chol = np.linalg.cholesky(self.inv_psi)
        foo = np.zeros((dim, dim))

        for i in range(dim):
            for j in range(i+1):
                if i == j:
                    foo[i, j] = np.sqrt(chi2.rvs(self.nu - (i+1) + 1))
                else:
                    foo[i, j] = np.random.normal(0, 1)
        return np.dot(chol, np.dot(foo, np.dot(foo.T, chol.T)))

    def posterior(self, data):
        n = len(data)
        mean_data = np.mean(data, axis=0)
        sum_squares = np.sum([np.outer(x - mean_data, x - mean_data) for x in data], axis=0)
        mu_n = (self.lmbda * self.mu + n * mean_data) / (self.lmbda + n)
        lmbda_n = self.lmbda + n
        nu_n = self.nu + n
        dev = mean_data - self.mu
        # note: np.outer, not dev.T @ dev -- for a 1-D array the latter is a scalar
        psi_n = (self.psi + sum_squares +
                 self.lmbda * n / (self.lmbda + n) * np.outer(dev, dev))
        return NormalInverseWishartDistribution(mu_n, lmbda_n, nu_n, psi_n)
x = NormalInverseWishartDistribution(np.array([0,0])-3,1,3,np.eye(2))
samples = [x.sample() for _ in range(100)]
data = [np.random.multivariate_normal(mu,cov) for mu,cov in samples]
y = NormalInverseWishartDistribution(np.array([0,0]),1,3,np.eye(2))
z = y.posterior(data)
print('mu_n: {0}'.format(z.mu))
print('psi_n: {0}'.format(z.psi))
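The `posterior` method above is the standard Normal-inverse-Wishart conjugate update; written out:

```latex
\mu_n = \frac{\lambda_0 \mu_0 + n\,\bar{x}}{\lambda_0 + n},\qquad
\lambda_n = \lambda_0 + n,\qquad
\nu_n = \nu_0 + n,
```
```latex
\Psi_n = \Psi_0 + \sum_{i=1}^{n} (x_i - \bar{x})(x_i - \bar{x})^\top
       + \frac{\lambda_0 n}{\lambda_0 + n}\,(\bar{x} - \mu_0)(\bar{x} - \mu_0)^\top .
```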
from scipy.linalg import inv, cholesky

def wishartrand(nu, Lambda):
    d, _ = Lambda.shape
    chol = cholesky(Lambda, lower=True)  # scipy's cholesky is upper-triangular by default; the Bartlett construction needs the lower factor
    foo = np.empty((d, d))

    for i in range(d):
        for j in range(i+1):
            if i == j:
                foo[i, j] = np.sqrt(chi2.rvs(nu - (i+1) + 1))
            else:
                foo[i, j] = np.random.normal(0, 1)
    return np.dot(chol, np.dot(foo, np.dot(foo.T, chol.T)))
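A useful sanity check for any Wishart sampler is the first moment: draws from a Wishart with $\nu$ degrees of freedom and scale $\Lambda$ have mean $\nu\Lambda$. SciPy ships its own sampler (`scipy.stats.wishart`), which can serve as an independent reference here:

```python
import numpy as np
from scipy.stats import wishart

nu, Lam = 5, np.eye(2)
draws = wishart(df=nu, scale=Lam).rvs(size=2000, random_state=42)
# the sample mean should be close to nu * Lam = 5 * I
print(np.allclose(draws.mean(axis=0), nu * Lam, atol=0.4))
```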
sample = [wishartrand(nu_n, Lambda_n) for i in range(1000)]
np.mean(sample, axis=0)
Lambda_n
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First reload the data we generated in notmnist.ipynb.
Step2: Reformat into a shape that's more adapted to the models we're going to train
Step3: Problem 1
Step4: Problem 2
Step5: Problem 3
Step6: Problem 4
|
<ASSISTANT_TASK:>
Python Code:
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import numpy as np
import tensorflow as tf
from six.moves import cPickle as pickle
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
save = pickle.load(f)
train_dataset = save['train_dataset']
train_labels = save['train_labels']
valid_dataset = save['valid_dataset']
valid_labels = save['valid_labels']
test_dataset = save['test_dataset']
test_labels = save['test_labels']
del save # hint to help gc free up memory
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
image_size = 28
num_labels = 10
def reformat(dataset, labels):
dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)
# Map 2 to [0.0, 1.0, 0.0 ...], 3 to [0.0, 0.0, 1.0 ...]
labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)
return dataset, labels
train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
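The one-liner in `reformat` builds the one-hot labels by broadcasting `np.arange(num_labels)` against a column vector of label indices; in isolation:

```python
import numpy as np

labels = np.array([2, 0, 1])
one_hot = (np.arange(3) == labels[:, None]).astype(np.float32)
print(one_hot)
# [[0. 0. 1.]
#  [1. 0. 0.]
#  [0. 1. 0.]]
```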
def accuracy(predictions, labels):
return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
/ predictions.shape[0])
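`accuracy` is pure numpy, so it can be checked without building a graph; a tiny worked example (function reproduced for self-containment):

```python
import numpy as np

def accuracy(predictions, labels):
    return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
            / predictions.shape[0])

preds = np.array([[0.9, 0.1], [0.4, 0.6], [0.8, 0.2]])
labels = np.array([[1, 0], [0, 1], [0, 1]])
print(accuracy(preds, labels))  # 2 of 3 argmax rows agree
```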
batch_size = 128
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
weights1 = tf.Variable(
tf.truncated_normal([image_size * image_size, 1024]))
biases1 = tf.Variable(tf.zeros([1024]))
hidden1 = tf.nn.relu(tf.matmul(tf_train_dataset, weights1) + biases1)
weights2 = tf.Variable(
tf.truncated_normal([1024, num_labels]))
biases2 = tf.Variable(tf.zeros([num_labels]))
# Training computation.
logits = tf.matmul(hidden1, weights2) + biases2
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
# L2 regularization for the fully connected parameters.
regularizers = (tf.nn.l2_loss(weights1) + tf.nn.l2_loss(biases1) +
tf.nn.l2_loss(weights2) + tf.nn.l2_loss(biases2))
loss += 5e-4 * regularizers
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(tf.matmul(tf.nn.relu(tf.matmul(tf_valid_dataset, weights1) + biases1),
weights2) + biases2)
test_prediction = tf.nn.softmax(tf.matmul(tf.nn.relu(tf.matmul(tf_test_dataset, weights1) + biases1),
weights2) + biases2)
num_steps = 3001
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
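The `regularizers` term above uses `tf.nn.l2_loss`, which computes `sum(t ** 2) / 2` — no square root, and the factor of one half makes its gradient simply `t`. In plain numpy:

```python
import numpy as np

def l2_loss(t):
    # mirrors tf.nn.l2_loss: sum of squares over all elements, halved
    return np.sum(np.square(t)) / 2.0

w = np.array([3.0, 4.0])
print(l2_loss(w))  # 12.5
```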
batch_size = 12
SEED = 66478
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
weights1 = tf.Variable(tf.truncated_normal([image_size * image_size, 1024]))
biases1 = tf.Variable(tf.zeros([1024]))
weights2 = tf.Variable(tf.truncated_normal([1024, num_labels]))
biases2 = tf.Variable(tf.zeros([num_labels]))
def model(data, train=False):
hidden1 = tf.nn.relu(tf.matmul(data, weights1) + biases1)
return tf.matmul(hidden1, weights2) + biases2
# Training computation.
logits = model(tf_train_dataset, True)
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
# L2 regularization for the fully connected parameters.
regularizers = (tf.nn.l2_loss(weights1) + tf.nn.l2_loss(biases1) +
tf.nn.l2_loss(weights2) + tf.nn.l2_loss(biases2))
loss += 5e-4 * regularizers
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(model(tf_valid_dataset))
test_prediction = tf.nn.softmax(model(tf_test_dataset))
num_steps = 3001
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
batch_size = 12
SEED = 66478
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
weights1 = tf.Variable(tf.truncated_normal([image_size * image_size, 1024]))
biases1 = tf.Variable(tf.zeros([1024]))
weights2 = tf.Variable(tf.truncated_normal([1024, num_labels]))
biases2 = tf.Variable(tf.zeros([num_labels]))
def model(data, train=False):
hidden1 = tf.nn.relu(tf.matmul(data, weights1) + biases1)
if train:
hidden1 = tf.nn.dropout(hidden1, 0.5, seed=SEED)
return tf.matmul(hidden1, weights2) + biases2
# Training computation.
logits = model(tf_train_dataset, True)
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
# L2 regularization for the fully connected parameters.
regularizers = (tf.nn.l2_loss(weights1) + tf.nn.l2_loss(biases1) +
tf.nn.l2_loss(weights2) + tf.nn.l2_loss(biases2))
loss += 5e-4 * regularizers
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.1).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(model(tf_valid_dataset))
test_prediction = tf.nn.softmax(model(tf_test_dataset))
num_steps = 3001
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
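`tf.nn.dropout` keeps each activation with probability `keep_prob` and scales survivors by `1/keep_prob`, so the expected activation is unchanged between training and evaluation. A numpy sketch of this "inverted dropout" (the helper name and rng handling are illustrative, not TensorFlow's):

```python
import numpy as np

def dropout(x, keep_prob, rng):
    # zero out units with probability (1 - keep_prob); rescale the rest
    mask = rng.random(x.shape) < keep_prob
    return np.where(mask, x / keep_prob, 0.0)

rng = np.random.default_rng(0)
x = np.ones((1000, 100))
out = dropout(x, 0.5, rng)
print(round(out.mean(), 2))  # close to 1.0: the expectation is preserved
```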
batch_size = 128
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
weights1 = tf.Variable(tf.truncated_normal([image_size * image_size, 1024]))
biases1 = tf.Variable(tf.zeros([1024]))
weights2 = tf.Variable(tf.truncated_normal([1024, 1024]))
biases2 = tf.Variable(tf.zeros([1024]))
weights3 = tf.Variable(tf.truncated_normal([1024, num_labels]))
biases3 = tf.Variable(tf.zeros([num_labels]))
def model(data, train=False):
hidden1 = tf.nn.relu(tf.matmul(data, weights1) + biases1)
if train:
hidden1 = tf.nn.dropout(hidden1, 0.7, seed=SEED)
hidden2 = tf.matmul(hidden1, weights2) + biases2
if train:
hidden2 = tf.nn.dropout(hidden2, 0.7, seed=SEED)
return tf.matmul(hidden2, weights3) + biases3
# Training computation.
logits = model(tf_train_dataset, True)
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
# L2 regularization for the fully connected parameters.
regularizers = (tf.nn.l2_loss(weights1) + tf.nn.l2_loss(biases1)
+ tf.nn.l2_loss(weights2) + tf.nn.l2_loss(biases2)
+ tf.nn.l2_loss(weights3) + tf.nn.l2_loss(biases3))
loss += 5e-4 * regularizers
# Optimizer.
global_step = tf.Variable(0) # count the number of steps taken.
  learning_rate = tf.train.exponential_decay(0.01, global_step, 3000, 0.5, staircase=True)
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(model(tf_valid_dataset))
test_prediction = tf.nn.softmax(model(tf_test_dataset))
num_steps = 3001
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
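`tf.train.exponential_decay` computes `base_lr * decay_rate ** (global_step / decay_steps)`, flooring the exponent when `staircase=True` — so the schedule above halves the learning rate every 3000 steps. The arithmetic, standalone:

```python
def exponential_decay(base_lr, global_step, decay_steps, decay_rate, staircase=True):
    p = global_step / decay_steps
    if staircase:
        p = p // 1  # floor the exponent: the rate drops in discrete jumps
    return base_lr * decay_rate ** p

print(exponential_decay(0.01, 0, 3000, 0.5))     # 0.01
print(exponential_decay(0.01, 4500, 3000, 0.5))  # 0.005
```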
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Build a Shyft model
Step2: Create a collection of simulation cells
Step4: So the first step is to extract these from the netcdf file, and get them into the model.
Step5: The first keys, are probably quite clear
Step6: In this dictionary we define two variables
Step7: The region_model
Step8: You might be surprised to see the dimensions are 'cells', but recall that in Shyft everything is vectorized. Each 'cell' is an element within a domain, and each cell has associated variables
Step9: Exploring the region_model
Step10: You'll likely note that there are a number of intriguing functions, e.g. initialize_cell_environment or interpolate. But before we can go further, we need some more information. Perhaps you are wondering about forcing data. So far, we haven't said anything about model input or the time of the simulation; we've only set up a container that holds all the domain and model type information about our simulation.
Step11: So you can see that so far, each of the cells in the region_model contains information regarding its LandTypeFractions, geolocation, catchment_id, and area.
Step12: Adding forcing data to the region_model
Step13: Data Repositories
Step15: tmp_repo is now an instance of the Shyft CFDataRepository, and this will provide Shyft with the data when it sets up a simulation by reading the data directly out of the file referenced in the 'source'. But that is just one repository, and we defined many in fact. Furthermore, you may have a heterogenous collection of data sources -- if for example you want to get your temperature from station data, but radiation from model output. You could define different repositories in the ForcingData dictionary.
Step16: geots_repo is now a "geographic timeseries repository", meaning that the timeseries it holds are spatially aware of their x,y,z coordinates (see CFDataRepository for details). It also has several methods. One in particular we are interested in is the get_timeseries method. However, before we can proceed, we need to define the period for the simulation.
Step17: We now have an object that defines the time dimension for the simulation, and we will use this to initialize the region_model with the "environmental timeseries" or env_ts data. These containers will be given data from the appropriate repositories using the get_timeseries function. Following the templates in the shyft.repository.interfaces module, you'll see that the repositories should provide the capability to "screen" data based on time criteria and optionally geo_location criteria.
Step18: Now we have a new dictionary, called 'sources' that contains specialized Shyft api types specific to each forcing data type. You can look at one for example
Step19: We can explore further and see each element is in itself an api.PrecipitationSource, which has a timeseries (ts). Recall from the first tutorial that we can easily convert the timeseries.time_axis into datetime values for plotting.
Step20: Finally, the next step will take the data from the sources and connect it to our region_model.region_env class
Step22: And now our forcing data is connected to the region_model. We are almost ready to run a simulation. There is just one more step. We've connected the sources to the model, but remember that Shyft is a distributed modeling framework, and we've connected point data sources (in this case). So we need to get the data from the observed points to each cell. This is done through interpolation.
Step23: The next step is to set the intial states of the model using our last repository. This one, the GeneratedStateRepository will set empty default values.
Step24: Conduct the simulation
Step26: As a habit, we have a quick "sanity check" function to see if the model is runnable. It is recommended to have this function when you create 'run scripts'.
Step27: Okay, so the simulation was run. Now we may be interested in looking at some of the output. We'll take a brief summary glance in the next section, and save a deeper dive into the simulation results for another notebook.
Step28: Okay, that was simple. Let's look at the timeseries in some individual cells. The following is a bit of a contrived example, but it shows some aspects of the api. We'll plot the temperature series of all the cells in one sub-catchment, and color them by elevation. This doesn't necessarily show anything about the simulation, per se, but rather results from the interpolation step.
Step29: As we would expect from the temperature kriging method, we should find higher elevations have colder temperatures. As an exercise you could explore this relationship using a scatter plot.
Step30: Don't worry too much about the function for now, but do take note of the init_state object that we created. This is another container, this time it is a class that contains PTGSKStateWithId objects, which are specific to the model stack implemented in the simulation (in this case PTGSK). If we explore an individual state object, we'll see init_state contains, for each cell in our simulation, the state variables for each 'method' of the method stack.
|
<ASSISTANT_TASK:>
Python Code:
# Pure python modules and jupyter notebook functionality
# first you should import the third-party python modules which you'll use later on
# the first line enables that figures are shown inline, directly in the notebook
%matplotlib inline
import os
import datetime as dt
import numpy as np
from os import path
import sys
from matplotlib import pyplot as plt
from netCDF4 import Dataset
# try to auto-configure the path. This will work in the case
# that you have checked out the doc and data repositories
# at same level. Make sure this is done **before** importing shyft
shyft_data_path = path.abspath("../../../shyft-data")
if path.exists(shyft_data_path) and 'SHYFT_DATA' not in os.environ:
os.environ['SHYFT_DATA']=shyft_data_path
# shyft should be available either by it's install in python
# or by PYTHONPATH set by user prior to starting notebook.
# If you have cloned the repositories according to guidelines:
shyft_path=path.abspath('../../../shyft')
sys.path.insert(0,shyft_path)
from shyft import api
import shyft
import shyft.api.pt_gs_k
import shyft.repository
import shyft.repository.netcdf
print(shyft.__path__)
for env in os.environ:
if 'SHYFT' in env:
print('{0}:\n{1}'.format(env, os.environ[env]))
from shyft import shyftdata_dir
print(shyftdata_dir)
# load the data from the example datasets
cell_data = Dataset( os.path.join(shyftdata_dir, 'netcdf/orchestration-testdata/cell_data.nc'))
# plot the coordinates of the cell data provided
# fetch the x- and y-location of the cells
x = cell_data.variables['x'][:]
y = cell_data.variables['y'][:]
z = cell_data.variables['z'][:]
cid = cell_data.variables['catchment_id'][:]
# and make a quick catchment map...
# using a scatter plot of the cells
fig, ax = plt.subplots(figsize=(15,5))
cm = plt.cm.get_cmap('rainbow')
elv_col = ax.scatter(x, y, c=z, marker='.', s=40, lw=0, cmap=cm)
# cm = plt.cm.get_cmap('gist_gray')
# cid_col = ax.scatter(x, y, c=cid, marker='.', s=40, lw=0, alpha=0.4, cmap=cm)
plt.colorbar(elv_col).set_label('catchment elevation [m]')
# plt.colorbar(cid_col).set_label('catchment indices [id]')
plt.title('Nea Nidelva Catchment')
# print(set(cid))
print(cell_data.variables.keys())
# Let's first create a 'container' that will hold all of our model domains cells:
cell_data_vector = api.GeoCellDataVector()
# help(cell_data_vector)
#help(api.GeoPoint)
num_cells = cell_data.dimensions['cell'].size
for i in range(num_cells):
gp = api.GeoPoint(x[i], y[i], z[i]) # recall, we extracted x,y,z above
cid = cell_data.variables['catchment_id'][i]
cell_area = cell_data.variables['area'][i]
# land fractions:
glac = cell_data.variables['glacier-fraction'][i]
lake = cell_data.variables['lake-fraction'][i]
rsvr = cell_data.variables['reservoir-fraction'][i]
frst = cell_data.variables['forest-fraction'][i]
unsp = 1 - (glac + lake + rsvr + frst)
land_cover_frac = api.LandTypeFractions(glac, lake, rsvr, frst, unsp)
rad_fx = 0.9
# note, for now we need to make sure we cast some types to pure python, not numpy
geo_cell_data = api.GeoCellData(gp, float(cell_area), int(cid), rad_fx, land_cover_frac)
cell_data_vector.append(geo_cell_data)
cell_data.close()
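The land-type fractions passed to api.LandTypeFractions are meant to partition each cell, which is why `unsp` is computed as the remainder. The bookkeeping in isolation (example fractions chosen to be exactly representable in binary; they are not from the dataset):

```python
glac, lake, rsvr, frst = 0.25, 0.125, 0.0, 0.5
unsp = 1 - (glac + lake + rsvr + frst)
print(unsp)  # 0.125
print(glac + lake + rsvr + frst + unsp == 1.0)  # True
```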
# now get the forcing data ready.
# first create a region_environment object, the 'container' that will hold all
# the forcing data sources
re = api.ARegionEnvironment()
# map the variable names in the netcdf file to the source types
source_map = {'precipitation' : (api.PrecipitationSource, re.precipitation),
'radiation' : (api.RadiationSource, re.radiation),
'temperature' : (api.TemperatureSource, re.temperature),
'wind_speed' : (api.WindSpeedSource, re.wind_speed),
'relative_humidity' : (api.RelHumSource, re.rel_hum) }
# load the data from the example datasets
# station_met = Dataset( os.path.join(shyftdata_dir, 'netcdf/orchestration-testdata/stations_met.nc'))
# for station in station_met.groups.keys():
# stn = station_met.groups[station]
# print(stn)
# time = api.UtcTimeVector([int(t) for t in stn.variables['time'][:]])
# dt = time[1] - time[0] if len(time) > 1 else api.deltahours(1)
# x = stn.x
# y = stn.y
# z = stn.z
# gp = api.GeoPoint(x, y, z)
# for var, (source, source_vec) in source_map.items():
# if var in stn.variables.keys():
# data = stn.variables[var][:]
# time_axis = api.TimeAxis(int(time[0]), api.deltahours(dt), len(time))
# cts = api.TsFactory().create_time_point_ts(time_axis.total_period(), time, data, api.POINT_AVERAGE_VALUE)
# # add it to the precipitation source
# source_vec.append(source(gp, cts))
### ANOTHER APPROACH
re2 = api.ARegionEnvironment()
# precip = Dataset( os.path.join(shyftdata_dir, 'netcdf/orchestration-testdata/precipitation.nc'))
# rad = Dataset( os.path.join(shyftdata_dir, 'netcdf/orchestration-testdata/radiation.nc'))
# windsp = Dataset( os.path.join(shyftdata_dir, 'netcdf/orchestration-testdata/wind_speed.nc'))
# relhum = Dataset( os.path.join(shyftdata_dir, 'netcdf/orchestration-testdata/relative_humidity.nc'))
# temp = Dataset( os.path.join(shyftdata_dir, 'netcdf/orchestration-testdata/temperature.nc'))
# map the variable names in the netcdf file to the source types
source_map = {'precipitation' : ('precipitation.nc', api.PrecipitationSource, re2.precipitation),
'global_radiation' : ('radiation.nc', api.RadiationSource, re2.radiation),
'temperature' : ('temperature.nc', api.TemperatureSource, re2.temperature),
'wind_speed' : ('wind_speed.nc', api.WindSpeedSource, re2.wind_speed),
'relative_humidity' : ('relative_humidity.nc', api.RelHumSource, re2.rel_hum) }
for var, (file_name, source, source_vec) in source_map.items():
nci = Dataset( os.path.join(shyftdata_dir, 'netcdf/orchestration-testdata/' + file_name))
time = api.UtcTimeVector([int(t) for t in nci.variables['time'][:]])
dt = time[1] - time[0] if len(time) > 1 else api.deltahours(1)
for i in range(nci.dimensions['station'].size):
x = nci.variables['x'][i]
y = nci.variables['y'][i]
z = nci.variables['z'][i]
gp = api.GeoPoint(x, y, z)
data = nci.variables[var][:, i]
        # print(data)  # debug output; prints every station's full series
        time_axis = api.TimeAxis(int(time[0]), int(dt), len(time))  # dt is already a step length in seconds
        cts = api.TsFactory().create_time_point_ts(time_axis.total_period(), time, data, api.POINT_AVERAGE_VALUE)
        # add it to the precipitation source
        source_vec.append(source(gp, cts))
    nci.close()
re2.temperature.values_at_time(int(time[100]))  # stn only exists in the commented-out block; use the loop's time vector
# helpers lifted from a test class; `self` dropped so they run at notebook top level
def create_dummy_region_environment(time_axis, mid_point):
    re = api.ARegionEnvironment()
    re.precipitation.append(_create_constant_geo_ts(api.PrecipitationSource, mid_point, time_axis.total_period(), 5.0))
    re.temperature.append(_create_constant_geo_ts(api.TemperatureSource, mid_point, time_axis.total_period(), 10.0))
    re.wind_speed.append(_create_constant_geo_ts(api.WindSpeedSource, mid_point, time_axis.total_period(), 2.0))
    re.rel_hum.append(_create_constant_geo_ts(api.RelHumSource, mid_point, time_axis.total_period(), 0.7))
    re.radiation = api.RadiationSourceVector()  # just for testing BW compat
    re.radiation.append(_create_constant_geo_ts(api.RadiationSource, mid_point, time_axis.total_period(), 300.0))
    return re

def _create_constant_geo_ts(geo_ts_type, geo_point, utc_period, value):
    """Create a time point ts, with one value at the start
    of the supplied utc_period.
    """
    tv = api.UtcTimeVector()
    tv.push_back(utc_period.start)
    vv = api.DoubleVector()
    vv.push_back(value)
    cts = api.TsFactory().create_time_point_ts(utc_period, tv, vv, api.POINT_AVERAGE_VALUE)
    return geo_ts_type(geo_point, cts)
# next, create the simulation dictionary
RegionDict = {'region_model_id': 'demo', #a unique name identifier of the simulation
'domain': {'EPSG': 32633,
'nx': 400,
'ny': 80,
'step_x': 1000,
'step_y': 1000,
'lower_left_x': 100000,
'lower_left_y': 6960000},
'repository': {'class': shyft.repository.netcdf.cf_region_model_repository.CFRegionModelRepository,
'params': {'data_file': 'netcdf/orchestration-testdata/cell_data.nc'}},
}
from shyft.api.pt_gs_k import PTGSKModel
ModelDict = {'model_t': PTGSKModel, # model to construct
'model_parameters': {
'actual_evapotranspiration':{
'ae_scale_factor': 1.5},
'gamma_snow':{
'calculate_iso_pot_energy': False,
'fast_albedo_decay_rate': 6.752787747748934,
'glacier_albedo': 0.4,
'initial_bare_ground_fraction': 0.04,
'max_albedo': 0.9,
'max_water': 0.1,
'min_albedo': 0.6,
'slow_albedo_decay_rate': 37.17325702015658,
'snow_cv': 0.4,
'tx': -0.5752881492890207,
'snowfall_reset_depth': 5.0,
'surface_magnitude': 30.0,
'wind_const': 1.0,
'wind_scale': 1.8959672005350063,
'winter_end_day_of_year': 100},
'kirchner':{
'c1': -3.336197322290274,
'c2': 0.33433661533385695,
'c3': -0.12503959620315988},
'precipitation_correction': {
'scale_factor': 1.0},
'priestley_taylor':{'albedo': 0.2,
'alpha': 1.26},
}
}
region_repo = CFRegionModelRepository(RegionDict, ModelDict)
cell_data_file = os.path.join(shyftdata_dir, 'netcdf/orchestration-testdata/cell_data.nc')
print(cell_data_file)
cell_data = Dataset(cell_data_file)
print(cell_data)
region_model = region_repo.get_region_model('demo')
region_model.bounding_region.epsg()
cell_0 = region_model.cells[0]
print(cell_0.geo)
#just so we don't see 'private' attributes
print([d for d in dir(cell_0.env_ts) if '_' not in d[0]])
region_model.size()
from shyft.repository.netcdf.cf_geo_ts_repository import CFDataRepository
ForcingData = {'sources': [
{'repository': shyft.repository.netcdf.cf_geo_ts_repository.CFDataRepository,
'params': {'epsg': 32633,
'selection_criteria': None,
'stations_met': 'netcdf/orchestration-testdata/precipitation.nc'},
'types': ['precipitation']},
{'repository': shyft.repository.netcdf.cf_geo_ts_repository.CFDataRepository,
'params': {'epsg': 32633,
'selection_criteria': None,
'stations_met': 'netcdf/orchestration-testdata/temperature.nc'},
'types': ['temperature']},
{'params': {'epsg': 32633,
'selection_criteria': None,
'stations_met': 'netcdf/orchestration-testdata/wind_speed.nc'},
'repository': shyft.repository.netcdf.cf_geo_ts_repository.CFDataRepository,
'types': ['wind_speed']},
{'repository': shyft.repository.netcdf.cf_geo_ts_repository.CFDataRepository,
'params': {'epsg': 32633,
'selection_criteria': None,
'stations_met': 'netcdf/orchestration-testdata/relative_humidity.nc'},
'types': ['relative_humidity']},
{'repository': shyft.repository.netcdf.cf_geo_ts_repository.CFDataRepository,
'params': {'epsg': 32633,
'selection_criteria': None,
'stations_met': 'netcdf/orchestration-testdata/radiation.nc'},
'types': ['radiation']}]
}
# get the temperature sources:
tmp_sources = [source for source in ForcingData['sources'] if 'temperature' in source['types']]
# in this example there is only one
t0 = tmp_sources[0]
# We will now instantiate the repository with the parameters that are provided
# in the dictionary.
# Note the 'call' structure expects params to contain keyword arguments, and these
# can be anything you want depending on how you create your repository
tmp_repo = t0['repository'](**t0['params'])
# we'll actually create a collection of repositories, as we have different input types.
from shyft.repository.geo_ts_repository_collection import GeoTsRepositoryCollection
def construct_geots_repo(datasets_config, epsg=None):
"""Iterates over the different sources that are provided
and prepares the repository to read the data for each type."""
geo_ts_repos = []
src_types_to_extract = []
for source in datasets_config['sources']:
if epsg is not None:
source['params'].update({'epsg': epsg})
# note that here we are instantiating the different source repositories
# to place in the geo_ts list
geo_ts_repos.append(source['repository'](**source['params']))
src_types_to_extract.append(source['types'])
return GeoTsRepositoryCollection(geo_ts_repos, src_types_per_repo=src_types_to_extract)
# instantiate the repository
geots_repo = construct_geots_repo(ForcingData)
# next, create the simulation dictionary
TimeDict = {'start_datetime': "2013-09-01T00:00:00",
'run_time_step': 86400, # seconds, daily
'number_of_steps': 365 # one year
}
def time_axis_from_dict(t_dict)->api.TimeAxis:
utc = api.Calendar()
sim_start = dt.datetime.strptime(t_dict['start_datetime'], "%Y-%m-%dT%H:%M:%S")
utc_start = utc.time(sim_start.year, sim_start.month, sim_start.day,\
sim_start.hour, sim_start.minute, sim_start.second)
tstep = t_dict['run_time_step']
nstep = t_dict['number_of_steps']
time_axis = api.TimeAxis(utc_start, tstep, nstep)
return time_axis
ta_1 = time_axis_from_dict(TimeDict)
print(f'1. {ta_1} \n {ta_1.total_period()}')
# or shyft-wise, ready tested, precise and less effort, two lines
utc = api.Calendar() # 'Europe/Oslo' can be passed to calendar for time-zone
ta_2 = api.TimeAxis(utc.time(2013, 9, 1), api.deltahours(24), 365)
print(f'2. {ta_2} \n {ta_2.total_period()}')
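The start-time handling in `time_axis_from_dict` can be sanity-checked without Shyft at all; this sketch (standard library only, `iso_to_utc_epoch` is an illustrative helper, not part of the Shyft API) converts the same ISO timestamp string to seconds since the UTC epoch:

```python
import calendar
import datetime as dt

def iso_to_utc_epoch(iso_string):
    # parse an ISO-8601 timestamp and return seconds since the UTC epoch
    t = dt.datetime.strptime(iso_string, "%Y-%m-%dT%H:%M:%S")
    return calendar.timegm(t.timetuple())

start = iso_to_utc_epoch("2013-09-01T00:00:00")
print(start)  # 1377993600
```

This is the same epoch value the `api.Calendar` call produces for 2013-09-01 in UTC.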
# we can extract our "bounding box" based on the `region_model` we set up
bbox = region_model.bounding_region.bounding_box(region_model.bounding_region.epsg())
period = ta_1.total_period() #just defined above
# required forcing data sets we want to retrieve
geo_ts_names = ("temperature", "wind_speed", "precipitation",
"relative_humidity", "radiation")
sources = geots_repo.get_timeseries( geo_ts_names, period, geo_location_criteria=bbox )
prec = sources['precipitation']
print(len(prec))
fig, ax = plt.subplots(figsize=(15,10))
for pr in prec:
t,p = [dt.datetime.utcfromtimestamp(t_.start) for t_ in pr.ts.time_axis], pr.ts.values
ax.plot(t,p, label=pr.mid_point().x) #uid is empty now, but we reserve for later use
fig.autofmt_xdate()
ax.legend(title="Precipitation Input Sources")
ax.set_ylabel("precip[mm/hr]")
def get_region_environment(sources):
region_env = api.ARegionEnvironment()
region_env.temperature = sources["temperature"]
region_env.precipitation = sources["precipitation"]
region_env.radiation = sources["radiation"]
region_env.wind_speed = sources["wind_speed"]
region_env.rel_hum = sources["relative_humidity"]
return region_env
region_model.region_env = get_region_environment(sources)
from shyft.repository.interpolation_parameter_repository import InterpolationParameterRepository
class interp_config(object):
"""a simple class to provide the interpolation parameters"""
def __init__(self):
self.interp_params = {'precipitation': {'method': 'idw',
'params': {'distance_measure_factor': 1.0,
'max_distance': 600000.0,
'max_members': 10,
'scale_factor': 1.02}},
'radiation': {'method': 'idw',
'params': {'distance_measure_factor': 1.0,
'max_distance': 600000.0,
'max_members': 10}},
'relative_humidity': {'method': 'idw',
'params': {'distance_measure_factor': 1.0,
'max_distance': 600000.0,
'max_members': 10}},
'temperature': {'method': 'btk',
'params': {'nug': 0.5,
'range': 200000.0,
'sill': 25.0,
'temperature_gradient': -0.6,
'temperature_gradient_sd': 0.25,
'zscale': 20.0}},
'wind_speed': {'method': 'idw',
'params': {'distance_measure_factor': 1.0,
'max_distance': 600000.0,
'max_members': 10}}}
def interpolation_parameters(self):
return self.interp_params
ip_conf = interp_config()
ip_repo = InterpolationParameterRepository(ip_conf)
region_model.interpolation_parameter = ip_repo.get_parameters(0) #just a '0' for now
from shyft.repository.generated_state_repository import GeneratedStateRepository
init_values = {'gs': {'acc_melt': 0.0,
'albedo': 0.65,
'alpha': 6.25,
'iso_pot_energy': 0.0,
'lwc': 0.1,
'sdc_melt_mean': 0.0,
'surface_heat': 30000.0,
'temp_swe': 0.0},
'kirchner': {'q': 0.01}}
state_generator = GeneratedStateRepository(region_model)#, init_values=init_values)
# we need the state_repository to have the same size as the model
#state_repo.n = region_model.size()
# there is only 1 state (indexed '0')
s0 = state_generator.get_state(0)
not_applied_list=region_model.state.apply_state( # apply state set the current state according to arguments
cell_id_state_vector=s0, # ok, easy to get
cids=[] # empty means apply all, if we wanted to only apply state for specific catchment-ids, this is where to put them
)
assert len(not_applied_list)==0, 'Ensure all states was matched and applied to the model'
region_model.initial_state=region_model.current_state # now we stash the current state to the initial state
state_generator.find_state?
region_model.initialize_cell_environment(ta_1)
def runnable(reg_mod):
"""Return True if the model is properly configured.
Note: this check depends on your model's input data requirements."""
return all((reg_mod.initial_state.size() > 0, reg_mod.time_axis.size() > 0,
all([len(getattr(reg_mod.region_env, attr)) > 0 for attr in
("temperature", "wind_speed", "precipitation", "rel_hum", "radiation")])))
# run the model, e.g. as you may configure it in a script:
if runnable(region_model):
region_model.interpolate(region_model.interpolation_parameter, region_model.region_env)
region_model.revert_to_initial_state()
region_model.run_cells()
else:
print('Something wrong with model configuration.')
# Here we are going to extact data from the simulation.
# We start by creating a list to hold discharge for each of the subcatchments.
# Then we'll get the data from the region_model object
import pandas as pd
# mapping of internal catch ID to catchment
catchment_id_map = region_model.catchment_id_map
# First get the time-axis which we'll use as the index for the data frame
ta = region_model.time_axis
# and convert it to datetimes
index = [dt.datetime.utcfromtimestamp(p.start) for p in ta]
# Now we'll add all the discharge series for each catchment
data = {}
for cid in catchment_id_map:
# get the discharge time series for the subcatchment
q_ts = region_model.statistics.discharge([int(cid)])
data[cid] = q_ts.values.to_numpy()
df = pd.DataFrame(data, index=index)
# we can simply use:
ax = df.plot(figsize=(20,15))
ax.legend(title="Catch. ID")
ax.set_ylabel("discharge [m3 s-1]")
from matplotlib.cm import jet as jet
from matplotlib.colors import Normalize
# get all the cells for one sub-catchment with 'id' == 1228
c1228 = [c for c in region_model.cells if c.geo.catchment_id() == 1228]
# for plotting, create an mpl normalizer based on min,max elevation
elv = [c.geo.mid_point().z for c in c1228]
norm = Normalize(min(elv), max(elv))
#plot with line color a function of elevation
fig, ax = plt.subplots(figsize=(15,10))
# here we are cycling through each of the cells in c1228
for dat,elv in zip([c.env_ts.temperature.values for c in c1228], [c.mid_point().z for c in c1228]):
ax.plot(dat, color=jet(norm(elv)), label=int(elv))
# the following is just to plot the legend entries and not related to Shyft
handles, labels = ax.get_legend_handles_labels()
# sort by labels
import operator
hl = sorted(zip(handles, labels),
key=operator.itemgetter(1))
handles2, labels2 = zip(*hl)
# show legend, but only every fifth entry
ax.legend(handles2[::5], labels2[::5], title='Elevation [m]')
# create a function to read the states from the state repository
def get_init_state_from_repo(initial_state_repo_, region_model_id_=None, timestamp=None):
state_id = 0
if hasattr(initial_state_repo_, 'n'): # No stored state, generated on-the-fly
initial_state_repo_.n = region_model.size()
else:
states = initial_state_repo_.find_state(
region_model_id_criteria=region_model_id_,
utc_period_criteria=timestamp)
if len(states) > 0:
state_id = states[0].state_id # most_recent_state i.e. <= start time
else:
raise Exception('No initial state matching criteria.')
return initial_state_repo_.get_state(state_id)
init_state = get_init_state_from_repo(state_generator, timestamp=region_model.time_axis.start)
def print_pub_attr(obj):
#only public attributes
print(f'{obj.__class__.__name__}:\t', [attr for attr in dir(obj) if not attr.startswith('_')])
print(len(init_state))
init_state_cell0 = init_state[0]
# the identifier
print_pub_attr(init_state_cell0.id)
# gam snow states
print_pub_attr(init_state_cell0.state.gs)
#init_state_cell0.kirchner states
print_pub_attr(init_state_cell0.state.kirchner)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Sockets are the endpoints of a bidirectional communications channel.
Step2: To check if the socket is actually created one can check using the command netstat -nlp on the command line.
Step3: Blocking Execution
Step4: So the threads execute concurrently independent of each other.
|
<ASSISTANT_TASK:>
Python Code:
import socket
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# This creates a socket
# AF_INET => family ipv4.
# SOCK_STREAM => TCP protocol.
import socket
def client_handler(client_sock):
# Do things here
# .send() Takes byte type object
# b' it indicates that the literal should become a bytes literal in Python 3
client_sock.send(b'Thank you for connecting')
# Close the connection with the client
client_sock.close()
# creates the socket
server = socket.socket()
PORT = 1234
# bind the socket to the port.
server.bind(('localhost',PORT)) # takes a tuple
# at most 5 connections in the queue
server.listen(5)
print("Listening for clients to connect")
while True:
# Establish connection with client.
client, addr = server.accept() # waits until a connection arrives i.e (blocking).
print('Got connection from', addr)
client_handler(client)
from socket import *
PORT = 1234
client = socket()
address=("localhost", PORT)
# connect to the given address
client.connect(address)
print(client.recv(1024))
client.close()
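The client/server round trip above needs two processes running at once; to exercise the same send/recv mechanics in a single process, `socket.socketpair()` (available on most platforms) hands back two already-connected sockets:

```python
import socket

# two connected sockets, no bind/listen/accept needed
a, b = socket.socketpair()
a.send(b'Thank you for connecting')
message = b.recv(1024)
print(message)  # b'Thank you for connecting'
a.close()
b.close()
```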
from threading import *
def lift_off(number):
for i in range(3,0, -1):
print("#"+ str(i) + "("+ str(number) + ") ")
for x in range(7):
Thread(target=lift_off, args=(x,)).start()
# Inheriting from the Thread class
from threading import *
class lift_off(Thread):
def __init__(self, number):
Thread.__init__(self)
self.number = number
def run(self):
for i in range(3,0, -1):
print("#"+ str(i) + "("+ str(self.number) + ") ")
for x in range(7):
lift_off(x).start()
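One caveat with the fire-and-forget starts above: the main thread never waits for the workers to finish. A minimal sketch using `join()` (the `worker` function here is an illustrative stand-in) collects results deterministically:

```python
from threading import Thread

results = []

def worker(number):
    results.append(number * number)

threads = [Thread(target=worker, args=(n,)) for n in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()  # block until every worker has finished
print(sorted(results))  # [0, 1, 4, 9]
```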
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now let's re-create Figure 2
Step2: Now let's see what the performance is as we vary different parameters. To do this, I'm using pytry, a simple Python package for running experiments and gathering data. (You can install it with pip install pytry)
Step3: Now let's see how the spike count varies as we adjust dt. We run the experiment varying dt and it will save data in a directory called exp2.
Step4: And we can now plot the data.
Step5: That looks great! Now let's try varying w_fac (the weight for the facilitation input).
Step6: And let's also check varying w_trig. This should give the identical results as varying w_fac, since they are just multiplied together.
Step7: Now let's vary the time constant for the trigger synapse.
Step8: And finally, let's very the time constant for the facilitation synapse.
|
<ASSISTANT_TASK:>
Python Code:
# the facilitation spikes
def stim_1_func(t):
index = int(t/0.001)
if index in [100, 1100, 2100]:
return 1000
else:
return 0
# the trigger spikes
def stim_2_func(t):
index = int(t/0.001)
if index in [90, 1500, 2150]:
return 1000
else:
return 0
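The stimulus nodes above encode spikes as a single 1000-valued sample in specific millisecond bins; re-implementing the same idea stand-alone (the name `spike_train` is an assumption for illustration) makes the timing easy to check:

```python
def spike_train(t, spike_indices=(100, 1100, 2100), dt=0.001):
    # 1000 (one spike, scaled by 1/dt) when t falls in a spike bin, else 0
    return 1000 if int(t / dt) in spike_indices else 0

print(spike_train(0.1005))  # 1000 (bin index 100)
print(spike_train(0.0995))  # 0    (bin index 99)
```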
# the operation we're going to do on the two different inputs to the sEMD neuron
def dendrite_func(t, x):
return x[0]*x[1]
# the trigger weight (w_e2 in the paper)
w = 2.0
model = nengo.Network()
with model:
stim1 = nengo.Node(stim_1_func)
stim2 = nengo.Node(stim_2_func)
# this will handle the non-linearity we need for the input
dendrite = nengo.Node(dendrite_func, size_in=2)
# the facilitation input gets a low-pass filter of 10ms but the trigger is unfiltered
nengo.Connection(stim1, dendrite[0], synapse=0.01)
nengo.Connection(stim2, dendrite[1], transform=w, synapse=None)
# one simple leaky integrate-and-fire neuron
ens = nengo.Ensemble(n_neurons=1, dimensions=1, gain=np.ones(1), bias=np.zeros(1))
# a low-pass filter of 5 ms for the output from the dendritic nonlinearity
nengo.Connection(dendrite, ens.neurons, synapse=0.005)
# now let's probe a bunch of data so we can plot things
pd = nengo.Probe(dendrite, synapse=0.005)
p1_n = nengo.Probe(stim1, synapse=None)
p1 = nengo.Probe(stim1, synapse=0.01)
p2 = nengo.Probe(stim2, synapse=None)
pn = nengo.Probe(ens.neurons)
sim = nengo.Simulator(model)
with sim:
sim.run(3)
plt.figure(figsize=(14,5))
plt.subplot(3,1,1)
import nengo.utils.matplotlib
nengo.utils.matplotlib.rasterplot(sim.trange(), np.hstack([sim.data[p1_n], sim.data[p2]]))
plt.xlim(0, sim.trange()[-1])
plt.ylim(0.5,2.5)
plt.subplot(3, 1, 2)
plt.plot(sim.trange(), sim.data[p1])
plt.plot(sim.trange(), sim.data[pd])
plt.xlim(0, sim.trange()[-1])
plt.subplot(3, 1, 3)
plt.plot(sim.trange(), sim.data[pn])
plt.xlim(0, sim.trange()[-1])
plt.show()
import pytry
class SEMDTrial(pytry.PlotTrial):
def params(self):
self.param('trigger weight', w_trig=1.0)
self.param('facilitation weight', w_fac=1.0)
self.param('time delay between facilitation spike and trigger spike', dt=0)
self.param('facilitation synapse', syn_fac=0.01)
self.param('trigger synapse', syn_trig=0.005)
def evaluate(self, p, plt):
model = nengo.Network()
with model:
stim1 = nengo.Node(lambda t: 1000 if int(t/0.001)==100 else 0)
stim2 = nengo.Node(lambda t: 1000 if int((t-p.dt)/0.001)==100 else 0)
dendrite = nengo.Node(lambda t, x: x[0]*x[1], size_in=2)
nengo.Connection(stim1, dendrite[0], transform=p.w_fac, synapse=p.syn_fac)
nengo.Connection(stim2, dendrite[1], transform=p.w_trig, synapse=None)
ens = nengo.Ensemble(n_neurons=1, dimensions=1, gain=np.ones(1), bias=np.zeros(1))
nengo.Connection(dendrite, ens.neurons, synapse=p.syn_trig)
pn = nengo.Probe(ens.neurons)
sim = nengo.Simulator(model, progress_bar=False)
with sim:
sim.run(0.1+p.dt+0.2)
if plt:
plt.plot(sim.trange(), sim.data[pn]) # neuron output
plt.axvline(0.1, color='g') # facilitation spike
plt.axvline(0.1+p.dt, color='b') # trigger spike
spike_count = np.sum(sim.data[pn])/1000
return dict(spike_count=spike_count)
SEMDTrial().run(plt=True, dt=0.02)
dts = (np.arange(99)+1)*0.001
for dt in dts:
SEMDTrial().run(verbose=False, dt=dt, data_dir='exp2')
df = pandas.DataFrame(pytry.read('exp2'))
seaborn.lineplot('dt', 'spike_count', data=df)
dts = (np.arange(0,100,5)+1)*0.001
ws = [0.1, 0.2, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0]
for dt in dts:
for w_fac in ws:
SEMDTrial().run(verbose=False, dt=dt, w_fac=w_fac, data_dir='exp3')
df = pandas.DataFrame(pytry.read('exp3'))
plt.figure(figsize=(14,7))
seaborn.pointplot('dt', 'spike_count', hue='w_fac', data=df)
plt.xticks(range(len(dts)), ['%g'%x for x in dts], rotation='vertical')
plt.show()
dts = (np.arange(0,100,5)+1)*0.001
ws = [0.1, 0.2, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0]
for dt in dts:
for w_trig in ws:
SEMDTrial().run(verbose=False, dt=dt, w_trig=w_trig, data_dir='exp4')
df = pandas.DataFrame(pytry.read('exp4'))
plt.figure(figsize=(14,7))
seaborn.pointplot('dt', 'spike_count', hue='w_trig', data=df)
plt.xticks(range(len(dts)), ['%g'%x for x in dts], rotation='vertical')
plt.show()
dts = (np.arange(0,100,5)+1)*0.001
syns = [0.01, 0.02, 0.05]
for dt in dts:
for syn_trig in syns:
SEMDTrial().run(verbose=False, dt=dt, syn_trig=syn_trig, data_dir='exp5')
df = pandas.DataFrame(pytry.read('exp5'))
plt.figure(figsize=(14,7))
seaborn.pointplot('dt', 'spike_count', hue='syn_trig', data=df)
plt.xticks(range(len(dts)), ['%g'%x for x in dts], rotation='vertical')
plt.show()
dts = (np.arange(0,100,5)+1)*0.001
syns = [0.001, 0.002, 0.005, 0.01, 0.02, 0.05, 0.1, 0.2]
for dt in dts:
for syn_fac in syns:
SEMDTrial().run(verbose=False, dt=dt, syn_fac=syn_fac, data_dir='exp6')
df = pandas.DataFrame(pytry.read('exp6'))
plt.figure(figsize=(14,7))
seaborn.pointplot('dt', 'spike_count', hue='syn_fac', data=df)
plt.xticks(range(len(dts)), ['%g'%x for x in dts], rotation='vertical')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Use star value of different reviews to filter comments.
Step2: <b>Cleaning and Preprocessing</b>
Step3: <b>Preparing Document-Term Matrix</b>
Step4: <b>Running LDA Model (Batch Wise LDA)</b>
Step5: <b>Examining the results</b>
Step6: Each generated topic is separated by a comma. Within each topic are the five most probable words to appear in that topic. The best topic terms will be the intersection of all three batches. Some things to think about, for the good app, the comments have common features like
Step7: For the bad apps, the results show that most topics include the word "time". We can infer that customers are dissatisfied with how smoothly these apps run. The updated versions of these apps sometimes fail to work, possibly because of crashes, and compared with the previous versions the updates may simply be poorly designed.
|
<ASSISTANT_TASK:>
Python Code:
good_app = app.loc[app['weighted_rating'] >=4.0]
bad_app = app.loc[app['weighted_rating'] <=2.5]
good_app = good_app.reset_index(drop=True)
bad_app = bad_app.reset_index(drop=True)
category = app['category']
cate_list = []
for i in category.unique():
cate = i.lower()
cate_list.append(cate)
first_good= good_app.loc[good_app['review1_star']>=4].reset_index(drop=True)['review1']
second_good = good_app.loc[good_app['review2_star']>=4].reset_index(drop=True)['review2']
third_good = good_app.loc[good_app['review3_star']>=4].reset_index(drop=True)['review3']
first_bad = bad_app.loc[bad_app['review1_star']<=2.5].reset_index(drop=True)['review1']
second_bad = bad_app.loc[bad_app['review2_star']<=2.5].reset_index(drop=True)['review2']
third_bad = bad_app.loc[bad_app['review3_star']<=2.5].reset_index(drop=True)['review3']
good_rev = first_good.append(second_good)
all_good = good_rev.append(third_good)
bad_rev = first_bad.append(second_bad)
all_bad = bad_rev.append(third_bad)
stop = set(stopwords.words('english')+[u'one',u'app',u'it',u'dont',u"i",u"'s","''","``",u'use',u'used',u'using',u'love',
u'would',u'great',u'app.',u'like',u'lot']+ cate_list)
exclude = set(string.punctuation)
lemma = WordNetLemmatizer()
def stem(tokens,stemmer = PorterStemmer().stem):
return [stemmer(w.lower()) for w in tokens if w not in stop]
def clean(doc):
stop_free = " ".join([i for i in doc.lower().split() if i not in stop])
punc_free = ''.join(ch for ch in stop_free if ch not in exclude)
normalized = " ".join(lemma.lemmatize(word) for word in punc_free.split())
tokenize = nltk.word_tokenize
to_token = stem(tokenize(normalized))
tags = nltk.pos_tag(to_token)
dt_tags = [t for t in tags if t[1] in ["DT", "MD", "VBP","IN", "JJ","VB"]]
for tag in dt_tags:
normalized = " ".join(tok for tok in to_token if tok not in tag[0])
return normalized
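The `clean` pipeline above depends on NLTK corpora being downloaded; its core stop-word and punctuation stripping can be sketched with the standard library alone (the small `STOP` set and `basic_clean` are illustrative assumptions, not the full pipeline — no lemmatizing or POS filtering here):

```python
import string

STOP = {"one", "app", "it", "i", "would", "great", "like", "lot"}

def basic_clean(doc):
    # lower-case, drop punctuation, and remove stop words (no stemming)
    no_punct = "".join(ch for ch in doc.lower() if ch not in string.punctuation)
    return " ".join(w for w in no_punct.split() if w not in STOP)

print(basic_clean("I would like this great App!"))  # "this"
```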
doc_clean_g1 = [clean(doc).split() for doc in first_good]
doc_clean_g2 = [clean(doc).split() for doc in second_good]
doc_clean_g3 = [clean(doc).split() for doc in third_good]
doc_clean_b1 = [clean(doc).split() for doc in first_bad]
doc_clean_b2 = [clean(doc).split() for doc in second_bad]
doc_clean_b3 = [clean(doc).split() for doc in third_bad]
doc_clean_good = [clean(doc).split() for doc in all_good]
doc_clean_bad = [clean(doc).split() for doc in all_bad]
# Creating the term dictionary of our courpus, where every unique term is assigned an index.
dictionary_g1 = corpora.Dictionary(doc_clean_g1)
dictionary_g2 = corpora.Dictionary(doc_clean_g2)
dictionary_g3 = corpora.Dictionary(doc_clean_g3)
dictionary_b1 = corpora.Dictionary(doc_clean_b1)
dictionary_b2 = corpora.Dictionary(doc_clean_b2)
dictionary_b3 = corpora.Dictionary(doc_clean_b3)
dictionary_good = corpora.Dictionary(doc_clean_good)
dictionary_bad = corpora.Dictionary(doc_clean_bad)
# Converting list of documents (corpus) into Document Term Matrix using dictionary prepared above.
doc_term_matrix_g1 = [dictionary_g1.doc2bow(doc) for doc in doc_clean_g1]
doc_term_matrix_g2 = [dictionary_g2.doc2bow(doc) for doc in doc_clean_g2]
doc_term_matrix_g3 = [dictionary_g3.doc2bow(doc) for doc in doc_clean_g3]
doc_term_matrix_b1 = [dictionary_b1.doc2bow(doc) for doc in doc_clean_b1]
doc_term_matrix_b2 = [dictionary_b2.doc2bow(doc) for doc in doc_clean_b2]
doc_term_matrix_b3 = [dictionary_b3.doc2bow(doc) for doc in doc_clean_b3]
doc_term_matrix_good = [dictionary_good.doc2bow(doc) for doc in doc_clean_good]
doc_term_matrix_bad = [dictionary_bad.doc2bow(doc) for doc in doc_clean_bad]
# Creating the object for LDA model using gensim library
Lda = gensim.models.ldamodel.LdaModel
# Running and Trainign LDA model on the document term matrix.
ldamodel_g1 = Lda(doc_term_matrix_g1, num_topics=3, id2word = dictionary_g1, passes=50)
ldamodel_g2 = Lda(doc_term_matrix_g2, num_topics=3, id2word = dictionary_g2, passes=50)
ldamodel_g3 = Lda(doc_term_matrix_g3, num_topics=3, id2word = dictionary_g3, passes=50)
ldamodel_b1 = Lda(doc_term_matrix_b1, num_topics=3, id2word = dictionary_b1, passes=50)
ldamodel_b2 = Lda(doc_term_matrix_b2, num_topics=3, id2word = dictionary_b2, passes=50)
ldamodel_b3 = Lda(doc_term_matrix_b3, num_topics=3, id2word = dictionary_b3, passes=50)
print(ldamodel_g1.print_topics(num_topics=3, num_words=5))
print(ldamodel_g2.print_topics(num_topics=3, num_words=5))
print(ldamodel_g3.print_topics(num_topics=3, num_words=5))
print(ldamodel_b1.print_topics(num_topics=3, num_words=5))
print(ldamodel_b2.print_topics(num_topics=3, num_words=5))
print(ldamodel_b3.print_topics(num_topics=3, num_words=5))
ldamodel_good = Lda(doc_term_matrix_good, num_topics=10, id2word = dictionary_good, passes=20)
ldamodel_bad = Lda(doc_term_matrix_bad, num_topics=10, id2word = dictionary_bad, passes=20)
print(ldamodel_good.print_topics(num_topics=5, num_words=3))
print(ldamodel_bad.print_topics(num_topics=5, num_words=3))
import pyLDAvis
import pyLDAvis.gensim
pyLDAvis.enable_notebook()
good_rev = pyLDAvis.gensim.prepare(ldamodel_good, doc_term_matrix_good, dictionary_good)
bad_rev = pyLDAvis.gensim.prepare(ldamodel_bad, doc_term_matrix_bad, dictionary_bad)
pyLDAvis.save_html(good_rev,"good_rev.html")
good_rev
pyLDAvis.save_html(bad_rev,"bad_rev.html")
bad_rev
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Map Basics
Step2: The map above (run the code cell if you don't see it) shows a disconnected network of 10 intersections. The two intersections on the left are connected to each other but they are not connected to the rest of the road network.
Step3: Roads
Step4: Advanced Visualizations
Step5: Writing your algorithm
Step6: Testing your Code
|
<ASSISTANT_TASK:>
Python Code:
# Run this cell first!
from helpers import Map, load_map, show_map
from student_code import shortest_path
%load_ext autoreload
%autoreload 2
map_10 = load_map('map-10.pickle')
show_map(map_10)
map_10.intersections
# this shows that intersection 0 connects to intersections 7, 6, and 5
map_10.roads[8]
# This shows the full connectivity of the map
map_10.roads
# map_40 is a bigger map than map_10
map_40 = load_map('map-40.pickle')
show_map(map_40)
map_40.roads[24]
# run this code, note the effect of including the optional
# parameters in the function call.
show_map(map_40, start=5, goal=34, path=[5,16,37,12,34])
path = shortest_path(map_40, 5, 34)
if path == [5, 16, 37, 12, 34]:
print("great! Your code works for these inputs!")
else:
print("something is off, your code produced the following:")
print(path)
from test import test
test(shortest_path)
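For an unweighted graph, plain breadth-first search already yields a shortest path by edge count; a minimal dependency-free sketch (the project itself expects A* over the weighted map, so this is only an illustration on a tiny hand-made road dict):

```python
from collections import deque

def bfs_shortest_path(roads, start, goal):
    # shortest path by edge count on an adjacency-list graph
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in roads[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None  # goal unreachable

tiny_roads = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
print(bfs_shortest_path(tiny_roads, 0, 3))  # [0, 1, 3]
```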
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Comparing the Errors
|
<ASSISTANT_TASK:>
Python Code:
num_states = 8
gamma = 0.9
true_values = gamma**np.arange(num_states)[::-1]
d_pi = np.ones(num_states)/num_states
D_pi = np.diag(d_pi)
print("True values:")
print(true_values)
print("On-policy distribution:")
print(d_pi)
def compute_value_dct(theta_lst, features):
return [{s: np.dot(theta, x) for s, x in features.items()} for theta in theta_lst]
def compute_values(theta_lst, X):
return [np.dot(X, theta) for theta in theta_lst]
def compute_errors(value_lst, error_func):
return [error_func(v) for v in value_lst]
def rmse_factory(true_values, d=None):
true_values = np.ravel(true_values)
# sensible default for weighting distribution
if d is None:
d = np.ones_like(true_values)
else:
d = np.ravel(d)
assert(len(d) == len(true_values))
# the actual root-mean square error
def func(v):
diff = true_values - v
return np.sqrt(np.mean(d*diff**2))
return func
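As a quick sanity check on the weighting logic, here is a dependency-free mirror of `rmse_factory`'s math (plain lists instead of NumPy arrays — an assumption made for portability, not the code the experiment uses):

```python
import math

def rmse(true_values, v, d=None):
    # weighted root-mean-square error, mirroring np.sqrt(np.mean(d*diff**2))
    if d is None:
        d = [1.0] * len(true_values)
    return math.sqrt(
        sum(w * (t - x) ** 2 for w, t, x in zip(d, true_values, v)) / len(true_values)
    )

err = rmse([1.0, 0.0], [0.0, 0.0], d=[0.5, 0.5])
print(err)  # sqrt(mean([0.5, 0.0])) = 0.5
```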
# define the experiment
num_states = 8
num_features = 6
num_active = 3
num_runs = 50
max_steps = 5000
# set up environment
env = chicken.Chicken(num_states)
# Define the target policy
pol_pi = policy.FixedPolicy({s: {0: 1} for s in env.states})
# Define the behavior policy
pol_mu = policy.FixedPolicy({s: {0: 1} if s < 4 else {0: 0.5, 1: 0.5} for s in env.states})
# state-dependent gamma
gm_dct = {s: 0.9 for s in env.states}
gm_dct[0] = 0
gm_func = parametric.MapState(gm_dct)
gm_p_func = parametric.MapNextState(gm_dct)
# set up algorithm parameters
update_params = {
'alpha': 0.03,
'beta': 0.002,
'gm': gm_func,
'gm_p': gm_p_func,
'lm': 0.0,
'lm_p': 0.0,
'interest': 1.0,
}
# Run all available algorithms
data = dict()
for name, alg in algos.algo_registry.items():
print(name)
run_lst = []
for i in range(num_runs):
print("Run: %d"%i, end="\r")
episode_data = dict()
# Want to use random features
phi = features.RandomBinary(num_features, num_active)
episode_data['features'] = {s: phi(s) for s in env.states}
# Set up the agent
_update_params = update_params.copy()
if name == 'ETD':
_update_params['alpha'] = 0.002
agent = OffPolicyAgent(alg(phi.length), pol_pi, pol_mu, phi, _update_params)
# Run the experiment
episode_data['steps'] = run_contextual(agent, env, max_steps)
run_lst.append(episode_data)
data[name] = run_lst
baseline = rmse_factory(np.zeros(num_states))
# True values & associated stationary distribution
theta_ls = np.array([ 0.4782969, 0.531441 , 0.59049, 0.6561, 0.729, 0.81, 0.9, 1.])
d_pi = np.ones(num_states)/num_states
D_pi = np.diag(d_pi)
# define the error/objective function
err_func = rmse_factory(theta_ls, d=d_pi)
baseline = err_func(np.zeros(num_states))
for name, experiment in data.items():
print(name)
errors = []
for episode in experiment:
feats = experiment[0]['features']
X = np.array([feats[k] for k in sorted(feats.keys())])
steps = experiment[0]['steps']
thetas = list(pluck('theta', steps))
# compute the values at each step
val_lst = compute_values(thetas, X)
# compute the errors at each step
err_lst = compute_errors(val_lst, err_func)
errors.append(err_lst)
# calculate the average error
clipped_errs = np.clip(errors, 0, 100)
avg_err = np.mean(clipped_errs, axis=0)
# plot the errors
fig, ax = plt.subplots()
ax.plot(avg_err)
# format the graph
ax.set_ylim(1e-2, 2)
ax.axhline(baseline, c='red')
ax.set_yscale('log')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Read in the hanford.csv file
Step2: <img src="images/hanford_variables.png">
Step3: 4. Calculate the coefficient of correlation (r) and generate the scatter plot. Does there seem to be a correlation worthy of investigation?
Step4: 5. Create a linear regression model based on the available data to predict the mortality rate given a level of exposure
Step5: 6. Plot the linear regression line on the scatter plot of values. Calculate the r^2 (coefficient of determination)
Step6: 7. Predict the mortality rate (Cancer per 100,000 man years) given an index of exposure = 10
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import statsmodels.formula.api as smf
df=pd.read_csv('/home/sean/git/algorithms/class6/data/hanford.csv')
df
df.describe()
# coefficient of correlation (r) between exposure and mortality
df['Exposure'].corr(df['Mortality'])
lm = smf.ols(formula="Mortality~Exposure",data=df).fit()
lm.params
intercept, slope = lm.params
df.plot(kind="scatter",x="Exposure",y="Mortality", linewidths=0)
plt.plot(df["Exposure"],slope*df["Exposure"]+intercept,"-",color="red")
r_squared = lm.rsquared
exposure = float(input("Enter county exposure:"))
print("Expected mortality rate:", (slope*exposure)+intercept)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Assessing the veracity of semantic markup for dataset pages
Step2: Import Modules
Step3: Upload Dataset
Step4: Load dataset in pandas.DataFrame
Step5: Select Model
Step6: Preprocessing Parameters
Step7: Data Preprocessing
Step8: Hyperparametes
Step15: Utility functions
Step16: Build Model
Step17: Train Model
Step18: Model perfomance on validation set
Step19: Model perfomance on testing set
|
<ASSISTANT_TASK:>
Python Code:
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
!pip install adanet
!pip install --user --upgrade tensorflow-probability
from google.colab import files
import math
import tensorflow.compat.v2 as tf
import adanet
import pandas as pd
import io
from tensorflow.keras.preprocessing import sequence
from tensorflow.keras.preprocessing import text
uploaded = files.upload()
training_set = pd.read_csv('training_set.csv', keep_default_na=False)
eval_set = pd.read_csv('validation_set.csv', keep_default_na=False)
test_set = pd.read_csv('testing_set.csv', keep_default_na=False)
SELECTED_MODEL = 'lightweight_model' #@param {type:'string'} ["lightweight_model", "full_model"]
P_PARAMS_BY_MODEL = {
'lightweight_model': {
'vocab_size_by_feature': {
'description': 110211,
'name': 18720
},
'MAX_TOKENS': 400
},
'full_model': {
'vocab_size_by_feature': {
'description': 104383,
'name': 17495,
'author_or_creator': 1602,
'doi': 193,
'provider_or_publisher': 773,
'source_url': 17749
},
'MAX_TOKENS': 400
}
}
MODEL_P_PARAMS = P_PARAMS_BY_MODEL[SELECTED_MODEL]
tokenizers = {}
for feature_name, vocab_size in MODEL_P_PARAMS['vocab_size_by_feature'].items():
tokenizers[feature_name] = text.Tokenizer(num_words=vocab_size)
tokenizers[feature_name].fit_on_texts(training_set[feature_name])
H_PARAMS_BY_MODEL = {
'lightweight_model': {
'features': ['description', 'name'],
'LEARNING_RATE': 0.00677,
'TRAIN_STEPS': 500,
'SHUFFLE_BUFFER_SIZE': 2048,
'BATCH_SIZE': 128,
'CLIP_NORM': 0.00037,
'HIDDEN_UNITS': [186],
'DROPOUT': 0.28673,
'ACTIVATION_FN': tf.nn.selu,
'MAX_ITERATION_STEPS': 333333,
'DO_BATCH_NORM': True,
'MAX_TRAIN_STEPS': 1000
},
'full_model': {
'features': [
'author_or_creator', 'description', 'doi', 'has_date_created',
'has_date_modified', 'has_date_published', 'has_distribution',
'has_encoding_or_file_format', 'name', 'provider_or_publisher',
'source_url'
],
'LEARNING_RATE': 0.00076,
'TRAIN_STEPS': 500,
'SHUFFLE_BUFFER_SIZE': 2048,
'BATCH_SIZE': 128,
'CLIP_NORM': 0.25035,
'HIDDEN_UNITS': [329, 351, 292],
'DROPOUT': 0.08277,
'ACTIVATION_FN': tf.nn.selu,
'MAX_ITERATION_STEPS': 333333,
'DO_BATCH_NORM': False,
'MAX_TRAIN_STEPS': 1000
}
}
MODEL_H_PARAMS = H_PARAMS_BY_MODEL[SELECTED_MODEL]
def tokenize_and_pad(features):
"""Iterates over the features of a labeled sample, tokenizing and padding them.
Args:
features: A dictionary of feature values keyed by feature names. It includes
label as a feature
Returns:
A tuple with the processed features
"""
tokenized_features = list()
for feature in MODEL_H_PARAMS['features']:
# Tokenize text features according to the corresponding vocabulary
if feature in MODEL_P_PARAMS['vocab_size_by_feature']:
# Handle missing features
if not features[feature]:
tokenized = [[MODEL_P_PARAMS['vocab_size_by_feature'][feature]]]
else:
tokenized = tokenizers[feature].texts_to_sequences([features[feature]])
tokenized_features.append([
sequence.pad_sequences(
tokenized,
maxlen=MODEL_P_PARAMS['MAX_TOKENS'],
padding='post',
truncating='post')
])
# Tokenize boolean features into binary values
else:
if features[feature]:
tokenized_features.append([1])
else:
tokenized_features.append([0])
tokenized_features.append(features['label'])
return tuple(tokenized_features)
def generator(dataset):
"""Returns a generator mapping dataset entries to tokenized features-label pairs."""
def _gen():
for entry in dataset.iterrows():
yield tokenize_and_pad(entry[1])
return _gen
def preprocess(*args):
"""Tensorizes its arguments.
Args:
*args: Variable length arguments feature1, ..., featureK, label. Should be
in the same order as in MODEL_H_PARAMS['features']
Returns:
A pair of
1. A dictionary with the features keyed by their names
2. A label
"""
m = {}
for feature, name in zip(args[:-1], MODEL_H_PARAMS['features']):
m[name] = feature
return m, [args[-1]]
def generate_output_types():
"""Returns a vector of output types corresponding to the tuple produced by the generator."""
types = []
# Feature types
types = [tf.int32] * len(MODEL_H_PARAMS['features'])
# Label type
types.append(tf.bool)
return tuple(types)
def input_fn(partition, training, batch_size):
"""Generates an input_fn for the Estimator.
Args:
partition: One of 'train', 'test', and 'eval' for training, testing, and
validation sets respectively
training: If true, then shuffle dataset to add randomness between epochs
batch_size: Number of elements to combine in a single batch
Returns:
The input function
"""
def _input_fn():
if partition == 'train':
dataset = tf.data.Dataset.from_generator(
generator(training_set), generate_output_types())
elif partition == 'test':
dataset = tf.data.Dataset.from_generator(
generator(test_set), generate_output_types())
elif partition == 'eval':
dataset = tf.data.Dataset.from_generator(
generator(eval_set), generate_output_types())
else:
print('Unknown partition')
return
if training:
dataset = dataset.shuffle(MODEL_H_PARAMS['SHUFFLE_BUFFER_SIZE'] *
batch_size).repeat()
dataset = dataset.map(preprocess).batch(batch_size)
iterator = tf.compat.v1.data.make_one_shot_iterator(dataset)
features, labels = iterator.get_next()
return features, labels
return _input_fn
def generate_feature_columns(embed):
"""Creates the feature columns that we will train the model on.
Args:
embed: If true, we embed the columns.
Returns:
A list with the feature columns.
"""
feature_columns = []
for feature in MODEL_H_PARAMS['features']:
if feature in MODEL_P_PARAMS['vocab_size_by_feature']:
# vocab_size + 1 to handle missing features
num_buckets = MODEL_P_PARAMS['vocab_size_by_feature'][feature] + 1
else:
# All none-text features are booleans, so 2 buckets are enough
num_buckets = 2
column = tf.feature_column.categorical_column_with_identity(
key=feature, num_buckets=num_buckets)
if embed:
column = tf.feature_column.embedding_column(
column, dimension=math.ceil(math.log2(num_buckets)))
feature_columns.append(column)
return feature_columns
head = tf.estimator.BinaryClassHead()
adam = lambda: tf.keras.optimizers.Adam(
learning_rate=MODEL_H_PARAMS['LEARNING_RATE'],
clipnorm=MODEL_H_PARAMS['CLIP_NORM'])
estimator = adanet.AutoEnsembleEstimator(
head=head,
candidate_pool={
'linear':
tf.estimator.LinearEstimator(
head=head,
feature_columns=generate_feature_columns(False),
optimizer=adam),
'dnn':
tf.estimator.DNNEstimator(
head=head,
hidden_units=MODEL_H_PARAMS['HIDDEN_UNITS'],
feature_columns=generate_feature_columns(True),
optimizer=adam,
activation_fn=MODEL_H_PARAMS['ACTIVATION_FN'],
dropout=MODEL_H_PARAMS['DROPOUT'],
batch_norm=MODEL_H_PARAMS['DO_BATCH_NORM'])
},
max_iteration_steps=MODEL_H_PARAMS['MAX_ITERATION_STEPS'])
result = tf.estimator.train_and_evaluate(
estimator,
train_spec=tf.estimator.TrainSpec(
input_fn=input_fn(
'train', training=True, batch_size=MODEL_H_PARAMS['BATCH_SIZE']),
max_steps=MODEL_H_PARAMS['MAX_TRAIN_STEPS']),
eval_spec=tf.estimator.EvalSpec(
input_fn=input_fn(
'eval', training=False, batch_size=MODEL_H_PARAMS['BATCH_SIZE']),
steps=None,
start_delay_secs=1,
throttle_secs=1,
))[0]
print('AUC:', result['auc'], 'AUC_PR:', result['auc_precision_recall'],
'Recall:', result['recall'], 'Precision:', result['precision'])
ret = estimator.evaluate(
input_fn('test', training=False, batch_size=MODEL_H_PARAMS['BATCH_SIZE']))
print('AUC:', ret['auc'], 'AUC_PR:', ret['auc_precision_recall'], 'Recall:',
ret['recall'], 'Precision:', ret['precision'])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Lets start by generating some behavioral data from the social influence task. Here green advice/choice is encoded as 0 and the blue advice/choice is encoded as 1.
Step2: plot performance of different agents in different blocks
Step3: Fit simulated behavior
Step4: Compute fit quality and plot posterior estimates from a hierarchical parametric model
Step5: fit HGF agent to simulated data
Step6: Plot posterior estimates from simulated data for the HGF agent
Step7: Test model comparison
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from scipy import io
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
sns.set(style = 'white', color_codes = True)
%matplotlib inline
import sys
import os
cwd = os.getcwd()
sys.path.append(cwd[:-len('befit/examples/social_influence')])
import torch
from torch import ones, zeros, tensor
torch.manual_seed(1234)
nsub = 50 #number of subjects
trials = 120 #number of samples
from befit.tasks import SocialInfluence
from befit.simulate import Simulator
from befit.inference import Horseshoe, Normal
from befit.agents import RLSocInf, HGFSocInf, SGFSocInf
# load stimuli (trial offers, advices, and reliability of advices)
reliability = torch.from_numpy(np.load('advice_reliability.npy')).float()
reliability = reliability.reshape(trials, -1, 1).repeat(1, 1, nsub).reshape(trials, -1).unsqueeze(0)
offers = torch.from_numpy(np.load('offers.npy')).reshape(trials, -1, 1).repeat(1, 1, nsub)\
.reshape(trials, -1).unsqueeze(0)
stimuli = {'offers': offers,
'reliability': reliability}
socinfl = SocialInfluence(stimuli, nsub=nsub)
# RL agent
rl_agent = RLSocInf(runs=2*nsub, trials=trials)
trans_pars1 = torch.arange(-.5,.5,1/(2*nsub)).reshape(-1, 1) + tensor([[-2., 4., 0., 0.]])
rl_agent.set_parameters(trans_pars1)
sim1 = Simulator(socinfl, rl_agent, runs=2*nsub, trials=trials)
sim1.simulate_experiment()
# HGF agent
hgf_agent = HGFSocInf(runs=2*nsub, trials=trials)
trans_pars2 = torch.arange(-.5, .5, 1/(2*nsub)).reshape(-1, 1) + tensor([[2., 0., 4., 0., 0.]])
hgf_agent.set_parameters(trans_pars2)
sim2 = Simulator(socinfl, hgf_agent, runs=2*nsub, trials=trials)
sim2.simulate_experiment()
# SGF agent
sgf_agent = SGFSocInf(runs=2*nsub, trials=trials)
trans_pars3 = torch.arange(-.5, .5, 1/(2*nsub)).reshape(-1, 1) + tensor([[-2., -1., 4., 0., 0.]])
sgf_agent.set_parameters(trans_pars3)
sim3 = Simulator(socinfl, sgf_agent, runs=2*nsub, trials=trials)
sim3.simulate_experiment();
def posterior_accuracy(labels, df, vals):
for i, lbl in enumerate(labels):
std = df.loc[df['parameter'] == lbl].groupby(by='subject').std()
mean = df.loc[df['parameter'] == lbl].groupby(by='subject').mean()
print(lbl, np.sum(((mean+2*std).values[:, 0] > vals[i])*((mean-2*std).values[:, 0] < vals[i]))/(2*nsub))
def compute_mean_performance(outcomes, responses):
cc1 = (outcomes * responses > 0.).float() # accept reliable offer
cc2 = (outcomes * (1 - responses) < 0.).float() # reject unreliable offer
return torch.einsum('ijk->k', cc1 + cc2)/trials
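The summation in `compute_mean_performance` relies on `einsum('ijk->k', ...)` reducing over the first two axes, leaving one value per run. A small NumPy check of that pattern (hypothetical all-ones data):

```python
import numpy as np

# einsum('ijk->k', X) sums over axes i and j, keeping axis k.
X = np.ones((2, 3, 4))
per_run = np.einsum('ijk->k', X)
print(per_run)  # -> [6. 6. 6. 6.]
```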
perf1 = compute_mean_performance(sim1.stimulus['outcomes'][..., 0],
sim1.responses.float()).numpy().reshape(2, -1)
print('RL agent: ', np.median(perf1, axis = -1))
fig, ax = plt.subplots(1,2, sharex = True, sharey = True)
ax[0].hist(perf1[0]);
ax[1].hist(perf1[1]);
fig.suptitle('RL agent', fontsize = 20);
ax[0].set_ylim([0, 20]);
ax[0].set_xlim([.5, 1.]);
perf2 = compute_mean_performance(sim2.stimulus['outcomes'][..., 0],
sim2.responses.float()).numpy().reshape(2, -1)
print('HGF agent: ', np.median(perf2, axis = -1))
fig, ax = plt.subplots(1,2, sharex = True, sharey = True)
ax[0].hist(perf2[0]);
ax[1].hist(perf2[1]);
fig.suptitle('HGF agent', fontsize = 20);
ax[0].set_ylim([0, 20]);
ax[0].set_xlim([.5, 1.]);
perf3 = compute_mean_performance(sim3.stimulus['outcomes'][..., 0],
sim3.responses.float()).numpy().reshape(2, -1)
print('SGF agent: ', np.median(perf3, axis = -1))
fig, ax = plt.subplots(1,2, sharex = True, sharey = True)
ax[0].hist(perf3[0]);
ax[1].hist(perf3[1]);
fig.suptitle('SGF agent', fontsize = 20);
ax[0].set_ylim([0, 20]);
ax[0].set_xlim([.5, 1.]);
stimulus = sim1.stimulus
stimulus['mask'] = torch.ones(1, 120, 100)
rl_infer = Horseshoe(rl_agent, stimulus, sim1.responses)
rl_infer.infer_posterior(iter_steps=200)
labels = [r'$\alpha$', r'$\zeta$', r'$\beta$', r'$\theta$']
tp_df = rl_infer.sample_posterior(labels, n_samples=1000)
sim1.responses.dtype
labels = [r'$\alpha$', r'$\zeta$', r'$\beta$', r'$\theta$']
trans_pars_rl = tp_df.melt(id_vars='subject', var_name='parameter')
vals = [trans_pars1[:,0].numpy(), trans_pars1[:, 1].numpy(), trans_pars1[:, 2].numpy(), trans_pars1[:, 3].numpy()]
posterior_accuracy(labels, trans_pars_rl, vals)
plt.figure()
#plot convergence of stochastic ELBO estimates (log-model evidence)
plt.plot(rl_infer.loss[-400:])
g = sns.FacetGrid(trans_pars_rl, col="parameter", height=3, sharey=False);
g = (g.map(sns.lineplot, 'subject', 'value', ci='sd'));
labels = [r'$\alpha$', r'$\zeta$', r'$\beta$', r'bias']
for i in range(len(labels)):
g.axes[0,i].plot(np.arange(2*nsub), trans_pars1[:,i].numpy(),'ro', zorder = 0);
stimulus = sim2.stimulus
stimulus['mask'] = torch.ones(1, 120, 100)
hgf_infer = Horseshoe(hgf_agent, stimulus, sim2.responses)
hgf_infer.infer_posterior(iter_steps=200)
labels = [r'$\mu_0^2$', r'$\eta$', r'$\zeta$', r'$\beta$', r'$\theta$']
hgf_tp_df, hgf_mu_df, hgf_sigma_df = hgf_infer.sample_posterior(labels, n_samples=1000)
labels = [r'$\mu_0^2$', r'$\eta$', r'$\zeta$', r'$\beta$', r'$\theta$']
trans_pars_hgf = hgf_tp_df.melt(id_vars='subject', var_name='parameter')
vals = [trans_pars2[:, i].numpy() for i in range(len(labels))]
posterior_accuracy(labels, trans_pars_hgf, vals)
plt.figure()
#plot convergence of stochastic ELBO estimates (log-model evidence)
plt.plot(hgf_infer.loss[-400:])
g = sns.FacetGrid(trans_pars_hgf, col="parameter", height=3, sharey=False);
g = (g.map(sns.lineplot, 'subject', 'value', ci='sd'));
for i in range(len(labels)):
g.axes[0,i].plot(np.arange(2*nsub), trans_pars2[:,i].numpy(),'ro', zorder = 0);
stimulus = sim3.stimulus
stimulus['mask'] = torch.ones(1, 120, 100)
sgf_infer = Horseshoe(sgf_agent, stimulus, sim3.responses)
sgf_infer.infer_posterior(iter_steps=200)
labels = [r'$\rho_1$', r'$h$', r'$\zeta$', r'$\beta$', r'$\theta$']
sgf_tp_df, sgf_mu_df, sgf_sigma_df = sgf_infer.sample_posterior(labels, n_samples=1000)
labels = [r'$\rho_1$', r'$h$', r'$\zeta$', r'$\beta$', r'$\theta$']
trans_pars_sgf = sgf_tp_df.melt(id_vars='subject', var_name='parameter')
vals = [trans_pars3[:, i].numpy() for i in range(len(labels))]
posterior_accuracy(labels, trans_pars_sgf, vals)
plt.figure()
#plot convergence of stochastic ELBO estimates (log-model evidence)
plt.plot(sgf_infer.loss[-400:])
g = sns.FacetGrid(trans_pars_sgf, col="parameter", height=3, sharey=False);
g = (g.map(sns.lineplot, 'subject', 'value', ci='sd'));
for i in range(len(labels)):
g.axes[0,i].plot(np.arange(2*nsub), trans_pars3[:,i].numpy(),'ro', zorder = 0);
g = sns.PairGrid(sgf_mu_df)
g = g.map_diag(sns.kdeplot)
g = g.map_offdiag(plt.scatter)
g = sns.PairGrid(sgf_sigma_df)
g = g.map_diag(sns.kdeplot)
g = g.map_offdiag(plt.scatter)
#plt.plot(rl_infer.loss[-400:]);
plt.plot(hgf_infer.loss[-400:]);
plt.plot(sgf_infer.loss[-400:]);
stimulus = sim1.stimulus
stimulus['mask'] = torch.ones(1, 120, 100)
rl_infer = [Horseshoe(rl_agent, stimulus, sim1.responses),
Horseshoe(rl_agent, stimulus, sim2.responses),
Horseshoe(rl_agent, stimulus, sim3.responses)]
evidences = torch.zeros(3, 3, 2*nsub)
for i in range(3):
rl_infer[i].infer_posterior(iter_steps = 500)
evidences[0, i] = rl_infer[i].get_log_evidence_per_subject()
hgf_infer = [Horseshoe(hgf_agent, stimulus, sim1.responses),
Horseshoe(hgf_agent, stimulus, sim2.responses),
Horseshoe(hgf_agent, stimulus, sim3.responses)]
for i in range(3):
hgf_infer[i].infer_posterior(iter_steps = 500)
evidences[1, i] = hgf_infer[i].get_log_evidence_per_subject()
sgf_infer = [Horseshoe(sgf_agent, stimulus, sim1.responses),
Horseshoe(sgf_agent, stimulus, sim2.responses),
Horseshoe(sgf_agent, stimulus, sim3.responses)]
for i in range(3):
sgf_infer[i].infer_posterior(iter_steps = 500)
evidences[2, i] = sgf_infer[i].get_log_evidence_per_subject()
print((evidences[:, 0].argmax(dim=0) == 0).sum().float()/(2*nsub))
print((evidences[:, 1].argmax(dim=0) == 1).sum().float()/(2*nsub))
print((evidences[:, 2].argmax(dim=0) == 2).sum().float()/(2*nsub))
evidences.sum(-1)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Preprocessing
Step2: Visualization of ELC usage data
Step3: Calendar heat map of sign-ins
Step4: Sign-ins by course
Step5: Sign-ins by hour of day
Step6: Sign-ins by day of week
Step7: Sign-ins by month
Step8: Grouping of data by (visits, day, hour)
|
<ASSISTANT_TASK:>
Python Code:
#@title
#%%capture
import numpy as np #Linear algebra
import pandas as pd #Time series, datetime object manipulation
import matplotlib.pyplot as plt #plotting
#import seaborn as sb
#plt.style.use('fivethirtyeight') #Plot style preferred by author.
import calendar
from tabulate import tabulate #pretty display of tables
import plotly.express as px #Plotly interactive plots
from plotly.subplots import make_subplots
import warnings #suppress warning messages -- declutter
warnings.filterwarnings('ignore')
!pip install calmap #Calendar heat map
#!pip install qgrid #dynamic manipulation of tables.
#@title
#url = 'https://raw.githubusercontent.com/dnaneet/ELC/master/DATA/DATAf19.csv' #data is stored at this URL
df = pd.read_csv('DATA_AY1920.csv') #df = dataframe. Read data from the URL
#df.head(5) #First 5 entries. Data exploration. Gives the reader an idea of what the ELC data looks like.
df=df.replace({'December': 12, 'January': 1, 'February': 2, 'March': 3, 'April': 4,
'May': 5, 'June': 6, 'July': 7, 'August': 8, 'September': 9,
'October': 10, 'November': 11}) #replace month names with month numbers
df=df.replace({'Monday': 1, 'Tuesday': 2, 'Wednesday': 3,
'Thursday': 4, 'Friday': 5, 'Saturday': 6,
'Sunday': 7}) #replace day names with day numbers
#df.head(5) #print data frame. Data exploration.
#df.dtypes #Uncomment this line and run if you want to show what the datatypes of each column in the time series is
df['mdy'] = pd.to_datetime((df.year*10000+df.month*100+df.date).apply(str),format='%Y%m%d')
#@title
n=5 #last n records
obj_mdy = df['visits'].groupby(df['mdy']).count() #Grouping number of visits by day
#obj_mdy.tail(n) #What does this grouped data look like?
xdata = np.array(obj_mdy.reset_index())[:,0]
ydata = np.array(obj_mdy.reset_index())[:,1]
df_mdy = pd.DataFrame({'mdy': xdata, 'visits': ydata})
fig = px.bar(df_mdy, x="mdy", y="visits")
fig.show()
#df_mdy.sample(n=5, random_state=1)
#df_day_hour['dayName'] = df_day_hour['day'].replace({1:'Mon', 2:'Tue', 3:'Wed', 4:'Thur', 5:'Fri', 6:'Sat', 7:'Sun'}) #replace day names with day numbers
#ax = obj_mdy.tail(len(df)).plot(kind='bar')
#ax.set_xticklabels(obj_mdy.tail(len(df)).index.strftime('%b-%d-%y'));
#plt.xlabel('Date')
#plt.ylabel('Number of sign-ins')
#plt.show()
#df_mdy.head(5)
df_mdy['DayOfOperation'] = np.arange(len(df_mdy))
df_mdy.plot.bar(x='DayOfOperation', y='visits', rot=90,figsize=(12,8))
plt.show()
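The daily counts above come from the groupby/count pattern `df['visits'].groupby(df['mdy']).count()`. A minimal illustration with made-up rows standing in for the ELC data:

```python
import pandas as pd

# Toy sign-in log: two visits on day d1, one on day d2.
toy = pd.DataFrame({"mdy": ["d1", "d1", "d2"], "visits": [1, 1, 1]})
counts = toy["visits"].groupby(toy["mdy"]).count()
print(counts["d1"], counts["d2"])  # -> 2 1
```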
#@title
import calmap #The Calendar (heat) map package is imported to provide higher quality visualization than bargraphs
#https://pythonhosted.org/calmap/#'
#plt.figure(figsize=(12,18))
#calmap.yearplot(df_mdy, year=2019, daylabels='MTWTRFSS')
#plt.show()
print('Calendar Map of ELC Sign-ins for Fall 2019')
fig = plt.figure(figsize=(12,5))
ax = fig.add_subplot(111)
cax = calmap.yearplot(obj_mdy, year=2019, ax=ax, cmap='jet')
fig.colorbar(cax.get_children()[1], ax=cax, orientation='horizontal')
print('\n\n\n')
print('Calendar Map of ELC Sign-ins for Spring 2020')
fig = plt.figure(figsize=(12,5))
ax = fig.add_subplot(111)
cax = calmap.yearplot(obj_mdy, year=2020, ax=ax, cmap='jet')
fig.colorbar(cax.get_children()[1], ax=cax, orientation='horizontal')
#@title
obj_course = df['visits'].groupby(df['visits']).count()
course_data = np.array(obj_course)
#df_course = pd.DataFrame({'course': ['Statics', 'MechanicsOfMatls', 'Thermodynamics','Dynamics', 'MATLAB', 'StudySpace'],
# 'NumberOfSignins' : course_data})
#print(course_data)
df_course = pd.DataFrame({'course': ['Statics', 'MechanicsOfMatls', 'Thermodynamics','Dynamics', 'MATLAB', 'StudySpace'],
'NumberOfSignIns': course_data[0:6]})
df_course.plot.bar(x='course', y='NumberOfSignIns', rot=90,figsize=(12,8))
plt.show()
fig = px.bar(df_course, x="course", y="NumberOfSignIns")
fig.show()
#@title
obj_hr = df['visits'].groupby(df['hour']).count() #Grouping number of visits by hour
xdata = np.array(obj_hr.reset_index())[:,0]
ydata = np.array(obj_hr.reset_index())[:,1]
df_hr = pd.DataFrame({'hour': xdata, 'visits': ydata})
fig = px.bar(df_hr, x="hour", y="visits")
fig.show()
df_hr.plot.bar(x='hour', y='visits', rot=90,figsize=(12,8))
plt.show()
#df_hr.head(10) #What does this grouped data look like?
#plt.xlabel('Hour of day')
#plt.ylabel('Total number of sign-ins')
#ax2 = df_hr.plot(kind='bar')
#plt.plot()
#@title
obj_day = df['visits'].groupby(df['day']).count() #Grouping number of visits by day
#df_day.head(10) #What does this grouped data look like?
xdata = np.array(obj_day.reset_index())[:,0]
ydata = np.array(obj_day.reset_index())[:,1]
df_day = pd.DataFrame({'day': xdata, 'visits': ydata})
df_day["day"] = df_day["day"].replace({1:'Mon', 2:'Tue', 3:'Wed',
4:'Thur', 5:'Fri', 6:'Sat',
7:'Sun'})
fig = px.bar(df_day, x="day", y="visits")
fig.show()
df_day.plot.bar(x='day', y='visits', rot=90,figsize=(12,8))
plt.show()
#obj_day.plot(kind='bar')
#locs, labels = plt.xticks()
#plt.xticks(np.arange(7), ('Mon', 'Tue', 'Wed', 'Thur', 'Fri', 'Sat', 'Sun'), rotation=45)
#plt.xlabel('Day of week')
#plt.ylabel('Total sign-ins')
#plt.plot()
#@title
obj_month = df['visits'].groupby(df['month']).count() #Grouping number of visits by day
#print(df_month.head(10)) #What does this grouped data look like?
xdata = np.array(obj_month.reset_index())[:,0]
ydata = np.array(obj_month.reset_index())[:,1]
df_month = pd.DataFrame({'month': xdata, 'visits': ydata})
df_month["monthName"]=['Jan', 'Feb', 'Mar','Sep', 'Oct', 'Nov', 'Dec']
fig = px.bar(df_month, x="monthName", y="visits")
fig.show()
df_month.plot.bar(x='monthName', y='visits', rot=90,figsize=(12,8))
plt.show()
#ax = dobj_month.plot(kind='bar')
#ax.set_xticklabels(df_mdy.tail(n).index.strftime('%b-%d-%y'));
#ax.set_xticks(np.arange(df_month.shape[0]), ('Jan', 'Feb', 'Mar', 'Apr', 'Jun', 'Aug', 'Sep', 'Nov'));
#plt.xticks(np.arange(obj_month.shape[0]), ('Sep', 'Oct', 'Nov', 'Dec'), rotation=90)
#plt.xlabel('Month')
#plt.ylabel('Number of sign-ins')
#plt.show()
#@title
#url4 = 'https://raw.githubusercontent.com/dnaneet/ELC/master/DATA/expenses_f19.csv'
#df4 = pd.read_csv(url4, error_bad_lines=False)
df4 = pd.read_csv('expenses_ay1920.csv', error_bad_lines=False)
print('ELC Expenses ($) on wages for AY 19-20')
print(tabulate(df4, headers='keys', tablefmt='psql'))
print('\n\n\n')
url2='https://raw.githubusercontent.com/dnaneet/ELC/master/DATA/ay1920_costperhour.csv'
df2 = pd.read_csv(url2)
print('Average cost ($) per hour by hour of day, for the semester')
print(tabulate(df2, headers='keys', tablefmt='psql'))
#print('Mean cost per day ($):\n')
#print(df2.mean())
print('\n\n\n')
print('Average (rounded) number of coaches per hour')
url3='https://raw.githubusercontent.com/dnaneet/ELC/master/DATA/ay1920_num_coaches_per_hr.csv'
df3 = pd.read_csv(url3)
print(tabulate(df3, headers='keys', tablefmt='psql'))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Creating counters
Step2: Reading characters from input, and counting them. To add an item to the counter, simply increment its value.
Step3: Get the most common element from each counter. Since the most_common method returns a list of (key, value) tuples, we take the first element of each, and finally join them into a single string to get the answer.
Step4: Part Two
|
<ASSISTANT_TASK:>
Python Code:
with open('../inputs/day06.txt', 'r') as f:
data = [line.strip() for line in f.readlines()]
from collections import Counter
counters = [Counter() for i in range(0, len(data[0]))]
for line in data:
for index, char in enumerate(line):
counters[index][char] += 1
answer = ''.join([counter.most_common(1)[0][0] for counter in counters])
answer = ''.join([counter.most_common()[-1][0] for counter in counters])
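# A quick, self-contained check of the Counter mechanics used above:
# missing keys default to 0, so incrementing alone is enough to count.

```python
from collections import Counter

c = Counter()
for char in "aabac":
    c[char] += 1

most = c.most_common()  # (key, count) pairs, sorted by count descending
print(most[0][0])   # most frequent character -> 'a'
print(most[-1][0])  # a least frequent character
```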
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Generate Data
Step2: Anomaly Models
|
<ASSISTANT_TASK:>
Python Code:
import os
import numpy as np
from pathlib import Path
cwd = os.getcwd()
os.chdir(Path(cwd).parents[1])
from lsanomaly import LSAnomaly
import lsanomaly.notebooks.static_mix as demo
n_samples = 20
offset = 2.5
X, xx, yy = demo.data_prep(n_samples=n_samples, offset=offset)
sigma_candidates = [1, 2, 3]
rho_candidates = [0.1, 1, 10]
demo.plot_results(
X,
xx,
yy,
threshold=0.5,
sigma_candidates=sigma_candidates,
rho_candidates=rho_candidates
)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Here we can see one of the images.
Step2: Train a network
Step3: Saving and loading networks
Step4: The simplest thing to do is simply save the state dict with torch.save. For example, we can save it to a file 'checkpoint.pth'.
Step5: Then we can load the state dict with torch.load.
Step6: And to load the state dict in to the network, you do model.load_state_dict(state_dict).
Step7: Seems pretty straightforward, but as usual it's a bit more complicated. Loading the state dict works only if the model architecture is exactly the same as the checkpoint architecture. If I create a model with a different architecture, this fails.
Step8: Now the checkpoint has all the necessary information to rebuild the trained model. You can easily make that a function if you want. Similarly, we can write a function to load checkpoints.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
import torch
from torch import nn
from torch import optim
import torch.nn.functional as F
from torchvision import datasets, transforms
import helper
import fc_model
# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.5,), (0.5,))])
# Download and load the training data
trainset = datasets.FashionMNIST('F_MNIST_data/',
download=True,
train=True,
transform=transform)
trainloader = torch.utils.data.DataLoader(dataset=trainset,
batch_size=64,
shuffle=True)
# Download and load the test data
testset = datasets.FashionMNIST('F_MNIST_data/',
download=True,
train=False,
transform=transform)
testloader = torch.utils.data.DataLoader(dataset=testset,
batch_size=64,
shuffle=True)
image, label = next(iter(trainloader))
helper.imshow(image[0,:]);
# Create the network, define the criterion and optimizer
model = fc_model.Network(784, 10, [512, 256, 128])
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)
fc_model.train(model,
trainloader,
testloader,
criterion,
optimizer,
epochs=2)
print("Our model: \n\n", model, '\n')
print("The state dict keys: \n\n", model.state_dict().keys())
torch.save(model.state_dict(), './models/checkpoint.pth')
state_dict = torch.load('./models/checkpoint.pth')
print(state_dict.keys())
model.load_state_dict(state_dict)
checkpoint = {'input_size': 784,
'output_size': 10,
'hidden_layers': [each.out_features for each in model.hidden_layers],
'state_dict': model.state_dict()}
torch.save(checkpoint, './models/checkpoint.pth')
def load_checkpoint(filepath):
checkpoint = torch.load(filepath)
model = fc_model.Network(checkpoint['input_size'],
checkpoint['output_size'],
checkpoint['hidden_layers'])
model.load_state_dict(checkpoint['state_dict'])
return model
model = load_checkpoint('./models/checkpoint.pth')
print(model)
for name, param in model.named_parameters():
if param.requires_grad:
print(name)
print(':')
print(param.data)
name, params = next(model.named_parameters())
name
params
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Bayesian Network in a Jupyter Notebook (BJN)
Step11: Random structure and parameter generators
Step13: Graph Utilities and Visualizations
Step16: Spanning tree and graph traversal
Step20: Samplers
Step27: The inference Engine
Step28: Examples
Step29: Definition of Model Parameters
Step30: Creation of the Inference Engine
Step31: Formulation of the Query
Step32: Example Query
Step33: In the second method, propagate_table, we assume that we have obtained a collection
Step34: The advantage of using propagate_table is that if several observations are given on
Step35: Example
Step36: Example
Step37: Example
Step38: A Random Graph
Step39: Obsolete
Step40: Example 5
Step41: Building an inference engine
Step42: https
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import scipy as sc
from scipy.special import gammaln
from scipy.special import digamma
%matplotlib inline
from itertools import combinations
import pygraphviz as pgv
from IPython.display import Image
from IPython.display import display
def normalize(A, axis=None):
"""Normalize a probability table along a specified axis"""
Z = np.sum(A, axis=axis,keepdims=True)
idx = np.where(Z == 0)
Z[idx] = 1
return A/Z
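A sanity check of the normalize logic, re-implemented inline so the snippet is self-contained: each column becomes a proper distribution, and an all-zero column is left as zeros instead of producing NaNs.

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [3.0, 0.0]])
Z = np.sum(A, axis=0, keepdims=True)
Z[Z == 0] = 1          # avoid division by zero
B = A / Z
print(B)  # first column -> [0.25, 0.75]; second column stays all zeros
```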
def find(cond):
"""Finds indices where the given condition is satisfied."""
return list(np.where(cond)[0])
def random_alphabet(N=20, first_letter='A'):
"""Generates unique strings to be used as index_names"""
if N<27:
alphabet = [chr(i+ord(first_letter)) for i in range(N)]
else:
alphabet = ['X'+str(i) for i in range(N)]
return alphabet
def random_parents(alphabet, max_indeg=3):
"""Random DAG generation"""
N = len(alphabet)
print(alphabet)
indeg = lambda: np.random.choice(range(1,max_indeg+1))
parents = {a:[b for b in np.random.choice(alphabet[0:(1 if i==0 else i)], replace=False, size=min(indeg(),i))] for i,a in enumerate(alphabet)}
return parents
def random_cardinalities(alphabet, cardinality_choices=[2,3,4,5]):
"""Random cardinalities"""
return [np.random.choice(cardinality_choices) for a in alphabet]
def states_from_cardinalities(alphabet, cardinalities):
"""Generate generic labels for each state"""
return {a:[a+"_state_"+str(u) for u in range(cardinalities[i])] for i,a in enumerate(alphabet)}
def cardinalities_from_states(alphabet, states):
"""Count each cardinality according to the order implied by the alphabet list"""
return [len(states[a]) for a in alphabet]
def random_observations(cardinalities, visibles):
"""Samples a tensor of the shape of visibles. This function does not sample
from the joint distribution implied by the graph and the probability tables"""
return np.random.choice(range(10), size=clique_shape(cardinalities, visibles))
def random_dirichlet_cp_table(gamma, cardinalities, n, pa_n):
'''
gamma : Dirichlet shape parameter
cardinalities : List of number of states of each variable
n, pa_n : Output a table of form p(n | pa_n ), n is an index, pa_n is the list of parents of n
'''
N = len(cardinalities)
cl_shape = clique_shape(cardinalities, [n]+pa_n)
U = clique_prior_marginal(cardinalities, cl_shape)
return normalize(np.random.gamma(shape=gamma*U, size=cl_shape), axis=n)
def random_cp_tables(index_names, cardinalities, parents, gamma):
"""Samples a set of conditional probability tables consistent with the factorization
implied by the graph."""
N = len(index_names)
theta = [[]]*N
for n,a in enumerate(index_names):
theta[n] = random_dirichlet_cp_table(gamma, cardinalities, n, index_names_to_num(index_names, parents[a]))
#print(a, parents[a])
#print(theta[n].shape)
#print('--')
return theta
def random_model(N=10, max_indeg=4):
"""Generates a random Bayesian network"""
index_names = random_alphabet(N)
parents = random_parents(index_names)
cardinalities = random_cardinalities(index_names)
states = states_from_cardinalities(index_names, cardinalities)
return index_names, parents, cardinalities, states
def clique_shape(cardinalities, family):
N = len(cardinalities)
size = [1]*N
for i in family:
size[i] = cardinalities[i]
return size
def clique_prior_marginal(cardinalities, shape):
U = 1
for a1,a2 in zip(shape, cardinalities):
U = U*a2/a1
return U
def index_names_to_num(index_names, names):
name2idx = {name: i for i,name in enumerate(index_names)}
return [name2idx[nm] for nm in names]
def show_dag_image(index_names, parents, imstr='_BJN_tempfile.png'):
name2idx = {name: i for i,name in enumerate(index_names)}
A = pgv.AGraph(directed=True)
for i_n in index_names:
A.add_node(name2idx[i_n], label=i_n)
for j_n in parents[i_n]:
A.add_edge(name2idx[j_n], name2idx[i_n])
A.layout(prog='dot')
A.draw(imstr)
display(Image(imstr))
return
def show_ug_image(UG, imstr='_BJN_tempfile.png'):
A = pgv.AGraph(directed=False)
for i_n in range(UG.shape[0]):
A.add_node(i_n, label=i_n)
for j_n in find(UG[i_n,:]):
if j_n>i_n:
A.add_edge(j_n, i_n)
A.layout(prog='dot')
A.draw(imstr)
display(Image(imstr))
return
def make_cp_tables(index_names, cardinalities, cp_tables):
N = len(index_names)
theta = [[]]*N
for c in cp_tables:
if not isinstance(c, tuple):
nums = index_names_to_num(index_names, (c,))
else:
nums = index_names_to_num(index_names, c)
#print(nums)
n = nums[0]
idx = list(reversed(nums))
theta[n] = np.einsum(np.array(cp_tables[c]), idx, sorted(idx)).reshape(clique_shape(cardinalities,idx))
return theta
def make_adjacency_matrix(index_names, parents):
nVertex = len(index_names)
name2idx = {name: i for i,name in enumerate(index_names)}
## Build Graph data structures
# Adjacency matrix
adj = np.zeros((nVertex, nVertex), dtype=int)
for i_name in parents.keys():
i = name2idx[i_name]
for m_name in parents[i_name]:
j = name2idx[m_name]
adj[i, j] = 1
return adj
def make_families(index_names, parents):
nVertex = len(index_names)
adj = make_adjacency_matrix(index_names, parents)
# Possibly check topological ordering
# toposort(adj)
# Family, Parents and Children
fa = [[]]*nVertex
#pa = [[]]*nVertex
#ch = [[]]*nVertex
for n in range(nVertex):
p = find(adj[n,:])
#pa[n] = p
fa[n] = [n]+p
#c = find(adj[:,n])
#ch[n] = c
return fa
def permute_table(index_names, cardinalities, visible_names, X):
'''
Given a network with index_names and cardinalities, reshape a table X with
the given order as in visible_names so that it fits the storage convention of BNJNB.
'''
nums = index_names_to_num(index_names, visible_names)
osize = [cardinalities[n] for n in nums]
idx = list(nums)
shape = clique_shape(cardinalities,idx)
return np.einsum(X, idx, sorted(idx)).reshape(shape)
def make_cliques(families, cardinalities, visibles=None, show_graph=False):
'''
Builds the set of cliques of a triangulated graph.
'''
N = len(families)
if visibles:
C = families+[visibles]
else:
C = families
# Moral Graph
MG = np.zeros((N, N))
for F in C:
for edge in combinations(F,2):
MG[edge[0], edge[1]] = 1
MG[edge[1], edge[0]] = 1
# if show_graph:
# show_ug_image(MG,imstr='MG.png')
elim = []
Clique = []
visited = [False]*N
# Find an elimination sequence
# Based on greedy search
# Criteria, select the minimum induced clique size
for j in range(N):
min_clique_size = np.inf
min_idx = -1
for i in range(N):
if not visited[i]:
neigh = find(MG[i,:])
nm = np.prod(clique_shape(cardinalities, neigh+[i]))
if min_clique_size > nm:
min_idx = i
min_clique_size = nm
neigh = find(MG[min_idx,:])
temp = set(neigh+[min_idx])
is_subset = False
for CC in Clique:
if temp.issubset(CC):
is_subset=True
if not is_subset:
Clique.append(temp)
# Remove the node from the moral graph
for edge in combinations(neigh,2):
MG[edge[0], edge[1]] = 1
MG[edge[1], edge[0]] = 1
MG[min_idx,:] = 0
MG[:, min_idx] = 0
elim.append(min_idx)
visited[min_idx] = True
# if show_graph:
# show_ug_image(MG,imstr='MG'+str(j)+'.png')
return Clique, elim
def topological_order(index_names, parents):
"""Returns a topological ordering of the graph"""
adj = make_adjacency_matrix(index_names, parents)
nVertex = len(index_names)
indeg = np.sum(adj, axis = 1)
zero_in = find(indeg==0)
topo_order = []
while zero_in:
n = zero_in.pop(0)
topo_order.append(n)
for j in find(adj[:,n]):
indeg[j] -= 1
if indeg[j] == 0:
zero_in.append(j)
if len(topo_order)<nVertex:
return []
else:
return topo_order
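For reference, here is a self-contained sketch of the same Kahn-style topological sort on a bare adjacency dict (the `kahn_order` helper and the 4-node DAG are illustrative, not part of the engine above):

```python
# Minimal Kahn's algorithm on an edge list: children[u] lists nodes that depend on u.
def kahn_order(children, n):
    indeg = [0] * n
    for u in children:
        for v in children[u]:
            indeg[v] += 1
    queue = [u for u in range(n) if indeg[u] == 0]
    order = []
    while queue:
        u = queue.pop(0)
        order.append(u)
        for v in children.get(u, []):
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return order if len(order) == n else []  # [] signals a cycle

# 0 -> 1 -> 3, 0 -> 2 -> 3
print(kahn_order({0: [1, 2], 1: [3], 2: [3]}, 4))
```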
def mst(E, N):
"""Generate a spanning tree of a graph with N nodes by Kruskal's algorithm,
given a preordered edge set E with each edge as (weight, v1, v2).
For a minimum spanning tree, use
E.sort()
mst(E, N)
For a maximum spanning tree, use
E.sort(reverse=True)
mst(E, N)
"""
parent = list(range(N))
spanning_tree = {i:[] for i in range(N)}
def find_v(vertex):
v = vertex
while parent[v] != v:
v = parent[v]
return v
def union(v1, v2):
root1 = find_v(v1)
root2 = find_v(v2)
if root1 != root2:
parent[root2] = root1
for edge in E:
weight, v1, v2 = edge
p1, p2 = find_v(v1), find_v(v2)
if p1 != p2:
union(p1, p2)
spanning_tree[v1].append(v2)
spanning_tree[v2].append(v1)
return spanning_tree
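The same Kruskal idea can be sketched standalone; the edge format `(weight, v1, v2)` matches `mst` above, but the `max_spanning_tree` helper and its toy graph are illustrative:

```python
# Kruskal with a tiny union-find, mirroring the (weight, v1, v2) edge format above.
def max_spanning_tree(edges, n):
    parent = list(range(n))
    def root(v):
        while parent[v] != v:
            v = parent[v]
        return v
    tree, total = [], 0
    for w, a, b in sorted(edges, reverse=True):  # heaviest edges first
        ra, rb = root(a), root(b)
        if ra != rb:              # joining two components: edge belongs to the tree
            parent[rb] = ra
            tree.append((a, b))
            total += w
    return tree, total

edges = [(3, 0, 1), (1, 0, 2), (2, 1, 2), (5, 2, 3)]
tree, total = max_spanning_tree(edges, 4)
print(tree, total)
```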
def bfs(adj_list, root):
"""Breadth-first search starting from the root.
adj_list : A list of lists where adj_list[n] denotes the set of nodes that can be reached from node n.
Returns a BFS order, and a BFS tree as an array parent[i].
The root node has parent[rootnode] = -1."""
N = len(adj_list)
visited = [False]*N
parent = [-1]*N
queue = [root]
order = []
while queue:
v = queue.pop(0)
if not visited[v]:
visited[v] = True
for w in adj_list[v]:
if not visited[w]:
parent[w] = v
queue.append(w)
order.append(v)
return order, parent
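A standalone sketch of the same breadth-first traversal, using `collections.deque` for the queue (the 4-node adjacency list is made up for illustration):

```python
from collections import deque

# BFS from a root over an adjacency list; parent[root] stays -1, as in the convention above.
def bfs_tree(adj, root):
    parent = [-1] * len(adj)
    seen = {root}
    q = deque([root])
    order = []
    while q:
        v = q.popleft()
        order.append(v)
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                parent[w] = v
                q.append(w)
    return order, parent

order, parent = bfs_tree([[1, 2], [0, 3], [0], [1]], 0)
print(order, parent)
```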
def is_leaf(i, parent):
return not (i in parent)
def is_root(i, parent):
return parent[i] == -1
def make_list_receive_from(parent):
lst = [[] for i in range(len(parent)) ]
for i,p in enumerate(parent):
if p!= -1:
lst[p].append(i)
return lst
def sample_indices(index_names, parents, cardinalities, theta, num_of_samples=1):
'''
Sample directly the indices given a Bayesian network
'''
N = len(index_names)
order = topological_order(index_names, parents)
X = []
for count in range(num_of_samples):
x = [[]]*N
for n in order:
varname = index_names[n]
idx = index_names_to_num(index_names,parents[varname])
j = [0]*N
for i in idx:
j[i] = x[i]
I_n = cardinalities[n]
j[n] = tuple(range(I_n))
#print(j)
#print(theta[n][j])
x[n] = np.random.choice(I_n, p=theta[n][j].flatten())
#print(x)
X.append(x)
return X
def sample_states(var_names, states, index_names, parents, theta, num_of_samples=1):
"""Returns a dict with keys as state_name tuples and values as counts.
This function generates each sample separately, so if
num_of_samples is large, consider using sample_counts."""
N = len(index_names)
order = topological_order(index_names, parents)
X = dict()
nums = index_names_to_num(index_names,var_names)
cardinalities = cardinalities_from_states(index_names, states)
shape = clique_shape(cardinalities, nums)
for count in range(num_of_samples):
x = [[]]*N
for n in order:
varname = index_names[n]
idx = index_names_to_num(index_names,parents[varname])
j = [0]*N
for i in idx:
j[i] = x[i]
I_n = cardinalities[n]
j[n] = tuple(range(I_n))
#print(j)
#print(theta[n][j])
x[n] = np.random.choice(I_n, p=theta[n][j].flatten())
#print(x)
key = tuple((states[index_names[n]][x[n]] for n in nums))
X[key] = X.get(key, 0) + 1
return X
def counts_to_table(var_names, ev_counts, index_names, states):
"""Given observed variable names as var_names and
observations as key-value pairs {state_configuration: count},
create a table of counts.
A state configuration is a tuple (state_name_0, ..., state_name_{K-1})
where K is the length of var_names, and state_name_k is a state
from states[var_names[k]]."""
var_nums = list(index_names_to_num(index_names, var_names))
cardinalities = cardinalities_from_states(index_names, states)
shape = clique_shape(cardinalities, var_nums)
C = np.zeros(shape=shape)
N = len(index_names)
for rec in ev_counts.keys():
conf = [0]*N
for key, val in zip(var_names, rec):
s = states[key].index(val)
n = index_names_to_num(index_names, [key])[0]
conf[n] = s
#print(conf)
# Final value is the count that the pair is observed
C[tuple(conf)] += ev_counts[rec]
return C
def table_to_counts(T, var_names, index_names, states, clamped=[], threshold = 0):
"""Convert a table on index_names, clamped on setdiff(index_names, var_names),
to a dict of counts. Keys are state configurations."""
var_nums = list(index_names_to_num(index_names, var_names))
M = len(index_names)
ev_count = {}
for u in zip(*(T>threshold).nonzero()):
if not clamped:
key = tuple((states[v][u[var_nums[i]]] for i,v in enumerate(var_names)))
else:
key = tuple((states[v][u[var_nums[i]]] if clamped[var_nums[i]] is None else states[v][clamped[var_nums[i]]] for i,v in enumerate(var_names)))
ev_count[key] = T[u]
return ev_count
def clamped_pot(X, ev_states):
"""Returns a subslice of a table. Used for clamping conditional probability tables
to a given set of evidence.
X: table
ev_states: list of clamped states, ev_states[i]==e (use None if not clamped)"""
# var is clamped, var not clamped
# ev_states[i]==e ev_states[i]==None
# var is member idx[i] = e idx[i] = slice(0, X.shape[i])
# var not member idx[i] = None idx[i] = slice(0, X.shape[i])
card = list(X.shape)
N = len(card)
idx = [[]]*N
for i,e in enumerate(ev_states):
if e is None and X.shape[i]>1: # the variable is unclamped or it is not a member of the potential
idx[i] = slice(0, X.shape[i])
else:
if X.shape[i]==1:
idx[i] = 0
else:
idx[i] = e
card[i] = 1
return X[tuple(idx)].reshape(card)
sz = (2,4,3,1,5)
T = np.random.choice([1,2,3,4], size=sz)
print(T.shape)
#print(T[1,:,:,0,0].shape)
print('*')
print(clamped_pot(T, [1, None, None, 7, 0]).shape)
def multiply(theta, idx):
"""Multiply a subset of a given list of potentials"""
par = [f(n) for n in idx for f in (lambda n: theta[n], lambda n: range(len(theta)))]+[range(len(theta))]
return np.einsum(*par)
def condition_and_multiply(theta, idx, ev_states):
"""Multiply a subset of a given list of potentials, clamping each to the given evidence states first"""
par = [f(n) for n in idx for f in (lambda n: clamped_pot(theta[n], ev_states), lambda n: range(len(theta)))]+[range(len(theta))]
return np.einsum(*par)
def marginalize(Cp, idx, cardinalities):
return np.einsum(Cp, range(len(cardinalities)), [int(s) for s in sorted(idx)]).reshape(clique_shape(cardinalities,idx))
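The `marginalize` call above relies on `np.einsum`'s behavior of summing over every index dropped from the output list; a minimal standalone illustration (the toy table is not one of the engine's potentials):

```python
import numpy as np

# Summing out axes with einsum: keep only the listed output indices.
T = np.arange(24, dtype=float).reshape(2, 3, 4)   # joint table over axes (0, 1, 2)
marg_1 = np.einsum(T, [0, 1, 2], [1])             # sum over axes 0 and 2, keep axis 1
print(marg_1)                                     # equals T.sum(axis=(0, 2))
```

`marginalize` additionally reshapes the result back to a full-rank shape with singleton axes, so every potential in the engine keeps one axis per variable.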
class Engine():
def __init__(self, index_names, parents, states, theta, visible_names=[]):
self.states = states
cardinalities = [len(states[a]) for a in index_names]
families = make_families(index_names, parents)
self.cardinalities = cardinalities
self.index_names = index_names
visibles = index_names_to_num(index_names, visible_names)
self.Clique, self.elim = make_cliques(families, cardinalities, visibles)
# Assign each conditional Probability table to one of the Clique potentials
# Clique2Pot is the assignments
self.Pot = families
self.Clique2Pot = np.zeros((len(self.Clique), len(self.Pot)))
selected = [False]*len(self.Pot)
for i,c in enumerate(self.Clique):
for j,p in enumerate(self.Pot):
if not selected[j]:
self.Clique2Pot[i,j] = set(p).issubset(c)
if self.Clique2Pot[i,j]:
selected[j] = True
# Find the root clique
# In our case it will be the one where all the visibles are a subset of
self.RootClique = -1
for i,c in enumerate(self.Clique):
if set(visibles).issubset(c):
self.RootClique = i
break
# Build the junction graph and compute a spanning tree
junction_graph_edges = []
for i,p in enumerate(self.Clique):
for j,q in enumerate(self.Clique):
ln = len(p.intersection(q))
if i<j and ln>0:
junction_graph_edges.append((ln,i,j))
junction_graph_edges.sort(reverse=True)
self.mst = mst(junction_graph_edges, len(self.Clique))
self.order, self.parent = bfs(self.mst, self.RootClique)
self.receive_from = make_list_receive_from(self.parent)
self.visibles = visibles
# Setup the data structures for the Junction tree algorithm
self.SeparatorPot = dict()
self.CliquePot = dict()
self.theta = theta
self.cardinalities_clamped = []
def propagate_observation(self, observed_configuration={}):
ev_names = list(observed_configuration.keys())
observed_states = [self.states[nm].index(observed_configuration[nm]) for nm in ev_names]
nums = index_names_to_num(self.index_names, ev_names)
#cardinalities_clamped = self.cardinalities.copy()
cardinalities_clamped = [1 if i in nums else c for i,c in enumerate(self.cardinalities)]
ev_states = [None]*len(self.cardinalities)
for i,e in zip(nums, observed_states):
ev_states[i] = e
# Collect stage
for c in reversed(self.order):
self.CliquePot[c] = np.ones(clique_shape(cardinalities_clamped, self.Clique[c]))
for p in self.receive_from[c]:
self.CliquePot[c] *= self.SeparatorPot[(p,c)]
# Prepare Clique Potentials
# Find probability tables that need to be multiplied into
# the Clique potential
idx = find(self.Clique2Pot[c, :])
if idx:
#print(idx)
#print(ev_states)
self.CliquePot[c] *= condition_and_multiply(self.theta, idx, ev_states)
# Set the separator potential
if not is_root(c, self.parent):
idx = self.Clique[self.parent[c]].intersection(self.Clique[c])
self.SeparatorPot[(c,self.parent[c])] = marginalize(self.CliquePot[c], idx, cardinalities_clamped)
# Distribution Stage
for c in self.order[1:]:
idx = self.Clique[self.parent[c]].intersection(self.Clique[c])
self.CliquePot[c] *= marginalize(self.CliquePot[self.parent[c]], idx, cardinalities_clamped)/self.SeparatorPot[(c,self.parent[c])]
self.cardinalities_clamped = cardinalities_clamped
self.values_clamped = ev_states
def propagate_table(self, X=None):
# Reset
self.values_clamped = [None]*len(self.cardinalities)
# Collect stage
for c in reversed(self.order):
self.CliquePot[c] = np.ones(clique_shape(self.cardinalities, self.Clique[c]))
for p in self.receive_from[c]:
self.CliquePot[c] *= self.SeparatorPot[(p,c)]
# Prepare Clique Potentials
# Find probability tables that need to be multiplied into
# the Clique potential
idx = find(self.Clique2Pot[c, :])
if idx:
self.CliquePot[c] *= multiply(self.theta, idx)
# Set the separator potential
if not is_root(c, self.parent):
idx = self.Clique[self.parent[c]].intersection(self.Clique[c])
self.SeparatorPot[(c,self.parent[c])] = marginalize(self.CliquePot[c], idx, self.cardinalities)
if X is not None:
SepX = marginalize(self.CliquePot[self.RootClique], self.visibles, self.cardinalities)
# Note: Take care of zero divide
self.CliquePot[self.RootClique] *= X/SepX
# Distribution Stage
for c in self.order[1:]:
idx = self.Clique[self.parent[c]].intersection(self.Clique[c])
self.CliquePot[c] *= marginalize(self.CliquePot[self.parent[c]], idx, self.cardinalities)/self.SeparatorPot[(c,self.parent[c])]
# def propagate(self, ev_names=[],ev_counts=None):
#
# if ev_names:
# X = evidence_to_table(ev_names, ev_counts, self.index_names, self.cardinalities, self.states)
# else:
# X = None
def compute_ESS(self, X=[]):
"""Compute expected sufficient statistics for each probability table"""
E_S = dict()
self.propagate_table(X)
for c in self.order:
for n in find(self.Clique2Pot[c, :]):
E_S[n] = marginalize(self.CliquePot[c], self.Pot[n], self.cardinalities)
return E_S
def compute_marginal(self, var_names, normalization=False):
"""Compute a marginal table on the variables in var_names
if the variables are a subset of a clique; otherwise returns None.
var_names can be forced to be a subset of a clique by specifying
Engine(..., visible_names=var_names)."""
var_indices = index_names_to_num(self.index_names, var_names)
idx = set(var_indices)
j = None
for c in self.order:
if idx.issubset(self.Clique[c]):
j = c
break
if j is not None:
if self.cardinalities_clamped:
if normalization:
return normalize(marginalize(self.CliquePot[j], var_indices, self.cardinalities_clamped))
else:
return marginalize(self.CliquePot[j], var_indices, self.cardinalities_clamped)
else:
if normalization:
return normalize(marginalize(self.CliquePot[j], var_indices, self.cardinalities))
else:
return marginalize(self.CliquePot[j], var_indices, self.cardinalities)
else:
print('Desired marginal is not a subset of any clique')
return None
def singleton_marginals(self, var_names, normalization=False):
"""For each variable in var_names, compute its marginal"""
L = {}
var_indices = index_names_to_num(self.index_names, var_names)
for j, v in enumerate(var_names):
marg = self.compute_marginal([v])
if normalization:
marg = normalize(marg)
if self.values_clamped[var_indices[j]] is None:
L[v] = {self.states[v][i]: p for i,p in enumerate(marg.flatten())}
else:
L[v] = {self.states[v][self.values_clamped[var_indices[j]]]: p for i,p in enumerate(marg.flatten())}
return L
def sample_table(self, var_names, num_of_samples=1):
#self.propagate_observation({})
P = self.compute_marginal(var_names)
if P is not None:
return np.random.multinomial(num_of_samples, P.flatten()).reshape(P.shape)
else:
return None
def marginal_table(self, marg_names, normalization=False):
clamped = self.values_clamped
return table_to_counts(self.compute_marginal(marg_names, normalization), marg_names, self.index_names, self.states, clamped)
## Define the model
index_names = ['Box', 'Fruit']
parents = {'Box': [], 'Fruit': ['Box']}
show_dag_image(index_names, parents)
states = {'Box': ['Box1', 'Box2'],
'Fruit': ['Apple', 'Orange', 'Banana']}
cardinalities = cardinalities_from_states(index_names, states)
# Conditional Probability Tables
cp_tables = {('Box'): [0.5, 0.5],
('Fruit', 'Box'): [[10./15, 4./15, 1./15],[2./10, 6./10, 2./10]]
}
# Initialize the correct index order for strided access by computing the necessary permutations
theta = make_cp_tables(index_names, cardinalities, cp_tables)
eng = Engine(index_names, parents, states, theta)
# Using the BJN
eng.propagate_observation()
#eng.propagate_table()
print(eng.compute_marginal(['Fruit']))
# Independent verification
print(marginalize(multiply(theta, [0,1]), [1], cardinalities))
#obs = {'Fruit': 'Orange', 'Box': 'Box1'}
#obs = {'Fruit': 'Orange'}
obs = {'Fruit':'Banana'}
eng = Engine(index_names, parents, states, theta)
eng.propagate_observation(obs)
marg_names = ['Box']
print(normalize(eng.compute_marginal(marg_names)))
# Independent verification
#print(normalize(multiply(theta, [0,1])[:,1]))
print(normalize(multiply(theta, [0,1])[:,2]))
ev_names = ['Fruit']
ev_counts = {('Banana',): 1}
ev_table = counts_to_table(ev_names, ev_counts, index_names, states)
eng = Engine(index_names, parents, states, theta, visible_names=ev_names)
eng.propagate_table(ev_table)
marg_names = ['Box']
#idx = index_names_to_num(index_names, ['Box'])
print(normalize(eng.compute_marginal(marg_names)))
# Independent verification
print(normalize(multiply(theta, [0,1])[:,2]))
ev_names = ['Fruit']
ev_counts = {('Orange',):3, ('Banana',):1}
#ev_counts = {('Orange',):2, ('Banana',):5, ('Apple',):12}
#ev_counts = {('Orange',):1}
C = counts_to_table(ev_names, ev_counts, index_names, states)
eng.propagate_table(C)
marg_names = ['Box']
#idx = index_names_to_num(index_names, ['Box'])
print(normalize(eng.compute_marginal(marg_names)))
eng.marginal_table(marg_names)
ev_names = ['Fruit','Box']
ev_counts = {('Apple','Box1'):12, ('Orange','Box2'):2, ('Banana','Box2'):5}
eng = Engine(index_names, parents, states, theta, visible_names=ev_names)
C = counts_to_table(ev_names, ev_counts, index_names, states)
eng.propagate_table(C)
marg_names = ['Fruit', 'Box']
#idx = index_names_to_num(index_names, ['Box'])
print(normalize(eng.compute_marginal(marg_names)))
var_names = ['Fruit', 'Box']
ev_counts = sample_states(var_names, states, index_names, parents, theta, num_of_samples=1000)
C = counts_to_table(var_names, ev_counts, index_names, states)
#eng = Engine(index_names, visible_names, parents, cardinalities, theta)
#C = evidence_to_counts(X, index_names)
print(ev_counts)
print(C)
from itertools import product
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
## Define the model
index_names = ['Die1', 'Die2', 'Sum']
parents = {'Die1': [], 'Die2': [], 'Sum': ['Die1', 'Die2']}
states = {'Die1': list(range(1,7)), 'Die2': list(range(1,7)), 'Sum': list(range(2,13))}
cardinalities = cardinalities_from_states(index_names, states)
show_dag_image(index_names, parents)
S = np.zeros(cardinalities)
for i,j in product(range(1,7),repeat=2):
S[i-1,j-1,i+j-2] = 1
cp_tables = {('Die1',): [1/6, 1/6,1/6,1/6,1/6,1/6],
('Die2',): [1/6, 1/6,1/6,1/6,1/6,1/6],
('Sum','Die1','Die2'): S
}
theta = make_cp_tables(index_names, cardinalities, cp_tables )
eng = Engine(index_names, parents, states, theta)
eng.propagate_observation({'Sum':8})
marg = eng.compute_marginal(['Die1', 'Die2'], normalization=True)
marg
vname = 'Die2'
eng.propagate_observation({'Sum':9})
marg = eng.singleton_marginals([vname])
a = [x for x in marg[vname].keys()]
b = [marg[vname][x] for x in marg[vname].keys()]
plt.title(vname)
plt.bar(a, normalize(b))
plt.ylim([0, 1])
plt.show()
ev_names = ['Sum']
ev_counts = {(2,):1, (3,):1, (4,):1}
C = counts_to_table(ev_names, ev_counts, index_names, states)
eng.propagate_table(C)
marg_names = ['Die1']
#idx = index_names_to_num(index_names, ['Box'])
print(normalize(eng.compute_marginal(marg_names)))
#marg_names = ['Sum','Die2', 'Die1']
marg_names = ['Die2', 'Die1']
eng.propagate_observation({'Sum':9})
clamped = eng.values_clamped
table_to_counts(eng.compute_marginal(marg_names), marg_names, index_names, states, clamped)
eng.compute_marginal(marg_names)
marg_names = ['Sum','Die2', 'Die1']
eng.propagate_observation({'Sum': 11})
eng.marginal_table(marg_names, normalization=True)
marg_names = ['Sum','Die2', 'Die1']
T = eng.singleton_marginals(marg_names, normalization=True)
T
## Define the model
index_names = ['i', 'j', 'k']
cardinalities = [10, 20, 3]
parents = {'i': [], 'j': ['k'], 'k': ['i']}
show_dag_image(index_names, parents)
#parents = {'k': [], 'i': ['k'], 'j': ['k']}
visible_names = ['i','j']
visibles = index_names_to_num(index_names, visible_names)
# [A][S][T|A][L|S][B|S][E|T:L][X|E][D|B:E]
index_names = ['A', 'S', 'T', 'L', 'B', 'E', 'X', 'D']
parents = {'A':[], 'S':[], 'T':['A'], 'L':['S'], 'B':['S'], 'E':['T','L'], 'X':['E'], 'D':['B','E']}
## A Method for systematically entering the conditional probability tables
# P(A = Yes) = 0.01
# P(S = Yes) = 0.5
# P(T=Positive | A=Yes) = 0.05
# P(T=Positive | A=No) = 0.01
# P(L=Positive | S=Yes) = 0.1
# P(L=Positive | S=No) = 0.01
# P(B=Positive | S=Yes) = 0.6
# P(B=Positive | S=No) = 0.3
# P(E=True | L=Positive, T=Positive) = 1
# P(E=True | L=Positive, T=Negative) = 1
# P(E=True | L=Negative, T=Positive) = 1
# P(E=True | L=Negative, T=Negative) = 0
# P(X=Positive | E=True) = 0.98
# P(X=Positive | E=False) = 0.05
# P(D=Positive | E=True, B=Positive) = 0.9
# P(D=Positive | E=True, B=Negative) = 0.7
# P(D=Positive | E=False, B=Positive) = 0.8
# P(D=Positive | E=False, B=Negative) = 0.1
states = {'A':['No', 'Yes'],
'S':['No', 'Yes'],
'T':['Negative','Positive'],
'L':['Negative','Positive'],
'B':['Negative','Positive'],
'E':['False', 'True'],
'X':['Negative','Positive'],
'D':['Negative','Positive']}
cardinalities = cardinalities_from_states(index_names, states)
# Conditional Probability Tables
cp_tables = {('A',): [0.99, 0.01],
('S',): [0.5, 0.5],
('T','A'): [[0.99, 0.01],[0.95,0.05]],
('L','S'): [[0.99, 0.01],[0.9,0.1]],
('B','S'): [[0.7,0.3],[0.4, 0.6]],
('E','T','L'): [[[1.,0.],[0.,1.]] , [[0.,1.],[0.,1.]]],
('X','E'): [[0.95, 0.05], [0.02, 0.98]],
('D','B','E'):[[[0.9,0.1],[0.2,0.8]],[[0.3,0.7],[0.1,0.9]]]
}
# Todo: write a converter from a standard bn format
theta = make_cp_tables(index_names, cardinalities, cp_tables)
show_dag_image(index_names, parents)
eng = Engine(index_names, parents, states, theta)
eng.propagate_observation({'X': 'Positive', 'D':'Positive', 'S':'Yes'})
#normalize(eng.compute_marginal(['T']))
eng.marginal_table(['T'])
eng.marginal_table(['T'])
vis_names = ['A','B']
eng = Engine(index_names, parents, states, theta, visible_names=vis_names)
eng.propagate_observation({'X': 'Positive', 'D':'Positive', 'S':'Yes'})
eng.marginal_table(vis_names)
marg = eng.compute_marginal(['A'])
marg
eng.marginal_table(['A'])
#eng.propagate_observation({'X': 'Positive', 'D':'Positive', 'S':'Yes'})
#eng.propagate_observation({'A':'Yes','S':'Yes','X':'Positive','D':'Positive'})
#eng.propagate_observation({'A':'Yes','S':'Yes'})
eng.propagate_observation({'X': 'Positive', 'A':'Yes', 'D':'Positive','S':'Yes'})
#eng.propagate_observation({'S':'Yes'})
#eng.propagate_observation({})
eng.singleton_marginals(['A','T','L','B'], normalization=True)
normalize(eng.compute_marginal(['T','L']))
eng.propagate_observation({'X':'Positive'})
var_names = ['T','L']
X = eng.sample_table(var_names, num_of_samples=10000)
table_to_counts(X, index_names, index_names, states, clamped=eng.values_clamped)
visibles = index_names_to_num(index_names, var_names)
visibles
X.shape
P = multiply(eng.theta, range(len(cardinalities)))
P /= marginalize(P, visibles, cardinalities)
E_S = X*P
marginalize(E_S, [1], cardinalities)
#index_names = [a for a in set(['A', 'S', 'T', 'L', 'B', 'E', 'X', 'D'])]
index_names = ['A', 'S', 'T', 'L', 'B', 'E', 'X', 'D']
parents = random_parents(index_names)
show_dag_image(index_names, parents)
visible_names = ['X','A','B']
visibles = index_names_to_num(index_names, visible_names)
families = make_families(index_names, parents)
cardinalities = random_cardinalities(index_names)
states = states_from_cardinalities(index_names, cardinalities)
gamma = 0.01
theta = random_cp_tables(index_names, cardinalities, parents, gamma)
X = random_observations(cardinalities, visibles)
print(parents)
print(cardinalities)
print(states)
theta[0].shape
index_names, parents, cardinalities, states = random_model(10, max_indeg=4)
show_dag_image(index_names, parents)
#theta = make_random_cp_tables(index_names, cardinalities, parents, gamma=0.01)
visible_names = index_names[-4:]
families = make_families(index_names, parents)
visibles = index_names_to_num(index_names, visible_names)
Clique, elim_seq = make_cliques(families, cardinalities, visibles, show_graph=False)
print(elim_seq)
print(Clique)
index_names = ['X1', 'X2', 'X3', 'X4', 'Y1', 'Y2', 'Y3', 'Y4']
parents = {'X1':[], 'Y1':['X1'], 'X2':['X1'], 'Y2':['X2'], 'X3':['X2'], 'Y3':['X3'], 'X4':['X3'], 'Y4':['X4'] }
cardinalities = [2]*8
show_dag_image(index_names, parents)
#index_names = [a for a in set(['A', 'S', 'T', 'L', 'B', 'E', 'X', 'D'])]
index_names = ['A', 'S', 'T', 'L', 'B', 'E', 'X', 'D']
visible_names = ['X', 'A', 'D']
parents = random_parents(index_names)
cardinalities = random_cardinalities(index_names)
states = states_from_cardinalities(index_names, cardinalities)
show_dag_image(index_names, parents)
families = make_families(index_names, parents)
visibles = index_names_to_num(index_names, visible_names)
Clique, elim_seq = make_cliques(families, cardinalities, visibles=None)
gamma = 0.01
theta = random_cp_tables(index_names, cardinalities, parents, gamma)
index_names = ['A', 'S', 'T', 'L', 'B', 'E', 'X', 'D']
parents = {'A':[], 'S':[], 'T':['A'], 'L':['S'], 'B':['S'], 'E':['T','L'], 'X':['E'], 'D':['B','E']}
show_dag_image(index_names, parents)
cardinalities = [2,2,2,2,2,2,2,2]
states = states_from_cardinalities(index_names, cardinalities)
visible_names = ['A', 'X', 'D']
visibles = index_names_to_num(index_names, visible_names)
## Generate random potentials
gamma = 0.01
theta = random_cp_tables(index_names, cardinalities, parents, gamma)
#X = random_observations(cardinalities, visibles)
eng = Engine(index_names, parents, states, theta, visible_names)
eng.propagate_observation({})
eng.sample_table(visible_names, num_of_samples=1000 )
eng.propagate_observation()
eng.marginal_table(visible_names)
X = eng.sample_table(visible_names, num_of_samples=1000)
X
E_S_new = eng.compute_ESS()
P = multiply(eng.theta, range(len(cardinalities)))
E_S = P
E_S_new = eng.propagate(X)
P = multiply(eng.theta, range(len(cardinalities)))
Px = marginalize(P, visibles, cardinalities)
P /= Px
E_S = X*P
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
from ipywidgets import interact, interactive, fixed
import ipywidgets as widgets
from IPython.display import clear_output, display, HTML
from matplotlib import rc
mpl.rc('font',**{'family':'sans-serif','sans-serif':['Helvetica']})
mpl.rc('text', usetex=True)
fig = plt.figure(figsize=(5,5))
ax = plt.gca()
ln = plt.Line2D([0],[0])
ax.add_line(ln)
ax.set_xlim([-1,1])
ax.set_ylim([-1,1])
ax.set_axis_off()
plt.close(fig)
def set_line(th):
ln.set_xdata([np.cos(th), -np.cos(th)])
ln.set_ydata([np.sin(th), -np.sin(th)])
display(fig)
interact(set_line, th=(0.0, 2*np.pi,0.01))
widgets.IntSlider(
value=7,
min=0,
max=10,
step=1,
description='Test:',
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True,
readout_format='d'
)
widgets.FloatSlider(
value=7.5,
min=0,
max=10.0,
step=0.1,
description='Test:',
disabled=False,
continuous_update=False,
orientation='vertical',
readout=True,
readout_format='.2f',
)
w = widgets.IntRangeSlider(
value=[5, 7],
min=0,
max=10,
step=1,
description='Test:',
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True,
readout_format='d',
)
display(w)
w.get_interact_value()
w = widgets.BoundedIntText(
value=7,
min=0,
max=10,
step=1,
description='Text:',
disabled=False
)
display(w)
w.value
accordion = widgets.Accordion(children=[widgets.IntSlider(), widgets.Text()])
accordion.set_title(0, 'Slider')
accordion.set_title(1, 'Text')
accordion
tab_nest = widgets.Tab()
tab_nest.children = [accordion, accordion]
tab_nest.set_title(0, 'An accordion')
tab_nest.set_title(1, 'Copy of the accordion')
tab_nest
items = [widgets.Label(str(i)) for i in range(4)]
w2 = widgets.HBox(items)
display(w2)
items = [widgets.Label(str(i)) for i in range(4)]
left_box = widgets.VBox([items[0], items[1]])
right_box = widgets.VBox([items[2], items[3]])
widgets.HBox([left_box, right_box])
w = widgets.Dropdown(
options=[('One', 1), ('Two', 2), ('Three', 3)],
value=2,
description='Number:',
)
display(w)
w = widgets.ToggleButtons(
options=['Slow', 'Regular', 'Fast','Ultra'],
description='Speed:',
disabled=False,
button_style='warning', # 'success', 'info', 'warning', 'danger' or ''
tooltips=['Description of slow', 'Description of regular', 'Description of fast'],
#icons=['check'] * 3
)
display(w)
w.value
w = widgets.Select(
options=['Linux', 'Windows', 'OSX', '?'],
value='OSX',
rows=4,
description='OS:',
disabled=False
)
w2 = widgets.Select(
options=['A', 'B', 'C','D', '?'],
value='?',
rows=5,
description='Class:',
disabled=True
)
display(w, w2)
w2.disabled = False
caption = widgets.Label(value='The values of range1 and range2 are synchronized')
slider = widgets.IntSlider(min=-5, max=5, value=1, description='Slider')
def handle_slider_change(change):
caption.value = 'The slider value is ' + (
'negative' if change.new < 0 else 'nonnegative'
)
slider.observe(handle_slider_change, names='value')
display(caption, slider)
from IPython.display import display
button = widgets.Button(description="Click Me!")
output = widgets.Output()
display(button, output)
def on_button_clicked(b):
with output:
print("Button clicked.")
print(b)
button.on_click(on_button_clicked)
int_range = widgets.IntSlider()
output2 = widgets.Output()
display(int_range, output2)
def on_value_change(change):
with output2:
print(change['new'])
int_range.observe(on_value_change, names='value')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Install the Google cloud-storage library as well.
Step2: Restart the Kernel
Step3: Before you begin
Step4: Region
Step5: Timestamp
Step6: Authenticate your GCP account
Step7: Create a Cloud Storage bucket
Step8: Only if your bucket doesn't already exist
Step9: Finally, validate access to your Cloud Storage bucket by examining its contents
Step10: Set up variables
Step11: AutoML constants
Step12: Clients
Step13: Example output
Step14: Example output
Step15: Response
Step16: Example output
Step17: projects.locations.datasets.importData
Step18: Example output
Step19: Response
Step20: Example output
Step21: Example output
Step22: Response
Step23: Example output
Step24: Evaluate the model
Step25: Response
Step26: Example output
Step27: Response
Step28: Example output
Step29: Example output
Step30: Example output
Step31: Response
Step32: Example output
Step33: Example output
Step34: Response
Step35: Example output
Step36: Request
Step37: Example output
Step38: Response
Step39: Example output
|
<ASSISTANT_TASK:>
Python Code:
! pip3 install google-cloud-automl
! pip3 install google-cloud-storage
import os
if not os.getenv("AUTORUN"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
REGION = "us-central1" # @param {type: "string"}
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
import os
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your Google Cloud account. This provides access
# to your Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on Vertex, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this tutorial in a notebook locally, replace the string
# below with the path to your service account key and run this cell to
# authenticate your Google Cloud account.
else:
%env GOOGLE_APPLICATION_CREDENTIALS your_path_to_credentials.json
# Log in to your account on Google Cloud
! gcloud auth login
BUCKET_NAME = "[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "[your-bucket-name]":
BUCKET_NAME = PROJECT_ID + "aip-" + TIMESTAMP
! gsutil mb -l $REGION gs://$BUCKET_NAME
! gsutil ls -al gs://$BUCKET_NAME
import json
import os
import sys
import time
from google.cloud import automl
from google.protobuf.json_format import MessageToJson
from google.protobuf.struct_pb2 import Value
# AutoML location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
def automl_client():
return automl.AutoMlClient()
def prediction_client():
return automl.PredictionServiceClient()
def operations_client():
return automl.AutoMlClient()._transport.operations_client
clients = {}
clients["automl"] = automl_client()
clients["prediction"] = prediction_client()
clients["operations"] = operations_client()
for client in clients.items():
print(client)
IMPORT_FILE = "gs://cloud-ml-data/NL-entity/dataset.csv"
! gsutil cat $IMPORT_FILE | head -n 10
dataset = {
"display_name": "entity_" + TIMESTAMP,
"text_extraction_dataset_metadata": {},
}
print(
MessageToJson(
automl.CreateDatasetRequest(parent=PARENT, dataset=dataset).__dict__["_pb"]
)
)
request = clients["automl"].create_dataset(parent=PARENT, dataset=dataset)
result = request.result()
print(MessageToJson(result.__dict__["_pb"]))
# The full unique ID for the dataset
dataset_id = result.name
# The short numeric ID for the dataset
dataset_short_id = dataset_id.split("/")[-1]
print(dataset_id)
input_config = {"gcs_source": {"input_uris": [IMPORT_FILE]}}
print(
MessageToJson(
automl.ImportDataRequest(name=dataset_id, input_config=input_config).__dict__[
"_pb"
]
)
)
request = clients["automl"].import_data(name=dataset_id, input_config=input_config)
result = request.result()
print(MessageToJson(result))
model = {
"display_name": "entity_" + TIMESTAMP,
"dataset_id": dataset_short_id,
"text_extraction_model_metadata": {},
}
print(
MessageToJson(automl.CreateModelRequest(parent=PARENT, model=model).__dict__["_pb"])
)
request = clients["automl"].create_model(parent=PARENT, model=model)
result = request.result()
print(MessageToJson(result.__dict__["_pb"]))
# The full unique ID for the training pipeline
model_id = result.name
# The short numeric ID for the training pipeline
model_short_id = model_id.split("/")[-1]
print(model_short_id)
request = clients["automl"].list_model_evaluations(parent=model_id, filter="")
import json
model_evaluations = [json.loads(MessageToJson(me.__dict__["_pb"])) for me in request]
# The evaluation slice
evaluation_slice = request.model_evaluation[0].name
print(json.dumps(model_evaluations, indent=2))
request = clients["automl"].get_model_evaluation(name=evaluation_slice)
print(MessageToJson(request.__dict__["_pb"]))
import tensorflow as tf
test_item = 'Molecular basis of hexosaminidase A deficiency and pseudodeficiency in the Berks County Pennsylvania Dutch.\\tFollowing the birth of two infants with Tay-Sachs disease ( TSD ) , a non-Jewish , Pennsylvania Dutch kindred was screened for TSD carriers using the biochemical assay . A high frequency of individuals who appeared to be TSD heterozygotes was detected ( Kelly et al . , 1975 ) . Clinical and biochemical evidence suggested that the increased carrier frequency was due to at least two altered alleles for the hexosaminidase A alpha-subunit . We now report two mutant alleles in this Pennsylvania Dutch kindred , and one polymorphism . One allele , reported originally in a French TSD patient ( Akli et al . , 1991 ) , is a GT-- > AT transition at the donor splice-site of intron 9 . The second , a C-- > T transition at nucleotide 739 ( Arg247Trp ) , has been shown by Triggs-Raine et al . ( 1992 ) to be a clinically benign " pseudodeficient " allele associated with reduced enzyme activity against artificial substrate . Finally , a polymorphism [ G-- > A ( 759 ) ] , which leaves valine at codon 253 unchanged , is described'
gcs_input_uri = "gs://" + BUCKET_NAME + "/test.jsonl"
with tf.io.gfile.GFile(gcs_input_uri, "w") as f:
data = {"id": 0, "text_snippet": {"content": test_item}}
f.write(json.dumps(data) + "\n")
! gsutil cat $gcs_input_uri
input_config = {"gcs_source": {"input_uris": [gcs_input_uri]}}
output_config = {
"gcs_destination": {"output_uri_prefix": "gs://" + f"{BUCKET_NAME}/batch_output/"}
}
print(
MessageToJson(
automl.BatchPredictRequest(
name=model_id, input_config=input_config, output_config=output_config
).__dict__["_pb"]
)
)
request = clients["prediction"].batch_predict(
name=model_id, input_config=input_config, output_config=output_config
)
result = request.result()
print(MessageToJson(result.__dict__["_pb"]))
destination_uri = output_config["gcs_destination"]["output_uri_prefix"][:-1]
! gsutil ls $destination_uri/*
! gsutil cat $destination_uri/prediction*/*.jsonl
request = clients["automl"].deploy_model(name=model_id)
result = request.result()
print(MessageToJson(result))
test_item = 'Molecular basis of hexosaminidase A deficiency and pseudodeficiency in the Berks County Pennsylvania Dutch.\\tFollowing the birth of two infants with Tay-Sachs disease ( TSD ) , a non-Jewish , Pennsylvania Dutch kindred was screened for TSD carriers using the biochemical assay . A high frequency of individuals who appeared to be TSD heterozygotes was detected ( Kelly et al . , 1975 ) . Clinical and biochemical evidence suggested that the increased carrier frequency was due to at least two altered alleles for the hexosaminidase A alpha-subunit . We now report two mutant alleles in this Pennsylvania Dutch kindred , and one polymorphism . One allele , reported originally in a French TSD patient ( Akli et al . , 1991 ) , is a GT-- > AT transition at the donor splice-site of intron 9 . The second , a C-- > T transition at nucleotide 739 ( Arg247Trp ) , has been shown by Triggs-Raine et al . ( 1992 ) to be a clinically benign " pseudodeficient " allele associated with reduced enzyme activity against artificial substrate . Finally , a polymorphism [ G-- > A ( 759 ) ] , which leaves valine at codon 253 unchanged , is described'
payload = {"text_snippet": {"content": test_item, "mime_type": "text/plain"}}
request = automl.PredictRequest(
name=model_id,
payload=payload,
)
print(MessageToJson(request.__dict__["_pb"]))
request = clients["prediction"].predict(request=request)
print(MessageToJson(request.__dict__["_pb"]))
delete_dataset = True
delete_model = True
delete_bucket = True
# Delete the dataset using the AutoML fully qualified identifier for the dataset
try:
if delete_dataset:
clients["automl"].delete_dataset(name=dataset_id)
except Exception as e:
print(e)
# Delete the model using the AutoML fully qualified identifier for the model
try:
if delete_model:
clients["automl"].delete_model(name=model_id)
except Exception as e:
print(e)
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil rm -r gs://$BUCKET_NAME
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Word embeddings
Step2: Using the Embedding layer
Step3: When you create an Embedding layer, the weights for the embedding are randomly initialized (just like any other layer). During training, they are gradually adjusted via backpropagation. Once trained, the learned word embeddings will roughly encode similarities between words (as they were learned for the specific problem your model is trained on).
Step4: For text or sequence problems, the Embedding layer takes a 2D tensor of integers, of shape (samples, sequence_length), where each entry is a sequence of integers. It can embed sequences of variable lengths. You could feed into the Embedding layer batches with shapes (32, 10) (batch of 32 sequences of length 10) or (64, 15) (batch of 64 sequences of length 15).
Step5: When given a batch of sequences as input, an Embedding layer returns a 3D floating-point tensor of shape (samples, sequence_length, embedding_dimensionality). To convert from a sequence of variable length to a fixed representation, there are a variety of standard approaches: you could use an RNN, attention, or a pooling layer before passing it on to a Dense layer. This tutorial uses pooling because it is the simplest; learning to classify text with an RNN is a good next step.
Step6: Get the encoder (tfds.features.text.SubwordTextEncoder), and have a quick look at the vocabulary.
Step7: Movie reviews can be different lengths. We will use the padded_batch method to standardize the lengths of the reviews.
Step8: As imported, the text of the reviews is integer-encoded (each integer represents a specific word or word-part in the vocabulary).
Step9: Create a simple model
Step10: Compile and train the model
Step11: With this approach our model reaches a validation accuracy of around 88% (note that the model is overfitting, so training accuracy is significantly higher).
Step12: Retrieve the learned embeddings
Step13: Next, we will write the weights to disk. To use the Embedding Projector, we will upload two files in tab-separated format: a file of vectors (containing the embedding) and a file of metadata (containing the words).
Step14: If you are running this tutorial in Colaboratory, you can use the following snippet to download these files to your local machine (or use the file browser, View -> Table of contents -> File browser).
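The lookup-and-pool pipeline described in the steps above can be sketched in plain NumPy. The sizes and values here are illustrative assumptions, not taken from the tutorial's dataset; the point is only to show the shapes involved:

```python
import numpy as np

# A randomly initialized embedding table, like layers.Embedding(vocab_size, dim)
vocab_size, embedding_dim = 1000, 5
rng = np.random.default_rng(0)
table = rng.normal(size=(vocab_size, embedding_dim))

# A batch of 2 integer-encoded sequences of length 3: shape (samples, sequence_length)
batch = np.array([[0, 1, 2], [3, 4, 5]])

embedded = table[batch]           # lookup -> (2, 3, 5), a 3D float tensor
pooled = embedded.mean(axis=1)    # GlobalAveragePooling1D -> (2, 5), fixed length
print(embedded.shape, pooled.shape)
```

The mean over the sequence axis is exactly why pooling yields a fixed-size representation regardless of sequence length.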
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
embedding_layer = layers.Embedding(1000, 5)
result = embedding_layer(tf.constant([1,2,3]))
result.numpy()
result = embedding_layer(tf.constant([[0,1,2],[3,4,5]]))
result.shape
(train_data, test_data), info = tfds.load(
'imdb_reviews/subwords8k',
split = (tfds.Split.TRAIN, tfds.Split.TEST),
with_info=True, as_supervised=True)
encoder = info.features['text'].encoder
encoder.subwords[:20]
train_batches = train_data.shuffle(1000).padded_batch(10)
test_batches = test_data.shuffle(1000).padded_batch(10)
train_batch, train_labels = next(iter(train_batches))
train_batch.numpy()
embedding_dim=16
model = keras.Sequential([
layers.Embedding(encoder.vocab_size, embedding_dim),
layers.GlobalAveragePooling1D(),
layers.Dense(16, activation='relu'),
layers.Dense(1)
])
model.summary()
model.compile(optimizer='adam',
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy'])
history = model.fit(
train_batches,
epochs=10,
validation_data=test_batches, validation_steps=20)
import matplotlib.pyplot as plt
history_dict = history.history
acc = history_dict['accuracy']
val_acc = history_dict['val_accuracy']
loss=history_dict['loss']
val_loss=history_dict['val_loss']
epochs = range(1, len(acc) + 1)
plt.figure(figsize=(12,9))
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.figure(figsize=(12,9))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend(loc='lower right')
plt.ylim((0.5,1))
plt.show()
e = model.layers[0]
weights = e.get_weights()[0]
print(weights.shape) # shape: (vocab_size, embedding_dim)
import io
encoder = info.features['text'].encoder
out_v = io.open('vecs.tsv', 'w', encoding='utf-8')
out_m = io.open('meta.tsv', 'w', encoding='utf-8')
for num, word in enumerate(encoder.subwords):
vec = weights[num+1] # skip 0, it's padding.
out_m.write(word + "\n")
out_v.write('\t'.join([str(x) for x in vec]) + "\n")
out_v.close()
out_m.close()
try:
from google.colab import files
except ImportError:
pass
else:
files.download('vecs.tsv')
files.download('meta.tsv')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code::
import pandas as pd
pd.get_dummies(df1.town)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step5: More than one State object
Step7: And here's run_simulation, which is a solution to the exercise at the end of the previous notebook.
Step8: Now we can create more than one State object
Step9: Whenever we call a function, we indicate which State object to work with
Step10: And you can confirm that the different objects are getting updated independently
Step11: Negative bikes
Step14: We can fix this problem using the return statement to exit the function early if an update would cause negative bikes.
Step15: Now if you run the simulation again, it should behave.
Step16: Comparison operators
Step17: Whereas == compares two values and returns True if they are equal.
Step18: You can use == in an if statement.
Step19: But if you use = in an if statement, you get an error.
Step20: Exercise
Step23: Next we need versions of bike_to_wellesley and bike_to_olin that update the metrics.
Step24: Now when we run a simulation, it keeps track of unhappy customers.
Step25: After the simulation, we can print the number of unhappy customers at each location.
Step27: Exercises
Step29: Exercise
|
<ASSISTANT_TASK:>
Python Code:
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim library
from modsim import *
# set the random number generator
np.random.seed(7)
def step(state, p1, p2):
    """Simulate one minute of time.

    state: bikeshare State object
    p1: probability of an Olin->Wellesley customer arrival
    p2: probability of a Wellesley->Olin customer arrival
    """
    if flip(p1):
        bike_to_wellesley(state)
    if flip(p2):
        bike_to_olin(state)
def bike_to_wellesley(state):
    """Move one bike from Olin to Wellesley.

    state: bikeshare State object
    """
    state.olin -= 1
    state.wellesley += 1
def bike_to_olin(state):
    """Move one bike from Wellesley to Olin.

    state: bikeshare State object
    """
    state.wellesley -= 1
    state.olin += 1
def decorate_bikeshare():
    """Add a title and label the axes."""
    decorate(title='Olin-Wellesley Bikeshare',
             xlabel='Time step (min)',
             ylabel='Number of bikes')
def run_simulation(state, p1, p2, num_steps):
    """Simulate the given number of time steps.

    state: State object
    p1: probability of an Olin->Wellesley customer arrival
    p2: probability of a Wellesley->Olin customer arrival
    num_steps: number of time steps
    """
    results = TimeSeries()
    for i in range(num_steps):
        step(state, p1, p2)
        results[i] = state.olin
    plot(results, label='Olin')
bikeshare1 = State(olin=10, wellesley=2)
bikeshare2 = State(olin=2, wellesley=10)
bike_to_olin(bikeshare1)
bike_to_wellesley(bikeshare2)
bikeshare1
bikeshare2
bikeshare = State(olin=10, wellesley=2)
run_simulation(bikeshare, 0.4, 0.2, 60)
decorate_bikeshare()
def bike_to_wellesley(state):
    """Move one bike from Olin to Wellesley.

    state: bikeshare State object
    """
    if state.olin == 0:
        return
    state.olin -= 1
    state.wellesley += 1
def bike_to_olin(state):
    """Move one bike from Wellesley to Olin.

    state: bikeshare State object
    """
    if state.wellesley == 0:
        return
    state.wellesley -= 1
    state.olin += 1
bikeshare = State(olin=10, wellesley=2)
run_simulation(bikeshare, 0.4, 0.2, 60)
decorate_bikeshare()
x = 5
x == 5
if x == 5:
print('yes, x is 5')
# If you remove the # from the if statement and run it, you'll get
# SyntaxError: invalid syntax
#if x = 5:
# print('yes, x is 5')
bikeshare = State(olin=10, wellesley=2,
olin_empty=0, wellesley_empty=0)
def bike_to_wellesley(state):
    """Move one bike from Olin to Wellesley.

    state: bikeshare State object
    """
    if state.olin == 0:
        state.olin_empty += 1
        return
    state.olin -= 1
    state.wellesley += 1
def bike_to_olin(state):
    """Move one bike from Wellesley to Olin.

    state: bikeshare State object
    """
    if state.wellesley == 0:
        state.wellesley_empty += 1
        return
    state.wellesley -= 1
    state.olin += 1
run_simulation(bikeshare, 0.4, 0.2, 60)
decorate_bikeshare()
bikeshare.olin_empty
bikeshare.wellesley_empty
bikeshare = State(olin=10, wellesley=2,
olin_empty=0, wellesley_empty=0,
clock=0)
# Solution
def step(state, p1, p2):
    """Simulate one minute of time.

    state: bikeshare State object
    p1: probability of an Olin->Wellesley customer arrival
    p2: probability of a Wellesley->Olin customer arrival
    """
    state.clock += 1
    if flip(p1):
        bike_to_wellesley(state)
    if flip(p2):
        bike_to_olin(state)
# Solution
run_simulation(bikeshare, 0.4, 0.2, 60)
decorate_bikeshare()
# Solution
bikeshare
# Solution
bikeshare = State(olin=10, wellesley=2,
olin_empty=0, wellesley_empty=0,
clock=0, t_first_empty=-1)
# Solution
def step(state, p1, p2):
    """Simulate one minute of time.

    state: bikeshare State object
    p1: probability of an Olin->Wellesley customer arrival
    p2: probability of a Wellesley->Olin customer arrival
    """
    state.clock += 1
    if flip(p1):
        bike_to_wellesley(state)
    if flip(p2):
        bike_to_olin(state)
    if state.t_first_empty != -1:
        return
    if state.olin_empty + state.wellesley_empty > 0:
        state.t_first_empty = state.clock
# Solution
run_simulation(bikeshare, 0.4, 0.2, 60)
decorate_bikeshare()
# Solution
bikeshare
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In this part of the lecture we explain Stochastic Gradient Descent (SGD) which is an optimization method commonly used in neural networks. We will illustrate the concepts with concrete examples.
Step2: You want to find parameters (weights) $a$ and $b$ such that you minimize the error between the points and the line $a\cdot x + b$. Note that here $a$ and $b$ are unknown. For a regression problem the most common error function or loss function is the mean squared error.
Step3: Suppose we believe $a = 10$ and $b = 5$ then we can compute y_hat which is our prediction and then compute our error.
Step4: So far we have specified the model (linear regression) and the evaluation criteria (or loss function). Now we need to handle optimization; that is, how do we find the best values for $a$ and $b$? How do we find the best fitting linear regression.
Step5: Nearly all of deep learning is powered by one very important algorithm
Step6: Nearly all of deep learning is powered by one very important algorithm
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from fastai.learner import *
# Here we generate some fake data
def lin(a,b,x): return a*x+b
def gen_fake_data(n, a, b):
x = s = np.random.uniform(0,1,n)
y = lin(a,b,x) + 0.1 * np.random.normal(0,3,n)
return x, y
x, y = gen_fake_data(50, 3., 8.)
plt.scatter(x,y, s=8); plt.xlabel("x"); plt.ylabel("y");
def mse(y_hat, y): return ((y_hat - y) ** 2).mean()
y_hat = lin(10,5,x)
mse(y_hat, y)
def mse_loss(a, b, x, y): return mse(lin(a,b,x), y)
mse_loss(10, 5, x, y)
# generate some more data
x, y = gen_fake_data(10000, 3., 8.)
x.shape, y.shape
x,y = V(x),V(y)
# Create random weights a and b, and wrap them in Variables.
a = V(np.random.randn(1), requires_grad=True)
b = V(np.random.randn(1), requires_grad=True)
a,b
learning_rate = 1e-3
for t in range(10000):
# Forward pass: compute predicted y using operations on Variables
loss = mse_loss(a,b,x,y)
if t % 1000 == 0: print(loss.data[0])
# Computes the gradient of loss with respect to all Variables with requires_grad=True.
# After this call a.grad and b.grad will be Variables holding the gradient
# of the loss with respect to a and b respectively
loss.backward()
# Update a and b using gradient descent; a.data and b.data are Tensors,
# a.grad and b.grad are Variables and a.grad.data and b.grad.data are Tensors
a.data -= learning_rate * a.grad.data
b.data -= learning_rate * b.grad.data
# Zero the gradients
a.grad.data.zero_()
b.grad.data.zero_()
def gen_fake_data2(n, a, b):
x = s = np.random.uniform(0,1,n)
y = lin(a,b,x) + 0.1 * np.random.normal(0,3,n)
return x, np.where(y>10, 1, 0).astype(np.float32)
x,y = gen_fake_data2(10000, 3., 8.)
x,y = V(x),V(y)
def nll(y_hat, y):
y_hat = np.clip(y_hat, 1e-5, 1-1e-5)
return (y*y_hat.log() + (1-y)*(1-y_hat).log()).mean()
a = V(np.random.randn(1), requires_grad=True)
b = V(np.random.randn(1), requires_grad=True)
learning_rate = 1e-2
for t in range(3000):
p = (-lin(a,b,x)).exp()
y_hat = 1/(1+p)
loss = nll(y_hat,y)
if t % 1000 == 0:
print(loss.data[0], np.mean(to_np(y)==(to_np(y_hat)>0.5)))
# print(y_hat)
loss.backward()
a.data -= learning_rate * a.grad.data
b.data -= learning_rate * b.grad.data
a.grad.data.zero_()
b.grad.data.zero_()
from matplotlib import rcParams, animation, rc
from ipywidgets import interact, interactive, fixed
from ipywidgets.widgets import *
rc('animation', html='html5')
rcParams['figure.figsize'] = 3, 3
x, y = gen_fake_data(50, 3., 8.)
a_guess,b_guess = -1., 1.
mse_loss(a_guess, b_guess, x, y)
lr=0.01
def upd():
global a_guess, b_guess
y_pred = lin(a_guess, b_guess, x)
dydb = 2 * (y_pred - y)
dyda = x*dydb
a_guess -= lr*dyda.mean()
b_guess -= lr*dydb.mean()
fig = plt.figure(dpi=100, figsize=(5, 4))
plt.scatter(x,y)
line, = plt.plot(x,lin(a_guess,b_guess,x))
plt.close()
def animate(i):
line.set_ydata(lin(a_guess,b_guess,x))
for i in range(30): upd()
return line,
ani = animation.FuncAnimation(fig, animate, np.arange(0, 20), interval=100)
ani
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Theory and Algorithm
Step2: The results for a single realization of the simulation are plotted below. As expected, the estimated bias value tends towards the true value, while the position and velocity error terms accumulate over time.
Step3: Filtered Position, Velocity, and Acceleration for a Single Realization
Step4: Monte Carlo Averaged Error
Step5: It is clear from the figure above that the ensemble-averaged error is approximately zero for each of the states. However, further checks need to be invoked to evaluate the performance of the Kalman filter.
Step6: Although not immediately intuitive, the top block of the Kalman gain matrix is the identity matrix. This is because we are defining our measurements as the error between the Euler-integrated position and velocity and the GPS-measured values. Since the corruption in the accelerometer and GPS measurements is a zero-mean white noise process, the bias is the driving influence in our measurements and is the only term that the Kalman filter needs to correct. The orthogonality checks that follow further confirm the validity of our Kalman filter derivation.
Step7: With the error variances calculated for each time step and simulation, and the average error variance determined, it is possible to compute the difference between the average error variance and the single-realization error variance. We expect this difference to be approximately zero when averaged over a significant number of runs.
Step8: Orthogonality
Step9: The plot above indicates that the estimated state is approximately orthogonal to the error in the state estimation, as expected. The significant magnitude in the (2,1) and (2,2) elements of the correlation matrix is likely a result of the white noise processes. This was confirmed by varying the white noise processes and seeing a resulting increase or decrease in the magnitude of the (2,1) element.
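Before the full three-state filter below, the predict/update recursion can be illustrated with a scalar filter estimating a constant from noisy measurements. All numbers here are illustrative assumptions, not the notebook's model:

```python
import numpy as np

def kalman_1d(zs, x0=0.0, M0=1.0, W=1e-4, V=1.0):
    """Scalar Kalman filter with identity dynamics: x[k+1] = x[k] + noise(W)."""
    x, M, estimates = x0, M0, []
    for z in zs:
        K = M / (M + V)        # Kalman gain from a priori covariance M
        x = x + K * (z - x)    # measurement update
        P = (1 - K) * M        # a posteriori covariance
        M = P + W              # time update: propagate covariance forward
        estimates.append(x)
    return estimates

rng = np.random.default_rng(1)
truth = 0.5
zs = truth + rng.normal(0.0, 1.0, 500)   # noisy measurements with variance V = 1
est = kalman_1d(zs)
print(est[-1])  # close to 0.5
```

The matrix recursions in the simulation below (K, P, M, and the Phi/Gamma propagation) are the multi-state generalization of these four scalar lines.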
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
fsize = 15
time_of_sim = 30 # s
freq_acc = 50 # frequency of accelerometer measurements
freq_meas = 2
dt = 1./freq_acc
dt_meas = 1./freq_meas
time_steps_acc = freq_acc*time_of_sim+1
time_steps_meas = freq_meas*time_of_sim+1
time = np.linspace(0, time_of_sim, time_steps_acc)
time_meas = np.linspace(0, time_of_sim, time_steps_meas)
Omega = 0.1
a_truth = 10*np.sin(0.1*time)
v_truth = np.zeros(time_steps_acc)
x_truth = np.zeros(time_steps_acc)
v_truth[0] = 100
x_truth[0] = 0
for idx in range(1,len(time)):
v_truth[idx] = dt*a_truth[idx-1] + v_truth[idx-1]
x_truth[idx] = dt*dt*a_truth[idx-1]/2. + v_truth[idx-1]*dt + x_truth[idx-1]
fig, ax = plt.subplots(3,1,sharex=True,figsize=(15,10))
ax[0].plot(time, x_truth)
ax[0].set_ylabel('Position $(m)$',size=fsize)
ax[1].plot(time, v_truth)
ax[1].set_ylabel('Velocity $(m/s)$',size=fsize)
ax[2].plot(time, a_truth)
ax[2].set_ylabel('Acceleration $(m/s^2)$',size=fsize)
ax[2].set_xlabel('Time $(s)$', size=fsize)
fig.suptitle('Position, Velocity, and Acceleration from Truth Model',size=25,y=0.93)
for i in range(0,3):
ax[i].grid()
plt.show()
import numpy as np
import matplotlib.pyplot as plt
Nsimulations = 10000
ns = 3 # number of states
no = 2 # number of outputs
error_x = np.zeros((time_steps_acc,Nsimulations))
error_v = np.zeros((time_steps_acc,Nsimulations))
error_bias = np.zeros((time_steps_acc,Nsimulations))
for k in range(0,Nsimulations):
# calculate the bias
b = np.random.normal(0, np.sqrt(0.01))
# Define the acceleration from the model
# start by defining a white noise process
w = np.random.normal(0, np.sqrt(0.0001),size=(time_steps_acc))
# corrupted state values
ac = np.zeros((time_steps_acc,1))
ac[:,0] = (a_truth + b + w).T
vc = np.zeros((time_steps_acc,1))
vc[0] = v_truth[0]
xc = np.zeros((time_steps_acc,1))
xc[0] = x_truth[0]
for idx in range(1,len(time)):
vc[idx] = dt*ac[idx-1,0] + vc[idx-1,0]
xc[idx] = dt*dt*ac[idx-1,0]/2. + dt*vc[idx-1,0] + xc[idx-1,0]
# stochastic simulation
# state is error position, error velocity, bias
state = np.zeros((ns,1,time_steps_acc))
z = np.zeros((no,1,time_steps_acc))
residual = np.zeros((no,1,time_steps_acc))
P = np.zeros((ns,ns,time_steps_acc))
M = np.zeros((ns,ns,time_steps_acc))
K = np.zeros((ns,no,time_steps_acc))
V = np.array([[1,0],[0,0.0016]])
H = np.array([[1,0,0],[0,1,0]])
W = 0.0004
Gamma = np.zeros((ns,1)) # Gamma is not time-varying
Gamma[:,0] = np.array([dt**2/2.,dt,0])
Phi = np.array([[1,dt,-dt**2/2.],[0,1,-dt],[0,0,1]])
I = np.eye(ns)
# initialize matricies
M[:,:,0] = np.array([[1,0,0],[0,0.04,0],[0,0,np.sqrt(0.01)]])
igps = []
for i in range(0,len(time)-1):
if time[i]%(1./freq_meas) == 0:
# Update covariances
igps.append(i)
K[:,:,i] = np.matmul(np.matmul(M[:,:,i],H.T),
np.linalg.inv(np.matmul(
H,np.matmul(M[:,:,i],H.T))))
P[:,:,i] = np.matmul((I-np.matmul(K[:,:,i],H)),M[:,:,i])
# measure the things
z_x = x_truth[i] - xc[i] + np.random.normal(0,1)
z_v = v_truth[i] - vc[i] + np.random.normal(0,0.04)
z[:,:,i] = np.array([z_x,z_v])
# update the state
residual[:,:,i] = z[:,:,i] - \
np.matmul(H,np.matmul(Phi[:,:],state[:,:,i-1]))
state[:,:,i] = np.matmul(Phi[:,:],state[:,:,i-1]) + \
np.matmul(K[:,:,i],residual[:,:,i])
M[:,:,i+1] = np.matmul(np.matmul(Phi[:,:],P[:,:,i]),Phi[:,:].T) + \
np.matmul(Gamma*W,Gamma.T)
else:
# if no measurement at current time step, update the a priori covariance
state[:,:,i] = np.matmul(Phi[:,:], state[:,:,i-1])
M[:,:,i+1] = np.matmul(np.matmul(Phi[:,:],M[:,:,i]),Phi[:,:].T) + \
np.matmul(Gamma*W,Gamma.T)
# propogate the last state
i += 1
state[:,:,i] = np.matmul(Phi[:,:], state[:,:,i-1])
error_x[:,k] = x_truth - np.squeeze(xc) - np.squeeze(state[0,0,:])
error_v[:,k] = v_truth - np.squeeze(vc) - np.squeeze(state[1,0,:])
error_bias[:,k] = np.ones(len(x_truth))*b - np.squeeze(state[2,0,:])
ave_error_x = 1./Nsimulations*np.sum(error_x,axis=1)
ave_error_v = 1./Nsimulations*np.sum(error_v,axis=1)
ave_error_bias = 1./Nsimulations*np.sum(error_bias,axis=1)
# plotting things
fig, ax = plt.subplots(3,1,sharex=True,figsize=(15,10))
fsize = 15
ax[0].plot(time,np.squeeze(state[0,:,:]))
ax[0].set_ylabel('x error $(m)$',size=fsize)
ax[1].plot(time,np.squeeze(state[1,:,:]))
ax[1].set_ylabel('v error $(m/s)$',size=fsize)
ax[2].plot(time,np.squeeze(state[2,:,:]))
ax[2].axhline(y=b,color='r')
ax[2].set_xlabel('Time $(s)$',size=fsize)
ax[2].set_ylabel('bias $(m/s^2)$',size=fsize)
ax[2].legend(['Estimated Bias', 'True Bias'], prop={'size': 11})
fig.suptitle('State for a Single Realization',size=25,y=0.93)
for i in range(0,3):
ax[i].grid()
plt.show()
fig, ax = plt.subplots(3,1,sharex=True,figsize=(15,10))
# extract the gps measurements
idx_meas = np.where(z[0,0,:] != 0)
z1 = z[0,0,idx_meas]
z2 = z[1,0,idx_meas]
time_gps = time[idx_meas]
ax[0].plot(time, x_truth-np.squeeze(xc))
ax[0].plot(time_gps, np.squeeze(z1))
ax[0].plot(time, x_truth-np.squeeze(xc)-np.squeeze(state[0,:,:]), '--g')
ax[0].set_ylabel('Position $(m)$',size=fsize)
ax[0].legend(['$p_E-p_c$', 'GPS Measurement', 'Filtered Error'], prop={'size': 13})
ax[1].plot(time, v_truth-np.squeeze(vc))
ax[1].plot(time_gps, np.squeeze(z2))
ax[1].plot(time, v_truth-np.squeeze(vc)-np.squeeze(state[1,:,:]), '--g')
ax[1].set_ylabel('Velocity $(m/s)$',size=fsize)
ax[1].legend(['$v_E-v_c$', 'GPS Measurement', 'Filtered Error'], prop={'size': 13})
ax[2].plot(time, a_truth-np.squeeze(ac))
ax[2].axhline(y=-b, color='r')
ax[2].plot(time, a_truth-np.squeeze(ac)+np.squeeze(state[2,:,:]), '--g')
ax[2].set_ylabel('Acceleration $(m/s^2)$',size=fsize)
ax[2].set_xlabel('Time $(s)$', size=fsize)
ax[2].legend(['$a_E-a_c$', 'Bias Value', 'Filtered Error'], prop={'size': 13})
fig.suptitle('State Error and Measurements for One Realization',size=22,y=0.93)
for i in range(0,3):
ax[i].grid()
plt.show()
fig, ax = plt.subplots(3,1,sharex=True,figsize=(15,10))
ax[0].plot(time, ave_error_x)
ax[0].set_ylabel('Position Error $(m)$', size=fsize)
ax[1].plot(time, ave_error_v)
ax[1].set_ylabel('Velocity Error $(m/s)$', size=fsize)
ax[2].plot(time, ave_error_bias)
ax[2].set_ylabel('Bias Error $(m/s^2)$', size=fsize)
ax[2].set_xlabel('Time (s)', size=fsize)
fig.suptitle('Averaged Ensemble Error',size=25,y=0.93)
for i in range(0,3):
ax[i].grid()
plt.show()
Kidx = np.where(K[0,0,:]!=0)
Kgain = K[:,:,Kidx]
t_idx = time[Kidx]
fig, ax = plt.subplots(3,2,sharex=True,figsize=(15,10))
ax[0,0].scatter(t_idx, np.squeeze(Kgain[0,0,:]))
ax[0,0].set_ylabel('K[0,0]', size=fsize)
ax[1,0].scatter(t_idx, np.squeeze(Kgain[1,0,:]))
ax[1,0].set_ylabel('K[1,0]', size=fsize)
ax[0,1].scatter(t_idx, np.squeeze(Kgain[0,1,:]))
ax[0,1].set_ylabel('K[0,1]', size=fsize)
ax[1,1].scatter(t_idx, np.squeeze(Kgain[1,1,:]))
ax[1,1].set_ylabel('K[1,1]', size=fsize)
ax[2,1].scatter(t_idx, np.squeeze(Kgain[2,1,:]))
ax[2,1].set_xlabel('Time (s)', size=fsize)
ax[2,1].set_ylabel('K[2,1]', size=fsize)
ax[2,0].scatter(t_idx, np.squeeze(Kgain[2,0,:]))
ax[2,0].set_ylabel('K[2,0]', size=fsize)
ax[2,0].set_xlabel('Time (s)', size=fsize)
fig.suptitle('Kalman Gain Matrix Elements',size=25,y=0.93)
for i in range(0,3):
for j in range(0,2):
ax[i,j].grid()
plt.show()
j = 0
k = 1
Pave = np.zeros((ns,ns,time_steps_acc)) # error variance averaged over all runs
Peach = np.zeros((ns,ns,time_steps_acc,Nsimulations)) # error variances for each simulation
error_P = np.zeros((ns,ns,time_steps_acc,Nsimulations))
error_P_mean = np.zeros((ns,ns,time_steps_acc))
for idx_t, current_time in enumerate(time):
for idx_sim in range(0,Nsimulations):
el = np.array([[error_x[idx_t,idx_sim],error_v[idx_t,idx_sim],
error_bias[idx_t,idx_sim]]])
el_ave = np.array([[ave_error_x[idx_t],ave_error_v[idx_t],
ave_error_bias[idx_t]]])
Peach[:,:,idx_t,idx_sim] = np.matmul((el-el_ave).T,(el-el_ave))
Pave[:,:,idx_t] += 1./Nsimulations*Peach[:,:,idx_t,idx_sim]
error_P[:,:,idx_t,idx_sim] = Pave[:,:,idx_t] - Peach[:,:,idx_t,idx_sim]
error_P_mean[:,:,idx_t] += 1./Nsimulations*error_P[:,:,idx_t,idx_sim]
fig, ax = plt.subplots(3,3,sharex=True,figsize=(15,10))
fig.suptitle('$P^{ave}-P_k$ Matrix Elements',size=25,y=0.93)
for i in range(0,3):
for j in range(0,3):
ax[i,j].grid()
ax[i,j].plot(time, np.squeeze(error_P_mean[i,j,:]))
plt.show()
true_state = np.zeros((ns,1,time_steps_acc))
true_state[0,0,:] = np.squeeze(x_truth)-np.squeeze(xc)
true_state[1,0,:] = np.squeeze(v_truth)-np.squeeze(vc)
true_state[2,0,:] = np.squeeze(a_truth)-np.squeeze(ac)
orthogonality_check = np.zeros((ns,ns,time_steps_acc))
k = time_steps_acc-1
for i in range(0,k):
orthogonality_check[:,:,i] = np.matmul((np.squeeze(true_state[:,:,i])-
np.squeeze(state[:,:,i]))[:,np.newaxis],state[:,:,i].T)
fig, ax = plt.subplots(3,3,sharex=True,figsize=(15,10))
fig.suptitle('Orthogonality of the State Estimate and the State Estimate Error',size=25,y=0.93)
for i in range(0,3):
for j in range(0,3):
ax[i,j].plot(time, np.squeeze(orthogonality_check[i,j,:]))
ax[i,j].grid()
plt.show()
# Innovation whiteness check: residuals from two distinct GPS updates
# should be (nearly) uncorrelated if the filter is consistent.
np.matmul(residual[:,:,igps[4]],residual[:,:,igps[1]].T)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load and process review dataset
Step2: Just like we did previously, we will work with a hand-curated list of important words extracted from the review data. We will also perform 2 simple data transformations
Step3: The SFrame products now contains one column for each of the 193 important_words.
Step4: Split data into training and validation sets
Step5: Convert SFrame to NumPy array
Step6: Note that we convert both the training and validation sets into NumPy arrays.
Step7: Are you running this notebook on an Amazon EC2 t2.micro instance? (If you are using your own machine, please skip this section)
Step8: Derivative of log likelihood with respect to a single coefficient
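For reference, the per-coefficient derivative that feature_derivative computes below is the standard logistic-regression gradient component (with errors = indicator - predictions):

```latex
\frac{\partial \ell(\mathbf{w})}{\partial w_j}
  = \sum_{i=1}^{N} \Big( \mathbf{1}[y_i = +1] - P(y_i = +1 \mid \mathbf{x}_i, \mathbf{w}) \Big)\, x_{i,j},
\qquad
\text{mini-batch version: }
\frac{1}{B} \sum_{i \in \text{batch}} \Big( \mathbf{1}[y_i = +1] - P(y_i = +1 \mid \mathbf{x}_i, \mathbf{w}) \Big)\, x_{i,j}
```

The 1/B factor is the same normalization constant applied in the coefficient update inside logistic_regression_SG.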
Step9: Note. We are not using regularization in this assignment, but, as discussed in the optional video, stochastic gradient can also be used for regularized logistic regression.
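As a hedged side sketch (not part of the assignment): with an L2 penalty, the per-coefficient derivative simply gains a -2 * l2_penalty * coefficient term for non-intercept weights. All numbers below are made up for illustration.

```python
import numpy as np

# Hypothetical values: errors = indicator - predictions for two data points,
# and the corresponding feature column.
errors = np.array([0.5, -0.3])
feature = np.array([1.0, 2.0])
l2_penalty = 0.1
coefficient = 0.4  # current value of the (non-intercept) weight

# L2-regularized derivative for this coefficient
derivative = np.dot(errors, feature) - 2 * l2_penalty * coefficient
```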
Step10: Quiz Question
Step11: Quiz Question
Step12: Quiz Question
Step13: Averaging the gradient across a batch
Step14: Note. In practice, the final set of coefficients is rarely used; it is better to use the average of the last K sets of coefficients instead, where K should be adjusted depending on how fast the log likelihood oscillates around the optimum.
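A minimal sketch of that averaging idea. The coefficient vectors here are made up; in practice you would store the vectors produced by the last K iterations of logistic_regression_SG and average those.

```python
import numpy as np

# Hypothetical coefficient vectors saved at the last few iterations
recent_coefficients = [
    np.array([0.0, 1.0]),
    np.array([0.2, 0.8]),
    np.array([0.4, 1.2]),
]
K = 3
# Average the last K coefficient vectors element-wise
averaged = np.mean(np.stack(recent_coefficients[-K:]), axis=0)
```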
Step15: Compare convergence behavior of stochastic gradient ascent
Step16: Quiz Question. When you set batch_size = 1, as each iteration passes, how does the average log likelihood in the batch change?
Step17: Quiz Question. When you set batch_size = len(feature_matrix_train), as each iteration passes, how does the average log likelihood in the batch change?
Step18: Log likelihood plots for stochastic gradient ascent
Step19: We provide you with a utility function to plot the average log likelihood as a function of the number of passes.
Step20: Smoothing the stochastic gradient ascent curve
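The smoothing used by the plotting utility is a plain moving average; here it is in isolation on a toy series, using the same np.convolve(..., mode='valid') call.

```python
import numpy as np

# Toy log-likelihood series and a small smoothing window
log_likelihood_all = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
smoothing_window = 2

# Moving average: each output point is the mean of `smoothing_window`
# consecutive input points; 'valid' drops the partial windows at the edges.
smoothed = np.convolve(log_likelihood_all,
                       np.ones(smoothing_window) / smoothing_window,
                       mode='valid')
```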
Step21: Checkpoint
Step22: We compare the convergence of stochastic gradient ascent and batch gradient ascent in the following cell. Note that we apply smoothing with smoothing_window=30.
Step23: Quiz Question
Step24: Plotting the log likelihood as a function of passes for each step size
Step25: Now, let us remove the step size step_size = 1e2 and plot the rest of the curves.
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import division
import graphlab
products = graphlab.SFrame('amazon_baby_subset.gl/')
import json
with open('important_words.json', 'r') as f:
important_words = json.load(f)
important_words = [str(s) for s in important_words]
# Remove punctuation
def remove_punctuation(text):
import string
return text.translate(None, string.punctuation)
products['review_clean'] = products['review'].apply(remove_punctuation)
# Split out the words into individual columns
for word in important_words:
products[word] = products['review_clean'].apply(lambda s : s.split().count(word))
products
train_data, validation_data = products.random_split(.9, seed=1)
print 'Training set : %d data points' % len(train_data)
print 'Validation set: %d data points' % len(validation_data)
import numpy as np
def get_numpy_data(data_sframe, features, label):
data_sframe['intercept'] = 1
features = ['intercept'] + features
features_sframe = data_sframe[features]
feature_matrix = features_sframe.to_numpy()
label_sarray = data_sframe[label]
label_array = label_sarray.to_numpy()
return(feature_matrix, label_array)
feature_matrix_train, sentiment_train = get_numpy_data(train_data, important_words, 'sentiment')
feature_matrix_valid, sentiment_valid = get_numpy_data(validation_data, important_words, 'sentiment')
'''
produces probablistic estimate for P(y_i = +1 | x_i, w).
estimate ranges between 0 and 1.
'''
def predict_probability(feature_matrix, coefficients):
# Take dot product of feature_matrix and coefficients
score = np.dot(feature_matrix, coefficients)
# Compute P(y_i = +1 | x_i, w) using the link function
predictions = 1. / (1.+np.exp(-score))
return predictions
def feature_derivative(errors, feature):
# Compute the dot product of errors and feature
## YOUR CODE HERE
derivative = np.dot(errors, feature)
return derivative
def compute_avg_log_likelihood(feature_matrix, sentiment, coefficients):
indicator = (sentiment==+1)
scores = np.dot(feature_matrix, coefficients)
logexp = np.log(1. + np.exp(-scores))
# Simple check to prevent overflow
mask = np.isinf(logexp)
logexp[mask] = -scores[mask]
lp = np.sum((indicator-1)*scores - logexp)/len(feature_matrix)
return lp
j = 1 # Feature number
i = 10 # Data point number
coefficients = np.zeros(194) # A point w at which we are computing the gradient.
predictions = predict_probability(feature_matrix_train[i:i+1,:], coefficients)
indicator = (sentiment_train[i:i+1]==+1)
errors = indicator - predictions
gradient_single_data_point = feature_derivative(errors, feature_matrix_train[i:i+1,j])
print "Gradient single data point: %s" % gradient_single_data_point
print " --> Should print 0.0"
j = 1 # Feature number
i = 10 # Data point start
B = 10 # Mini-batch size
coefficients = np.zeros(194) # A point w at which we are computing the gradient.
predictions = predict_probability(feature_matrix_train[i:i+B,:], coefficients)
indicator = (sentiment_train[i:i+B]==+1)
errors = indicator - predictions
gradient_mini_batch = feature_derivative(errors, feature_matrix_train[i:i+B,j])
print "Gradient mini-batch data points: %s" % gradient_mini_batch
print " --> Should print 1.0"
len(train_data)
from math import sqrt
def logistic_regression_SG(feature_matrix, sentiment, initial_coefficients, step_size, batch_size, max_iter):
log_likelihood_all = []
# make sure it's a numpy array
coefficients = np.array(initial_coefficients)
# set seed=1 to produce consistent results
np.random.seed(seed=1)
# Shuffle the data before starting
permutation = np.random.permutation(len(feature_matrix))
feature_matrix = feature_matrix[permutation,:]
sentiment = sentiment[permutation]
i = 0 # index of current batch
# Do a linear scan over data
for itr in xrange(max_iter):
# Predict P(y_i = +1|x_i,w) using your predict_probability() function
# Make sure to slice the i-th row of feature_matrix with [i:i+batch_size,:]
### YOUR CODE HERE
predictions = predict_probability(feature_matrix[i:i+batch_size,:], coefficients)
# Compute indicator value for (y_i = +1)
# Make sure to slice the i-th entry with [i:i+batch_size]
### YOUR CODE HERE
indicator = (sentiment[i:i+batch_size]==+1)
# Compute the errors as indicator - predictions
errors = indicator - predictions
for j in xrange(len(coefficients)): # loop over each coefficient
# Recall that feature_matrix[:,j] is the feature column associated with coefficients[j]
# Compute the derivative for coefficients[j] and save it to derivative.
# Make sure to slice the i-th row of feature_matrix with [i:i+batch_size,j]
### YOUR CODE HERE
derivative = feature_derivative(errors, feature_matrix[i:i+batch_size,j])
# compute the product of the step size, the derivative, and the **normalization constant** (1./batch_size)
### YOUR CODE HERE
coefficients[j] += (1/batch_size) * step_size * derivative
# Checking whether log likelihood is increasing
# Print the log likelihood over the *current batch*
lp = compute_avg_log_likelihood(feature_matrix[i:i+batch_size,:], sentiment[i:i+batch_size],
coefficients)
log_likelihood_all.append(lp)
if itr <= 15 or (itr <= 1000 and itr % 100 == 0) or (itr <= 10000 and itr % 1000 == 0) \
or itr % 10000 == 0 or itr == max_iter-1:
data_size = len(feature_matrix)
print 'Iteration %*d: Average log likelihood (of data points in batch [%0*d:%0*d]) = %.8f' % \
(int(np.ceil(np.log10(max_iter))), itr, \
int(np.ceil(np.log10(data_size))), i, \
int(np.ceil(np.log10(data_size))), i+batch_size, lp)
# if we made a complete pass over data, shuffle and restart
i += batch_size
if i+batch_size > len(feature_matrix):
permutation = np.random.permutation(len(feature_matrix))
feature_matrix = feature_matrix[permutation,:]
sentiment = sentiment[permutation]
i = 0
# We return the list of log likelihoods for plotting purposes.
return coefficients, log_likelihood_all
sample_feature_matrix = np.array([[1.,2.,-1.], [1.,0.,1.]])
sample_sentiment = np.array([+1, -1])
coefficients, log_likelihood = logistic_regression_SG(sample_feature_matrix, sample_sentiment, np.zeros(3),
step_size=1., batch_size=2, max_iter=2)
print '-------------------------------------------------------------------------------------'
print 'Coefficients learned :', coefficients
print 'Average log likelihood per-iteration :', log_likelihood
if np.allclose(coefficients, np.array([-0.09755757, 0.68242552, -0.7799831]), atol=1e-3)\
and np.allclose(log_likelihood, np.array([-0.33774513108142956, -0.2345530939410341])):
# pass if elements match within 1e-3
print '-------------------------------------------------------------------------------------'
print 'Test passed!'
else:
print '-------------------------------------------------------------------------------------'
print 'Test failed'
coefficients, log_likelihood = logistic_regression_SG(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-1, batch_size=1, max_iter=10)
# YOUR CODE HERE
coefficients_batch, log_likelihood_batch = logistic_regression_SG(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-1, batch_size=len(feature_matrix_train), max_iter=200)
2 * (50000 / 100)
step_size = 1e-1
batch_size = 100
num_passes = 10
num_iterations = num_passes * int(len(feature_matrix_train)/batch_size)
coefficients_sgd, log_likelihood_sgd = logistic_regression_SG(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=1e-1, batch_size=100, max_iter=num_iterations)
import matplotlib.pyplot as plt
%matplotlib inline
def make_plot(log_likelihood_all, len_data, batch_size, smoothing_window=1, label=''):
plt.rcParams.update({'figure.figsize': (9,5)})
log_likelihood_all_ma = np.convolve(np.array(log_likelihood_all), \
np.ones((smoothing_window,))/smoothing_window, mode='valid')
plt.plot(np.array(range(smoothing_window-1, len(log_likelihood_all)))*float(batch_size)/len_data,
log_likelihood_all_ma, linewidth=4.0, label=label)
plt.rcParams.update({'font.size': 16})
plt.tight_layout()
plt.xlabel('# of passes over data')
plt.ylabel('Average log likelihood per data point')
plt.legend(loc='lower right', prop={'size':14})
make_plot(log_likelihood_sgd, len_data=len(feature_matrix_train), batch_size=100,
label='stochastic gradient, step_size=1e-1')
make_plot(log_likelihood_sgd, len_data=len(feature_matrix_train), batch_size=100,
smoothing_window=30, label='stochastic gradient, step_size=1e-1')
step_size = 1e-1
batch_size = 100
num_passes = 200
num_iterations = num_passes * int(len(feature_matrix_train)/batch_size)
## YOUR CODE HERE
coefficients_sgd, log_likelihood_sgd = logistic_regression_SG(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=1e-1, batch_size=100, max_iter=num_iterations)
make_plot(log_likelihood_sgd, len_data=len(feature_matrix_train), batch_size=100,
smoothing_window=30, label='stochastic, step_size=1e-1')
make_plot(log_likelihood_batch, len_data=len(feature_matrix_train), batch_size=len(feature_matrix_train),
smoothing_window=1, label='batch, step_size=5e-1')
batch_size = 100
num_passes = 10
num_iterations = num_passes * int(len(feature_matrix_train)/batch_size)
coefficients_sgd = {}
log_likelihood_sgd = {}
for step_size in np.logspace(-4, 2, num=7):
coefficients_sgd[step_size], log_likelihood_sgd[step_size] = logistic_regression_SG(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=step_size, batch_size=batch_size, max_iter=num_iterations)
for step_size in np.logspace(-4, 2, num=7):
make_plot(log_likelihood_sgd[step_size], len_data=len(train_data), batch_size=100,
smoothing_window=30, label='step_size=%.1e'%step_size)
for step_size in np.logspace(-4, 2, num=7)[0:6]:
make_plot(log_likelihood_sgd[step_size], len_data=len(train_data), batch_size=100,
smoothing_window=30, label='step_size=%.1e'%step_size)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Update the following notebook settings
Step2: Ensure the repo is up to date
Step3: Cherry picks for minor release
Step4: Run tests locally
Step5: Update release related stuff
Step6: Manual step
Step7: Commit changes
Step8: Update translations
Step9: Tag new version
Step10: Prepare stable branch for major release
Step11: Build the gem
Step12: Bump the develop version for major release
Step13: PUSH the changes upstream If everything is correct
Step14: Now the new release is in the upstream repo
|
<ASSISTANT_TASK:>
Python Code:
%cd ..
NEW_VERSION = '3.0.0'
LAST_VERSION = '2.5.1'
DEVELOP_VERSION = '2.6.0-develop'
NEXT_FUTURE_VERSION = '3.0.0'
MAJOR_RELEASE = True
STABLE_BRANCH = '2.5-stable'
GIT_REMOTE_UPSTREAM = 'origin'
WORK_BRANCH = 'master' if MAJOR_RELEASE else STABLE_BRANCH
CHERRY_PICKS = []
! git checkout {WORK_BRANCH}
! git fetch {GIT_REMOTE_UPSTREAM}
! git rebase {GIT_REMOTE_UPSTREAM}/{WORK_BRANCH}
if not MAJOR_RELEASE:
for cp in CHERRY_PICKS:
! git cherry-pick -x {cp}
! bundle update
! bundle exec rake test
! sed -i 's/Gem::Version.new .*/Gem::Version.new "{NEW_VERSION}"/' lib/hammer_cli/version.rb
# Parse git changelog
from IPython.display import Markdown as md
from subprocess import check_output
from shlex import split
import re
def format_log_entry(entry):
issues = re.findall(r'[^(]#([0-9]+)', entry)
entry = re.sub(r'([fF]ixes|[rR]efs)[^-]*-\s*(.*)', r'\2', entry)
entry = '* ' + entry.capitalize()
entry = re.sub(r'\(#([0-9]+)\)', r'([PR #\1](https://github.com/theforeman/hammer-cli/pull/\1))', entry)
for i in issues:
referenced_issues.append(i)
entry = entry + ', [#%s](http://projects.theforeman.org/issues/%s)' % (i, i)
return entry
def skip(entry):
if re.match(r'Merge pull', entry) or \
re.match(r'^i18n', entry) or \
re.match(r'^Bump to version', entry):
return True
else:
return False
referenced_issues = []
git_log_cmd = 'git log --pretty=format:"%%s" %s..HEAD' % LAST_VERSION
log = check_output(split(git_log_cmd)).decode('utf8').split('\n')
change_log = [format_log_entry(e) for e in log if not skip(e)]
md('\n'.join(change_log))
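To see what format_log_entry does to a single commit subject, here is a standalone walk-through of the same regex steps on a made-up commit message (the message, issue number, and PR number are hypothetical):

```python
import re

# Hypothetical commit subject line
msg = 'Fixes #1234 - add the search subcommand (#56)'

# Collect referenced Redmine issues (the '(#56)' PR reference is excluded
# because '(' is rejected by the [^(] prefix).
issues = re.findall(r'[^(]#([0-9]+)', msg)

# Strip the 'Fixes ... -' prefix, keep only the description
entry = re.sub(r'([fF]ixes|[rR]efs)[^-]*-\s*(.*)', r'\2', msg)
entry = '* ' + entry.capitalize()

# Turn '(#56)' into a markdown PR link
entry = re.sub(r'\(#([0-9]+)\)',
               r'([PR #\1](https://github.com/theforeman/hammer-cli/pull/\1))',
               entry)

# Append the Redmine issue links, as format_log_entry does
for i in issues:
    entry = entry + ', [#%s](http://projects.theforeman.org/issues/%s)' % (i, i)
```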
# Write release notes
from datetime import datetime
import fileinput
import sys
fh = fileinput.input('doc/release_notes.md', inplace=True)
for line in fh:
print(line.rstrip())
if re.match(r'========', line):
print('### %s (%s)' % (NEW_VERSION, datetime.today().strftime('%Y-%m-%d')))
for entry in change_log:
print(entry)
print('')
fh.close()
! git add -u
! git status
! git diff --cached
! git commit -m "Bump to {NEW_VERSION}"
if MAJOR_RELEASE:
! make -C locale/ tx-update
! git tag {NEW_VERSION}
if MAJOR_RELEASE:
! git checkout -b {STABLE_BRANCH}
! git push {GIT_REMOTE_UPSTREAM} {STABLE_BRANCH}
! git checkout {WORK_BRANCH}
! rake build
! gem push pkg/hammer_cli-{NEW_VERSION}.gem
if MAJOR_RELEASE:
! sed -i 's/Gem::Version.new .*/Gem::Version.new "{DEVELOP_VERSION}"/' lib/hammer_cli/version.rb
if MAJOR_RELEASE:
! git add -u
! git status
if MAJOR_RELEASE:
! git diff --cached
if MAJOR_RELEASE:
! git commit -m "Bump to {DEVELOP_VERSION}"
! git push {GIT_REMOTE_UPSTREAM} {WORK_BRANCH}
! git push --tags {GIT_REMOTE_UPSTREAM} {WORK_BRANCH}
print('\n')
print('\n'.join(change_log))
print('\n\nhttps://github.com/theforeman/hammer-cli/releases/new?tag=%s' % NEW_VERSION)
from IPython.display import Markdown as md
md('### Create new hammer-cli release in Redmine \n' + \
'<a href="https://projects.theforeman.org/projects/hammer-cli/versions/new" target="_blank">https://projects.theforeman.org/projects/hammer-cli/versions/new</a>\n\n' + \
'Set name to hammer-cli-%s' % (NEXT_FUTURE_VERSION if MAJOR_RELEASE else NEW_VERSION))
if not MAJOR_RELEASE:
print('Set fixed in versions to %s in following issues:' % NEW_VERSION)
for i in referenced_issues:
print('- https://projects.theforeman.org/issues/%s' % i)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Listwise ranking
Step2: We can then import all the necessary packages
Step3: We will continue to use the MovieLens 100K dataset. As before, we load the datasets and keep only the user id, movie title, and user rating features for this tutorial. We also do some housekeeping to prepare our vocabularies.
Step4: Data preprocessing
Step5: We can inspect an example from the training data. The example includes a user id, a list of 5 movie ids, and their ratings by the user.
Step6: Model definition
Step7: Training the models
Step8: Mean squared error model
Step9: Pairwise hinge loss model
Step10: Listwise model
Step11: Comparing the models
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
!pip install -q tensorflow-recommenders
!pip install -q --upgrade tensorflow-datasets
!pip install -q tensorflow-ranking
import pprint
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
import tensorflow_ranking as tfr
import tensorflow_recommenders as tfrs
ratings = tfds.load("movielens/100k-ratings", split="train")
movies = tfds.load("movielens/100k-movies", split="train")
ratings = ratings.map(lambda x: {
"movie_title": x["movie_title"],
"user_id": x["user_id"],
"user_rating": x["user_rating"],
})
movies = movies.map(lambda x: x["movie_title"])
unique_movie_titles = np.unique(np.concatenate(list(movies.batch(1000))))
unique_user_ids = np.unique(np.concatenate(list(ratings.batch(1_000).map(
lambda x: x["user_id"]))))
tf.random.set_seed(42)
# Split between train and tests sets, as before.
shuffled = ratings.shuffle(100_000, seed=42, reshuffle_each_iteration=False)
train = shuffled.take(80_000)
test = shuffled.skip(80_000).take(20_000)
# We sample 50 lists for each user for the training data. For each list we
# sample 5 movies from the movies the user rated.
train = tfrs.examples.movielens.sample_listwise(
train,
num_list_per_user=50,
num_examples_per_list=5,
seed=42
)
test = tfrs.examples.movielens.sample_listwise(
test,
num_list_per_user=1,
num_examples_per_list=5,
seed=42
)
for example in train.take(1):
pprint.pprint(example)
class RankingModel(tfrs.Model):
def __init__(self, loss):
super().__init__()
embedding_dimension = 32
# Compute embeddings for users.
self.user_embeddings = tf.keras.Sequential([
tf.keras.layers.StringLookup(
vocabulary=unique_user_ids),
tf.keras.layers.Embedding(len(unique_user_ids) + 2, embedding_dimension)
])
# Compute embeddings for movies.
self.movie_embeddings = tf.keras.Sequential([
tf.keras.layers.StringLookup(
vocabulary=unique_movie_titles),
tf.keras.layers.Embedding(len(unique_movie_titles) + 2, embedding_dimension)
])
# Compute predictions.
self.score_model = tf.keras.Sequential([
# Learn multiple dense layers.
tf.keras.layers.Dense(256, activation="relu"),
tf.keras.layers.Dense(64, activation="relu"),
# Make rating predictions in the final layer.
tf.keras.layers.Dense(1)
])
self.task = tfrs.tasks.Ranking(
loss=loss,
metrics=[
tfr.keras.metrics.NDCGMetric(name="ndcg_metric"),
tf.keras.metrics.RootMeanSquaredError()
]
)
def call(self, features):
# We first convert the id features into embeddings.
# User embeddings are a [batch_size, embedding_dim] tensor.
user_embeddings = self.user_embeddings(features["user_id"])
# Movie embeddings are a [batch_size, num_movies_in_list, embedding_dim]
# tensor.
movie_embeddings = self.movie_embeddings(features["movie_title"])
# We want to concatenate user embeddings with movie emebeddings to pass
# them into the ranking model. To do so, we need to reshape the user
# embeddings to match the shape of movie embeddings.
list_length = features["movie_title"].shape[1]
user_embedding_repeated = tf.repeat(
tf.expand_dims(user_embeddings, 1), [list_length], axis=1)
# Once reshaped, we concatenate and pass into the dense layers to generate
# predictions.
concatenated_embeddings = tf.concat(
[user_embedding_repeated, movie_embeddings], 2)
return self.score_model(concatenated_embeddings)
def compute_loss(self, features, training=False):
labels = features.pop("user_rating")
scores = self(features)
return self.task(
labels=labels,
predictions=tf.squeeze(scores, axis=-1),
)
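# Shape-only sketch of the embedding reshape performed in call() above,
# written with NumPy instead of TensorFlow. The sizes are hypothetical.
import numpy as np

batch_size, list_length, embedding_dim = 2, 5, 4
user_embeddings = np.zeros((batch_size, embedding_dim))
movie_embeddings = np.zeros((batch_size, list_length, embedding_dim))

# Repeat each user embedding once per movie in the list ...
user_embedding_repeated = np.repeat(
    np.expand_dims(user_embeddings, 1), list_length, axis=1)
# ... then concatenate along the embedding axis.
concatenated = np.concatenate(
    [user_embedding_repeated, movie_embeddings], axis=2)
# concatenated now has shape (batch_size, list_length, 2 * embedding_dim)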
epochs = 30
cached_train = train.shuffle(100_000).batch(8192).cache()
cached_test = test.batch(4096).cache()
mse_model = RankingModel(tf.keras.losses.MeanSquaredError())
mse_model.compile(optimizer=tf.keras.optimizers.Adagrad(0.1))
mse_model.fit(cached_train, epochs=epochs, verbose=False)
hinge_model = RankingModel(tfr.keras.losses.PairwiseHingeLoss())
hinge_model.compile(optimizer=tf.keras.optimizers.Adagrad(0.1))
hinge_model.fit(cached_train, epochs=epochs, verbose=False)
listwise_model = RankingModel(tfr.keras.losses.ListMLELoss())
listwise_model.compile(optimizer=tf.keras.optimizers.Adagrad(0.1))
listwise_model.fit(cached_train, epochs=epochs, verbose=False)
mse_model_result = mse_model.evaluate(cached_test, return_dict=True)
print("NDCG of the MSE Model: {:.4f}".format(mse_model_result["ndcg_metric"]))
hinge_model_result = hinge_model.evaluate(cached_test, return_dict=True)
print("NDCG of the pairwise hinge loss model: {:.4f}".format(hinge_model_result["ndcg_metric"]))
listwise_model_result = listwise_model.evaluate(cached_test, return_dict=True)
print("NDCG of the ListMLE model: {:.4f}".format(listwise_model_result["ndcg_metric"]))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: To initialize the extractor, you'll only need the starting and ending epoch of the time period you wish to visualize and the number of sample points. The larger the sample size, the more accurate the trajectory and the bigger your packets. Finding the sweet spot between reasonable packet size and visual accuracy depends on the specific orbit. Generally, you'll need a bigger sample for faster satellites. You could also "break" your orbit into different parts and define the sample size individually (for example, this could be useful when the satellite accelerates within a certain time interval).
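Since the right N depends on the orbit, one illustrative heuristic is to derive it from the orbital period. This function and its thresholds are entirely hypothetical (they are not part of poliastro); tune them for your own orbits.

```python
def sample_points(period_s, target_dt_s=60.0, n_min=30, n_max=400):
    # Roughly one sample per `target_dt_s` seconds of orbit,
    # clamped to a sane [n_min, n_max] range.
    return max(n_min, min(n_max, int(period_s / target_dt_s)))
```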
Step2: To add an orbit you can simply call add_orbit and pass your Orbit along with an optional precision parameter (rtol). However, there are also many optional parameters you can pass to the extractor to specify the visual characteristics of your trajectory
Step3: You can now export the extractor packets by simply calling extractor.packets and load it to the Cesium app as described here
Step4: Landing on Mars
Step5: Of course, when defining a new attractor you want to be able to identify something other than its shape. For this reason, the extractor allows you to easily set the UV map by simply providing a valid URL.
Step6: Return to Flatland
Step7: Ground track plotting
|
<ASSISTANT_TASK:>
Python Code:
from poliastro.czml.extract_czml import CZMLExtractor
from poliastro.examples import molniya
start_epoch = molniya.epoch
end_epoch = molniya.epoch + molniya.period
N = 80
extractor = CZMLExtractor(
start_epoch,
end_epoch,
N
)
extractor.add_orbit(molniya,
id_name="MolniyaOrbit",
path_width=2,
label_text="Molniya",
label_fill_color=[125, 80, 120, 255]
)
extractor.packets
from poliastro.bodies import Mars
mars_uv = "https://upload.wikimedia.org/wikipedia/commons/f/fd/Mars_2020_LandingSites_Final_8-full.jpg"
extractor = CZMLExtractor(
start_epoch,
end_epoch,
N,
attractor=Mars,
pr_map=mars_uv
)
extractor.packets
extractor = CZMLExtractor(
start_epoch,
end_epoch,
N,
scene3D=False
)
extractor.packets
extractor = CZMLExtractor(
start_epoch,
end_epoch,
N
)
extractor.add_orbit(molniya,
groundtrack_show=True,
groundtrack_lead_time=20,
groundtrack_trail_time=20
)
extractor.packets
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: For this section we will use the Boston Housing Data.
Step2: <img src='https
Step3: Model (Introducing Tensorboard)
Step4: Learning in a TF Session
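The graph built for this step encodes ordinary linear regression trained by gradient descent. In symbols, matching the cost and optimizer used in the code:

```latex
\hat{\mathbf{y}} = X\mathbf{w} + b,
\qquad
J(\mathbf{w}, b) = \frac{1}{m} \sum_{i=1}^{m} \left( \hat{y}_i - y_i \right)^2,
\qquad
\mathbf{w} \leftarrow \mathbf{w} - \eta \, \frac{\partial J}{\partial \mathbf{w}}
```

where eta is the learning_rate passed to tf.train.GradientDescentOptimizer.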
Step5: Using Keras
Step6: Be more specific with hyperparameters...
Step7: Simplicity is pretty impressive, right?
Step8: Multi-Layer Fully Connected Networks
Step9: What does the cost function behavior mean over the training in the above plot?
|
<ASSISTANT_TASK:>
Python Code:
from sklearn.datasets import load_boston
boston_dataset = load_boston()
print(boston_dataset.DESCR)
import pandas as pd
from sklearn.datasets import load_boston
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
boston_dataset = load_boston()
boston = pd.DataFrame(boston_dataset.data, columns=boston_dataset.feature_names)
X = boston.copy()  # keep all 13 feature columns; the target is separate (boston_dataset.target)
Y = pd.DataFrame(np.array(boston_dataset.target), columns=['labels'])
boston.head()
# Split datasets.
X_train, X_test , Y_train, Y_test = train_test_split(X,Y, test_size=0.3 ,random_state=22)
# Normalize Data
from sklearn.preprocessing import StandardScaler
# Define the preprocessing method and fit it
# (note: fitting on the full X leaks test-set statistics; fitting on X_train only is safer)
scaler = StandardScaler()
scaler.fit(X)
# Replace X_train and X_test with their scaled versions
# (every feature column is standardized to zero mean and unit variance)
X_train = pd.DataFrame(data=scaler.transform(X_train), columns=X_train.columns, index=X_train.index)
X_test = pd.DataFrame(data=scaler.transform(X_test), columns=X_test.columns, index=X_test.index)
X_train = np.array(X_train)
Y_train = np.array(Y_train)
X_test = np.array(X_test)
Y_test = np.array(Y_test)
# As it is a regression problem, the output is a single neuron.
output_var = Y_train.shape[1]
print(output_var, ' output variables')
dims = X_train.shape[1]
print(dims, 'input variables')
Y_train.shape
import tensorflow as tf
# Parameters
learning_rate = 0.01
training_epochs = 150
display_step = 1
# tf Graph Input
x = tf.placeholder(tf.float32, [None, dims])
y = tf.placeholder(tf.float32, [None,1])
# Try to print a placeholder.
x
# Construct (linear) model
with tf.name_scope("model") as scope:
# Set model weights
W = tf.Variable(tf.zeros([dims, output_var]))
b = tf.Variable(tf.zeros([output_var]))
    activation = tf.add(tf.matmul(x, W), b) # linear model output
# Add summary ops to collect data
w_h = tf.summary.histogram("weights_histogram", W)
b_h = tf.summary.histogram("biases_histograms", b)
tf.summary.scalar('mean_weights', tf.reduce_mean(W))
tf.summary.scalar('mean_bias', tf.reduce_mean(b))
# Minimize error using mean squared error
# Note: More name scopes will clean up graph representation
with tf.name_scope("cost_function") as scope:
cost = tf.reduce_mean(tf.square(activation-y))
# Create a summary to monitor the cost function
tf.summary.scalar("cost_function", cost)
tf.summary.histogram("cost_histogram", cost)
with tf.name_scope("train") as scope:
# Set the Optimizer
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
# Launch the graph
with tf.Session() as session:
# Initializing the variables
session.run(tf.global_variables_initializer())
cost_epochs = []
# Training cycle
for epoch in range(training_epochs):
_, c = session.run(fetches=[optimizer, cost], feed_dict={x: X_train, y: Y_train})
cost_epochs.append(c)
#writer.add_summary(summary=summary, global_step=epoch)
#print("accuracy epoch {}:{}".format(epoch, accuracy.eval({x: X_train, y: Y_train})))
        # Print the loss/error every 10 epochs
if epoch%10 == 0:
print('Epoch: {0}, Error: {1}'.format(epoch, c))
print("Training phase finished")
#plotting
plt.figure(figsize=(12,8))
plt.plot(range(len(cost_epochs)), cost_epochs, 'o', label='Logistic Regression Training phase')
plt.ylabel('cost')
plt.xlabel('epoch')
plt.legend()
plt.show()
#prediction = tf.argmax(activation, 1)
#print(prediction.eval({x: X_test}))
from keras.models import Sequential
from keras.layers import Dense, Activation
from livelossplot import PlotLossesKeras
from keras import backend as K
K.clear_session()
print("Building model...")
print('Model variables: ', dims)
model = Sequential()
model.add(Dense(output_var, input_shape=(dims,)))
print(model.summary())
model.compile(optimizer='sgd', loss='mean_squared_error')
model.fit(X_train, Y_train, verbose=2,epochs=15)
import keras.optimizers as opts
K.clear_session()
print("Building model...")
print('Model variables: ', dims)
model = Sequential()
model.add(Dense(output_var, input_shape=(dims,)))
op = opts.SGD(lr=learning_rate)
model.compile(loss = 'mean_squared_error',
optimizer = op)
model.fit(X_train, Y_train,
verbose=1,
epochs=150,
validation_data=[X_test,Y_test],
callbacks=[PlotLossesKeras()])
model.summary()
from sklearn.model_selection import train_test_split
from keras.callbacks import EarlyStopping, ModelCheckpoint
X_train, X_val, Y_train, Y_val = train_test_split(X_train, Y_train, test_size=0.15, random_state=42)
fBestModel = 'best_model.h5'
early_stop = EarlyStopping(monitor='val_loss', patience=2, verbose=1)
best_model = ModelCheckpoint(fBestModel, verbose=0, save_best_only=True)
model.fit(X_train, Y_train, validation_data = (X_val, Y_val), epochs=50,
batch_size=128, verbose=True, callbacks=[best_model, early_stop])
K.clear_session()
print("Building model...")
model = Sequential()
model.add(Dense(256, input_shape=(dims,),activation='relu'))
model.add(Dense(256,activation='relu'))
model.add(Dense(output_var))
model.add(Activation('relu'))
model.compile(optimizer='sgd', loss='mean_squared_error')
model.summary()
model.fit(X_train, Y_train,
validation_data = (X_val, Y_val),
epochs=50,
callbacks=[PlotLossesKeras()])
K.clear_session()
print("Building model...")
model = Sequential()
model.add(Dense(256, input_shape=(dims,),activation='relu'))
# ...
# ...
# Play with it! Add as many layers as you want and try to get better results.
model.add(Dense(output_var))
model.add(Activation('relu'))
model.compile(optimizer='sgd', loss='mean_squared_error')
model.summary()
model.fit(X_train, Y_train,
validation_data = (X_val, Y_val),
epochs=50,
callbacks=[PlotLossesKeras()])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load config from default location.
Step2: Create API endpoint instance as well as API resource instances (body and specification).
Step3: Fill required object fields (apiVersion, kind, metadata and spec).
Step4: Create Deployment using create_xxxx command for Deployments.
Step5: Use list_xxxx command for Deployment, to list Deployments.
Step6: Use read_xxxx command for Deployment, to display the detailed state of the created Deployment resource.
Step7: Use patch_xxxx command for Deployment, to make specific update to the Deployment.
Step8: Use replace_xxxx command for Deployment, to update Deployment with a completely new version of the object.
Step9: Use delete_xxxx command for Deployment, to delete created Deployment.
|
<ASSISTANT_TASK:>
Python Code:
from kubernetes import client, config
config.load_kube_config()
api_instance = client.AppsV1Api()
dep = client.V1Deployment()
spec = client.V1DeploymentSpec()
name = "my-busybox"
dep.metadata = client.V1ObjectMeta(name=name)
spec.template = client.V1PodTemplateSpec()
spec.template.metadata = client.V1ObjectMeta(name="busybox")
spec.template.metadata.labels = {"app":"busybox"}
spec.selector = client.V1LabelSelector(match_labels={"app": "busybox"})
spec.template.spec = client.V1PodSpec()
dep.spec = spec
container = client.V1Container()
container.image = "busybox:1.26.1"
container.args = ["sleep", "3600"]
container.name = name
spec.template.spec.containers = [container]
api_instance.create_namespaced_deployment(namespace="default",body=dep)
deps = api_instance.list_namespaced_deployment(namespace="default")
for item in deps.items:
print("%s %s" % (item.metadata.namespace, item.metadata.name))
api_instance.read_namespaced_deployment(namespace="default",name=name)
dep.metadata.labels = {"key": "value"}
api_instance.patch_namespaced_deployment(name=name, namespace="default", body=dep)
dep.spec.template.spec.containers[0].image = "busybox:1.26.2"
api_instance.replace_namespaced_deployment(name=name, namespace="default", body=dep)
api_instance.delete_namespaced_deployment(name=name, namespace="default", body=client.V1DeleteOptions(propagation_policy="Foreground", grace_period_seconds=5))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Examples
Step2: Example 1
Step3: Example 3
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
def phasecorr(f,h):
F = np.fft.fftn(f)
H = np.fft.fftn(h)
T = F * np.conjugate(H)
R = T/np.abs(T)
g = np.fft.ifftn(R)
return g.real
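As a quick sanity check on the routine above, here is a self-contained sketch (numpy only, synthetic data; the function is restated so the cell runs on its own) showing that the correlation peak lands at a known circular shift:

```python
import numpy as np

def phasecorr(f, h):
    # Normalized cross-power spectrum; its inverse FFT peaks at the shift
    F = np.fft.fftn(f)
    H = np.fft.fftn(h)
    T = F * np.conjugate(H)
    R = T / np.abs(T)
    return np.fft.ifftn(R).real

rng = np.random.RandomState(0)
f = rng.rand(32, 32)
shift = (5, 11)
h = np.roll(np.roll(f, shift[0], axis=0), shift[1], axis=1)  # circularly shift f
g = phasecorr(h, f)
peak = np.unravel_index(np.argmax(g), g.shape)
print(peak)  # (5, 11)
```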
testing = (__name__ == "__main__")
if testing:
import numpy as np
import sys,os
ia898path = os.path.abspath('../../')
if ia898path not in sys.path:
sys.path.append(ia898path)
import ia898.src as ia
%matplotlib inline
import matplotlib.image as mpimg
if testing:
# 2D example
f1 = mpimg.imread("../data/cameraman.tif")
noise = np.random.rand(f1.shape[0],f1.shape[1])
f2 = ia.normalize(ia.ptrans(f1,(-1,50)) + 300 * noise)
g1 = ia.phasecorr(f1,f2)
i = np.argmax(g1)
row,col = np.unravel_index(i,g1.shape)
v = g1[row,col]
print(np.array(f1.shape) - np.array((row,col)))
if testing:
print('max at:(%d, %d)' % (row,col))
ia.adshow(ia.normalize(f1), "input image")
ia.adshow(ia.normalize(f2), "input image")
ia.adshow(ia.normalize(g1), "Correlation peak at (%d,%d) with %d" % (row,col,v))
if testing:
# 2D example
w1 = f1[27:69,83:147]
h3 = np.zeros_like(f1)
h3[:w1.shape[0],:w1.shape[1]] = w1
noise = np.random.rand(h3.shape[0],h3.shape[1])
h3 = ia.normalize(h3 + 100 * noise)
h3 = ia.ptrans(h3, - np.array(w1.shape, dtype=int)//2)
g9 = ia.phasecorr(f1,h3)
p3 = np.unravel_index(np.argmax(g9), g9.shape)
g11 = ia.ptrans(h3,p3)
ia.adshow(ia.normalize(f1), "Original 2D image - Cameraman")
ia.adshow(ia.normalize(w1), "2D Template")
ia.adshow(ia.normalize(h3), "2D Template same size as f1")
ia.adshow(ia.normalize(g9), "Cameraman - Correlation peak: %s"%str(p3))
ia.adshow(ia.normalize((g11*2.+f1)/3.), "Template translated mixed with original image")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read data created in the previous chapter.
Step2: <h2> Train and eval input functions to read from Pandas Dataframe </h2>
Step3: Our input function for predictions is the same except we don't provide a label
Step4: Create feature columns for estimator
Step5: <h3> Linear Regression with tf.Estimator framework </h3>
Step6: Evaluate on the validation data (we should defer using the test data to after we have selected a final model).
Step7: This is nowhere near our benchmark (RMSE of $6 or so on this data), but it serves to demonstrate what TensorFlow code looks like. Let's use this model for prediction.
Step8: This explains why the RMSE was so high -- the model essentially predicts the same amount for every trip. Would a more complex model help? Let's try using a deep neural network. The code to do this is quite straightforward as well.
Step11: We are not beating our benchmark with either model ... what's up? Well, we may be using TensorFlow for Machine Learning, but we are not yet using it well. That's what the rest of this course is about!
|
<ASSISTANT_TASK:>
Python Code:
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
# Ensure the right version of Tensorflow is installed.
!pip freeze | grep tensorflow==2.6
import tensorflow as tf
import pandas as pd
import numpy as np
import shutil
print(tf.__version__)
# In CSV, label is the first column, after the features, followed by the key
CSV_COLUMNS = ['fare_amount', 'pickuplon','pickuplat','dropofflon','dropofflat','passengers', 'key']
FEATURES = CSV_COLUMNS[1:len(CSV_COLUMNS) - 1]
LABEL = CSV_COLUMNS[0]
df_train = pd.read_csv('./taxi-train.csv', header = None, names = CSV_COLUMNS)
df_valid = pd.read_csv('./taxi-valid.csv', header = None, names = CSV_COLUMNS)
df_test = pd.read_csv('./taxi-test.csv', header = None, names = CSV_COLUMNS)
def make_train_input_fn(df, num_epochs):
return tf.compat.v1.estimator.inputs.pandas_input_fn(
x = df,
y = df[LABEL],
batch_size = 128,
num_epochs = num_epochs,
shuffle = True,
queue_capacity = 1000
)
def make_eval_input_fn(df):
return tf.compat.v1.estimator.inputs.pandas_input_fn(
x = df,
y = df[LABEL],
batch_size = 128,
shuffle = False,
queue_capacity = 1000
)
def make_prediction_input_fn(df):
return tf.compat.v1.estimator.inputs.pandas_input_fn(
x = df,
y = None,
batch_size = 128,
shuffle = False,
queue_capacity = 1000
)
def make_feature_cols():
input_columns = [tf.feature_column.numeric_column(k) for k in FEATURES]
return input_columns
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.INFO)
OUTDIR = 'taxi_trained'
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
model = tf.estimator.LinearRegressor(
feature_columns = make_feature_cols(), model_dir = OUTDIR)
model.train(input_fn = make_train_input_fn(df_train, num_epochs = 10))
def print_rmse(model, df):
metrics = model.evaluate(input_fn = make_eval_input_fn(df))
print('RMSE on dataset = {}'.format(np.sqrt(metrics['average_loss'])))
print_rmse(model, df_valid)
predictions = model.predict(input_fn = make_prediction_input_fn(df_test))
for items in predictions:
print(items)
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.INFO)
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
model = tf.estimator.DNNRegressor(hidden_units = [32, 8, 2],
feature_columns = make_feature_cols(), model_dir = OUTDIR)
model.train(input_fn = make_train_input_fn(df_train, num_epochs = 100));
print_rmse(model, df_valid)
from google.cloud import bigquery
import numpy as np
import pandas as pd
def create_query(phase, EVERY_N):
"""phase: 1 = train, 2 = valid"""
base_query = """
SELECT
(tolls_amount + fare_amount) AS fare_amount,
EXTRACT(DAYOFWEEK FROM pickup_datetime) * 1.0 AS dayofweek,
EXTRACT(HOUR FROM pickup_datetime) * 1.0 AS hourofday,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count*1.0 AS passengers,
CONCAT(CAST(pickup_datetime AS STRING), CAST(pickup_longitude AS STRING), CAST(pickup_latitude AS STRING), CAST(dropoff_latitude AS STRING), CAST(dropoff_longitude AS STRING)) AS key
FROM
`nyc-tlc.yellow.trips`
WHERE
trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
AND passenger_count > 0
"""
if EVERY_N is None:
if phase < 2:
# Training
query = "{0} AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 4)) < 2".format(base_query)
else:
# Validation
query = "{0} AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 4)) = {1}".format(base_query, phase)
else:
query = "{0} AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), {1})) = {2}".format(base_query, EVERY_N, phase)
return query
query = create_query(2, 100000)
df = bigquery.Client().query(query).to_dataframe()
print_rmse(model, df)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Importing groundwater time series
Step2: 2. Creating a Pastas TimeSeries object
Step3: 3. Configuring a TimeSeries object
Step4: Predefined settings
Step5: 4. Let's explore the possibilities
Step6: Wait, what?
Step7: 5. Exporting the TimeSeries
|
<ASSISTANT_TASK:>
Python Code:
# Import some packages
import pastas as ps
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
gwdata = pd.read_csv('../data/head_nb1.csv', parse_dates=['date'],
index_col='date', squeeze=True)
gwdata.plot(figsize=(15,4))
oseries = ps.TimeSeries(gwdata, name="Groundwater Level")
# Plot the new time series and the original
plt.figure(figsize=(10,4))
oseries.plot(label="pastas timeseries")
gwdata.plot(label="original")
plt.legend()
# Options 1
oseries = ps.TimeSeries(gwdata, name="Groundwater Level", settings="oseries")
print(oseries.settings)
# Option 2
oseries = ps.TimeSeries(gwdata, name="Groundwater Level", settings=dict(fill_nan="drop"))
print(oseries.settings)
# Options 3
oseries = ps.TimeSeries(gwdata, name="Groundwater Level", fill_nan="drop")
print(oseries.settings)
pd.DataFrame(ps.TimeSeries._predefined_settings).T
# Import observed precipitation series
precip = pd.read_csv('../data/rain_nb1.csv', parse_dates=['date'],
index_col='date', squeeze=True)
prec = ps.TimeSeries(precip, name="Precipitation", settings="prec")
# fig, ax = plt.subplots(2, 1, figsize=(10,8))
# prec.update_series(freq="D")
# prec.series.plot.bar(ax=ax[0])
# prec.update_series(freq="7D")
# prec.series.plot.bar(ax=ax[1])
# import matplotlib.dates as mdates
# ax[1].fmt_xdata = mdates.DateFormatter('%m')
# fig.autofmt_xdate()
prec.update_series(tmin="2011")
prec.plot()
prec.settings
data = prec.to_dict()
print(data.keys())
# Tadaa, we have our time series back!
ts = ps.TimeSeries(**data)
ts.plot()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Read in the hanford.csv file
Step2: <img src="images/hanford_variables.png">
Step3: 4. Calculate the coefficient of correlation (r) and generate the scatter plot. Does there seem to be a correlation worthy of investigation?
Step4: 5. Create a linear regression model based on the available data to predict the mortality rate given a level of exposure
Step5: 6. Plot the linear regression line on the scatter plot of values. Calculate the r^2 (coefficient of determination)
Step6: 7. Predict the mortality rate (Cancer per 100,000 man years) given an index of exposure = 10
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt # package for doing plotting (necessary for adding the line)
import statsmodels.formula.api as smf # package we'll be using for linear regression
df = pd.read_csv('hanford.csv')
df.describe()
df.corr()
lm = smf.ols(formula="Mortality~Exposure",data=df).fit() #notice the formula regresses Y on X (Y~X)
intercept,slope=lm.params
lm.params
df.plot(kind='scatter', x='Exposure', y ='Mortality')
plt.plot(df["Exposure"],slope*df["Exposure"]+intercept,"-",color="red")
#y=mx+b
mortality = slope*10 + intercept
mortality
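A minimal numpy-only sanity check of the y = m*x + b prediction arithmetic (synthetic numbers, with `np.polyfit` standing in for the statsmodels fit; an illustrative sketch, not the course data):

```python
import numpy as np

# Synthetic data with a known relation: Mortality = 100 + 9 * Exposure
exposure = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
mortality_obs = 100.0 + 9.0 * exposure
slope, intercept = np.polyfit(exposure, mortality_obs, 1)
# Prediction at an index of exposure of 10: slope times x, plus the intercept
pred = slope * 10 + intercept
print(round(pred, 6))  # 190.0
```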
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exam Instructions
Step2: MRjob class for calculating pairwise similarity using K-L Divergence as the similarity measure
Step4: Questions
Step5: C
Step6: MT9. The KL divergence on multinomials is defined only when they have nonzero entries.
Step7: QUESTION
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from __future__ import division
%reload_ext autoreload
%autoreload 2
%%writefile kltext.txt
1.Data Science is an interdisciplinary field about processes and systems to extract knowledge or insights from large volumes of data in various forms (data in various forms, data in various forms, data in various forms), either structured or unstructured,[1][2] which is a continuation of some of the data analysis fields such as statistics, data mining and predictive analytics, as well as Knowledge Discovery in Databases.
2.Machine learning is a subfield of computer science[1] that evolved from the study of pattern recognition and computational learning theory in artificial intelligence.[1] Machine learning explores the study and construction of algorithms that can learn from and make predictions on data.[2] Such algorithms operate by building a model from example inputs in order to make data-driven predictions or decisions,[3]:2 rather than following strictly static program instructions.
import numpy as np
np.log(3)
!cat kltext.txt
%%writefile kldivergence.py
# coding: utf-8
from __future__ import division
from mrjob.job import MRJob
from mrjob.step import MRStep
import re
import numpy as np
class kldivergence(MRJob):
# process each string character by character
# the relative frequency of each character emitting Pr(character|str)
# for input record 1.abcbe
# emit "a" [1, 0.2]
# emit "b" [1, 0.4] etc...
def mapper1(self, _, line):
index = int(line.split('.',1)[0])
letter_list = re.sub(r"[^A-Za-z]+", '', line).lower()
count = {}
for l in letter_list:
if count.has_key(l):
count[l] += 1
else:
count[l] = 1
for key in count:
yield key, [index, count[key]*1.0/len(letter_list)]
# For each component i (e.g., "b") calculate its term of the
# Kullback-Leibler divergence of Q from P: P(i) * log(P(i) / Q(i))
def reducer1(self, key, values):
p = 0
q = 0
for v in values:
if v[0] == 1: #String 1
p = v[1]
else: # String 2
q = v[1]
if p and q:
yield (None, p*np.log(p/q))
#Aggegate components
def reducer2(self, key, values):
kl_sum = 0
for value in values:
kl_sum = kl_sum + value
yield "KLDivergence", kl_sum
def steps(self):
mr_steps = [self.mr(mapper=self.mapper1,
reducer=self.reducer1),
self.mr(reducer=self.reducer2)]
# mr_steps = [MRStep(mapper=self.mapper1, reducer=self.reducer1)]
return mr_steps
if __name__ == '__main__':
kldivergence.run()
%reload_ext autoreload
%autoreload 2
from mrjob.job import MRJob
from kldivergence import kldivergence
# don't forget to save kltext.txt (see earlier cell)
mr_job = kldivergence(args=['kltext.txt'])
with mr_job.make_runner() as runner:
runner.run()
# stream_output: get access of the output
for line in runner.stream_output():
print mr_job.parse_output_line(line)
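The MapReduce job above can be cross-checked with a direct numpy computation of D(P || Q) over the components where both distributions are nonzero; a small illustrative sketch with toy distributions rather than the kltext corpus:

```python
from __future__ import division
import numpy as np

def kl_divergence(p, q):
    # D(P || Q) = sum_i p_i * log(p_i / q_i), summed where both are nonzero
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mask = (p > 0) & (q > 0)
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
print(kl_divergence(p, p))      # 0.0 for identical distributions
print(kl_divergence(p, q) > 0)  # True
```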
words = """
1.Data Science is an interdisciplinary field about processes and systems to extract knowledge or insights from large volumes of data in various forms (data in various forms, data in various forms, data in various forms), either structured or unstructured,[1][2] which is a continuation of some of the data analysis fields such as statistics, data mining and predictive analytics, as well as Knowledge Discovery in Databases.
2.Machine learning is a subfield of computer science[1] that evolved from the study of pattern recognition and computational learning theory in artificial intelligence.[1] Machine learning explores the study and construction of algorithms that can learn from and make predictions on data.[2] Such algorithms operate by building a model from example inputs in order to make data-driven predictions or decisions,[3]:2 rather than following strictly static program instructions.
"""
for char in ['p', 'k', 'f', 'q', 'j']:
if char not in words:
print char
%%writefile kldivergence_smooth.py
from __future__ import division
from mrjob.job import MRJob
import re
import numpy as np
class kldivergence_smooth(MRJob):
# process each string character by character
# the relative frequency of each character emitting Pr(character|str)
# for input record 1.abcbe
# emit "a" [1, (1+1)/(5+24)]
# emit "b" [1, (2+1)/(5+24) etc...
def mapper1(self, _, line):
index = int(line.split('.',1)[0])
letter_list = re.sub(r"[^A-Za-z]+", '', line).lower()
count = {}
# (ni+1)/(n+24)
for l in letter_list:
if count.has_key(l):
count[l] += 1
else:
count[l] = 1
for letter in ['q', 'j']:
if letter not in letter_list:
count[letter] = 0
for key in count:
yield key, [index, (1+count[key]*1.0)/(24+len(letter_list))]
def reducer1(self, key, values):
p = 0
q = 0
for v in values:
if v[0] == 1:
p = v[1]
else:
q = v[1]
yield (None, p*np.log(p/q))
# Aggregate components
def reducer2(self, key, values):
kl_sum = 0
for value in values:
kl_sum = kl_sum + value
yield "KLDivergence", kl_sum
def steps(self):
return [self.mr(mapper=self.mapper1,
reducer=self.reducer1),
self.mr(reducer=self.reducer2)
]
if __name__ == '__main__':
kldivergence_smooth.run()
%reload_ext autoreload
%autoreload 2
from kldivergence_smooth import kldivergence_smooth
mr_job = kldivergence_smooth(args=['kltext.txt'])
with mr_job.make_runner() as runner:
runner.run()
# stream_output: get access of the output
for line in runner.stream_output():
print mr_job.parse_output_line(line)
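The (ni+1)/(n+24) rule used above is ordinary add-one (Laplace) smoothing; a minimal standalone sketch under the same 24-letter-alphabet assumption:

```python
from __future__ import division
from collections import Counter

def smoothed_freq(text, letter, alphabet_size=24):
    # Add-one smoothing: every letter gets a nonzero probability,
    # so the KL divergence stays defined
    counts = Counter(c for c in text.lower() if c.isalpha())
    n = sum(counts.values())
    return (counts[letter] + 1) / (n + alphabet_size)

print(smoothed_freq("abcbe", "b"))  # (2 + 1) / (5 + 24)
print(smoothed_freq("abcbe", "q"))  # (0 + 1) / (5 + 24): absent but nonzero
```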
%%writefile spam.txt
0002.2001-05-25.SA_and_HP 0 0 good
0002.2001-05-25.SA_and_HP 0 0 very good
0002.2001-05-25.SA_and_HP 1 0 bad
0002.2001-05-25.SA_and_HP 1 0 very bad
0002.2001-05-25.SA_and_HP 1 0 very bad, very BAD
%%writefile spam_test.txt
0002.2001-05-25.SA_and_HP 1 0 good? bad! very Bad!
%%writefile NaiveBayes.py
import sys
import re
from mrjob.job import MRJob
from mrjob.step import MRStep
from mrjob.protocol import TextProtocol, TextValueProtocol
# Prevents broken pipe errors from using ... | head
from signal import signal, SIGPIPE, SIG_DFL
signal(SIGPIPE,SIG_DFL)
def sum_hs(counts):
h_total, s_total = 0, 0
for h, s in counts:
h_total += h
s_total += s
return (h_total, s_total)
class NaiveBayes(MRJob):
MRJob.OUTPUT_PROTOCOL = TextValueProtocol
def mapper(self, _, lines):
_, spam, subject, email = lines.split("\t")
words = re.findall(r'[a-z]+', (email.lower()+" "+subject.lower()))
if spam == "1":
h, s = 0, 1
else:
h, s = 1, 0
yield "***Total Emails", (h, s)
for word in words:
yield word, (h, s)
yield "***Total Words", (h, s)
def combiner(self, key, count):
yield key, sum_hs(count)
def reducer_init(self):
self.total_ham = 0
self.total_spam = 0
def reducer(self, key, count):
ham, spam = sum_hs(count)
if key.startswith("***"):
if "Words" in key:
self.total_ham, self.total_spam = ham, spam
elif "Emails" in key:
total = ham + spam
yield "_", "***Priors\t%.10f\t%.10f" % (ham/total, spam/total)
else:
pg_ham, pg_spam = ham/self.total_ham, spam/self.total_spam
yield "_", "%s\t%.10f\t%.10f" % (key, pg_ham, pg_spam)
if __name__ == "__main__":
NaiveBayes.run()
!cat spam.txt | python NaiveBayes.py --jobconf mapred.reduce.tasks=1 -q | head
def inverse_vector_length(x1, x2):
norm = (x1**2 + x2**2)**.5
return 1.0/norm
inverse_vector_length(1, 5)
0 --> .2
%matplotlib inline
import numpy as np
import pylab
import pandas as pd
data = pd.read_csv("Kmeandata.csv", header=None)
pylab.plot(data[0], data[1], 'o', linewidth=0, alpha=.5);
%%writefile Kmeans.py
from numpy import argmin, array, random
from mrjob.job import MRJob
from mrjob.step import MRStep
from itertools import chain
import os
# Find the nearest centroid for a data point
def MinDist(datapoint, centroid_points):
datapoint = array(datapoint)
centroid_points = array(centroid_points)
diff = datapoint - centroid_points
diffsq = diff*diff
# Get the nearest centroid for each instance
minidx = argmin(list(diffsq.sum(axis = 1)))
return minidx
#Check whether centroids converge
def stop_criterion(centroid_points_old, centroid_points_new,T):
oldvalue = list(chain(*centroid_points_old))
newvalue = list(chain(*centroid_points_new))
Diff = [abs(x-y) for x, y in zip(oldvalue, newvalue)]
Flag = True
for i in Diff:
if(i>T):
Flag = False
break
return Flag
class MRKmeans(MRJob):
centroid_points=[]
k=3
def steps(self):
return [
MRStep(mapper_init = self.mapper_init, mapper=self.mapper,combiner = self.combiner,reducer=self.reducer)
]
#load centroids info from file
def mapper_init(self):
# print "Current path:", os.path.dirname(os.path.realpath(__file__))
self.centroid_points = [map(float,s.split('\n')[0].split(',')) for s in open("Centroids.txt").readlines()]
#open('Centroids.txt', 'w').close()
# print "Centroids: ", self.centroid_points
#load data and output the nearest centroid index and data point
def mapper(self, _, line):
D = (map(float,line.split(',')))
yield int(MinDist(D, self.centroid_points)), (D[0],D[1],1)
#Combine sum of data points locally
def combiner(self, idx, inputdata):
sumx = sumy = num = 0
for x,y,n in inputdata:
num = num + n
sumx = sumx + x
sumy = sumy + y
yield idx,(sumx,sumy,num)
#Aggregate sum for each cluster and then calculate the new centroids
def reducer(self, idx, inputdata):
centroids = []
num = [0]*self.k
for i in range(self.k):
centroids.append([0,0])
for x, y, n in inputdata:
num[idx] = num[idx] + n
centroids[idx][0] = centroids[idx][0] + x
centroids[idx][1] = centroids[idx][1] + y
centroids[idx][0] = centroids[idx][0]/num[idx]
centroids[idx][1] = centroids[idx][1]/num[idx]
yield idx,(centroids[idx][0],centroids[idx][1])
if __name__ == '__main__':
MRKmeans.run()
%reload_ext autoreload
%autoreload 2
from numpy import random
from Kmeans import MRKmeans, stop_criterion
mr_job = MRKmeans(args=['Kmeandata.csv', '--file=Centroids.txt'])
#Geneate initial centroids
centroid_points = []
k = 3
for i in range(k):
centroid_points.append([random.uniform(-3,3),random.uniform(-3,3)])
with open('Centroids.txt', 'w+') as f:
f.writelines(','.join(str(j) for j in i) + '\n' for i in centroid_points)
# Update centroids iteratively
i = 0
while(1):
# save previous centoids to check convergency
centroid_points_old = centroid_points[:]
print "iteration"+str(i)+":"
with mr_job.make_runner() as runner:
runner.run()
# stream_output: get access of the output
for line in runner.stream_output():
key,value = mr_job.parse_output_line(line)
print key, value
centroid_points[key] = value
# Update the centroids for the next iteration
with open('Centroids.txt', 'w') as f:
f.writelines(','.join(str(j) for j in i) + '\n' for i in centroid_points)
print "\n"
i = i + 1
if(stop_criterion(centroid_points_old,centroid_points,0.01)):
break
print "Centroids\n"
print centroid_points
pylab.plot(data[0], data[1], 'o', linewidth=0, alpha=.5);
for point in centroid_points:
pylab.plot(point[0], point[1], '*',color='pink',markersize=20)
for point in [(-4.5,0.0), (4.5,0.0), (0.0,4.5)]:
pylab.plot(point[0], point[1], '*',color='red',markersize=20)
pylab.show()
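The assignment step (MinDist above) reduces to an argmin over squared distances; a standalone sketch that uses the true cluster centers of this synthetic data as an assumed set of centroids:

```python
import numpy as np

def min_dist(point, centroids):
    # Index of the nearest centroid under squared Euclidean distance
    diff = np.asarray(centroids) - np.asarray(point)
    return int(np.argmin((diff * diff).sum(axis=1)))

centroids = [(-4.5, 0.0), (4.5, 0.0), (0.0, 4.5)]
print(min_dist((4.0, 0.5), centroids))    # 1: closest to (4.5, 0.0)
print(min_dist((-4.0, -1.0), centroids))  # 0: closest to (-4.5, 0.0)
```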
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2D trajectory interpolation
Step2: Use these arrays to create interpolated functions $x(t)$ and $y(t)$. Then use those functions to create the following arrays
Step3: Make a parametric plot of ${x(t),y(t)}$ that shows the interpolated values and the original points
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
from scipy.interpolate import interp1d
# YOUR CODE HERE
data = np.load("trajectory.npz")
t = data["t"]
x = data["x"]
y = data["y"]
assert isinstance(x, np.ndarray) and len(x)==40
assert isinstance(y, np.ndarray) and len(y)==40
assert isinstance(t, np.ndarray) and len(t)==40
# YOUR CODE HERE
#She turned me into a
newt = np.linspace(min(t), max(t), 200)
x_approx = interp1d(t, x, kind="cubic")
y_approx = interp1d(t, y, kind="cubic")
newx = x_approx(newt)
newy = y_approx(newt)
assert newt[0]==t.min()
assert newt[-1]==t.max()
assert len(newt)==200
assert len(newx)==200
assert len(newy)==200
# YOUR CODE HERE
plt.plot(newx, newy, marker="o", color="burlywood", label="Interpolated")
plt.plot(x, y, marker="o", color="red", linestyle="", label="Original")
plt.legend(bbox_to_anchor=(1,1), loc=2)
assert True # leave this to grade the trajectory plot
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The source space
Step2: Fixed dipole orientations
Step3: Restricting the dipole orientations in this manner leads to the following
Step4: The direction of the estimated current is now restricted to two directions
Step5: When computing the source estimate, the activity at each of the three dipoles
Step6: Limiting orientations, but not fixing them
Step7: Discarding dipole orientation information
|
<ASSISTANT_TASK:>
Python Code:
import mne
import numpy as np
from mne.datasets import sample
from mne.minimum_norm import make_inverse_operator, apply_inverse
data_path = sample.data_path()
evokeds = mne.read_evokeds(data_path + '/MEG/sample/sample_audvis-ave.fif')
left_auditory = evokeds[0].apply_baseline()
fwd = mne.read_forward_solution(
data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif')
mne.convert_forward_solution(fwd, surf_ori=True, copy=False)
noise_cov = mne.read_cov(data_path + '/MEG/sample/sample_audvis-cov.fif')
subject = 'sample'
subjects_dir = data_path + '/subjects'
trans_fname = data_path + '/MEG/sample/sample_audvis_raw-trans.fif'
lh = fwd['src'][0] # Visualize the left hemisphere
verts = lh['rr'] # The vertices of the source space
tris = lh['tris'] # Groups of three vertices that form triangles
dip_pos = lh['rr'][lh['vertno']] # The position of the dipoles
dip_ori = lh['nn'][lh['vertno']]
dip_len = len(dip_pos)
dip_times = [0]
white = (1.0, 1.0, 1.0) # RGB values for a white color
actual_amp = np.ones(dip_len) # misc amp to create Dipole instance
actual_gof = np.ones(dip_len) # misc GOF to create Dipole instance
dipoles = mne.Dipole(dip_times, dip_pos, actual_amp, dip_ori, actual_gof)
trans = mne.read_trans(trans_fname)
fig = mne.viz.create_3d_figure(size=(600, 400), bgcolor=white)
coord_frame = 'mri'
# Plot the cortex
mne.viz.plot_alignment(
subject=subject, subjects_dir=subjects_dir, trans=trans, surfaces='white',
coord_frame=coord_frame, fig=fig)
# Mark the position of the dipoles with small red dots
mne.viz.plot_dipole_locations(
dipoles=dipoles, trans=trans, mode='sphere', subject=subject,
subjects_dir=subjects_dir, coord_frame=coord_frame, scale=7e-4, fig=fig)
mne.viz.set_3d_view(figure=fig, azimuth=180, distance=0.25)
fig = mne.viz.create_3d_figure(size=(600, 400))
# Plot the cortex
mne.viz.plot_alignment(
subject=subject, subjects_dir=subjects_dir, trans=trans,
surfaces='white', coord_frame='head', fig=fig)
# Show the dipoles as arrows pointing along the surface normal
mne.viz.plot_dipole_locations(
dipoles=dipoles, trans=trans, mode='arrow', subject=subject,
subjects_dir=subjects_dir, coord_frame='head', scale=7e-4, fig=fig)
mne.viz.set_3d_view(figure=fig, azimuth=180, distance=0.1)
# Compute the source estimate for the 'left - auditory' condition in the sample
# dataset.
inv = make_inverse_operator(left_auditory.info, fwd, noise_cov, fixed=True)
stc = apply_inverse(left_auditory, inv, pick_ori=None)
# Visualize it at the moment of peak activity.
_, time_max = stc.get_peak(hemi='lh')
brain_fixed = stc.plot(surface='white', subjects_dir=subjects_dir,
initial_time=time_max, time_unit='s', size=(600, 400))
mne.viz.set_3d_view(figure=brain_fixed, focalpoint=(0., 0., 50))
fig = mne.viz.create_3d_figure(size=(600, 400))
# Plot the cortex
mne.viz.plot_alignment(
subject=subject, subjects_dir=subjects_dir, trans=trans,
surfaces='white', coord_frame='head', fig=fig)
# Show the three dipoles defined at each location in the source space
mne.viz.plot_alignment(
subject=subject, subjects_dir=subjects_dir, trans=trans, fwd=fwd,
surfaces='white', coord_frame='head', fig=fig)
mne.viz.set_3d_view(figure=fig, azimuth=180, distance=0.1)
# Make an inverse operator with loose dipole orientations
inv = make_inverse_operator(left_auditory.info, fwd, noise_cov, fixed=False,
loose=1.0)
# Compute the source estimate, indicate that we want a vector solution
stc = apply_inverse(left_auditory, inv, pick_ori='vector')
# Visualize it at the moment of peak activity.
_, time_max = stc.magnitude().get_peak(hemi='lh')
brain_mag = stc.plot(subjects_dir=subjects_dir, initial_time=time_max,
time_unit='s', size=(600, 400), overlay_alpha=0)
mne.viz.set_3d_view(figure=brain_mag, focalpoint=(0., 0., 50))
# Set loose to 0.2, the default value
inv = make_inverse_operator(left_auditory.info, fwd, noise_cov, fixed=False,
loose=0.2)
stc = apply_inverse(left_auditory, inv, pick_ori='vector')
# Visualize it at the moment of peak activity.
_, time_max = stc.magnitude().get_peak(hemi='lh')
brain_loose = stc.plot(subjects_dir=subjects_dir, initial_time=time_max,
time_unit='s', size=(600, 400), overlay_alpha=0)
mne.viz.set_3d_view(figure=brain_loose, focalpoint=(0., 0., 50))
# Only retain vector magnitudes
stc = apply_inverse(left_auditory, inv, pick_ori=None)
# Visualize it at the moment of peak activity
_, time_max = stc.get_peak(hemi='lh')
brain = stc.plot(surface='white', subjects_dir=subjects_dir,
initial_time=time_max, time_unit='s', size=(600, 400))
mne.viz.set_3d_view(figure=brain, focalpoint=(0., 0., 50))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Here we use
|
<ASSISTANT_TASK:>
Python Code:
# Author: Jaakko Leppakangas <jaeilepp@gmail.com>
#
# License: BSD (3-clause)
import os.path as op
from mayavi import mlab
import mne
from mne.datasets.sample import data_path
print(__doc__)
data_path = data_path()
fname = op.join(data_path, 'MEG', 'sample', 'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(fname)
subjects_dir = op.join(data_path, 'subjects')
mne.viz.plot_trans(raw.info, trans=None, subject='sample',
subjects_dir=subjects_dir, meg_sensors=[], eeg_sensors=[],
head='outer_skin', skull=['inner_skull', 'outer_skull'])
mlab.view(40, 60)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Source of Data
Step2: Illustrate Markdown Parsing Using mdparse
Step5: Download And Pre-Process Data
Step6: Note
Step7: Cached pre-processed data
|
<ASSISTANT_TASK:>
Python Code:
from mdparse.parser import transform_pre_rules, compose
import pandas as pd
from tqdm import tqdm_notebook
from fastai.text.transform import defaults
df = pd.read_csv(f'https://storage.googleapis.com/issue_label_bot/language_model_data/000000000000.csv.gz').sample(5)
df.head(1)
pd.set_option('max_colwidth', 1000)
df['clean_body'] = ''
for i, b in tqdm_notebook(enumerate(df.body), total=len(df)):
try:
df['clean_body'].iloc[i] = compose(transform_pre_rules+defaults.text_pre_rules)(b)
except:
print(f'error at: {i}')
break
df[['body', 'clean_body']]
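The `compose(...)` call above folds the pre-processing rules into a single function. A minimal sketch of that idea (assumed semantics; the real fastai/mdparse `compose` may differ in details):

```python
from functools import reduce

# Fold a list of text-transform functions into one callable, applied
# left-to-right -- a sketch of what compose() does above.
def compose_rules(rules):
    return lambda text: reduce(lambda acc, rule: rule(acc), rules, text)

clean = compose_rules([str.strip, str.lower])
print(clean('  Hello WORLD  '))  # 'hello world'
```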
from fastai.text.transform import ProcessPoolExecutor, partition_by_cores
import numpy as np
from fastai.core import parallel
from itertools import chain
transforms = transform_pre_rules + defaults.text_pre_rules
def process_dict(dfdict, _):
    """Process the data, but allow failure."""
t = compose(transforms)
title = dfdict['title']
body = dfdict['body']
try:
text = 'xxxfldtitle '+ t(title) + ' xxxfldbody ' + t(body)
except:
return None
return {'url': dfdict['url'], 'text':text}
def download_data(i, _):
    """Since the data is in 100 chunks already, just do the processing by chunk."""
fn = f'https://storage.googleapis.com/issue_label_bot/language_model_data/{str(i).zfill(12)}.csv.gz'
    dicts = [process_dict(d, 0) for d in pd.read_csv(fn).to_dict(orient='records')]
df = pd.DataFrame([d for d in dicts if d])
df.to_csv(f'/ds/IssuesLanguageModel/data/1_processed_csv/processed_part{str(i).zfill(4)}.csv', index=False)
return df
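The zero-padded chunk naming used inside `download_data` can be factored into a tiny helper (`chunk_url` is hypothetical, not part of the original notebook):

```python
# Build the URL of the i-th pre-split data chunk; file names are the chunk
# index zero-padded to 12 digits, matching str(i).zfill(12) above.
BASE = 'https://storage.googleapis.com/issue_label_bot/language_model_data'

def chunk_url(i):
    return '{}/{}.csv.gz'.format(BASE, str(i).zfill(12))

print(chunk_url(7))  # ...language_model_data/000000000007.csv.gz
```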
dfs = parallel(download_data, list(range(100)), max_workers=31)
dfs_rows = sum([x.shape[0] for x in dfs])
print(f'number of rows in pre-processed data: {dfs_rows:,}')
del dfs
from pathlib import Path
from random import shuffle
# shuffle the files
p = Path('/ds/IssuesLanguageModel/data/1_processed_csv/')
files = p.ls()
shuffle(files)
# show a preview of files
files[:5]
valid_df = pd.concat([pd.read_csv(f) for f in files[:10]]).dropna().drop_duplicates()
train_df = pd.concat([pd.read_csv(f) for f in files[10:]]).dropna().drop_duplicates()
print(f'rows in train_df:, {train_df.shape[0]:,}')
print(f'rows in valid_df:, {valid_df.shape[0]:,}')
valid_df.to_hdf('/ds/IssuesLanguageModel/data/2_partitioned_df/valid_df.hdf', key='df')
train_df.to_hdf('/ds/IssuesLanguageModel/data/2_partitioned_df/train_df.hdf', key='df')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Check that the bornagain python module is correctly installed
|
<ASSISTANT_TASK:>
Python Code:
print('hello, world!')
from __future__ import print_function #needed for python2/python3 compatibility
try:
import bornagain as ba
print("successfully loaded bornagain module")
major, minor = ba.major_version_number, ba.minor_version_number
print("BornAgain version number is {0}.{1}".format(major, minor))
except:
print("could not import bornagain module!")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now we need a function to actually time the reference implementation. We can do external timing use the time module, and the Java program also returns internal timings, which we can parse out and save to a dataframe. In practice this is just a matter of using subprocess an the appropriate commandline parameters for the reference code.
Step2: With that in hand we can run the code over a range of dimensions and dataset sizes and aggregate the results together in indexed pandas series or dataframes.
Step3: Now it is just a matter of saving these off to disk for later use.
Step4: Now we need to build up hdbscan timings, preferably over a range of hdbscan versions to show how the performance of the code has evolved (and improved!). To do this I pulled down historical versions and fudged them so that they exist in different namespaces and can live side by side. We can import them all like so ...
Step5: Now we simply go through each version and run over a range of dimensions and dataset sizes (ranging up to smaller sizes in the case of early versions which were memory constrained).
Step6: Finally we can run the current code (soon to be version 0.6).
|
<ASSISTANT_TASK:>
Python Code:
import sklearn.datasets
import numpy as np
import pandas as pd
import subprocess
import time
def get_reference_timings(data, filename='tmp_data.csv',
jarfile='/Users/leland/Source/HDBSCAN_Star/HDBSCAN_Star.jar',
min_points=5, min_cluster_size=5):
# Create the required csv file
pd.DataFrame(data).to_csv('tmp_data.csv', header=False, index=False)
# Run the clustering via a subprocess call and grab the output as it
# has timing information to be parsed
start_time = time.time()
    internal_timing = subprocess.check_output(['java', '-jar', jarfile,
                                               'file={}'.format(filename),
                                               'minPts={}'.format(min_points),
                                               'minClSize={}'.format(min_cluster_size),
                                               'compact=true'],
                                              universal_newlines=True)
time_taken = time.time() - start_time
# Parse internal timing info into a pandas series for later use
result_dict = {}
for line in internal_timing.split('\n'):
if ':' in line:
key, value = line.split(':')
key = key.replace(' (ms)', '')
key = key.replace('Time to ', '')
key = key.replace('Overall ', '')
value = int(value)
result_dict[key] = value
internal_timing = pd.Series(result_dict)
return time_taken, internal_timing
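The key/value parsing inside `get_reference_timings` can be exercised standalone on a synthetic line (the exact jar output format here is an assumption):

```python
# Parse "label (ms): value" lines into {short_label: int_value}, mirroring
# the loop in get_reference_timings above.
def parse_internal_timing(output):
    result = {}
    for line in output.split('\n'):
        if ':' in line:
            key, value = line.split(':')
            key = key.replace(' (ms)', '')
            key = key.replace('Time to ', '')
            key = key.replace('Overall ', '')
            result[key] = int(value)
    return result

sample = 'Time to compute hierarchy (ms): 120\nOverall runtime (ms): 940'
print(parse_internal_timing(sample))  # {'compute hierarchy': 120, 'runtime': 940}
```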
internal_timing = {}
external_timing = {}
for dataset_dimension in (2,5,10,25,50):
for dataset_size in np.arange(1,17) * 8000:
data, _ = sklearn.datasets.make_blobs(dataset_size,
n_features=dataset_dimension,
centers=dataset_dimension)
(external_timing[(dataset_dimension, dataset_size)],
internal_timing[(dataset_dimension, dataset_size)]) = get_reference_timings(data)
internal_timing_df = pd.DataFrame(internal_timing).T
external_timing_series = pd.Series(external_timing)
internal_timing_df.to_csv('reference_impl_internal_timings.csv')
external_timing_series.to_csv('reference_impl_external_timings.csv')
import hdbscan01
import hdbscan02
import hdbscan03
import hdbscan04
import hdbscan05
import hdbscan
hdbscan01_timings = {}
for dataset_dimension in (2,5,10,25,50):
for dataset_size in np.arange(1,17) * 2000:
data, _ = sklearn.datasets.make_blobs(dataset_size,
n_features=dataset_dimension,
centers=dataset_dimension)
start_time = time.time()
hdbscan01.HDBSCAN().fit(data)
time_taken = time.time() - start_time
hdbscan01_timings[(dataset_dimension, dataset_size)] = time_taken
hdbscan01_timings_series = pd.Series(hdbscan01_timings).T
hdbscan01_timings_series.to_csv('hdbscan01_timings.csv')
hdbscan02_timings = {}
for dataset_dimension in (2,5,10,25,50):
for dataset_size in np.arange(1,17) * 2000:
data, _ = sklearn.datasets.make_blobs(dataset_size,
n_features=dataset_dimension,
centers=dataset_dimension)
start_time = time.time()
hdbscan02.HDBSCAN().fit(data)
time_taken = time.time() - start_time
hdbscan02_timings[(dataset_dimension, dataset_size)] = time_taken
hdbscan02_timings_series = pd.Series(hdbscan02_timings).T
hdbscan02_timings_series.to_csv('hdbscan02_timings.csv')
hdbscan03_timings = {}
for dataset_dimension in (2,5,10,25,50):
for dataset_size in np.arange(1,17) * 4000:
data, _ = sklearn.datasets.make_blobs(dataset_size,
n_features=dataset_dimension,
centers=dataset_dimension)
start_time = time.time()
hdbscan03.HDBSCAN().fit(data)
time_taken = time.time() - start_time
hdbscan03_timings[(dataset_dimension, dataset_size)] = time_taken
hdbscan03_timings_series = pd.Series(hdbscan03_timings).T
hdbscan03_timings_series.to_csv('hdbscan03_timings.csv')
hdbscan04_timings = {}
for dataset_dimension in (2,5,10,25,50):
for dataset_size in np.arange(1,17) * 8000:
data, _ = sklearn.datasets.make_blobs(dataset_size,
n_features=dataset_dimension,
centers=dataset_dimension)
start_time = time.time()
hdbscan04.HDBSCAN().fit(data)
time_taken = time.time() - start_time
hdbscan04_timings[(dataset_dimension, dataset_size)] = time_taken
hdbscan04_timings_series = pd.Series(hdbscan04_timings).T
hdbscan04_timings_series.to_csv('hdbscan04_timings.csv')
hdbscan05_timings = {}
for dataset_dimension in (2,5,10,25,50):
for dataset_size in np.arange(1,17) * 8000:
data, _ = sklearn.datasets.make_blobs(dataset_size,
n_features=dataset_dimension,
centers=dataset_dimension)
start_time = time.time()
hdbscan05.HDBSCAN().fit(data)
time_taken = time.time() - start_time
hdbscan05_timings[(dataset_dimension, dataset_size)] = time_taken
hdbscan05_timings_series = pd.Series(hdbscan05_timings).T
hdbscan05_timings_series.to_csv('hdbscan05_timings.csv')
hdbscan06_timings = {}
for dataset_dimension in (2,5,10,25,50):
for dataset_size in np.arange(1,17) * 8000:
data, _ = sklearn.datasets.make_blobs(dataset_size,
n_features=dataset_dimension,
centers=dataset_dimension)
start_time = time.time()
hdbscan.HDBSCAN().fit(data)
time_taken = time.time() - start_time
hdbscan06_timings[(dataset_dimension, dataset_size)] = time_taken
hdbscan06_timings_series = pd.Series(hdbscan06_timings).T
hdbscan06_timings_series.to_csv('hdbscan06_timings.csv')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We've seen in a previous tutorial <tut-raw-class> how to plot data
Step2: It may not be obvious when viewing this tutorial online, but by default, the
Step3: If the data have been filtered, vertical dashed lines will automatically
Step4: Alternatively, you can plot the PSD for every sensor on its own axes, with
Step5: This plot is also interactive; hovering over each "thumbnail" plot will
Step6: Plotting sensor locations from Raw objects
Step7: Plotting projectors from Raw objects
|
<ASSISTANT_TASK:>
Python Code:
import os
import mne
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file)
raw.crop(tmax=60).load_data()
raw.plot()
raw.plot_psd(average=True)
midline = ['EEG 002', 'EEG 012', 'EEG 030', 'EEG 048', 'EEG 058', 'EEG 060']
raw.plot_psd(picks=midline)
raw.plot_psd_topo()
raw.copy().pick_types(meg=False, eeg=True).plot_psd_topo()
raw.plot_sensors(ch_type='eeg')
raw.plot_projs_topomap(colorbar=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Try TSNE and time it
Step2: Try PCA instead
Step3: Append all view_items for PCA processing
Step4: Append all buy_items for PCA processing
Step5: Save the file for further processing
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import os
from sklearn.manifold import TSNE
from sklearn.decomposition import PCA
os.chdir('/Users/Walkon302/Desktop/deep-learning-models-master/view2buy')
# Read the preprocessed file, containing the user profile and item features from view2buy folder
df = pd.read_pickle('user_fea_for_eval.pkl')
# Drop the first column, which is the original data format.
df.drop('0', axis = 1, inplace = True)
# Check the data
df.head()
# Slice the data into 100k items
df = df.iloc[0:100000, :]
# Calculate the average view sec for all view items per user
avg_view_sec = pd.DataFrame(df.groupby(['user_id', 'buy_spu'])['view_secondes'].mean())
# Reset the index and rename the column
avg_view_sec.reset_index(inplace=True)
avg_view_sec.rename(columns = {'view_secondes':'avg_view_sec'}, inplace=True)
# Check the data
avg_view_sec.head()
# Merge avg item view into data
df = pd.merge(df, avg_view_sec, on=['user_id', 'buy_spu'])
# Calculate the weights for view item vec
df['weight_of_view'] = df['view_secondes']/df['avg_view_sec']
df.head()
# Generate view_item_vec and buy_item_vec
view_item_vec = df['view_features']
buy_item_vec = df['buy_features']
print 'view_item', len(view_item_vec), 'buy_item', len(buy_item_vec)
# Generate TSNE model
model = TSNE(n_components=10, random_state=0)
# Time the tSNE with 250 samples
%%time
a = pd.DataFrame()
for i, j in enumerate(view_item_vec.iloc[0:250]):
a = pd.concat([a, pd.DataFrame(j).transpose()], axis = 0)
vt = model.fit_transform(a)
# Time the tSNE with 500 samples
%%time
a = pd.DataFrame()
for i, j in enumerate(view_item_vec.iloc[0:500]):
a = pd.concat([a, pd.DataFrame(j).transpose()], axis = 0)
vt = model.fit_transform(a)
# Time the tSNE with 1000 samples
%%time
a = pd.DataFrame()
for i, j in enumerate(view_item_vec.iloc[0:1000]):
a = pd.concat([a, pd.DataFrame(j).transpose()], axis = 0)
vt = model.fit_transform(a)
# Generate PCA model
model = PCA(n_components=200, random_state=0)
%%time
view_item = []
for i in view_item_vec:
view_item.append(i)
view_item= np.array(view_item)
%%time
pca_view_vec = model.fit_transform(view_item)
# 200 dimensions of PCA can explain 85% of variables. Beyond that, e.g., 300 D, my computer will run out of memory (8g)
sum(model.explained_variance_ratio_)
%%time
buy_item = []
for i in buy_item_vec:
buy_item.append(i)
buy_item= np.array(buy_item)
%%time
pca_buy_vec = model.fit_transform(buy_item)
# Incert pca result to data
df['pca_view'] = pca_view_vec.tolist()
df['pca_buy'] = pca_buy_vec.tolist()
# Check the data
df.head()
df = pd.read_pickle('df_weighted.pkl')
# Calculate the weighted pca_view
df['weighted_view_pca'] = df.apply(lambda x: [y*x['weight_of_view'] for y in x['pca_view']], axis=1)
# Calculate the weighted pca_buy
df['weighted_buy_pca'] = df.apply(lambda x: [y*x['weight_of_view'] for y in x['pca_buy']], axis=1)
# Check the data
df.head()
df.to_pickle('top100k_user_pca.pkl')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Tree ensembles (RandomForestClassifier)
|
<ASSISTANT_TASK:>
Python Code:
# import
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.grid_search import GridSearchCV
from sklearn.metrics import accuracy_score, confusion_matrix, classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier
df = pd.read_csv("data/ad.data", header=None)
explanatory_variable_columns = set(df.columns.values)
response_variable_column = df[len(df.columns.values)-1]
# The last column describes the targets
explanatory_variable_columns.remove(len(df.columns.values)-1)
y = [1 if e == 'ad.' else 0 for e in response_variable_column]
X = df[list(explanatory_variable_columns)]
#X.replace(to_replace=' *\?', value=-1, regex=True, inplace=True)
X = X.replace(['?'], [-1])
X_train, X_test, y_train, y_test = train_test_split(X, y)
pipeline = Pipeline([
('clf', DecisionTreeClassifier(criterion='entropy'))
])
parameters = {
'clf__max_depth': (150, 155, 160),
'clf__min_samples_split': (1, 2, 3),
'clf__min_samples_leaf': (1, 2, 3)
}
grid_search = GridSearchCV(pipeline, parameters, n_jobs=-1, verbose=1, scoring='f1')
grid_search.fit(X_train, y_train)
print( 'Best score: %0.3f' % grid_search.best_score_)
print( 'Best parameters set:')
best_parameters = grid_search.best_estimator_.get_params()
for param_name in sorted(parameters.keys()):
print( '\t%s: %r' % (param_name, best_parameters[param_name]))
predictions = grid_search.predict(X_test)
print ('Accuracy:', accuracy_score(y_test, predictions))
print ('Confusion Matrix:', confusion_matrix(y_test, predictions))
print ('Classification Report:', classification_report(y_test, predictions))
pipeline = Pipeline([
('clf', RandomForestClassifier(criterion='entropy'))
])
parameters = {
'clf__n_estimators': (5, 10, 20, 50),
'clf__max_depth': (50, 150, 250),
'clf__min_samples_split': (1, 2, 3),
'clf__min_samples_leaf': (1, 2, 3)
}
grid_search = GridSearchCV(pipeline, parameters, n_jobs=-1, verbose=1, scoring='f1')
grid_search.fit(X_train, y_train)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The first thing we do is load in the BLASTP output we generated, so that we can plot some of the key features. We do that using the rbbh.read_data() function in the cell below. This puts the data into a dataframe called data_fwd.
Step2: <div class="alert alert-warning">
Step3: There are 5265 rows in this table, one for each of the query protein sequences in the P. syringae B728a annotation.
Step4: <div class="alert alert-warning">
Step5: <div class="alert alert-warning">
Step6: <div class="alert alert-warning">
Step7: We can inspect the dataframe of RBBH using the .head() and .describe() methods, by executing the cells below.
Step8: It is inevitable that the RBBH set will have the same or fewer protein pairs in it, than the number of proteins in the smallest of the forward and reverse protein sets. But how many proteins have been filtered in this comparison? We can find out by executing the cell below.
Step9: <div class="alert alert-warning">
Step10: <div class="alert alert-warning">
Step11: <div class="alert alert-warning">
Step12: Visualising RBBH with ACT
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
# Import helper module
from helpers import rbbh
# Load one-way BLAST results into a data frame called data_fwd
data_fwd = rbbh.read_data("data/pseudomonas_blastp/B728a_vs_NCIMB_11764.tab")
# Show first few lines of the loaded data
data_fwd.head()
# Show descriptive statistics for the table
data_fwd.describe()
# Plot a histogram of alignment lengths for the BLAST data
data_fwd.alignment_length.hist(bins=100)
# Plot a histogram of percentage identity for the BLAST data
data_fwd.identity.hist(bins=100)
# Plot a histogram of query_coverage for the BLAST data
data_fwd.query_coverage.hist(bins=100)
# Plot a histogram of percentage coverage for the BLAST data
data_fwd.subject_coverage.hist(bins=100)
# Plot 2D histogram of subject sequence (match) coverage against query
# sequence coverag
rbbh.plot_hist2d(data_fwd.query_coverage, data_fwd.subject_coverage,
"one-way query COV", "one-way subject COV",
"one-way coverage comparison")
rbbh.plot_hist2d(data_fwd.query_coverage, data_fwd.identity,
"one-way query COV", "one-way match PID",
"one-way coverage/identity comparison")
# Load one-way BLAST results into a data frame called data_fwd
data_rev = rbbh.read_data("data/pseudomonas_blastp/NCIMB_11764_vs_B728a.tab")
# Calculate RBBH for the two Pseudomonas datasets
# This returns three dataframes: df1 and df2 are the forward and reverse BLAST
# results (filtered, if any filters were used), and rbbh is the dataframe of
# reciprocal best BLAST hits
df1, df2, data_rbbh = rbbh.find_rbbh(data_fwd, data_rev)
# Peek at the first few lines of the RBBH results
data_rbbh.head()
# Show summary statistics for RBBH
data_rbbh.describe()
#ย Report the size of each of the forward and reverse input, and rbbh output dataframes
s = '\n'.join(["Forward BLAST input: {0} proteins",
"Reverse BLAST input: {1} proteins",
"RBBH output: {2} proteins"])
print(s.format(len(data_fwd), len(data_rev), len(data_rbbh)))
print("(min difference = {0})".format(min(len(data_fwd), len(data_rev)) - len(data_rbbh)))
# Histogram of forward match percentage identity (one-way)
data_fwd.identity.hist(bins=100)
# Histogram of forward match percentage identity (RBBH)
data_rbbh.identity_x.hist(bins=100)
# Plot 2D histograms of query coverage against subject coverage for the
# one-way forward matches, and those retained after calculating RBBH
rbbh.plot_hist2d(data_fwd.query_coverage, data_fwd.subject_coverage,
"one-way query COV", "one-way subject COV",
"one-way coverage comparison")
rbbh.plot_hist2d(data_rbbh.query_coverage_x, data_rbbh.subject_coverage_x,
"RBBH (fwd) query COV", "RBBH (fwd) subject COV",
"RBBH_comparisons.ipynbH coverage comparison")
# Calculate ID and coverage-filtered RBBH for the two Pseudomonas datasets
# This returns three dataframes: df1_filtered and df2_filtered are the
# filtered forward and reverse BLAST results , and rbbh_filtered is the
# dataframe of reciprocal best BLAST hits
df1_filtered, df2_filtered, rbbh_filtered = rbbh.find_rbbh(data_fwd, data_rev, pid=40, cov=70)
# Histogram of forward match percentage identity (RBBH, filtered)
rbbh_filtered.identity_x.hist(bins=100)
# Plot 2D histograms of query coverage against subject coverage for the
# one-way forward matches retained after calculating RBBH and
# filtering on percentage identity and coverage
rbbh.plot_hist2d(rbbh_filtered.query_coverage_x, rbbh_filtered.subject_coverage_x,
"filtered RBBH (fwd) query COV", "filtered_RBBH (fwd) subject COV",
"filtered RBBH coverage comparison")
# Read feature locations for each Pseudomonas file
features = rbbh.read_genbank("data/pseudomonas/GCF_000988485.1_ASM98848v1_genomic.gbff",
"data/pseudomonas/GCF_000293885.2_ASM29388v3_genomic.gbff")
# Write a .crunch file of filtered RBBH for the Pseudomonas comparisons
rbbh.write_crunch(rbbh_filtered, features,
fwd="GCF_000988485.1_ASM98848v1_genomic",
rev="GCF_000293885.2_ASM29388v3_genomic",
outdir="data/pseudomonas_blastp",
filename="B728a_rbbh_NCIMB_11764.crunch")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Env setup
Step2: Object detection imports
Step3: Model preparation
Step4: Download Model
Step5: Load a (frozen) Tensorflow model into memory.
Step6: Loading label map
Step7: Helper code
Step8: Detection
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import os
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile
from collections import defaultdict
from io import StringIO
from matplotlib import pyplot as plt
from PIL import Image
# This is needed since the notebook is stored in the object_detection folder.
sys.path.append("..")
from object_detection.utils import ops as utils_ops
if tf.__version__ < '1.4.0':
raise ImportError('Please upgrade your tensorflow installation to v1.4.* or later!')
# This is needed to display the images.
%matplotlib inline
from utils import label_map_util
from utils import visualization_utils as vis_util
# What model to download.
MODEL_NAME = 'ssd_mobilenet_v1_coco_2017_11_17'
MODEL_FILE = MODEL_NAME + '.tar.gz'
DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/'
# Path to frozen detection graph. This is the actual model that is used for the object detection.
PATH_TO_CKPT = MODEL_NAME + '/frozen_inference_graph.pb'
# List of the strings that is used to add correct label for each box.
PATH_TO_LABELS = os.path.join('data', 'mscoco_label_map.pbtxt')
NUM_CLASSES = 90
opener = urllib.request.URLopener()
opener.retrieve(DOWNLOAD_BASE + MODEL_FILE, MODEL_FILE)
tar_file = tarfile.open(MODEL_FILE)
for file in tar_file.getmembers():
file_name = os.path.basename(file.name)
if 'frozen_inference_graph.pb' in file_name:
tar_file.extract(file, os.getcwd())
detection_graph = tf.Graph()
with detection_graph.as_default():
od_graph_def = tf.GraphDef()
with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
serialized_graph = fid.read()
od_graph_def.ParseFromString(serialized_graph)
tf.import_graph_def(od_graph_def, name='')
label_map = label_map_util.load_labelmap(PATH_TO_LABELS)
categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=NUM_CLASSES, use_display_name=True)
category_index = label_map_util.create_category_index(categories)
def load_image_into_numpy_array(image):
(im_width, im_height) = image.size
return np.array(image.getdata()).reshape(
(im_height, im_width, 3)).astype(np.uint8)
# For the sake of simplicity we will use only 2 images:
# image1.jpg
# image2.jpg
# If you want to test the code with your images, just add path to the images to the TEST_IMAGE_PATHS.
PATH_TO_TEST_IMAGES_DIR = 'test_images'
TEST_IMAGE_PATHS = [ os.path.join(PATH_TO_TEST_IMAGES_DIR, 'image{}.jpg'.format(i)) for i in range(1, 3) ]
# Size, in inches, of the output images.
IMAGE_SIZE = (12, 8)
def run_inference_for_single_image(image, graph):
with graph.as_default():
with tf.Session() as sess:
# Get handles to input and output tensors
ops = tf.get_default_graph().get_operations()
all_tensor_names = {output.name for op in ops for output in op.outputs}
tensor_dict = {}
for key in [
'num_detections', 'detection_boxes', 'detection_scores',
'detection_classes', 'detection_masks'
]:
tensor_name = key + ':0'
if tensor_name in all_tensor_names:
tensor_dict[key] = tf.get_default_graph().get_tensor_by_name(
tensor_name)
if 'detection_masks' in tensor_dict:
# The following processing is only for single image
detection_boxes = tf.squeeze(tensor_dict['detection_boxes'], [0])
detection_masks = tf.squeeze(tensor_dict['detection_masks'], [0])
# Reframe is required to translate mask from box coordinates to image coordinates and fit the image size.
real_num_detection = tf.cast(tensor_dict['num_detections'][0], tf.int32)
detection_boxes = tf.slice(detection_boxes, [0, 0], [real_num_detection, -1])
detection_masks = tf.slice(detection_masks, [0, 0, 0], [real_num_detection, -1, -1])
detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(
detection_masks, detection_boxes, image.shape[0], image.shape[1])
detection_masks_reframed = tf.cast(
tf.greater(detection_masks_reframed, 0.5), tf.uint8)
# Follow the convention by adding back the batch dimension
tensor_dict['detection_masks'] = tf.expand_dims(
detection_masks_reframed, 0)
image_tensor = tf.get_default_graph().get_tensor_by_name('image_tensor:0')
# Run inference
output_dict = sess.run(tensor_dict,
feed_dict={image_tensor: np.expand_dims(image, 0)})
# all outputs are float32 numpy arrays, so convert types as appropriate
output_dict['num_detections'] = int(output_dict['num_detections'][0])
output_dict['detection_classes'] = output_dict[
'detection_classes'][0].astype(np.uint8)
output_dict['detection_boxes'] = output_dict['detection_boxes'][0]
output_dict['detection_scores'] = output_dict['detection_scores'][0]
if 'detection_masks' in output_dict:
output_dict['detection_masks'] = output_dict['detection_masks'][0]
return output_dict
for image_path in TEST_IMAGE_PATHS:
image = Image.open(image_path)
# the array based representation of the image will be used later in order to prepare the
# result image with boxes and labels on it.
image_np = load_image_into_numpy_array(image)
# Expand dimensions since the model expects images to have shape: [1, None, None, 3]
image_np_expanded = np.expand_dims(image_np, axis=0)
# Actual detection.
output_dict = run_inference_for_single_image(image_np, detection_graph)
# Visualization of the results of a detection.
vis_util.visualize_boxes_and_labels_on_image_array(
image_np,
output_dict['detection_boxes'],
output_dict['detection_classes'],
output_dict['detection_scores'],
category_index,
instance_masks=output_dict.get('detection_masks'),
use_normalized_coordinates=True,
line_thickness=8)
plt.figure(figsize=IMAGE_SIZE)
plt.imshow(image_np)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Connect to Graphistry + Test
Step2: Connect to TigerGraph and Test
Step3: Query Tigergraph
Step4: Visualize result of TigerGraph query
Step5: In-Tool UI Walkthrough
Step6: Adding Graphs
Step7: Custom Nodes and Attributes + Saving Sessions
|
<ASSISTANT_TASK:>
Python Code:
TIGER_CONFIG = {
'fqdn': 'http://MY_TIGER_SERVER:9000'
}
#!pip install graphistry
import pandas as pd
import requests
### COMMON ISSUES: wrong server, wrong key, wrong protocol, network notebook->graphistry firewall permissions
import graphistry
# To specify Graphistry account & server, use:
# graphistry.register(api=3, username='...', password='...', protocol='https', server='hub.graphistry.com')
# For more options, see https://github.com/graphistry/pygraphistry#configure
graphistry.__version__
### EXPECTED RESULT: Visualization of a curved triangle
### COMMON ISSUES: Blank box as HTTPS not configured on Graphistry server so browser disallows iframe. Try plot(render=False)
g = graphistry\
.edges(pd.DataFrame({'s': [0,1,2], 'd': [1,2,0], 'a': ['quick', 'brown', 'fox'] }))\
.bind(source='s', destination='d')
g.plot() #g.plot(render=False)
### EXPECTED RESULT: {'GET /statistics': ...}
### COMMON ISSUES: returns '{}' (may need to run a few times); wrong fqdn; firewall issues; ...
requests.get(TIGER_CONFIG['fqdn'] + '/statistics?seconds=60').json()
# string -> dict
def query_raw(query_string):
url = TIGER_CONFIG['fqdn'] + "/query/" + query_string
r = requests.get(url)
return r.json()
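`query_raw` concatenates the query string straight onto the URL; an alternative sketch using `urllib.parse.urlencode` (parameter names here are illustrative, not part of the original code):

```python
from urllib.parse import urlencode

# Compose the /query endpoint URL from a base fqdn, a query name and
# keyword parameters.
def build_query_url(fqdn, name, **params):
    qs = urlencode(params)
    return '{}/query/{}'.format(fqdn, name) + ('?' + qs if qs else '')

print(build_query_url('http://tiger:9000', 'connection_mining', A=1, B=10, k=1000))
```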
def flatten (lst_of_lst):
try:
if type(lst_of_lst[0]) == list:
return [item for sublist in lst_of_lst for item in sublist]
else:
return lst_of_lst
except:
print('fail', lst_of_lst)
return lst_of_lst
#str * dict -> dict
def named_edge_to_record(name, edge):
record = {k: edge[k] for k in edge.keys() if not (type(edge[k]) == dict) }
record['type'] = name
nested = [k for k in edge.keys() if type(edge[k]) == dict]
if len(nested) == 1:
for k in edge[nested[0]].keys():
record[k] = edge[nested[0]][k]
else:
        for prefix in nested:
            for k in edge[prefix].keys():
                record[prefix + "_" + k] = edge[prefix][k]
return record
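A self-contained sanity sketch of the record flattening above, using a simplified variant that always prefixes nested attributes (the edge shape shown is an assumption):

```python
# Flatten nested attribute dicts of an edge into one flat record, always
# prefixing nested keys with their parent key.
def edge_to_record(name, edge):
    record = {k: v for k, v in edge.items() if not isinstance(v, dict)}
    record['type'] = name
    for prefix, attrs in edge.items():
        if isinstance(attrs, dict):
            for k, v in attrs.items():
                record[prefix + '_' + k] = v
    return record

edge = {'from_id': 'p1', 'to_id': 'c2', 'attributes': {'duration': 30}}
print(edge_to_record('phone_call', edge))
```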
def query(query_string):
results = query_raw(query_string)['results']
out = {}
for o in results:
for k in o.keys():
if type(o[k]) == list:
out[k] = flatten(o[k])
out = flatten([[named_edge_to_record(k,v) for v in out[k]] for k in out.keys()])
print('# results', len(out))
return pd.DataFrame(out)
def graph_edges(edges):
return graphistry.bind(source='from_id', destination='to_id').edges(edges)
df = query("connection_mining?A=1&B=10&k=1000")
print('rows: ', len(df))
df.sample(3)
### EXPECTED RESULT: GRAPH VISUALIZATION
### COMMON ISSUES: try inspecting query_raw('connection_mining?A=1&B=10&k=2')
graph_edges(query("connection_mining?A=1&B=10&k=1000")).plot()
df = pd.read_csv('https://github.com/graphistry/pygraphistry/raw/master/demos/data/transactions.csv')
df.sample(10)
hg = graphistry.hypergraph(df[:1000], entity_types=['Source', 'Destination', 'Transaction ID'])
print('Hypergraph parts', hg.keys())
hg['graph'].plot()
help(graphistry.hypergraph)
df1 = query("connection_mining?A=1&B=10&k=1000").assign(data_source='query1')
df2 = query("connection_mining?A=1&B=12&k=1000").assign(data_source='query2')
edges2 = pd.concat([df1, df2], ignore_index=True)
graph_edges(edges2).plot()
conn = query("connection_mining?A=1&B=10&k=1000")
froms = conn.rename(columns={'from_id': 'id', 'from_type': 'node_type'})[['id', 'node_type']]
tos = conn.rename(columns={'to_id': 'id', 'to_type': 'node_type'})[['id', 'node_type']]
nodes = pd.concat([froms, tos], ignore_index=True).drop_duplicates().dropna()
nodes.sample(3)
nodes['node_type'].unique()
#https://labs.graphistry.com/docs/docs/palette.html
type2color = {
'phone_call': 0,
'citizen': 1,
'bank_account': 2,
'phone_number': 3,
'bank_transfer_event': 4,
'hotel_room_event': 5
}
nodes['color'] = nodes['node_type'].apply(lambda type_str: type2color[type_str])
nodes.sample(3)
g = graphistry.bind(source='from_id', destination='to_id').edges(conn)
#updating colors
g = g.bind(node='id', point_color='color').nodes(nodes)
#saving sessions
g = g.settings(url_params={'workbook': 'my_workbook1'})
g.plot()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create some plot data
Step2: Define range of data to make sparklines
Step3: Output to new DataFrame of Sparklines
Step4: Insert Sparklines into source DataFrame
Step5: Detailed Formatting
Step6: Example Data and Sparklines Layout
Step7: Export to HTML
Step8: HTML text for rendering elsewhere
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
from scipy import stats
import matplotlib.pyplot as plt
%matplotlib inline
import sparklines
density_func = 78
mean, var, skew, kurt = stats.chi.stats(density_func, moments='mvsk')
x_chi = np.linspace(stats.chi.ppf(0.01, density_func),
stats.chi.ppf(0.99, density_func), 100)
y_chi = stats.chi.pdf(x_chi, density_func)
x_expon = np.linspace(stats.expon.ppf(0.01), stats.expon.ppf(0.99), 100)
y_expon = stats.expon.pdf(x_expon)
a_gamma = 1.99
x_gamma = np.linspace(stats.gamma.ppf(0.01, a_gamma),
stats.gamma.ppf(0.99, a_gamma), 100)
y_gamma = stats.gamma.pdf(x_gamma, a_gamma)
n = 100
np.random.seed(0) # keep generated data the same for git commit
data = [np.random.rand(n),
np.random.randn(n),
np.random.beta(2, 1, size=n),
np.random.binomial(3.4, 0.22, size=n),
np.random.exponential(size=n),
np.random.geometric(0.5, size=n),
np.random.laplace(size=n),
y_chi,
y_expon,
y_gamma]
function = ['rand',
'randn',
'beta',
'binomial',
'exponential',
'geometric',
'laplace',
'chi',
'expon',
'gamma']
df = pd.DataFrame(data)
df['function'] = function
df
a = df.ix[:, 0:100]
df_out = pd.DataFrame()
df_out['sparkline'] = sparklines.create(data=a)
sparklines.show(df_out[['sparkline']])
df['sparkline'] = sparklines.create(data=a)
sparklines.show(df[['function', 'sparkline']])
df_out = pd.DataFrame()
df_out['sparkline'] = sparklines.create(data=a,
color='#1b470a',
fill_color='#99a894',
fill_alpha=0.2,
point_color='blue',
point_fill='none',
point_marker='*',
point_size=3,
figsize=(6, 0.25))
sparklines.show(df_out[['sparkline']])
df_copy = df[['function', 'sparkline']].copy()
df_copy['value'] = df.ix[:, 100]
df_copy['change'] = df.ix[:,98] - df.ix[:,99]
df_copy['change_%'] = df_copy.change / df.ix[:,99]
sparklines.show(df_copy)
sparklines.to_html(df_copy, 'pandas_sparklines_demo')
html = sparklines.to_html(df_copy)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We want to stop RP from reporting all sorts of stuff for this example so we set a specific environment variable to tell RP to do so. If you want to see what RP reports change it to REPORT.
Step2: We will import the appropriate parts from AdaptiveMD as we go along so it is clear what it needed at what stage. Usually you will have the block of imports at the beginning of your script or notebook as suggested in PEP8.
Step3: Let's open a project with a UNIQUE name. This will be the name used in the DB so make sure it is new and not too short. Opening a project will always create a non-existing project and reopen an existing one. You cannot choose between opening types as you would with a file. This is a precaution to not accidentally delete your project.
Step4: Now we have a handle for our project. First thing is to set it up to work on a resource.
Step5: TaskGenerators
Step6: Here we used a special prefix that can point to specific locations.
Step7: To explain this: we now have an OpenMMEngine which uses the previously made pdb File object and the location defined in there. The same goes for some Files for the OpenMM XML files, plus some args to store each frame (to keep it fast) and run using the CPU kernel.
Step8: The modeller
Step9: Again we name it pyemma for later reference.
Step10: Note, that you cannot add the same engine twice. But if you create a new engine it will be considered different and hence you can store it again.
Step11: Next we create the parameter for the engine to run the simulation. Since it seemed appropriate we use a Trajectory object (a special File with initial frame and length) as the input. You could of course pass these things separately, but this way, we can actually reference the not-yet-existing trajectory and do stuff with it.
Step12: This says: the initial frame is alanine.pdb, run for 100 frames, and named xxxxxxxx.dcd.
Step13: That's it: just take a trajectory description and turn it into a task that contains the shell commands and needed files, etc.
Step14: Now we have to wait. To see if we are done, you can check whether the scheduler is still running tasks.
Step15: or you wait until it becomes idle using .wait()
Step16: If all went as expected we will now have our first trajectory.
Step17: Excellent, so cleanup and close our queue
Step18: and close the project.
|
<ASSISTANT_TASK:>
Python Code:
import sys, os, time
# verbose = os.environ.get('RADICAL_PILOT_VERBOSE', 'REPORT')
os.environ['RADICAL_PILOT_VERBOSE'] = 'ERROR'
from adaptivemd import Project
from adaptivemd import OpenMMEngine
from adaptivemd import PyEMMAAnalysis
from adaptivemd import File, Directory, WorkerScheduler
from adaptivemd import DT
# Project.delete('test')
project = Project('test')
from adaptivemd import LocalJHP, LocalSheep, AllegroCluster
resource_id = 'local.jhp'
if resource_id == 'local.jhp':
project.initialize(LocalJHP())
elif resource_id == 'local.sheep':
project.initialize(LocalSheep())
elif resource_id == 'fub.allegro':
project.initialize(AllegroCluster())
pdb_file = File('file://../files/alanine/alanine.pdb').named('initial_pdb').load()
engine = OpenMMEngine(
pdb_file=pdb_file,
system_file=File('file://../files/alanine/system.xml').load(),
integrator_file=File('file://../files/alanine/integrator.xml').load(),
args='-r --report-interval 1 -p CPU --store-interval 1'
).named('openmm')
engine.name
modeller = PyEMMAAnalysis(
pdb_file=pdb_file
).named('pyemma')
project.generators.add(engine)
project.generators.add(modeller)
project.files.one
sc = WorkerScheduler(project.resource)
sc.enter(project)
t = engine.task_run_trajectory(project.new_trajectory(pdb_file, 100, restart=True)).extend(50).extend(100)
sc(t)
import radical.pilot as rp
rp.TRANSFER
sc.advance()
for f in project.trajectories:
print f.basename, f.length, DT(f.created).time
for t in project.tasks:
print t.stderr.objs['worker']
print project.generators
t1 = engine.task_run_trajectory(project.new_trajectory(pdb_file, 100, restart=True))
t2 = t1.extend(100)
t2.trajectory.restart
project.tasks.add(t2)
for f in project.trajectories:
print f.drive, f.basename, len(f), f.created, f.__time__, f.exists, hex(f.__uuid__)
for f in project.files:
print f.drive, f.path, f.created, f.__time__, f.exists, hex(f.__uuid__)
w = project.workers.last
print w.state
print w.command
for t in project.tasks:
print t.state, t.worker.hostname if t.worker else 'None'
sc.advance()
t1 = engine.task_run_trajectory(project.new_trajectory(pdb_file, 100))
t2 = t1.extend(100)
project.tasks.add(t2)
# from adaptivemd.engine import Trajectory
# t3 = engine.task_run_trajectory(Trajectory('staging:///trajs/0.dcd', pdb_file, 100)).extend(100)
# t3.dependencies = []
# def get_created_files(t, s):
# if t.is_done():
# print 'done', s
# return s - set(t.added_files)
# else:
# adds = set(t.added_files)
# rems = set(s.required[0] for s in t._pre_stage)
# print '+', adds
# print '-', rems
# q = set(s) - adds | rems
# if t.dependencies is not None:
# for d in t.dependencies:
# q = get_created_files(d, q)
# return q
# get_created_files(t3, {})
for w in project.workers:
print w.hostname, w.state
w = project.workers.last
print w.state
print w.command
w.command = 'shutdown'
for t in project.tasks:
print t.state, t.worker.hostname if t.worker else 'None'
for f in project.trajectories:
print f.drive, f.basename, len(f), f.created, f.__time__, f.exists, hex(f.__uuid__)
project.trajectories.one[0]
t = engine.task_run_trajectory(project.new_trajectory(project.trajectories.one[0], 100))
project.tasks.add(t)
print project.files
print project.tasks
t = modeller.execute(list(project.trajectories))
project.tasks.add(t)
from uuid import UUID
# project.storage.tasks._document.find_one({'_dict': {'generator': {'_dict': ...}}})
genlist = ['openmm']
scheduler = sc
prefetch = 1
while True:
scheduler.advance()
if scheduler.is_idle:
for _ in range(prefetch):
tasklist = scheduler(project.storage.tasks.consume_one())
if len(tasklist) == 0:
break
time.sleep(2.0)
scheduler = project.get_scheduler(cores=1)
trajectory = project.new_trajectory(engine['pdb_file'], 100)
trajectory
task = engine.task_run_trajectory(trajectory)
scheduler(task)
scheduler.is_idle
print scheduler.generators
# scheduler.wait()
print project.files
print project.trajectories
scheduler.exit()
project.close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Computing the eigenvalues and the eigenvectors
Step2: The @ operator stands, in this context, for matrix multiplication.
Step3: Modal Response
Step4: The definition of the time vector is a bit complicated...
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from scipy.linalg import eigh
import matplotlib.pyplot as plt

M = np.array(((2.0, 0.0), ( 0.0, 1.0)))
K = np.array(((3.0,-2.0), (-2.0, 2.0)))
p = np.array(( 0.0, 1.0))
w = 2.0
evals, Psi = eigh(K, M)
Mstar = Psi.T@M@Psi
Kstar = Psi.T@K@Psi
pstar = Psi.T@p
print(evals,end='\n\n')
print(Psi,end='\n\n')
print(Mstar,end='\n\n')
print(Kstar,end='\n\n')
print(pstar,end='\n\n')
print(Mstar[0,1]*150*10E6*1000)
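As a sanity check (a standalone sketch re-creating the same small system), the generalized eigenpairs satisfy $K\psi = \lambda M\psi$, and the mass-normalized mode shapes diagonalize both matrices, which is exactly why Mstar and Kstar above come out diagonal.

```python
import numpy as np
from scipy.linalg import eigh

M = np.array(((2.0, 0.0), (0.0, 1.0)))
K = np.array(((3.0, -2.0), (-2.0, 2.0)))
evals, Psi = eigh(K, M)  # generalized problem K @ psi = lam * M @ psi

# Each column of Psi satisfies the generalized eigenvalue equation.
assert np.allclose(K @ Psi, M @ Psi * evals)
# Mass-normalized modes: Psi.T M Psi = I and Psi.T K Psi = diag(evals).
assert np.allclose(Psi.T @ M @ Psi, np.eye(2))
assert np.allclose(Psi.T @ K @ Psi, np.diag(evals))
```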
L = np.sqrt(evals)
DAF = 1.0/(L**2-w**2)
beta = w/L
t = np.linspace(0,60,601)[:,None]
q = pstar*DAF*(np.sin(w*t)-beta*np.sin(L*t))
curves = plt.plot(t,q)
plt.legend(curves,['q1', 'q2'])
plt.title('Modal Response')
plt.xlabel('$\omega_0t$')
plt.ylabel('$q_i/\Delta_{st}$');
x = (Psi@q.T).T
curves = plt.plot(t, x)
plt.legend(curves,['x1', 'x2'])
plt.title('Structural Response')
plt.xlabel('$\omega_0t$')
plt.ylabel('$X_i/\Delta_{st}$');
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <div id='intro' />
Step2: where each interior point (blue) represents a point where we want to know the value of the function $u(x,y)$. We also take $h_x$ and $h_y$ as the space steps of the mesh.
Step3: The function solve_laplace() is in charge of building the linear system corresponding to the problem P to be solved.
Step4: Helmholtz equation
Step5: The function solve_helmholtz() is in charge of building the linear system corresponding to the problem P to be solved.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from mpl_toolkits.mplot3d import axes3d
from matplotlib import pyplot as plt
from ipywidgets import interact
from ipywidgets import IntSlider
import sympy as sym
import matplotlib as mpl
mpl.rcParams['font.size'] = 14
mpl.rcParams['axes.labelsize'] = 20
mpl.rcParams['xtick.labelsize'] = 14
mpl.rcParams['ytick.labelsize'] = 14
sym.init_printing()
%matplotlib inline
def plot(x,y,w,elev=40,azim=230):
# Plot the solution
X,Y = np.meshgrid(y,x)
W = w.reshape(X.shape)
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111, projection='3d')
ax.plot_wireframe(X, Y, W)
#ax.plot_surface(X, Y, W, alpha=0.25)
plt.xlabel("y")
plt.ylabel("x")
#ax.set_zlim(0.,1.)
#plt.savefig("sol%dx%d.png"%(Nx+1,Ny+1))
ax.view_init(elev,azim)
plt.show()
x = np.linspace(0., 1., 10)
y = np.linspace(0., 1., 10)
xgrid, ygrid = np.meshgrid(x, y, sparse=False)
plt.figure(figsize=(8,8))
plt.scatter(xgrid.ravel(), ygrid.ravel())
plt.plot((0,1),(0,0), 'g--' ,lw=2)
plt.plot((1,1),(0,1), 'g--' ,lw=2)
plt.plot((1,0),(1,1), 'g--' ,lw=2)
plt.plot((0,0),(1,0), 'g--' ,lw=2)
plt.title('Uniform discretization scheme')
plt.xlim(-0.1,1.1)
plt.ylim(-0.1,1.1)
plt.axis('equal')
plt.show()
# Problema 1
xmin, xmax = -1, 1.
ymin, ymax = -1, 1.
f = lambda x,y : x*y
bottom = lambda x : 0# np.sin(np.pi*x)
top = lambda x : 0 #np.sin(np.pi*x)
left = lambda y : 0
right = lambda y: 0
P1 = {"f":f, "b":bottom, "t":top, "l":left, "r":right,
"xmin":xmin, "xmax":xmax, "ymin":ymin, "ymax":ymax}
# Problema 2
xmin, xmax = 0., 1.
ymin, ymax = 0., 1.
f = lambda x,y : x
bottom = lambda x : np.sin(np.pi*x)
top = lambda x : np.sin(np.pi*x)
left = lambda y : 0
right = lambda y: 0
P2 = {"f":f, "b":bottom, "t":top, "l":left, "r":right,
"xmin":xmin, "xmax":xmax, "ymin":ymin, "ymax":ymax}
# Problema 3
xmin, xmax = -1, 1.
ymin, ymax = -1, 1.
f = lambda x,y : 0
bottom = lambda x : np.sin(np.pi*x)
top = lambda x : -np.sin(np.pi*x)
left = lambda y : 0
right = lambda y: 0
P3 = {"f":f, "b":bottom, "t":top, "l":left, "r":right,
"xmin":xmin, "xmax":xmax, "ymin":ymin, "ymax":ymax}
# Problema 4
xmin, xmax = 0, 1.
ymin, ymax = 0, 1.
f = lambda x,y : x*np.exp(y)
bottom = lambda x : x
top = lambda x : x*np.exp(1)
left = lambda y : 0*y
right = lambda y: np.exp(y)
P4 = {"f":f, "b":bottom, "t":top, "l":left, "r":right,
"xmin":xmin, "xmax":xmax, "ymin":ymin, "ymax":ymax}
P_Poisson=[('P1', P1), ('P2', P2),('P3', P3),('P4', P4)]
def solve_laplace(P, Nx=30, Ny=30,flag_plot=False,elev=40,azim=230):
# Discretize x and y
x = np.linspace(P["xmin"], P["xmax"], Nx+1)
y = np.linspace(P["ymin"], P["ymax"], Ny+1)
# Define the discretization parameters
dx = x[1]-x[0]
dy = y[1]-y[0]
# Create the matrix and the right hand size vector
A = np.zeros([(Nx+1)*(Ny+1), (Nx+1)*(Ny+1)])
b = np.zeros([(Nx+1)*(Ny+1), 1])
# Define global indexing
def index(i, j, nCols=(Ny+1)):
return j + i*nCols
# Fill up the matrix and right hand side vector
for i in range(Nx+1):
for j in range(Ny+1):
k = index(i,j)
if j==0: # y=ymin, bottom
A[k,k] = 1.
b[k] = P["b"](x[i])
elif i==Nx: # x=xmax, right
A[k,k] = 1.
b[k] = P["r"](y[j])
elif j==Ny: # y=ymax, top
A[k,k] = 1.
b[k] = P["t"](x[i])
elif i==0: # x=xmin, left
A[k,k] = 1.
b[k] = P["l"](y[j])
else:
A[k, k] = -2./dx**2 - 2./dy**2
A[k,index(i+1,j)] = 1./dx**2
A[k,index(i-1,j)] = 1./dx**2
A[k,index(i,j-1)] = 1./dy**2
A[k,index(i,j+1)] = 1./dy**2
b[k] = P["f"](x[i], y[j])
# Solve the linear system
w = np.linalg.solve(A, b)
if flag_plot:
plot(x,y,w,elev,azim)
return
return x, y, w
elev_widget = IntSlider(min=0, max=180, step=10, value=40)
azim_widget = IntSlider(min=0, max=360, step=10, value=230)
interact(solve_laplace,P=P_Poisson,Nx=(5,50,5),Ny=(5,50,5),flag_plot=[True],elev=elev_widget,azim=azim_widget)
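The global indexing used inside solve_laplace maps a 2-D grid point (i, j) to a single equation number via row-major flattening. A standalone sketch of that mapping and its inverse (with hypothetical grid sizes):

```python
Nx, Ny = 4, 3             # hypothetical grid: (Nx+1) x (Ny+1) unknowns
nCols = Ny + 1

def index(i, j):          # same row-major rule as in solve_laplace
    return j + i * nCols

def unindex(k):           # inverse mapping back to grid coordinates
    return k // nCols, k % nCols

# Every grid point gets a unique, contiguous equation number,
# and the mapping inverts cleanly.
ks = [index(i, j) for i in range(Nx + 1) for j in range(Ny + 1)]
assert ks == list(range((Nx + 1) * (Ny + 1)))
assert all(unindex(index(i, j)) == (i, j)
           for i in range(Nx + 1) for j in range(Ny + 1))
```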
# Problem 1
Lambda = 0.1
xmin, xmax = 0., 1.
ymin, ymax = 0., 1.
f = lambda x,y : 0
bottom = lambda x : 0
top = lambda x : 0
left = lambda y : 1
right = lambda y: 1
P1 = {"Lambda":Lambda, "f":f, "b":bottom, "t":top, "l":left, "r":right,
"xmin":xmin, "xmax":xmax, "ymin":ymin, "ymax":ymax}
# Problem 2
Lambda = 2.0
xmin, xmax = 0., 1.
ymin, ymax = 0., 1.
f = lambda x,y : 0
bottom = lambda x : 0
top = lambda x : 0
left = lambda y : 1
right = lambda y: 1
P2 = {"Lambda":Lambda, "f":f, "b":bottom, "t":top, "l":left, "r":right,
"xmin":xmin, "xmax":xmax, "ymin":ymin, "ymax":ymax}
# Problem 3
Lambda = 0.0
xmin, xmax = -1, 1.
ymin, ymax = -1, 1.
f = lambda x,y : 0
bottom = lambda x : 0
top = lambda x : 0
left = lambda y : np.sin(np.pi*y)
right = lambda y: -np.sin(np.pi*y)
P3 = {"Lambda":Lambda, "f":f, "b":bottom, "t":top, "l":left, "r":right,
"xmin":xmin, "xmax":xmax, "ymin":ymin, "ymax":ymax}
P_Helmholtz=[('P1', P1), ('P2', P2),('P3', P3)]
def solve_helmholtz(P, Nx=30, Ny=30,flag_plot=False,elev=40,azim=230):
# Discretize x and y
x = np.linspace(P["xmin"], P["xmax"], Nx+1)
y = np.linspace(P["ymin"], P["ymax"], Ny+1)
L = P["Lambda"]
# Define the discretization parameters
dx = x[1]-x[0]
dy = y[1]-y[0]
# Create the matrix and the right hand size vector
A = np.zeros([(Nx+1)*(Ny+1), (Nx+1)*(Ny+1)])
b = np.zeros([(Nx+1)*(Ny+1), 1])
# Define global indexing
def index(i, j, nCols=(Ny+1)):
return j + i*nCols
# Fill up the matrix and right hand side vector
for i in range(Nx+1):
for j in range(Ny+1):
k = index(i,j)
if j==0: # y=ymin, bottom
A[k,k] = -1.5/dy
A[k,index(i,j+1)] = 2.0/dy
A[k,index(i,j+2)] =-0.5/dy
b[k] = P["b"](x[i])
elif i==Nx: # x=xmax, right
A[k,k] = 1.
b[k] = P["r"](y[j])
elif j==Ny: # y=ymax, top
A[k,k] = 1.5/dy
A[k,index(i,j-1)] = -2.0/dy
A[k,index(i,j-2)] = +0.5/dy
b[k] = P["t"](x[i])
elif i==0: # x=xmin, left
A[k,k] = 1.
b[k] = P["l"](y[j])
else:
A[k, k] = -2./dx**2 - 2./dy**2 - L
A[k,index(i+1,j)] = 1./dx**2
A[k,index(i-1,j)] = 1./dx**2
A[k,index(i,j-1)] = 1./dy**2
A[k,index(i,j+1)] = 1./dy**2
b[k] = P["f"](x[i], y[j])
# Solve the linear system
w = np.linalg.solve(A, b)
if flag_plot:
plot(x,y,w,elev,azim)
return
return x, y, w
elev_widget = IntSlider(min=0, max=180, step=10, value=40)
azim_widget = IntSlider(min=0, max=360, step=10, value=230)
interact(solve_helmholtz,P=P_Helmholtz,Nx=(5,50,5),Ny=(5,50,5),flag_plot=[True],elev=elev_widget,azim=azim_widget)
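The bottom and top boundary rows in solve_helmholtz use the one-sided stencil $(-1.5\,f_0 + 2f_1 - 0.5\,f_2)/h$, which approximates the first derivative to second order. A small numeric check of that claim on $f = \sin$:

```python
import numpy as np

def one_sided_deriv(f, y0, h):
    # Same 2nd-order one-sided first-derivative stencil as the boundary rows.
    return (-1.5 * f(y0) + 2.0 * f(y0 + h) - 0.5 * f(y0 + 2 * h)) / h

# f'(0) = cos(0) = 1 for f = sin.
err_h  = abs(one_sided_deriv(np.sin, 0.0, 0.1)  - 1.0)
err_h2 = abs(one_sided_deriv(np.sin, 0.0, 0.05) - 1.0)

assert err_h < 1e-2
# Halving h cuts the error by roughly 4x, confirming 2nd-order accuracy.
assert err_h2 < err_h / 3.0
```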
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A Motivating Example Using sklearn
Step2: Remember that the form of data we will use always is
Step3: Training and Test Datasets
Step4: Tuning the Model
Step5: Below is an example of using the cv_score function for a basic logistic regression model without regularization.
Step6: <div class="span5 alert alert-info">
Step7: <div class="span5 alert alert-info">
Step8: This value of C scored high and performed better than the default model. Cross-validation and grid search is necessary in order to optimize your model. You improve your prediction results and also increase the probability that your model will perform as well with new, unseen data. You lower the probability that your model only did well by chance on a particular set of data.
Step9: Similar values were obtained using built-in tools.
Step10: We then use this best classifier to fit the entire training set. This is done inside the do_classify function which takes a dataframe indf as input. It takes the columns in the list featurenames as the features used to train the classifier. The column targetname sets the target. The classification is done by setting those samples for which targetname has value target1val to the value 1, and all others to 0. We split the dataframe into 80% training and 20% testing by default, standardizing the dataset if desired. (Standardizing a data set involves scaling the data so that it has 0 mean and is described in units of its standard deviation.) We then train the model on the training set using cross-validation. Having obtained the best classifier using cv_optimize, we retrain on the entire training set and calculate the training and testing accuracy, which we print. We return the split data and the trained classifier.
Step11: Logistic Regression
Step12: So we then come up with our rule by identifying
Step13: In the figure here showing the results of the logistic regression, we plot the actual labels of both the training (circles) and test (squares) samples. The 0's (females) are plotted in red, the 1's (males) in blue. We also show the classification boundary, a line (to the resolution of a grid square). Every sample on the red background side of the line will be classified female, and every sample on the blue side, male. Notice that most of the samples are classified well, but there are misclassified people on both sides, as evidenced by leakage of dots or squares of one color onto the side of the other color. Both test and training accuracy are about 92%.
Step14: Discriminative vs Generative Classifier
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import scipy as sp
import matplotlib as mpl
import matplotlib.cm as cm
from matplotlib.colors import ListedColormap
import matplotlib.pyplot as plt
import pandas as pd
pd.set_option('display.width', 500)
pd.set_option('display.max_columns', 100)
pd.set_option('display.notebook_repr_html', True)
import seaborn as sns
sns.set_style("whitegrid")
sns.set_context("poster")
import sklearn.model_selection
c0=sns.color_palette()[0]
c1=sns.color_palette()[1]
c2=sns.color_palette()[2]
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF'])
cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF'])
cm = plt.cm.RdBu
cm_bright = ListedColormap(['#FF0000', '#0000FF'])
def points_plot(ax, Xtr, Xte, ytr, yte, clf, mesh=True, colorscale=cmap_light,
cdiscrete=cmap_bold, alpha=0.1, psize=10, zfunc=False, predicted=False):
h = .02
X=np.concatenate((Xtr, Xte))
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
xx, yy = np.meshgrid(np.linspace(x_min, x_max, 100),
np.linspace(y_min, y_max, 100))
#plt.figure(figsize=(10,6))
if zfunc:
p0 = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 0]
p1 = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 1]
Z=zfunc(p0, p1)
else:
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
ZZ = Z.reshape(xx.shape)
if mesh:
plt.pcolormesh(xx, yy, ZZ, cmap=cmap_light, alpha=alpha, axes=ax)
if predicted:
showtr = clf.predict(Xtr)
showte = clf.predict(Xte)
else:
showtr = ytr
showte = yte
ax.scatter(Xtr[:, 0], Xtr[:, 1], c=showtr-1, cmap=cmap_bold,
s=psize, alpha=alpha,edgecolor="k")
# and testing points
ax.scatter(Xte[:, 0], Xte[:, 1], c=showte-1, cmap=cmap_bold,
alpha=alpha, marker="s", s=psize+10)
ax.set_xlim(xx.min(), xx.max())
ax.set_ylim(yy.min(), yy.max())
return ax,xx,yy
def points_plot_prob(ax, Xtr, Xte, ytr, yte, clf, colorscale=cmap_light,
cdiscrete=cmap_bold, ccolor=cm, psize=10, alpha=0.1):
ax,xx,yy = points_plot(ax, Xtr, Xte, ytr, yte, clf, mesh=False,
colorscale=colorscale, cdiscrete=cdiscrete,
psize=psize, alpha=alpha, predicted=True)
Z = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 1]
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, cmap=ccolor, alpha=.2, axes=ax)
cs2 = plt.contour(xx, yy, Z, cmap=ccolor, alpha=.6, axes=ax)
plt.clabel(cs2, fmt = '%2.1f', colors = 'k', fontsize=14, axes=ax)
return ax
dflog = pd.read_csv("data/01_heights_weights_genders.csv")
dflog.head()
# your turn
plt.scatter(dflog.Weight[dflog.Gender == "Male"], dflog.Height[dflog.Gender == "Male"], alpha=0.2, c="red")
plt.scatter(dflog.Weight[dflog.Gender == "Female"], dflog.Height[dflog.Gender == "Female"], alpha=0.2, c="blue")
plt.xlabel('Weight')
plt.ylabel('Height')
plt.title('Weight vs Height')
plt.show()
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
# Split the data into a training and test set.
Xlr, Xtestlr, ylr, ytestlr = train_test_split(dflog[['Height','Weight']].values,
(dflog.Gender == "Male").values,random_state=5)
clf = LogisticRegression()
# Fit the model on the training data.
clf.fit(Xlr, ylr)
# Print the accuracy from the testing data.
print(accuracy_score(clf.predict(Xtestlr), ytestlr))
from sklearn.model_selection import KFold
from sklearn.metrics import accuracy_score
def cv_score(clf, x, y, score_func=accuracy_score):
result = 0
nfold = 5
for train, test in KFold(nfold).split(x): # split data into train/test groups, 5 times
clf.fit(x[train], y[train]) # fit
result += score_func(clf.predict(x[test]), y[test]) # evaluate score function on held-out data
return result / nfold # average
clf = LogisticRegression()
score = cv_score(clf, Xlr, ylr)
print(score)
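cv_score above averages the held-out score over folds; the fold bookkeeping itself can be sketched with plain numpy (no sklearn needed): split the indices into disjoint test chunks and train on the complement of each.

```python
import numpy as np

def kfold_indices(n, nfold):
    # Split 0..n-1 into nfold contiguous, disjoint test chunks,
    # mirroring what KFold(nfold).split(x) yields (without shuffling).
    idx = np.arange(n)
    for chunk in np.array_split(idx, nfold):
        test = chunk
        train = np.setdiff1d(idx, test)
        yield train, test

folds = list(kfold_indices(10, 5))
assert len(folds) == 5
# Test chunks are disjoint and together cover every sample exactly once.
all_test = np.concatenate([test for _, test in folds])
assert sorted(all_test.tolist()) == list(range(10))
# Train and test never overlap within a fold.
assert all(len(np.intersect1d(tr, te)) == 0 for tr, te in folds)
```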
#the grid of parameters to search over
Cs = [0.001, 0.1, 1, 10, 100]
# your turn
best_score = 0
best_C = 0
for C in Cs:
model = LogisticRegression(C=C)
score = cv_score(model, Xlr, ylr)
print("C =", C, " Score =", score)
if score > best_score:
best_score = score
best_C = C
print("\nThe best C is", best_C, "with a score of", best_score)
# your turn
model = LogisticRegression(C=best_C)
model.fit(Xlr, ylr)
print("The score is", accuracy_score(model.predict(Xtestlr), ytestlr))
# your turn
from sklearn.model_selection import GridSearchCV
model2 = LogisticRegression()
params = {'C': Cs}
model_cv = GridSearchCV(model2, param_grid=params, cv=5, scoring="accuracy")
model_cv.fit(Xlr, ylr)
print("Best params =", model_cv.best_params_)
print("Best score =", model_cv.best_score_)
model2 = model_cv.best_estimator_
model2.fit(Xlr, ylr)
print("The score is", accuracy_score(model2.predict(Xtestlr), ytestlr))
def cv_optimize(clf, parameters, Xtrain, ytrain, n_folds=5):
gs = sklearn.model_selection.GridSearchCV(clf, param_grid=parameters, cv=n_folds)
gs.fit(Xtrain, ytrain)
print("BEST PARAMS", gs.best_params_)
best = gs.best_estimator_
return best
from sklearn.model_selection import train_test_split
def do_classify(clf, parameters, indf, featurenames, targetname, target1val, standardize=False, train_size=0.8):
subdf=indf[featurenames]
if standardize:
subdfstd=(subdf - subdf.mean())/subdf.std()
else:
subdfstd=subdf
X=subdfstd.values
y=(indf[targetname].values==target1val)*1
Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, train_size=train_size)
clf = cv_optimize(clf, parameters, Xtrain, ytrain)
clf=clf.fit(Xtrain, ytrain)
training_accuracy = clf.score(Xtrain, ytrain)
test_accuracy = clf.score(Xtest, ytest)
print("Accuracy on training data: {:0.2f}".format(training_accuracy))
print("Accuracy on test data: {:0.2f}".format(test_accuracy))
return clf, Xtrain, ytrain, Xtest, ytest
h = lambda z: 1. / (1 + np.exp(-z))
zs=np.arange(-5, 5, 0.1)
plt.plot(zs, h(zs), alpha=0.5);
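The 0.5 probability threshold on h(z) is equivalent to thresholding z itself at 0 (h is monotone with h(0) = 0.5), which is why the decision boundary w·x + b = 0 is a straight line in feature space. A quick numeric check of that equivalence and of the sigmoid's symmetry:

```python
import numpy as np

h = lambda z: 1.0 / (1.0 + np.exp(-z))

zs = np.array([-3.0, -0.5, 0.0, 0.5, 3.0])
# h(z) >= 0.5 exactly when z >= 0.
assert np.array_equal(h(zs) >= 0.5, zs >= 0)
assert np.isclose(h(0.0), 0.5)
# Symmetry: h(-z) = 1 - h(z).
assert np.allclose(h(-zs), 1.0 - h(zs))
```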
dflog.head()
clf_l, Xtrain_l, ytrain_l, Xtest_l, ytest_l = do_classify(LogisticRegression(),
{"C": [0.01, 0.1, 1, 10, 100]},
dflog, ['Weight', 'Height'], 'Gender','Male')
plt.figure()
ax=plt.gca()
points_plot(ax, Xtrain_l, Xtest_l, ytrain_l, ytest_l, clf_l, alpha=0.2);
clf_l.predict_proba(Xtest_l)
plt.figure()
ax = plt.gca()
points_plot_prob(ax, Xtrain_l, Xtest_l, ytrain_l, ytest_l, clf_l, psize=20, alpha=0.1);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Setup Game
Step2: Catcher Model
Step3: Test the agent
Step5: Show Playing
|
<ASSISTANT_TASK:>
Python Code:
import os, sys
sys.path.append(os.path.join('..'))
import keras.backend as K
K.set_image_dim_ordering('th') # needs to be set since it defaults to tensorflow now
from keras.models import Sequential
from keras.layers.convolutional import Convolution2D, MaxPooling2D
from keras.layers.core import Flatten
from keras.optimizers import SGD
from x.environment import Catcher
from x.models import KerasModel
from x.memory import ExperienceReplay
from x.agent import DiscreteAgent
num_actions = 3
nb_filters, nb_rows, nb_cols = 32, 3, 3
grid_x, grid_y = 11, 11
epoch = 100
batch = 50
memory_len = 500
gamma = 0.9
epsilon = 0.1
# keras model
keras_model = Sequential()
keras_model.add(Convolution2D(nb_filters, nb_rows, nb_cols, input_shape=(1, grid_x, grid_y), activation='relu', subsample=(2, 2)))
keras_model.add(Convolution2D(nb_filters, nb_rows, nb_cols, activation='relu'))
keras_model.add(Convolution2D(num_actions, nb_rows, nb_cols))
keras_model.add(MaxPooling2D(keras_model.output_shape[-2:]))
keras_model.add(Flatten())
# X wrapper for Keras
model = KerasModel(keras_model)
# Memory
M = ExperienceReplay(memory_length=memory_len)
# Agent
A = DiscreteAgent(model, M)
# SGD optimizer + MSE cost + MAX policy = Q-learning as we know it
A.compile(optimizer=SGD(lr=0.2), loss="mse", policy_rule="max")
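The "SGD + MSE + max policy" combination trains the network toward the familiar Q-learning target r + γ·max_a' Q(s', a'). A tiny tabular sketch of that update rule (toy numbers, independent of the Keras model above):

```python
import numpy as np

gamma, lr = 0.9, 0.5
Q = np.zeros((2, 3))          # toy table: 2 states x 3 actions

def q_update(Q, s, a, r, s_next):
    target = r + gamma * Q[s_next].max()   # bootstrapped Q-learning target
    Q[s, a] += lr * (target - Q[s, a])     # move estimate toward target
    return Q

Q = q_update(Q, s=0, a=1, r=1.0, s_next=1)   # first reward enters the table
assert np.isclose(Q[0, 1], 0.5)              # 0 + 0.5 * (1.0 + 0.9*0 - 0)
Q = q_update(Q, s=1, a=0, r=0.0, s_next=0)   # value flows back via max over s=0
assert np.isclose(Q[1, 0], 0.5 * 0.9 * 0.5)  # 0.5 * (0 + 0.9*0.5 - 0)
```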
# To run an experiment, the Agent needs an Enviroment to iteract with
catcher = Catcher(grid_size=grid_x, output_shape=(1, grid_x, grid_y))
A.learn(catcher, epoch=epoch, batch_size=batch)
out_dir = 'rl_dir'
if not os.path.exists(out_dir): os.mkdir(out_dir)
A.play(catcher, epoch=100, visualize={'filepath': os.path.join(out_dir, 'demo.gif'), 'n_frames': 270, 'gray': True})
from IPython.display import HTML
import base64
with open(os.path.join(out_dir, 'demo.gif'), 'rb') as in_file:
data_str = base64.b64encode(in_file.read()).decode("ascii").replace("\n", "")
data_uri = "data:image/png;base64,{0}".format(data_str)
HTML('<img src="{0}" width="256px" height="256px" />'.format(data_uri))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Random sampling
Step2: Sobol
Step3: Classic Latin hypercube sampling
Step4: Centered Latin hypercube sampling
Step5: Maximin optimized hypercube sampling
Step6: Correlation optimized hypercube sampling
Step7: Ratio optimized hypercube sampling
Step8: Halton sampling
Step9: Hammersly sampling
Step10: Grid sampling
Step11: Pdist boxplot of all methods
|
<ASSISTANT_TASK:>
Python Code:
print(__doc__)
import numpy as np
np.random.seed(123)
import matplotlib.pyplot as plt
from skopt.space import Space
from skopt.sampler import Sobol
from skopt.sampler import Lhs
from skopt.sampler import Halton
from skopt.sampler import Hammersly
from skopt.sampler import Grid
from scipy.spatial.distance import pdist
def plot_searchspace(x, title):
fig, ax = plt.subplots()
plt.plot(np.array(x)[:, 0], np.array(x)[:, 1], 'bo', label='samples')
plt.plot(np.array(x)[:, 0], np.array(x)[:, 1], 'bo', markersize=80, alpha=0.5)
# ax.legend(loc="best", numpoints=1)
ax.set_xlabel("X1")
ax.set_xlim([-5, 10])
ax.set_ylabel("X2")
ax.set_ylim([0, 15])
plt.title(title)
n_samples = 10
space = Space([(-5., 10.), (0., 15.)])
# space.set_transformer("normalize")
x = space.rvs(n_samples)
plot_searchspace(x, "Random samples")
pdist_data = []
x_label = []
pdist_data.append(pdist(x).flatten())
x_label.append("random")
sobol = Sobol()
x = sobol.generate(space.dimensions, n_samples)
plot_searchspace(x, 'Sobol')
pdist_data.append(pdist(x).flatten())
x_label.append("sobol")
lhs = Lhs(lhs_type="classic", criterion=None)
x = lhs.generate(space.dimensions, n_samples)
plot_searchspace(x, 'classic LHS')
pdist_data.append(pdist(x).flatten())
x_label.append("lhs")
lhs = Lhs(lhs_type="centered", criterion=None)
x = lhs.generate(space.dimensions, n_samples)
plot_searchspace(x, 'centered LHS')
pdist_data.append(pdist(x).flatten())
x_label.append("center")
lhs = Lhs(criterion="maximin", iterations=10000)
x = lhs.generate(space.dimensions, n_samples)
plot_searchspace(x, 'maximin LHS')
pdist_data.append(pdist(x).flatten())
x_label.append("maximin")
lhs = Lhs(criterion="correlation", iterations=10000)
x = lhs.generate(space.dimensions, n_samples)
plot_searchspace(x, 'correlation LHS')
pdist_data.append(pdist(x).flatten())
x_label.append("corr")
lhs = Lhs(criterion="ratio", iterations=10000)
x = lhs.generate(space.dimensions, n_samples)
plot_searchspace(x, 'ratio LHS')
pdist_data.append(pdist(x).flatten())
x_label.append("ratio")
halton = Halton()
x = halton.generate(space.dimensions, n_samples)
plot_searchspace(x, 'Halton')
pdist_data.append(pdist(x).flatten())
x_label.append("halton")
hammersly = Hammersly()
x = hammersly.generate(space.dimensions, n_samples)
plot_searchspace(x, 'Hammersly')
pdist_data.append(pdist(x).flatten())
x_label.append("hammersly")
grid = Grid(border="include", use_full_layout=False)
x = grid.generate(space.dimensions, n_samples)
plot_searchspace(x, 'Grid')
pdist_data.append(pdist(x).flatten())
x_label.append("grid")
fig, ax = plt.subplots()
ax.boxplot(pdist_data)
plt.grid(True)
plt.ylabel("pdist")
_ = ax.set_ylim(0, 12)
_ = ax.set_xticklabels(x_label, rotation=45, fontsize=8)
<END_TASK>
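The Halton sampler compared above is built from the van der Corput radical-inverse sequence, with one co-prime base per dimension. A minimal pure-Python sketch of that construction (function names here are illustrative, not part of skopt):

```python
def van_der_corput(n, base=2):
    # Reflect the base-`base` digits of n about the radix point:
    # e.g. n=3 in base 2 is "11", whose radical inverse is 0.11_2 = 0.75.
    q, denom = 0.0, 1.0
    while n:
        n, rem = divmod(n, base)
        denom *= base
        q += rem / denom
    return q

def halton_2d(n_samples):
    # A 2-D Halton set uses co-prime bases (here 2 and 3), one per dimension.
    return [(van_der_corput(i, 2), van_der_corput(i, 3)) for i in range(1, n_samples + 1)]

print(halton_2d(4))
```

Scaling each coordinate into the bounds of the corresponding `Space` dimension would produce points in the same spirit as `halton.generate` above.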
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: SeqRecord objects contain metadata
Step2: The attributes can be modified as needed.
Step3: Unstructured annotations are organised into a dictionary by their name (dictionary key) and can contain basically anything.
Step4: Per-letter annotations are also organised by their name into a dictionary. The annotations must be iterable objects of the same length at the sequence.
Step5: Features are a list of SeqFeature objects (see below).
Step6: Cross-references are a list of strings.
Step7: SeqRecord can be modified (slicing, concatenation, etc.) to produce new records. In general, the letter annotations and features are preserved but annotations and cross-references are not. You must add them manually if needed. Remember to change the id, name, and description of the new record, too.
Step8: Locations can be specified in various ways
Step9: Locations can be queried for their details.
Step10: Specified region can be extracted from full sequence
Step11: Features (or locations, in fact) are also aware of the strand (if applicable).
Step12: Sequence records can be obtained from NCBI's Entrez databases
Step13: ESearch for searching databases
Step14: The output can be stored to a file for later use, which is a cost-effective approach for sequences in particular.
Step15: The read function in Bio.Entrez can parse the output of EUtils tools into a dictionary. It expects input that contains exactly one record.
Step16: EFetch for retrieving records
Step17: Full records can be obtained in the GenBank format in plain text (and in XML).
Step18: SeqIO produces SeqRecord objects
Step19: Since parse returns a generator, you can process one record at a time without storing all records in memory simultaneously.
Step20: The amount of information varies between formats. You should use the format that is the most appropriate for your application.
Step21: The SeqRecord objects can also be written to a file by Bio.SeqIO.
Step22: Large queries with history server and batch retrieve
Step23: The result will contain WebEnv and QueryKey information, which replace the list of IDs in efetch. These pieces of information identify the result set of your search.
Step24: A large number of records should be retrieved in batches, which can be achieved with the retstart and retmax arguments. The former indicates how many records to skip from the beginning and the latter how many records to retrieve.
|
<ASSISTANT_TASK:>
Python Code:
import Bio.SeqRecord as BSR
import Bio.Seq as BS
import Bio.Alphabet as BA
# sequence
seq = BS.Seq('MDGEDVQALVIDNGSGMCKA', BA.generic_protein)
# sequence record
record = BSR.SeqRecord(seq)
print(record)
# get sequence from record
print(record.seq)
# add identifier
record.id = "AC500001"
# add name
record.name = "DUMMY"
# add description
record.description = "Dummy sequence"
print(record)
# add annotation named 'source'
record.annotations['source'] = "Imaginary organism"
print(record)
print(record.annotations['source'])
# add letter annotation named 'structure'
record.letter_annotations['structure'] = '---EEE--EEEEE-EEEEEE'
print(record)
print(record.letter_annotations['structure'])
# no features yet
print(record.features)
# add cross-references
record.dbxrefs = ['DB:dummy', 'Project:imaginary']
print(record)
import Bio.SeqFeature as BSF
feature = BSF.SeqFeature(BSF.FeatureLocation(0,10), type='domain')
feature.qualifiers['evidence'] = 'experimental'
print(feature)
# type
print(feature.type)
# location
print(feature.location)
# qualifiers
for key, value in feature.qualifiers.items():
print(key, ':', value)
# position 5 of sequence (zero-based indexing)
pos = BSF.ExactPosition(5)
print(repr(pos))
# positions can treated as integers
print(repr(pos + 1))
# region 6-10 of sequence (one-based, inclusive)
# which is 5:10 in Python-style indexing (zero-based, end exclusive)
loc1 = BSF.FeatureLocation(5, 10)
print(loc1)
# region 21-25 (inclusive)
loc2 = BSF.FeatureLocation(20, 25)
print(loc2)
# join of two regions (such as gene with two exons)
loc3 = BSF.CompoundLocation([loc1, loc2])
print(loc3)
# start position
print(loc3.start)
# end position
print(loc3.end)
# continuous sub-regions
for p in loc3.parts:
print(p)
# is the position 4 in the region?
print(4 in loc3)
# is the position 8 in the region?
print(8 in loc3)
# is one region within another? (not supported)
try:
print(loc1 in loc3)
except ValueError as e:
print(e)
# full sequence
seq = BS.Seq('GGACTCTTAGCGGCTCACGCACTTTCTTCCGAAGACGGAACCCG', BA.generic_dna)
# two-exon gene within sequence
feature = BSF.SeqFeature(loc3, type='gene', strand=1)
# the nucleotides specified by the feature
print(repr(feature.extract(seq)))
# feature in reverse strand
loc4 = BSF.FeatureLocation(0, 5, strand=-1)
print(loc4)
# strand (1 == forward, -1 == reverse)
print(loc4.strand)
feature = BSF.SeqFeature(loc4, type='gene')
# the five nucleotides specified by the feature
# (note that the output is a reverse complement)
print(repr(feature.extract(seq)))
import Bio.Entrez as BE
# set email globally
BE.email = 'your.name@example.com'
# remember to set your real email address before continuing to the examples below
SET-YOUR-REAL-EMAIL-ADDRESS-BEFORE-COMMENTING-OUT-THIS-LINE
# search "nucleotide" database for the "accD" gene of "opuntia" organism
handle = BE.esearch(db="nucleotide",
term="opuntia[ORGN] accD[gene]",
# return ID list in XML format
rettype="uilist",
retmode="xml")
# save to file
with open('results.xml', 'w') as f:
f.write(handle.read())
# load from file and parse
with open('results.xml') as f:
results = BE.read(f)
# (you could also parse 'handle' directly)
# retrieved IDs
print(results['IdList'])
# the ids of three records from the previous search
ids = results['IdList']
# fetch records from "nucleotide" database
handle = BE.efetch(db="nucleotide",
id=ids,
# return sequences in FASTA format
rettype="fasta",
retmode="text")
# save to file
with open('sequences.fasta', 'w') as f:
f.write(handle.read())
# show file content
with open('sequences.fasta') as f:
print(f.read())
# fetch records from "nucleotide" database
handle = BE.efetch(db="nucleotide",
id=ids,
# return sequences in GenBank plain text format
rettype="gb",
retmode="text")
# save to file
with open('sequences.gb', 'w') as f:
f.write(handle.read())
# show file content
with open('sequences.gb') as f:
print(f.read())
import Bio.SeqIO as BSIO
# parse and process FASTA sequences one at the time
for record in BSIO.parse('sequences.fasta', 'fasta'):
print(record)
print()
# parse records in GenBank format, which contains more information than FASTA
for record in BSIO.parse('sequences.gb', 'gb'):
print(record)
print()
# write 'record' to file named 'record.gb' in a GenBank format
# (note that the first argument is a list of records)
BSIO.write([record], 'record.gb', 'gb')
# search "nucleotide" database for the "accD" gene of "opuntia" organism using history
handle = BE.esearch(db="nucleotide",
term="opuntia[ORGN] accD[gene]",
# use history
usehistory="y",
# return ID list in XML format
rettype="uilist",
retmode="xml")
results = BE.read(handle)
webenv = results['WebEnv']
print(webenv)
querykey = results['QueryKey']
print(querykey)
# fetch records from "nucleotide" database
handle = BE.efetch(db="nucleotide",
webenv=webenv,
query_key=querykey,
rettype="gb",
retmode="text")
# save to file
with open('sequences.gb', 'w') as f:
f.write(handle.read())
# function to query "nucleotide" database and retrieve a specific number of records in batches of a given size
def retrieve(query, retrieve_size, batch_size):
# search using history
handle = BE.esearch(db="nucleotide",
term=query,
usehistory="y",
rettype="uilist",
retmode="xml")
results = BE.read(handle)
webenv = results['WebEnv']
querykey = results['QueryKey']
# adjust maximum size if less than maximum number of hits available
retrieve_size = min(int(results['Count']), retrieve_size)
# split results into batches
for i in range(0, retrieve_size, batch_size):
# fetch records
handle = BE.efetch(db="nucleotide",
webenv=webenv,
query_key=querykey,
# fetch N records starting from i
retstart=i,
retmax=batch_size,
rettype="gb",
retmode="text")
for record in BSIO.parse(handle, 'gb'):
# acts as an iterator and give records out as they come
yield record
# example call to function
# (retrieve 10 records in batches of 5; usually these numbers are much larger)
for record in retrieve("opuntia[ORGN] accD[gene]", 10, 5):
print(record)
print()
<END_TASK>
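The retstart/retmax logic in the retrieve function above is simply fixed-size windowing over a result set. A minimal, library-free sketch of that windowing (names are illustrative):

```python
def batched_ranges(total, batch_size):
    # Yield (retstart, retmax)-style windows that together cover `total` records.
    for start in range(0, total, batch_size):
        yield start, min(batch_size, total - start)

# e.g. 10 records fetched in batches of 4 -> windows (0, 4), (4, 4), (8, 2)
for retstart, retmax in batched_ranges(10, 4):
    print(retstart, retmax)
```

Each window maps directly onto one `efetch` call with `retstart=start` and `retmax` as the window size.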
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Thinking about how machine learning is normally performed, the idea of a train/test split makes sense. Real world systems train on the data they have, and as other data comes in (from customers, sensors, or other sources) the classifier that was trained must predict on fundamentally new data. We can simulate this during training using a train/test split - the test data is a simulation of "future data" which will come into the system during production.
Step2: To get an accurate simulation of the real world, we will shuffle our data then split.
Step3: Now we need to split the data into training and testing. Luckily, this is a common pattern in machine learning and scikit-learn has a prebuilt function to split data into training and testing for you. Here we use 50% of the data as training, and 50% testing. 80% and 20% is another common split, but there are no hard and fast rules. The most important thing is to fairly evaluate your system on data it has not seen during training!
Step4: By evaluating our classifier performance on data that has been seen during training, we could get false confidence in the power of our system. This might lead to putting a system into production which fails at predicting new data! It is much better to use a train/test split in order to properly see how your trained model is doing on new data.
Step5: We can also visualize the correct and failed predictions
|
<ASSISTANT_TASK:>
Python Code:
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier
iris = load_iris()
X, y = iris.data, iris.target
classifier = KNeighborsClassifier()
y
import numpy as np
rng = np.random.RandomState(0)
permutation = rng.permutation(len(X))
X, y = X[permutation], y[permutation]
print(y)
from sklearn.model_selection import train_test_split
train_X, test_X, train_y, test_y = train_test_split(X, y, train_size=0.5, random_state=1999)
print("Labels for training and testing data")
print(train_y)
print(test_y)
classifier.fit(train_X, train_y)
pred_y = classifier.predict(test_X)
print("Fraction Correct")
print(np.sum(pred_y == test_y) / float(len(test_y)))
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
correct_idx = np.where(pred_y == test_y)[0]
print(correct_idx)
incorrect_idx = np.where(pred_y != test_y)[0]
print(incorrect_idx)
# Plot two dimensions
colors = ["darkblue", "darkgreen", "gray"]
for n, color in enumerate(colors):
idx = np.where(test_y == n)[0]
plt.scatter(test_X[idx, 0], test_X[idx, 1], color=color, label="Class %s" % str(n))
plt.scatter(test_X[incorrect_idx, 0], test_X[incorrect_idx, 1], color="darkred")
# Make xlim larger to accommodate legend
plt.xlim(3, 9)
plt.legend(loc=3)
plt.title("Iris Classification results")
plt.show()
<END_TASK>
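The shuffle-then-split idea demonstrated above does not depend on scikit-learn; here is a minimal pure-Python sketch over sample indices (a seeded sketch, not the `train_test_split` implementation):

```python
import random

def train_test_split_indices(n, train_frac=0.5, seed=1999):
    # Shuffle the indices with a fixed seed, then cut once:
    # the held-out part simulates "future data" never seen during training.
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    cut = int(n * train_frac)
    return idx[:cut], idx[cut:]

train_idx, test_idx = train_test_split_indices(150, train_frac=0.5)
print(len(train_idx), len(test_idx))  # 75 75
```

Indexing `X` and `y` with these two disjoint index lists gives the training and test subsets.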
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Base manifold (three dimensional)
Step2: Two dimensional submanifold - Unit sphere
Step3: Christoffel symbols of the first kind
Step4: One dimensional submanifold
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
import sys
from galgebra.printer import Format, xpdf
Format()
from sympy import symbols, sin, pi, latex, Array, permutedims
from galgebra.ga import Ga
from IPython.display import Math
from sympy import cos, sin, symbols
g3coords = (x,y,z) = symbols('x y z')
g3 = Ga('ex ey ez', g = [1,1,1], coords = g3coords,norm=False) # Create g3
(e_x,e_y,e_z) = g3.mv()
Math(r'g =%s' % latex(g3.g))
sp2coords = (theta, phi) = symbols(r'{\color{airforceblue}\theta} {\color{applegreen}\phi}', real = True)
sp2param = [sin(theta)*cos(phi), sin(theta)*sin(phi), cos(theta)]
sp2 = g3.sm(sp2param, sp2coords, norm = False) # submanifold
(etheta, ephi) = sp2.mv() # sp2 basis vectors
(rtheta, rphi) = sp2.mvr() # sp2 reciprocal basis vectors
sp2grad = sp2.grad
sph_map = [1, theta, phi] # Coordinate map for sphere of r = 1
Math(r'(\theta,\phi)\rightarrow (r,\theta,\phi) = %s' % latex(sph_map))
Math(r'e_\theta \cdot e_\theta = %s' % (etheta|etheta))
Math(r'e_\phi \cdot e_\phi = %s' % (ephi|ephi))
Math('g = %s' % latex(sp2.g))
Math(r'g^{-1} = %s' % latex(sp2.g_inv))
Cf1 = sp2.Christoffel_symbols(mode=1)
Cf1 = permutedims(Array(Cf1), (2, 0, 1))
Math(r'\Gamma_{1, \alpha, \beta} = %s \quad \Gamma_{2, \alpha, \beta} = %s ' % (latex(Cf1[0, :, :]), latex(Cf1[1, :, :])))
Cf2 = sp2.Christoffel_symbols(mode=2)
Cf2 = permutedims(Array(Cf2), (2, 0, 1))
Math(r'\Gamma^{1}_{\phantom{1,}\alpha, \beta} = %s \quad \Gamma^{2}_{\phantom{2,}\alpha, \beta} = %s ' % (latex(Cf2[0, :, :]), latex(Cf2[1, :, :])))
F = sp2.mv('F','vector',f=True) #scalar function
f = sp2.mv('f','scalar',f=True) #vector function
Math(r'\nabla = %s' % sp2grad)
Math(r'\nabla f = %s' % (sp2.grad * f))
Math(r'F = %s' % F)
Math(r'\nabla F = %s' % (sp2.grad * F))
cir_th = phi = symbols(r'{\color{atomictangerine}\phi}',real = True)
cir_map = [pi/8, phi]
Math(r'(\phi)\rightarrow (\theta,\phi) = %s' % latex(cir_map))
cir1d = sp2.sm( cir_map , (cir_th,), norm = False) # submanifold
cir1dgrad = cir1d.grad
(ephi) = cir1d.mv()
Math(r'e_\phi \cdot e_\phi = %s' % latex(ephi[0] | ephi[0]))
Math('g = %s' % latex(cir1d.g))
h = cir1d.mv('h','scalar',f= True)
H = cir1d.mv('H','vector',f= True)
Math(r'\nabla = %s' % cir1dgrad)
Math(r'\nabla h = %s' %(cir1d.grad * h).simplify())
Math('H = %s' % H)
Math(r'\nabla H = %s' % (cir1d.grad * H).simplify())
<END_TASK>
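For cross-checking the symbolic output above: the induced metric on the unit sphere parametrised by (θ, φ) and its non-zero Christoffel symbols of the second kind are the standard results

```latex
g = \begin{pmatrix} 1 & 0 \\ 0 & \sin^2\theta \end{pmatrix}, \qquad
\Gamma^{\theta}_{\phantom{\theta}\phi\phi} = -\sin\theta\cos\theta, \qquad
\Gamma^{\phi}_{\phantom{\phi}\theta\phi} = \Gamma^{\phi}_{\phantom{\phi}\phi\theta} = \cot\theta ,
```

which should agree with `sp2.g` and the `Christoffel_symbols(mode=2)` output computed above.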
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data import
Step2: Fluctuation assay
Step3: Figure 5 - Loss of heterozygosity
|
<ASSISTANT_TASK:>
Python Code:
# Load external dependencies
from setup import *
# Load internal dependencies
import config,plot,utils
%load_ext autoreload
%autoreload 2
%matplotlib inline
# Load data
loh_length_df = pd.read_csv(dir_data+'seq/loh/homozygosity_length.csv')
loh_length_df = loh_length_df.set_index("50kb_bin_center")
loh_length_df = loh_length_df.reindex(columns=['HU','RM','YPD'])
loh_length_df.head()
# Read csv file containing the competition assay data
loh_fluctuation_df = pd.read_csv(dir_data+'fluctuation/fluctuation_assay_rates.csv')
loh_fluctuation_df = loh_fluctuation_df.sort_values('background', ascending=False)
loh_fluctuation_df = loh_fluctuation_df.groupby(['background','environment'],sort=False)[['mean_LOH_rate','lower_LOH_rate','upper_LOH_rate']].mean()
loh_fluctuation_df = loh_fluctuation_df.loc[['WA/WA','NA/NA','WA/NA']].unstack('background')
loh_fluctuation_df = loh_fluctuation_df.loc[['HU','RM','YPD']]
loh_fluctuation_df
fig = plt.figure(figsize=(4,6))
grid = gridspec.GridSpec(nrows=3, ncols=2, height_ratios=[15, 7, 5], hspace=0.7, wspace=0.3)
gs = {}
gs['length'] = gridspec.GridSpecFromSubplotSpec(1, 1, subplot_spec=grid[0,0])
gs['fluctuation'] = gridspec.GridSpecFromSubplotSpec(1, 1, subplot_spec=grid[0,1])
gs[('loh','WAxNA_F12_1_HU_3')] = gridspec.GridSpecFromSubplotSpec(7, 1, subplot_spec=grid[1:2,:], hspace=0)
gs[('loh','WAxNA_F12_2_RM_1')] = gridspec.GridSpecFromSubplotSpec(5, 1, subplot_spec=grid[2:3,:], hspace=0)
### Left panel ###
ax = plt.subplot(gs['length'][:])
ax.text(-0.185, 1.055, 'A', transform=ax.transAxes,
fontsize=9, fontweight='bold', va='top', ha='right')
data = loh_length_df.rename(columns=config.selection['short_label'])
kwargs = {
'color': [config.selection['color'][e] for e in loh_length_df.columns]
}
plot.loh_length(data, ax, **kwargs)
### Right panel ###
ax = plt.subplot(gs['fluctuation'][:])
ax.text(-0.2, 1.05, 'B', transform=ax.transAxes,
fontsize=9, fontweight='bold', va='top', ha='right')
data = loh_fluctuation_df['mean_LOH_rate']
kwargs = {
'yerr': loh_fluctuation_df[['lower_LOH_rate','upper_LOH_rate']].T.values,
'color': [config.background['color'][b] for b in loh_fluctuation_df['mean_LOH_rate'].columns]
}
plot.loh_fluctuation(data, ax, **kwargs)
# Axes limits
for ax in fig.get_axes():
ax.xaxis.label.set_size(6)
ax.yaxis.label.set_size(6)
ax.tick_params(axis='both', which='major', size=3, labelsize=6)
ax.tick_params(axis='both', which='minor', size=2, labelsize=4)
plot.save_figure(dir_paper+'figures/figure5/figure5')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Part 0
Step2: Part 1
Step3: Use the following example from the scikit-learn documentation to plot the classification regions for different pairs of input features. Modify the necessary code lines to plot our training data over the decision regions.
Step4: Run the above code again, do you obtain the same accuracy?
Step5: Part 3
Step6: Analyze the final ensemble performance as a function of the number of learners. Average the result over 20 or more different runs to obtain statistically significant results (note that the above accuracy changes if you run the code again).
Step7: 3.2. Adaboost
Step8: Unlike the BaggingClassifier() method, AdaBoostClassifier() lets you analyze the evolution of the error without having to train the ensemble for different numbers of learners. For this task, you can use the classifier method .staged_score(), which returns the evolution of the ensemble accuracy. Note that it returns this information as a generator object, so you have to iterate over it to access each element.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
# Initialize the random generator seed to compare results
np.random.seed(0)
iris = datasets.load_iris()
X = iris.data # All input features are used
Y = iris.target
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=.4)
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
###########################################################
# TODO: Replace <FILL IN> with appropriate code
###########################################################
from sklearn import tree
clf_tree = # <FILL IN>
acc_tree= # <FILL IN>
print("The test accuracy of the decision tree is %2.2f" %(100*acc_tree))
###########################################################
# TEST CELL
###########################################################
from test_helper import Test
# TEST accuracy values
Test.assertEquals(np.round(acc_tree, 2), 0.95 , 'incorrect result: The value of C_opt is uncorrect')
###########################################################
# TODO: Replace <FILL IN> with appropriate code
###########################################################
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestClassifier
rang_n_trees=np.arange(1,10)
tuned_parameters = [{'n_estimators': rang_n_trees}]
nfold = 10
clf_RF = #<FILL IN>
n_trees_opt = #<FILL IN>
acc_RF = #<FILL IN>
print "The number of selected trees is " + str(n_trees_opt)
print("The test accuracy of the RF is %2.2f" %(100*acc_RF))
###########################################################
# TODO: Replace <FILL IN> with appropriate code
###########################################################
# Initialize the random generator seed to compare results
np.random.seed(0)
print 'This can take a some minutes, be patient'
# Create RF classifier object with CV
clf_RF = # <FILL IN>
acc_RF_vector=[]
n_trees_vector=[]
for run in np.arange(50):
# For each run, train it, compute its accuracy and examine the number of optimal trees
clf_RF.# <FILL IN>
acc = # <FILL IN>
acc_RF_vector.append(acc)
n_trees = # <FILL IN>
n_trees_vector.append(n_trees)
# Compute averaged accuracies and number of used trees
mean_acc_RF = # <FILL IN>
std_acc_RF = # <FILL IN>
mean_n_trees = # <FILL IN>
std_n_trees = # <FILL IN>
# Print the results
print('Averaged accuracy for RF classifier is %2.2f +/- %2.2f '%(100*mean_acc_RF, 100*std_acc_RF))
print('Averaged number of selected trees is %2.2f +/- %2.2f '%(mean_n_trees, std_n_trees))
###########################################################
# TEST CELL
###########################################################
from test_helper import Test
Test.assertEquals(np.round(mean_acc_RF, 1), 0.9 , 'incorrect result: The value of mean_acc_RF is uncorrect')
Test.assertEquals(np.round(std_acc_RF, 2), 0.03 , 'incorrect result: The value of std_acc_RF is uncorrect')
Test.assertEquals(np.round(mean_n_trees, 1), 4.2 , 'incorrect result: The value of mean_n_trees is uncorrect')
Test.assertEquals(np.round(std_n_trees, 1), 2.0 , 'incorrect result: The value of std_n_trees is uncorrect')
from sklearn.ensemble import BaggingClassifier
from sklearn import tree
base_learner = tree.DecisionTreeClassifier(max_depth=1)
bagging = BaggingClassifier(base_learner, n_estimators = 10, max_samples=0.5, max_features = 0.5)
bagging.fit(X_train, Y_train)
acc_test = bagging.score(X_test, Y_test)
print('Accuracy of bagged ensemble is %2.2f '%(100*acc_test))
###########################################################
# TODO: Replace <FILL IN> with appropriate code
###########################################################
# Initialize the random generator seed to test results
np.random.seed(0)
acc_test_evol = []
rang_n_learners = range(1,50,2)
for n_learners in rang_n_learners:
acc_test_run=[]
for run in range(50):
bagging = # <FILL IN>
acc = # <FILL IN>
acc_test_run.append(acc)
acc_test_evol.append(np.mean(acc_test_run))
# Ploting results
plt.figure()
plt.plot(rang_n_learners,acc_test_evol)
plt.xlabel('Number of learners')
plt.ylabel('Accuracy')
plt.title('Evolution of the bagged ensemble accuracy with the number of learners ')
plt.show()
###########################################################
# TEST CELL
###########################################################
from test_helper import Test
# TEST accuracy values
Test.assertEquals(np.round(acc_test_evol[-1], 2), 0.94 , 'incorrect result: The value final of acc_test_evol is uncorrect')
###########################################################
# TODO: Replace <FILL IN> with appropriate code
###########################################################
# Initialize the random generator seed to test results
np.random.seed(0)
from sklearn.ensemble import AdaBoostClassifier
base_learner = tree.DecisionTreeClassifier(max_depth=1)
# Train a discrete Adaboost classifier and obtain its accuracy
AB_D = #<FILL IN>
acc_AB_D = # <FILL IN>
# Train a real Adaboost classifier and obtain its accuracy
AB_R = # <FILL IN>
acc_AB_R = # <FILL IN>
print('Accuracy of discrete adaboost ensemble is %2.2f '%(100*acc_AB_D))
print('Accuracy of real adaboost ensemble is %2.2f '%(100*acc_AB_R))
###########################################################
# TEST CELL
###########################################################
from test_helper import Test
# TEST accuracy values
Test.assertEquals(np.round(acc_AB_D, 2), 0.95 , 'incorrect result: The value of acc_AB_D is uncorrect')
Test.assertEquals(np.round(acc_AB_R, 2), 0.88 , 'incorrect result: The value of acc_AB_R is uncorrect')
acc_AB_D_evol=[acc for acc in AB_D.staged_score(X_test, Y_test)]
acc_AB_R_evol=[acc for acc in AB_R.staged_score(X_test, Y_test)]
# Ploting results
rang_n_learners=np.arange(50)+1
plt.figure()
plt.subplot(211)
plt.plot(rang_n_learners,acc_AB_D_evol)
plt.xlabel('Number of learners')
plt.ylabel('Accuracy')
plt.title('Discrete AB accuracy')
plt.subplot(212)
plt.plot(rang_n_learners,acc_AB_R_evol)
plt.xlabel('Number of learners')
plt.ylabel('Accuracy')
plt.title('Real AB accuracy')
plt.show()
<END_TASK>
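At prediction time, a bagged ensemble like the one above combines its learners' outputs; here is a minimal pure-Python sketch of a majority-vote combination step (illustrative only — scikit-learn's BaggingClassifier averages predicted probabilities when the base learner supports them):

```python
from collections import Counter

def majority_vote(per_learner_predictions):
    # Rows: one prediction list per learner; columns: samples.
    # The ensemble label for each sample is the most common vote.
    n_samples = len(per_learner_predictions[0])
    combined = []
    for j in range(n_samples):
        votes = Counter(row[j] for row in per_learner_predictions)
        combined.append(votes.most_common(1)[0][0])
    return combined

print(majority_vote([[0, 1, 1], [0, 0, 1], [1, 1, 1]]))  # [0, 1, 1]
```

Averaging out individual learners' mistakes this way is what makes the ensemble accuracy grow with the number of learners in the plots above.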
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load a raw miniSASP file
Step2: Get sun_intensities
Step3: Plot as a function of Altitude
|
<ASSISTANT_TASK:>
Python Code:
from atmPy.instruments.miniSASP import miniSASP
from atmPy.tools import plt_tools
from atmPy.instruments.piccolo import piccolo
%matplotlib inline
plt_tools.setRcParams(plt)
ms_raw = miniSASP.read_csv('./data/miniSASP_raw.txt')
ms_raw.data.PhotoAsh.plot()
sun_intensities = ms_raw.find_peaks()
ax = sun_intensities.data['550.4'].plot(marker = 'o', linestyle = '')
ax.set_ylim((15,30))
picco = piccolo.read_csv('./data/piccolo.log')
sun_intensities = sun_intensities.add_sun_elevetion(picco)
axs = sun_intensities.plot(offset=[3.29,3.44,3.99,4.16],
airmassfct=True,
move_max = False)
for ax in axs:
ax.set_xlim((0.1,0.6))
<END_TASK>
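The airmassfct option used in the plot above weights intensities by the optical path length through the atmosphere. Under a plane-parallel-atmosphere assumption (a common first-order simplification; atmPy's actual correction may differ), the airmass is just the reciprocal of the sine of the sun elevation:

```python
import math

def plane_parallel_airmass(elevation_deg):
    # Airmass ~ 1 / sin(elevation): 1 with the sun at the zenith,
    # growing without bound as the sun approaches the horizon.
    return 1.0 / math.sin(math.radians(elevation_deg))

print(plane_parallel_airmass(90.0))  # 1.0
print(plane_parallel_airmass(30.0))  # ~2.0
```

This approximation breaks down at very low elevations, where refraction and Earth curvature matter.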
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We first define the TensorFlow graph and create some data.
Step2: Export TensorFlow SavedModel
Step3: Deploy Cluster Serving
Step4: We config the model path in config.yaml to following (the detail of config is at Cluster Serving Configuration)
Step5: Start Cluster Serving
Step6: After configuration, start Cluster Serving by cluster-serving-start (the detail is at Cluster Serving Programming Guide)
Step7: Prediction using Cluster Serving
|
<ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
tf.__version__
g = tf.Graph()
with g.as_default():
# Graph Inputs
features = tf.placeholder(dtype=tf.float32,
shape=[None, 2], name='features')
targets = tf.placeholder(dtype=tf.float32,
shape=[None, 1], name='targets')
# Model Parameters
weights = tf.Variable(tf.zeros(shape=[2, 1],
dtype=tf.float32), name='weights')
bias = tf.Variable([[0.]], dtype=tf.float32, name='bias')
# Forward Pass
linear = tf.add(tf.matmul(features, weights), bias, name='linear')
ones = tf.ones(shape=tf.shape(linear))
zeros = tf.zeros(shape=tf.shape(linear))
prediction = tf.where(condition=tf.less(linear, 0.),
x=zeros,
y=ones,
name='prediction')
# Backward Pass
errors = targets - prediction
weight_update = tf.assign_add(weights,
tf.reshape(errors * features, (2, 1)),
name='weight_update')
bias_update = tf.assign_add(bias, errors,
name='bias_update')
train = tf.group(weight_update, bias_update, name='train')
saver = tf.train.Saver(name='saver')
import numpy as np
x_train, y_train = np.array([[1,2],[3,4],[1,3]]), np.array([1,2,1])
x_train.shape, y_train.shape
with tf.Session(graph=g) as sess:
sess.run(tf.global_variables_initializer())
for epoch in range(5):
for example, target in zip(x_train, y_train):
feed_dict = {'features:0': example.reshape(-1, 2),
'targets:0': target.reshape(-1, 1)}
_ = sess.run(['train'], feed_dict=feed_dict)
w, b = sess.run(['weights:0', 'bias:0'])
print('Model parameters:\n')
print('Weights:\n', w)
print('Bias:', b)
saver.save(sess, save_path='perceptron')
pred = sess.run('prediction:0', feed_dict={features: x_train})
print(pred)
# in this session, save the model to savedModel format
inputs = dict([(features.name, features)])
outputs = dict([(prediction.name, prediction)])
inputs, outputs
tf.saved_model.simple_save(sess, "/tmp/mlp_tf1", inputs, outputs)
! pip install analytics-zoo-serving
import os
! mkdir cluster-serving
os.chdir('cluster-serving')
! cluster-serving-init
! tail wget-log
# if you encounter a slow download issue like the one above, you can use the following command to download instead
# ! wget https://repo1.maven.org/maven2/com/intel/analytics/zoo/analytics-zoo-bigdl_0.12.1-spark_2.4.3/0.9.0/analytics-zoo-bigdl_0.12.1-spark_2.4.3-0.9.0-serving.jar
# if you used wget to download, or see "analytics-zoo-xxx-serving.jar" after "ls", run mv *serving.jar zoo.jar once the download has finished.
# After initialization finished, check the directory
! ls
# Call mv *serving.jar zoo.jar as mentioned above
! mv *serving.jar zoo.jar
! ls
## Analytics-zoo Cluster Serving
model:
# model path must be provided
path: /tmp/mlp_tf1
! head config.yaml
! $FLINK_HOME/bin/start-cluster.sh
! cluster-serving-start
from zoo.serving.client import InputQueue, OutputQueue
input_queue = InputQueue()
# Use async api to put and get, you have pass a name arg and use the name to get
arr = np.array([1,2])
input_queue.enqueue('my-input', t=arr)
output_queue = OutputQueue()
prediction = output_queue.query('my-input')
# Use sync api to predict, this will block until the result is get or timeout
prediction = input_queue.predict(arr)
prediction
<END_TASK>
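The perceptron update encoded in the TF graph above (w += error·x, b += error, with a step-function prediction) can be written directly in pure Python; a minimal sketch for binary 0/1 targets (illustrative, not the TF session code):

```python
def perceptron_train(X, y, epochs=5):
    # Mirrors the graph ops: linear = w.x + b, prediction = step(linear),
    # weight_update adds error*x, bias_update adds error.
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for x, target in zip(X, y):
            linear = sum(wi * xi for wi, xi in zip(w, x)) + b
            pred = 0.0 if linear < 0.0 else 1.0
            err = target - pred
            w = [wi + err * xi for wi, xi in zip(w, x)]
            b += err
    return w, b

w, b = perceptron_train([[2.0, 0.0], [-2.0, 0.0]], [1.0, 0.0])
print(w, b)
```

Note that a single perceptron can only emit 0 or 1, so it only converges on linearly separable binary data.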
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: It is thus very easy to program this function in Python. However, this has nothing to do with machine learning; it is the classical programming paradigm.
Step2: <h2>Setting up the neural network as nn</h2>
Step3: <h2>Predicting values</h2>
Step4: <h3 style="color
Step5: In this single-layer neural network, the formula can be<br>
Step6: <h2>Predicting values</h2>
Step7: <h3 style="color
Step8: But what do the parameters in the layers look like?
Step9: All parameters of a layer can be read via its methods<br>
Step10: <h2>Reading out the values for layer b</h2>
Step11: <h2>Reading out the values for layer c</h2>
Step12: <h2>Setting up a neural network nn3 and checking
Step13: Why are there values? Why are the bias values 0?
|
<ASSISTANT_TASK:>
Python Code:
# Definition of the conversion function umrechnung
def umrechnung(C):
    F = #Ihr Code hier#
    return #Ihr Code hier#
#
# Call the function defined above with the different values (8, 12.5, 23, 44.6).
# You should obtain the results (46.4, 54.5, 73.4, 112.28).
# Call - replace #Wert# with your number - run the cell
#
umrechnung(#Wert#)
# Import numpy as np
import numpy as np
# Import tensorflow as tf
# Import the Sequential model
# from keras.models import Sequential
# Import the Dense layer
# from keras.layers import Dense
# Import matplotlib.pyplot as plt
# Set the magic command: %matplotlib inline
# Pass the values to learn and their conversion results to numpy arrays
celsius_i = np.array([-40, -10, 0, 8, 12.5, 15, 22, 38, 49.5], dtype=float)
fahrenheit_o = np.array([-40, 14, 32, 46.4, 54.5, 59, 71.6, 100.4, 121.1], dtype=float)
# Print the values to learn
print("The values to learn are:")
for i,c in enumerate(celsius_i):
    print("{} degrees Celsius = {} degrees Fahrenheit".format(c, fahrenheit_o[i]))
# Initialize a neural network nn with Sequential()
# nn = Sequential()
# Add a Dense layer with one neuron, (units=1,input_dim=1)
#
# How many parameters does this layer have?
#
# nn.add(Dense(units=1, input_dim=1))
# We want to use the adam optimizer with a specific learning rate;
# with optimizer='adam' only the default values would be used.
# Definition of the optimizer, passing a learning rate of 0.1
optimizer_adam=tf.keras.optimizers.Adam(0.1)
# Compile the model
nn.compile(optimizer=optimizer_adam, loss='mean_squared_error')
# Check the configuration
# nn.summary()
# How many parameters are trainable?
# Train the model with 1000 epochs
epoch_num = #Wert#
history = nn.fit(celsius_i,fahrenheit_o,epochs=epoch_num, verbose=1 )
print("Training finished")
# Plot how the loss developed
plt.xlabel('Epoch Number')
plt.ylabel("Loss Magnitude")
plt.plot(history.history['loss'])
# Prediction for the value 100.0
ergebnis = nn.predict([#Value#])
print(ergebnis)
# Prediction for the value 85.5
ergebnis = nn.predict([#Value#])
print(ergebnis)
# Print the layer parameters that were learned
print("These are the layer variables: {}".format(nn.get_weights()))
# expected result [1.8, 32]
# Initialize the neural network nn2
# nn2=Sequential()
#
# Add the layers (assigning names)
#
# nn2.add(Dense(units=4, input_dim=1, name='a'))
# nn2.add(Dense(units=4, name='b'))
# nn2.add(Dense(units=1, name='c'))
# Think about it:
# How many parameters does layer a have?
# How many parameters does layer b have?
# How many parameters does layer c have?
# How many parameters does the model have in total?
#
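The parameter questions above can be worked out by hand: a Dense layer has one weight per input-neuron pair plus one bias per neuron. A small sketch (plain Python, no Keras required) of that bookkeeping for nn2:

```python
# Parameters of a Dense layer: (inputs x units) weights + one bias per unit
def dense_params(n_inputs, n_units):
    return n_inputs * n_units + n_units

layer_a = dense_params(1, 4)  # 1 input  -> 4 neurons:  8 parameters
layer_b = dense_params(4, 4)  # 4 inputs -> 4 neurons: 20 parameters
layer_c = dense_params(4, 1)  # 4 inputs -> 1 neuron:   5 parameters
total = layer_a + layer_b + layer_c
print(layer_a, layer_b, layer_c, total)  # 8 20 5 33
```

These are the per-layer and total counts that nn2.summary() should report.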
# Define the optimizer with a learning rate of 0.1
optimizer_a=tf.keras.optimizers.Adam(0.1)
# Compile the model
nn2.compile(optimizer=optimizer_a, loss='mean_squared_error')
#
# Check the configuration
# How many parameters does layer a have?
# How many parameters does layer b have?
# How many parameters does layer c have?
#
# nn2.summary()
# Train the model nn2 for 1000 epochs
epoch_num = #Value#
history = nn2.fit(celsius_i,fahrenheit_o,epochs=epoch_num, verbose=1 )
print("Training is finished")
# Prediction for the value 100.0
ergebnis = nn2.predict([#Value#])
print(ergebnis)
# Prediction for the value 85.5
ergebnis = nn2.predict([#Value#])
print(ergebnis)
print("Model nn2 determines that 100 degrees Celsius are: {} degrees Fahrenheit.".format(nn2.predict([100.0])))
# Print the layers
# nn2.layers
nn2.layers
# Read layers 1-3 into variables - index 0-2
input1 = nn2.layers[0]
input2 = nn2.layers[1]
input3 = nn2.layers[2]
# Read the weights and bias values from layer a - it has index 0
weights, biases = input1.get_weights()
# Print the weights in layer a
weights
# Print the bias values in layer a
biases
# Read the weights and bias values from layer b - it has index 1
# and was read into the variable input2
weights1, biases1 = input2.get_weights()
# Print the weights in layer b
weights1
# Print the bias values in layer b
biases1
# Read the weights and bias values from layer c - it has index 2 and
# was read into the variable input3
weights2, biases2 = input3.get_weights()
# Print the weights in layer c
weights2
# Print the bias values in layer c
biases2
print("These are the parameters in layer a: {}".format(input1.get_weights()))
print("These are the parameters in layer b: {}".format(input2.get_weights()))
print("These are the parameters in layer c: {}".format(input3.get_weights()))
#
# Now try to determine the values of the layer in model nn
# from our exercise 10!
#
# Initialize the model nn3 as Sequential
# nn3 = Sequential()
# Add the layers
#nn3.add(Dense(units=16, input_dim=1))
#nn3.add(Dense(units=8))
#nn3.add(Dense(units=1))
#
# Mentally work through the parameter counts once more
#
# Check the configuration
# nn3.summary()
# Define the optimizer with a learning rate of 0.1
optimizer_a=tf.keras.optimizers.Adam(0.1)
# Compile the model
nn3.compile(optimizer=optimizer_a, loss='mean_squared_error')
# Print the layers
nn3.layers
# Read layers 1-3 into variables - index 0-2
inp1 = nn3.layers[0]
inp2 = nn3.layers[1]
inp3 = nn3.layers[2]
print("These are the parameters in layer 0: {}".format(inp1.get_weights()))
print("These are the parameters in layer 1: {}".format(inp2.get_weights()))
print("These are the parameters in layer 2: {}".format(inp3.get_weights()))
# Train the model nn3 for 800 epochs
epoch_num = #Value#
history = nn3.fit(celsius_i,fahrenheit_o,epochs=epoch_num, verbose=1 )
print("Training is finished")
# Plot how the loss developed
plt.xlabel('Epoch Number')
plt.ylabel("Loss Magnitude")
plt.plot(history.history['loss'])
# Prediction for the value 100.0
ergebnis = nn3.predict([#Value#])
print(ergebnis)
#
# Now try to print the parameters of model nn3's layers
# after training!
#
i1 = nn3.layers[0]
i2 = nn3.layers[1]
i3 = nn3.layers[2]
print("These are the parameters in layer 2 after training: {}".format(i3.get_weights()))
#
# How did the bias value change after training?
#
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Get the mutations that are segregating in each population
Step2: Look at the raw data in the first element of each list
Step3: Let's make that nicer, and convert each list of dictionaries to a Pandas DataFrame object
Step4: The columns are
Step5: We can also take views of gametes
Step6: The format is really ugly. Each gamete is a dict with two elements
Step7: OK, let's clean that up. We'll focus on the selected mutations for each individual, and turn everything into a pd.DataFrame.
Step8: We now have a list of lists stored in 'smuts'.
Step9: That's much better. We can use the index to figure out which individual has which mutations, and their effect sizes, etc.
Step10: Again, the format here is ugly. Each diploid view is a dictionary
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
import fwdpy as fp
import pandas as pd
from background_selection_setup import *
mutations = [fp.view_mutations(i) for i in pops]
for i in mutations:
print(i[0])
mutations2 = [pd.DataFrame(i) for i in mutations]
for i in mutations2:
print(i.head())
nmuts = [i[i.neutral == True] for i in mutations2]
for i in nmuts:
print(i.head())
gametes = [fp.view_gametes(i) for i in pops]
for i in gametes:
print(i[0])
smuts = [i['selected'] for i in gametes[0]]
smutsdf = pd.DataFrame()
ind=0
##Add the non-empty individuals to the df
for i in smuts:
if len(i)>0:
smutsdf = pd.concat([smutsdf,pd.DataFrame(i,index=[ind]*len(i))])
ind += 1
smutsdf.head()
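The loop above records which individual each mutation came from by concatenating per-individual frames with a repeated index value. A toy sketch of the same pattern (the mutation dicts here are made up for illustration):

```python
import pandas as pd

# Hypothetical per-individual mutation lists; individual 1 carries none
muts_per_ind = [[{'pos': 0.1, 's': -0.01}],
                [],
                [{'pos': 0.5, 's': -0.02}, {'pos': 0.9, 's': -0.005}]]

muts_df = pd.DataFrame()
for ind, muts in enumerate(muts_per_ind):
    if len(muts) > 0:
        # Repeat the individual's id so each of its mutations shares that index
        muts_df = pd.concat([muts_df, pd.DataFrame(muts, index=[ind] * len(muts))])
print(muts_df)
```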
dips = [fp.view_diploids(i,[0,1]) for i in pops]
for key in dips[0][0]:
print(key)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load and prepare the data
Step2: Checking out the data
Step3: Dummy variables
Step4: Scaling target variables
Step5: Splitting the data into training, testing, and validation sets
Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
Step7: Time to build the network
Step8: Training the network
Step9: Check out your predictions
Step10: Thinking about your results
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
rides[:24*10].plot(x='dteday', y='cnt')
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
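pd.get_dummies expands a categorical column into one indicator column per category, which is what the loop above does for each field in dummy_fields. A toy sketch (the values are illustrative):

```python
import pandas as pd

toy = pd.DataFrame({'season': [1, 2, 1, 4]})
dummies = pd.get_dummies(toy['season'], prefix='season')
print(dummies)  # columns season_1, season_2, season_4 with 0/1 indicators
```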
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
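Storing each column's mean and std in scaled_features matters because predictions later need to be converted back to the original units (predictions * std + mean). A quick standalone check of that round trip:

```python
import numpy as np

values = np.array([10.0, 20.0, 30.0])
mean, std = values.mean(), values.std()
scaled = (values - mean) / std       # what the loop above does per column
recovered = scaled * std + mean      # how scaled predictions are un-scaled later
print(np.allclose(recovered, values))  # True
```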
# Save the last 21 days
test_data = data[-21*24:]
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
# Hold out the last 60 days of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
def sigmoid(x):
return 1 / ( 1 + np.exp(-x) )
def sigmoid_derivative(x):
return x * (1-x)
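Note that sigmoid_derivative is written in terms of the sigmoid's *output* (y * (1 - y) with y = sigmoid(x)), which is why the backward pass below calls it on hidden_outputs rather than on the raw inputs. A quick numerical sanity check of that identity:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_derivative(y):
    # y is assumed to already be sigmoid(x)
    return y * (1 - y)

x, h = 0.7, 1e-6
numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)  # central difference
analytic = sigmoid_derivative(sigmoid(x))              # identity s'(x) = s(x)(1 - s(x))
print(abs(numeric - analytic))  # effectively zero
```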
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.input_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes**-0.5,
(self.output_nodes, self.hidden_nodes))
self.lr = learning_rate
#### Set this to your implemented sigmoid function ####
# Activation function is the sigmoid function
self.activation_function = sigmoid
self.activation_derivative = sigmoid_derivative
def train(self, inputs_list, targets_list):
# Convert inputs list to 2d array
inputs = np.array(inputs_list, ndmin=2).T
targets = np.array(targets_list, ndmin=2).T
#### Implement the forward pass here ####
### Forward pass ###
# TODO: Hidden layer
hidden_inputs = np.dot( self.weights_input_to_hidden, inputs )
hidden_outputs = self.activation_function( hidden_inputs )
# print("hidden_inputs=",hidden_inputs.shape) #10,1
# print("hidden_outputs=",hidden_outputs.shape) #10,1
# TODO: Output layer
final_inputs = np.dot( self.weights_hidden_to_output, hidden_outputs )
final_outputs = final_inputs
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error
#print("target=",targets.shape)
#print("final_output=",final_outputs.shape)
output_errors = targets - final_outputs
#print("output_errors=",output_errors.shape)
# TODO: Backpropagated error
hidden_errors = np.dot( self.weights_hidden_to_output.T, output_errors )
# print("hidden_errors=",hidden_errors.shape) # 10,1
hidden_grad = self.activation_derivative(hidden_outputs)
# TODO: Update the weights
# print("hidden_grad=",hidden_grad.shape) #10, 1
# print("output_errors=",output_errors.shape) #1,1
# print(self.weights_hidden_to_output.shape) #1,10
# print(self.weights_input_to_hidden.shape)#10,56
self.weights_hidden_to_output += self.lr * hidden_outputs.T * output_errors
self.weights_input_to_hidden += self.lr * np.dot(hidden_errors * hidden_grad, inputs.T)
def run(self, inputs_list):
# Run a forward pass through the network
inputs = np.array(inputs_list, ndmin=2).T
#### Implement the forward pass here ####
# TODO: Hidden layer
hidden_inputs = np.dot( self.weights_input_to_hidden, inputs )
hidden_outputs = self.activation_function( hidden_inputs )
# TODO: Output layer
final_inputs = np.dot( self.weights_hidden_to_output, hidden_outputs )
final_outputs = final_inputs
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
import sys
### Set the hyperparameters here ###
epochs = 7000
learning_rate = 0.002
hidden_nodes = 15
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for e in range(epochs):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
    for record, target in zip(train_features.loc[batch].values,
                              train_targets.loc[batch]['cnt']):
network.train(record, target)
# Printing out the training progress
train_loss = MSE(network.run(train_features), train_targets['cnt'].values)
val_loss = MSE(network.run(val_features), val_targets['cnt'].values)
sys.stdout.write("\rProgress: " + str(100 * e/float(epochs))[:4] \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
plt.ylim(ymax=0.5)
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features)*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.loc[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
import unittest
inputs = [0.5, -0.2, 0.1]
targets = [0.4]
test_w_i_h = np.array([[0.1, 0.4, -0.3],
[-0.2, 0.5, 0.2]])
test_w_h_o = np.array([[0.3, -0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328, -0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, 0.39775194, -0.29887597],
[-0.20185996, 0.50074398, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Keras provides a loading function similar to train_test_split from scikit-learn's
Step2: The neural nets in Keras act on the feature matrix slightly differently than the standard
Step3: Here we have reshaped the feature matrix into a four-dimensional matrix with dimensions
Step4: Creating a convolutional neural network
Step5: However, this time, we will be smarter about the individual layers. We will design our
Step6: After that, we will use a linear rectified unit as an activation function
Step7: In a deep convolutional neural net, we can have as many layers as we want. A popular
Step8: Finally, we will pool the activations and add a Dropout layer
Step9: Then we will flatten the model and finally pass it through a softmax function to arrive at
Step10: Here, we will use the cross-entropy loss and the Adadelta algorithm
Step11: Fitting the model
Step12: After training completes, we can evaluate the classifier
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
np.random.seed(1337) # for reproducibility
from keras.datasets import mnist
(X_train, y_train), (X_test, y_test) = mnist.load_data()
img_rows, img_cols = 28, 28
X_train = X_train.reshape(X_train.shape[0], img_rows, img_cols, 1)
X_test = X_test.reshape(X_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)
X_train = X_train.astype('float32') / 255.0
X_test = X_test.astype('float32') / 255.0
from keras.utils import np_utils
n_classes = 10
Y_train = np_utils.to_categorical(y_train, n_classes)
Y_test = np_utils.to_categorical(y_test, n_classes)
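np_utils.to_categorical one-hot encodes the integer labels. The same mapping can be sketched with plain numpy (toy labels here, so the snippet runs without Keras):

```python
import numpy as np

labels = np.array([3, 0, 9])   # toy digit labels
one_hot = np.eye(10)[labels]   # row i of the 10x10 identity is the one-hot vector for class i
print(one_hot.shape)           # (3, 10)
print(one_hot[0])              # 1.0 at index 3, zeros elsewhere
```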
from keras.models import Sequential
model = Sequential()
from keras.layers import Conv2D
n_filters = 32
kernel_size = (3, 3)
model.add(Conv2D(n_filters, (kernel_size[0], kernel_size[1]),
padding='valid',
input_shape=input_shape))
from keras.layers import Activation
model.add(Activation('relu'))
model.add(Conv2D(n_filters, (kernel_size[0], kernel_size[1])))
model.add(Activation('relu'))
from keras.layers import MaxPooling2D, Dropout
pool_size = (2, 2)
model.add(MaxPooling2D(pool_size=pool_size))
model.add(Dropout(0.25))
from keras.layers import Flatten, Dense
model.add(Flatten())
model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(n_classes))
model.add(Activation('softmax'))
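With 'valid' padding, each 3x3 convolution trims the feature map from n to n-2 per dimension, and the 2x2 pool halves it; that bookkeeping fixes the size of the flattened vector entering Dense(128). A plain-Python sketch mirroring the architecture above:

```python
size = 28
size = size - 3 + 1           # first 3x3 'valid' conv:  28 -> 26
size = size - 3 + 1           # second 3x3 'valid' conv: 26 -> 24
size = size // 2              # 2x2 max pooling:         24 -> 12
flattened = size * size * 32  # 32 filters -> 4608 values feed Dense(128)
print(size, flattened)        # 12 4608
```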
model.compile(loss='categorical_crossentropy',
optimizer='adadelta',
metrics=['accuracy'])
model.fit(X_train, Y_train, batch_size=128, epochs=12,
verbose=1, validation_data=(X_test, Y_test))
model.evaluate(X_test, Y_test, verbose=1)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: quantulum3
Step2: Finding quantity statements in large texts
Step3: Annotating a dataset
Step4: We could then do something to split multiple amounts into multiple rows or columns...
|
<ASSISTANT_TASK:>
Python Code:
sentences = [
'4 years and 6 monthsโ imprisonment with a licence extension of 2 years and 6 months',
'No quantities here',
'I measured it as 2 meters and 30 centimeters.',
"four years and six months' imprisonment with a licence extension of 2 years and 6 months",
'it cost ยฃ250... bargain...',
'it weighs four hundred kilograms.',
'It weighs 400kg.',
'three million, two hundred & forty, you say?',
'it weighs four hundred and twenty kilograms.'
]
#!pip3 install quantulum3
from quantulum3 import parser
for sent in sentences:
print(sent)
p = parser.parse(sent)
if p:
print('\tSpoken:',parser.inline_parse_and_expand(sent))
print('\tNumeric elements:')
for q in p:
display(q)
print('\t\t{} :: {}'.format(q.surface, q))
print('\n---------\n')
import spacy
nlp = spacy.load('en_core_web_lg', disable = ['ner'])
text = '''
Once upon a time, there was a thing. The thing weighed forty kilogrammes and cost £250.
It was blue. It took forty five minutes to get it home.
What a day that was. I didn't get back until 2.15pm. Then I had some cake for tea.
'''
doc = nlp(text)
for sent in doc.sents:
print(sent)
for sent in doc.sents:
sent = sent.text
p = parser.parse(sent)
if p:
print('\tSpoken:',parser.inline_parse_and_expand(sent))
print('\tNumeric elements:')
for q in p:
display(q)
print('\t\t{} :: {}'.format(q.surface, q))
print('\n---------\n')
url = 'https://raw.githubusercontent.com/BBC-Data-Unit/unduly-lenient-sentences/master/ULS%20for%20Sankey.csv'
import pandas as pd
df = pd.read_csv(url)
df.head()
#get a row
df.iloc[1]
#and a, erm. sentence...
df.iloc[1]['Original sentence (refined)']
parser.parse(df.iloc[1]['Original sentence (refined)'])
def amountify(txt):
    #txt may be some flavour of nan...
#handle scruffily for now...
try:
if txt:
p = parser.parse(txt)
x=[]
for q in p:
x.append( '{} {}'.format(q.value, q.unit.name))
return '::'.join(x)
return ''
except:
return
df['amounts'] = df['Original sentence (refined)'].apply(amountify)
df.head()
df['Original sentence (refined)'][:20].apply(print);
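Since amountify joins multiple quantities with '::', the splitting step hinted at in the description can use str.split(expand=True). A sketch on a toy frame (the column names are illustrative):

```python
import pandas as pd

toy = pd.DataFrame({'amounts': ['2.0 year::6.0 month', '250.0 pound sterling', '']})
split_cols = toy['amounts'].str.split('::', expand=True)
split_cols.columns = ['amount_{}'.format(i) for i in range(split_cols.shape[1])]
print(split_cols)  # rows with fewer amounts are padded with missing values
```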
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <a name="multipanel"></a>
Step2: So even with the sharing of axis information, there's still a lot of repeated code. This current version with just two parameters might still be ok, but
Step3: Using the zip function we can even iterate over multiple lists at the same time with ease
Step4: That's really handy, but needing to access each part of each item with an index like item[0] isn't very flexible, requires us to remember the layout of the item, and isn't best practice. Instead we can use Python's unpacking syntax to make things nice and intuitive.
Step5: <div class="alert alert-success">
Step6: Solution
Step7: zip can also be used to "unzip" items.
Step8: Let's break down what happened there. Zip pairs elements from all of the input arguments and hands those back to us. So effectively our zip(*zipped_list) is zip((1, 2), (3, 4), (5, 6)), so the first element from each input is paired (1, 3, 5), etc. You can think of it like unzipping or transposing.
Step9: <div class="alert alert-success">
Step10: Solution
Step11: <a name="functions"></a>
Step12: We've re-implemented add which isn't incredibly exiciting, but that could be hundreds of lines of a numerical method, making a plot, or some other task. Using the function is simple
Step13: <div class="alert alert-success">
Step14: Solution
Step15: Reading buoy data with a function
Step16: <a name="argskwargs"></a>
Step17: Kwargs Example
Step18: Kwargs are commonly used in MetPy, matplotlib, pandas, and many other Python libraries (in fact we've used them in almost every notebook so far!).
Step19: It's a step forward, but we've lost a lot of formatting information. The lines are both blue, the labels are less ideal, and the title just uses the variable name. We can use some of Python's features like dictionaries, functions, and string manipulation to help improve the versatility of the plotter.
Step20: To access the value, just access that element of the dictionary with the key.
Step21: Now let's apply that to our plot. We'll use the same code from the previous example, but now look up the line color in the dictionary.
Step22: That's already much better. We need to be able to plot multiple variables on the wind speed/gust plot though. In this case, we'll allow a list of variables for each plot to be given and iterate over them. We'll store this in a list of lists. Each plot has its own list of variables!
Step23: <div class="alert alert-success">
Step24: Solution
Step25: We're almost back to where we started, but in a much more versatile form! We just need to make the labels and titles look nice. To do that, let's write a function that uses some string manipulation to clean up the variable names and give us an axis/plot title and legend label.
Step26: <a href="#top">Top</a>
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.dates import DateFormatter, DayLocator
from siphon.simplewebservice.ndbc import NDBC
%matplotlib inline
# Read in some data
df = NDBC.realtime_observations('42039')
# Trim to the last 7 days
df = df[df['time'] > (pd.Timestamp.utcnow() - pd.Timedelta(days=7))]
# ShareX means that the axes will share range, ticking, etc. for the x axis
fig, (ax1, ax2) = plt.subplots(1, 2, sharex=True, figsize=(18, 6))
# Panel 1
ax1.plot(df.time, df.wind_speed, color='tab:orange', label='Windspeed')
ax1.set_xlabel('Time')
ax1.set_ylabel('Speed')
ax1.set_title('Measured Winds')
ax1.legend(loc='upper left')
ax1.grid(True)
# These only need to be set once; sharex shares them with the other panel
ax1.xaxis.set_major_formatter(DateFormatter('%m/%d'))
ax1.xaxis.set_major_locator(DayLocator())
# Panel 2
ax2.plot(df.time, df.pressure, color='black', label='Pressure')
ax2.set_xlabel('Time')
ax2.set_ylabel('hPa')
ax2.set_title('Atmospheric Pressure')
ax2.legend(loc='upper left')
ax2.grid(True)
plt.suptitle('Buoy 42039 Data', fontsize=24)
my_list = ['2001 A Space Obyssey',
'The Princess Bride',
'Monty Python and the Holy Grail']
for item in my_list:
print(item)
my_other_list = ['I\'m sorry, Dave. I\'m afraid I can\'t do that.',
'My name is Inigo Montoya.',
'It\'s only a flesh wound.']
for item in zip(my_list, my_other_list):
print(item)
for reference, quote in zip(my_list, my_other_list):
print(reference, '-', quote)
# Your code goes here
# %load solutions/zip.py
zipped_list = [(1, 2),
(3, 4),
(5, 6)]
unzipped = zip(*zipped_list)
print(list(unzipped))
for i, quote in enumerate(my_other_list):
print(i, ' - ', quote)
# Your code goes here
# %load solutions/enumerate.py
def silly_add(a, b):
return a + b
result = silly_add(3, 4)
print(result)
# Your code goes here
# %load solutions/functions.py
def read_buoy_data(buoy, days=7):
# Read in some data
df = NDBC.realtime_observations(buoy)
# Trim to the last 7 days
df = df[df['time'] > (pd.Timestamp.utcnow() - pd.Timedelta(days=days))]
return df
df = read_buoy_data('42039')
df
def arg_func(*argv):
for arg in argv:
print (arg)
arg_func('Welcome', 'to', 'the', 'Python', 'Workshop')
# Create a function to conduct all basic math operations, using a kwarg
def silly_function(a, b, operation=None):
if operation == 'add':
return a + b
elif operation == 'subtract':
return a - b
elif operation == 'multiply':
return a * b
elif operation == 'division':
return a / b
else:
raise ValueError('Incorrect value for "operation" provided.')
print(silly_function(3, 4, operation='add'))
print(silly_function(3, 4, operation='multiply'))
# A list of names of variables we want to plot
plot_variables = ['wind_speed', 'pressure']
# Make our figure, now choosing number of subplots based on length of variable name list
fig, axes = plt.subplots(1, len(plot_variables), sharex=True, figsize=(18, 6))
# Loop over the list of subplots and names together
for ax, var_name in zip(axes, plot_variables):
ax.plot(df.time, df[var_name])
# Set label/title based on variable name--no longer hard-coded
ax.set_ylabel(var_name)
ax.set_title(f'Buoy {var_name}')
# Set up our formatting--note lack of repetition
ax.grid(True)
ax.set_xlabel('Time')
ax.xaxis.set_major_formatter(DateFormatter('%m/%d'))
ax.xaxis.set_major_locator(DayLocator())
colors = {'wind_speed': 'tab:orange', 'wind_gust': 'tab:olive', 'pressure': 'black'}
colors['pressure']
fig, axes = plt.subplots(1, len(plot_variables), sharex=True, figsize=(18, 6))
for ax, var_name in zip(axes, plot_variables):
# Grab the color from our dictionary and pass it to plot()
color = colors[var_name]
ax.plot(df.time, df[var_name], color)
ax.set_ylabel(var_name)
ax.set_title(f'Buoy {var_name}')
ax.grid(True)
ax.set_xlabel('Time')
ax.xaxis.set_major_formatter(DateFormatter('%m/%d'))
ax.xaxis.set_major_locator(DayLocator())
plot_variables = [['wind_speed', 'wind_gust'], ['pressure']]
fig, axes = plt.subplots(1, len(plot_variables), sharex=True, figsize=(18, 6))
for ax, var_names in zip(axes, plot_variables):
for var_name in var_names:
# Grab the color from our dictionary and pass it to plot()
color = colors[var_name]
ax.plot(df.time, df[var_name], color)
ax.set_ylabel(var_name)
ax.set_title(f'Buoy {var_name}')
ax.grid(True)
ax.set_xlabel('Time')
ax.xaxis.set_major_formatter(DateFormatter('%m/%d'))
ax.xaxis.set_major_locator(DayLocator())
# Create your linestyles dictionary and modify the code below
fig, axes = plt.subplots(1, len(plot_variables), sharex=True, figsize=(18, 6))
for ax, var_names in zip(axes, plot_variables):
for var_name in var_names:
# Grab the color from our dictionary and pass it to plot()
color = colors[var_name]
ax.plot(df.time, df[var_name], color)
ax.set_ylabel(var_name)
ax.set_title(f'Buoy {var_name}')
ax.grid(True)
ax.set_xlabel('Time')
ax.xaxis.set_major_formatter(DateFormatter('%m/%d'))
ax.xaxis.set_major_locator(DayLocator())
# %load solutions/looping1.py
def format_varname(varname):
parts = varname.split('_')
title = parts[0].title()
label = varname.replace('_', ' ').title()
return title, label
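For instance, the helper maps underscored variable names to a short title and a full label (the function is repeated here so the example runs on its own):

```python
def format_varname(varname):
    parts = varname.split('_')
    title = parts[0].title()
    label = varname.replace('_', ' ').title()
    return title, label

print(format_varname('wind_gust'))  # ('Wind', 'Wind Gust')
print(format_varname('pressure'))   # ('Pressure', 'Pressure')
```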
fig, axes = plt.subplots(1, len(plot_variables), sharex=True, figsize=(18, 6))
linestyles = {'wind_speed': '-', 'wind_gust': '--', 'pressure': '-'}
for ax, var_names in zip(axes, plot_variables):
for var_name in var_names:
title, label = format_varname(var_name)
color = colors[var_name]
linestyle = linestyles[var_name]
ax.plot(df.time, df[var_name], color, linestyle=linestyle, label=label)
ax.set_ylabel(title)
ax.set_title(f'Buoy {title}')
ax.grid(True)
ax.set_xlabel('Time')
ax.xaxis.set_major_formatter(DateFormatter('%m/%d'))
ax.xaxis.set_major_locator(DayLocator())
ax.legend(loc='upper left')
buoys = ['42039', '42022']
fig, axes = plt.subplots(len(buoys), len(plot_variables), sharex=True, figsize=(14, 10))
for row, buoy in enumerate(buoys):
df = read_buoy_data(buoy)
for col, var_names in enumerate(plot_variables):
ax = axes[row,col]
for var_name in var_names:
title, label = format_varname(var_name)
color = colors[var_name]
linestyle = linestyles[var_name]
ax.plot(df.time, df[var_name], color, linestyle=linestyle, label=label)
ax.set_ylabel(title)
ax.set_title(f'Buoy {buoy} {title}')
ax.grid(True)
ax.set_xlabel('Time')
ax.xaxis.set_major_formatter(DateFormatter('%m/%d'))
ax.xaxis.set_major_locator(DayLocator())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First we read in a plasmid from Havens et al. 2012 and isolate the EYFP sequence.
Step2: Designing primers is straightforward - you just call design.primer with a sequence.DNA object as the input.
Step3: Designing primers and getting a string output is just the first step in primer design - we want to know whether the primers actually work and write them out to a file. The point of programming DNA is that you never copy and paste!
Step4: Now that we have verified that our primers should at least amplify the DNA that we want, let's write out our primers to file so they can be submitted to an oligo synthesis company.
Step5: The csv file can then be opened in a spreadsheet application like Excel or processed by a downstream program. This is the format of the csv
|
<ASSISTANT_TASK:>
Python Code:
import coral as cor
plasmid = cor.seqio.read_dna("../files_for_tutorial/maps/pGP4G-EYFP.ape")
eyfp_f = [f for f in plasmid.features if f.name == 'EYFP'][0]
eyfp = plasmid.extract(eyfp_f)
print len(eyfp)
eyfp
# Forward and reverse, one at a time using design_primer()
forward = cor.design.primer(eyfp)
reverse = cor.design.primer(eyfp.reverse_complement())
# Both at once using design_primers()
forward, reverse = cor.design.primers(eyfp)
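Designing the reverse primer works on the reverse complement of the target. The operation itself is simple; here is a toy sketch of it in plain Python (not coral's implementation):

```python
COMPLEMENT = {'A': 'T', 'T': 'A', 'G': 'C', 'C': 'G'}

def reverse_complement(seq):
    # Complement each base while reading the sequence back to front
    return ''.join(COMPLEMENT[base] for base in reversed(seq))

print(reverse_complement('ATGGTG'))  # 'CACCAT'
```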
# design_primer has many options, including adding overhangs
custom_forward = cor.design.primer(eyfp, tm=65, min_len=12,
tm_undershoot=1, tm_overshoot=1,
end_gc=True, tm_parameters="santalucia98",
overhang=cor.DNA("GGGGGATCGAT"))
print forward
print
print custom_forward
amplicon = cor.reaction.pcr(plasmid, forward, reverse)
amplicon == eyfp
# First we give our primers names (the `.name` attribute is empty by default)
forward.name = "EYFP_forward"
reverse.name = "EYFP_reverse"
# Then we write to file - a csv (comma separated value file)
cor.seqio.write_primers([forward, reverse], "./designed_primers.csv", ["Forward EYFP primer", "Reverse EYFP primer"])
import csv
with open("./designed_primers.csv", "r") as csv_file:
reader = csv.reader(csv_file)
lines = [line for line in reader]
for line in lines:
print line
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Step 1
Step2: Step 2
Step3: Step 3
|
<ASSISTANT_TASK:>
Python Code:
%reset
################################################################################
### Import packages
################################################################################
import os
import pandas as pd
from scipy.spatial import distance as dist
from scipy.cluster import hierarchy as sch
################################################################################
### Define folder structure
################################################################################
externalDataDir = '../data/externalData'
taxonFile = externalDataDir+'/taxonomy.csv'
orthomclDir = '../data/orthoMCL'
resultsDir = '../results/'
exprDir = resultsDir+'/expression'
figureDir = '../figures/fig3-workflow'
taxonLevel = 'Clade'
clade = 'acI-C'
# Check that figureDir exists, results will be placed there
if not os.path.exists(figureDir):
os.makedirs(figureDir)
################################################################################
### Create list of genomes in the specified clade
################################################################################
# Define a function to import taxonomy files
def importTaxonomy(taxonFile, level):
# Read in the taxonomic classification
taxonClass = pd.DataFrame.from_csv(taxonFile, sep=',')
taxonClass = taxonClass.dropna()
# Extract the unique tribes found in the dataset
groupList = pd.unique(taxonClass[level].values)
groupList.sort(axis=0)
groupList = [ group for group in groupList if not group.startswith('Unknown') ]
# For each tribe, return the list of samples. Creates a dict and adds an entry
# for each tribe.
groupSampleDict = {}
for group in groupList:
# Identify the samples belonging to this tribe
samples = taxonClass.loc[taxonClass[level] == group]
samples = [sample for sample in samples.index]
groupSampleDict[group] = samples
return groupSampleDict
genomeSampleDict = importTaxonomy(taxonFile, taxonLevel)
genomeList = genomeSampleDict[clade]
# Determine the set of COGs found within the specified clade
# Read in the cog table. Subset columns belonging to the clade. Subset rows
# with COGs in that clade.
cogTable = pd.read_csv(orthomclDir+'/cogTable.csv', delimiter=',', index_col=0)
cogTable = cogTable[genomeList]
cogTable = cogTable[~pd.isnull(cogTable).all(axis=1)]
# Replace CDS with '1' and 'nan' with 0
cogTable = cogTable.fillna(0)
cogTable = cogTable.replace(to_replace='.+', value='1', regex=True)
# Make index a column and rearrange. Anvio requires the first row/column of
# view data file to say 'contig'
cogTable['contig'] = cogTable.index
cogTable = cogTable[['contig']+genomeList]
cogTable.to_csv(figureDir+'/view_data.txt', sep='\t', index=False)
# Determine the set of COGs found within the specified clade
# Read in the cog table. Subset columns belonging to the clade. Subset rows
# with COGs in that clade.
cogTable = pd.read_csv(orthomclDir+'/cogTable.csv', delimiter=',', index_col=0)
cogTable = cogTable[genomeList]
cogTable = cogTable[~pd.isnull(cogTable).all(axis=1)]
# Establish data table of RPKM values
rpkmTable = pd.read_csv(exprDir+'/'+clade+'.norm', delimiter=',', index_col=1)
rpkmTable = rpkmTable['Log2 Avg RPKM']
# Create empty dataframe and populate with values from rpkmTable
addlViewDataDF = pd.DataFrame(0, index=cogTable.index, columns=['log2_avg_rpkm'], dtype=float)
for cog in addlViewDataDF.index:
if cog in rpkmTable.index:
addlViewDataDF.set_value(cog, 'log2_avg_rpkm', rpkmTable.loc[cog])
addlViewDataDF.to_csv(figureDir+'/additional_view_data.txt', sep='\t')
# Import the observations
obsDF = pd.read_csv(figureDir+'/view_data.txt', sep='\t', index_col=0)
leafNames = obsDF.index.tolist()
obsMatrix = obsDF.values
# Compute the distance matrix
distMatrix = dist.pdist(obsMatrix, metric='euclidean')
# Compute the linkage matrix
linkMatrix = sch.linkage(distMatrix, method='single', metric='euclidean')
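Note that `dist.pdist` returns a *condensed* distance vector: the upper triangle of the full n-by-n distance matrix, flattened to length n(n-1)/2, which is the form `sch.linkage` expects. A minimal pure-Python equivalent, for illustration only (the notebook itself uses scipy):

```python
import math

def condensed_euclidean(points):
    # Pairwise Euclidean distances over the upper triangle, in the same
    # row-major order that scipy.spatial.distance.pdist uses
    out = []
    n = len(points)
    for i in range(n):
        for j in range(i + 1, n):
            out.append(math.dist(points[i], points[j]))
    return out

pts = [(0, 0), (3, 4), (0, 4), (3, 0)]
d = condensed_euclidean(pts)
print(len(d))  # 4 points -> 4*3/2 = 6 pairwise distances
```
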
# Export the linakge matrix as a newick file
# Stolen from StackOverflow: http://stackoverflow.com/questions/28222179/save-dendrogram-to-newick-format
def getNewick(node, newick, parentdist, leaf_names):
if node.is_leaf():
return "%s:%.2f%s" % (leaf_names[node.id], parentdist - node.dist, newick)
else:
if len(newick) > 0:
newick = "):%.2f%s" % (parentdist - node.dist, newick)
else:
newick = ");"
newick = getNewick(node.get_left(), newick, node.dist, leaf_names)
newick = getNewick(node.get_right(), ",%s" % (newick), node.dist, leaf_names)
newick = "(%s" % (newick)
return newick
tree = sch.to_tree(linkMatrix, False)
newickTree = getNewick(tree, "", tree.dist, leafNames)
# Write to file
with open(figureDir+'/tree.txt', 'w') as outFileHandle:
outFileHandle.write(newickTree)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In this notebook we demo two equivalent ways of performing a two-sample Bayesian t-test to compare the mean value of two Gaussian populations using Bambi.
Step2: When we carry out a two sample t-test we are implicitly using a linear model that can be specified in different ways. One of these approaches is the following
Step3: We've only specified the formula for the model and Bambi automatically selected priors distributions and values for their parameters. We can inspect both the setup and the priors as following
Step4: To inspect our posterior and the sampling process we can call az.plot_trace(). The option kind='rank_vlines' gives us a variant of the rank plot that uses lines and dots and helps us to inspect the stationarity of the chains. Since there is no clear pattern or serious deviations from the horizontal lines, we can conclude the chains are stationary.
Step5: In the summary table we can see the 94% highest density interval for $\beta_1$ ranges from 1.511 to 2.499. Thus, according to the data and the model used, we conclude the difference between the two population means is somewhere between 1.5 and 2.5, and hence we support the hypothesis that $\beta_1 \ne 0$.
Step6: Another way to arrive to a similar conclusion is by calculating the probability that the parameter $\beta_1 > 0$. This probability, practically equal to 1, tells us that the mean of the two populations are different.
Step7: The linear model implicit in the t-test can also be specified without an intercept term, such is the case of Model 2.
Step8: We've only specified the formula for the model and Bambi automatically selected priors distributions and values for their parameters. We can inspect both the setup and the priors as following
Step9: To inspect our posterior and the sampling process we can call az.plot_trace(). The option kind='rank_vlines' gives us a variant of the rank plot that uses lines and dots and helps us to inspect the stationarity of the chains. Since there is no clear pattern or serious deviations from the horizontal lines, we can conclude the chains are stationary.
Step10: In this summary we can observe the estimated distribution of means for each population. A simple way to compare them is subtracting one to the other. In the next plot we can se that the entirety of the distribution of differences is higher than zero and that the mean of population 2 is higher than the mean of population 1 by a mean of 2.
Step11: Another way to arrive to a similar conclusion is by calculating the probability that the parameter $\beta_1 - \beta_0 > 0$. This probability, practically equal to 1, tells us that the mean of the two populations are different.
|
<ASSISTANT_TASK:>
Python Code:
import arviz as az
import bambi as bmb
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
az.style.use("arviz-darkgrid")
np.random.seed(1234)
a = np.random.normal(6, 2.5, 160)
b = np.random.normal(8, 2, 120)
df = pd.DataFrame({"Group": ["a"] * 160 + ["b"] * 120, "Val": np.hstack([a, b])})
df.head()
az.plot_violin({"a": a, "b": b});
model_1 = bmb.Model("Val ~ Group", df)
results_1 = model_1.fit()
model_1
model_1.plot_priors();
az.plot_trace(results_1, kind="rank_vlines");
az.summary(results_1)
# Grab just the posterior of the term of interest (group)
group_posterior = results_1.posterior['Group']
az.plot_posterior(group_posterior, ref_val=0);
# Probability that posterior is > 0
(group_posterior.values > 0).mean()
model_2 = bmb.Model("Val ~ 0 + Group", df)
results_2 = model_2.fit()
model_2
model_2.plot_priors();
az.plot_trace(results_2, kind="rank_vlines");
az.summary(results_2)
# Grab just the posterior of the term of interest (group)
group_posterior = results_2.posterior['Group'][:,:,1] - results_2.posterior['Group'][:,:,0]
az.plot_posterior(group_posterior, ref_val=0);
# Probabiliy that posterior is > 0
(group_posterior.values > 0).mean()
%load_ext watermark
%watermark -n -u -v -iv -w
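As a frequentist sanity check on the Bayesian comparison above, the same question can be asked with a Welch t statistic. Below is a hand-rolled sketch on two small illustrative samples (these numbers are made up for the example, not the notebook's simulated data); in practice you would call `scipy.stats.ttest_ind(a, b, equal_var=False)` instead:

```python
import statistics

def welch_t(a, b):
    # t = (mean_a - mean_b) / sqrt(var_a/n_a + var_b/n_b), using sample variances
    va, vb = statistics.variance(a), statistics.variance(b)
    return (statistics.mean(a) - statistics.mean(b)) / ((va / len(a) + vb / len(b)) ** 0.5)

group_a = [5.1, 6.2, 5.8, 6.5, 5.9, 6.1]
group_b = [7.9, 8.3, 8.1, 7.7, 8.4, 8.0]
print(welch_t(group_a, group_b))  # strongly negative: group_b's mean is clearly higher
```

A large |t| here points to the same conclusion the posterior of the group term gave us: the two population means differ.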
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Adjacency Matrices
Step2: A little visualization, just to double check.
Step3: Steady-State Probability of Random Walker
Step4: The resulting probabilities
Step5: Steady-State Probabilities for Undirected Graph
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import networkx as nx
import seaborn as sns
sns.set_style('ticks')
sns.set_context('poster')
A_directed = np.array( [[0, 1, 0, 0, 1],
[0, 0, 1, 0, 0],
[1, 0, 0, 1, 1],
[0, 1, 1, 0, 0],
[0, 0, 0, 0, 0]] )
print(A_directed)
nx.draw_networkx(nx.DiGraph(data=A_directed.T),
labels=dict(zip(range(len(A_directed)), range(1, len(A_directed)+1))),
node_size=600,
node_color=sns.xkcd_rgb["pale orange"])
def calc_steady_state(A, i=100):
M = A / A.sum(axis=0)
M = np.linalg.matrix_power(M, i)
return M
def print_probs(probs):
print('\n'.join('Node {0}: {1:.4f}'.format(node, p) for node, p in \
zip(range(1, len(probs)+1), probs)))
probs = calc_steady_state(A_directed)[:,0]
print_probs(probs)
A_undirected = np.array( [[0, 1, 1, 0, 1],
[1, 0, 1, 1, 0],
[1, 1, 0, 1, 1],
[0, 1, 1, 0, 0],
[1, 0, 1, 0, 0]] )
print(A_undirected)
probs = calc_steady_state(A_undirected)[:,0]
print_probs(probs)
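The matrix-power trick in `calc_steady_state` is just power iteration on the column-stochastic transition matrix: the steady state is the eigenvector for eigenvalue 1. A minimal pure-Python sketch on a two-node chain (illustrative; the notebook does this with NumPy):

```python
def steady_state(P, steps=200):
    # P[i][j] = probability of moving from node j to node i (column-stochastic)
    n = len(P)
    v = [1.0 / n] * n
    for _ in range(steps):
        v = [sum(P[i][j] * v[j] for j in range(n)) for i in range(n)]
    return v

# Two-node walk: from node 0 always move to node 1; from node 1 stay with prob 0.5
P = [[0.0, 0.5],
     [1.0, 0.5]]
print(steady_state(P))  # converges to [1/3, 2/3]
```
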
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We'll start by generating some fake data (from a sinusoidal model) with error bars
Step2: Now, we'll choose a kernel (covariance) function to model these data, assume a zero mean model, and predict the function values across the full range. The full kernel specification language is documented here but here's an example for this dataset
Step3: The gp model provides a handler for computing the marginalized likelihood of the data under this model
Step4: So we can use this, combined with scipy's minimize function, to fit for the maximum likelihood parameters
Step5: And plot the maximum likelihood model
|
<ASSISTANT_TASK:>
Python Code:
import george
george.__version__
import numpy as np
import matplotlib.pyplot as pl
np.random.seed(1234)
x = 10 * np.sort(np.random.rand(15))
yerr = 0.2 * np.ones_like(x)
y = np.sin(x) + yerr * np.random.randn(len(x))
pl.errorbar(x, y, yerr=yerr, fmt=".k", capsize=0)
pl.xlim(0, 10)
pl.ylim(-1.45, 1.45)
pl.xlabel("x")
pl.ylabel("y");
from george import kernels
kernel = np.var(y) * kernels.ExpSquaredKernel(0.5)
gp = george.GP(kernel)
gp.compute(x, yerr)
x_pred = np.linspace(0, 10, 500)
pred, pred_var = gp.predict(y, x_pred, return_var=True)
pl.fill_between(x_pred, pred - np.sqrt(pred_var), pred + np.sqrt(pred_var),
color="k", alpha=0.2)
pl.plot(x_pred, pred, "k", lw=1.5, alpha=0.5)
pl.errorbar(x, y, yerr=yerr, fmt=".k", capsize=0)
pl.plot(x_pred, np.sin(x_pred), "--g")
pl.xlim(0, 10)
pl.ylim(-1.45, 1.45)
pl.xlabel("x")
pl.ylabel("y");
print("Initial ln-likelihood: {0:.2f}".format(gp.log_likelihood(y)))
from scipy.optimize import minimize
def neg_ln_like(p):
gp.set_parameter_vector(p)
return -gp.log_likelihood(y)
def grad_neg_ln_like(p):
gp.set_parameter_vector(p)
return -gp.grad_log_likelihood(y)
result = minimize(neg_ln_like, gp.get_parameter_vector(), jac=grad_neg_ln_like)
print(result)
gp.set_parameter_vector(result.x)
print("\nFinal ln-likelihood: {0:.2f}".format(gp.log_likelihood(y)))
pred, pred_var = gp.predict(y, x_pred, return_var=True)
pl.fill_between(x_pred, pred - np.sqrt(pred_var), pred + np.sqrt(pred_var),
color="k", alpha=0.2)
pl.plot(x_pred, pred, "k", lw=1.5, alpha=0.5)
pl.errorbar(x, y, yerr=yerr, fmt=".k", capsize=0)
pl.plot(x_pred, np.sin(x_pred), "--g")
pl.xlim(0, 10)
pl.ylim(-1.45, 1.45)
pl.xlabel("x")
pl.ylabel("y");
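Behind `gp.predict` sits the standard GP conditional mean, mu* = K*^T (K + sigma^2 I)^{-1} y. With a single training point the matrix inverse collapses to a scalar, so the whole prediction fits in a few lines. A minimal sketch with a unit-variance squared-exponential kernel (illustration of the math only, not george's implementation):

```python
import math

def sq_exp(x1, x2, length=1.0):
    # Unit-variance squared-exponential (RBF) kernel
    return math.exp(-0.5 * (x1 - x2) ** 2 / length ** 2)

def gp_mean_1pt(x_train, y_train, x_star, noise_var=0.04):
    # mu* = k(x*, x_train) / (k(x_train, x_train) + sigma^2) * y_train
    return sq_exp(x_star, x_train) / (sq_exp(x_train, x_train) + noise_var) * y_train

# At the training point the prediction is shrunk toward the zero prior mean
# by the noise; far away it decays back to the prior mean of zero.
print(gp_mean_1pt(0.0, 1.0, 0.0), gp_mean_1pt(0.0, 1.0, 10.0))
```
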
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Reading cleaned data
Step2: Preparing data
Step3: Regression approach
Step4: Classification approach
Step5: Binary solution
|
<ASSISTANT_TASK:>
Python Code:
import turicreate as tc
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
sf = tc.SFrame.read_csv('electrodes_clean.csv')
sf.explore() # in GUI
# optional save to SFrame
# sf = tc.SFrame('electrodes_clean.sframe')
sf_reg = sf.remove_column('TPLE category')
sf_class = sf.remove_column('TPLE')
sf_reg_train, sf_reg_test = sf_reg.random_split(0.8)
reg_model = tc.regression.create(sf_reg_train, target = 'TPLE')
reg_model.evaluate(sf_reg_test)
reg_model.summary()
sf_class_train, sf_class_test = sf_class.random_split(0.8)
class_model = tc.classifier.create(sf_class_train, target = 'TPLE category')
metrics = class_model.evaluate(sf_class_test)
metrics
# metrics['confusion_matrix']
class_model.summary()
# create new dataset - easier when experimenting with different cutoff values
# remove column 'TPLE category' - otherwise we severely bias results
sf_dev = sf_class.remove_column('TPLE category')
def evaluate_classification_for_cutoff(value):
'''Creates dataframe with predefined cutoff value.
Useful to play with different cutoffs. Value represents the deviation in mm.
Returns metrics of model'''
sf_dev['Deviated'] = sf['TPLE'].apply(lambda tple: 'yes' if tple > value else 'no')
sf_dev_train, sf_dev_test = sf_dev.random_split(0.8)
model = tc.classifier.create(sf_dev_train, target = 'Deviated', verbose = False)
metrics = model.evaluate(sf_dev_test)
return metrics
cutoff_values = [1.5, 2.0, 2.5, 3.0, 3.5, 4.0]
results = {}
for cv in cutoff_values:
metr = evaluate_classification_for_cutoff(cv)
results.update({cv: metr})
plt.figure()
for i, (cutoff, metric) in enumerate(results.items()):
    acc = metric['accuracy']; auc = metric['auc']
    print(f"Cutoff {cutoff} - Accuracy: {acc:.2f} | AUC: {auc:.2f}")
    # Label only the first pair of points so each series gets a single legend entry
    plt.plot(cutoff, acc, 'bo', label='Accuracy' if i == 0 else None)  # Accuracy in BLUE
    plt.plot(cutoff, auc, 'ro', label='AUC' if i == 0 else None)  # AUC in RED
plt.legend()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Compute the coronal averages for temperature and density over the whole parameter space
Step2: Define a function to do the spatial averaging.
Step3: Now, if the raw HYDRAD results are in the appropriate directory, do the time average over all of them and save them to a data structure. Otherwise, just load the time-dependent spatial averages from a binary blob.
Step4: Finally, save the results as a serialized pickle file.
|
<ASSISTANT_TASK:>
Python Code:
import os
import pickle
import numpy as np
hfRes_format = '../results/static/HYDRAD_raw/%s/HYDRAD_%d/Results/profile%d.phy'
hydrad_labs = [20,40,200,500]
hydrad_res = {'electron':{},'ion':{},'single':{},
'loop_midpoint':4.5e+9, 'time':np.arange(0,5001)}
int_perc = 0.9
def spatial_average(s,f,mp,eps_mp):
#calculate bounds
mp_lower = mp - eps_mp*mp*(1.-1.e9/(1.e9 + 2.*mp))
mp_upper = mp + eps_mp*mp*(1.-1.e9/(1.e9 + 2.*mp))
#find f and s within specified bounds
i_eb = np.where((s>=mp_lower) & (s<=mp_upper))[0]
s_eb = s[i_eb]
f_eb = f[i_eb]
#take average
delta_s = np.gradient(s_eb)
return np.average(f_eb,weights=delta_s)
if os.path.isdir('../results/static/HYDRAD_raw') and not os.path.isfile('../results/static/hydrad_varying_tau_results.pickle'):
for key in hydrad_res:
if key=='loop_midpoint' or key=='time':
continue
for hl in hydrad_labs:
Te_avg = []
Ti_avg = []
n_avg = []
for t in hydrad_res['time']:
#Load results
temp = np.loadtxt(hfRes_format%(key,hl,t))
#slice
s_temp = temp[:,0]
Te_temp = temp[:,7]
Ti_temp = temp[:,8]
n_temp = temp[:,3]
#save averages
Te_avg.append(spatial_average(s_temp,Te_temp,hydrad_res['loop_midpoint'],int_perc))
Ti_avg.append(spatial_average(s_temp,Ti_temp,hydrad_res['loop_midpoint'],int_perc))
n_avg.append(spatial_average(s_temp,n_temp,hydrad_res['loop_midpoint'],int_perc))
hydrad_res[key]['tau%ds'%hl] = {'Te':Te_avg,'Ti':Ti_avg,'n':n_avg}
else:
with open('../results/static/hydrad_varying_tau_results.pickle','rb') as f:
hydrad_res = pickle.load(f)
with open(__dest__,'wb') as f:
pickle.dump(hydrad_res,f)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Define functions that compute the area of a parallelogram from its sides or its diagonals and an angle, then evaluate the diagonal-based formula
|
<ASSISTANT_TASK:>
Python Code:
import math
def Area_Parallelogram1(a , b , theta ) :
area =(abs(math . tan(math . radians(theta ) ) ) / 2 ) * abs(a ** 2 - b ** 2 )
return area
def Area_Parallelogram2(a , b , gamma ) :
area =(abs(math . sin(math . radians(gamma ) ) ) ) * abs(a * b )
return area
def Area_Parallelogram3(d1 , d2 , theta ) :
area =(abs(math . sin(math . radians(theta ) ) ) / 2 ) * abs(d1 * d2 )
return area
d1 = 3
d2 = 5
theta = 90
area = Area_Parallelogram3(d1 , d2 , theta )
print(round(area , 2 ) )
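A quick consistency check on the formulas above (a sketch reusing the same trig identities): a rectangle is a parallelogram whose sides meet at gamma = 90 degrees, so the side-based formula should give a*b, and its two diagonals are equal with sin(theta) = 2ab/(a^2 + b^2) for the angle at which they cross, so the diagonal-based formula should give the same area.

```python
import math

def area_sides(a, b, gamma_deg):
    # Area from two sides and the angle between them
    return abs(math.sin(math.radians(gamma_deg))) * a * b

def area_diagonals(d1, d2, theta_deg):
    # Area from the two diagonals and the angle between them
    return abs(math.sin(math.radians(theta_deg))) / 2 * d1 * d2

a, b = 3.0, 4.0
rect_area = area_sides(a, b, 90)             # a rectangle: area = a*b = 12
d = math.hypot(a, b)                         # both diagonals of a rectangle are equal
theta = math.degrees(math.asin(2 * a * b / (a * a + b * b)))
print(rect_area, area_diagonals(d, d, theta))  # both formulas agree on 12.0
```
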
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Transform data with PCA computed on the average, i.e. evoked response
Step2: Transform data with ICA computed on the raw epochs (no averaging)
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Jean-Remi King <jeanremi.king@gmail.com>
# Asish Panda <asishrocks95@gmail.com>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.datasets import sample
from mne.decoding import UnsupervisedSpatialFilter
from sklearn.decomposition import PCA, FastICA
print(__doc__)
# Preprocess data
data_path = sample.data_path()
# Load and filter data, set up epochs
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
tmin, tmax = -0.1, 0.3
event_id = dict(aud_l=1, aud_r=2, vis_l=3, vis_r=4)
raw = mne.io.read_raw_fif(raw_fname, preload=True)
raw.filter(1, 20, fir_design='firwin')
events = mne.read_events(event_fname)
picks = mne.pick_types(raw.info, meg=False, eeg=True, stim=False, eog=False,
exclude='bads')
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=False,
picks=picks, baseline=None, preload=True,
verbose=False)
X = epochs.get_data()
pca = UnsupervisedSpatialFilter(PCA(30), average=False)
pca_data = pca.fit_transform(X)
ev = mne.EvokedArray(np.mean(pca_data, axis=0),
mne.create_info(30, epochs.info['sfreq'],
ch_types='eeg'), tmin=tmin)
ev.plot(show=False, window_title="PCA")
ica = UnsupervisedSpatialFilter(FastICA(30), average=False)
ica_data = ica.fit_transform(X)
ev1 = mne.EvokedArray(np.mean(ica_data, axis=0),
mne.create_info(30, epochs.info['sfreq'],
ch_types='eeg'), tmin=tmin)
ev1.plot(show=False, window_title='ICA')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load train data
Step2: Define columns
Step3: Parse the weird date format of the date column
Step4: Drop duplicate rows
Step5: Drop duplicate columns
Step6: Remove columns with only NaNs
Step7: Vectorize Datetime colums
Step8: Delete the original date columns and join the years, months and days DataFrames with the original DataFrame
Step9: Eliminate columns with 0 variance
Step10: Eliminate all columns with standard deviation equal to 0
Step11: Save preprocessed data to another csv file
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
%load_ext autoreload
%autoreload 2
from __future__ import division
from collections import defaultdict, namedtuple
import cPickle as pickle
from datetime import datetime, timedelta
import dateutil
from functools import partial
import inspect
import json
import os
import re
import sys
import numpy as np
import pandas as pd
if os.name == 'nt':
TRAIN_PATH = r'D:\train.csv'
PTRAIN_PATH = r'D:\train_preprocessed_float_string_date.csv'
TEST_PATH = r'D:\test.csv'
GOOGNEWS_PATH = r'D:\GoogleNews-vectors-negative300.bin.gz'
VOCAB_PATH = r'D:\big.txt'
else:
TRAIN_PATH = r'/media/mtambos/speedy/train.csv'
PTRAIN_PATH = r'/media/mtambos/speedy/train_preprocessed_float_string_date.csv'
TEST_PATH = r'/media/mtambos/speedy/test.csv'
GOOGNEWS_PATH = r'/media/mtambos/speedy/GoogleNews-vectors-negative300.bin.gz'
VOCAB_PATH = r'/media/mtambos/speedy/big.txt'
#df_orig = pd.read_csv(TRAIN_PATH, index_col="ID")
df = pd.read_csv(PTRAIN_PATH, index_col="ID")
#df
date_cols = ['VAR_0073', 'VAR_0075', 'VAR_0156',
'VAR_0157', 'VAR_0158', 'VAR_0159',
'VAR_0166', 'VAR_0167', 'VAR_0168',
'VAR_0169', 'VAR_0176', 'VAR_0177',
'VAR_0178', 'VAR_0179', 'VAR_0204',
'VAR_0217', 'VAR_0294', 'VAR_0314']
def parse_date_str(date_val):
if isinstance(date_val, datetime):
return date_val
date_val = str(date_val).lower()
date_val = None if date_val == 'nan' else date_val
if date_val is None:
return pd.NaT
date_val = date_val[:-2] if date_val[-2:] == '.0' else date_val
try:
return datetime.strptime(date_val, '%d%b%y:%H:%M:%S')
except:
try:
return datetime.strptime(date_val, '%d%b%y')
except:
try:
return datetime.strptime(date_val, '%Y')
except:
print date_val
return pd.NaT
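The three `strptime` formats above target SAS-style datetime strings. A minimal standalone check of the main format (the sample value below is hypothetical, not taken from the dataset):

```python
from datetime import datetime

# '%d%b%y:%H:%M:%S' parses strings like 08NOV11:02:00:00
# (%b matches the abbreviated month name; %y uses a two-digit year)
dt = datetime.strptime('08NOV11:02:00:00', '%d%b%y:%H:%M:%S')
print(dt)  # 2011-11-08 02:00:00
```
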
df_date_cols = df[date_cols].applymap(parse_date_str).astype('datetime64[ns]')
df_date_cols.describe()
df[date_cols] = df_date_cols
df.drop_duplicates(inplace=True)
cols_to_drop = set()
for i, col in enumerate(date_cols):
for col2 in date_cols[i+1:]:
if (df[col] == df[col2]).all():
cols_to_drop.add(col2)
cols_to_drop
nan_cols = df.isnull().all()
nan_cols = nan_cols.index[nan_cols].tolist()
nan_cols
df[date_cols] = df[date_cols].astype('datetime64[ns]')
# use the builtin int dtype (the np.int alias was removed in NumPy 1.24)
years = pd.DataFrame(columns=[c+'_year' for c in date_cols], index=df.index, dtype=int)
months = pd.DataFrame(columns=[c+'_month' for c in date_cols], index=df.index, dtype=int)
days = pd.DataFrame(columns=[c+'_day' for c in date_cols], index=df.index, dtype=int)
for c in date_cols:
dateIndex = pd.DatetimeIndex(df[c])
years[c+'_year'] = dateIndex.year
months[c+'_month'] = dateIndex.month
days[c+'_day'] = dateIndex.day
df = df.drop(date_cols, axis=1)
df = df.join(years)
df = df.join(months)
df = df.join(days)
date_cols = years.columns.tolist() + months.columns.tolist() + days.columns.tolist()
df_desc = df.describe()
df_desc[sorted(df_desc.columns, key=lambda x: df_desc.loc['std', x])]
std_series = df_desc.loc['std', :]
null_std_cols = std_series[std_series == 0]
df = df.drop(null_std_cols.index, axis=1)
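The zero-variance drop can be sketched without pandas: keep only the columns whose values are not all identical. This is a simplified stand-in for the `std == 0` test above (equivalent for numeric columns without missing values):

```python
def drop_constant_columns(table):
    # table: dict of column name -> list of values;
    # a column with a single unique value carries no information
    return {name: vals for name, vals in table.items() if len(set(vals)) > 1}

table = {'a': [1, 2, 3], 'b': [7, 7, 7], 'c': [0, 1, 0]}
print(sorted(drop_constant_columns(table)))  # ['a', 'c']
```
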
df.to_csv(PTRAIN_PATH)
with open('date_cols.pickle', 'wb') as fp:
pickle.dump(date_cols, fp)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read the image into a NumPy ndarray.
Step2: Let's see what is in this image.
Step3: The shape of the ndarray shows that it is a three-layered matrix. The first two numbers here are length and width, and the third number (i.e. 3) is for the three layers
Step4: How many entries (pixels times layers) does the array hold?
Step5: What are the minimum and maximum pixel values?
Step6: What is the mean pixel value?
Step7: Inspect the RGB values of an individual pixel.
Step8: Pick out a single colour channel of that pixel.
Step9: Set a pixel to all zeros (black).
Step10: Set the green channel of a range of rows to full intensity.
Step11: Set a range of rows to white.
Step12: Set a range of rows to black.
Step13: Build a boolean filter that selects low-intensity pixels.
Step14: Apply the filter to zero out the low-intensity pixels.
Step15: Use row and column index arrays to draw diagonal lines.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
from scipy import misc
import matplotlib.pyplot as plt
from skimage import data
photo_data = misc.imread('./wifire/sd-3layers.jpg')
type(photo_data)
plt.figure(figsize=(15,15))
plt.imshow(photo_data)
photo_data.shape
#print(photo_data)
photo_data.size
photo_data.min(), photo_data.max()
photo_data.mean()
photo_data[150, 250]
photo_data[150, 250, 1]
#photo_data = misc.imread('./wifire/sd-3layers.jpg')
photo_data[150, 250] = 0
plt.figure(figsize=(10,10))
plt.imshow(photo_data)
photo_data = misc.imread('./wifire/sd-3layers.jpg')
photo_data[200:800, : ,1] = 255
plt.figure(figsize=(10,10))
plt.imshow(photo_data)
photo_data = misc.imread('./wifire/sd-3layers.jpg')
photo_data[200:800, :] = 255
plt.figure(figsize=(10,10))
plt.imshow(photo_data)
photo_data = misc.imread('./wifire/sd-3layers.jpg')
photo_data[200:800, :] = 0
plt.figure(figsize=(10,10))
plt.imshow(photo_data)
photo_data = misc.imread('./wifire/sd-3layers.jpg')
print("Shape of photo_data:", photo_data.shape)
low_value_filter = photo_data < 200
print("Shape of low_value_filter:", low_value_filter.shape)
#import random
plt.figure(figsize=(10,10))
plt.imshow(photo_data)
photo_data[low_value_filter] = 0
plt.figure(figsize=(10,10))
plt.imshow(photo_data)
rows_range = np.arange(len(photo_data))
cols_range = rows_range
print(type(rows_range))
photo_data[rows_range, cols_range] = 255
plt.figure(figsize=(15,15))
plt.imshow(photo_data)
total_rows, total_cols, total_layers = photo_data.shape
#print("photo_data = ", photo_data.shape)
X, Y = np.ogrid[:total_rows, :total_cols]
#print("X = ", X.shape, " and Y = ", Y.shape)
center_row, center_col = total_rows / 2, total_cols / 2
#print("center_row = ", center_row, "AND center_col = ", center_col)
#print(X - center_row)
#print(Y - center_col)
dist_from_center = (X - center_row)**2 + (Y - center_col)**2
#print(dist_from_center)
radius = (total_rows / 2)**2
#print("Radius = ", radius)
circular_mask = (dist_from_center > radius)
#print(circular_mask)
print(circular_mask[1500:1700,2000:2200])
photo_data = misc.imread('./wifire/sd-3layers.jpg')
photo_data[circular_mask] = 0
plt.figure(figsize=(15,15))
plt.imshow(photo_data)
X, Y = np.ogrid[:total_rows, :total_cols]
half_upper = X < center_row # this line generates a mask for all rows above the center
half_upper_mask = np.logical_and(half_upper, circular_mask)
photo_data = misc.imread('./wifire/sd-3layers.jpg')
photo_data[half_upper_mask] = 255
#photo_data[half_upper_mask] = random.randint(200,255)
plt.figure(figsize=(15,15))
plt.imshow(photo_data)
photo_data = misc.imread('./wifire/sd-3layers.jpg')
red_mask = photo_data[:, : ,0] < 150
photo_data[red_mask] = 0
plt.figure(figsize=(15,15))
plt.imshow(photo_data)
photo_data = misc.imread('./wifire/sd-3layers.jpg')
green_mask = photo_data[:, : ,1] < 150
photo_data[green_mask] = 0
plt.figure(figsize=(15,15))
plt.imshow(photo_data)
photo_data = misc.imread('./wifire/sd-3layers.jpg')
blue_mask = photo_data[:, : ,2] < 150
photo_data[blue_mask] = 0
plt.figure(figsize=(15,15))
plt.imshow(photo_data)
photo_data = misc.imread('./wifire/sd-3layers.jpg')
red_mask = photo_data[:, : ,0] < 150
green_mask = photo_data[:, : ,1] > 100
blue_mask = photo_data[:, : ,2] < 100
final_mask = np.logical_and(red_mask, green_mask, blue_mask)
photo_data[final_mask] = 0
plt.figure(figsize=(15,15))
plt.imshow(photo_data)
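The circular mask used above is just "squared distance from the centre greater than radius squared". A pure-Python sketch on a small 5x5 grid (illustration only; the notebook does this vectorised with `np.ogrid`):

```python
def circular_mask(rows, cols, radius_sq):
    cr, cc = rows // 2, cols // 2
    # True where the cell lies OUTSIDE the circle, matching dist_from_center > radius
    return [[(r - cr) ** 2 + (c - cc) ** 2 > radius_sq for c in range(cols)]
            for r in range(rows)]

mask = circular_mask(5, 5, 4)
print(sum(cell for row in mask for cell in row))  # number of masked-out cells
```
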
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This probability distribution can be expressed as the product of its marginal distributions, as shown below.
Step2: We can confirm that the conditional probability does not change as we vary the value of Y.
Step3: Now consider the following joint probability distribution. In this case, the independence condition does not hold.
Step4: In this case, we can confirm that the conditional distribution of x changes depending on the value of y.
|
<ASSISTANT_TASK:>
Python Code:
np.set_printoptions(precision=4)
pmf1 = np.array([[0, 1, 2, 3, 2, 1],
[0, 2, 4, 6, 4, 2],
[0, 4, 8,12, 8, 4],
[0, 2, 4, 6, 4, 2],
[0, 1, 2, 3, 2, 1]])
pmf1 = pmf1/pmf1.sum()
pmf1
sns.heatmap(pmf1)
plt.xlabel("x")
plt.ylabel("y")
plt.title("Joint Probability (Independent)")
plt.show()
pmf1_marginal_x = pmf1.sum(axis=0)
pmf1_marginal_y = pmf1.sum(axis=1)
pmf = pmf1_marginal_x * pmf1_marginal_y[:, np.newaxis]
pmf/pmf.sum()
cond_x_y0 = pmf1[0, :]/pmf1_marginal_y[0]
cond_x_y0
cond_x_y1 = pmf1[1, :]/pmf1_marginal_y[1]
cond_x_y1
cond_x_y2 = pmf1[2, :]/pmf1_marginal_y[2]
cond_x_y2
pmf2 = np.array([[0, 0, 0, 0, 1, 1],
[0, 0, 1, 2, 1, 0],
[0, 1, 3, 3, 1, 0],
[0, 1, 2, 1, 0, 0],
[1, 1, 0, 0, 0, 0]])
pmf2 = pmf2/pmf2.sum()
pmf2
sns.heatmap(pmf2)
plt.xlabel("x")
plt.ylabel("y")
plt.title("Joint Probability (Dependent)")
plt.show()
pmf2_marginal_x = pmf2.sum(axis=0)
pmf2_marginal_y = pmf2.sum(axis=1)
cond_x_y0 = pmf2[0, :]/pmf2_marginal_y[0]
cond_x_y0
cond_x_y1 = pmf2[1, :]/pmf2_marginal_y[1]
cond_x_y1
cond_x_y2 = pmf2[2, :]/pmf2_marginal_y[2]
cond_x_y2
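Independence can be checked mechanically: X and Y are independent exactly when every joint cell equals the product of its marginals. A small pure-Python sketch of that test (the notebook does the same comparison with NumPy broadcasting):

```python
def is_independent(pmf, tol=1e-9):
    row_marg = [sum(row) for row in pmf]        # p(y)
    col_marg = [sum(col) for col in zip(*pmf)]  # p(x)
    return all(abs(pmf[i][j] - row_marg[i] * col_marg[j]) <= tol
               for i in range(len(pmf)) for j in range(len(pmf[0])))

independent = [[0.1, 0.1], [0.4, 0.4]]  # rows are proportional -> product form holds
dependent = [[0.5, 0.0], [0.0, 0.5]]    # diagonal -> knowing x determines y
print(is_independent(independent), is_independent(dependent))  # True False
```
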
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Logistic Function
Step2: Create a tensor ranging from -10 to 10
Step3: When you use sequential, you can create a sigmoid object
Step4: Apply the element-wise function Sigmoid with the object
Step5: Plot the results
Step6: For custom modules, call the sigmoid from the torch (nn.functional for the old version), which applies the element-wise sigmoid from the function module and plots the results
Step7: Create a Tanh object
Step8: Call the object and plot it
Step9: For custom modules, call the Tanh object from the torch (nn.functional for the old version), which applies the element-wise sigmoid from the function module and plots the results
Step10: Relu
Step11: For custom modules, call the relu object from the nn.functional, which applies the element-wise sigmoid from the function module and plots the results
Step12: Compare Activation Functions
|
<ASSISTANT_TASK:>
Python Code:
import torch.nn as nn
import torch
import torch.nn.functional as F
import matplotlib.pyplot as plt
torch.manual_seed(2)
z=torch.arange(-10,10,0.1).view(-1, 1)
sig=nn.Sigmoid()
yhat=sig(z)
sig(torch.tensor(-1.0))
plt.plot(z.numpy(),yhat.numpy())
plt.xlabel('z')
plt.ylabel('yhat')
yhat=torch.sigmoid(z)
plt.plot(z.numpy(),yhat.numpy())
TANH=nn.Tanh()
yhat=TANH(z)
plt.plot(z.numpy(),yhat.numpy())
yhat=torch.tanh(z)
plt.plot(z.numpy(),yhat.numpy())
RELU=nn.ReLU()
yhat=RELU(z)
plt.plot(z.numpy(),yhat.numpy())
yhat=F.relu(z)
plt.plot(z.numpy(),yhat.numpy())
x=torch.arange(-2,2,0.1).view(-1, 1)
plt.plot(x.numpy(),F.relu(x).numpy(),label='relu')
plt.plot(x.numpy(),torch.sigmoid(x).numpy(),label='sigmoid')
plt.plot(x.numpy(),torch.tanh(x).numpy(),label='tanh')
plt.legend()
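The three activations can be pinned down at a few reference points without PyTorch. A quick pure-math sketch of the values the comparison plot above passes through:

```python
import math

def sigmoid(z):
    # Logistic function: squashes the real line into (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def relu(z):
    # Rectified linear unit: zero for negative inputs, identity otherwise
    return max(0.0, z)

# sigmoid is 0.5 at the origin, tanh and relu are 0 there; relu is identity for z > 0
print(sigmoid(0.0), math.tanh(0.0), relu(0.0), relu(2.5))
```
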
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Prep 2
Step2: Prep 3
Step3: Prep 4
Step4: UMAP
Step5: Visualize 1
Step6: Fascinatingly, when Graphistry's force-directed graph layout algorithm reuses UMAP's inferred edge connectivities, the layout does not significantly change from what UMAP computes. Try hitting the "play" button in the tool to see for yourself! That means the graph-based intuitions for subsequent interactions, such as removing key nodes/edges and reclustering, should be consistent.
|
<ASSISTANT_TASK:>
Python Code:
# Already installed in Graphistry & RAPIDS distros
# ! pip install --user umap-learn
# ! pip install --user graphistry
import graphistry, pandas as pd, umap
# To specify Graphistry account & server, use:
# graphistry.register(api=3, username='...', password='...', protocol='https', server='hub.graphistry.com')
# For more options, see https://github.com/graphistry/pygraphistry#configure
df = pd.read_csv('../../data/honeypot.csv')
df['victimPort'] = df['victimPort'].astype('uint32')
df['time(max)'] = pd.to_datetime(df['time(max)'] * 1000 * 1000 * 1000)
df['time(min)'] = pd.to_datetime(df['time(min)'] * 1000 * 1000 * 1000)
print(df.info())
df.sample(5)
dummmies = [
pd.get_dummies(df[c], prefix=f'{c}_oh')
for c in ['victimIP', 'victimPort', 'vulnName']
]
encoded_ips = ([
df[[]].assign(
        attackerIP_a = df['attackerIP'].str.extract(r"^(\d+)\.").astype('uint8'),
        attackerIP_b = df['attackerIP'].str.extract(r"^\d+\.(\d+)\.").astype('uint8'),
        attackerIP_c = df['attackerIP'].str.extract(r"^\d+\.\d+\.(\d+)\.").astype('uint8'),
        attackerIP_d = df['attackerIP'].str.extract(r"^\d+\.\d+\.\d+\.(\d+)$").astype('uint8'),
        victimIP_a = df['victimIP'].str.extract(r"^(\d+)\.").astype('uint8'),
        victimIP_b = df['victimIP'].str.extract(r"^\d+\.(\d+)\.").astype('uint8'),
        victimIP_c = df['victimIP'].str.extract(r"^\d+\.\d+\.(\d+)\.").astype('uint8'),
        victimIP_d = df['victimIP'].str.extract(r"^\d+\.\d+\.\d+\.(\d+)$").astype('uint8')
)
])
orig_continuous = [
df[['victimPort', 'count', 'time(max)', 'time(min)']].assign(
duration=df['time(max)'] - df['time(min)']
)
]
df2 = pd.concat(encoded_ips + dummmies + orig_continuous, axis=1)
print('new shape:', df2.info())
df2.sample(5)
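The one-hot and IP-octet feature construction above can be exercised on a tiny hypothetical frame; the column names and values here are made up for illustration, and a `str.split`-based octet split is shown as a simpler alternative to the regex extraction:

```python
import pandas as pd

# Hypothetical two-row frame mimicking the honeypot columns
toy = pd.DataFrame({
    'vulnName': ['MS08-067', 'SMB'],
    'attackerIP': ['10.0.0.1', '192.168.1.77'],
})

# One-hot encode the categorical column, as done for vulnName above
oh = pd.get_dummies(toy['vulnName'], prefix='vulnName_oh')

# Split dotted-quad IPs into four numeric octet columns
octets = toy['attackerIP'].str.split('.', expand=True).astype('uint8')
octets.columns = ['attackerIP_a', 'attackerIP_b', 'attackerIP_c', 'attackerIP_d']

features = pd.concat([oh, octets], axis=1)
print(features)
```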
df3 = df2.copy()
for c in df3:
    #print(c)
    df3[c] = ((df3[c] - df3[c].min())/(df3[c].max() - df3[c].min())).fillna(0)
print(df3.info())
df3.sample(5)
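The per-column loop above is plain min-max rescaling (the same transform sklearn's `MinMaxScaler` applies); a minimal self-contained sketch of the idea, with a constant column to show why the `fillna(0)` is needed:

```python
import pandas as pd

def minmax_scale(df: pd.DataFrame) -> pd.DataFrame:
    # Rescale every column to [0, 1]; constant columns divide 0/0 -> NaN -> 0
    return ((df - df.min()) / (df.max() - df.min())).fillna(0)

toy = pd.DataFrame({'a': [1.0, 2.0, 3.0], 'b': [10.0, 10.0, 10.0]})
scaled = minmax_scale(toy)
print(scaled)
```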
# see help(umap.UMAP)
umap_options = {
    'n_components': 2,
    'metric': 'euclidean'
}
%%time
embedding = umap.UMAP(**umap_options).fit(df3)
embedding
%%time
coo = embedding.graph_.tocoo()
print('coo lens', len(coo.row), len(coo.col), len(coo.data))
print(coo.row[0:5], coo.col[0:5], coo.data[0:5])
weighted_edges_df = pd.DataFrame({
    's': coo.row,
    'd': coo.col,
    'w': coo.data
})
weighted_edges_df.sample(3)
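The sparse-graph-to-edge-list conversion above can be checked on a small hand-built matrix; UMAP's `graph_` is just a larger instance of the same COO structure:

```python
import pandas as pd
from scipy.sparse import csr_matrix

# A tiny 3x3 affinity matrix with three nonzero entries
m = csr_matrix([[0.0, 0.5, 0.0],
                [0.0, 0.0, 0.9],
                [0.2, 0.0, 0.0]])

# COO format exposes parallel row/col/data arrays: one entry per edge
coo = m.tocoo()
edges = pd.DataFrame({'s': coo.row, 'd': coo.col, 'w': coo.data})
print(edges)
```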
nodes_df = pd.concat([
    df,
    pd.DataFrame(embedding.embedding_).rename(columns={0: 'x', 1: 'y'})
], axis=1)
nodes_df['x'] = nodes_df['x'] * 100
nodes_df['y'] = nodes_df['y'] * 100
nodes_df = nodes_df.reset_index().rename(columns={'index': 'n'})
print(nodes_df.info())
nodes_df.sample(5)
# Most of the settings are optional and can be changed on-the-fly in the UI
g = (
    graphistry
    .nodes(nodes_df, 'n')
    .edges(weighted_edges_df, 's', 'd')
    .bind(point_x='x', point_y='y', edge_weight='w')
    .settings(url_params={'play': 0, 'edgeInfluence': 5})
    .encode_edge_color('w', ['maroon', 'pink', 'white'], as_continuous=True)
    .encode_point_size('count')
)
g.plot()
#triple: src_node_EDGE_dst_node
edge_triples = (g
    ._edges
    .merge(g._nodes, left_on=g._source, right_on=g._node)
    .rename(columns={c: f'src_{c}' for c in g._nodes})
    .merge(g._nodes, left_on=g._destination, right_on=g._node)
    .rename(columns={c: f'dst_{c}' for c in g._nodes})
)
#print(edge_triples.columns)
equivs = []
for c in g._nodes:
    equiv = edge_triples[ edge_triples[f'src_{c}'] == edge_triples[f'dst_{c}'] ]
    if len(equiv) > 0:
        equiv = equiv[[g._source, g._destination]].assign(
            type=c,
            match_val=edge_triples[f'src_{c}'],
            w=0.1)
        equiv[c] = edge_triples[f'src_{c}']
        print('adding', c, len(equiv))
        equivs.append(equiv)
    else:
        print('no hits on col', c)
equivs_df = pd.concat(equivs)
equivs_df['match_val'] = equivs_df['match_val'].astype(str) # ensure arrow works
#equivs_df.sample(10)
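The loop above follows a general pattern: join the edge list to node attributes on both endpoints, then keep edges whose endpoints agree on a value. A minimal sketch of that pattern with made-up nodes and edges:

```python
import pandas as pd

nodes = pd.DataFrame({'n': [0, 1, 2], 'ip': ['10.0.0.1', '10.0.0.1', '10.0.0.2']})
edges = pd.DataFrame({'s': [0, 0, 1], 'd': [1, 2, 2]})

# Annotate each edge with the attribute of its source and destination node
triples = (edges
    .merge(nodes.rename(columns={'n': 's', 'ip': 'src_ip'}), on='s')
    .merge(nodes.rename(columns={'n': 'd', 'ip': 'dst_ip'}), on='d'))

# Keep only edges whose endpoints share the same attribute value
same_ip = triples[triples['src_ip'] == triples['dst_ip']]
print(same_ip[['s', 'd']])
```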
edges2 = pd.concat([g._edges.assign(type='umap', match_val='1'), equivs_df])
g2 = (g
    .edges(edges2)
    #.edges(edges2[edges2['type'] == 'attackerIP'])
    .bind(edge_label='match_val')
    .encode_edge_color('type', categorical_mapping={
        'umap': 'grey',
        'victimIP': 'blue',
        'attackerIP': 'lightblue',
        'victimPort': 'green',
        'vulnName': 'yellow',
        'count': 'white'
    })
)
g2.plot()
print(g2._edges.info())
g2._edges.sample(3)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Enumerate the parameter combinations
Step2: Specify the model changes
Step3: Specify the result 'y' that we want to retrieve
Step4: Trigger the run...
Step5: The results
|
<ASSISTANT_TASK:>
Python Code:
## Veneer started elsewhere (probably from a command line using veneer.manager.start)
import pandas as pd
# BulkVeneer and BatchRunner come from the veneer-py batch helpers; the exact
# import path depends on your veneer-py version.
ports = list(range(15004,15008))
ports
bv = BulkVeneer(ports)
v = bv.veneers[1]
network = v.network()
network.as_dataframe().plot()
network.outlet_nodes()
outlet_node = network.outlet_nodes()[0]['properties']['name'] + '$'
import numpy as np
N_RUNS=100
params = {
    'x1': np.random.uniform(1.0, 1500.0, size=N_RUNS),
    'x2': np.random.uniform(1.0, 5.0, size=N_RUNS),
    'x3': np.random.uniform(1.0, 200.0, size=N_RUNS),
    'x4': np.random.uniform(0.5, 3.0, size=N_RUNS)
}
params = pd.DataFrame(params)
params
runner = BatchRunner(bv.veneers)
v.model.catchment.runoff.set_param_values?
for p in ['x1','x2','x3','x4']:
    runner.parameters.model.catchment.runoff.set_param_values(p,'$%s$'%p,fus=['Grazing'])
runner.retrieve('y').retrieve_multiple_time_series(criteria={'NetworkElement':outlet_node,'RecordingVariable':'Downstream Flow Volume'}).sum()[0]
%xmode Verbose
print(runner._retrieval.script())
jobs,results = runner.run(params)
#jobs
pd.DataFrame(results)
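Conceptually, the `BatchRunner` fans parameter rows out across the available model endpoints and collects one result per row. The dispatch pattern can be sketched independently of Veneer; `run_model` here is a stand-in for "apply parameters, run, retrieve y" and is not the real veneer-py API:

```python
import pandas as pd
from concurrent.futures import ThreadPoolExecutor

toy_params = pd.DataFrame({'x1': [100.0, 200.0], 'x2': [1.5, 2.5]})
endpoints = [15004, 15005]  # stand-ins for the Veneer ports

def run_model(port, row):
    # Placeholder for "set parameters, run the model, retrieve y" on one endpoint
    return row['x1'] * row['x2']

# Round-robin each parameter row onto an endpoint and run them concurrently
with ThreadPoolExecutor(max_workers=len(endpoints)) as pool:
    futures = [pool.submit(run_model, endpoints[i % len(endpoints)], row)
               for i, (_, row) in enumerate(toy_params.iterrows())]
    outputs = [f.result() for f in futures]
print(outputs)
```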
<END_TASK>
|